- More precision of my philosophy about Closures and generic Lambdas and about technology and more of my thoughts.. - 1 Update
- More of my philosophy about Closures and generic Lambdas and about technology and more of my thoughts.. - 1 Update
- More of my philosophy about non-linear regression and about technology and more of my thoughts.. - 1 Update
- More of my philosophy about C# and Delphi and about technology and more of my thoughts.. - 1 Update
- More of my philosophy about Delphi and Freepascal compilers and about technology and more of my thoughts.. - 1 Update
Amine Moulay Ramdane <aminer68@gmail.com>: Nov 03 04:13PM -0700 Hello, More precision of my philosophy about Closures and generic Lambdas and about technology and more of my thoughts.. I am a white Arab, and I think I am smart since I have also invented many scalable algorithms and other algorithms.. I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I think that Delphi, Freepascal and C++ have implemented and support typed lambdas, I think that Delphi will soon support generic lambdas, and I think that C++20 supports all of the following: typed lambdas, generic lambdas and templated lambdas. But I think that the abstraction of generic lambdas and closures can play a role similar to how we use message passing with channels in Go, passing the data and not using a lock but hiding it, so as to avoid race conditions in parallel programming. So I think that Rust's ownership system is not the only methodology for avoiding race conditions; there is also a methodology in Delphi and Rust with generic lambdas and closures, and it consists of hiding the lock inside the closure and passing the lambdas that will be executed inside another lambda inside the closure, so as to avoid race conditions in parallel programming (see the small sketch further below). And of course you have to be disciplined so as to avoid using global data: you have to create the data (and I think you can also use the C/C++ or Delphi or Freepascal _alloca function that I am talking about below so as to allocate size bytes of space from the stack) and pass it with the lambdas. More of my philosophy about non-linear regression and about technology and more of my thoughts.. I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. The R-squared is invalid for non-linear regression, so you have to use the standard error of the estimate (mean square error), and of course you have to calculate the relative standard error, which is the standard deviation of the mean of the sample divided by the estimate, that is, the mean of the sample, and I think that the relative standard error is an important thing that brings more quality to the statistical calculations. And I will now talk to you more about my interesting software project for mathematics: my new software project uses artificial intelligence to implement a generalized way to solve the non-linear "multiple" regression, and it is much more powerful than the Levenberg–Marquardt algorithm, since I am implementing a smart algorithm using artificial intelligence that permits premature convergence to be avoided, which is also one of the most important things, and it will also be much more scalable using multicores, so as to search much faster, with artificial intelligence, for the global optimum. I am doing it this way so as to be professional, and I will give you a tutorial that explains my algorithms that use artificial intelligence so that you can learn from them, and of course it will automatically calculate the above standard error of the estimate and the relative standard error.
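Here is a minimal sketch, in Delphi-style Object Pascal, of the idea above of hiding the lock inside a closure: the critical section is a local variable that is captured by an anonymous method, so the callers never touch the lock directly, they only pass the lambdas to be executed under it. The names TProtectedExec and MakeProtectedExec are just illustrative and are not taken from my libraries, it assumes the standard TCriticalSection of System.SyncObjs, and for brevity it calls the closure from a single thread (in real parallel code each worker thread would call Exec with its lambda); Delphi compiles this as is, while Freepascal support for anonymous methods requires a recent compiler with the corresponding mode switches:

--

program LockHiddenInClosure;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.SyncObjs;

type
  // A closure type that runs any passed lambda under a hidden lock.
  TProtectedExec = reference to procedure(const Action: TProc);

// Returns a closure that captures a private critical section, so callers
// never touch the lock directly: they only pass lambdas to be executed.
function MakeProtectedExec: TProtectedExec;
var
  Lock: TCriticalSection;
begin
  Lock := TCriticalSection.Create; // captured (and kept alive) by the closure;
                                   // intentionally never freed in this tiny sketch
  Result :=
    procedure(const Action: TProc)
    begin
      Lock.Acquire;
      try
        Action(); // the passed lambda runs inside the critical section
      finally
        Lock.Release;
      end;
    end;
end;

var
  Exec: TProtectedExec;
  Counter: Integer = 0;

begin
  Exec := MakeProtectedExec;
  // Each caller passes a lambda; the shared Counter is only ever touched
  // under the lock that is hidden inside the closure.
  Exec(procedure begin Inc(Counter); end);
  Exec(procedure begin Inc(Counter); end);
  writeln('Counter = ', Counter);
end.

---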
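And here is a small sketch, also in Object Pascal, of the two statistics mentioned above for a non-linear fit: the standard error of the estimate computed from the residuals as sqrt(SSE / (n - p)) (one common textbook definition, with p the number of fitted parameters), and the relative standard error computed as the standard error of the mean divided by the mean; the data arrays are just made-up values for illustration:

--

program RegressionMetrics;

{$APPTYPE CONSOLE}

// Standard error of the estimate for a fitted model: sqrt(SSE / (n - p)),
// where SSE is the sum of squared residuals and p the number of parameters.
function StdErrorOfEstimate(const y, yfit: array of Double; p: Integer): Double;
var
  i: Integer;
  sse: Double;
begin
  sse := 0.0;
  for i := 0 to High(y) do
    sse := sse + Sqr(y[i] - yfit[i]);
  Result := Sqrt(sse / (Length(y) - p));
end;

// Relative standard error of a sample: the standard error of the mean
// (sample standard deviation / sqrt(n)) divided by the mean.
function RelativeStdError(const sample: array of Double): Double;
var
  i, n: Integer;
  mean, ss: Double;
begin
  n := Length(sample);
  mean := 0.0;
  for i := 0 to n - 1 do
    mean := mean + sample[i];
  mean := mean / n;
  ss := 0.0;
  for i := 0 to n - 1 do
    ss := ss + Sqr(sample[i] - mean);
  Result := (Sqrt(ss / (n - 1)) / Sqrt(n)) / mean;
end;

const
  // Made-up observed values and fitted values from some non-linear model.
  yObs: array[0..4] of Double = (2.1, 2.9, 4.2, 5.8, 8.1);
  yFit: array[0..4] of Double = (2.0, 3.0, 4.1, 6.0, 7.9);

begin
  writeln('Standard error of the estimate: ', StdErrorOfEstimate(yObs, yFit, 2):0:4);
  writeln('Relative standard error: ', RelativeStdError(yObs) * 100:0:2, ' %');
end.

---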
More of my philosophy about non-linear regression and more.. I think I am really smart, and I have also just quickly finished the software implementation of the Levenberg–Marquardt algorithm and of the Simplex algorithm to solve non-linear least squares problems, and I will soon implement a generalized way, with artificial intelligence, to solve the non-linear "multiple" regression, but I have also noticed that in mathematics you have to take care of the variability of the y values in non-linear least squares problems when approximating. Also, the Levenberg–Marquardt algorithm (LMA or just LM) that I have just implemented, also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The Levenberg–Marquardt algorithm is used in many software applications for solving generic curve-fitting problems. The Levenberg–Marquardt algorithm was found to be an efficient, fast and robust method which also has a good global convergence property. For these reasons, it has been incorporated into many good commercial packages performing non-linear regression. But my way of implementing the non-linear "multiple" regression in the software will be much more powerful than the Levenberg–Marquardt algorithm, and of course I will share many parts of my software project with you, so stay tuned!
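To make the damped least-squares idea concrete, here is a small self-contained sketch, in Object Pascal, of a basic Levenberg–Marquardt loop for the simple model y = a*exp(b*x): it only illustrates the textbook damping of the normal equations, (J^T*J + lambda*diag(J^T*J)) * delta = J^T*r, with made-up data and a crude accept/reject rule for lambda, and it is not taken from my own implementation:

--

program TinyLM;

{$APPTYPE CONSOLE}

const
  // Made-up data, roughly following y = 2 * exp(0.5 * x).
  xs: array[0..5] of Double = (0.0, 0.5, 1.0, 1.5, 2.0, 2.5);
  ys: array[0..5] of Double = (2.05, 2.52, 3.35, 4.15, 5.50, 7.05);

function Model(a, b, x: Double): Double;
begin
  Result := a * Exp(b * x);
end;

function SSE(a, b: Double): Double;
var
  i: Integer;
begin
  Result := 0.0;
  for i := 0 to High(xs) do
    Result := Result + Sqr(ys[i] - Model(a, b, xs[i]));
end;

var
  a, b, lambda: Double;
  iter, i: Integer;
  JtJ11, JtJ12, JtJ22, Jtr1, Jtr2: Double;
  r, f, da, db, det, A11, A22: Double;
begin
  // Crude starting point and initial damping factor.
  a := 1.0; b := 0.1; lambda := 1e-3;
  for iter := 1 to 50 do
  begin
    // Build J^T*J and J^T*r for the two parameters a and b.
    JtJ11 := 0; JtJ12 := 0; JtJ22 := 0; Jtr1 := 0; Jtr2 := 0;
    for i := 0 to High(xs) do
    begin
      f := Model(a, b, xs[i]);
      r := ys[i] - f; // residual
      // Partial derivatives of the model: d/da = exp(b*x), d/db = a*x*exp(b*x) = x*f.
      JtJ11 := JtJ11 + Sqr(Exp(b * xs[i]));
      JtJ12 := JtJ12 + Exp(b * xs[i]) * (xs[i] * f);
      JtJ22 := JtJ22 + Sqr(xs[i] * f);
      Jtr1 := Jtr1 + Exp(b * xs[i]) * r;
      Jtr2 := Jtr2 + (xs[i] * f) * r;
    end;
    // Damped normal equations: (J^T*J + lambda*diag(J^T*J)) * delta = J^T*r.
    A11 := JtJ11 * (1 + lambda);
    A22 := JtJ22 * (1 + lambda);
    det := A11 * A22 - JtJ12 * JtJ12;
    if det = 0 then Break;
    da := ( A22 * Jtr1 - JtJ12 * Jtr2) / det; // 2x2 solve by Cramer's rule
    db := (-JtJ12 * Jtr1 + A11 * Jtr2) / det;
    // Accept the step only if it lowers the sum of squared errors.
    if SSE(a + da, b + db) < SSE(a, b) then
    begin
      a := a + da; b := b + db;
      lambda := lambda * 0.5; // trust the Gauss-Newton direction more
    end
    else
      lambda := lambda * 2.0; // damp more, move toward gradient descent
  end;
  writeln('a = ', a:0:4, '  b = ', b:0:4, '  SSE = ', SSE(a, b):0:6);
end.

---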
More of my philosophy about C# and Delphi and about technology and more of my thoughts.. I invite you to read the following article: Why C# coders should shut up about Delphi Read more here: https://jonlennartaasenden.wordpress.com/2016/10/18/why-c-coders-should-shut-up-about-delphi/ More of my philosophy about Delphi and Freepascal compilers and about technology and more of my thoughts.. According to the 20th edition of the State of the Developer Nation report, there were 26.8 million active software developers in the world at the end of 2021, and according to the following article on the Delphi programming language, 3.25% of software developers use Delphi this year, so that is around 1 million Delphi software developers in 2022. Here are the articles, read them carefully: https://blogs.embarcadero.com/considerations-on-delphi-and-the-stackoverflow-2022-developer-survey/ And read here: https://www.future-processing.com/blog/how-many-developers-are-there-in-the-world-in-2019/ I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ, and of course I have already programmed in C++; for example you can download my open source software project of a Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well from my website here: https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library But why do you think I am also programming in Delphi and Freepascal? Of course the Delphi and Freepascal compilers support modern Object Pascal; it is not only Pascal, it is modern Object Pascal, by which I mean that the modern Object Pascal of, for example, Delphi and Freepascal supports object-oriented programming and supports anonymous methods, or typed lambdas (see the small example further below), so I think it is a decent programming language, even if I know that the new C++20 supports generic lambdas and templated lambdas, and I think that Delphi will soon also support generic lambdas. In the Delphi and Freepascal compilers there is no big runtime like in C# and such compilers, so you get small native executables in Delphi and Freepascal, and inline assembler is supported by both Delphi and Freepascal, and Lazarus, the IDE of Freepascal, and Delphi both come with some of the best GUI tools, and of course you can make .SO files, .DLL files, executables, etc. in both Delphi and Freepascal, and both the Delphi and Freepascal compilers are cross-platform to Windows, Linux, Mac, Android, etc. And I think that the modern Object Pascal of Delphi or Freepascal is more strongly typed than C++, but less strongly typed than the ADA programming language, and I think that the modern Object Pascal of Delphi and Freepascal is not as strict as the programming language ADA and not as strict as the programming language Rust or the pure functional programming languages, so it can also be flexible and advantageous not to have that kind of strictness, and the compilation times of Delphi are extremely fast, and of course Freepascal supports the Delphi mode so as to be compatible with Delphi, and I can go on and on, and that is why I am also programming in Delphi and Freepascal. And you can read about the latest version 11.2 of Delphi from here: https://www.embarcadero.com/products/delphi And you can read about Freepascal and Lazarus from here: https://www.freepascal.org/ https://www.lazarus-ide.org/ More of my philosophy about Asynchronous programming and about the futures and about the ActiveObject and about technology and more of my thoughts.. I am a white Arab, and I think I am smart since I have also invented many scalable algorithms and other algorithms.. I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I think that from my new implementation of a future below, you can notice that asynchronous programming is not a simple task, since it can get too complicated: you can notice in my implementation below that if I were to move the starting of the future's thread out of the constructor, and if I were to move the passing of the parameter as a pointer to the future out of the constructor, it would become more complex to get the automaton of how to use and call the methods right and safe. So I think that there is still a problem with asynchronous programming, and it is that when you have many asynchronous tasks or threads it can get really complex, and I think that is the weakness of asynchronous programming, and of course I am also speaking of the implementation of a sophisticated ActiveObject or a future or complex asynchronous programming.
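As a small illustration of the anonymous methods, or typed lambdas, mentioned above in the discussion of modern Object Pascal, here is a minimal Delphi-style sketch using the generic TFunc and TProc types from System.SysUtils; the example itself is just made up:

--

program TypedLambdaExample;

{$APPTYPE CONSOLE}

uses
  System.SysUtils;

var
  // A "typed lambda": an anonymous method with the signature Integer -> Integer.
  Square: TFunc<Integer, Integer>;
  Greet: TProc<string>;

begin
  Square :=
    function(x: Integer): Integer
    begin
      Result := x * x;
    end;

  Greet :=
    procedure(Name: string)
    begin
      writeln('Hello ', Name, ' !');
    end;

  writeln(Square(12)); // prints 144
  Greet('Delphi');
end.

---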
More of my philosophy about my new updated implementation of a future and about the ActiveObject and about technology and more of my thoughts.. I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I have just updated my implementation of a future, and now both the starting of the future's thread and the passing of the parameter as a pointer to the future are done from the constructor, so as to make the system of the automaton of how to use and call the methods safe, and I have just added support for exceptions. You have to know that programming with futures is asynchronous programming, but so as to be robust the future implementation has to deal correctly with "exceptions", so in my implementation of a future, when an exception is raised inside the future you will receive the exception. So I have implemented two things: the HasException() method, so as to detect an exception from inside the future, and the exception and its address are returned as a string in the ExceptionStr property. My implementation of a future does of course support passing parameters as a pointer to the future, and my implementation of a future works on Windows and Linux, and of course you can also use my following more sophisticated Threadpool engine with priorities as a sophisticated ActiveObject or such, and pass the methods or functions and their parameters to it; here it is: Threadpool engine with priorities https://sites.google.com/site/scalable68/threadpool-engine-with-priorities And stay tuned, since I will enhance my above Threadpool engine with priorities further. So you can download my new updated portable and efficient implementation of a future in Delphi and FreePascal, version 1.32, from my website here: https://sites.google.com/site/scalable68/a-portable-and-efficient-implementation-of-a-future-in-delphi-and-freepascal And here is a new example program of how to use my implementation of a future in Delphi and Freepascal; notice that the interface has changed a little bit:

--

program TestFuture;

uses
  system.SysUtils, system.Classes, Futures;

type
  TTestFuture1 = class(TFuture)
  public
    function Compute(ptr: pointer): Variant; override;
  end;

  TTestFuture2 = class(TFuture)
  public
    function Compute(ptr: pointer): Variant; override;
  end;

var
  obj1: TTestFuture1;
  obj2: TTestFuture2;
  a: variant;

function TTestFuture1.Compute(ptr: pointer): Variant;
begin
  raise Exception.Create('I raised an exception');
end;

function TTestFuture2.Compute(ptr: pointer): Variant;
begin
  writeln(nativeint(ptr));
  result := 'Hello world !';
end;

begin
  writeln;
  obj1 := TTestFuture1.create(pointer(12));
  if obj1.GetValue(a) then
    writeln(a)
  else if obj1.HasException then
    writeln(obj1.ExceptionStr);
  obj1.free;

  writeln;
  obj2 := TTestFuture2.create(pointer(12));
  if obj2.GetValue(a) then
    writeln(a);
  obj2.free;
end.

---

More of my philosophy about quantum computing and about matrix operations and about scalability and more of my thoughts.. I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I have just looked at the following video about the powerful parallel quantum computer of IBM from the USA that will soon be available in the cloud, and I invite you to look at it: Quantum Computing: Now Widely Available!
https://www.youtube.com/watch?v=laqpfQ8-jFI And I have just read the following paper, and it is saying that powerful quantum algorithms for matrix operations and linear systems of equations are available; as you will notice in the following paper, many matrix operations and also the solving of linear systems of equations can be done on a quantum computer. Read about it here in the following paper: Quantum algorithms for matrix operations and linear systems of equations Read more here: https://arxiv.org/pdf/2202.04888.pdf So I think that IBM will do the same for their powerful parallel quantum computer that will be available in the cloud, but I think that you will have to pay for it of course, since I think it will be commercial. But I think that there is a weakness with this kind of configuration of the powerful parallel quantum computer from IBM: the cost of internet bandwidth is decreasing exponentially, but the latency of accessing the internet is not, so that is why I think that people will still use classical computers for many mathematical applications that use mathematical operations such as matrix operations and linear systems of equations etc. that need a much lower latency. Other than that, Moore's law will still be effective for classical computers, since it will permit us to have really powerful classical computers at a low cost, and it will be really practical, since the quantum computer is big in size and not so practical; read about the two inventions below that will make logic gates thousands of times faster, or a million times faster, than those in existing computers so as to notice it. So I think that the business of classical computers will still be great in the future even with the coming of the powerful parallel quantum computer of IBM, and as you notice, this kind of business is not only dependent on Moore's law and on Bezos' Law, but it is also dependent on the latency of accessing the internet, so read my following thoughts about Moore's law and about Bezos' Law: More of my philosophy about Moore's law and about Bezos' Law.. For RAM chips and flash memory, Moore's Law means that in eighteen months you'll pay the same price as today for twice as much storage. But other computing components are also seeing their price versus performance curves skyrocket exponentially. Data storage doubles every twelve months. More about Moore's law and about Bezos' Law.. "Parallel code is the recipe for unlocking Moore's Law" And: "BEZOS' LAW The Cost of Cloud Computing will be cut in half every 18 months - Bezos' Law Like Moore's law, Bezos' Law is about exponential improvement over time. If you look at AWS history, they drop prices constantly. In 2013 alone they've already had 9 price drops. The difference, however, between Bezos' and Moore's law is this: Bezos' law is the first law that isn't anchored in technical innovation. Rather, Bezos' law is anchored in confidence and market dynamics, and will only hold true so long as Amazon is not the aggregate dominant force in Cloud Computing (50%+ market share). Monopolies don't cut prices." |
Amine Moulay Ramdane <aminer68@gmail.com>: Nov 03 01:24PM -0700 Hello, More of my philosophy about Closures and generic Lambdas and about technology and more of my thoughts.. I am a white arab, and i think i am smart since i have also invented many scalable algorithms and algorithms.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, i think that Delphi and Freepascal and C++ have implemented and supported typed Lambdas, and i think that Delphi will soon support generic Lambdas, and i think that C++ 20 supports all the following: typed Lambdas and generic Lambdas and templated Lambdas, but i think that the abstraction with generic Lambdas and Closures can play the role of how we use message passing with channels in Go and passing the data and not using a Lock and hiding so that to avoid race conditions in parallel programming, so i think that there is not only the methodology of Rust with the Rust's ownership system so that to avoid race conditions, but there i also a methodology in Delphi and Rust with generic Lambdas and Closures, and it consists of hiding the Lock inside the closure and passing the Lambdas that will executed inside another Lambda inside the closure so that to avoid race conditions in parallel programming, and of course you have to be disciplined so that to avoid to use global data , but you have to create the data (and i think you can also use C/C++ or Delphi or Freepascal _alloca function that i am talking about below so that to allocate size bytes of space from the Stack) and pass it with the Lambdas. More of my philosophy about non-linear regression and about technology and more of my thoughts.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so the R-squared is invalid for non-linear regression, so you have to use the standard error of the estimate (Mean Square Error), and of course you have to calculate the Relative standard error that is the standard deviation of the mean of the sample divide by the Estimate that is the mean of the sample, and i think that the Relative standard Error is an important thing that brings more quality to the statistical calculations, and i will now talk to you more about my interesting software project for mathematics, so my new software project uses artificial intelligence to implement a generalized way with artificial intelligence using the software that permit to solve the non-linear "multiple" regression, and it is much more powerful than Levenberg–Marquardt algorithm , since i am implementing a smart algorithm using artificial intelligence that permits to avoid premature convergence, and it is also one of the most important thing, and it will also be much more scalable using multicores so that to search with artificial intelligence much faster the global optimum, so i am doing it this way so that to be professional and i will give you a tutorial that explains my algorithms that uses artificial intelligence so that you learn from them, and of course it will automatically calculate the above Standard error of the estimate and the Relative standard Error. More of my philosophy about non-linear regression and more.. 
I think i am really smart, and i have also just finished quickly the software implementation of Levenberg–Marquardt algorithm and of the Simplex algorithm to solve non-linear least squares problems, and i will soon implement a generalized way with artificial intelligence using the software that permit to solve the non-linear "multiple" regression, but i have also noticed that in mathematics you have to take care of the variability of the y in non-linear least squares problems so that to approximate, also the Levenberg–Marquardt algorithm (LMA or just LM) that i have just implemented , also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The Levenberg–Marquardt algorithm is used in many software applications for solving generic curve-fitting problems. The Levenberg–Marquardt algorithm was found to be an efficient, fast and robust method which also has a good global convergence property. For these reasons, It has been incorporated into many good commercial packages performing non-linear regression. But my way of implementing the non-linear "multiple" regression in the software will be much more powerful than Levenberg–Marquardt algorithm, and of course i will share with you many parts of my software project, so stay tuned ! More of my philosophy about C# and Delphi and about technology and more of my thoughts.. I invite you to read the following article: Why C# coders should shut up about Delphi Read more here: https://jonlennartaasenden.wordpress.com/2016/10/18/why-c-coders-should-shut-up-about-delphi/ More of my philosophy about Delphi and Freepascal compilers and about technology and more of my thoughts.. According to the 20th edition of the State of the Developer Nation report, there were 26.8 million active software developers in the world at the end of 2021, and from the following article on Delphi programming language, there is 3.25% Delphi software developers this year, so it is around 1 million Delphi software developers in 2022. Here are the following articles, read them carefully: https://blogs.embarcadero.com/considerations-on-delphi-and-the-stackoverflow-2022-developer-survey/ And read here: https://www.future-processing.com/blog/how-many-developers-are-there-in-the-world-in-2019/ I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, of course that i have already programmed in C++, for example you can download my Open source software project of Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well from my website here: https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library But why do you think i am also programming in Delphi and Freepascal ? 
Of course that Delphi and Freepascal compilers support modern Object Pascal, it is not only Pascal, but it is modern Object Pascal, i mean that modern Object Pascal of for example Delphi and Freepascal support object oriented programming and support Anonymous methods or typed Lambdas , so i think that it is a decent programming language, even if i know that the new C++ 20 supports generic Lambdas and templated Lambdas, but i think that Delphi will soon also support generic Lambdas, and in Delphi and Freepascal compilers there is no big runtime like in C# and such compilers, so you get small native executables in Delphi and Freepascal, and inline assembler is supported by both Delphi and Freepascal, and Lazarus the IDE of Freepascal and Delphi come both with one of the best GUI tools, and of course you can make .SO, .DLL, executables, etc. in both Delphi and Freepascal, and both Delphi and Freepascal compilers are Cross platform to Windows, Linux and Mac and Android etc. , and i think that modern Object Pascal of Delphi or Freepascal is more strongly typed than C++ , but less strongly typed than ADA programming language, but i think that modern Object Pascal of Delphi and Freepascal are not Strict as the programming language ADA and are not strict as the programming language Rust or the pure functional programming languages, so it can also be flexible and advantageous to not be this kind of strictness, and the compilation times of Delphi is extremely fast , and of course Freepascal supports the Delphi mode so that to be compatible with Delphi and i can go on and on, and it is why i am also programming in Delphi and Freepascal. And you can read about the last version 11.2 of Delphi from here: https://www.embarcadero.com/products/delphi And you can read about Freepascal and Lazarus from here: https://www.freepascal.org/ https://www.lazarus-ide.org/ More of my philosophy about Asynchronous programming and about the futures and about the ActiveObject and about technology and more of my thoughts.. I am a white arab, and i think i am smart since i have also invented many scalable algorithms and algorithms.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, i think from my new implementation of future below, you can notice that Asynchronous programming is not a simple task, since it can get too much complicated , since you can notice in my implementation below that if i make the starting of the thread of the future out of the constructor and if i make the passing of the parameter as a pointer to the future out of the constructor , it will get more complex to get the automaton of how to use and call the methods right and safe, so i think that there is still a problem with Asynchronous programming and it is that when you have many Asynchronous tasks or threads it can get really complex, and i think that it is the weakness of Asynchronous programming, and of course i am also speaking of the implementation of a sophisticated ActiveObject or a future or complex Asynchronous programming. More of my philosophy about my new updated implementation of a future and about the ActiveObject and about technology and more of my thoughts.. 
I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have just updated my implementation of a future, and now both the starting the thread of the future and the passing the parameter as a pointer to the future is made from the constructor so that to make safe the system of the automaton of the how to use and call the methods, and I have just added support for exceptions, so you have to know that programming with futures is asynchronous programming, but so that to be robust the future implementation has to deal correctly with "exceptions", so in my implementation of a future when an exception is raised inside the future you will receive the exception, so i have implemented two things: The HasException() method so that to detect the exception from inside the future, and the the exception and its address is returned as a string in the ExceptionStr property, and my implementation of a future does of course support passing parameters as a pointer to the future, also my implementation of a future works in Windows and Linux, and of course you can also use my following more sophisticated Threadpool engine with priorities as a sophisticated ActiveObject or such and pass the methods or functions and there parameters to it, here it is: Threadpool engine with priorities https://sites.google.com/site/scalable68/threadpool-engine-with-priorities And stay tuned since i will enhance more my above Threadpool engine with priorities. So you can download my new updated portable and efficient implementation of a future in Delphi and FreePascal version 1.32 from my website here: https://sites.google.com/site/scalable68/a-portable-and-efficient-implementation-of-a-future-in-delphi-and-freepascal And here is a new example program of how to use my implementation of a future in Delphi and Freepascal and notice that the interface has changed a little bit: -- program TestFuture; uses system.SysUtils, system.Classes, Futures; type TTestFuture1 = class(TFuture) public function Compute(ptr:pointer): Variant; override; end; TTestFuture2 = class(TFuture) public function Compute(ptr:pointer): Variant; override; end; var obj1:TTestFuture1; obj2:TTestFuture2; a:variant; function TTestFuture1.Compute(ptr:pointer): Variant; begin raise Exception.Create('I raised an exception'); end; function TTestFuture2.Compute(ptr:pointer): Variant; begin writeln(nativeint(ptr)); result:='Hello world !'; end; begin writeln; obj1:=TTestFuture1.create(pointer(12)); if obj1.GetValue(a) then writeln(a) else if obj1.HasException then writeln(obj1.ExceptionStr); obj1.free; writeln; obj2:=TTestFuture2.create(pointer(12)); if obj2.GetValue(a) then writeln(a); obj2.free; end. --- More of my philosophy about quantum computing and about matrix operations and about scalability and more of my thoughts.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, i have just looked at the following video about the powerful parallel quantum computer of IBM from USA that will be soon available in the cloud, and i invite you to look at it: Quantum Computing: Now Widely Available! 
https://www.youtube.com/watch?v=laqpfQ8-jFI But i have just read the following paper and it is saying that the powerful Quantum algorithms for matrix operations and linear systems of equations are available, read about them on the below paper, so as you notice in the following paper that many matrix operations and also the linear systems of equations solver can be done in a quantum computer, read about it here in the following paper: Quantum algorithms for matrix operations and linear systems of equations Read more here: https://arxiv.org/pdf/2202.04888.pdf So i think that IBM will do the same for there powerful parallel quantum computer that will be available in the cloud, but i think that you will have to pay for it of course since i think it will be commercial, but i think that there is a weakness with this kind of configuration of the powerful parallel quantum computer from IBM, since the cost of bandwidth of internet is exponentially decreasing , but the latency of accessing the internet is not, so it is why i think that people will still use classical computers for many mathematical applications that uses mathematical operations such as matrix operations and linear systems of equations etc. that needs a much faster latency, other than that Moore's law will still be effective in classical computers since it will permit us to have really powerful classical computer at a low cost and it will be really practical since the quantum computer is big in size and not so practical, so read about the two inventions below that will make logic gates thousands of times faster or a million times faster than those in existing computers so that to notice it, so i think that the business of classical computers will still be great in the future even with the coming of the powerful parallel quantum computer of IBM, so as you notice this kind of business is not only dependent on Moore's law and Bezos' Law , but it is also dependent on the latency of accessing internet, so read my following thoughts about Moore's law and about Bezos' Law: More of my philosophy about Moore's law and about Bezos' Law.. For RAM chips and flash memory, Moore's Law means that in eighteen months you'll pay the same price as today for twice as much storage. But other computing components are also seeing their price versus performance curves skyrocket exponentially. Data storage doubles every twelve months. More about Moore's law and about Bezos' Law.. "Parallel code is the recipe for unlocking Moore's Law" And: "BEZOS' LAW The Cost of Cloud Computing will be cut in half every 18 months - Bezos' Law Like Moore's law, Bezos' Law is about exponential improvement over time. If you look at AWS history, they drop prices constantly. In 2013 alone they've already had 9 price drops. The difference; however, between Bezos' and Moore's law is this: Bezos' law is the first law that isn't anchored in technical innovation. Rather, Bezos' law is anchored in confidence and market dynamics, and will only hold true so long as Amazon is not the aggregate dominant force in Cloud Computing (50%+ market share). Monopolies don't cut prices." |
Amine Moulay Ramdane <aminer68@gmail.com>: Nov 03 11:52AM -0700 Hello, More of my philosophy about non-linear regression and about technology and more of my thoughts.. I am a white arab, and i think i am smart since i have also invented many scalable algorithms and algorithms.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so the R-squared is invalid for non-linear regression, so you have to use the standard error of the estimate (Mean Square Error), and of course you have to calculate the Relative standard error that is the standard deviation of the mean of the sample divide by the Estimate that is the mean of the sample, and i think that the Relative standard Error is an important thing that brings more quality to the statistical calculations, and i will now talk to you more about my interesting software project for mathematics, so my new software project uses artificial intelligence to implement a generalized way with artificial intelligence using the software that permit to solve the non-linear "multiple" regression, and it is much more powerful than Levenberg–Marquardt algorithm , since i am implementing a smart algorithm using artificial intelligence that permits to avoid premature convergence, and it is also one of the most important thing, and it will also be much more scalable using multicores so that to search with artificial intelligence much faster the global optimum, so i am doing it this way so that to be professional and i will give you a tutorial that explains my algorithms that uses artificial intelligence so that you learn from them, and of course it will automatically calculate the above Standard error of the estimate and the Relative standard Error. More of my philosophy about non-linear regression and more.. I think i am really smart, and i have also just finished quickly the software implementation of Levenberg–Marquardt algorithm and of the Simplex algorithm to solve non-linear least squares problems, and i will soon implement a generalized way with artificial intelligence using the software that permit to solve the non-linear "multiple" regression, but i have also noticed that in mathematics you have to take care of the variability of the y in non-linear least squares problems so that to approximate, also the Levenberg–Marquardt algorithm (LMA or just LM) that i have just implemented , also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The Levenberg–Marquardt algorithm is used in many software applications for solving generic curve-fitting problems. The Levenberg–Marquardt algorithm was found to be an efficient, fast and robust method which also has a good global convergence property. For these reasons, It has been incorporated into many good commercial packages performing non-linear regression. But my way of implementing the non-linear "multiple" regression in the software will be much more powerful than Levenberg–Marquardt algorithm, and of course i will share with you many parts of my software project, so stay tuned ! More of my philosophy about C# and Delphi and about technology and more of my thoughts.. 
I invite you to read the following article: Why C# coders should shut up about Delphi Read more here: https://jonlennartaasenden.wordpress.com/2016/10/18/why-c-coders-should-shut-up-about-delphi/ More of my philosophy about Delphi and Freepascal compilers and about technology and more of my thoughts.. According to the 20th edition of the State of the Developer Nation report, there were 26.8 million active software developers in the world at the end of 2021, and from the following article on Delphi programming language, there is 3.25% Delphi software developers this year, so it is around 1 million Delphi software developers in 2022. Here are the following articles, read them carefully: https://blogs.embarcadero.com/considerations-on-delphi-and-the-stackoverflow-2022-developer-survey/ And read here: https://www.future-processing.com/blog/how-many-developers-are-there-in-the-world-in-2019/ I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, of course that i have already programmed in C++, for example you can download my Open source software project of Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well from my website here: https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library But why do you think i am also programming in Delphi and Freepascal ? Of course that Delphi and Freepascal compilers support modern Object Pascal, it is not only Pascal, but it is modern Object Pascal, i mean that modern Object Pascal of for example Delphi and Freepascal support object oriented programming and support Anonymous methods or typed Lambdas , so i think that it is a decent programming language, even if i know that the new C++ 20 supports generic Lambdas and templated Lambdas, but i think that Delphi will soon also support generic Lambdas, and in Delphi and Freepascal compilers there is no big runtime like in C# and such compilers, so you get small native executables in Delphi and Freepascal, and inline assembler is supported by both Delphi and Freepascal, and Lazarus the IDE of Freepascal and Delphi come both with one of the best GUI tools, and of course you can make .SO, .DLL, executables, etc. in both Delphi and Freepascal, and both Delphi and Freepascal compilers are Cross platform to Windows, Linux and Mac and Android etc. , and i think that modern Object Pascal of Delphi or Freepascal is more strongly typed than C++ , but less strongly typed than ADA programming language, but i think that modern Object Pascal of Delphi and Freepascal are not Strict as the programming language ADA and are not strict as the programming language Rust or the pure functional programming languages, so it can also be flexible and advantageous to not be this kind of strictness, and the compilation times of Delphi is extremely fast , and of course Freepascal supports the Delphi mode so that to be compatible with Delphi and i can go on and on, and it is why i am also programming in Delphi and Freepascal. And you can read about the last version 11.2 of Delphi from here: https://www.embarcadero.com/products/delphi And you can read about Freepascal and Lazarus from here: https://www.freepascal.org/ https://www.lazarus-ide.org/ More of my philosophy about Asynchronous programming and about the futures and about the ActiveObject and about technology and more of my thoughts.. I am a white arab, and i think i am smart since i have also invented many scalable algorithms and algorithms.. 
I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, i think from my new implementation of future below, you can notice that Asynchronous programming is not a simple task, since it can get too much complicated , since you can notice in my implementation below that if i make the starting of the thread of the future out of the constructor and if i make the passing of the parameter as a pointer to the future out of the constructor , it will get more complex to get the automaton of how to use and call the methods right and safe, so i think that there is still a problem with Asynchronous programming and it is that when you have many Asynchronous tasks or threads it can get really complex, and i think that it is the weakness of Asynchronous programming, and of course i am also speaking of the implementation of a sophisticated ActiveObject or a future or complex Asynchronous programming. More of my philosophy about my new updated implementation of a future and about the ActiveObject and about technology and more of my thoughts.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have just updated my implementation of a future, and now both the starting the thread of the future and the passing the parameter as a pointer to the future is made from the constructor so that to make safe the system of the automaton of the how to use and call the methods, and I have just added support for exceptions, so you have to know that programming with futures is asynchronous programming, but so that to be robust the future implementation has to deal correctly with "exceptions", so in my implementation of a future when an exception is raised inside the future you will receive the exception, so i have implemented two things: The HasException() method so that to detect the exception from inside the future, and the the exception and its address is returned as a string in the ExceptionStr property, and my implementation of a future does of course support passing parameters as a pointer to the future, also my implementation of a future works in Windows and Linux, and of course you can also use my following more sophisticated Threadpool engine with priorities as a sophisticated ActiveObject or such and pass the methods or functions and there parameters to it, here it is: Threadpool engine with priorities https://sites.google.com/site/scalable68/threadpool-engine-with-priorities And stay tuned since i will enhance more my above Threadpool engine with priorities. 
So you can download my new updated portable and efficient implementation of a future in Delphi and FreePascal version 1.32 from my website here: https://sites.google.com/site/scalable68/a-portable-and-efficient-implementation-of-a-future-in-delphi-and-freepascal And here is a new example program of how to use my implementation of a future in Delphi and Freepascal and notice that the interface has changed a little bit: -- program TestFuture; uses system.SysUtils, system.Classes, Futures; type TTestFuture1 = class(TFuture) public function Compute(ptr:pointer): Variant; override; end; TTestFuture2 = class(TFuture) public function Compute(ptr:pointer): Variant; override; end; var obj1:TTestFuture1; obj2:TTestFuture2; a:variant; function TTestFuture1.Compute(ptr:pointer): Variant; begin raise Exception.Create('I raised an exception'); end; function TTestFuture2.Compute(ptr:pointer): Variant; begin writeln(nativeint(ptr)); result:='Hello world !'; end; begin writeln; obj1:=TTestFuture1.create(pointer(12)); if obj1.GetValue(a) then writeln(a) else if obj1.HasException then writeln(obj1.ExceptionStr); obj1.free; writeln; obj2:=TTestFuture2.create(pointer(12)); if obj2.GetValue(a) then writeln(a); obj2.free; end. --- More of my philosophy about quantum computing and about matrix operations and about scalability and more of my thoughts.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, i have just looked at the following video about the powerful parallel quantum computer of IBM from USA that will be soon available in the cloud, and i invite you to look at it: Quantum Computing: Now Widely Available! https://www.youtube.com/watch?v=laqpfQ8-jFI But i have just read the following paper and it is saying that the powerful Quantum algorithms for matrix operations and linear systems of equations are available, read about them on the below paper, so as you notice in the following paper that many matrix operations and also the linear systems of equations solver can be done in a quantum computer, read about it here in the following paper: Quantum algorithms for matrix operations and linear systems of equations Read more here: https://arxiv.org/pdf/2202.04888.pdf So i think that IBM will do the same for there powerful parallel quantum computer that will be available in the cloud, but i think that you will have to pay for it of course since i think it will be commercial, but i think that there is a weakness with this kind of configuration of the powerful parallel quantum computer from IBM, since the cost of bandwidth of internet is exponentially decreasing , but the latency of accessing the internet is not, so it is why i think that people will still use classical computers for many mathematical applications that uses mathematical operations such as matrix operations and linear systems of equations etc. 
that needs a much faster latency, other than that Moore's law will still be effective in classical computers since it will permit us to have really powerful classical computer at a low cost and it will be really practical since the quantum computer is big in size and not so practical, so read about the two inventions below that will make logic gates thousands of times faster or a million times faster than those in existing computers so that to notice it, so i think that the business of classical computers will still be great in the future even with the coming of the powerful parallel quantum computer of IBM, so as you notice this kind of business is not only dependent on Moore's law and Bezos' Law , but it is also dependent on the latency of accessing internet, so read my following thoughts about Moore's law and about Bezos' Law: More of my philosophy about Moore's law and about Bezos' Law.. For RAM chips and flash memory, Moore's Law means that in eighteen months you'll pay the same price as today for twice as much storage. But other computing components are also seeing their price versus performance curves skyrocket exponentially. Data storage doubles every twelve months. More about Moore's law and about Bezos' Law.. "Parallel code is the recipe for unlocking Moore's Law" And: "BEZOS' LAW The Cost of Cloud Computing will be cut in half every 18 months - Bezos' Law Like Moore's law, Bezos' Law is about exponential improvement over time. If you look at AWS history, they drop prices constantly. In 2013 alone they've already had 9 price drops. The difference; however, between Bezos' and Moore's law is this: Bezos' law is the first law that isn't anchored in technical innovation. Rather, Bezos' law is anchored in confidence and market dynamics, and will only hold true so long as Amazon is not the aggregate dominant force in Cloud Computing (50%+ market share). Monopolies don't cut prices." More of my philosophy about latency and contention and concurrency and parallelism and more of my thoughts.. I think i am highly smart and i have just posted, read it below, about the new two inventions that will make logic gates thousands of times faster or a million times faster than those in existing computers, and i think that there is still a problem with those new inventions, and it is about the latency and concurrency, since you need concurrency and you need preemptive or non-preemptive scheduling of the coroutines , so since the HBM is 106.7 ns in latency and the DDR4 is 73.3 ns in latency and the AMD 3D V-Cache has also almost the same cost in latency, so as you notice that this kind of latency is still costly , also there is a latency that is the Time slice that takes a coroutine to execute and it is costly in latency, since this kind of latency and Time slice is a waiting time that looks like the time wasted in a contention in parallelism, so by logical analogy this kind of latency and Time slice create like a contention like in parallelism that reduces scalability, so i think it is why those new inventions have this kind of limit or constraints in a "concurrency" environment. And i invite you to read my following smart thoughts about preemptive and non-preemptive timesharing: https://groups.google.com/g/alt.culture.morocco/c/JuC4jar661w More of my philosophy about Fastest-ever |
Amine Moulay Ramdane <aminer68@gmail.com>: Nov 03 07:43AM -0700 Hello, More of my philosophy about C# and Delphi and about technology and more of my thoughts.. I am a white arab, and i think i am smart since i have also invented many scalable algorithms and algorithms.. I invite you to read the following article: Why C# coders should shut up about Delphi Read more here: https://jonlennartaasenden.wordpress.com/2016/10/18/why-c-coders-should-shut-up-about-delphi/ More of my philosophy about Delphi and Freepascal compilers and about technology and more of my thoughts.. According to the 20th edition of the State of the Developer Nation report, there were 26.8 million active software developers in the world at the end of 2021, and from the following article on Delphi programming language, there is 3.25% Delphi software developers this year, so it is around 1 million Delphi software developers in 2022. Here are the following articles, read them carefully: https://blogs.embarcadero.com/considerations-on-delphi-and-the-stackoverflow-2022-developer-survey/ And read here: https://www.future-processing.com/blog/how-many-developers-are-there-in-the-world-in-2019/ I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, of course that i have already programmed in C++, for example you can download my Open source software project of Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well from my website here: https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library But why do you think i am also programming in Delphi and Freepascal ? Of course that Delphi and Freepascal compilers support modern Object Pascal, it is not only Pascal, but it is modern Object Pascal, i mean that modern Object Pascal of for example Delphi and Freepascal support object oriented programming and support Anonymous methods or typed Lambdas , so i think that it is a decent programming language, even if i know that the new C++ 20 supports generic Lambdas and templated Lambdas, but i think that Delphi will soon also support generic Lambdas, and in Delphi and Freepascal compilers there is no big runtime like in C# and such compilers, so you get small native executables in Delphi and Freepascal, and inline assembler is supported by both Delphi and Freepascal, and Lazarus the IDE of Freepascal and Delphi come both with one of the best GUI tools, and of course you can make .SO, .DLL, executables, etc. in both Delphi and Freepascal, and both Delphi and Freepascal compilers are Cross platform to Windows, Linux and Mac and Android etc. , and i think that modern Object Pascal of Delphi or Freepascal is more strongly typed than C++ , but less strongly typed than ADA programming language, but i think that modern Object Pascal of Delphi and Freepascal are not Strict as the programming language ADA and are not strict as the programming language Rust or the pure functional programming languages, so it can also be flexible and advantageous to not be this kind of strictness, and the compilation times of Delphi is extremely fast , and of course Freepascal supports the Delphi mode so that to be compatible with Delphi and i can go on and on, and it is why i am also programming in Delphi and Freepascal. 
And you can read about the last version 11.2 of Delphi from here: https://www.embarcadero.com/products/delphi And you can read about Freepascal and Lazarus from here: https://www.freepascal.org/ https://www.lazarus-ide.org/ More of my philosophy about Asynchronous programming and about the futures and about the ActiveObject and about technology and more of my thoughts.. I am a white arab, and i think i am smart since i have also invented many scalable algorithms and algorithms.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, i think from my new implementation of future below, you can notice that Asynchronous programming is not a simple task, since it can get too much complicated , since you can notice in my implementation below that if i make the starting of the thread of the future out of the constructor and if i make the passing of the parameter as a pointer to the future out of the constructor , it will get more complex to get the automaton of how to use and call the methods right and safe, so i think that there is still a problem with Asynchronous programming and it is that when you have many Asynchronous tasks or threads it can get really complex, and i think that it is the weakness of Asynchronous programming, and of course i am also speaking of the implementation of a sophisticated ActiveObject or a future or complex Asynchronous programming. More of my philosophy about my new updated implementation of a future and about the ActiveObject and about technology and more of my thoughts.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have just updated my implementation of a future, and now both the starting the thread of the future and the passing the parameter as a pointer to the future is made from the constructor so that to make safe the system of the automaton of the how to use and call the methods, and I have just added support for exceptions, so you have to know that programming with futures is asynchronous programming, but so that to be robust the future implementation has to deal correctly with "exceptions", so in my implementation of a future when an exception is raised inside the future you will receive the exception, so i have implemented two things: The HasException() method so that to detect the exception from inside the future, and the the exception and its address is returned as a string in the ExceptionStr property, and my implementation of a future does of course support passing parameters as a pointer to the future, also my implementation of a future works in Windows and Linux, and of course you can also use my following more sophisticated Threadpool engine with priorities as a sophisticated ActiveObject or such and pass the methods or functions and there parameters to it, here it is: Threadpool engine with priorities https://sites.google.com/site/scalable68/threadpool-engine-with-priorities And stay tuned since i will enhance more my above Threadpool engine with priorities. 
So you can download my new updated portable and efficient implementation of a future in Delphi and FreePascal version 1.32 from my website here: https://sites.google.com/site/scalable68/a-portable-and-efficient-implementation-of-a-future-in-delphi-and-freepascal And here is a new example program of how to use my implementation of a future in Delphi and Freepascal and notice that the interface has changed a little bit: -- program TestFuture; uses system.SysUtils, system.Classes, Futures; type TTestFuture1 = class(TFuture) public function Compute(ptr:pointer): Variant; override; end; TTestFuture2 = class(TFuture) public function Compute(ptr:pointer): Variant; override; end; var obj1:TTestFuture1; obj2:TTestFuture2; a:variant; function TTestFuture1.Compute(ptr:pointer): Variant; begin raise Exception.Create('I raised an exception'); end; function TTestFuture2.Compute(ptr:pointer): Variant; begin writeln(nativeint(ptr)); result:='Hello world !'; end; begin writeln; obj1:=TTestFuture1.create(pointer(12)); if obj1.GetValue(a) then writeln(a) else if obj1.HasException then writeln(obj1.ExceptionStr); obj1.free; writeln; obj2:=TTestFuture2.create(pointer(12)); if obj2.GetValue(a) then writeln(a); obj2.free; end. --- More of my philosophy about quantum computing and about matrix operations and about scalability and more of my thoughts.. I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, i have just looked at the following video about the powerful parallel quantum computer of IBM from USA that will be soon available in the cloud, and i invite you to look at it: Quantum Computing: Now Widely Available! https://www.youtube.com/watch?v=laqpfQ8-jFI But i have just read the following paper and it is saying that the powerful Quantum algorithms for matrix operations and linear systems of equations are available, read about them on the below paper, so as you notice in the following paper that many matrix operations and also the linear systems of equations solver can be done in a quantum computer, read about it here in the following paper: Quantum algorithms for matrix operations and linear systems of equations Read more here: https://arxiv.org/pdf/2202.04888.pdf So i think that IBM will do the same for there powerful parallel quantum computer that will be available in the cloud, but i think that you will have to pay for it of course since i think it will be commercial, but i think that there is a weakness with this kind of configuration of the powerful parallel quantum computer from IBM, since the cost of bandwidth of internet is exponentially decreasing , but the latency of accessing the internet is not, so it is why i think that people will still use classical computers for many mathematical applications that uses mathematical operations such as matrix operations and linear systems of equations etc. 
that need a much lower latency. Other than that, Moore's law will still be effective for classical computers, since it will permit us to have really powerful classical computers at a low cost, and it will be really practical, since the quantum computer is big in size and not so practical; read about the two inventions below that will make logic gates thousands of times faster, or a million times faster, than those in existing computers so as to notice it. So I think that the business of classical computers will still be great in the future even with the coming of the powerful parallel quantum computer of IBM, and as you notice, this kind of business is not only dependent on Moore's law and on Bezos' Law, but it is also dependent on the latency of accessing the internet, so read my following thoughts about Moore's law and about Bezos' Law: More of my philosophy about Moore's law and about Bezos' Law.. For RAM chips and flash memory, Moore's Law means that in eighteen months you'll pay the same price as today for twice as much storage. But other computing components are also seeing their price versus performance curves skyrocket exponentially. Data storage doubles every twelve months. More about Moore's law and about Bezos' Law.. "Parallel code is the recipe for unlocking Moore's Law" And: "BEZOS' LAW The Cost of Cloud Computing will be cut in half every 18 months - Bezos' Law Like Moore's law, Bezos' Law is about exponential improvement over time. If you look at AWS history, they drop prices constantly. In 2013 alone they've already had 9 price drops. The difference, however, between Bezos' and Moore's law is this: Bezos' law is the first law that isn't anchored in technical innovation. Rather, Bezos' law is anchored in confidence and market dynamics, and will only hold true so long as Amazon is not the aggregate dominant force in Cloud Computing (50%+ market share). Monopolies don't cut prices." More of my philosophy about latency and contention and concurrency and parallelism and more of my thoughts.. I think I am highly smart, and I have just posted, read it below, about the two new inventions that will make logic gates thousands of times faster, or a million times faster, than those in existing computers, and I think that there is still a problem with those new inventions, and it is about latency and concurrency: since you need concurrency, you need preemptive or non-preemptive scheduling of the coroutines, and since HBM is 106.7 ns in latency and DDR4 is 73.3 ns in latency, and the AMD 3D V-Cache has almost the same cost in latency, you notice that this kind of latency is still costly. Also, there is a latency that is the time slice that a coroutine takes to execute, and it is costly, since this kind of latency and time slice is a waiting time that looks like the time wasted in contention in parallelism, so by logical analogy this kind of latency and time slice creates something like the contention in parallelism that reduces scalability, and I think that is why those new inventions have this kind of limit or constraint in a "concurrency" environment. And I invite you to read my following smart thoughts about preemptive and non-preemptive timesharing: https://groups.google.com/g/alt.culture.morocco/c/JuC4jar661w More of my philosophy about Fastest-ever logic gates and more of my thoughts.. "Logic gates are the fundamental building blocks of computers, and researchers at the University of Rochester have now developed the fastest ones ever created.
By zapping graphene and gold with laser pulses, the new logic gates are a million times faster than those in existing computers, demonstrating the viability of "lightwave electronics.". If these kinds of lightwave electronic devices ever do make it to market, they could be millions of times faster than today's computers. Currently we measure processing speeds in Gigahertz (GHz), but these new logic gates function on the scale of Petahertz (PHz). Previous studies have set that as the absolute quantum limit of how fast light-based computer systems could possibly get." Read more here: https://newatlas.com/electronics/fastest-ever-logic-gates-computers-million-times-faster-petahertz/ Read my following news: And with the following new discovery computers and phones could run thousands of times faster.. Prof Alan Dalton in the School of Mathematical and Physics Sciences at the University of Sussex, said: "We're mechanically creating kinks in a layer of graphene. It's a bit like nano-origami. "Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future. "This kind of technology -- "straintronics" using nanomaterials as opposed to electronics -- allows space for more chips inside any device. Everything we want to do with computers -- to speed them up -- can be done by crinkling graphene like this." Dr Manoj Tripathi, Research Fellow in Nano-structured Materials at the University of Sussex and lead author on the paper, said: "Instead of having to add foreign materials into a device, we've shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate." The development is a greener, more sustainable technology. Because no additional materials need to be added, and because this process works at room temperature rather than high temperature, it uses less energy to create. Read more here: https://www.sciencedaily.com/releases/2021/02/210216100141.htm But I think that mass production of graphene still hasn't quite begun, so i think the inventions above of the Fastest-ever logic gates that uses graphene and of the one with nanomaterials that uses graphene will not be commercialized fully until perhaps around year 2035 or 2040 or so, so read the following so that to understand why: "Because large-scale mass production of graphene still hasn't quite begun , the market is a bit limited. However, that leaves a lot of room open |
Amine Moulay Ramdane <aminer68@gmail.com>: Nov 03 06:25AM -0700

Hello,

More of my philosophy about Delphi and Freepascal compilers and about technology and more of my thoughts..

I am a white Arab, and I think I am smart since I have also invented many scalable algorithms and other algorithms..

According to the 20th edition of the State of the Developer Nation report, there were 26.8 million active software developers in the world at the end of 2021, and the following article on the Delphi programming language reports that about 3.25% of developers use Delphi this year, so that is around 1 million Delphi software developers in 2022. Here are the articles, read them carefully:

https://blogs.embarcadero.com/considerations-on-delphi-and-the-stackoverflow-2022-developer-survey/

And read here:

https://www.future-processing.com/blog/how-many-developers-are-there-in-the-world-in-2019/

I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. Of course I have already programmed in C++; for example, you can download my open source project of a Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well from my website here:

https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library

But why do you think I am also programming in Delphi and Freepascal? Delphi and Freepascal compilers support modern Object Pascal, not just Pascal: modern Object Pascal, as in Delphi and Freepascal, supports object-oriented programming and supports anonymous methods, that is, typed Lambdas (see the small example just below), so I think it is a decent programming language, even if I know that the new C++20 supports generic Lambdas and templated Lambdas. Also, Delphi and Freepascal have no runtime like C# and such compilers require, so you get small native executables; inline assembler is supported by both Delphi and Freepascal; Lazarus (the IDE of Freepascal) and Delphi both come with one of the best GUI tools; you can of course build .SO and .DLL libraries, executables, etc. in both Delphi and Freepascal; and both compilers are cross platform to Windows, Linux, Mac, Android, etc. I also think that modern Object Pascal of Delphi or Freepascal is more strongly typed than C++, but less strongly typed than the ADA programming language. Modern Object Pascal of Delphi and Freepascal is not as strict as ADA and not as strict as Rust or the pure functional programming languages, so this lack of strictness can also be flexible and advantageous. And the compilation times of Delphi are extremely fast, and of course Freepascal supports the Delphi mode so as to be compatible with Delphi, and I can go on and on, and that is why I am also programming in Delphi and Freepascal.

And you can read about the latest version 11.2 of Delphi from here:

https://www.embarcadero.com/products/delphi

And you can read about Freepascal and Lazarus from here:

https://www.freepascal.org/

https://www.lazarus-ide.org/
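And as a quick illustration of what I mean by anonymous methods, that is, typed Lambdas, here is a minimal sketch in Delphi (just an illustrative example that I am writing here, not code taken from my software projects) of an anonymous method that captures a parameter inside a closure, and I think that in Freepascal this syntax needs a recent trunk version of the compiler with the anonymousfunctions and functionreferences modeswitches:

---

program AnonMethodExample;

{$APPTYPE CONSOLE}

uses
  SysUtils;

type
  // A "reference to function" type, so that the anonymous method below is typed
  TIntFunc = reference to function(x: Integer): Integer;

// Returns a closure that captures the parameter 'base'
function MakeAdder(base: Integer): TIntFunc;
begin
  Result := function(x: Integer): Integer
            begin
              Result := x + base; // 'base' is captured by the closure
            end;
end;

var
  addTen: TIntFunc;

begin
  addTen := MakeAdder(10);
  writeln(addTen(5)); // prints 15
end.

---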
More of my philosophy about Asynchronous programming and about the futures and about the ActiveObject and about technology and more of my thoughts..

I am a white Arab, and I think I am smart since I have also invented many scalable algorithms and other algorithms..

I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I think that from my new implementation of a future below, you can notice that asynchronous programming is not a simple task, since it can get too complicated: you can notice in my implementation below that if I move the starting of the thread of the future out of the constructor, and if I move the passing of the parameter as a pointer to the future out of the constructor, it gets harder to get the automaton of how to use and call the methods right and safe. So I think that there is still a problem with asynchronous programming, which is that when you have many asynchronous tasks or threads it can get really complex, and I think that this is the weakness of asynchronous programming, and of course I am also speaking of the implementation of a sophisticated ActiveObject or a future or of complex asynchronous programming.

More of my philosophy about my new updated implementation of a future and about the ActiveObject and about technology and more of my thoughts..

I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I have just updated my implementation of a future, and now both the starting of the thread of the future and the passing of the parameter as a pointer to the future are made from the constructor, so as to make safe the automaton of how to use and call the methods. I have also just added support for exceptions: you have to know that programming with futures is asynchronous programming, but to be robust the future implementation has to deal correctly with "exceptions", so in my implementation of a future, when an exception is raised inside the future you will receive the exception. I have implemented two things: the HasException() method, so as to detect an exception raised inside the future, and the exception and its address are returned as a string in the ExceptionStr property (see the simplified sketch below of the general shape of such a design). My implementation of a future does of course support passing parameters as a pointer to the future, and it works on both Windows and Linux. And of course you can also use my following more sophisticated Threadpool engine with priorities as a sophisticated ActiveObject or such, and pass the methods or functions and their parameters to it, here it is:

Threadpool engine with priorities

https://sites.google.com/site/scalable68/threadpool-engine-with-priorities

And stay tuned since I will enhance my above Threadpool engine with priorities more.
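Here is a simplified sketch of the general shape of such a design in Delphi and Freepascal (just an illustrative sketch that I am writing here so that you understand the idea, and not my actual library code, which is more complete and more efficient): the constructor receives the pointer parameter and starts the thread, and an exception raised inside Compute() is caught and exposed through HasException() and the ExceptionStr property:

---

unit FutureSketch;

{$ifdef FPC}{$mode delphi}{$endif}

interface

uses
  SysUtils, Classes, Variants;

type
  { Abstract base class: derive from it and override Compute. }
  TFutureSketch = class(TThread)
  private
    FPtr: Pointer;
    FValue: Variant;
    FFailed: Boolean;
    FExceptionStr: string;
  protected
    procedure Execute; override;
  public
    constructor Create(ptr: Pointer);
    function Compute(ptr: Pointer): Variant; virtual; abstract;
    function GetValue(var v: Variant): Boolean;  // blocks; False if Compute raised
    function HasException: Boolean;              // blocks; True if Compute raised
    property ExceptionStr: string read FExceptionStr;
  end;

implementation

constructor TFutureSketch.Create(ptr: Pointer);
begin
  inherited Create(True);  // create the worker thread suspended
  FPtr := ptr;             // the pointer parameter is passed from the constructor
  Start;                   // and the thread is also started from the constructor
end;

procedure TFutureSketch.Execute;
begin
  try
    FValue := Compute(FPtr);
  except
    on e: Exception do
    begin
      FFailed := True;
      // capture the exception as a string (a real implementation can also include its address)
      FExceptionStr := e.ClassName + ': ' + e.Message;
    end;
  end;
end;

function TFutureSketch.GetValue(var v: Variant): Boolean;
begin
  WaitFor;                 // block until Execute has finished
  v := FValue;
  Result := not FFailed;
end;

function TFutureSketch.HasException: Boolean;
begin
  WaitFor;
  Result := FFailed;
end;

end.

---

And then a derived class just overrides Compute() and is used like in the example program below.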
So you can download my new updated portable and efficient implementation of a future in Delphi and FreePascal version 1.32 from my website here:

https://sites.google.com/site/scalable68/a-portable-and-efficient-implementation-of-a-future-in-delphi-and-freepascal

And here is a new example program of how to use my implementation of a future in Delphi and Freepascal, and notice that the interface has changed a little bit:

--

program TestFuture;

uses
  system.SysUtils, system.Classes, Futures;

type
  TTestFuture1 = class(TFuture)
  public
    function Compute(ptr: pointer): Variant; override;
  end;

  TTestFuture2 = class(TFuture)
  public
    function Compute(ptr: pointer): Variant; override;
  end;

var
  obj1: TTestFuture1;
  obj2: TTestFuture2;
  a: variant;

function TTestFuture1.Compute(ptr: pointer): Variant;
begin
  raise Exception.Create('I raised an exception');
end;

function TTestFuture2.Compute(ptr: pointer): Variant;
begin
  writeln(nativeint(ptr));
  result := 'Hello world !';
end;

begin
  writeln;
  obj1 := TTestFuture1.create(pointer(12));
  if obj1.GetValue(a) then
    writeln(a)
  else if obj1.HasException then
    writeln(obj1.ExceptionStr);
  obj1.free;

  writeln;
  obj2 := TTestFuture2.create(pointer(12));
  if obj2.GetValue(a) then
    writeln(a);
  obj2.free;
end.

---

More of my philosophy about quantum computing and about matrix operations and about scalability and more of my thoughts..

I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I have just looked at the following video about the powerful parallel quantum computer of IBM from the USA that will soon be available in the cloud, and I invite you to look at it:

Quantum Computing: Now Widely Available!

https://www.youtube.com/watch?v=laqpfQ8-jFI

But I have also just read the following paper, and it says that powerful quantum algorithms for matrix operations and linear systems of equations are available, so as you can notice in the following paper, many matrix operations and also a solver for linear systems of equations can be run on a quantum computer, read about it here:

Quantum algorithms for matrix operations and linear systems of equations

Read more here: https://arxiv.org/pdf/2202.04888.pdf

So I think that IBM will do the same for their powerful parallel quantum computer that will be available in the cloud, but I think that you will have to pay for it of course, since I think it will be commercial. But I think that there is a weakness with this kind of configuration of the powerful parallel quantum computer from IBM: the cost of internet bandwidth is decreasing exponentially, but the latency of accessing the internet is not, so that is why I think that people will still use classical computers for many mathematical applications that use mathematical operations such as matrix operations and linear systems of equations and so on, which need much lower latency.
Other than that, Moore's law will still be effective for classical computers, since it will permit us to have really powerful classical computers at a low cost, and it will be really practical, since the quantum computer is big in size and not so practical; so read about the two inventions below that will make logic gates thousands of times, or even a million times, faster than those in existing computers so as to notice it. So I think that the business of classical computers will still be great in the future even with the coming of the powerful parallel quantum computer of IBM, and as you notice, this kind of business is not only dependent on Moore's law and Bezos' Law, but it is also dependent on the latency of accessing the internet. So read my following thoughts about Moore's law and about Bezos' Law:

More of my philosophy about Moore's law and about Bezos' Law..

For RAM chips and flash memory, Moore's Law means that in eighteen months you'll pay the same price as today for twice as much storage. But other computing components are also seeing their price versus performance curves improve exponentially. Data storage doubles every twelve months.

More about Moore's law and about Bezos' Law..

"Parallel code is the recipe for unlocking Moore's Law"

And:

"BEZOS' LAW

The Cost of Cloud Computing will be cut in half every 18 months - Bezos' Law

Like Moore's law, Bezos' Law is about exponential improvement over time. If you look at AWS history, they drop prices constantly. In 2013 alone they've already had 9 price drops. The difference, however, between Bezos' and Moore's law is this: Bezos' law is the first law that isn't anchored in technical innovation. Rather, Bezos' law is anchored in confidence and market dynamics, and will only hold true so long as Amazon is not the aggregate dominant force in Cloud Computing (50%+ market share). Monopolies don't cut prices."

More of my philosophy about latency and contention and concurrency and parallelism and more of my thoughts..

I think I am highly smart, and I have just posted, read it below, about the two new inventions that will make logic gates thousands of times, or even a million times, faster than those in existing computers, and I think that there is still a problem with those new inventions, and it is about latency and concurrency. Since you need concurrency, you need preemptive or non-preemptive scheduling of the coroutines, and since HBM is about 106.7 ns in latency and DDR4 is about 73.3 ns in latency, and the AMD 3D V-Cache has almost the same cost in latency, you notice that this kind of latency is still costly. There is also the latency of the time slice that a coroutine takes to execute, and it is costly too, since this kind of latency and time slice is a waiting time that looks like the time wasted in contention in parallelism; so, by logical analogy, this kind of latency and time slice creates something like contention in parallelism that reduces scalability, and I think that is why those new inventions have this kind of limit or constraint in a "concurrency" environment (see the small back-of-the-envelope example below).

And I invite you to read my following smart thoughts about preemptive and non-preemptive timesharing:

https://groups.google.com/g/alt.culture.morocco/c/JuC4jar661w
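So to give a rough order of magnitude for the above argument about latency, here is a small back-of-the-envelope sketch in Delphi and Freepascal (just an illustrative example that I am writing here, using the latency numbers quoted above) that computes the maximum rate of strictly dependent memory accesses when every access has to wait the full latency:

---

program LatencyBound;

{$APPTYPE CONSOLE}

uses
  SysUtils;

const
  DDR4LatencyNs = 73.3;   // DDR4 latency quoted above, in nanoseconds
  HBMLatencyNs  = 106.7;  // HBM latency quoted above, in nanoseconds

// Maximum number of strictly dependent accesses per second when each
// access must wait the full latency (no overlap, no cache hits).
function MaxDependentAccessesPerSec(latencyNs: Double): Double;
begin
  Result := 1.0e9 / latencyNs;
end;

begin
  writeln(Format('DDR4: about %.1f million dependent accesses per second',
                 [MaxDependentAccessesPerSec(DDR4LatencyNs) / 1.0e6]));
  writeln(Format('HBM : about %.1f million dependent accesses per second',
                 [MaxDependentAccessesPerSec(HBMLatencyNs) / 1.0e6]));
end.

---

So even if the logic gates become a million times faster, a chain of operations where each step has to wait on such a memory latency is bounded by roughly 9 to 14 million steps per second, and it is this kind of waiting time that, by my analogy above, acts like contention and reduces scalability.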
More of my philosophy about Fastest-ever logic gates and more of my thoughts..

"Logic gates are the fundamental building blocks of computers, and researchers at the University of Rochester have now developed the fastest ones ever created. By zapping graphene and gold with laser pulses, the new logic gates are a million times faster than those in existing computers, demonstrating the viability of "lightwave electronics". If these kinds of lightwave electronic devices ever do make it to market, they could be millions of times faster than today's computers. Currently we measure processing speeds in Gigahertz (GHz), but these new logic gates function on the scale of Petahertz (PHz). Previous studies have set that as the absolute quantum limit of how fast light-based computer systems could possibly get."

Read more here:

https://newatlas.com/electronics/fastest-ever-logic-gates-computers-million-times-faster-petahertz/

Read my following news:

And with the following new discovery computers and phones could run thousands of times faster..

Prof Alan Dalton in the School of Mathematical and Physics Sciences at the University of Sussex, said:

"We're mechanically creating kinks in a layer of graphene. It's a bit like nano-origami. Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future. This kind of technology -- "straintronics" using nanomaterials as opposed to electronics -- allows space for more chips inside any device. Everything we want to do with computers -- to speed them up -- can be done by crinkling graphene like this."

Dr Manoj Tripathi, Research Fellow in Nano-structured Materials at the University of Sussex and lead author on the paper, said:

"Instead of having to add foreign materials into a device, we've shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate."

The development is a greener, more sustainable technology. Because no additional materials need to be added, and because this process works at room temperature rather than high temperature, it uses less energy to create.

Read more here:

https://www.sciencedaily.com/releases/2021/02/210216100141.htm

But I think that mass production of graphene still hasn't quite begun, so I think the inventions above, the fastest-ever logic gates that use graphene and the nanomaterial one that uses graphene, will not be fully commercialized until perhaps around the year 2035 or 2040 or so; read the following so as to understand why:

"Because large-scale mass production of graphene still hasn't quite begun, the market is a bit limited. However, that leaves a lot of room open for investors to get in before it reaches commercialization. The market was worth $78.7 million in 2019 and, according to Grand View Research, is expected to rise drastically to $1.08 billion by 2027. North America currently has the bulk of market share, but the Asia-Pacific area is expected to have the quickest growth in adoption of graphene uses in coming years."