Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Dec 31 07:41AM -0800
On Thursday, December 22, 2022, I wrote:
> but I can't remember having seen something
> like this in the C++ standard library nor Boost
> nor wxWidgets.

I've written code that will work on C++11, and which is optimised for C++17, which has guaranteed return value optimisation:

#ifndef HEADER_INCLUSION_GUARD_RESERVER
#define HEADER_INCLUSION_GUARD_RESERVER

#include <mutex>    // recursive_mutex, lock_guard, unique_lock
#include <utility>  // move

template<typename T>
class Reserver final {
    Reserver(void) = delete;
    Reserver(Reserver const &) = delete;
    //Reserver(Reserver &&) = delete; - see below
    Reserver &operator=(Reserver const &) = delete;
    Reserver &operator=(Reserver &&) = delete;
    Reserver const volatile *operator&(void) const volatile = delete;

    T &obj;

#ifdef __cpp_guaranteed_copy_elision
    std::lock_guard<std::recursive_mutex> m_lock;
    Reserver(Reserver &&arg) = delete;
#else
    std::unique_lock<std::recursive_mutex> m_lock;
public:
    Reserver(Reserver &&arg)
      : m_lock( std::move(arg.m_lock) ), obj(arg.obj) {}
Amine Moulay Ramdane <aminer68@gmail.com>: Dec 28 02:40PM -0800
Hello,

More of my philosophy about the best solutions and the genetic algorithms, and more of my thoughts..

I am a white arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms. I think I am highly smart: I have passed two certified IQ tests and scored above 115 IQ, and I mean that it is "above" 115 IQ. I am discovering the below patterns in the genetic algorithm with my fluid intelligence, and I have just discovered another one: when you make the size of the population of the genetic algorithm bigger, the best solutions are improved by the mutations in this "bigger" population, which raises the "probability" of finding more good solutions, so I think that it makes the genetic algorithm better.
Amine Moulay Ramdane <aminer68@gmail.com>: Dec 28 09:54AM -0800
Hello,

More of my philosophy about the accuracy of the genetic algorithm and about the genetic algorithm, and more of my thoughts..

I am a white arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms. I think I am highly smart: I have passed two certified IQ tests and scored above 115 IQ, and I mean that it is "above" 115 IQ. I have just tested the genetic algorithm on some applications, and I think that it is by nature not suited to giving exact solutions, since it gives good approximations of the solution; but good approximations are still good enough for many applications.

Also, about the cost function in the genetic algorithm, and in artificial intelligence in general: I predict that artificial intelligence with deep learning and transformers and the like will not attain general artificial intelligence, since I say that the cost function that guides artificial intelligence cannot be implemented just with deep learning and transformers and the like. I predict that we first have to understand human consciousness, and it is human consciousness that will make the cost function in artificial intelligence really smart, so as to permit general artificial intelligence; so we have to understand the brain's consciousness for that. But I think that artificial intelligence with deep learning and transformers and the like is also powerful, since it will permit accelerating returns that are so appreciable. So I can logically infer that african countries have to be connected to the internet with a computer and a smartphone, since to adapt to the law of accelerating returns, people have to access the internet and learn efficiently from it. For example, read the following web page about the share of internet users in Africa as of January 2022, by country:
https://www.statista.com/statistics/1124283/internet-penetration-in-africa-by-country/

Notice in the above web page how Morocco, my arab country, is the best-connected country in Africa, with 84.1% of its people connected to the internet, and notice that the other arab country of Egypt is also good at that, with 79.1% of its people connected. I notice that the arabs of the north african countries are adapting much more efficiently, since the majority of them are well connected to the internet, and thus I predict that they will adapt much more efficiently to the law of accelerating returns; but notice in the above web page how many black african countries are really poorly connected to the internet. Other than that, I invite you to read my previous thoughts about the genetic algorithm so that you understand my views:

More precision of my philosophy about the essence of the genetic algorithm, and more of my thoughts..

As you notice in my new thoughts below, I say that the distribution of the population fights premature convergence by lack of diversity. But why am I not saying a "good" distribution? Because it is inherent that the population has to be well distributed so that the genetic algorithm explores correctly.
And as you have just noticed, these thoughts are my own, which I am discovering and sharing with you, so reread all my thoughts below:

I have just showed you how to avoid premature convergence by lack of diversity (read about it below), but I think I have to explain one more important thing about the genetic algorithm. When you start a genetic algorithm you are using a population, and the distribution of the population also fights against premature convergence by lack of diversity. So, to lower to a small value the probability of getting stuck in a local optimum by lack of diversity, you can rerun the genetic algorithm a number of times, using a new distribution of the population in every execution and a good population size; or you can use my methodology below, so as to avoid it efficiently in a single execution.

More of my philosophy about premature convergence of the genetic algorithm, and more of my thoughts..

I am again discovering patterns with my fluid intelligence, namely that the standard genetic algorithm has a problem: it can get stuck in a local optimum and converge prematurely. The premature convergence of a genetic algorithm arises when the genes of some highly rated individuals quickly come to dominate the population, constraining it to converge to a local optimum.
Premature convergence is generally due to the loss of diversity within the population, so I think that you have to solve this problem by using "probability": I mean that you have to divide the population of the genetic algorithm into many groups and do the crossover and mutations in each group. This lowers much further, to a small value, the probability of getting stuck in a local optimum and of having a premature convergence. So I invite you to look below at the just-published Visual Studio Magazine article on the Traveling Salesman Problem using an evolutionary algorithm with C#, and at how it does not talk about all the patterns that I am discovering with my fluid intelligence, and does not explain the genetic algorithm as I am explaining it.

More of my philosophy about the evolution of the genetics of humans and about the genetic algorithm, and more of my thoughts..

The cost function of a neural network is in general neither convex nor concave, so in deep learning you can use evolutionary algorithms such as the genetic algorithm or PSO and the like, and you have to know that in such situations you have to loop over a number of iterations so as to find better solutions. The genetics of humans has evolved in such a way: I think that the great number of iterations, with the crossover steps, the mutations, and the selection of the process of evolution of human genetics, which looks like a genetic algorithm, is what made humans so "optimized", for example by having a smart brain. And of course you have to read my following thoughts so as to understand the rest of the patterns that I have discovered with my fluid intelligence:

More precision of my philosophy about the Traveling Salesman Problem Using an Evolutionary Algorithm, and more of my thoughts..
I invite you to look at the following interesting, just-published Visual Studio Magazine article on the Traveling Salesman Problem Using an Evolutionary Algorithm with C#:

https://visualstudiomagazine.com/articles/2022/12/20/traveling-salesman-problem.aspx

I have just rapidly understood the above program of the Traveling Salesman Problem using an evolutionary algorithm (a genetic algorithm) with C#, and I think that I am discovering the most important patterns in it with my fluid intelligence: the "crossover" steps in the genetic algorithm exploit a better solution, meaning that they exploit locally around it, while with "mutation(s)" you explore far away from that locality. If the exploration finds a better solution, the exploitation will then try to find a still better solution near the one found by the exploration. This way of balancing exploration and exploitation is what makes the genetic algorithm interesting, so you have to understand it correctly in order to understand the genetic algorithm.

More of my philosophy about non-linear regression and about logic and about technology, and more of my thoughts..
I think I am highly smart, since I have passed two certified IQ tests and scored "above" 115 IQ, and I mean that it is "above". I think that R-squared is invalid for non-linear regression, but something that looks like R-squared for non-linear regression is the relative standard error: the standard deviation of the mean of the sample divided by the estimate, that is, the mean of the sample. If you calculate just the standard error of the estimate (mean square error), it is not sufficient, since you have to know the size of the standard error of the estimate relative to the curve and its axes. So read my following thoughts to understand more:

Since R-squared is invalid for non-linear regression, you have to use the standard error of the estimate (mean square error), and of course you also have to calculate the relative standard error, the standard deviation of the mean of the sample divided by the estimate, that is, the mean of the sample. I think that the relative standard error is an important thing that brings more quality to the statistical calculations. I will now tell you more about my interesting software project for mathematics: my new software project uses artificial intelligence to implement a generalized way of solving non-linear "multiple" regression, and it is much more powerful than the Levenberg–Marquardt algorithm, since I am implementing a smart algorithm using artificial intelligence that avoids premature convergence, which is also one of the most important things. It will also be much more scalable, using multicores so as to search much faster for the global optimum. I am doing it this way so as to be professional, and I will give you a tutorial that explains my algorithms so that you learn from them. Of course it will automatically calculate the above standard error of the estimate and the relative standard error.

More of my philosophy about non-linear regression and more..

I think I am really smart, and I have also just quickly finished the software implementation of the Levenberg–Marquardt algorithm and of the simplex algorithm for solving non-linear least squares problems, and I will soon implement a generalized way, with artificial intelligence, of solving non-linear "multiple" regression. I have also noticed that in mathematics you have to take care of the variability of the y values in non-linear least squares problems when approximating. The Levenberg–Marquardt algorithm (LMA or just LM) that I have just implemented, also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The Levenberg–Marquardt algorithm is used in many software applications for solving generic curve-fitting problems; it was found to be an efficient, fast and robust method which also has a good global convergence property, and for these reasons it has been incorporated into many good commercial packages performing non-linear regression. But my way of implementing non-linear "multiple" regression in the software will be much more powerful than the Levenberg–Marquardt algorithm, and of course I will share many parts of my software project with you, so stay tuned!

More of my philosophy about the truth table of the logical implication and about automation and about artificial intelligence, and more of my thoughts..

Now I will ask a philosophical question: what is a logical implication in mathematics?
I think I have to discover patterns with my fluid intelligence in the following truth table of the logical implication:

p q p -> q
0 0   1
0 1   1
1 0   0
1 1   1

Note that p and q are logical variables and the symbol -> is the logical implication.

And here are the patterns that I am discovering with my fluid intelligence that permit one to understand the logical implication in mathematics. Notice in the above truth table that p equal 0 can imply both q equal 0 and q equal 1, so it can model, for example, the following cases in reality: if it doesn't rain, you may take your umbrella or not; the pattern is that you can still take your umbrella, since another logical variable can say that it can rain in the future, so you have to take your umbrella. As you notice, this permits one to model cases of reality. It is the same for the case in the above truth table where p equal 1 and q equal 0 give false, since implication is not causation: p equal 1 means, for example, that it rains in the present, so even if another logical variable says that it will not rain in the future, you still have to take your umbrella, and that is why in the above truth table "p equal 1 implies q equal 0" is false. So then, of course, I say that the truth table of the implication also permits one to model the case of causation, and that is why it works.

More of my philosophy about objective truth and subjective truth, and more of my thoughts..

Today I will use my fluid intelligence so as to explain more of the way of logic, and I will discover patterns with my fluid intelligence so as to explain it, so I will start by asking the following philosophical question: what is objective truth and what is subjective truth?
So, for example, when we look at the following equality: a + a = 2*a, it is an objective truth, since it can be made an acceptable general truth. So I can say that an objective truth is a truth that can be made an acceptable general truth, and that a subjective truth is a truth that cannot be made an acceptable general truth; saying that Jeff Bezos is the best human among humans is a subjective truth.

I can say that in mathematics we are also using the rules of logic so as to logically prove whether a theorem or the like is true, so notice the following truth table of the logical implication:

p q p -> q
0 0   1
0 1   1
1 0   0
1 1   1

Note that p and q are logical variables and the symbol -> is the logical implication.

The above truth table of the logical implication permits us to logically infer a rule in mathematics that is so important in logic, and it is the following:

(p implies q) is equivalent to ((not p) or q)

And of course we are using this rule in logical proofs, since we are modeling with the whole truth table of the logical implication, and this includes the case of causation in it, so that is why it works. And I think that the above rule is the most important rule that permits one in mathematics to prove the following kinds of logical equivalences:

(p -> q) is equivalent to (not(q) -> not(p))

Note: the symbol -> means implies, and p and q are logical variables.

or

(not(p) -> 0) is equivalent to p

And for fuzzy logic, here is the generalized form (which includes fuzzy logic) of the three operators AND, OR, NOT:

x AND y is equivalent to min(x,y)
x OR y is equivalent to max(x,y)
NOT(x) is equivalent to (1 - x)

So now you understand that the media, like CNN, have to be objective, seeking to attain the objective truth, so that democracy works correctly.
David Brown <david.brown@hesbynett.no>: Dec 28 05:09PM +0100
On 27/12/2022 23:31, Alf P. Steinbach wrote: > Say no to the newfangled enum classes, they're Just Wrong. Define the > enum like this: > struct Month{ enum Enum{ january = 1, february, ... }; }; Why? I can see how it can work, but I have not heard of why it might be better than enum classes. Can you expand on your reasoning? | JiiPee <kerrttuPoistaTama11@gmail.com>: Dec 28 09:01PM +0200
On 28/12/2022 18:09, David Brown wrote: > Why? > I can see how it can work, but I have not heard of why it might be > better than enum classes. Can you expand on your reasoning? is it that enum Enum converts directly to an integer, as enum class does not? Easier to use? | "Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Dec 28 10:47PM +0100
On 28 Dec 2022 20:01, JiiPee wrote: > > better than enum classes. Can you expand on your reasoning? > is it that enum Enum converts directly to an integer, as enum class does > not? Easier to use? Mainly that. Choosing a class instead of a namespace as container of the enumerator names is mostly gut feeling. With a class the names can be inherited into another class, and they can be referred to in a succinct manner via a local short type alias. With a namespace the names can be made available for unqualified use in a local scope via `using namespace`, or referred to in a succinct manner via a namespace alias, but not in a class scope. Classes also become almost /necessary/ to express enum generalizations. For example, 0 is an Input_stream_id; 1 and 2 are Output_stream_id; and any Input_stream_id or Output_stream_id is a general Stream_id. Unfortunately class inheritance goes the wrong way for expressing enum relationships directly, so in the experimental code below (what a more concise syntax would have to be translated to) there are just implicit conversions from Input_stream_id and Output_stream_id to Stream_id:

// struct Input_stream_id{ enum Enum{ in = 0 }; };
// struct Output_stream_id{ enum Enum{ out = 1, err = 2 }; };

class Input_stream_id;
struct Input_stream_id_names
{
    static const Input_stream_id in;        // 0
};

class Output_stream_id;
struct Output_stream_id_names
{
    static const Output_stream_id out;      // 1
    static const Output_stream_id err;      // 2
};

class Input_stream_id: public Input_stream_id_names
{
    const int m_value;
public:
    explicit constexpr Input_stream_id( const int value ): m_value( value ) {}
    constexpr operator int() const { return m_value; }
};

class Output_stream_id: public Output_stream_id_names
{
    const int m_value;
public:
    explicit constexpr Output_stream_id( const int value ): m_value( value ) {}
    constexpr operator int() const { return m_value; }
};

class Stream_id: public Input_stream_id_names, public Output_stream_id_names
{
    const int m_value;
public:
    explicit constexpr Stream_id( const int value ): m_value( value ) {}
    constexpr Stream_id( const Input_stream_id value ): m_value( value ) {}
    constexpr Stream_id( const Output_stream_id value ): m_value( value ) {}
    constexpr operator int() const { return m_value; }
};

inline constexpr Input_stream_id Input_stream_id_names::in = Input_stream_id( 0 );
inline constexpr Output_stream_id Output_stream_id_names::out = Output_stream_id( 1 );
inline constexpr Output_stream_id Output_stream_id_names::err = Output_stream_id( 2 );

- Alf | Tim Rentsch <tr.17687@z991.linuxsc.com>: Dec 28 12:25PM -0800
>> There is a tacit assumption in that statement that longer is >> always easier to read or more readily comprehended. > No, there isn't. Of course there is. The phrase "brevity-over-clarity" suggests a false dichotomy between brief and clear. Anyone who pretends otherwise is just being obtuse. | Tim Rentsch <tr.17687@z991.linuxsc.com>: Dec 28 12:48PM -0800
> Juha Nieminen <nospam@thanks.invalid> writes: >> Maybe you also have a thin skin in addition to a thick skull... > Also an insult. Saying someone has thin skin can be an insult, but it doesn't have to be. Some people are more sensitive than others. I have heard someone remark on how thin or thick another person's skin was in a manner I would characterize as most benevolent. Please note that I am not making any claims as to what Juha intended in his statement. | Keith Thompson <Keith.S.Thompson+u@gmail.com>: Dec 28 01:38PM -0800
> benevolent manner. > Please note that I am not making any claims as to what > Juha intended in his statement. I was making a claim as to what Juha intended. By raising an irrelevant point about a figure of speech, you risk rekindling an argument that had died out more than a week ago. Please don't do that. -- Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com Working, but not speaking, for XCOM Labs void Void(void) { Void(); } /* The recursive call of the void */ |
"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Dec 27 11:31PM +0100
On 25 Dec 2022 15:40, Stefan Ram wrote: > with them. To learn how to use "Month" takes me at least one > additional lookup. It could be even more lookups if even > "Month" then redirects me to more opaque types. Hm, I don't buy that argument. But I would declare the function as auto month() const -> Month::Enum; Say no to the newfangled enum classes, they're Just Wrong. Define the enum like this: struct Month{ enum Enum{ january = 1, february, ... }; }; > Would there be a "var", it would tell me clearly that > - the function may change the object state, it is > not to be considered "const". I guess the keyword `mutable` is a little underused and could be set to work, like they did with `auto`. However it's not possible to change the defaults, so it would be a matter of just adding verbosity to express explicitly what would otherwise be expressed implicitly. - Alf | Bonita Montero <Bonita.Montero@gmail.com>: Dec 27 04:59AM +0100
Am 26.12.2022 um 22:22 schrieb Chris M. Thomasson: > B: MEMBAR #StoreLoad | #LoadLoad > C: MEMBAR #LoadStore | #StoreStore > D: MEMBAR #StoreLoad | #StoreStore 1. I use proper C++ barriers so I won't have to care for that. 2. SPARC is dead. | Michael S <already5chosen@yahoo.com>: Dec 27 03:42AM -0800
On Monday, December 26, 2022 at 11:23:06 PM UTC+2, Chris M. Thomasson wrote: > Imagine you are on a SPARC in RMO mode, you are programming in assembly > language. If I am not mistaken, SPARC RMO is a paper spec that was never implemented in hardware. Which does not mean that it is impossible to imagine that I am programming it in assembler, but it takes stronger imagination than I possess. | "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 27 01:29PM -0800
On 12/26/2022 7:59 PM, Bonita Montero wrote: >> D: MEMBAR #StoreLoad | #StoreStore > 1. I use proper C++ barriers so I won't have to care for that. > 2. SPARC is dead. Well, suppose you were tasked with creating the guts for C++ membars for the SPARC. Imvvho, the SPARC in RMO mode is a good place to learn. The MEMBAR instruction is pretty damn diverse! :^) | "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 27 01:31PM -0800
On 12/27/2022 3:42 AM, Michael S wrote: > If I am not mistaken, SPARC RMO is paper spec that was never implemented > in hardware. Which does not mean that it is impossible to imagine that I am > programming it in assembler, but it takes stronger imagination than I posses. SPARC RMO is a real thing. https://www.linuxjournal.com/article/8212 "Solaris on SPARC uses total-store order (TSO); however, Linux runs SPARC in relaxed-memory order (RMO) mode." | Michael S <already5chosen@yahoo.com>: Dec 27 02:09PM -0800
On Tuesday, December 27, 2022 at 11:31:33 PM UTC+2, Chris M. Thomasson wrote: > https://www.linuxjournal.com/article/8212 > "Solaris on SPARC uses total-store order (TSO); however, Linux runs > SPARC in relaxed-memory order (RMO) mode." I am pretty sure that the article got it wrong. OS can set control bits in register to any value it wishes, but the underlying hardware will still behave as TSO. At least, if the hardware is made by Sun/Oracle or Fujitsu, but all other SPARC CPU vendors became irrelevant since ~1996, anyway. | "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 27 02:17PM -0800
On 12/27/2022 2:09 PM, Michael S wrote: > hardware will still behave as TSO. > At least, if the hardware is made by Sun/Oracle or Fujitsu, but all other SPARC > CPU vendors became irrelevant since ~1996, anyway. [...] Are you telling me that SPARC RMO mode was run as if it was TSO in the hardware? I need to ask Paul. | "gdo...@gmail.com" <gdotone@gmail.com>: Dec 27 01:22AM -0800
compiling a simple program, including <iostream>, using:

g++ -c -Werror -Weverything exercise_2_17.cpp

-Weverything is producing this:

g++ -c -Werror -Weverything exercise_2_17.cpp
error: include location '/usr/local/include' is unsafe for cross-compilation [-Werror,-Wpoison-system-directories]
1 error generated.

if I don't use -Weverything the source compiles fine, no errors or warning messages. what does that mean? using an intel based Mac, clang++ gives the same message.

Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: x86_64-apple-darwin22.1.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

apparently g++ uses the clang++ compiler. g++ -v

Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: x86_64-apple-darwin22.1.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

adding -Wsystem-headers gives so many warnings that it stops at its default limit. |
> 1 error generated. > if I don't use -Weverything source compiles fine, no errors or warning messages. > what does that mean? The -Weverything of clang produces all kinds of silly warnings, some of which you might want to disable. If you want to disable only that warning then use -Wno-poison-system-directories > Thread model: posix > InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin > adding -Wsystem-headers gives so many warnings the it stops at its default limit. Yes, there is no gcc installed on Apple by default. You need to install it yourself if you want real gcc, and of course there are artificial difficulties made by Apple to that. But without challenges ... life is boring. ;-) |
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 26 01:22PM -0800
Imagine you are on a SPARC in RMO mode, you are programming in assembly language. What memory barrier instruction is the most efficient _and_ correct for use _after_ using an atomic RMW instruction to take exclusive access? Think about locking a mutex... A: MEMBAR #LoadLoad B: MEMBAR #LoadStore | #LoadLoad C: MEMBAR #StoreLoad | #LoadLoad D: MEMBAR #StoreLoad | #StoreStore What about the membar we have to use before atomically unlocking this mutex? A: MEMBAR #LoadLoad B: MEMBAR #StoreLoad | #LoadLoad C: MEMBAR #LoadStore | #StoreStore D: MEMBAR #StoreLoad | #StoreStore | "Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Dec 26 11:17PM +0100
On 26 Dec 2022 22:22, Chris M. Thomasson wrote: > B: MEMBAR #StoreLoad | #LoadLoad > C: MEMBAR #LoadStore | #StoreStore > D: MEMBAR #StoreLoad | #StoreStore I really don't have the foggiest idea, but I wish I had! :-o Google tells me that "RMO mode" is relaxed memory order, and I remember that's been mentioned in what I've read about C++ threading. I can understand that being relevant, but "a SPARC"? Anyway, best wishes for the coming new year. Hopefully at the end there will be less war, less crisis, everything better except the climate (which is FUBAR), and with everything better we can be happy no matter what the climate does. Except my old fav idea of saving the polar bears by transporting them to Antarctica, because the penguins -- possible Antarctica food source -- are now an endangered species. But, better. - Alf |
ram@zedat.fu-berlin.de (Stefan Ram): Dec 25 02:40PM
From the code examples for the C++ core guideline "P.1: Express ideas directly in code": |class Date { |public: | Month month() const; // do | int month(); // don't | // ... |}; . Ok, when I read "int month()", I am not sure whether January is 0 or 1, but otherwise I have the idea that possible values probably are 0..11 or 1..12 and how to deal with them. To learn how to use "Month" takes me at least one additional lookup. It could be even more lookups if even "Month" then redirects me to more opaque types. When "const" is missing, this tells me that - the function might be effectively const, but the programmer just forgot to mark it correspondingly, or - the function may change the object's state. Were there a "var", it would tell me clearly that - the function may change the object state; it is not to be considered "const". | ram@zedat.fu-berlin.de (Stefan Ram): Dec 25 08:22PM
>What if the return value is January? I'd say: If the language actually has a means of indicating a month number, then one should use this. I didn't know C++20 had this! Using names from the standard library is more clear and readable than using names from a custom library would be. I've started to read the Core Guidelines because it's one of the few sources that might already contain a bit of C++20 (besides the ISO standard itself). There are not many books for C++20 yet. Lippman, Meyers, Sutter, Stroustrup do not seem to have updated their works yet. Today's fast pace of standardization puts a strain on authors! | "Öö Tiib" <ootiib@hot.ee>: Dec 24 06:19PM -0800
On Saturday, 24 December 2022 at 21:40:36 UTC+2, Malcolm McLean wrote: > source to the program source, then you've got to specify how to invoke yacc to > generate the C. That adds considerable complication to the build system and > introduces several points at which it could break. That is, yes, a problem for a small project where the toolchain was, for example, just set up by Project -> New... of some IDE like Visual Studio, XCode or QtCreator with the sole purpose of trying something out. The ways to add a dependent library, to say nothing of adding more tools to the toolchain, can feel rather arcane and take some learning. In a bigger project that anyway has multiple dependencies, targets multiple toolchains, has several tools added to those, and ultimately produces and deploys multiple modules ... adding lex and yacc may be considered a relatively trivial task. | scott@slp53.sl.home (Scott Lurndal): Dec 25 05:07PM
>toolchains, has several tools added to those and ultimately produces and >deploys multiple modules ... adding lex and yacc may be considered >relatively trivial task. Adding lex and yacc _are_ trivial tasks:

acgram.c acgram.h: $A/acgram.y
	$(YACC_CMD) $A/acgram.y
	mv y.tab.c acgram.c
	mv y.tab.h acgram.h
...
acgram.$o: acgram.c $(P1_H)
	$(CC_CMD) $(YYDEBUG) acgram.c

aclex.c: $A/aclex.l
	$(LEX) $(LFLAGS) $A/aclex.l
	mv lex.yy.c aclex.c

aclex.$o: aclex.c acgram.h $(P1_H)
	$(CC_CMD) aclex.c

| "daniel...@gmail.com" <danielaparker@gmail.com>: Dec 25 10:55AM -0800
On Sunday, December 25, 2022 at 12:07:18 PM UTC-5, Scott Lurndal wrote: > mv lex.yy.c aclex.c > aclex.$o: aclex.c acgram.h $(P1_H) > $(CC_CMD) aclex.c Nonetheless, they don't appear to be widely used. Most significant programming languages - JavaScript, Java Open JDK, PHP, C#, clang, gcc, Golang, Lua, Swift, Julia, Rust - use a handwritten parser; Ruby uses the Bison parser generator; PHP and Python use parser generators; Kotlin appears to use FLEX to tokenize, and the rest is handwritten. Apart from programming language compilers, looking at implementations on github of XML, JSON, and YAML for C++, Python and Ruby, most appear to use a handwritten parser. Daniel | "gdo...@gmail.com" <gdotone@gmail.com>: Dec 24 06:27PM -0800
is it best practice to use new(nothrow) so as not to have to handle the exception? | "Öö Tiib" <ootiib@hot.ee>: Dec 24 07:54PM -0800
> is it best practice to use new(nothrow) as to not have to handle the exception? Best practice is not to use explicit new but standard library containers. The standard library does not use new(nothrow). Do you have some plan for what to do when an allocation fails somewhere? If not, then you can just catch std::bad_alloc in main and report that the program died because it ran out of memory. If yes, then you can catch at the points where it is easiest to switch to that plan. With new(nothrow) you will have to check in code after every new, a check that has no idea what to do if it failed or how far it is from a place where there is something to do. How is that better? Also, exceptions that are likely never thrown typically cost a lot less performance than those checks all over the code base. | Paavo Helde <eesnimi@osa.pri.ee>: Dec 25 10:30AM +0200
> is it best practice to use new(nothrow) as to not have to handle the exception? No, because then you would have to handle the nullptr return, which becomes more cumbersome as soon as you have to do it in more than one place. Best practice is to figure out if and why you want to do a dynamic allocation in the first place, then use std::make_unique or std::make_shared as appropriate. | Andrey Tarasevich <andreytarasevich@hotmail.com>: Dec 25 01:09AM -0800
> is it best practice to use new(nothrow) as to not have to handle the exception? Do you have a good strategy for handling a null return? If you do, then just go ahead and use 'new(nothrow)' if you like it better. If you don't, then it makes no difference. Better use plain 'new', since it looks cleaner. -- Best regards, Andrey |
|