Saturday, December 31, 2022

Digest for comp.lang.c++@googlegroups.com - 14 updates in 4 topics

Juha Nieminen <nospam@thanks.invalid>: Dec 31 03:00PM

> insults and then largely died out -- until you showed up a week
> later and rekindled it.
 
> Read the room.
 
And what exactly is wrong with continuing this discussion?
 
I presented the blasphemous proposition (which is quite commonly
held) that using full English words in variable names makes code
easier to read. Apparently this blasphemy was so utterly
egregious that it elicited a dozen or so people to barrage me
in a huge flamewar, trying to make me see the error of my ways
and recant. When I refused to do so, it only strengthened their
resolve.
 
Which makes absolutely no sense.
 
There was one person in particular in this thread who apparently
had taken it as his life mission to oppose me no matter what,
and to convert me to the right path (and it's the same person
who started whining "you insulted me!!!" out of the blue),
but clearly I am the bad guy here, apparently because I have
committed blasphemy and heresy.
David Brown <david.brown@hesbynett.no>: Dec 31 04:20PM +0100

On 31/12/2022 16:00, Juha Nieminen wrote:
>> later and rekindled it.
 
>> Read the room.
 
> And what exactly is wrong with continuing this discussion?
 
It's going nowhere, that's what's wrong with continuing it. Too many
people got worked up and misunderstood or misread what others were
saying, and the discussion was getting too ugly. It was spoiling what
is normally a friendly and respectful group. The only good way out of
such threads is for people to appreciate it is over, and leave it all
alone. And that means /really/ leaving it alone, not digging it up and
bringing it into other threads with sarcastic comments. Pretty much
everyone had managed that, until Tim made one of his rounds of fly-by
necroposting.
 
So let's please just leave it there.
"daniel...@gmail.com" <danielaparker@gmail.com>: Dec 31 07:58AM -0800


> largely died out -- until you showed up a week
> later and rekindled it.
 
On Saturday, December 31, 2022 at 10:20:45 AM UTC-5, David Brown wrote:
 
> It's going nowhere, that's what's wrong with continuing it.
 
For those who think there's too much posting: stop posting, don't just talk about it.
Jack Lemmon <invalid@invalid.net>: Dec 31 06:03PM

On 31/12/2022 15:00, Juha Nieminen wrote:
> And what exactly is wrong with continuing this discussion?
 
It's getting boring and we have to move forward. Rest for a while and
perhaps come back in 4 weeks' time when you might have new ideas.
Michael S <already5chosen@yahoo.com>: Dec 31 02:46PM -0800

On Saturday, December 31, 2022 at 5:00:52 PM UTC+2, Juha Nieminen wrote:
> but clearly I am the bad guy here, apparently because I have
> committed blasphemy and heresy.
 
As long as you realized it, the mission is accomplished.
Michael S <already5chosen@yahoo.com>: Dec 31 02:47PM -0800

On Saturday, December 31, 2022 at 5:20:45 PM UTC+2, David Brown wrote:
> people got worked up and misunderstood or misread what others were
> saying, and the discussion was getting too ugly. It was spoiling what
> is normally a friendly and respectful group.
 
Are we reading the same discussion in the same group?
Juha Nieminen <nospam@thanks.invalid>: Dec 31 03:05PM


> The first one is there probably because we have many classes whose
> members are often not initialized by the constructors, but later, by
> the framework.
 
I think that using [[maybe_unused]] is better than disabling the warning
because it indicates more explicitly that the parameter being unused is
not a mistake and that the programmer was aware of it.
 
One warning, or was it actually an error, that does annoy me quite a bit
is the "narrowing conversion" thingie. I think it was introduced in C++11
or later, and was never a problem before. To this day I'm not sure why
that warning/error is necessary.
 
(Maybe it has something to do with uniform initialization, which turned
out to be quite a mess?)
David Brown <david.brown@hesbynett.no>: Dec 31 04:40PM +0100

On 31/12/2022 16:05, Juha Nieminen wrote:
 
> I think that using [[maybe_unused]] is better than disabling the warning
> because it more explicitly indicates that it being unused is not a mistake
> and the programmer was aware of it.
 
It also has the advantage of allowing people to compile with other flags
(such as -Wextra without the -Wno- flag) without triggering the warning.
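
For illustration, a minimal sketch of the attribute in question (the
function and parameter names here are made up, not from the thread):

#include <cstdio>

// Hypothetical callback: 'context' is required by the interface
// but deliberately ignored in this implementation.
void on_event(int code, [[maybe_unused]] void *context)
{
    std::printf("event %d\n", code); // no -Wunused-parameter warning for 'context'
}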
 
> that warning/error is necessary.
 
> (Maybe it has something to do with uniform initialization, which turned
> out to be quite a mess?)
 
I think warnings (or errors) about narrowing conversions are a good
idea. If I accidentally try to use a double as an integer, I'd rather
be told.
 
I find the "uniform initialisation" syntax ugly for scaler variables,
but the fact that narrowing conversions are not allowed is an advantage
here IMHO.
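
For illustration, a minimal sketch of the difference being discussed
(my example, assuming a compiler in C++11 mode or later):

int main()
{
    double d = 3.7;

    int a = d;   // classic conversion: compiles, value silently truncated to 3
    // int b{d}; // brace ("uniform") initialization: a narrowing conversion,
    //           // ill-formed - the compiler is required to diagnose it
    int c{static_cast<int>(d)};  // the intent made explicit: compiles fine
    return a + c;
}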
Paavo Helde <eesnimi@osa.pri.ee>: Dec 31 06:50PM +0200

31.12.2022 17:05 Juha Nieminen kirjutas:
 
> I think that using [[maybe_unused]] is better than disabling the warning
> because it more explicitly indicates that it being unused is not a mistake
> and the programmer was aware of it.
 
You are right in that the unused parameter warnings should be dealt
with case-by-case instead of via a global compiler option. I guess there
was no time or willingness to do that when the compiler suddenly started
to spit out hundreds of such warnings, and after adding the global option
there has been no motivation to revisit it.
 
Still, I'm not convinced [[maybe_unused]] is always the best solution.
As far as I can see, this is meant more for conditional compilation,
where a thing might be indeed sometimes used and sometimes not,
depending on preprocessor macro definitions.
 
In my code, I get this warning mainly for virtual function overrides
where some parameter is e.g. only used by 1 override of 10. And in those
9 other overrides the parameter is not maybe unused, but definitely
unused. Alas, for some reason there is no [[unused]] attribute.
 
In this scenario, I believe it might be better (i.e. more readable) to
just delete or comment out the parameter name; it will have the same
effect.
David Brown <david.brown@hesbynett.no>: Dec 31 06:47PM +0100

On 31/12/2022 17:50, Paavo Helde wrote:
 
> In this scenario, I believe it might be better (i.e. more readable) to
> just delete or comment out the parameter name, it will have the same
> effect.
 
That can often be the best choice - though of course "best" will depend
on many things.
 
Another option is a cast to void - I believe most compilers treat that
as considering the parameter "used" without generating any code. You
may feel a comment is warranted to explain what you are doing.
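
For illustration, the two techniques side by side (hypothetical class
and parameter names, not from anyone's code):

#include <cstdio>

struct Handler {
    virtual ~Handler() = default;
    virtual void handle(int id, double weight) = 0;
};

struct A : Handler {
    void handle(int id, double /*weight*/) override // name removed: no warning
    { std::printf("A: %d\n", id); }
};

struct B : Handler {
    void handle(int id, double weight) override
    {
        (void)weight; // cast to void: the parameter counts as "used"
        std::printf("B: %d\n", id);
    }
};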
Frederick Virchanza Gotham <cauldwell.thomas@gmail.com>: Dec 31 07:41AM -0800

On Thursday, December 22, 2022, I wrote:
> but I can't remember having seen something
> like this in the C++ standard library nor Boost
> nor wxWidgets.
 
I've written code that will work on C++11, and which is optimised for C++17 which has guaranteed return value optimisation:
 
#ifndef HEADER_INCLUSION_GUARD_RESERVER
#define HEADER_INCLUSION_GUARD_RESERVER

#include <mutex>    // recursive_mutex, lock_guard, unique_lock
#include <utility>  // move

template<typename T>
class Reserver final {

    Reserver(void) = delete;
    Reserver(Reserver const &) = delete;
    //Reserver(Reserver &&) = delete; - see below
    Reserver &operator=(Reserver const &) = delete;
    Reserver &operator=(Reserver &&) = delete;
    Reserver const volatile *operator&(void) const volatile = delete;

    T &obj;

#ifdef __cpp_guaranteed_copy_elision
    std::lock_guard<std::recursive_mutex> m_lock;
    Reserver(Reserver &&arg) = delete;
#else
    std::unique_lock<std::recursive_mutex> m_lock;
public:
    Reserver(Reserver &&arg)
      : obj(arg.obj),                        // members listed in declaration order
        m_lock( std::move(arg.m_lock) ) {}

Thursday, December 29, 2022

Digest for comp.programming.threads@googlegroups.com - 2 updates in 2 topics

Amine Moulay Ramdane <aminer68@gmail.com>: Dec 28 02:40PM -0800

Hello,
 
 
 
 
More of my philosophy about the best solutions and the genetic algorithms and more of my thoughts..
 
I am a white arab from Morocco, and i think i am smart since i have also
invented many scalable algorithms and algorithms..
 
 
I think i am highly smart, and I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, so notice that i am discovering the below patterns in the genetic algorithm with my fluid intelligence, and i have just discovered another pattern with my fluid intelligence in the genetic algorithm, and it is that when you make the size of the population of the genetic algorithm bigger, the best solutions are improved by the mutations in this "bigger" population, so this raises the "probability" of finding more best solutions, so i think that it makes the genetic algorithm better.
 
Amine Moulay Ramdane <aminer68@gmail.com>: Dec 28 09:54AM -0800

Hello,
 
 
 
More of my philosophy about the accuracy of the genetic algorithm and about the genetic algorithm and more of my thoughts..
 
 
I am a white arab from Morocco, and i think i am smart since i have also
invented many scalable algorithms and algorithms..
 
 
I think i am highly smart, and I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, so i have just tested the genetic algorithm on some applications, and i think
that the genetic algorithm is by nature not suited for giving exact solutions, since it gives good approximations of the solution, but good approximations are still good for many applications, also about the cost function in the genetic algorithm or in artificial intelligence in general, so i predict that artificial intelligence like with deep learning and transformers and the like will not attain general artificial intelligence, since i say that the cost function that permits to guide artificial intelligence can not be implemented just with deep learning and transformers and the like so that to make general artificial intelligence, but i predict that we have to understand human consciousness, and it is human consciousness that will make the cost function in artificial intelligence really smart so that to permit general artificial intelligence, so then we have to understand the brain consciousness for that, but i think that artificial intelligence with deep learning and transformers and the like is also powerful since it will permit accelerating returns that are so appreciable, so then i can logically infer that african countries have to be connected to internet with a computer and a smartphone, since i think that so that to adapt to the law of accelerating returns, people have to access internet and learn efficiently from internet, so for example read the following web page about the share of internet users in Africa as of January 2022, by country:
 
https://www.statista.com/statistics/1124283/internet-penetration-in-africa-by-country/
 
 
So notice in the above web page how Morocco my arab country is the best country in Africa that has 84.1% of its people connected to internet and notice that the other arab country of Egypt is also good at that, since it has 79.1% of its people that are connected to internet, but i am noticing that arabs of north african countries are adapting much more efficiently since the majority of them are being well connected to internet, and thus i predict that they will adapt much more efficiently to law of accelerating returns, but notice in the above web page how many black african countries are really poorly connected to internet.
 
Other than that i invite you to read my previous thoughts about the genetic algorithm so that you understand my views:
 
 
More precision of my philosophy about the essence of the genetic algorithm and more of my thoughts..
 
 
So as you are noticing in my new thoughts below, i am saying that the distribution of the population fights the premature convergence by lack of diversity, but why am i not saying a "good" distribution? Since it is inherent that the population has to be well distributed so that the genetic algorithm explores correctly. And as you have just noticed, these thoughts are my own that i am discovering and sharing with you, so reread
all my thoughts below:
 
I think i am highly smart, and I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, so
as you have just noticed, i have just showed you how to avoid premature convergence by lack of diversity, read about it below, but i think i have to explain one more important thing about the genetic algorithm, and it is that when you start a genetic algorithm, you are using a population, so since the distribution of the population also fights against the premature convergence by lack of diversity, so then so that to lower the probability to a small probability of getting stuck in a local optimum by lack of diversity, you can rerun the genetic algorithm a number of times by using a new distribution of the population in every execution of the genetic algorithm and using a good size of the population, or you can use my below methodology so that to avoid it efficiently in a single execution.
 
 
More of my philosophy about premature convergence of the genetic algorithm and more of my thoughts..
 
 
I think i am highly smart, and I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, so i am again discovering patterns with my fluid intelligence, and it is that the standard genetic algorithm has a problem: it can get stuck in a local optimum and have a premature convergence, and the premature convergence of a genetic algorithm arises when the genes of some high-rated individuals quickly come to dominate the population, constraining it to converge to a local optimum. The premature convergence is generally due to the loss of diversity within the population, so i think that you have to solve this problem by using "probability", i mean that you have to divide the population of the genetic algorithm into many population groups and do the crossover and mutations in each group, so this will lower the probability of getting stuck in a local optimum and of having a premature convergence to a much smaller one, so then i will invite you to look below at the just-new article of Visual Studio Magazine on the Traveling Salesman Problem using an evolutionary algorithm with C#, and how it is not talking about all my patterns that i am discovering with my fluid intelligence, and it is not explaining the genetic algorithm as i am explaining it.
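
The "divide the population into groups" idea is essentially the classic
island model; a toy C++ sketch of it (my illustration, not the poster's
code; the one-max bit-count fitness and all parameters are made up):

#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

using Genome = std::vector<int>;          // toy encoding: a bit string

static int fitness(const Genome &g)       // toy objective: count the 1-bits
{ return (int)std::count(g.begin(), g.end(), 1); }

static const Genome &tournament(const std::vector<Genome> &pop, std::mt19937 &rng)
{
    std::uniform_int_distribution<std::size_t> pick(0, pop.size() - 1);
    const Genome &a = pop[pick(rng)];
    const Genome &b = pop[pick(rng)];
    return fitness(a) >= fitness(b) ? a : b;  // selection: exploit good genes
}

static void step(std::vector<Genome> &pop, std::mt19937 &rng)
{
    std::uniform_int_distribution<std::size_t> cut(1, pop[0].size() - 1);
    std::bernoulli_distribution flip(0.02);   // mutation rate keeps diversity
    std::vector<Genome> next;
    while (next.size() < pop.size()) {
        Genome child = tournament(pop, rng);  // parent 1 (copied)
        const Genome &p2 = tournament(pop, rng);
        const std::size_t c = cut(rng);       // one-point crossover
        std::copy(p2.begin() + c, p2.end(), child.begin() + c);
        for (int &bit : child)
            if (flip(rng)) bit ^= 1;          // mutation: exploration
        next.push_back(std::move(child));
    }
    pop.swap(next);
}

int main()
{
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);
    const int n_groups = 4, group_size = 30, genome_len = 64, generations = 200;

    std::vector<std::vector<Genome>> groups(
        n_groups, std::vector<Genome>(group_size, Genome(genome_len)));
    for (auto &grp : groups)                  // well-spread initial population
        for (auto &g : grp)
            for (int &bit : g) bit = coin(rng);

    for (int gen = 0; gen < generations; ++gen)
        for (auto &grp : groups)              // each group evolves independently,
            step(grp, rng);                   // so one stuck group can't trap all

    for (const auto &grp : groups)
        std::printf("best in group: %d / %d\n",
            fitness(*std::max_element(grp.begin(), grp.end(),
                [](const Genome &x, const Genome &y)
                { return fitness(x) < fitness(y); })),
            genome_len);
}

The usual refinement is to migrate a few individuals between groups
every so many generations, which restores diversity without merging
the groups.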
 
More of my philosophy about the evolution of genetics of humans and about the genetic algorithm and more of my thoughts..
 
The cost function of a neural network is in general neither convex nor concave, so in deep learning you can use evolutionary algorithms such as the genetic algorithm or PSO and such, so you have then to know that in such situations you have to loop in a number of iterations so that to find better solutions, so for example the genetics of humans has evolved in a such way , since i think that the great number of iterations with the crossover steps and the mutations and the selection of the process of evolution of genetics of humans that look like a genetic algorithm, is what made humans be so "optimized" by for example having a smart brain, and of course you have to read my following thoughts so that to understand the rest of the patterns that i have discovered with my fluid intelligence:
 
More precision of my philosophy about the Traveling Salesman Problem Using an Evolutionary Algorithm and more of my thoughts..
 
I invite you to look at the following interesting just new article
of Visual Studio Magazine of The Traveling Salesman Problem Using an Evolutionary Algorithm with C#:
 
https://visualstudiomagazine.com/articles/2022/12/20/traveling-salesman-problem.aspx
 
 
I think i am highly smart, and I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, and i have just understood rapidly the above program of the Traveling Salesman Problem using an evolutionary algorithm (a genetic algorithm) with C#, and i think that i am discovering the most important patterns with my fluid intelligence in the above program of the Traveling Salesman Problem using the genetic algorithm, and it is that the "crossover" steps in the genetic algorithm exploit the better solutions, meaning that they exploit locally around the better solution, and using "mutation(s)" in the genetic algorithm you explore far away from the local region, and if the exploration finds a better solution, the exploitation will try to find a better solution near the found solution of the exploration, so this way of the genetic algorithm to balance the explore and the exploit is what makes the genetic algorithm interesting, so you have to understand it correctly so that to understand the genetic algorithm.
 
More of my philosophy about non-linear regression and about logic and about technology and more of my thoughts..
 
 
I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, and i mean that it is "above", so i think that R-squared is invalid for non-linear regression, but i think that something that looks like R-squared for non-linear regression is to use the Relative standard error, that is, the standard deviation of the mean of the sample divided by the Estimate, that is, the mean of the sample, but if you calculate just the standard error of the estimate (Mean Square Error), it is not sufficient, since you have to know what the size of the standard error of the estimate is relative to the curve and its axes, so read my following thoughts so that to understand more:
 
So the R-squared is invalid for non-linear regression, so you have to use the standard error of the estimate (Mean Square Error), and of course you have to calculate the Relative standard error, that is, the standard deviation of the mean of the sample divided by the Estimate, that is, the mean of the sample, and i think that the Relative standard error is an important thing that brings more quality to the statistical calculations, and i will now talk to you more about my interesting software project for mathematics, so my new software project uses artificial intelligence to implement a generalized way with artificial intelligence using the software that permits to solve the non-linear "multiple" regression, and it is much more powerful than the Levenberg–Marquardt algorithm, since i am implementing a smart algorithm using artificial intelligence that permits to avoid premature
convergence, and it is also one of the most important things, and
it will also be much more scalable using multicores so that to search with artificial intelligence much faster for the global optimum, so i am
doing it this way so that to be professional, and i will give you a tutorial that explains my algorithms that use artificial intelligence so that you learn from them, and of course it will automatically calculate the above Standard error of the estimate and the Relative standard error.
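
In formula form, the quantity described above is (under the usual
definitions, with s the sample standard deviation, n the sample size
and \bar{x} the estimate, i.e. the sample mean):

\mathrm{RSE} \;=\; \frac{\operatorname{SE}(\bar{x})}{\bar{x}} \;=\; \frac{s/\sqrt{n}}{\bar{x}}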
 
More of my philosophy about non-linear regression and more..
 
I think i am really smart, and i have also just finished quickly the software implementation of Levenberg–Marquardt algorithm and of the Simplex algorithm to solve non-linear least squares problems, and i will soon implement a generalized way with artificial intelligence using the software that permit to solve the non-linear "multiple" regression, but i have also noticed that in mathematics you have to take care of the variability of the y in non-linear least squares problems so that to approximate, also the Levenberg–Marquardt algorithm (LMA or just LM) that i have just implemented , also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The Levenberg–Marquardt algorithm is used in many software applications for solving generic curve-fitting problems. The Levenberg–Marquardt algorithm was found to be an efficient, fast and robust method which also has a good global convergence property. For these reasons, It has been incorporated into many good commercial packages performing non-linear regression. But my way of implementing the non-linear "multiple" regression in the software will be much more powerful than Levenberg–Marquardt algorithm, and of course i will share with you many parts of my software project, so stay tuned !
 
 
More of my philosophy about the truth table of the logical implication and about automation and about artificial intelligence and more of my thoughts..
 
 
I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, and i mean that it is "above", and now
i will ask a philosophical question:
 
What is a logical implication in mathematics ?
 
So i think i have to discover patterns with my fluid intelligence
in the following truth table of the logical implication:
 
 p  q | p -> q
------+-------
 0  0 |   1
 0  1 |   1
 1  0 |   0
 1  1 |   1
 
Note that p and q are logical variables and the symbol -> is the logical implication.
 
And here are the patterns that i am discovering with my fluid intelligence that permit to understand the logical implication in mathematics:
 
So notice in the above truth table of the logical implication
that p equal 0 can imply both q equal 0 and q equal 1, so for
example it can model the following cases in reality:
 
If it doesn't rain, you may or may not take your umbrella; the pattern
is that you can still take your umbrella, since another logical variable
can say that it can rain in the future, so you have to take your
umbrella, so as you notice it permits to model cases of reality,
and it is the same for the case in the above truth table where p equal 1
and q equal 0 makes the implication false, since implication is not
causation, but p equal 1 means for example that it rains in the present,
so even if there is another logical variable that says that it will not
rain in the future, you still have to take your umbrella, and it is why
in the above truth table the row with p equal 1 and q equal 0 is false,
so then of course i say that the truth table of the implication permits
to model the case of causation, and it is why it is working.
 
More of my philosophy about objective truth and subjective truth and more of my thoughts..
 
Today i will use my fluid intelligence so that to explain more
the way of logic, and i will discover patterns with my fluid intelligence so that to explain the way of logic, so i will start by asking the following philosophical question:
 
What is objective truth and what is subjective truth ?
 
So for example when we look at the following equality: a + a = 2*a,
it is objective truth, since it can be made an acceptable general truth, so then i can say that objective truth is a truth that can be made an acceptable general truth, so then subjective truth is a truth that can not be made an acceptable general truth; for example, saying that Jeff Bezos is the best human among humans is a subjective truth. So i can say that in mathematics we are also using the rules of logic so that to logically prove whether a theorem or the like is true or not, so notice the following truth table of the logical implication:
 
 p  q | p -> q
------+-------
 0  0 |   1
 0  1 |   1
 1  0 |   0
 1  1 |   1
 
Note that p and q are logical variables and the symbol -> is the logical implication.
 
The above truth table of the logical implication permits us
to logically infer a rule in mathematics that is so important in logic and it is the following:
 
(p implies q) is equivalent to ((not p) or q)
 
 
And of course we are using this rule in logical proofs since
we are modeling with all the logical truth table of the
logical implication and this includes the case of the causation in it,
so it is why it is working.
 
And i think that the above rule is the most important rule that permits
in mathematics to prove the following kinds of logical proofs:
 
(p -> q) is equivalent to (not(q) -> not(p))
 
Note: the symbol -> means implies and p and q are logical
variables.
 
or
 
(not(p) -> 0) is equivalent to p
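
These equivalences can be checked exhaustively over the four rows of
the truth table; a small sketch of mine, not from the post:

#include <cassert>

int main()
{
    for (int p = 0; p <= 1; ++p)
        for (int q = 0; q <= 1; ++q) {
            const bool imp    = !p || q;  // (p -> q) as ((not p) or q)
            const bool contra = q || !p;  // (not q) -> (not p)
            assert(imp == contra);        // holds in all 4 rows
        }
    for (int p = 0; p <= 1; ++p)
        assert((!(!p) || false) == (p != 0)); // (not(p) -> 0) is p itself
}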
 
 
And for fuzzy logic, here is the generalized form (that includes fuzzy logic) for the three operators AND, OR, NOT:
 
x AND y is equivalent to min(x,y)
x OR y is equivalent to max(x,y)
NOT(x) is equivalent to (1 - x)
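
In code, those three operators are one-liners (a sketch; on the crisp
values 0.0 and 1.0 they reduce to the classical truth tables):

#include <algorithm>

double fuzzy_and(double x, double y) { return std::min(x, y); }
double fuzzy_or (double x, double y) { return std::max(x, y); }
double fuzzy_not(double x)           { return 1.0 - x; }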
 
So now you are understanding that the media like CNN have to be objective by seeking to attain the objective truth, so that democracy works correctly.
 

Wednesday, December 28, 2022

Digest for comp.lang.c++@googlegroups.com - 6 updates in 2 topics

David Brown <david.brown@hesbynett.no>: Dec 28 05:09PM +0100

On 27/12/2022 23:31, Alf P. Steinbach wrote:
 
> Say no to the newfangled enum classes, they're Just Wrong. Define the
> enum like this:
 
>     struct Month{ enum Enum{ january = 1, february, ... }; };
 
Why?
 
I can see how it can work, but I have not heard of why it might be
better than enum classes. Can you expand on your reasoning?
JiiPee <kerrttuPoistaTama11@gmail.com>: Dec 28 09:01PM +0200

On 28/12/2022 18:09, David Brown wrote:
 
> Why?
 
> I can see how it can work, but I have not heard of why it might be
> better than enum classes. Can you expand on your reasoning?
 
is it that enum Enum converts directly to an integer, as enum class does
not? Easier to use?
"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Dec 28 10:47PM +0100

On 28 Dec 2022 20:01, JiiPee wrote:
> > better than enum classes.  Can you expand on your reasoning?
 
> is it that enum Enum converts directly to an integer, as enum class does
> not? Easier to use?
 
Mainly that.
 
Choosing a class instead of a namespace as container of the enumerator
names is mostly gut-feeling. With class the names can be inherited into
another class, and they can be referred to in a succinct manner via a
local short type alias. With namespace the names can be made available
for unqualified use in a local scope via `using namespace`, or referred
to in a succinct manner via a namespace alias, but not in a class scope.
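
A compressed illustration of those two points, reusing the Month example
from upthread (the Schedule class is made up):

struct Month { enum Enum { january = 1, february, march /* ... */ }; };

struct Schedule : Month {      // enumerator names inherited into class scope
    Enum start = january;      // unqualified use inside the class
};

void f()
{
    using M = Month;           // succinct local type alias
    int n = M::february;       // a plain enum converts to int implicitly
    (void)n;
}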
 
Classes also become almost /necessary/ to express enum generalizations.
For example, 0 is an Input_stream_id; 1 and 2 are Output_stream_id; and
any Input_stream_id or Output_stream_id is a general Stream_id.
Unfortunately class inheritance goes the wrong way for expressing enum
relationships directly, so in the experimental code below (what a more
concise syntax would have to be translated to) there are just implicit
conversions from Input_stream_id and Output_stream_id to Stream_id:
 
 
// struct Input_stream_id{ enum Enum{ in = 0 }; };
// struct Output_stream_id{ enum Enum{ out = 1, err = 2 }; };

class Input_stream_id;
struct Input_stream_id_names
{
    static const Input_stream_id in;    // 0
};

class Output_stream_id;
struct Output_stream_id_names
{
    static const Output_stream_id out;  // 1
    static const Output_stream_id err;  // 2
};

class Input_stream_id:
    public Input_stream_id_names
{
    const int m_value;

public:
    explicit constexpr Input_stream_id( const int value ): m_value( value ) {}
    constexpr operator int() const { return m_value; }
};

class Output_stream_id:
    public Output_stream_id_names
{
    const int m_value;

public:
    explicit constexpr Output_stream_id( const int value ): m_value( value ) {}
    constexpr operator int() const { return m_value; }
};

class Stream_id:
    public Input_stream_id_names,
    public Output_stream_id_names
{
    const int m_value;

public:
    explicit constexpr Stream_id( const int value ): m_value( value ) {}

    constexpr Stream_id( const Input_stream_id value ): m_value( value ) {}
    constexpr Stream_id( const Output_stream_id value ): m_value( value ) {}
    constexpr operator int() const { return m_value; }
};

inline constexpr Input_stream_id Input_stream_id_names::in
    = Input_stream_id( 0 );
inline constexpr Output_stream_id Output_stream_id_names::out
    = Output_stream_id( 1 );
inline constexpr Output_stream_id Output_stream_id_names::err
    = Output_stream_id( 2 );
 
 
- Alf
Tim Rentsch <tr.17687@z991.linuxsc.com>: Dec 28 12:25PM -0800


>> There is a tacit assumption in that statement that longer is
>> always easier to read or more readily comprehended.
 
> No, there isn't.
 
Of course there is. The phrase "brevity-over-clarity" suggests
a false dichotomy between brief and clear. Anyone who pretends
otherwise is just being obtuse.
Tim Rentsch <tr.17687@z991.linuxsc.com>: Dec 28 12:48PM -0800


> Juha Nieminen <nospam@thanks.invalid> writes:
 
>> Maybe you also have a thin skin in addition to a thick skull...
 
> Also an insult.
 
Saying someone has thin skin can be an insult, but it
doesn't have to be. Some people are more sensitive
than others. I have heard someone say something about
how thin or thick someone's skin was, and the statement
was made in a way that I would characterize as a most
benevolent manner.
 
Please note that I am not making any claims as to what
Juha intended in his statement.
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Dec 28 01:38PM -0800

> benevolent manner.
 
> Please note that I am not making any claims as to what
> Juha intended in his statement.
 
I was making a claim as to what Juha intended. By raising an irrelevant
point about a figure of speech, you risk rekindling an argument that had
died out more than a week ago. Please don't do that.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for XCOM Labs
void Void(void) { Void(); } /* The recursive call of the void */

Tuesday, December 27, 2022

Digest for comp.lang.c++@googlegroups.com - 11 updates in 4 topics

"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Dec 27 11:31PM +0100

On 25 Dec 2022 15:40, Stefan Ram wrote:
> with them. To learn how to use "Month" takes me at least one
> additional lookup. It could be even more lookups if even
> "Month" then redirects me to more opaque types.
 
Hm, I don't buy that argument. But I would declare the function as
 
auto month() const -> Month::Enum;
 
Say no to the newfangled enum classes, they're Just Wrong. Define the
enum like this:
 
struct Month{ enum Enum{ january = 1, february, ... }; };
 
 
 
 
> Would there be a "var", it would tell me clearly that
> - the function may change the object state, it is
> not to be considered "const".
 
I guess the keyword `mutable` is a little underused and could be set to
work, like they did with `auto`.
 
However it's not possible to change the defaults, so it would be a
matter of just adding verbosity to express explicitly what would
otherwise be expressed implicitly.
 
 
- Alf
Bonita Montero <Bonita.Montero@gmail.com>: Dec 27 04:59AM +0100

Am 26.12.2022 um 22:22 schrieb Chris M. Thomasson:
 
> B: MEMBAR #StoreLoad | #LoadLoad
 
> C: MEMBAR #LoadStore | #StoreStore
 
> D: MEMBAR #StoreLoad | #StoreStore
 
1. I use proper C++ barriers so I won't have to care for that.
2. SPARC is dead.
Michael S <already5chosen@yahoo.com>: Dec 27 03:38AM -0800

On Monday, December 26, 2022 at 11:23:06 PM UTC+2, Chris M. Thomasson wrote:
Michael S <already5chosen@yahoo.com>: Dec 27 03:42AM -0800

On Monday, December 26, 2022 at 11:23:06 PM UTC+2, Chris M. Thomasson wrote:
> Imagine you are on a SPARC in RMO mode, you are programming in assembly
> language.
 
If I am not mistaken, SPARC RMO is a paper spec that was never implemented
in hardware. Which does not mean that it is impossible to imagine that I am
programming it in assembler, but it takes a stronger imagination than I possess.
 
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 27 01:29PM -0800

On 12/26/2022 7:59 PM, Bonita Montero wrote:
 
>> D: MEMBAR #StoreLoad | #StoreStore
 
> 1. I use proper C++ barriers so I won't have to care for that.
> 2. SPARC  is dead.
 
Well, suppose you were tasked with creating the guts for C++ membars for
the SPARC. Imvvho, the SPARC in RMO mode is a good place to learn. The
MEMBAR instruction is pretty damn diverse! :^)
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 27 01:31PM -0800

On 12/27/2022 3:42 AM, Michael S wrote:
 
> If I am not mistaken, SPARC RMO is paper spec that was never implemented
> in hardware. Which does not mean that it is impossible to imagine that I am
> programming it in assembler, but it takes stronger imagination than I posses.
 
SPARC RMO is a real thing.
 
https://www.linuxjournal.com/article/8212
 
"Solaris on SPARC uses total-store order (TSO); however, Linux runs
SPARC in relaxed-memory order (RMO) mode."
 
 
Michael S <already5chosen@yahoo.com>: Dec 27 02:09PM -0800

On Tuesday, December 27, 2022 at 11:31:33 PM UTC+2, Chris M. Thomasson wrote:
 
> https://www.linuxjournal.com/article/8212
 
> "Solaris on SPARC uses total-store order (TSO); however, Linux runs
> SPARC in relaxed-memory order (RMO) mode."
 
I am pretty sure that the article got it wrong.
The OS can set the control bits in a register to any value it wishes, but the underlying
hardware will still behave as TSO.
At least, if the hardware is made by Sun/Oracle or Fujitsu; all other SPARC
CPU vendors became irrelevant after ~1996, anyway.
 
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 27 02:17PM -0800

On 12/27/2022 2:09 PM, Michael S wrote:
> hardware will still behave as TSO.
> At least, if the hardware is made by Sun/Oracle or Fujitsu, but all other SPARC
> CPU vendors became irrelevant since ~1996, anyway.
[...]
 
Are you telling me that SPARC RMO mode was run as if it was TSO in the
hardware? I need to ask Paul.
Lynn McGuire <lynnmcguire5@gmail.com>: Dec 27 03:52PM -0600

"C++ 23 Standard Won't Have a Key Parallelism Feature"

https://thenewstack.io/c-23-standard-wont-have-a-key-parallelism-feature/
 
The C++ 2023 standard won't have an asynchronous algorithm feature
called senders and receivers.
 
Lynn
"gdo...@gmail.com" <gdotone@gmail.com>: Dec 27 01:22AM -0800

compiling a simple program, including <iostream>
 
using:
g++ -c -Werror -Weverything exercise_2_17.cpp
 
-Weverything is producing this:
 
g++ -c -Werror -Weverything exercise_2_17.cpp
error: include location '/usr/local/include' is unsafe for cross-compilation [-Werror,-Wpoison-system-directories]
1 error generated.
 
if I don't use -Weverything source compiles fine, no errors or warning messages.
 
what does that mean?
using an intel based Mac, clang++ gives the same message.
 
Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: x86_64-apple-darwin22.1.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
 
apparently g++ uses the clang++ compiler.
 
g++ -v
Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: x86_64-apple-darwin22.1.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
 
adding -Wsystem-headers gives so many warnings that it stops at its default limit.
"Öö Tiib" <ootiib@hot.ee>: Dec 27 02:50AM -0800

> 1 error generated.
 
> if I don't use -Weverything source compiles fine, no errors or warning messages.
 
> what does that mean?
 
The -Weverything of clang produces all kinds of silly warnings, some of which
you might want to disable. If you want to disable only that warning then
use -Wno-poison-system-directories
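
That is, something like:

g++ -c -Werror -Weverything -Wno-poison-system-directories exercise_2_17.cpp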
 
 
> Thread model: posix
> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
 
> adding -Wsystem-headers gives so many warnings the it stops at its default limit.
 
Yes, there is no gcc installed on Apple systems by default. You need to install
it yourself if you want real gcc, and of course Apple has put artificial
difficulties in the way of that. But without challenges ... life is boring. ;-)

Monday, December 26, 2022

Digest for comp.lang.c++@googlegroups.com - 2 updates in 1 topic

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 26 01:22PM -0800

Imagine you are on a SPARC in RMO mode, programming in assembly
language. What memory barrier instruction is the most efficient _and_
correct for use _after_ using an atomic RMW instruction to take
exclusive access? Think about locking a mutex...
 
A: MEMBAR #LoadLoad
 
B: MEMBAR #LoadStore | #LoadLoad
 
C: MEMBAR #StoreLoad | #LoadLoad
 
D: MEMBAR #StoreLoad | #StoreStore
 
 
What about the membar we have to use before atomically unlocking this mutex?
 
 
A: MEMBAR #LoadLoad
 
B: MEMBAR #StoreLoad | #LoadLoad
 
C: MEMBAR #LoadStore | #StoreStore
 
D: MEMBAR #StoreLoad | #StoreStore
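
For comparison, a sketch of the portable C++ spelling of such a
lock/unlock protocol; the fence-to-MEMBAR correspondences in the
comments are my assumption (the commonly published SPARC RMO mappings),
not something stated in the quiz:

#include <atomic>

std::atomic<int> lock_word{0};

void lock()
{
    while (lock_word.exchange(1, std::memory_order_relaxed) != 0)
        ; // spin: the RMW takes exclusive access
    // acquire fence keeps the critical section below the RMW;
    // commonly mapped on SPARC RMO to MEMBAR #LoadLoad | #LoadStore
    std::atomic_thread_fence(std::memory_order_acquire);
}

void unlock()
{
    // release fence keeps the critical section above the store;
    // commonly mapped on SPARC RMO to MEMBAR #LoadStore | #StoreStore
    std::atomic_thread_fence(std::memory_order_release);
    lock_word.store(0, std::memory_order_relaxed);
}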
"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Dec 26 11:17PM +0100

On 26 Dec 2022 22:22, Chris M. Thomasson wrote:
 
> B: MEMBAR #StoreLoad | #LoadLoad
 
> C: MEMBAR #LoadStore | #StoreStore
 
> D: MEMBAR #StoreLoad | #StoreStore
 
I really don't have the foggiest idea, but I wish I had! :-o
 
Google tells me that "RMO mode" is relaxed memory order, and I remember
that's been mentioned in what I've read about C++ threading.
 
I can understand that being relevant, but "a SPARC"?
 
Anyway, best wishes for the coming new year.
 
Hopefully at the end there will be less war, less crisis, everything
better except the climate (which is FUBAR), and with everything better
we can be happy no matter what the climate does. Except my old fav idea
of saving the polar bears by transporting them to Antarctica, because
the penguins -- possible Antarctica food source -- are now an endangered
species. But, better.
 
- Alf

Sunday, December 25, 2022

Digest for comp.lang.c++@googlegroups.com - 12 updates in 4 topics

Bo Persson <bo@bo-persson.se>: Dec 25 08:49PM +0100

On 2022-12-25 at 15:40, Stefan Ram wrote:
> with them. To learn how to use "Month" takes me at least one
> additional lookup. It could be even more lookups if even
> "Month" then redirects me to more opaque types.
 
What if the return value is January?
 
https://en.cppreference.com/w/cpp/chrono/month
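
For illustration (C++20):

#include <chrono>

int main()
{
    using namespace std::chrono;
    constexpr month m = January;                  // month is its own type...
    static_assert(static_cast<unsigned>(m) == 1); // ...and January is 1, not 0
    static_assert(m == January);
}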
red floyd <no.spam.here@its.invalid>: Dec 25 12:39PM -0800

On 12/25/2022 12:22 PM, Stefan Ram wrote:
> for C++20 yet. Lippman, Meyers, Sutter, Stroustrup do not
> seem to have updated their works yet. Today's fast pace of
> standardization puts a strain on authors!
 
Josuttis seems to have one out now.
 
https://www.amazon.com/20-Complete-Guide-Nicolai-Josuttis/dp/3967309207/
Bonita Montero <Bonita.Montero@gmail.com>: Dec 25 09:49PM +0100

Am 25.12.2022 um 21:39 schrieb red floyd:
 
> Josuttis seems to have one out now.
 
> https://www.amazon.com/20-Complete-Guide-Nicolai-Josuttis/dp/3967309207/
 
mega.nz has a version for free:
 
https://mega.nz/file/mpcQmSYC#AP0h6VvlX019meVCBQ3Va0m44ptL4L_YrE1V7Eh0Fs0
 
Password is "usenetrocks".
ram@zedat.fu-berlin.de (Stefan Ram): Dec 25 02:40PM

From the code examples for the C++ core guideline
"P.1: Express ideas directly in code":
 
|class Date {
|public:
| Month month() const; // do
| int month(); // don't
| // ...
|};
 
. Ok, when I read "int month()", I am not sure whether
January is 0 or 1, but otherwise I have the idea that
possible values probably are 0..11 or 1..12 and how to deal
with them. To learn how to use "Month" takes me at least one
additional lookup. It could be even more lookups if even
"Month" then redirects me to more opaque types.
 
When "const" is missing, this tells me that
- the function might be effectively const, but the
programmer just forgot to mark it correspondingly, or
- the function may change the object's state.
 
Would there be a "var", it would tell me clearly that
- the function may change the object state, it is
not to be considered "const".
ram@zedat.fu-berlin.de (Stefan Ram): Dec 25 08:22PM

>What if the return value is January?
 
I'd say: If the language actually has a means of indicating
a month number, then one should use this. I didn't know C++20
had this!
 
Using names from the standard library is more clear and
readable than using names from a custom library would be.
 
I've started to read the Core Guidelines because it's one
of the few sources that might already contain a bit of C++20
(except the ISO standard itself). There are not many books
for C++20 yet. Lippman, Meyers, Sutter, Stroustrup do not
seem to have updated their works yet. Today's fast pace of
standardization puts a strain on authors!
"Öö Tiib" <ootiib@hot.ee>: Dec 24 06:19PM -0800

On Saturday, 24 December 2022 at 21:40:36 UTC+2, Malcolm McLean wrote:
> source to the program source, then you've got to specify how to invoke yacc to
> generate the C. That adds considerable complication to the build system and
> introduces several points at which it could break.
 
That is, yes, a problem for a small project where the toolchain was, for
example, just set up by Project -> New... of some IDE like Visual Studio,
XCode or QtCreator, with the sole purpose of trying something out.
The ways to add a dependent library, not to mention more tools, to a
toolchain can feel rather arcane and take some learning.
In bigger project that anyway has multiple dependencies, targets multiple
toolchains, has several tools added to those and ultimately produces and
deploys multiple modules ... adding lex and yacc may be considered
relatively trivial task.
scott@slp53.sl.home (Scott Lurndal): Dec 25 05:07PM

>toolchains, has several tools added to those and ultimately produces and
>deploys multiple modules ... adding lex and yacc may be considered
>relatively trivial task.
 
Adding lex and yacc _are_ trivial tasks.
 
acgram.c acgram.h: $A/acgram.y
	$(YACC_CMD) $A/acgram.y
	mv y.tab.c acgram.c
	mv y.tab.h acgram.h
...

acgram.$o: acgram.c $(P1_H)
	$(CC_CMD) $(YYDEBUG) acgram.c

aclex.c: $A/aclex.l
	$(LEX) $(LFLAGS) $A/aclex.l
	mv lex.yy.c aclex.c

aclex.$o: aclex.c acgram.h $(P1_H)
	$(CC_CMD) aclex.c
"daniel...@gmail.com" <danielaparker@gmail.com>: Dec 25 10:55AM -0800

On Sunday, December 25, 2022 at 12:07:18 PM UTC-5, Scott Lurndal wrote:
 
> mv lex.yy.c aclex.c
 
> aclex.$o: aclex.c acgram.h $(P1_H)
> $(CC_CMD) aclex.c
 
Nonetheless, they don't appear to be widely used.
 
Most significant programming languages - JavaScript, Java (OpenJDK), C#, clang, gcc, Golang, Lua,
Swift, Julia, Rust - use a handwritten parser; Ruby and PHP use the Bison parser generator; Python
uses a parser generator; Kotlin appears to use FLEX to tokenize, and the rest is hand written.
 
Apart from programming language compilers, looking at implementations on github of XML, JSON,
and YAML for C++, Python and Ruby, most appear to use a handwritten parser.
 
Daniel
"gdo...@gmail.com" <gdotone@gmail.com>: Dec 24 06:27PM -0800

is it best practice to use new(nothrow) so as not to have to handle the exception?
"Öö Tiib" <ootiib@hot.ee>: Dec 24 07:54PM -0800

> is it best practice to use new(nothrow) as to not have to handle the exception?
 
Best practice is to not use explicit new at all, but standard library containers.
The standard library does not use new(nothrow).
 
Do you have some plan for what to do when an allocation fails somewhere?
If not, then you can just catch std::bad_alloc in main and report that the
program died because it ran out of memory. If yes, then you can catch at
the points where it is easiest to switch to that plan.
With new(nothrow) you will have to add a check after every new - a check
that has no idea what to do on failure, or how far it is from a place
where there is something to do. How is that better?
 
Also, exceptions that are likely never thrown typically cost a lot less
performance than those checks all over the code base.
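
For illustration, the catch-in-main approach (a sketch; the huge
allocation is only there to make a failure plausible):

#include <iostream>
#include <new>
#include <vector>

int main() try {
    std::vector<double> v(100'000'000'000ULL); // may throw std::bad_alloc
    std::cout << v.size() << '\n';
} catch (const std::bad_alloc &) {
    std::cerr << "out of memory\n";            // one reporting point for the
    return 1;                                  // whole program
}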
Paavo Helde <eesnimi@osa.pri.ee>: Dec 25 10:30AM +0200

> is it best practice to use new(nothrow) as to not have to handle the exception?
 
No, because then you would have to handle the nullptr return, which becomes
more cumbersome as soon as you have to do that in more than one place.
 
Best practice is to figure out if and why are you wanting to do a
dynamic allocation in the first place, then use std::make_unique or
std::make_shared as appropriate.
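
For illustration, the two styles side by side (my sketch):

#include <memory>
#include <new>

void nothrow_style()
{
    int *p = new (std::nothrow) int[1000];
    if (p == nullptr)
        return;     // every call site needs its own check and bail-out path
    // ...
    delete[] p;     // and its own cleanup
}

void raii_style()
{
    auto p = std::make_unique<int[]>(1000); // throws std::bad_alloc on failure;
    // ...                                  // ownership and cleanup are automatic
}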
Andrey Tarasevich <andreytarasevich@hotmail.com>: Dec 25 01:09AM -0800

> is it best practice to use new(nothrow) as to not have to handle the exception?
 
Do you have a good strategy for handling a null return?
 
If you do, then just go ahead and use 'new(nothrow)' if you like it better.
 
If you don't, then it makes no difference. Better use plain 'new', since
it looks cleaner.
 
--
Best regards,
Andrey