Sunday, January 26, 2020

Digest for comp.lang.c++@googlegroups.com - 22 updates in 6 topics

aminer68@gmail.com: Jan 26 02:18PM -0800

Hello,
 
 
More precision about computational complexity..
 
I previously classified (read below) time complexities such
as n*log(n) and n as average resistance (read below my analogy with material resistance), but why am I classifying them as average resistance? I think we can add a layer of abstraction that classifies
quadratic and exponential time complexities as "bad" and log(n) complexity as "good", and since n*log(n) and n time complexities are "passable", this layer of abstraction permits us to classify them as average. This is how I have classified them using this layer of abstraction.
 
Read the rest of my previous thoughts to understand more:
 
We have to be smarter about computational complexity..
 
I think we have to notice that composition in computational complexity,
like having f(n) and g(n) time complexities whose compositional complexity when added is max(f(n),g(n)), has been designed this way to allow more clarity, but this way of doing it is too fuzzy and lacks the precision needed to better classify algorithms. So I think we need
both methods: the one that calculates the computational complexity precisely, and the one that simplifies by using composition rules
such as the max(f(n),g(n)) above. When you calculate the computational complexity precisely, this allows us to classify algorithms better, for example by knowing more about the resistance of the algorithms (by analogy with material resistance, read below to understand), and this can be done by plotting the different computational complexities of the needed algorithm on a graph on a computer; this permits us to predict better and to know the algorithms better. I give you an example so that you understand that precise mathematical calculations can be really useful: look at my powerful tool that implements USL here:
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
It is less fuzzy than other tools; look for example at the option I provide in my powerful tool that implements USL. Here is what I say about it on my webpage above:
 
======================================
 
But this is not the end of the story..
 
You have to optimize the criterion of the cost for a better QoS, and for this I have supplied a second option called -d, so you have to type at the command prompt:
 
usl data.csv -d 0.3 0.1
 
The 0.3 is the slope of the secant and 0.1 is the step, so this will approximate the point where the derivative of the USL equation equals 0.3. Here is the output of my program when you run it with -d 0.3 0.1:
 
--
 
Peak number is: 449.188
Predicted scalability peak is: 18.434
Coefficient of determination R-squared is: 0.995
The derivative of the USL equation at delta(y)/delta(x)=0.300
with a step delta(x)=0.100, gives a number and derivative of
a secant or a derivative delta(y)/delta(x) of: 16.600 and 0.300
 
--
 
=======================================
 
 
Read the rest of my previous thoughts:
 
About the classification of complexities in computational complexity..
 
I think there is something happening in computational complexity.
Let's take for example time complexity: formally, when we have a composition of complexities, like f(n) and g(n) time complexities, computational complexity says that their compositional complexity when added is max(f(n),g(n)), but this is too "fuzzy" and it lacks precision for a better classification of time complexities. Since time complexities are like material resistance in physics, this is why I am for example saying that time complexities of n*log(n) and n are average resistance compared with the other complexities that exist (read my analogy with material resistance below); this is the way I am classifying. So
I think the right way, in compositional complexity, when adding two time complexities f(n) and g(n), is to take the average
(f(n)+g(n))/2, and we can formally generalize the calculation; this
way we are not going to lose the precision needed for a better classification
of complexities, like in the classification of material resistance in physics.
 
Yet more rigorous about computational complexity..
 
 
I said the following (read below)
 
"I said previously (read below) that for example time complexities such as n*log(n) and n are fuzzy, because we can say that n*log(n) is
an average resistance (read below to understand the analogy with material resistance), or we can say that n*log(n) is faster than a quadratic or exponential complexity, but given a time complexity of n or n*log(n) we cannot say how fast it is for a given input n. So since it is not an exact prediction, it is fuzzy, but this level of fuzziness, like in the example below of the obese person, permits us to predict important things in reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permit us to predict, since computational complexity can predict
whether the resistance of the algorithm is high or low or average (by analogy with material resistance, read below to understand)."
 
 
I hope that you are understanding my abstract reasoning: in,
for example, time complexity, the resistance of the algorithm is like material resistance in physics, and it is the resistance of the algorithm compared to other complexities, not the resistance of the algorithm in front of the input that is given. And this prediction
in computational complexity is very important, because it predicts
that the resistance of the algorithm is one of the resistances that exist in reality; it is like material resistance in physics (read below my analogy with material resistance).
 
 
More rigorous about precision of computational complexity..
 
When we say that 2+2=4, it is a system that inherently contains what we call enough precision; in fuzzy logic it is 100% precision, a truth value equal to 1.
 
But when for example we say:
 
"If I say that a person is obese, then he has a high risk of getting a disease because he is obese."
 
What is it equal in fuzzy logic ?
 
It is not an exact value in fuzzy logic; it is not exact precision but fuzzy, and it is equal to a high risk of getting a disease.
 
That's the same for time complexities of n*log(n) and n: they
are fuzzy, but their level of fuzziness can predict that the resistance of the algorithm is average (by analogy with material resistance, read below to understand), and I think that this is how computational complexity can be viewed.
 
Read the rest of my previous thoughts to understand better:
 
Yet about what can we take as enough precision and about computational complexity..
 
 
As you have just noticed, I said before that 2+2=4 is a system that inherently contains what we call enough precision, because we have to know that this is judged and dictated by our human minds. But when the mind sees a time complexity of n*log(n) or n, it will measure them by reference to the other time complexities that exist, and we can notice that they are average time complexities if we compare them with an exponential time complexity and with a log(n) time
complexity. So this measure dictates that the time complexities of n*log(n) and n are average resistance (by analogy with material resistance, read below to understand), but our human minds will also
notice that this average resistance is not an exact resistance, so
they are missing precision and exactitude. So, like the example I give below of the obese person, we can call time complexities such as n*log(n) and n fuzzy, and this looks like probability calculations.
 
Read the rest of all my previous thoughts to understand:
 
I will add again more logical rigor to my post about computational complexity:
 
As you have just noticed in my previous post (read below), i said the following:
 
That time complexities such as n*log(n) and n are fuzzy.
 
But we have to be more logically rigorous:
 
But what can we take as enough precision ?
 
I think that when we say 2+2=4, it has no missing part
of precision, so this fact is enough precision,
but if we say a time complexity of n*log(n), there is a missing part
of precision, because n*log(n) is dependent on a reality that needs,
in this case, more precision about the exact resistance of the algorithm (read below my analogy with material resistance). So this is why we can affirm that time complexities such as n*log(n) and n are fuzzy, because there is a missing part of precision; but even though there is a missing part of precision, there is enough precision to predict that their resistance in reality is average resistance.
 
Read the rest of my previous thoughts to understand:
 
I correct again one last typo, here is my final post about computational complexity:
 
I continue about computational complexity, being more and more rigorous; read again:
 
I said previously (read below) that for example time complexities such as n*log(n) and n are fuzzy, because we can say that n*log(n) is
an average resistance (read below to understand the analogy with material resistance), or we can say that n*log(n) is faster than a quadratic or exponential complexity, but given a time complexity of n or n*log(n) we cannot say how fast it is for a given input n. So since it is not an exact prediction, it is fuzzy, but this level of fuzziness, like in the example below of the obese person, permits us to predict important things in reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permit us to predict, since computational complexity can predict
whether the resistance of the algorithm is high or low or average (by analogy with material resistance, read below to understand).
 
Read my previous thoughts to understand:
 
What is science? and is computational complexity science ?
 
You have just seen me talking about computational complexity,
but we need to answer the questions: What is science?
And is computational complexity science?
 
I think that we have to be smarter, because there are
higher-level abstractions in science, and in those abstractions
we can have the exact precision of science, but we can also have fuzzier
precision that is useful and that is also science. To help you understand me more, let me give you an example:
 
If I say that a person is obese, then he has a high risk of getting a disease because he is obese.
 
Now you understand more that with this abstraction we do not have
exact precision, we are fuzzier, but this fuzziness
is useful and its level of precision is also useful. But is it
science? I think that these probabilistic calculations are
also science, permitting us to predict that the obese person
has a high risk of getting a disease. And these probabilistic calculations
are like higher-level abstractions that lack exact precision but
are still useful precision. This is what computational
complexity and its higher-level abstractions look like, so you immediately
understand that a time complexity of O(n*log(n)) or O(n)
is like an average level of resistance (read below to know why I call it resistance, by analogy with material resistance) when n grows large; we can immediately notice that an exponential time complexity
is a low level of resistance when n grows large, and that
a log(n) time complexity is a high level of resistance
when n grows large. So those time complexities are like higher-level abstractions that are fuzzy, but their fuzziness, like in the example
above of the obese person, permits us to predict important things in reality, and this level of fuzziness of computational complexity is also
science, because it is like probability calculations that permit us
to predict.
 
Read the rest of my previous thoughts to understand better:
 
The why of computational complexity..
 
 
Here is my previous answer about computational complexity and the rest
of my current answer is below:
 
 
=====================================================================
Horand gassmann wrote:
 
"Where your argument becomes impractical is in the statement "n becomes large". This is simply not precise enough for practical use. There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex? That would be interesting theoretically, but practically --- not so much."
 
 
I don't agree, because take a look below at how I computed the binary search time complexity; it is a divide-and-conquer algorithm, and it is log(n). We can notice that log(n) is good when n becomes large, so this information is practical, because a log(n) time complexity is excellent in practice when n becomes large. And when you look at an insertion sort you will notice that it has a quadratic time complexity of n^2; here again, this is practical information, because a quadratic time complexity is not so good when n becomes large, so you can say that n^2 is not so good in practice when n becomes large. So,
as you are noticing, time complexities of log(n) and n^2
are useful in practice, and for the rest of the time complexities you can also benchmark the algorithm in the real world to get an idea of how it performs.
=================================================================
 
 
 
I think I am understanding Lemire and Horand Gassmann better;
they say that if it is not the exact needed practical precision, then it is
not science or engineering. But I don't agree with this, because
science and engineering can work with higher-level abstractions that are not exact needed practical-precision calculations,
yet can still be useful precision in practice; it is like a fuzzy precision that is useful. This is why I think probabilistic calculations are also scientific: they are useful in practice because they give us important information about reality that can also be practical. This is why computational complexity is also useful in practice: it is like a higher-level abstraction that does not have all the needed practical precision, but its precision is still useful in practice, and this is why, like probabilistic calculations, I think computational complexity is also science.
 
 
Read the rest of my previous thoughts to understand better:
 
 
More on computational complexity..
 
Notice how Horand gassmann has answered in sci.math newsgroup:
 
Horand gassmann wrote the following:
 
"You are right, of course, on one level. An O(log n)
algorithm is better than an O(n) algorithm *for
large enough inputs*. Lemire understands that, and he
addresses it in his blog. The important consideration
is that _theoretical_ performance is a long way from
_practical_ performance."
 
 
And notice what Lemire wrote about computational complexity:
 
"But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing
aminer68@gmail.com: Jan 26 02:04PM -0800

Hello,
 
 
What is software engineering and what is computer science ?
 
And is software engineering a science ?
 
I will start by answering the question like this:
 
 
----
More rigorous about precision of computational complexity..
 
When we say that 2+2=4, it is a system that inherently contains what we call enough precision; in fuzzy logic it is 100% precision, a truth value equal to 1.
 
But when for example we say:
 
"If I say that a person is obese, then he has a high risk of getting a disease because he is obese."
 
What is it equal in fuzzy logic ?
 
It is not an exact value in fuzzy logic; it is not exact precision but fuzzy, and it is equal to a high risk of getting a disease.
 
That's the same for time complexities of n*log(n) and n: they
are fuzzy, but their level of fuzziness can predict that the resistance of the algorithm is average (by analogy with material resistance, read my previous post about it), and I think that this is how computational complexity can be viewed.
----------
 
 
So in science you have to understand the above nuances; as
you are noticing, it is also about fuzziness of "precision", and this is how
you have to be smart, because software engineering involves more fuzziness
of precision. I mean, in software engineering you can do model checking
of Petri nets, but this model checking does not require the exact precise knowledge necessary to understand how to implement a model checker of Petri nets; still, I think that model checking of Petri nets is also a level of science that can predict. Software engineering
is like that: it can work with models without knowing the inside of the models. For example, in reliability growth modeling, software engineering works with the Jelinski-Moranda model but does not know where the Jelinski-Moranda model comes from. I give you another example: in parallel programming, software engineering works
with model checking of Petri nets that permits us to know
if the parallel program is deadlock-free, but software engineering
does not know the inside of how a model-checking tool
for Petri nets is made, the one that permits us to know if a Petri net is bounded
or live. Computer science teaches us how to do a structural analysis of Petri nets with mathematics, as I know how to do it. So finally
we can say that software engineering works at a higher level
than computer science, because I think computer science is concerned
with the inside of how things are made.
 
 
Thank you,
Amine Moulay Ramdane.
Paavo Helde <myfirstname@osa.pri.ee>: Jan 26 10:55AM +0200

On 26.01.2020 0:23, Bart wrote:
> (based on some identifiers going out of scope?) would be suitable here.
 
> Does 'vga_lock' even have local scope here? If global, how does C++ know
> when to do the unlock?
 
'vga_lock' looks like a misnamed mutex here. In C++ one would have a
mutex variable with a larger scope and/or duration, and a local RAII
lock variable in the exact to-be-locked scope. In addition to automatic
cleanup, the C++ approach would also help in getting the terminology and
variable naming style more consistent.
 
Whenever there is a resource to be released, the C++ style RAII is
appropriate, one just has to ensure the RAII class object has the needed
lifetime.
 
This has not much to do with the existing interface or usage. When
wrapping existing C interfaces in C++, one often defines new small
auxiliary RAII classes for resource releasing (thereby adding new names
to the interface), this is pretty normal.
"Öö Tiib" <ootiib@hot.ee>: Jan 26 03:47AM -0800

On Sunday, 26 January 2020 10:55:27 UTC+2, Paavo Helde wrote:
> lock variable in the exact to-be-locked scope. In addition to automatic
> cleanup, the C++ approach would also help in getting the terminology and
> variable naming style more consistent.
 
Additionally the "spin_lock_irqsave" and "spin_unlock_irqrestore"
have to be macros. These seemingly take "flags" by value while clearly
use it by reference. That is easy in C++ but can't be done in C
without macro (and risk of confusing someone who thinks it is
function call).
 
> wrapping existing C interfaces in C++, one often defines new small
> auxiliary RAII classes for resource releasing (thereby adding new names
> to the interface), this is pretty normal.
 
Also std::unique_ptr can be rather easily used as generic RAII class
for binding whatever into destructor. It has already all boiler-plate
in place:
 
#include <iostream>   // for cout
#include <memory>     // for unique_ptr
#include <functional> // for function
 
using D = int;
using PostPrcD = std::unique_ptr<D, std::function<void(D*)>>;
 
int main()
{
    D d; // thing that needs post-processing
 
    PostPrcD p{&d, [](D* ptr){ std::cout << "processing " << *ptr << "\n"; }};
 
    // complex logic, throws, early returns, whatever
    // ...
    d = 42;
    // ...
 
} // p firmly post-processes d, currently outputs "processing 42"
David Brown <david.brown@hesbynett.no>: Jan 26 12:54PM +0100

On 25/01/2020 20:56, Bart wrote:
 
>> What do you have to say about my answer?
 
> If DB can avoid using 'goto' in C for (I understand) several decades,
> then he can do it for C++ too.
 
Let's be clear here. I "avoid using goto in C" in the same sense as I
avoid writing 2000 line functions, avoid using variable names in
Swahili, and avoid 8 layers of nested indentation.
 
You make it sound like my code - being C - should naturally contain
gotos, and I then go to extraordinary efforts to remove the gotos. I
don't. They were never there. I structure my code - gotos do not occur
in structured code. They occur only when you try to use a code
structure that does not match the structure of the task.
 
> gotos, DB is at the other end of the scale, and I would say extreme
> (actually you can't be more extreme without using a negative number of
> gotos!).
 
I am "extreme" in the same way that people who don't eat Brussels
sprouts are extreme. I don't find "goto" useful in the coding I do, and
I think any attempt to use it would make my code worse. So I don't use
it. If I thought it would make my code better, I /would/ use it.
 
 
> I'm not convinced that C++ features such RAII, exceptions, destructors
> and so on are enough to deal with all the use-cases of goto I listed
> earlier in the sub-thread, or the ones I didn't think of.
 
RAII certainly deals with many possible cases where you think goto is a
good idea. Your other cases can usually be handled by other methods,
whether in C or C++. Often I think the answer is to view the first code
as a prototype or proof of concept that was gradually built while
understanding the task at hand. Now that you understand it, scrap the
code and write new code to handle the task. Trying to "remove gotos"
while leaving everything else untouched is rarely the best solution.
 
I am also willing to accept that for some of the code you write, "goto"
might be the best choice. Certainly if you take as written all your
other coding preferences (huge files, minimal locality, old C, no
pre-processor, minimal types, etc.), then it would not surprise me to
hear that "goto" fits nicely. I can't comprehend why you have picked
these rules, but they are your choices - and if goto is a good fit with
them, then it makes perfect sense to use it. But accept that other
people have different sets of rules, and "goto" never (or almost never)
enters the picture.
 
One type of C coding where "goto" is common - and regularly the right
solution - is code generated automatically, such as from language
translators. I doubt if anyone will want to take that use of goto away
from you, whether the code is in C or C++.
 
 
> So they cannot by themselves account for being able to use 0
> gotos. It seems some people just don't like them, and go to extra
> measures to avoid them. Note the word 'extra'.
 
No, you fail to understand. We are not suggesting going to extra effort
to remove "goto". I am suggesting not wasting effort putting in "goto"
in the first place - it was never there in the first place. Ian is
suggesting less effort in later maintenance and adaption of the code by
removing "goto" (presumably as part of other refactoring work, rather
than as a goal in itself). Writing clearer and higher quality code
saves effort overall, even if it appears to take more short-term effort.
 
 
> BTW C, and by extension C++, contains flow control that IMO can be even
> worse that goto when abused.
 
Yes - and no one is recommending them either. (C++ restricts some kinds
of flow that are allowed in C.)
 
>         puts("B");
>     }
 
> Should unstructured switch statements like this be banned?
 
Yes, obviously.
 
> Well, that's
> not possible; people still need to use switch, and they will also come
> up with use-cases for weird applications of it.
 
Most C and C++ programmers use switch statements as though it were
strongly structured - they don't mix it with gotos, loops, conditionals,
etc., in a non-structured way. (There is no need to show examples of
exceptions.)
 
 
> With goto, at least, you know that some underhand flow control is going on!
 
Use brackets and indentation well, and it is entirely clear when
underhand flow control is going on with switch too.
 
Divide up your code into smaller sections and functions, and it is
entirely clear that underhand flow control /can't/ be going on.
 
(The exception, perhaps, is C++ exceptions - these have been called
"hidden goto's" by some. Done well, C++ exceptions can be a very good
thing for program structure - done badly, they can be a very bad thing.)
Bart <bc@freeuk.com>: Jan 26 12:44PM

On 26/01/2020 11:54, David Brown wrote:
> don't.  They were never there.  I structure my code - gotos do not occur
> in structured code.  They occur only when you try to use a code
> structure that does not match the structure of the task.
 
No, the use-cases for goto are always there: common code in a function,
nested loop control, usual code flow (plus the temporary uses during
development which ought to be eventually removed but which doesn't
always happen).
 
I don't know how /you/ get around those, except by preempting those
situations, which to me sounds like premature refactorisation.
 
(It's not as though my own code is bristling with gotos. Average
incidence is 1 per 4-500 lines. Here, in this rare example of manually
written C, there is one per 800 lines:
 
https://github.com/sal55/langs/blob/master/bignum.c
https://github.com/sal55/langs/blob/master/bignum.h
 
(No macros either in that project.)
 
The first one ought to be an extra loop (the sort that executes either
once, or twice), but I left it like that as I thought it was clearer.
 
The second is for a nested loop exit, since the program was hand-ported
from my own language, where those 1600 lines of code have only the one goto.
 
 
 
> I am "extreme" in the same way that people who don't eat Brussels
> sprouts are extreme.  I don't find "goto" useful in the coding I do, and
> I think any attempt to use it would make my code worse.
 
So do I; when I use 'goto', I feel bad about it. But not bad enough to
write more elaborate code to eliminate it, because /that/ would make it
worse. I might, however, think of a language feature to eliminate that
use-case.
 
Yesterday I looked at one 5000-line module (not C), which had 4 gotos. I
identified one that ought to have been replaced by a loop control
statement (to restart a loop; you don't have that in C). The other 3
could be replaced by the 'recase' feature I mentioned yesterday, but
which hasn't yet been rolled out to this language.
 
So I'm doing something about it, but in ways that make the code simpler
and clearer, not longer and with extra bits.
 
> might be the best choice.  Certainly if you take as written all your
> other coding preferences (huge files, minimal locality, old C, no
> pre-processor, minimal types, etc.),
 
(I don't write huge files. What you have in mind are the monolithic
generated-C files that I've used for distribution, or possibly the
amalgamated original-source files, also generated, used to simplify
transmission (eg. upload to github or copy to an RPi via a memory-stick).
 
The average module size of my original source is around 1000 lines.)
David Brown <david.brown@hesbynett.no>: Jan 26 02:58PM +0100

On 26/01/2020 13:44, Bart wrote:
>> occur in structured code.  They occur only when you try to use a code
>> structure that does not match the structure of the task.
 
> No, the use-cases for goto are always there:
 
No, they are not.
 
> common code in a function,
 
There are lots of ways to handle common code. I've never felt "goto" to
be a good way. (That doesn't mean I don't think it can't occur - merely
that I don't think it is often a good way, and I've never felt its need
in my own code.)
 
 
> nested loop control,
 
Gotos are most certainly not needed for that.
 
Again, I'd suggest that if you think "goto" is necessary (in manually
written code) to handle the flow you need, then your code structure is
probably not ideal.
 
And if you think it is the only efficient solution because extra tests,
duplicated calculations, local flag variables, etc., would have a
run-time cost, then you are not using the right tools or not using them
in the right way. Either efficiency doesn't matter, so the cost is
irrelevant, or you use good tools and let the compiler handle the
optimisation from flags to gotos. It is very rare that you need maximal
code efficiency and have no choice but to use poor tools.
 
> usual code flow
 
"goto" is not part of the usual code flow in any structured programming.
It is, at most, for very /unusual/ code flow requirements.
 
> (plus the temporary uses during
> development which ought to be eventually removed but which doesn't
> always happen).
 
If it ought to be removed, it ought to be removed. "I should have done
this but I didn't" is hardly an excuse!
 
And again, this shows your misunderstanding. Avoiding goto is not about
/removing/ gotos from code you write - it is about not writing "goto" in
the first place. And avoiding goto is not the aim in itself - avoiding
unstructured coding is the aim (which itself is a consequence of the
higher aim of writing clear, legible and maintainable code).
 
 
> I don't know how /you/ get around those, except by preempting those
> situations, which to me sounds like premature refactorisation.
 
I don't get myself into those situations.
 
My first programming language was BASIC, but I moved on to structured
coding. (And even in my BASIC programming, I preferred structured BASICs.)
 
> written C, there is one per 800 lines:
 
>   https://github.com/sal55/langs/blob/master/bignum.c
>   https://github.com/sal55/langs/blob/master/bignum.h
 
You don't use goto much, yet you think it is inconceivable that others
don't use it at all?
 
> (No macros either in that project.)
 
Only you consider that a good thing.
 
 
> The second is for a nested loop exit, since the program was hand-ported
> from my own language, where those 1600 lines of code have only the one
> goto.
 
I haven't looked at your code. But I have already mentioned that
automated code generation and translation into C may be /good/ reasons
for using goto. The same would apply (but to a lesser extent) when you
are trying to do manual literal translations from another language to C.
Literal translations rarely end up with good quality code as the result.
 
> write more elaborate code to eliminate it, because /that/ would make it
> worse. I might, however, think of a language features to eliminate that
> use-case.
 
I don't avoid goto because it makes me feel bad - I avoid bad coding
because it is bad coding. That naturally results in negligible use of
goto, because goto has negligible use in good coding.
 
And C has plenty of ways to write code well (leading, of course, to
plenty of ways to write code badly) - it doesn't need more. If I want
more ways to write code, I use a different language (like C++). And
while C++ certainly has improved by making it easier to structure code
in different ways, none of these has been with the aim of "avoiding goto".
 
 
Paavo Helde <myfirstname@osa.pri.ee>: Jan 26 04:08PM +0200

On 25.01.2020 16:58, Bart wrote:
> D;
> goto L;
> }
 
This is an example of some pretty convoluted logic, not even because of
'goto', but rather because of multiple code execution paths (two
non-default exit paths from inner loop body and no normal exit path). I
would rewrite this along the lines:
 
bool foobar(...) {
    for (...) {
        B;
        if (!C) {
            D;
            return false;
        }
    }
    return true;
}
 
while (true) {
    A;
    if (foobar(...)) {
        break;
    }
}
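
As a concrete (entirely hypothetical) instantiation of that refactoring - with A producing candidate batches, B/C as the per-item work and check, and D as the error handling - it might look like:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-ins: C is "the item is non-negative", D is the
// error-handling step. The normal exit path is now explicit.
bool all_valid(const std::vector<int>& batch) {
    for (int x : batch) {            // the inner loop (B is the per-item work)
        if (x < 0) {                 // !C
            // D: record or clean up the failure here
            return false;
        }
    }
    return true;                     // normal exit path
}

// The outer "while (true) { A; ... }" becomes a loop over batches that
// stops at the first fully valid one.
int first_valid_batch(const std::vector<std::vector<int>>& batches) {
    for (std::size_t i = 0; i < batches.size(); ++i) {
        if (all_valid(batches[i]))
            return static_cast<int>(i);
    }
    return -1;                       // no batch succeeded
}
```
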
Bart <bc@freeuk.com>: Jan 26 02:53PM

On 26/01/2020 13:58, David Brown wrote:
 
> Again, I'd suggest that if you think "goto" is necessary (in manually
> written code) to handle the flow you need, then your code structure is
> probably not ideal.
 
Do you ever use loop 'break' in your code? Because some consider that
impure too.
 
If so, what would you feel about someone claiming they have never ever
used 'break' in their C or C++ code?
 
(I can't easily distinguish loop-break from switch-break in source.
However, I've got my C compiler to help out and that tells me that the
235Kloc sqlite3.c + shell.c program contains:
 
645 gotos
300 loop-breaks
800 switch-breaks
 
Interesting that there are 800 useless switch-breaks that nobody objects
to (where omitting one is an undetectable bug), more than the number of
universally hated gotos. Anyway, loop-break is extensively used here.)
 
 
>> usual code flow
 
> "goto" is not part of the usual code flow in any structured programming.
>  It is, at most, for very /unusual/ code flow requirements.
 
I meant 'unusual'.
 
>> eventually removed but which doesn't always happen).
 
> If it ought to be removed, it ought to be removed.  "I should have done
> this but I didn't" is hardly an excuse!
 
Because it's working code. Removing goto would be disruptive and
introduce bugs.
 
> the first place.  And avoiding goto is not the aim in itself - avoiding
> unstructured coding is the aim (which itself is a consequence of the
> higher aim of writing clear, legible and maintainable code).
 
The next level or two beyond structured code is functional programming.
And we all know how legible that can be.
 
>> (No macros either in that project.)
 
> Only you consider that a good thing.
 
Yes; I consider most uses of macros in C a failing of the language.
 
(However, I've since introduced parametric macros for the first time in
my own languages. They are AST-based, not token-based as in C, so more
restrictive (they have to result in a well-formed expression). The
feature is still experimental.)
Manfred <noname@add.invalid>: Jan 26 06:50PM +0100

On 1/26/2020 2:58 PM, David Brown wrote:
> On 26/01/2020 13:44, Bart wrote:
 
>> No, the use-cases for goto are always there:
 
> No, they are not.
 
I wouldn't be that harsh.
 
 
> Again, I'd suggest that if you think "goto" is necessary (in manually
> written code) to handle the flow you need, then your code structure is
> probably not ideal.
 
This is an interesting point. The fact is that what is considered to be
good code structure is subjective, because it has to do with how it fits
the programmer mindset (either the writer's or reader's). And
(fortunately) that is not fixed for everyone (it is, to some extent, in
the eye of the beholder).
I personally never had the need for goto, because my way of structuring
code does not fit well with it (this is probably similar to your attitude).
However, I am willing to consider that this may be due to my background
with coding (I started with Fortran, then C and shortly after that C++),
and give the benefit of the doubt to someone who may have had a
different path - specifically if having a considerable background or
habit with ASM (I have no idea if this is Bart's case, but it may fit
kernel programmers).
In other words, goto (or its equivalents) is ubiquitous in ASM
programming; so to someone used to that way of coding, goto may well be
clearer and more readable than structured loops more often than I would
think.
 
From that perspective, goto in C++ coding is considerably less useful
than in C.
 
> irrelevant, or you use good tools and let the compiler handle the
> optimisation from flags to gotos.  It is very rare that you need maximal
> code efficiency and have no choice but to use poor tools.
 
This is true, but in case of kernel code I understand why they didn't
want to put those "extra tests, duplicated calculations, local flag
variables, etc." in the source in the first place.
 
<snip>
 
>> (No macros either in that project.)
 
> Only you consider that a good thing.
 
(I think in comp.lang.c++ there are more people who do not consider
macros a good thing)
David Brown <david.brown@hesbynett.no>: Jan 26 07:10PM +0100

On 26/01/2020 15:53, Bart wrote:
>> probably not ideal.
 
> Do you ever use loop 'break' in your code? Because some consider that
> impure too.
 
I do use "break", yes. (I rarely find use for "continue".) I also am
happy with multiple "return" points in a function. Early returns based
on parameter values are typically, I think, simpler and clearer than
adding new layers of "if" statements or extra functions. I am more
cautious about extra returns in the middle of a function - it has to be
clear why it is happening. Exiting multiple loops early is an example
of such cases. Here, the "early return" means a new way to exit the
function, but unlike using "goto" it avoids extra entry points to other
parts of the function.
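
A minimal sketch of that last point - a nested search where the early return replaces what would otherwise be a goto past both loops (the function and data here are invented for illustration):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Returns the indices of the first pair summing to `target`, or {-1, -1}.
// The early return exits both loops at once; there is exactly one other
// way out of the function, so the paths stay easy to enumerate.
std::pair<int, int> find_pair_with_sum(const std::vector<int>& v, int target) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] + v[j] == target)
                return {static_cast<int>(i), static_cast<int>(j)};
    return {-1, -1};                 // the normal "not found" exit
}
```
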
 
(I am aware that some people disapprove of "break" or "continue" in
loops, and any "return" except as the last line of a function.)
 
As well as readability and maintainability of code, this is all about
keeping control of the number of paths through the function, and its
cyclomatic complexity. If you want to be sure that a function will
work, you need to know all the possible paths through it. You need to
know all the loops - and be sure you can guarantee progress towards an
end result. Jumping around makes this massively more difficult.
 
 
> If so, what would you feel about someone claiming they have never ever
> used 'break' in their C or C++ code?
 
(I assume you are meaning "break" from loops here, and not in switch
statements.)
 
I would believe someone who claimed this, yes. People have different
styles, and different needs in their coding.
 
 
> Interesting that there are 800 useless switch-breaks that nobody objects
> to (where omitting one is an undetectable bug), more than the number of
> universally hated gotos. Anyway, loop-break is extensively used here.)
 
Why do you think no one objects to having to put "break" in switch
cases? I suspect that the solid majority of C programmers would have
been happier if switch statements were properly structured with respect
to the rest of the language, with cases starting new blocks (and
therefore no need for "break"). As a second-best choice, most would
prefer that cases functioned as though they had "break" at the end, and
you needed an explicit "fall-through" indicator (you'd need a nice way
of having multiple cases).
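
For what it's worth, C++17 did add the explicit fall-through indicator (though cases still fall through by default); a small sketch:

```cpp
#include <string>

// With warnings like -Wimplicit-fallthrough enabled, compilers flag
// silent fall-through but accept [[fallthrough]] as a statement of intent.
std::string classify(char c) {
    switch (c) {
    case ' ':
    case '\t':                 // multiple labels sharing one body
        return "blank";
    case '\r':
        [[fallthrough]];       // deliberate fall-through, marked in the source
    case '\n':
        return "newline";
    default:
        return "other";
    }
}
```
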
 
People will tell you that they have no problem using switch in C and
C++, and that their compiler tells them if they forget a "break", and
perhaps they will tell you /why/ C has switch the way it does. Do not
mistake that for thinking they prefer it that way or think it would be a
good way to design a "switch" statement in a modern language.
 
Also, do not imagine that goto is universally hated. It is often
decried, but not universally. And people who have expressed an opinion
here do not (I think) actively avoid goto for its own sake - they avoid the
kind of code structures in which "goto" might seem a feasible solution.
 
 
>> "goto" is not part of the usual code flow in any structured
>> programming.   It is, at most, for very /unusual/ code flow requirements.
 
> I meant 'unusual'.
 
Fair enough. I always aim for /usual/ code flow - code should flow in
the way people expect.
 
>> done this but I didn't" is hardly an excuse!
 
> Because it's working code. Removing goto would be disruptive and
> introduce bugs.
 
That sounds like a flawed development methodology.
 
Some code is "write and forget", and its quality and future
maintainability does not matter. But for most serious code, the code is
not ready until it is clear and complete, where you can honestly say "I
did a good job on that code" as you check it in to the project. You
know that you - or anyone else - can take the code later, understand
what it does, change it, re-use it, without a risk of mistakes, without
spending hours figuring out what it does and how it does it. It is not
enough that code works - it must be clear that it is correct. That is
how you write /robust/ code.
 
What you are describing is "fragile" code - code that other programmers
(including your future self) are afraid to touch because it could easily
break.
 
Now, many people /do/ write fragile code. It may even be most people.
And many managers and project leaders are only interested in short-term
costs and place little emphasis on long-term reliability and usability
of code, leading to "it passed the test, ship it and we're done"
mentality. (Note - this is /not/ "test-driven development".) The
result is vast wastage overall.
 
<https://dilbert.com/strip/2014-08-12>
 
>> consequence of the higher aim of writing clear, legible and
>> maintainable code).
 
> The next level or two beyond structured code is functional programming.
 
I was talking about goals for development, not levels of programming
language.
 
And no, there is not really a hierarchy of programming languages like
this that puts "functional programming" one or two levels above
"structured programming". They are orthogonal concepts. You might have
more of a point if you were comparing functional programming with
imperative programming, but really there is no clear distinction or
ordering.
 
> And we all know how legible that can be.
 
Functional programming can be very legible. It is just a little fussier
about who its readers are. For example, this is a functional
programming quicksort function (based on a Haskell example on Wikipedia):
 
quicksort [] = []
quicksort (pivot : tail) =
    quicksort [x | x <- tail, x < pivot]
        ++ [pivot] ++
    quicksort [x | x <- tail, x >= pivot]
 
Are you going to tell me that this is hard to follow, even if you have
never seen Haskell before?
 
The Python equivalent would be:
 
def quicksort(data) :
    if data == [] :
        return []
    pivot, tail = data[0], data[1:]
    return (quicksort([x for x in tail if x < pivot]) + [pivot] +
            quicksort([x for x in tail if x >= pivot]))
 
 
That is Python code written in a functional programming style. Do you
find that illegible?
 
(Of course a lot of functional programming code will be hard to
understand if you are unfamiliar with the language and style. The same
applies to most languages.)
 
 
>>> (No macros either in that project.)
 
>> Only you consider that a good thing.
 
> Yes; most uses of macros in C I consider a failing in the language.
 
That is a non sequitur.
 
C is a programming language - it is specified in standards, implemented
by compilers. It has its limitations and its questionable design
decisions (especially seen with modern eyes), as do all languages.
There are certainly things that are often done using C macros that could
be better done with other features in other languages (C++ provides
features that replace many uses of macros in C).
 
But given that a programmer is writing in C, it makes sense to use
macros whenever they are appropriate for the aims of the task.
 
 
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 26 06:57PM

On Sun, 2020-01-26, Manfred wrote:
> On 1/26/2020 2:58 PM, David Brown wrote:
...
> the programmer mindset (either the writer's or reader's). And
> (fortunately) that is not fixed for everyone (it is, to some extent, in
> the eye of the beholder).
 
Less so than many other things (like naming and high-level design),
I think. There's Dijkstra, there's Structured programming, and a
general agreement that complex loops, complex loop conditions and
vague invariants are worse than simpler ones.
 
(But people surely disagree about how much effort should be spent
making the structure good/better.)
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Robert Wessel <robertwessel2@yahoo.com>: Jan 26 01:14PM -0600

On Sun, 26 Jan 2020 14:58:45 +0100, David Brown
>irrelevant, or you use good tools and let the compiler handle the
>optimisation from flags to gotos. It is very rare that you need maximal
>code efficiency and have no choice but to use poor tools.
 
 
I don't think efficiency is a main concern here, or even an important
one. Rather some uses of goto are, at least IMO, clearer than many of
the alternatives. The error handling and nested-loop escape being two
common ones. C++/RAII mostly removes one of those.
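
To make the RAII point concrete, a sketch (with a made-up function, using unique_ptr as a scope guard for a C FILE handle) - every early return below would be a "goto cleanup" in the C idiom:

```cpp
#include <cstdio>
#include <memory>
#include <string>

// The deleter runs on every exit path, so the early returns need no
// cleanup label; fclose is never skipped and never called on a null handle.
std::string read_first_line(const char* path) {
    std::unique_ptr<std::FILE, int (*)(std::FILE*)> f(std::fopen(path, "r"),
                                                      &std::fclose);
    if (!f)
        return {};                         // early exit: nothing to undo
    char buf[256];
    if (!std::fgets(buf, sizeof buf, f.get()))
        return {};                         // early exit: fclose still runs
    return buf;                            // normal exit: same cleanup
}
```
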
 
Certainly everything can be written without gotos, I just don't think
that's always the clearest way to write the code. "Clarity" being a
somewhat subjective determination, of course. A bit of code in an
unfamiliar idiom, even if simpler, can be less "clear".
 
Still, gotos should be used quite sparingly. And uses should avoid
the "spaghetti" code of the distant past.
 
On the other hand, this is a bit of a tempest in a teapot. When
Dijkstra wrote "Goto considered harmful", his context was unrestricted
branching across large monolithic programs (sometimes tens of KLOCs).
The problems that introduces are just not possible if you have
reasonably sized routines. I'm not suggesting that unrestricted
branching across a 50 line routine is a good idea, but he'd never have
written that paper to deal with those issues. You just cannot have
the morass you get from a bunch of labels in a chunk of code
referenced by dozens of gotos, many from multiple thousands of lines
away.
 
I just think people have gone a bit overboard with the idea, so that
it's ended up as "if there's any way to avoid a goto, do so" (which we
know is actually always possible*), rather than focusing on clarity.
 
 
*That's even been shown to be generally possible to do mechanically.
For a while there was a market in restructuring tools (mostly for
Cobol), that would turn any unstructured (Cobol) program into one with
none of those nasty gotos. A complete horror was a common result,
where the program got turned into a mess of state machines and state
variables implemented as endless IF blocks, and making the old version
of the program a relative paragon of clarity. But there were no
gotos.
Robert Wessel <robertwessel2@yahoo.com>: Jan 26 01:20PM -0600

On Sun, 26 Jan 2020 19:10:47 +0100, David Brown
>parts of the function.
 
>(I am aware that some people disapprove of "break" or "continue" in
>loops, and any "return" except as the last line of a function.)
 
 
I'd point out that the justification for a multi-level break (or
continue) is exactly the same as for an early return - avoiding those
extra layers of ifs, state variables or functions just to implement
the control flow. I think C should have such a feature*, but it
doesn't. A goto can simulate that pretty well.
 
Again, something to be used sparingly.
 
 
*Allow a break or continue to reference a label attached to a loop or
switch.
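
A sketch of that simulation - a single forward goto doing duty for the missing labeled break (function and data invented for illustration):

```cpp
// Find `wanted` in a 3x3 grid; the goto is a "break 2" in effect, jumping
// to just past both loops and nowhere else.
int find_in_grid(const int grid[3][3], int wanted) {
    int found = -1;
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) {
            if (grid[r][c] == wanted) {
                found = r * 3 + c;
                goto done;
            }
        }
    }
done:
    return found;
}
```
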
Ian Collins <ian-news@hotmail.com>: Jan 27 08:21AM +1300

On 27/01/2020 01:44, Bart wrote:
> nested loop control, usual code flow (plus the temporary uses during
> development which ought to be eventually removed but which doesn't
> always happen).
 
These points have already been covered in this thread, why bring them up
again?
 
--
Ian.
David Brown <david.brown@hesbynett.no>: Jan 26 08:46PM +0100

On 26/01/2020 18:50, Manfred wrote:
 
>>> No, the use-cases for goto are always there:
 
>> No, they are not.
 
> I wouldn't be that harsh.
 
I don't think these "use-cases for goto" exist (outside of generated
code) - not in the sense of programming tasks for which "goto" is the
best answer, or the only natural choice.
 
But other people have different balances of how they like to code, and
what they consider "best". And of course there can be differences for
different types of programming tasks. (For one thing, keeping a
consistent style with existing code in a project is often very important
- even if it is not a great style to begin with.)
 
However, what we can definitely say is that for myself and for several
other people in this thread, each with decades of experience at C and/or
C++ coding, these "use-cases for goto" are not there in the tasks we face.
 
> the programmer mindset (either the writer's or reader's). And
> (fortunately) that is not fixed for everyone (it is, to some extent, in
> the eye of the beholder).
 
Agreed.
 
> thus, to someone used to that way of coding, I would not be surprised
> that goto is clearer and more readable than structured loops more often
> than I would think.
 
I have a strong background in assembly too. And BASIC before that.
(And a dozen or more languages after that, to varying degrees.)
 
But I agree that background has an influence here. In particular, some
people seem to consider software development as something you learn once
and do in the same way ever after - even if the language changes. They
might be using C++ on the latest MSVC or gcc compiler, but they are
using K&R 1 as their bible and declare all their variables at the start
of their functions because that's what they learned for Pascal at school
in 1980.
 
I am not suggesting you have to be that extreme to use gotos in your
code (nor am I accusing any particular poster here of being such a
programmer, before anyone takes offence), but I think there is a strong
element of old-fashioned style in goto-rich code.
 
 
> This is true, but in case of kernel code I understand why they didn't
> want to put those "extra tests, duplicated calculations, local flag
> variables, etc." in the source in the first place.
 
Linux is old - and its origins go back to a time when you needed "manual
optimisation" to get efficient results. You don't need them now - the
kernel won't compile with a compiler that can't do reasonable
optimisation. But there is good sense in keeping a consistent style
with existing code, and there is good sense in not changing existing
code unnecessarily, especially in a project with such a wide spread of
developers involved.
 
 
>> Only you consider that a good thing.
 
> (I think in comp.lang.c++ there are more people who do not consider
> macros a good thing)
 
Abuse of macros is a bad thing - I hope all will agree on that.
 
Use of macros when there are better alternatives is a bad thing - again,
I hope that will have full agreement. And C++ provides more
alternatives than C does for many typical use-cases of macros.
 
Some of the disadvantages of macros may be even more problematic in C++,
such as their unscoped nature (since C++ has more scoping levels than C).
 
All this changes the balance of when it is a good thing (making code
clearer, simpler and easier to maintain) or a bad thing to use a macro
in C or C++. So there are fewer circumstances when a macro is a good
solution in C++ than there are in C.
 
But you can't generalise that macros are good or bad - it is only usage
(including definitions) of particular macros that are good or bad.
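
The classic example of that trade-off, sketched: a function-like macro that evaluates an argument twice, next to the constexpr function that replaces it in C++ (the encoding of the results is just for comparison):

```cpp
// The macro body evaluates each argument twice; the function does not.
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

constexpr int max_fn(int a, int b) { return a > b ? a : b; }

// Each returns result * 10 + final counter value, packing both into one int.
int via_macro() { int i = 5; int r = MAX_MACRO(i++, 4); return r * 10 + i; }
int via_fn()    { int j = 5; int r = max_fn(j++, 4);    return r * 10 + j; }
```

via_macro() yields 67 - the increment ran twice, so the "maximum" is 6 and i ends at 7 - while via_fn() yields the expected 56. That is the whole argument for preferring the function where the language allows it.
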
Bart <bc@freeuk.com>: Jan 26 08:51PM

On 26/01/2020 19:20, Robert Wessel wrote:
 
> Again, something to be used sparingly.
 
> *Allow a break or continue to reference a label attached to a loop or
> switch.
 
I've implemented multi-level breaks (and a couple of other loop controls
you don't find in C and C++).
 
If the inner loops are numbered from 1 (where the break statement is) to
N (outermost loop), then you write the equivalent of:
 
    break;     // or break 1
    break 2;   // next outer loop
    break N;   // break out of all loops
 
Not everyone likes applying an index like this (eg. you add or remove
loops, and the numbers have to change); they prefer labeled statements.
 
However I've found the most common requirement (other than the
inner-most) is to break out of all loops. So I have a special form:
 
break all;
 
(Which is exactly the same as break 0, where 0 is a synonym for N.)
 
No indexing needed nor any labeling. And you can wrap extra inner loops
around the break statement without having to change anything.
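
In standard C++, a rough equivalent of "break all" (no goto, no index, and inner loops can still be added freely) is an immediately-invoked lambda whose return plays the role of the multi-level break - a sketch, not a claim about Bart's language:

```cpp
#include <vector>

// Returns the first negative element, or 0 if every loop runs to completion.
int first_negative(const std::vector<std::vector<int>>& rows) {
    return [&]() -> int {
        for (const auto& row : rows)
            for (int x : row)
                if (x < 0)
                    return x;      // the "break all" moment
        return 0;                  // fell out of every loop normally
    }();
}
```
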
David Brown <david.brown@hesbynett.no>: Jan 26 10:15PM +0100

On 26/01/2020 20:14, Robert Wessel wrote:
 
 
> I just think people have gone a bit overboard with the idea, so that
> it's ended up as "if there's any way to avoid a goto, do so" (which we
> know is actually always possible*), rather than focusing on clarity.
 
Speaking only for myself, my focus /is/ on clarity. My focus is not
"avoiding goto", certainly not on "removing goto", and not based on the
old "Goto considered harmful" paper (no matter how much I respect the
author). As you say, the context of that paper is different from the
situation now, and I think we can all agree that spaghetti programming
is a bad idea.
 
And I agree that occasionally a goto may be a clear way to write code -
it simply hasn't been a clear way to write any code that /I/ have had to
write.
 
I do think, however, that if you have a function for which a goto would
seem the clearest solution, then it is likely to be an overly complex
function. Functions with too many paths, branches, loops, etc., are
often hard to analyse (mentally, formally or with analysis tools) to be
sure all the paths and loops work as expected. A "goto" should always
be treated with suspicion - either it is a poor solution to the problem,
or perhaps you need to change the problem.
 
I am sure there are some types of coding situations where goto is
considered a reasonable choice - I just don't meet these situations.
Sam <sam@email-scan.com>: Jan 26 09:43AM -0500


> Hello,
 
> About the invariants of a system..
 
Sir, this is Wendy's.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 26 03:51PM +0100

> We could also do without the racist white Arab stuff.
 
I'll bet my right hand that Amine is manic-depressive.
He has manic phases where he writes waterfalls and depressive
phases where you won't read anything from him. He should get
a proper treatment.
Christian Gollwitzer <auriocus@gmx.de>: Jan 19 09:55AM +0100


> I think you are missing the forest for the trees. This
> is the CMW ambassador:
 
> https://github.com/Ebenezer-group/onwards/blob/master/src/cmw/tiers/cmwA.cc
 
I think you didn't understand what I was writing:
 
https://github.com/Ebenezer-group/onwards/commits/master/src/cmw/tiers/cmwA.cc
 
The commit history doesn't tell me anything. No useful messages.
 
Christian
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 20 03:56PM -0800

On 1/20/2020 3:52 PM, Mr Flibble wrote:
 
>> I most likely just consumed a sarcastic sandwich! Humm, tastes like
>> sausages. ;^)
 
> https://www.youtube.com/watch?v=9lru1Qxc1l8
 
I see.
 
You're not a cop! SkinJobs, any compassion? You're little people, perhaps
because he is not corrupt?
 
Nothing wrong with a synthetic.