Sunday, September 12, 2021

Digest for comp.lang.c++@googlegroups.com - 25 updates in 3 topics

Ian Collins <ian-news@hotmail.com>: Sep 12 12:11PM +1200

On 12/09/2021 07:56, Bart wrote:
> a[i]=b+c*d;
> fn(a[i]);
> }
 
<snip listings>
 
> So which one gets the prize?
 
The one which runs correctly the fastest!
 
You appear to be stuck in the "C as a high level assembler" mindset.
This shouldn't be true for C and definitely isn't true for C++.
 
Optimised code often bears little resemblance to the original source and
the same source compiled with the same compiler can be optimised in
different ways depending on the context.
 
<snip>
 
> But it's still not as easy to follow as either of mine.
 
> So, yes, decent tools are important...
 
They are, and decent appears to have different meanings to different
people! From my perspective a decent compiler will correctly compile my
code, provide excellent diagnostics and a high degree of optimisation.
 
>> it. How else would it work?
 
> Have a look at my first example above; would the a[i]=b+c*d be
> associated with anything more meaningful than those two lines of assembly?
 
Does it matter?
 
>> to look at the assembly for that.
 
> That's the kind of thing that the unit tests Ian is always on about
> don't really work.
 
Unit tests test logic, not performance. We run automated regression
tests on real hardware to track performance. If there's a change
between builds, it's trivial to identify the code commits that caused
the change.
 
 
> Then just scale up the size of the project; you will hit a point where
> it /is/ a problem! Or change the threshold at which any hanging about
> becomes incredibly annoying; mine is about half a second.
 
Correct, so you scale up the thing you have control over, the build
infrastructure. It's safe to say that no one here has their own C++
compiler they can tweak to go faster! Even with your tools, you have to
sacrifice diagnostics and optimisations for speed.
 
> Just evading the issue by, insteading of getting a tool to work more
> quickly, making it try to avoid compiling things as much as possible,
> isn't a satisfactory solution IMO.
 
A build system is more than just a compiler; there are plenty of other
tools you can deploy to speed up builds.
 
> It's like avoiding spending too long driving your car, due to its only
> managing to do 3 mph, by cutting down on your trips as much as possible.
> It's a slow car - /that's/ the problem.
 
Poor analogy. A better one is that your car is slow because it only has a
single-cylinder engine, so you can make it faster with a bigger cylinder
or more of them!
 
--
Ian.
Michael S <already5chosen@yahoo.com>: Sep 12 01:29AM -0700

On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
> $(CC) $(CPPFLAGS) -MM -MT '$*.o $@' -MF $@ $<
 
> (As a side note, it wouldn't hurt if the GCC people updated their docs
> from time to time...)
 
 
The gcc maintainers have a policy against updating/fixing docs.
From their perspective, the compiler and the docs are inseparable parts of the holy "release".
I tried to change their mind about it a few years ago, but didn't succeed.
So, if you are not satisfied with the quality of the gcc docs supplied with your release of the compiler, then the best you can do is to look at the docs for the most recent "release", i.e. right now 11.2. Naturally, in order to be sure that these docs apply, you'd have to update the compiler itself too.
Juha Nieminen <nospam@thanks.invalid>: Sep 12 08:56AM

> Very nice. Now you have a single globals.h type file (VERY common in large
> projects). How does gcc figure out which C files it needs to build from that?
 
It doesn't. It only compiles what you tell it to compile.
 
It has to be *something else* that runs it and tells it what to
compile. Often this is the 'make' program (which is reading a
file usually named 'Makefile').
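 
For example, a minimal sketch of such a Makefile (the file names are made
up for illustration, and recipe lines must start with a tab):
 
    CXX      := g++
    CXXFLAGS := -O0 -Wall
 
    OBJS := main.o util.o
 
    prog: $(OBJS)
            $(CXX) $(CXXFLAGS) -o $@ $(OBJS)
 
    # make runs this rule only for .o files that are older than their .cpp
    # file; the compiler itself just compiles whatever it is handed
    %.o: %.cpp
            $(CXX) $(CXXFLAGS) -c -o $@ $<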
Juha Nieminen <nospam@thanks.invalid>: Sep 12 09:00AM

> The 'supercomputer' on my desk is not significantly faster than the RPi4
> you mention below.
 
Then you must have a PC from the 1990's, because the Raspberry Pi 4
is a *very slow* system, believe me. I know, I have one. What takes
a few seconds to compile on my PC can take a minute to compile on
the Pi.
 
> If your code is fairly standard C, try using Tiny C. I expect your
> program will build in one second or thereabouts.
 
It's C++. (This is a C++ newsgroup, after all.)
Juha Nieminen <nospam@thanks.invalid>: Sep 12 09:10AM

> because all the little calls that should be inlined out of existence,
> become layers of calls that you might have to dig through. Most of the
> what you are looking for is drowned out in the noise.
 
Most interactive debuggers support stepping into a function call, or
stepping over it (ie. call the function but don't break there and
just wait for it to return).
 
>> should follow the source code line-by-line, with *each* line included
>> and nothing optimized away.
 
> That was true 20 years ago, perhaps, with C. Not now, and not with C++.
 
I don't see how it isn't true now. If there's a bug in your code, you
need to see and examine every line of code that could be the culprit.
If the compiler has done things at compile time and essentially
optimized the faulty line of code away ("merging" it with subsequent
lines), you'll be drawn to the wrong line of code. The first
line of code that exhibits the wrong values may not be the one that's
actually creating the wrong values, because that line has been optimized
away. (The same applies to optimizing away function calls.)
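 
For instance (a contrived sketch; exactly what gets folded away obviously
varies with the compiler and the options used):
 
    int scale(int x)
    {
        int doubled = x * 2;        // at -O2 this temporary is typically folded
                                    // into the next line, so a debugger may have
                                    // no distinct value or stopping point for it
        int result = doubled + 1;
        return result;
    }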
 
>> and detrimental to development if it's too long.
 
> Don't solve that by using weaker tools. Solve it by improving how you
> use the tools
 
That's exactly what I'm doing by doing a fast "g++ -O0" compilation
instead of a slow "g++ -O3" compilation.
 
> time costs more than the computer's time. "I don't use optimisation
> because it is too slow" is an excuse for hobby developers, not
> professionals.
 
There's no reason to use optimizations while writing code and testing it.
"I don't use optimization because it is too slow" is *perfectly valid*.
If it is too slow, and is slowing down your development, it's *good*
to make it faster. I doubt your boss will be unhappy with you developing
the program in less time.
 
You can compile the final result with optimizations, of course.
 
> It is better to only compile the bits that need to be compiled. Who
> cares how long a full build takes?
 
In the example I provided the project consists of two source files
and one header file. It's very heavy to compile. Inclusion optimization
isn't of much help.
HorseyWorsey@the_stables.com: Sep 12 09:18AM

On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
 
>It has to be *something else* that runs it and tells it what to
>compile. Often this is the 'make' program (which is reading a
>file usually named 'Makefile').
 
Well thanks for that valuable input, we're all so much more informed now.
Juha Nieminen <nospam@thanks.invalid>: Sep 12 09:21AM

>>compile. Often this is the 'make' program (which is reading a
>>file usually named 'Makefile').
 
> Well thanks for that valuable input, we're all so much more informed now.
 
You made that sound sarcastic. If it is indeed sarcasm, I don't
really understand why.
HorseyWorsey@the_stables.com: Sep 12 09:23AM

On Sat, 11 Sep 2021 18:40:58 +0200
>/not/ a replacement for a build system. It is a tool to help automate
>your build systems. The output of "gcc -MD" is a dependency file, which
>your makefile (or other build system) imports.
 
Yes, I understand perfectly. You create huge dependency files which either
have to be stored in git (or similar) and updated when appropriate, or
auto-generated in the makefile and then further used in the makefile, which has
to be manually written anyway unless it's simple - so what exactly is the point?
 
>> the compiler is sod all use if you need to fire off a script to auto generate
 
>> some code first.
 
>No, it is not. It works fine - as long as you understand how your build
 
Excuse me? Ok, please do tell me how the compiler knows which script file to
run to generate the header file. This'll be interesting.
 
>the header and the C file. Then this updated dependency file is
>imported by make, and shows that the object file (needed by for the
>link) depends on the updated C file, so the compiler is called on the file.
 
And that is supposed to be simpler than writing a Makefile yourself is it?
Riiiiiight.
 
 
>> out.
 
>As noted above, I do that fine. It's not rocket science, but it does
>require a bit of thought and trial-and-error to get the details right.
 
And is far more work than just putting 2 lines in a makefile consisting of a
dummy target and a script call. But each to their own.
 
>for
>> the real world its little use.
 
>OK, so you are ignorant and nasty. You don't know how automatic
 
Nasty? Don't be such a baby.
HorseyWorsey@the_stables.com: Sep 12 09:29AM

On Sun, 12 Sep 2021 09:21:50 -0000 (UTC)
 
>> Well thanks for that valuable input, we're all so much more informed now.
 
>You made that sound sarcastic. If it is indeed sarcasm, I don't
>really understand why.
 
Try following a thread before replying. A couple of posters were claiming the
compiler could automate the entire build system and I gave some basic examples
of why it couldn't. Now one of them is back-pedalling and basically saying it
can automate all the bits except the bits it can't when you need to edit the
makefile yourself. Genius. Then you come along and mention Makefiles. Well
thanks for the heads up, I'd forgotten what they were called.
 
Yes, it was sarcasm.
David Brown <david.brown@hesbynett.no>: Sep 12 11:32AM +0200

On 12/09/2021 01:09, Manfred wrote:
 
> with the addition of the -MT gcc option, which removes the need for the
> nasty 'sed' command in the "%.d: %.c" rule - which is the kind of thing
> that tends to keep people away.
 
That is /exactly/ where I got all this! I had been using the
suggestions from that page, plus a couple of other "automate
dependencies with makefiles and gcc" web pages, along with sed, for many
years. Then when starting a new project not long ago, I needed a bit
more complicated makefile than usual (building a few different
variations at the same time). While thinking about the makefile and
looking at the gcc manual, I realised this could replace the "sed" and make
things a little easier for others working on the same project but using
Windows.
 
 
> I guess in that rule one can use the single command:
> %.d: %.c
>     $(CC) $(CPPFLAGS) -MM -MT '$*.o $@' -MF $@ $<
 
Yes, something like that is the starting point - but there's usually a
bit in there about directories if you want your build directories
separate from the source directories.
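 
For example, one possible shape of those rules with a separate build
directory (an untested sketch - the build/ directory name and the use of an
order-only prerequisite for creating it are just one way of doing it):
 
    build/%.o: %.c | build
            $(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<
 
    build/%.d: %.c | build
            $(CC) $(CPPFLAGS) -MM -MT 'build/$*.o $@' -MF $@ $<
 
    build:
            mkdir -p build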
 
> (As a side note, it wouldn't hurt if the GCC people updated their docs
> from time to time...)
 
Agreed!
David Brown <david.brown@hesbynett.no>: Sep 12 11:46AM +0200

On 12/09/2021 10:29, Michael S wrote:
> On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
 
>> (As a side note, it wouldn't hurt if the GCC people updated their docs
>> from time to time...)
 
Your reference here was to the "make" manual, rather than the gcc
documentation. But the gcc folk could add an example like this to their
manual for the "-MT" option.
 
 
> gcc maintainers have policy against updating/fixing docs.
> From their perspective, compiler and docs are inseparable parts of holy "release".
 
Well, yes. The gcc manual of a particular version documents the gcc of
that version. It seems an excellent policy to me.
 
It would be a little different if they were publishing a tutorial on
using gcc.
 
> I tried to change their mind about it few years ago, but didn't succeed.
 
Thankfully. It would be rather messy if they only had one reference
manual which was full of comments about which versions the particular
options or features applied to, as these come and go over time.
 
I suppose it would be possible to make some kind of interactive
reference where you selected your choice of compiler version, target
processor, etc., and the text adapted to suit. That could be a useful
tool, and help people see exactly what applied to their exact toolchain.
But it would take a good deal of work, and be a rather different thing
from the current manuals.
 
> docs for the most recent "release". I.e. right now 11.2. Naturally, in
> order to be sure that these docs apply, you'd have to update the
> compiler itself too.
 
I think most people /do/ look up the gcc documents online, rather than
locally. The gcc website has many versions easily available, so you can
read the manual for the version you are using. And while new features
in later gcc versions add to the manuals, it's rare that there are
changes to the text for existing features. The documentation for "-MT"
is substantially the same for the latest development version of gcc 12
and for gcc 3.0 from about 20 years ago.
Ian Collins <ian-news@hotmail.com>: Sep 12 09:51PM +1200

On 12/09/2021 21:10, Juha Nieminen wrote:
 
> There's no reason to use optimizations while writing code and testing it.
 
There may be many!
 
Unoptimised code being too slow or too big to run on the target is
common in real-time or pretend (i.e. Linux) real-time systems. Getting
more comprehensive error checking is another reason.
 
 
--
Ian.
David Brown <david.brown@hesbynett.no>: Sep 12 11:53AM +0200

On 12/09/2021 02:11, Ian Collins wrote:
 
> Optimised code often bears little resemblance to the original source and
> the same source compiled with the same compiler can be optimised in
> different ways depending on the context.
 
With modern C++, generated code - optimised or unoptimised - regularly
bears no resemblance to the original code. Use a few templates and
lambdas, and the source code structure is guaranteed to be very
different from the object code structure. But I suspect that Bart
disapproves of templates and lambdas.
 
But sometimes I find it useful to look at, debug or otherwise work with
the generated assembly - and for that, for me, -O1 or -O2 is better than
-O0 because there is so much less noise.
Paavo Helde <myfirstname@osa.pri.ee>: Sep 12 12:57PM +0300

>> your makefile (or other build system) imports.
 
> Yes, I understand perfectly. You create huge dependency files which either
> have to be stored in git (or similar) and updated when appropriate, or auto
 
What on earth are you babbling about? That's becoming insane.
 
In the rare chance you are not actually trolling: the dependency files
are generated by each build afresh, and they get used by the next build
in the same build tree for deciding which source files need to be
recompiled when some header file has changed. This is all automatic,
there are no manual steps involved except for setting it up once when
writing the initial Makefile (in case one still insists on writing
Makefiles manually).
 
There is no more point in putting the dependency files into git than there
is in putting the compiled object files there (in fact, a dependency file is
useless without the object files).
Bart <bc@freeuk.com>: Sep 12 11:17AM +0100

On 12/09/2021 01:11, Ian Collins wrote:
 
> <snip listings>
 
>> So which one gets the prize?
 
> The one which runs correctly the fastest!
 
Let's say none of them run correctly and your job is to find out why. Or
maybe you're comparing two compilers at the same optimisation level, and
you want to find why one runs correctly and the other doesn't.
 
Or maybe this is part of a benchmark where writing to a[i] is part of
the test, but it's hard to gauge whether one lot of generated code is
better than another, because the other has disappeared completely!
 
(I suppose in your world, a set of benchmark results where every one
runs in 0.0 seconds is perfection! I would say those are terrible
benchmarks.)
 
>> associated with anything more meaningful than those two lines of
>> assembly?
 
> Does it matter?
 
Ask why you're looking at the ASM in the first place. If there's no
discernible correspondence with your source, then you might as well look
at any random bit of ASM code; it would be just as useful!
 
 
> tests on real hardware to track performance.  If there's a change
> between builds, it's trivial to identify the code commits that caused
> the change.
 
Most of the stuff I do is not helped with unit tests.
 
Where there are things that can possibly be tested by ticking off entries in
a list, you find the real problems come up with combinations or contexts
you haven't anticipated and that can't be enumerated.
 
 
 
> Correct, so you scale up the thing you have control over, the build
> infrastructure.  It's safe to say that no one here has their own C++
> compiler they can tweak to go faster!
 
There are lots of C compilers around that are faster than ones like gcc,
clang and msvc. Tiny C is an extreme example.
 
I guess there are not so many independent compilers for C++ written by
individuals, which tend to be the faster ones.
 
I don't have the skills, knowledge or inclination to have a go at C++,
but I just get the feeling that such a streamlined product ought to be
possible.
 
After all, most functionality of C++ is implemented in user-code (isn't
it?), so the core language must be quite small?
 
 
> Poor analogy.  A better one is your car is slow because it only has a
> single cylinder engine, so you can make it faster with a bigger cylinder
> or more of them!
 
OK, let's say it's slow around town because your car is a 40-ton truck,
and you need to file the equivalent of a flight-plan with the
authorities before any trip.
David Brown <david.brown@hesbynett.no>: Sep 12 12:21PM +0200

On 12/09/2021 11:10, Juha Nieminen wrote:
 
> Most interactive debuggers support stepping into a function call, or
> stepping over it (ie. call the function but don't break there and
> just wait for it to return).
 
They do indeed. But in the world of embedded development, things can
get complicated. Stepping over functions works in some cases, but often
there is interaction with interrupts, timers, hardware, etc., that means
you simply cannot step like that.
 
> line of code that exhibits the wrong values may not be the one that's
> actually creating the wrong values, because that line has been optimized
> away. (The same applies to optimizing away function calls.)
 
If the compiler is optimising away a line of code, and that is causing a
problem, then the bug lies in the surrounding code that causes it to be
optimised away.
 
>> use the tools
 
> That's exactly what I'm doing by doing a fast "g++ -O0" compilation
> instead of a slow "g++ -O3" compilation.
 
People clearly have different needs, experiences, and ideas. All I can
tell you is that since "gcc -O0" does not work as a compiler that
handles my needs for either static analysis or code generation, it is
irrelevant how fast it runs - it is almost entirely useless to me.
 
When I write code, I make mistakes sometimes. I want all the help I can
get to avoid making mistakes, and to find the mistakes as soon as
possible. One major help is the static analysis from a good compiler.
I want that feedback as soon as I have finished writing a piece of code,
not the next day after a nightly build. "gcc -O2" gives me what I need,
"gcc -O0" does not.
 
>> because it is too slow" is an excuse for hobby developers, not
>> professionals.
 
> There's no reason to use optimizations while writing code and testing it.
 
I'm sorry, but you are simply wrong.
 
There are some kinds of programming where it doesn't matter how big and
slow the result is.
 
There are other kinds of programming where it /does/ matter. In my
world, object code that is too big is broken - it will not fit on the
device, and cannot possibly work. Object code that is too slow is
broken - it will not do what it has to do within the time required.
 
For many programmers, a major reason they are using C or C++ in the
first place is because the efficiency of the results matters. Otherwise
they would likely use languages that offer greater developer efficiency
for many tasks, such as Python. (I'm not suggesting efficiency is the
/only/ reason for using C or C++, but it is a major one.) Trying to do
your development and testing without a care for the speed is then a very
questionable strategy.
 
I can happily believe that for /you/, and the kind of code /you/ work
on, optimisation is not an issue. But that is not the case for a great
many other programmers.
 
And since using an optimised compiler - on appropriate hardware for a
professional developer - is rarely an issue, it seems to me a very
backwards idea to disable optimisation for a gain in build speed. (If
you really find it helpful in debugging, then I can appreciate that as a
justification.) Who cares if it takes 0.1 seconds or 2 seconds to
compile the file you've just saved? It is vastly more important that
you get good warning feedback after 2 seconds instead of the next day,
and that you test the real code rather than a naïve build that hides
many potential errors.
 
 
Bart <bc@freeuk.com>: Sep 12 11:29AM +0100

On 12/09/2021 10:00, Juha Nieminen wrote:
> is a *very slow* system, believe me. I know, I have one. What takes
> a few seconds to compile on my PC can take a minute to compile on
> the Pi.
 
My machine is from 2010. I probably under-represented the RPi4 timings
because it was running a 32-bit OS, so programs were 32-bit, but my PC
compilers were 64-bit.
 
There might be other reasons for the discrepancy you see; maybe your
project uses lots of files, and your PC uses an SSD while the RPi uses the
slower (?) SD card.
 
Or your project is large and is putting pressure on the RPi4's perhaps
more limited RAM.
 
(I chose my comparisons to be within the capabilities of both machines.)
 
>> If your code is fairly standard C, try using Tiny C. I expect your
>> program will build in one second or thereabouts.
 
> It's C++. (This is a C++ newsgroup, after all.)
 
See my reply to Ian.
David Brown <david.brown@hesbynett.no>: Sep 12 12:49PM +0200

> have to be stored in git (or similar) and updated when appropriate, or auto
> generated in the makefile and then further used in the makefile which has to
> be manually written anyway unless its simple so what exactly is the point?
 
No, you still don't understand.
 
Of course the dependency files are /not/ stored in your repositories -
the whole point is that they are created and updated when appropriate.
 
Yes, the main makefile is written manually (or at least, that's what I
do - there are tools that generate makefiles, and there are other build
systems). The automatic dependency generation means that I never have
to track or manually update the dependencies.
 
So when I add a new C or C++ file to my project, I don't need to make
/any/ changes to my makefile or build setup. It is found automatically,
and its dependencies are tracked automatically, next time I do a "make".
If I change the headers included by a C or C++ file, or by another
header file, I don't need to change anything - it is all automatic when
I do a "make".
 
The dependency files are re-built automatically, if and when needed.
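 
As a rough sketch of how the pieces can fit together (a hypothetical layout
with sources under src/ and objects under build/, GNU make assumed, and
using the -MMD "dependency file as a side effect of compiling" variant
rather than a separate %.d rule; recipe lines need a leading tab):
 
    SRCS := $(wildcard src/*.cpp)    # new source files are picked up automatically
    OBJS := $(SRCS:src/%.cpp=build/%.o)
    DEPS := $(OBJS:%.o=%.d)
 
    prog: $(OBJS)
            $(CXX) -o $@ $(OBJS)
 
    # -MMD writes build/foo.d as a side effect of compiling build/foo.o
    build/%.o: src/%.cpp | build
            $(CXX) $(CPPFLAGS) $(CXXFLAGS) -MMD -c -o $@ $<
 
    build:
            mkdir -p build
 
    # pull in whatever dependency files exist; they live only in the build
    # tree and are simply regenerated by the next build
    -include $(DEPS)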
 
 
>> No, it is not. It works fine - as long as you understand how your build
 
> Excuse me? Ok, please do tell me how the compiler knows which script file to
> run to generate the header file. This'll be interesting.
 
It knows in the same way as the programmer knows where to file the sales
department accounts.
 
Confused? Yes, you surely are.
 
It is /not/ the /compiler's/ job to know this! It is the /build/ system
that says what programs are run on which files in order to create all
the files needed.
 
>> link) depends on the updated C file, so the compiler is called on the file.
 
> And that is supposed to be simpler than writing a Makefile yourself is it?
> Riiiiiight.
 
Who do you think wrote the makefile? A friendly goblin? /I/ wrote the
makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
needed in order to generate the dependencies. The point is that no one
- not me, nor anyone else - needs to keep manually updating the makefile
to track the simple dependencies that can be calculated automatically.
 
>>> the real world its little use.
 
>> OK, so you are ignorant and nasty. You don't know how automatic
 
> Nasty? Don't be such a baby.
 
I don't yet know whether you are wilfully ignorant, or trolling.
David Brown <david.brown@hesbynett.no>: Sep 12 12:55PM +0200

> can automate all the bits except the bits it can't when you need to edit the
> makefile yourself. Genius. Then you come along and mention Makefiles. Well
> thanks for the heads up, I'd forgotten what they were called.
 
Ah, so you are saying that /you/ have completely misunderstood the
thread and what people wrote, and thought mocking would make you look
clever.
 
I guess you'll figure it out in the end, and we can look forward to
another name change so you can pretend it wasn't you who got everything
so wrong.
 
Michael S <already5chosen@yahoo.com>: Sep 12 04:42AM -0700

On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown wrote:
> Thankfully. It would be rather messy if they only had one reference
> manual which was full of comments about which versions the particular
> options or features applied to, as these come and go over time.
 
That's not what I was suggesting.
I was suggesting adding clarifications and suggestions for a feature (it was something about the function attribute 'optimize') that existed in gcc 5 to the online copy of the respective manual hosted on gcc.gnu.org/onlinedocs.
Obviously, the previous version of the manual would still have been available to the historians among us in the gcc source control database.
 
Instead, said clarifications and suggestions were added to the *next* release of the manual. Oh, in fact, no, they didn't make it into the gcc 6 manual; they were added to the gcc 7 manual.
So gcc 5 users now have no way to know that the changes in the docs apply to gcc 5 every bit as much as they apply to gcc 7 and later.
 
> > compiler itself too.
 
> I think most people /do/ look up the gcc documents online, rather than
> locally.
 
I am pretty sure that that is the case. And that was exactly my argument *for* updating the online copy of the gcc 5 docs.
And the maintainers' argument was that people who read the manuals locally do exist.
 
> read the manual for the version you are using. And while new features
> in later gcc versions add to the manuals, it's rare that there are
> changes to the text for existing features.
 
In my specific case it was a change to the text of an existing feature.
 
Juha Nieminen <nospam@thanks.invalid>: Sep 12 09:20AM

> extern "C" marks the function as having C naming conventions and C
> calling conventions
 
Indeed, extern "C" doesn't mean that the code being called using
those functions has actually been compiled as C. It merely changes
the naming of the functions in the object files to be compatible
with how functions are named in C.
 
But that makes me wonder: If an extern "C" declared function has
C++ types as parameters (or return value type), how are those
encoded in the name? Or are they at all? Can you overload extern "C"
functions?
Paavo Helde <myfirstname@osa.pri.ee>: Sep 12 12:34PM +0300

12.09.2021 12:20 Juha Nieminen wrote:
 
> But that makes me wonder: If an extern "C" declared function has
> C++ types as parameters (or return value type), how are those
> encoded in the name? Or are they at all?
 
They aren't.
 
> Can you overload extern "C"
> functions?
 
No.
David Brown <david.brown@hesbynett.no>: Sep 12 12:24PM +0200

On 12/09/2021 11:20, Juha Nieminen wrote:
 
> But that makes me wonder: If an extern "C" declared function has
> C++ types as parameters (or return value type), how are those
> encoded in the name?
 
They are not.
 
> Or are they at all? Can you overload extern "C"
> functions?
 
No.
 
Although in theory a compiler can support different ABIs or calling
conventions for C and C++ functions, in practice making a function
extern "C" simply disables all name mangling for the function. That in
turn means you can't overload it, or have it in a namespace.
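 
A minimal sketch (the function name "report" is made up for illustration):
 
    #include <cstdio>
 
    extern "C" void report(int value)    // emitted with the plain, unmangled
    {                                     // symbol name "report"
        std::printf("value = %d\n", value);
    }
 
    // extern "C" void report(double);   // ill-formed: overloading would require
                                          // two functions with the same unmangled
                                          // name
 
    int main()
    {
        report(42);
    }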
Bonita Montero <Bonita.Montero@gmail.com>: Sep 12 12:29PM +0200

Am 12.09.2021 um 12:24 schrieb David Brown:
 
> conventions for C and C++ functions, in practice making a function
> extern "C" simply disables all name mangling for the function. That
> in turn means you can't overload it, or have it in a namespace.
 
You can actually define it in a namespace, but it won't belong to it.
Bonita Montero <Bonita.Montero@gmail.com>: Sep 12 08:24AM +0200

Am 11.09.2021 um 22:05 schrieb Alf P. Steinbach:
> declaration of `pLambda` — is flawed because if you put this code in
> any function other than `main` it will then use a dangling reference
> on second call of that function.
 
Of course there could be a dangling reference, but that's not what I'm
discussing.
 
 
> compiler can rewrite to a capture-less lambda. Don't know if they do
> though. And conversion to C function pointer would need to be a language
> extension.
 
Of course I could make i static, but that's not what I'm talking about.
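 
For readers without the earlier post, the general rule being discussed is
roughly this (a generic sketch, not the original pLambda code):
 
    using callback = void (*)(int);
 
    void demo()
    {
        callback a = [](int v) { (void)v; };      // OK: a capture-less lambda
                                                  // converts to a function pointer
        int i = 0;
        // callback b = [&i](int v) { i += v; };  // error: a capturing lambda has
                                                  // no such conversion
        static int s = 0;
        callback c = [](int v) { s += v; };       // a static needs no capture, so
                                                  // the conversion still works
        a(1); c(2);
        (void)i;
    }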