Monday, September 13, 2021

Digest for comp.lang.c++@googlegroups.com - 25 updates in 3 topics

"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Sep 13 07:05PM +0200

On 12 Sep 2021 16:45, Bonita Montero wrote:
> won't be called again after the API-call has finished. So in these
> cases such a conversion of a callable object to a bare function-pointer
> would be nice.
 
As Paavo Helde notes in his reply it's called a trampoline.
 
Trampolines were used in Borland Pascal's windowing library, and I guess
they did the same in Borland C++.
 
One C++ expert that's done some work on trampolines: Andrei
Alexandrescu. Andrei is or was much like Bjarne. One could e-mail him
and he'd answer to the best of his ability. As I recall I've only
¹intersected his path five times, but I think it's enough to say that
the opinion that you can mail him and will probably get help with this
is an informed one.
 
One C++ programmer who's submitted or at least started on a proposal for
standardization of C++ trampoline generation: the good Puppy (I don't
recall his real name) in the C++ Lounge at Stack Overflow.
 
You could visit the Lounge and air the question. Chances are that the
puppy wrote some implementation for his proposal. As I recall he
attended one committee meeting for this.
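 
In the meantime, a minimal, hedged sketch of the usual portable
workaround when the C API does provide a user-data pointer (the API name
`register_callback` below is invented). Bonita's case, where the API
offers no such pointer, is exactly where a real trampoline - a run-time
generated thunk - would be needed:
 
    // Hypothetical C API: plain function pointer plus a user-data pointer.
    typedef void (*c_callback)(int event, void* user_data);
    void register_callback(c_callback cb, void* user_data);
 
    // "Trampoline by hand": a captureless lambda converts to a plain
    // function pointer, and the callable object itself travels through
    // the user-data pointer (f must outlive the registration).
    template <class F>
    void register_cpp_callback(F& f) {
        register_callback(
            [](int event, void* user_data) {
                (*static_cast<F*>(user_data))(event);
            },
            &f);
    }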
 
 
- Alf
 
Notes:
¹ I provided some feedback on the first ScopeGuard implementation
(Andrei helped the original inventor Petru Marginean publish a DDJ
article about it and provided some helper functionality), namely that
their use of `__LINE__` at that time didn't work with a special option
in Visual C++, and that they swallowed exceptions; I was one of the
reviewers of his "Mojo" framework for C++03 move semantics, where I
failed to see the big problem that someone else noticed, and Andrei then
corrected, and I even at first failed to compile it, but Andrei helped
me out; I fixed the failure/success return code of `WinMain` for the D
language, where Andrei and Walter Bright somehow, very perplexingly, got
that wrong; as a clc++m contributor I engaged in an escalating debate
with Andrei about SESE versus SEME, where I used so strong words that a
posting was rejected and I had to apologize, and the moderators
explained that they had accepted the posting without looking because it
was two experts debating (that was the first time I was ever called a
C++ expert); and, though I'm not sure I remember this correctly, it was
something like this: later, as clc++m moderator, I accepted a posting that
included a link to an illegal PDF of Andrei's "Modern C++ Design" book,
and he was absolutely not pleased about that.
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Sep 12 05:38PM -0700

> On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown wrote:
>> On 12/09/2021 10:29, Michael S wrote:
[...]
> gcc.gnu.org/onlinedocs Obviously, a previous version of the manual
> could have been available to historians among us in gcc source control
> database.
 
Then the online version of the gcc-5.5.0 documentation would be out of
sync with the released version. I'm not saying that's completely a bad
thing, but it's something to consider.
 
> manual. It was added to gcc 7 manual.
> So, gcc5 users now have no way to know that changes in docs apply to
> gcc5 every bit as much as they apply to gcc7 and later.
 
gcc has always (?) been released as gcc-X.Y.Z.tar.gz (or .bz2), which
includes source code and documentation. If the documentation in
gcc-5.5.0.tar.bz2 incorrectly describes the behavior of gcc-5.5.0,
that's obviously a problem. I think the gcc maintainers would consider
that to be a bug in the gcc 5.5.0 release, and they would no more release
an updated "gcc-5.5.0.tar.bz2" to correct a documentation error than to
correct a code error. That would cause too much confusion for users who
already downloaded the old version of the tar file.
 
If they considered the error important enough to justify a new release,
they could release a new gcc-5.5.1.tar.bz2 or gcc-5.6.0.tar.bz2, perhaps
with only documentation updates (which would be mentioned in the release
notes). But the long-term solution is to fix it in a newer release (the
latest is 11.2.0), and there's a legitimate question about how much
effort is justified to support gcc-5.* users.
 
Adding footnotes to the online versions of the manuals isn't a bad idea,
but again there are questions about how much effort it takes to support
something that few people are using.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips
void Void(void) { Void(); } /* The recursive call of the void */
Ian Collins <ian-news@hotmail.com>: Sep 13 12:51PM +1200

On 12/09/2021 22:17, Bart wrote:
 
> Let's say none of them run correctly and your job is to find out why. Or
> maybe you're comparing two compilers at the same optimisation level, and
> you want to find why one runs correctly and the other doesn't.
 
In that rare event (unit tests pass, but target behavior is incorrect),
study whatever assembly is generated. This will be optimised code,
which is often fun to read...
 
 
> (I suppose in your world, a set of benchmark results where every one
> runs in 0.0 seconds is perfection! I would say those are terrible
> benchmarks.)
 
Where did you get that strange idea from? Benchmarks measure
performance; a step change in the results is worth checking out (and we
often do).
 
 
> Ask why you're looking at the ASM in the first place. If there's no
> discernible correspondence with your source, then you might as well look
> at any random bit of ASM code; it would be just as useful!
 
If your target code is, by necessity, optimised then you don't have a
choice.
 
>> between builds, it's trivial to identify the code commits that caused
>> the change.
 
> Most of the stuff I do is not helped with unit tests.
 
So it doesn't have any functions that can be tested? That's a new one
on me!
 
> Where there are things that can possibly be tested by ticking off entries in
> a list, you find the real problems come up with combinations or contexts
> you haven't anticipated and that can't be enumerated.
 
If you find a problem, add a test for it to prove that you have fixed it
and to make sure it does not recur.
 
--
Ian.
Manfred <noname@add.invalid>: Sep 13 03:52AM +0200

On 9/12/2021 11:46 AM, David Brown wrote:
 
> Your reference here was to the "make" manual, rather than the gcc
> documentation. But the gcc folk could add an example like this to their
> manual for the "-MT" option.
 
Yes, I know I linked a page from the "make" docs, and I meant to write
the /GNU/ people (but my fingers typed gcc) - or whatever team takes
care of GNU make.
I think the GCC man page is OK, and so is the paragraph about -MT, but it
would be nice if the "GNU make" page titled "Automatic Prerequisites" gave
an example of the 1-line rule command that uses -MT (something like the
sketch below) instead of a 'sed' hieroglyph sequence (which I can read, but
calling it user friendly could be controversial) + a temp file + a couple
of rogue rm -f commands, especially considering that -MT has been available
since gcc 3.04 (i.e. it dates back a /long/ time).
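For concreteness, a hedged sketch of such a rule (variable and file name
conventions are illustrative only; the recipe line must start with a TAB):
 
    %.o: %.cpp
            $(CXX) $(CXXFLAGS) -MMD -MP -MT $@ -c $< -o $@
 
    -include $(OBJECTS:.o=.d)
 
No sed pipeline, no temp file, no stray rm -f.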
I guess someone from the gcc team somewhere during the last couple of
decades could have sent a carrier pigeon to their colleague in the
'make' maintenance team next door, with a small paper roll along the
lines of "hey, we've got this cool feature that might make makefile
writers' life easier, what do you think about it?" and the receiver
might have put hand to their doc page...
 
>> From their perspective, compiler and docs are inseparable parts of holy "release".
 
> Well, yes. The gcc manual of a particular version documents the gcc of
> that version. It seems an excellent policy to me.
 
Yes, for the record the page that I linked is not about a specific
version of 'make', it is part of the GNU make online manual.
 
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Sep 12 07:10PM -0700

Manfred <noname@add.invalid> writes:
[...]
> Yes, for the record the page that I linked is not about a specific
> version of 'make', it is part of the GNU make online manual.
 
The link you posted was:
 
https://www.gnu.org/software/make/manual/html_node/Automatic-Prerequisites.html
 
The URL doesn't refer to a particular version of GNU make, but I believe
it will always refer to the latest version of the manual. If you go up
a couple of levels, it says:
 
This is Edition 0.75, last updated 17 January 2020, of The GNU Make
Manual, for GNU make version 4.3.
 
I expect that when 4.4 is released the URL will refer to a newer version
of the manual.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips
void Void(void) { Void(); } /* The recursive call of the void */
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 12 11:12PM -0400

> On Sat, 11 Sep 2021 18:40:58 +0200
> David Brown <david.brown@hesbynett.no> wrote:
>> On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
...
>> your makefile (or other build system) imports.
 
> Yes, I understand perfectly. You create huge dependency files which either
> have to be stored in git (or similar) and updated when appropriate, or auto
 
Why in the world would you store them in git? They are
compiler-generated files, not source files. Do you normally keep .o
files in git? How about executables? You don't need to update them
manually; if you set up your build system properly, they get updated
automatically when needed.
The convention I've seen is that version control systems like git are
used only to save the source files from which other files are generated
- they aren't used to store generated files. I know of one main
exception to that, when using clearcase/clearmake - but the feasibility
of doing that depends upon features of clearcase and clearmake that are
not, to the best of my knowledge, shared by git and make, respectively.
 
A dependencies file retrieved from git would have to be replaced almost
immediately with a freshly generated one, which makes storing it there
even less reasonable than storing a .o file.
 
> generated in the makefile and then further used in the makefile which has to
> be manually written anyway unless its simple so what exactly is the point?
 
It greatly simplifies writing the makefile - it need only contain an
include line referencing the dependency file, rather than containing all
of those individual dependencies.
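 
For illustration (a hedged sketch, with invented file names): the file
that "gcc -MD" writes for main.cpp is itself just a makefile fragment,
something like
 
    main.o: main.cpp config.h util/logging.h
 
and the hand-written makefile only needs a line such as
 
    -include $(OBJECTS:.o=.d)
 
to pull all such fragments in.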
 
 
>> No, it is not. It works fine - as long as you understand how your build
 
> Excuse me? Ok, please do tell me how the compiler knows which script file to
> run to generate the header file. This'll be interesting.
 
The build system is what needs to know how to generate the header file.
The dependency file created by gcc -MD is intended to be used by the
build system, not as a replacement for a build system. All the compiler
needs to know is whether any given translation unit that it compiles
#includes the header; if so, it generates the appropriate line in a
dependency file.
 
>> imported by make, and shows that the object file (needed for the
>> link) depends on the updated C file, so the compiler is called on the file.
 
> And that is supposed to be simpler than writing a Makefile yourself is it?
 
It certainly is. It happens automatically when your build system is
properly set up, without requiring user intervention to update the
dependencies when they change, and it includes all dependencies, both
direct and indirect, which is something so difficult that most people
wouldn't even attempt it if forced to insert the dependencies manually.
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 12 11:13PM -0400

On 9/12/21 5:29 AM, HorseyWorsey@the_stables.com wrote:
...
> Try following a thread before replying. A couple of posters were claiming the
> compiler could automate the entire build system
 
I have been following the thread, and no one said anything of the kind.
The claim that started this subthread was:
 
On 9/8/21 1:22 PM, Paavo Helde wrote:
> Such dependencies are taken care automatically by the gcc -MD option,
> which you have to specify for both Makefile and CMake based builds
 
Note that he only referred to "such dependencies"; in context, that
refers only to the dependencies that result from #include directives. He
made no claim that all dependencies could be automated that way.
Secondly, he quite clearly indicated that gcc -MD was to be used with
Makefile or CMake based builds; he did not in any way suggest that it
replaced the need for a build system.
Juha Nieminen <nospam@thanks.invalid>: Sep 13 05:23AM

> Yes, it was sarcasm.
 
Well, good luck trying to get any more answers from me. I don't usually
placate to assholes.
Juha Nieminen <nospam@thanks.invalid>: Sep 13 05:28AM


> Unoptimised code being too slow or too big to run on the target is
> common in real-time or pretend (i.e. Linux) real-time systems. Getting
> more comprehensive error checking is another.
 
Rather obviously you need to test that your program works when compiled
with optimizations (there are situations where bugs manifest themselves
only when optimizations are turned on).
 
But that wasn't my point. My point is that during development, when you are
writing, testing and debugging your code, you rarely need to turn on
optimizations. You can, of course (especially if it makes little
difference in compilation speed), but at the point where -O0 takes
10 seconds to compile and -O3 takes 1 minute to compile, you might
reconsider.
David Brown <david.brown@hesbynett.no>: Sep 13 08:38AM +0200

On 12/09/2021 20:29, Michael S wrote:
 
> I suppose, you are talking about TI compilers.
> IIRC, in their old docs (around 1998 to 2002) it was documented in relatively clear way.
> But it was quite a long time ago so it's possible that I misremember.
 
I was omitting the names, to protect the guilty. Yes, it was TI
compilers. And not just such old ones either, or just one target device
- I have seen the same "feature" on wildly separate tools for at least
two TI device architectures. Neither was well documented, certainly not
with the flashing red warning lamps you would expect for such a
pointless, unexpected and critical deviation from standard C.
David Brown <david.brown@hesbynett.no>: Sep 13 08:48AM +0200

On 12/09/2021 22:18, Vir Campestris wrote:
> that a bit of code really ought to be a function because it's
> duplicated. Which means you end up in the same bit of machine code from
> two different source locations.
 
The relationship between source code and object code sections is far
looser with modern optimising compilers, and C++ rather than C. It does
pose challenges for debugging sometimes, but it is worth the cost (IMHO)
for being able to write better code with fewer bugs in the first place!
 
 
> This is especially fun when looking at post-mortem dump files of some
> code somebody else wrote.
 
Debugging someone else's code is always horrible...
 
>> compiler-made-bad-code bug in my entire career. UB resulting in different
> behaviour from bad source is much more common. And if you want to be
> sure about that you need to debug with the target optimisation level.
 
I have hit a few bugs in compilers over the years. I've occasionally
had to use compilers that had a fair number of bugs, which can be an
interesting experience, and I've even found bugs in gcc on occasion. I
agree with you that you need to do your debugging and testing at the
target optimisation level (with occasional variations during
bug-hunting, depending on the kind of bug). And when the compiler bug
involves ordering of instructions in connection with interrupt
disabling, you need to be examining the assembly code when fully
optimising - there are no alternatives (certainly not testing).
David Brown <david.brown@hesbynett.no>: Sep 13 09:23AM +0200

On 13/09/2021 07:28, Juha Nieminen wrote:
> difference in compilation speed), but at the point where -O0 takes
> 10 seconds to compile and -O3 takes 1 minute to compile, you might
> reconsider.
 
Such time differences are not impossible, I suppose, but very rare.
 
It is not often that "gcc -O3" is a good choice - the code produced is
seldom faster, and regularly slower, than "gcc -O2". It is usually only
worth using if you have code that deals with large amounts of data
where auto-vectorisation helps. And even then, most people use it
incorrectly - without additional flags to specify the target processor
(or "-fmarch=native") you miss out on much of the possibilities.
Otherwise you risk slowdowns due to the significantly larger code and
higher cache usage, rather than speedups due to loop unrolling and the
like. I'd only use -O3 for code that I have measured and tested to show
that it is actually faster than using -O2. (And even then, I'd probably
specify individual extra optimisation flags as pragmas in the code that
benefits from them.)
 
 
Just for fun, I've tested build times for a project I am working on.
The complete build for the 140 object files (including dependency
generation and linking, though compilation is the major effort) with
different -O levels are:
 
-O level    real time    user time
=========================================
    0         20.8 s      1m 32 s
    1         23.1 s      1m 43 s
    2         24.3 s      1m 48 s
    3         25.5 s      1m 52 s
 
 
So going from -O2 down to -O0 might save about 20% of the compilation
time - giving rubbish code, pointless testing, impenetrable assembly
listings, and vastly weaker static analysis.
 
Now, there might be unusual cases where there are extreme timing
differences, but I believe these figures are not atypical. If you have
particularly large source code files - as you might, for generated
source code, simulation code, and other niche uses - then
inter-procedural optimisations with -O1 and above will definitely slow
you down. In such cases, I'd add pragmas to disable the optimisations
that scale badly with code size, and then use -O2.
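 
To illustrate the idea (GCC-specific; the particular options named here
are examples chosen for illustration, not measured recommendations):
 
    #pragma GCC push_options
    #pragma GCC optimize ("no-ipa-cp", "no-inline-functions")
    // e.g. a huge machine-generated function that scales badly at -O2:
    void huge_generated_function()
    {
        /* ... thousands of generated lines ... */
    }
    #pragma GCC pop_options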
Bart <bc@freeuk.com>: Sep 13 11:03AM +0100

On 13/09/2021 01:51, Ian Collins wrote:
 
>> Most of the stuff I do is not helped with unit tests.
 
> So it doesn't have any functions that can be tested?  That's a new one
> on me!
 
You mean some tiny leaf function that has a well-defined task with a
known range of inputs? That would be in the minority.
 
The problems I have to deal with are several levels above that and
involve the bigger picture. A flaw in an approach might be discovered
that means changes to global data structures and new or rewritten functions.
 
Also, if you're developing languages then you might have multiple sets
of source code where the problem might lie.
 
One extreme example, a few years back, involved six distinct sets of
source! (Sources of my systems language; revised sources I was testing;
sources of my C compiler that I was testing the revised compiler on; the
sources of Seed7 I was testing that rebuilt C compiler on; a test
program Bas7.sd7 (a Basic interpreter) that I ran rebuilt Seed7 on; and
a test program test.bas to try out.)
 
Maybe unit tests could have applied to one of those sources, such as
that C compiler, which might have inherent bugs exposed by the revised
implementation language.
 
But how effective are those in such a product? gcc has been going since
1987; there are still active bugs!
 
>> you haven't anticipated and that can't be enumerated.
 
> If you find a problem, add a test for it to prove that you have fixed it
> and to make sure it does not recur.
 
My 'unit tests' for language products consist of running non-trivial
applications to see if they still work.
 
Or running multiple generations of a compiler.
 
So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
and the result can build a range of C programs, if I take that tcc.exe
and build Tiny C with it, that new tcc2.exe doesn't work (error in the
generated binaries).
 
So where do you start with that?
 
(The problem is still there. I don't have 100% confidence in bcc's code
generator; so I will replace it at some point with a new one, and try
again.)
HorseyWorsey@the_stables.com: Sep 13 10:52AM

On Sun, 12 Sep 2021 09:48:22 -0700 (PDT)
 
>Are you saying that all of us have to teach ourselves cmake even despite the
>fact that writing makefiles by hand + utilizing .d files generated by compiles
>served our needs rather well for last 10-20-30 years?
 
No.
HorseyWorsey@the_stables.com: Sep 13 10:55AM

On Sun, 12 Sep 2021 18:45:07 +0200
 
>You had no point - you failed to read or understand someone's post,
>thought it would make you look smart or cool to mock them, and have been
>digging yourself deeper in a hole ever since.
 
If writing that load of BS makes you feel better then do carry on.
 
>will deny that. Other people have, which is the beauty of Usenet - even
>the worst posters can sometimes inspire a thread that is helpful or
>interesting to others.
 
I've learned how you move the goalposts when you're losing the argument.
"Cmake is better than makefiles which are ancient and useless"
"Oh ok, makefiles are fine. You can do everything with dependency files and
don't need to write the makefile yourself"
"Oh ok, you can't do everything with dependency files and do need to write some
of the makefile yourself".
 
Etc etc etc.
 
>> to continue with this.
 
>I suppose that is as close to an apology and admission of error as we
>will ever get.
 
No apology and I'm not wrong.
HorseyWorsey@the_stables.com: Sep 13 10:56AM

On Mon, 13 Sep 2021 05:23:16 -0000 (UTC)
>> Yes, it was sarcasm.
 
>Well, good luck trying to get any more answers from me. I don't usually
>placate to assholes.
 
I'm not interested what you do with your donkey or its hole.
David Brown <david.brown@hesbynett.no>: Sep 13 02:15PM +0200

> "Oh ok, you can't do everything with dependency files and do need to write some
> of the makefile yourself".
 
> Etc etc etc.
 
I'd love to see a reference where I mention CMake at all. It's not a
tool I have ever used. As for other people's posts, can you give any
reference to posts that suggest that "gcc -MD" is anything other than an
aid to generating dependency information that can be used by a build
system (make, ninja, presumably CMake, and no doubt many other systems)?
 
No, I am confident that you cannot.
 
You have misunderstood and misrepresented others all along. It's fine
to misunderstand or misread - it happens to everyone at times. Mocking,
lying, sarcasm to try to hide your mistakes when they are pointed out to
you - that is much less fine.
scott@slp53.sl.home (Scott Lurndal): Sep 13 02:55PM

>> Yes, it was sarcasm.
 
>Well, good luck trying to get any more answers from me. I don't usually
>placate to assholes.
 
Come now, its nom-de-post is sufficient to make one wary of its goals.
scott@slp53.sl.home (Scott Lurndal): Sep 13 03:09PM

>On 12/09/2021 22:18, Vir Campestris wrote:
 
>> This is especially fun when looking at post-mortem dump files of some
>> code somebody else wrote.
 
>Debugging someone else's code is always horrible...
 
I've been doing that for forty years now; it's more likely than
not that any programmer in a large organization or working on a
legacy codebase will have to debug someone else's code.
 
Frankly, it's not that difficult - although it can, at times,
be a time-consuming slog - there was a time when a custom X.509
certificate management system (running on Solaris) would very
occasionally SIGSEGV. This was a large codebase that includes
several cryptographic libraries (bsafe, libssl, etc.). I finally
tracked it down to a linkage issue - different parts of the
application had linked against different versions of libssl and
would call a setup function from one and try to use the resulting
data structure in another.
 
 
>> behaviour from bad source is much more common. And if you want to be
>> sure about that you need to debug with the target optimisation level.
 
>I have hit a few bugs in compilers over the years.
 
My first was back in the Portable C Compiler days - Motorola
had ported it to the M88100 processor family and we were running
cfront to compile C++ code and compiled the resulting C with
the Motorola PCC port (for which we had the source).
 
cfront generates expressions with up to a dozen or more comma
operators - when PCC was processing the parse tree for those
statements, it would run out of temporary registers and the codegen
phase would fail. I implemented a Sethi-Ullman algorithm that,
given a tree, would compute the number of temps required and added
code to spill the registers to the stack if needed.
 
More recently, we've run into several GCC bugs in the ARM64 world;
we have one of the GCC engineers on staff who generates
the fixes and pushes them upstream. The latest issue was related
to bitfields in structs when compiled for big-endian on ARM64.
HorseyWorsey@the_stables.com: Sep 13 03:32PM

On Mon, 13 Sep 2021 14:15:19 +0200
>to misunderstand or misread - it happens to everyone at times. Mocking,
>lying, sarcasm to try to hide your mistakes when they are pointed out to
>you - that is much less fine.
 
The usual response from people on this group: pretend something wasn't said
when it becomes inconvenient.
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 13 12:07PM -0400

> On Mon, 13 Sep 2021 14:15:19 +0200
> David Brown <david.brown@hesbynett.no> wrote:
...
>> reference to posts that suggest that "gcc -MD" is anything other than an
>> aid to generating dependency information that can be used by a build
>> system (make, ninja, presumably CMake, and no doubt many other systems)?
...
> The usual response from people on this group, pretend something wasn't said
> when it becomes inconvenient.
 
All you have to do to prove that it was said would be to cite the
relevant message by author, date, and time, and to quote the relevant
text. Of course, since you misunderstood that text the first time, when
people point out that fact, it might seem to you they are merely
engaging in a cover-up. There's not much that anyone else can do about
such self-delusion.
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 12 11:13PM -0400

On 9/12/21 9:44 AM, Chris Vine wrote:
> On Sun, 12 Sep 2021 09:20:23 -0000 (UTC)
> Juha Nieminen <nospam@thanks.invalid> wrote:
...
 
> For functions, it does more than affecting naming. The language linkage
> of a function determines two things: the function's name (name mangling)
> and the function's type (calling convention).
 
Furthermore, and not widely appreciated, those two aspects are separable
by using a typedef for a function's type. The language linkage of the
typedef determines the calling convention, while the language linkage of
the function itself determines the name mangling.
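 
A minimal sketch of that separation (on mainstream ABIs the distinction
makes no practical difference, as other posts in this thread note):
 
    // The typedef has C language linkage, so the *type* of app::cmp (and
    // hence its calling convention) is that of a C function; the *name*
    // app::cmp still gets C++ mangling and can live in a namespace.
    extern "C" {
        typedef int c_compare(const void*, const void*);
    }
 
    namespace app {
        c_compare cmp;   // declaration through the C-linkage typedef
    }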
Juha Nieminen <nospam@thanks.invalid>: Sep 13 05:34AM

> gcc allows you to pass a function pointer
> with C++ language linkage to a C function expecting a C function
> pointer, but strictly speaking it is undefined behaviour.
 
Does that mean that if you are using a C library and you give it a
function pointer (eg. a callback function), you ought to make that
function extern "C"?
 
How about std::qsort()? Does its comparator function pointer need to be
declared extern "C"? (Or does the standard require std::qsort() to be
usable with C++ function pointers directly?)
Paavo Helde <myfirstname@osa.pri.ee>: Sep 13 09:55AM +0300

13.09.2021 08:34 Juha Nieminen kirjutas:
 
> Does that mean that if you are using a C library and you give it a
> function pointer (eg. a callback function), you ought to make that
> function extern "C"?
 
Strictly speaking, yes, otherwise the calling conventions might be mixed
up. But mainstream C++ implementations use the same calling
conventions for C and C++ code, so they don't mind.
 
 
> How about std::qsort()? Does its comparator function pointer need to be
> declared extern "C"? (Or does the standard require std::qsort() to be
> usable with C++ function pointers directly?)
 
Indeed, the C++ standard requires qsort to accept both C and C++
callbacks. This is a copy-paste from N4842:
 
void qsort(void* base, size_t nmemb, size_t size, c-compare-pred * compar);
void qsort(void* base, size_t nmemb, size_t size, compare-pred * compar);
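 
So, as a hedged sketch (on implementations that ignore linkage in the
function type, the two overloads collapse into one), both of these
comparator flavours are meant to be accepted:
 
    #include <cstdlib>
 
    extern "C" int cmp_c(const void* a, const void* b)   // C linkage
    {
        int x = *static_cast<const int*>(a);
        int y = *static_cast<const int*>(b);
        return (x > y) - (x < y);
    }
 
    int cmp_cpp(const void* a, const void* b)            // C++ linkage
    {
        int x = *static_cast<const int*>(a);
        int y = *static_cast<const int*>(b);
        return (x > y) - (x < y);
    }
 
    int main()
    {
        int v[] = {3, 1, 2};
        std::qsort(v, 3, sizeof v[0], cmp_c);    // C-linkage overload
        std::qsort(v, 3, sizeof v[0], cmp_cpp);  // C++-linkage overload
    }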
David Brown <david.brown@hesbynett.no>: Sep 13 09:30AM +0200

On 13/09/2021 05:13, James Kuyper wrote:
> by using a typedef for a function's type. The language linkage of the
> typedef determines the calling convention, while the language linkage of
> the function itself determines the name mangling.
 
That is not something I knew.
 
It does not affect my own coding, as I know the ABIs are the same for C
and C++ and everything is handled by the same compiler. But it might be
relevant in odd cases.
 
One thing that might be relevant in the future is some of the ideas
being considered for changing exceptions into a simpler and more
deterministic system, with far lower overheads, clearer code and better
checking by making them more static and less run-time. Some of the
papers around this have suggested using additional registers or
processor flags as cheap ways to return exception information to the
caller - and that might mean a change to the C++ ABI compared to the C
ABI on the same platform, even for otherwise simple function parameters.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
