Tuesday, September 14, 2021

Digest for comp.lang.c++@googlegroups.com - 12 updates in 3 topics

Nikki Locke <nikki@trumphurst.com>: Sep 14 10:23PM

Available C++ Libraries FAQ
 
URL: http://www.trumphurst.com/cpplibs/
 
This is a searchable list of libraries and utilities (both free
and commercial) available to C++ programmers.
 
If you know of a library which is not in the list, why not fill
in the form at http://www.trumphurst.com/cpplibs/cppsub.php
 
Maintainer: Nikki Locke - if you wish to contact me, please use the form on the website.
Ian Collins <ian-news@hotmail.com>: Sep 14 12:05PM +1200

On 13/09/2021 22:03, Bart wrote:
>> one me!
 
> You mean some tiny leaf function that has a well-defined task with a
> known range of inputs? That would be in the minority.
 
Doesn't every bit of well designed code have functions, leaf or
otherwise, with well defined tasks?
 
> that means changes to global data structures and new or rewritten functions.
 
> Also, if you're developing languages then you might have multiple sets
> of source code where the problem might lie.
 
So just use those for higher level testing.
 
<snip>
 
> Maybe unit tests could have applied to one of those sources, such as
> that C compiler, which might have inherent bugs exposed by the revised
> implementation language.
 
Unit tests apply to small "units" of code, such as classes or functions,
not whole products.
 
>> and to make sure it does not recur.
 
> My 'unit tests' for language products consist of running non-trivial
> applications to see if they still work.
 
Those aren't by any definition unit tests. They are what would normally
be known as acceptance tests.
 
> and build Tiny C with it, that new tcc2.exe doesn't work (error in the
> generated binaries).
 
> So where do you start with that?
 
By testing the logic in your code?
 
--
Ian.
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 13 10:18PM -0400

> On Mon, 13 Sep 2021 14:15:19 +0200
> David Brown <david.brown@hesbynett.no> wrote:
...
>> reference to posts that suggest that "gcc -MD" is anything other than an
>> aid to generating dependency information that can be used by a build
>> system (make, ninja, presumably CMake, and no doubt many other systems)?
...
> The usual response from people on this group, pretend something wasn't said
> when it becomes inconvenient.
 
All you have to do to prove that it was said would be to cite the
relevant message by author, date, and time, and to quote the relevant
text. Of course, since you misunderstood that text the first time, when
people point out that fact, it might seem to you they are merely
engaging in a cover-up. There's not much that anyone else can do about
such self-delusion.
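For reference, the "gcc -MD" usage under discussion emits dependency fragments for a build system to consume; a typical GNU make pattern (the variable names here are illustrative) looks like:

```make
# -MD writes a .d file listing the headers each object depends on;
# -MP adds phony targets so a deleted header doesn't break the build.
%.o: %.c
	$(CC) -MD -MP -c $< -o $@

# Pull the generated fragments into the makefile; missing ones are fine.
-include $(OBJS:.o=.d)
```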
David Brown <david.brown@hesbynett.no>: Sep 14 09:12AM +0200

On 13/09/2021 23:42, Scott Lurndal wrote:
>> hash or tag?
 
> Maybe. If you have the exact same compiler, assembler and linker. Maybe.
 
> And not if the linker uses any form of address space randomization.
 
All these things vary by project. For the kinds of things I do, I make
a point of archiving the toolchain itself (though not in a git
repository). Reproducible builds are important for me. Other kinds of
projects have different setups and are perhaps built using a variety of
different tools.
 
So while /I/ keep track of released binaries - and when re-opening old
projects, I have something to compare so that I can check the re-created
build environment - it is not something everyone needs or does.
 
Another advantage I see of binaries in the repositories is that it makes
life easier for people involved in testing or other work - they can pull
out the latest binaries without having to install the whole toolchain
themselves.
Bart <bc@freeuk.com>: Sep 14 10:28AM +0100

On 14/09/2021 01:05, Ian Collins wrote:
>> generated binaries).
 
>> So where do you start with that?
 
> By testing the logic in your code?
 
Which bit of logic out of 10s of 1000s of lines? The actual bug might be
in bcc, or maybe in mm.exe which generated the code of bcc.exe, or it
might be a latent bug in tcc.exe (or maybe yet another quirk of C which
I wasn't aware of), and I'm not going to start delving into /its/ 25K
lines of C code, because the next program might have 250K lines or 2.5M.
 
I get the impression from you that, with a product like a compiler, if
it passes all its unit tests, then it is unnecessary to test it further
with any actual applications! Just ship it immediately.
 
In actuality, you will see new bugs you didn't anticipate. The bug may
only manifest itself in a second or subsequent generation. Or the
application built with your compiler may only go wrong with certain inputs.
scott@slp53.sl.home (Scott Lurndal): Sep 14 03:01PM

>repository). Reproducible builds are important for me. Other kinds of
>projects have different setups and are perhaps build using a variety of
>different tools.
 
In our case, the debuginfo files (DWARF data extracted from the ELF
prior to shipping to customers) are saved for each software 'drop'
to a customer. Much easier to deal with than finding the particular
version of the toolset used to build the product.
David Brown <david.brown@hesbynett.no>: Sep 14 05:07PM +0200

On 14/09/2021 17:01, Scott Lurndal wrote:
> prior to shipping to customers) are saved for each software 'drop'
> to a customer. Much easier to deal with than finding the particular
> version of the toolset used to build the product.
 
Debug information is never reproducible - there are always paths,
timestamps, etc., that differ. But none of our customers are interested
in debug information or even stripped elf files - it's the .bin or .hex
images that must match entirely.
 
The toolset version is in the makefiles - when
"/opt/gcc-arm-none-eabi-10-2020-q4-major/bin/" is explicit in the
makefile, there's never any doubt about the toolchain.
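The pinning described can be sketched as a makefile fragment (the path is the one from the post; the variable names are illustrative):

```make
# Spell out the exact toolchain release so every build, including a
# rebuild years later, uses the same compiler binaries.
TOOLCHAIN := /opt/gcc-arm-none-eabi-10-2020-q4-major/bin
CC := $(TOOLCHAIN)/arm-none-eabi-gcc
```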
Ian Collins <ian-news@hotmail.com>: Sep 15 08:17AM +1200

On 14/09/2021 21:28, Bart wrote:
 
>>> So where do you start with that?
 
>> By testing the logic in your code?
 
> Which bit of logic out of 10s of 1000s of lines?
 
If you've never had tests, adding a full set after the fact will be too
painful. What you can do is add tests for code you are about to change
in order to ensure you understand what the code currently does and that
your change hasn't broken anything.
 
<snip>
 
 
> I get the impression from you that, with a product like a compiler, if
> it passes all its unit tests, then it is unnecessary to test it further
> with any actual applications! Just ship it immediately.
 
Where did you get that strange idea? Not from me. There should always
be layers of testing.
 
--
Ian.
Vir Campestris <vir.campestris@invalid.invalid>: Sep 14 09:42PM +0100

On 13/09/2021 22:24, Ian Collins wrote:
>> dumps.
 
> But they can be recreated from the source and a given source control
> hash or tag?
 
In theory, yes.
 
In practice I've never had to try. But you can think of the binaries as
a cache.
 
Andy
Tim Woodall <news001@woodall.me.uk>: Sep 14 07:49AM

> modules. Perfect time for you to read this guide and benefit from the
> massive compilation speedups. This article reflects the state as of
> September 2021."
 
Thanks for the link.
 
I have a couple of quick questions.
 
In the "module" example we have
#include <iostream>;
 
Is the semicolon required here? Subtle, hard to spot change if it is!
 
 
This statement:
 
Unexported symbols have private visibility, which is the opposite of
normal C++ behaviour, where only symbols within anonymous namespaces are
unexported.
 
 
Has the export of symbols in anonymous namespaces changed? Once upon a
time, symbols in anonymous namespaces were exported, but with a
mangled name that couldn't be deduced elsewhere.
 
 
I like the "cleanness" of anonymous namespaces over static. But I've
seen places where the sheer volume of exported-but-unused symbols
causes problems for the linker, and especially for tools that, for
example, attempt to check that no circular dependencies are being
introduced.
Anand Hariharan <mailto.anand.hariharan@gmail.com>: Sep 14 07:22AM -0700

On Tuesday, September 14, 2021 at 2:50:20 AM UTC-5, Tim Woodall wrote:
 
> In the "module" example we have
> #include <iostream>;
 
> Is the semicolon required here? Subtle, hard to spot change if it is!
 
The code in the page was
 
import <iostream>;
 
Code uses 'import' not 'include'; also the '#' is omitted. In other words, no preprocessor at play.
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Sep 14 09:55AM -0700


> The code in the page was
 
> import <iostream>;
 
> Code uses 'import' not 'include'; also the '#' is omitted. In other words, no preprocessor at play.
 
And to answer the question, yes, the semicolon is required.
 
From the N4885 draft:
 
module-declaration :
export-keyword[opt] module-keyword module-name module-partition[opt] attribute-specifier-seq[opt] ;
 
module-name :
module-name-qualifier[opt] identifier
 
module-partition :
: module-name-qualifier[opt] identifier
 
module-name-qualifier :
identifier .
module-name-qualifier identifier .
 
("module-keyword" is "module". The "module", "import", and "export"
keywords are treated specially.)
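Putting the two answers together, the example from the article would read as below. (This needs a compiler with header-unit support, e.g. a recent g++ invoked with something like -std=c++20 -fmodules-ts, so it is shown only as a sketch.)

```cpp
import <iostream>;   // 'import', no '#', and the trailing ';' is required

int main() {
    std::cout << "hello from a header unit\n";
}
```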
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips
void Void(void) { Void(); } /* The recursive call of the void */
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
