Friday, October 9, 2020

Digest for comp.lang.c++@googlegroups.com - 10 updates in 3 topics

Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Oct 09 10:23PM +0100

neoGFX (C++ app/game engine) drag drop support (windows explorer file drop):
 
https://www.youtube.com/watch?v=05HyiIcLAfM&feature=youtu.be
 
That is all.
 
/Flibble
 
--
¬
Jorgen Grahn <grahn+nntp@snipabacken.se>: Oct 09 06:11AM

On Thu, 2020-10-08, Juha Nieminen wrote:
> exist. I'm sure that if this were a very rare event, most programs
> wouldn't even bother doing that, and would happily misbehave if it
> ever happened.)
 
Also, if programmers were really interested in handling file I/O
errors, there would be well-known test tools which simulated such
errors. (Perhaps there /are/ such tools, and I haven't been
interested enough to look them up.)
 
Another area where I/O errors are important, and also more likely, is
network I/O. There you can use e.g. the Linux Traffic Control and
firewall for some of the tests.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Jorgen Grahn <grahn+nntp@snipabacken.se>: Oct 09 06:22AM

On Thu, 2020-10-08, Öö Tiib wrote:
> worse printf or << plus point that noobs don't deserve anything
> better than we had for decades. I think Neanderthals went extinct
> because of such stagnated view.
 
Partly I agree. Comp.lang.c++ hasn't been very interesting for
decades, and discussions tend to gravitate to a few, old, boring
topics over and over again.
 
Partly I think it /is/ hard to successfully reinvent something that
already exists: it needs to be /a lot/ better to be worth it.
 
As we see in this thread, not even ostream formatting has completely won
out over printf. I can imagine sitting in a future project that uses
printf, ostream formatting, Boost.format (in several flavors) and this
one, all at once. I don't look forward to that.
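 
Just to illustrate the coexistence (a minimal sketch, assuming a C++20
compiler with <format>), here is the same output line written in three of
those styles side by side:

#include <cstdio>
#include <format>
#include <iostream>

int main()
{
    int errors = 3;
    std::printf("Found %d errors\n", errors);               // C stdio
    std::cout << "Found " << errors << " errors\n";         // iostreams
    std::cout << std::format("Found {} errors\n", errors);  // C++20 <format>
}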
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Juha Nieminen <nospam@thanks.invalid>: Oct 09 06:32AM

> an allocation fails, your program is going to hit an OS-detectable fault
> as it tries to access non-existent memory. A clean OS-controlled crash
> is fine in those truly exceptional circumstances for many programs.
 
When e.g. a video game crashes, one has to wonder how often that is because
the developers didn't bother with error handling and just allowed the
program to go haywire in certain error situations.
Juha Nieminen <nospam@thanks.invalid>: Oct 09 06:36AM

> errors, there would be well-known test tools which simulated such
> errors. (Perhaps there /are/ such tools, and I haven't been
> interested enough to look them up.)
 
Well, one of the core ideas of TDD is that file streams can be mocked
up and the code tested for (simulated) file I/O errors using those.
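 
A minimal sketch of that idea (the function and names are invented for
illustration): the code under test takes a std::istream&, so a unit test
can hand it an in-memory stream with an error state forced on it, and no
real file is needed to exercise the failure path.

#include <cassert>
#include <cstddef>
#include <istream>
#include <sstream>
#include <string>

// Code under test: counts lines and reports read errors instead of hiding them.
bool count_lines(std::istream& in, std::size_t& count)
{
    count = 0;
    std::string line;
    while (std::getline(in, line))
        ++count;
    return in.eof();   // true only if we stopped at end of input, not on an error
}

int main()
{
    std::istringstream good("one\ntwo\n");
    std::size_t n = 0;
    assert(count_lines(good, n) && n == 2);

    std::istringstream bad("one\ntwo\n");
    bad.setstate(std::ios::badbit);   // simulate a hard I/O error
    assert(!count_lines(bad, n));     // the error-handling path is exercised
}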
 
> Another area where I/O errors are important, and also more likely, is
> network I/O. There you can use e.g. the Linux Traffic Control and
> firewall for some of the tests.
 
Network I/O may be another example where proper error handling is more
common, but only because such errors are likewise more common (e.g. no
network connection, etc.).
David Brown <david.brown@hesbynett.no>: Oct 09 10:53AM +0200

On 09/10/2020 08:32, Juha Nieminen wrote:
 
> When eg. a video game crashes, one has to wonder how often that is because
> the developers didn't bother with error handling and just allow the
> program to go haywire in certain error situations.
 
There is always a cost-benefit trade-off involved. Checking for errors,
and handling them, has many costs. Failing to check for errors that
occur also (obviously) has many costs. Did the video game developers
feel that the error would be so rare that it is better to have an
occasional crash, than to make the game slower for everyone else? Did
the managers feel it was better to release the game earlier than to take
the time to write code that handles the situation? Did some "check
everything" PHB insist that the error handling be written, but no one
tested the handling code as the error didn't occur in the lab, and the
handling code had bugs? Did overworked coders skip the error handling
to meet their deadlines? Yes, you can wonder - but there are /lots/ of
possible explanations.
 
(Of course, by far the most likely cause of a crash is a bug in the code
that no one spotted, rather than the result of any deliberate decision
or lack of error handling.)
Ian Collins <ian-news@hotmail.com>: Oct 10 07:44AM +1300

On 09/10/2020 19:11, Jorgen Grahn wrote:
> errors, there would be well-known test tools which simulated such
> errors. (Perhaps there /are/ such tools, and I haven't been
> interested enough to look them up.)
 
We use a C mocking library I wrote to mock I/O calls; it can provide
test data and create any possible error condition. Unit tests shouldn't
use real files or devices.
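 
That library's API isn't shown in the post, but as a rough sketch of the
same idea in plain C++ (all names invented, and not that library), the
test substitutes its own read implementation and forces whatever error it
wants:

#include <cassert>
#include <cerrno>
#include <unistd.h>

// Seam: production code calls read() through this pointer so a test can
// swap in its own version.  (A simplification -- a real mocking library
// typically interposes at link time so production code needs no changes.)
using read_fn = ssize_t (*)(int, void*, size_t);
read_fn do_read = ::read;

// Code under test: must report the error rather than pretend it read data.
bool read_header(int fd, char (&buf)[16])
{
    return do_read(fd, buf, sizeof buf) == static_cast<ssize_t>(sizeof buf);
}

// Mock that simulates a failing device.
ssize_t failing_read(int, void*, size_t)
{
    errno = EIO;
    return -1;
}

int main()
{
    do_read = failing_read;        // no real file or device involved
    char buf[16];
    assert(!read_header(-1, buf)); // error path exercised deterministically
}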
 
> Another area where I/O errors are important, and also more likely, is
> network I/O. There you can use e.g. the Linux Traffic Control and
> firewall for some of the tests.
 
Or simply mock the calls.
 
--
Ian.
scott@slp53.sl.home (Scott Lurndal): Oct 09 07:54PM

>> network I/O. There you can use e.g. the Linux Traffic Control and
>> firewall for some of the tests.
 
>Or simply mock the calls.
 
Or actually have clients that generate the traffic for testing. Or use
PCAP files to inject traffic (e.g. into a TUN/TAP interface). Or use
commercial traffic generators (e.g. IXIA, now Keysight).
 
We use all of that to test software models (and hardware) for 5G. From generating
RFOE packets in the radio head, to processing and converting them to IP packets in
the MAC, to routing packets through 100Gb layer-2 switching and packet-processing
(e.g. IPsec) pipelines, the entire path is modeled for both normal and exceptional
conditions.
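 
A bare-bones sketch of the PCAP-into-TAP replay idea mentioned above
(Linux-specific, libpcap required, link with -lpcap; error handling and
interface setup are trimmed, and the interface name is supplied by the
user):

#include <pcap/pcap.h>
#include <linux/if_tun.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

// Attach to a TAP interface (created beforehand, e.g. with `ip tuntap add`).
static int open_tap(const char* name)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) return -1;
    ifreq ifr{};
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;        // raw Ethernet frames
    std::strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { close(fd); return -1; }
    return fd;
}

int main(int argc, char** argv)
{
    if (argc != 3) { std::fprintf(stderr, "usage: %s file.pcap tapN\n", argv[0]); return 1; }

    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t* pc = pcap_open_offline(argv[1], errbuf);
    int tap = open_tap(argv[2]);
    if (!pc || tap < 0) { std::fprintf(stderr, "setup failed\n"); return 1; }

    pcap_pkthdr* hdr;
    const u_char* data;
    while (pcap_next_ex(pc, &hdr, &data) == 1)  // replay each captured frame
        (void)write(tap, data, hdr->caplen);

    pcap_close(pc);
    close(tap);
}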
 
Using public domain software as a proxy to determine that nobody does error handling
is unwise at best.
Ian Collins <ian-news@hotmail.com>: Oct 10 09:43AM +1300

On 10/10/2020 08:54, Scott Lurndal wrote:
> RFOE packets in the radio head to processing and converting to IP pkts in the MAC,
> to routing packets through 100Gb layer 2 switching and packet processing (e.g. IPsec)
> pipelines, the entire path is modeled for both normal and exceptional conditions.
 
Which is great for system testing; I was talking about unit testing
error handling (maybe I should have been clearer). Mocking is by far
the best way to generate and test the handling of error conditions.
 
--
Ian.
Juha Nieminen <nospam@thanks.invalid>: Oct 09 06:57AM

> I cannot see any motivation for modules apart of (fast compilation
> and hiding macros).
 
Many more modern programming languages (including the "C-like" or "C'ish"
languages) do not use header files to declare their public interfaces.
In essence, the public interface visible to other modules/compilation units
is kind of "generated automatically" from the source file itself.
 
While there are some situations where C style header files can be useful
and practical, in probably more than 99% of cases they are completely
extraneous, and there's no reason why a mechanism couldn't exist that
generates them automatically from the source file (not necessarily as a
literal file, but as some kind of metadata that acts the same way as a
properly-designed header file does).
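 
C++20 modules are a step in that direction: the exported interface comes
straight from the source, with no separate header to maintain. A tiny
sketch (file naming and build steps vary by compiler):

// math_utils.cppm -- a module interface unit
export module math_utils;

export int add(int a, int b) { return a + b; }  // part of the public interface

int helper(int x) { return x * 2; }             // not exported: invisible to importers

// main.cpp
import math_utils;

int main() { return add(2, 3) == 5 ? 0 : 1; }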
 
C style header files have a lot of problems.
 
For starters, many people use them wrongly (quite often beginners, but
sometimes even more experienced programmers). One could think, rather
elitistically, "who cares if somebody uses them in the wrong way? They
should just learn to use them properly!", but we shouldn't dismiss
language design decisions that naturally lead to better code and
naturally steer people away from bad code.
 
For example, it's quite a common misconception amongst beginners (in both
C and C++) that *all* type, function and class declarations used in a
source file should be put in the header file, as if the header file were
simply the place where declarations go. They fail to understand that a
header file is not "a file where all declarations go" but the public
interface of the module: the part that should be visible to other modules.
Thus it should be as minimalistic and clean as reasonable, and not contain
internal implementation details that aren't needed elsewhere.
 
Even many more experienced programmers haven't learned this and still
dump everything that the source file uses in the header file, with little
to no thought paid to actually designing a good public interface.
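 
As a small illustration (file and function names invented), only the
public interface belongs in the header; the helpers stay in the source
file:

// parser.h -- the public interface, and nothing else
#ifndef PARSER_H
#define PARSER_H
#include <string>
#include <vector>

std::vector<int> parse_numbers(const std::string& text);

#endif

// parser.cpp -- internal details never leak into the header
#include "parser.h"
#include <sstream>

namespace
{
    // Helper used only in this translation unit.
    int to_int(const std::string& s) { return std::stoi(s); }
}

std::vector<int> parse_numbers(const std::string& text)
{
    std::vector<int> result;
    std::istringstream in(text);
    std::string token;
    while (std::getline(in, token, ','))
        result.push_back(to_int(token));
    return result;
}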
 
Then there are, of course, the technical problems with header files, like
recursive inclusion, which is pretty much always an error, but whose cause
is usually very hard to track down because it produces really strange and
confusing errors about missing names that "quite clearly" shouldn't be
missing. Code that has been compiling just fine for months, and has not
been modified in any way, may suddenly, out of the blue, start giving
strange errors about missing names because of a change in a seemingly
unrelated header file somewhere else entirely.
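 
A contrived sketch of how that happens and of the usual fix (names
invented): a.h and b.h each need the other, the include guards break the
cycle at an arbitrary point, and whichever header is processed second sees
a name that hasn't been declared yet. A forward declaration in place of
one of the includes resolves it:

// a.h -- forward-declare B instead of including b.h
#ifndef A_H
#define A_H
class B;                    // enough for a pointer or reference member
class A { B* partner; };
#endif

// b.h -- may still include a.h, since it needs the full definition of A
#ifndef B_H
#define B_H
#include "a.h"
class B { A owner; };
#endif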
 
Include file dependency explosion is also a problem that very few C and
C++ programmers pay attention to, but which genuinely can cause enormously
longer compile times in large projects because of just one little change
in a single header file. There are many techniques that can significantly
reduce include dependency chains, but most programmers don't bother, and
many aren't even aware of the problem or of the techniques that alleviate
it.
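 
One such technique, as a hedged sketch (names invented): forward-declare
in the header and include or define the heavy dependency only in the
source file, so that changes to it no longer ripple into every client of
widget.h:

// widget.h -- clients never transitively pull in the Renderer definition
#ifndef WIDGET_H
#define WIDGET_H
#include <memory>

class Renderer;                          // forward declaration only

class Widget
{
public:
    Widget();
    ~Widget();                           // defined where Renderer is complete
    void draw();
private:
    std::unique_ptr<Renderer> renderer_; // a pointer needs no full definition
};

#endif

// widget.cpp -- the only file that needs the full Renderer type
#include "widget.h"

class Renderer { public: void render() {} };   // stand-in for a heavy header

Widget::Widget() : renderer_(std::make_unique<Renderer>()) {}
Widget::~Widget() = default;
void Widget::draw() { renderer_->render(); }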
