Friday, May 27, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

mrs@kithrup.com (Mike Stump): May 27 02:28AM

In article <ngnl8b$3kn$1@jstuckle.eternal-september.org>,
>required, it is the responsibility of the programmer.
 
>And it also will not generate code to check "int i = *pi;". That is
>also undefined behavior.
 
You two are amusing to watch. First, the statement:
 
Neither C nor C++ generates unnecessary code to check for
programming errors
 
is wrong. Witness the power of my fully operational compiler:
 
up:tmp mrs$ cat t.c
int *pi = 0;
 
int main() {
int i = *pi;
return i;
}
$ clang -O3 -Wall t.c -fsanitize=null -fsanitize-undefined-trap-on-error -S
$ cat t.s
.section __TEXT,__text,regular,pure_instructions
.macosx_version_min 10, 11
.globl _main
.align 4, 0x90
_main: ## @main
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp0:
.cfi_def_cfa_offset 16
Ltmp1:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp2:
.cfi_def_cfa_register %rbp
movq _pi(%rip), %rax
testq %rax, %rax
je LBB0_1
## BB#2:
movl (%rax), %eax
popq %rbp
retq
LBB0_1:
 
 
For those who can't read the generated code: that je above is exactly
the unnecessary code that checks for programming errors which Jerry
said didn't exist. I claim existence proof, QED.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 26 09:23PM -0700

Mike Stump wrote:
> exactly this unnecessary code that checks for programming
> errors that Jerry said didn't exist. I claim existence proof,
> QED.
 
Which chapter in the C or C++ standard documents discusses
sanitizer extensions? Or are they extensions from Kostya Serebryany's
team at Google that have nothing to do with implementing or supporting
the standards?
 
Best regards,
Rick C. Hodgin
mrs@kithrup.com (Mike Stump): May 27 05:50AM

In article <de6f4c61-fc6d-4c70-88f2-71cb95a6bcc7@googlegroups.com>,
>Which chapter in the C or C++ standard documents discusses sanitizer
>extensions?
 
It is covered here:
 
5 A conforming implementation executing a well-formed program shall
produce the same observable behavior as one of the possible execution
sequences of the corresponding instance of the abstract machine with
the same program and the same input. However, if any such execution
sequence contains an undefined operation, this International Standard
places no requirement on the implementation executing that program
with that input (not even with regard to operations preceding the
first undefined operation).
 
However, I like the footnote better:
 
5) This provision is sometimes called the "as-if" rule, because an
implementation is free to disregard any requirement of the Standard as
long as the result is as if the requirement had been obeyed, as far as
can be determined from the observable behavior of the program. For
instance, an actual implementation need not evaluate part of an
expression if it can deduce that its value is not used and that no side
effects affecting the observable behavior of the program are produced.
 
because it has the actual rule name, "as-if". The footnote is
non-normative.
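
For instance (my own toy example, not the standard's), a conforming
implementation is free to compile main below to a plain "return 0",
because the loop's result is never used and it has no observable side
effects:

/* toy illustration of the as-if rule */
static long busywork(void) {
    long sum = 0;
    for (long i = 0; i < 1000000; ++i)
        sum += i;            /* no observable side effects */
    return sum;
}

int main(void) {
    busywork();              /* result unused; the whole call may be elided */
    return 0;
}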
 
>Or are they extensions from Kostya Serebryany's team at Google
>that have nothing to do with implementing or supporting the
>standards?
 
They aren't required by the language standard, if that's the question.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 27 06:50AM -0700

On Friday, May 27, 2016 at 2:00:21 AM UTC-4, Mike Stump wrote:
> effects affecting the observable behavior of the program are produced.
 
> because it has the actual rule name, "as-if". The footnote is
> non-normative.
 
I believe your statement here amounts to what other posters in this thread
have already said: because it involves undefined behavior, anything is
possible.
 
> >that have nothing to do with implementing or supporting the
> >standards?
 
> They aren't required by the language standard, if that's the question.
 
However, I believe you are stretching it a bit in claiming that the
inclusion of sanitizers has anything to do with C/C++.
 
These are after-the-fact tools which were constructed to help find bugs.
They do not add anything to the language definition itself, but merely work
within it to look for those things they've been programmed to look for.
 
Similar sanitizers could be ported to other languages, though in many of
those cases the errors that are possible in C/C++ cannot occur, and as a
result such debug features are not needed because the type of errant code
they catch is impossible.
 
In addition, I'm currently working on a sanitizing debugger which is
generic and does not require special compilation options, only an
interfacing protocol which enables tracking of the malloc() family of
functions and then allows functions or address blocks to be examined
while running.
 
Within those blocks, the sanitizing debugger (called sdebug) decodes every
x86 instruction on the fly to see whether it addresses memory, and whether
that address falls outside every allocated range and outside any known
range of valid locations. If it does, sdebug calls a special function which
can break to the debugger, generate an output file, or whatever, and then
exit or return as needed.
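
As a rough sketch of the sort of interfacing protocol I have in mind
(illustrative only; the sd_* names and hooks here are hypothetical, not
sdebug's actual interface), the allocation tracking amounts to recording
each malloc()'d range and testing each decoded address against those
ranges:

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <map>

// Hypothetical sketch: track live heap blocks, validate decoded addresses.
static std::map<std::uintptr_t, std::size_t> g_allocs;   // base address -> size

void sd_on_bad_access(std::uintptr_t addr) {              // break, log, or exit here
    std::fprintf(stderr, "sdebug: suspect access at %p\n", (void*)addr);
}

void sd_track_alloc(void* p, std::size_t size) {          // call after malloc()/calloc()/realloc()
    g_allocs[(std::uintptr_t)p] = size;
}

void sd_track_free(void* p) {                             // call before free()
    g_allocs.erase((std::uintptr_t)p);
}

bool sd_address_ok(std::uintptr_t addr) {                 // inside any tracked block?
    auto it = g_allocs.upper_bound(addr);
    if (it == g_allocs.begin()) return false;
    --it;                                                  // nearest block at or below addr
    return addr < it->first + it->second;
}

void sd_on_decoded_access(std::uintptr_t addr) {           // call per decoded memory operand
    if (!sd_address_ok(addr))   // (a real tool would also allow stack/static ranges)
        sd_on_bad_access(addr);
}

int main() {
    void* p = std::malloc(64);
    sd_track_alloc(p, 64);
    sd_on_decoded_access((std::uintptr_t)p + 10);          // inside the block: silent
    sd_on_decoded_access((std::uintptr_t)p + 100);         // outside the block: reported
    sd_track_free(p);
    std::free(p);
}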
 
-----
Sanitizers are valuable tools, but they have nothing to do with the C/C++
languages themselves; it is only because of the nature of those languages
that sanitizers are needed there, and that is where they've been implemented.
 
Best regards,
Rick C. Hodgin
Jerry Stuckle <jstucklex@attglobal.net>: May 27 10:52AM -0400

On 5/26/2016 10:28 PM, Mike Stump wrote:
 
> For those that can't read the generated code that je above is exactly
> this unnecessary code that checks for programming errors that Jerry
> said didn't exist. I claim existence proof, QED.
 
Which is why people don't use your compiler. It creates a bunch of
unnecessary crap. But I guess you need it. Good programmers don't.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 27 08:04AM -0700

On Friday, May 27, 2016 at 10:52:06 AM UTC-4, Jerry Stuckle wrote:
> > said didn't exist. I claim existence proof, QED.
 
> Which is why people don't use your compiler. It creates a bunch of
> unnecessary crap. But I guess you need it. Good programmers don't.
 
GCC supports sanitizers too, though their development lags behind Clang's.
 
As of September 2015, it was reported:
 
http://blog.jetbrains.com/clion/2015/09/cpp-annotated-summer-edition/
 
World-wide, GCC is first with a 65% share of the entire C++ market. Clang
is second with 20%. In the Windows-only world, GCC has 36%, and Visual
C++ has 34%.
 
I find that Windows GCC statistic surprising, though I too use GCC on
nearly everything I develop in Windows; I use Visual C++ for all
development work, and GCC only for "GCC releases" of the same product.
 
Best regards,
Rick C. Hodgin
scott@slp53.sl.home (Scott Lurndal): May 27 03:46PM


>> $ clang -O3 -Wall t.c -fsanitize=null -fsanitize-undefined-trap-on-error -S
 
>Which is why people don't use your compiler. It creates a bunch of
>unnecessary crap. But I guess you need it. Good programmers don't.
 
Don't be silly. clang is widely used, particularly by Apple. In
addition, the soi-disant 'unnecessary crap' is optional, as noted
in the parameters specified to the compiler.
Jerry Stuckle <jstucklex@attglobal.net>: May 27 12:04PM -0400

On 5/27/2016 11:46 AM, Scott Lurndal wrote:
 
> Don't be silly. clang is widely used, particularly by Apple. In
> addition, the soi disant 'unnecessary crap' is optional, as noted
> in the parameters specified to the compiler.
 
Ah, so it's optional, then. Well, you can call debug code optional,
also. And you don't find it in release versions.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: May 27 06:18PM +0200

On 27/05/16 15:50, Rick C. Hodgin wrote:
 
> These are after-the-fact tools which were constructed to help find bugs,
> but do not add anything to the language definition itself, but merely work
> within it to look for those things they've been programmed to look for.
 
That is correct. Sanitizers like this are a debugging aid, not part of
the language. They are still very useful.
 
> in many of those cases such errors as are possible in C/C++ cannot occur,
> and as a result their debug features are not needed because the type of
> errant code they catch is impossible.
 
You don't get something for nothing here. Sure, there are some types of
C undefined behaviour that could easily be defined in another language
(such as making integers wrap around on overflow). But in a case like
checking for null pointers, you either have to have run-time checking
for null pointers and corresponding exceptions or other error
mechanisms, or you define a language that does not allow null pointers
to exist, and then lose the convenience of nullable pointers.
Somewhere along the line, you pay. It may be a different balance, and
for many modern languages there is an emphasis on reducing the cost in
development even if it means lower efficiency at run-time. And for many
purposes, this is a good trade-off.
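
To make that concrete (a toy sketch of mine, not any particular language's
mechanism), the "checked" alternative to C's raw dereference looks something
like this, and the branch is paid on every access unless the compiler can
prove the pointer non-null:

#include <stdexcept>

// Every access through checked_deref() pays for a test, in exchange for a
// defined error path instead of undefined behaviour.
int checked_deref(const int* p) {
    if (p == nullptr)
        throw std::runtime_error("null pointer dereference");
    return *p;
}

int main() {
    int x = 42;
    int a = checked_deref(&x);              // fine
    int b;
    try {
        b = checked_deref(nullptr);         // defined: throws rather than UB
    } catch (const std::runtime_error&) {
        b = -1;
    }
    return a + b;
}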
 
Rosario19 <Ros@invalid.invalid>: May 27 06:30PM +0200

> .cfi_def_cfa_register %rbp
> movq _pi(%rip), %rax
> testq %rax, %rax
 
"testq" wuold be rax&rax
so if (rax&rax)==0 goto LBB0_1

else return...
 
so the same if rax==0 goto LBB0_1
only more unreadable...
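
In C terms those three instructions amount to roughly this (the body of
LBB0_1 was cut off above, but with -fsanitize-undefined-trap-on-error it
is presumably a trap instruction):

/* rough C equivalent of the generated check */
int load(int* pi) {
    if (pi == 0)            /* testq %rax, %rax ; je LBB0_1 */
        __builtin_trap();   /* LBB0_1: the sanitizer's trap */
    return *pi;             /* movl (%rax), %eax            */
}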
 
scott@slp53.sl.home (Scott Lurndal): May 27 05:08PM

>> testq %rax, %rax
 
>"testq" wuold be rax&rax
>so if (rax&rax)==0 goto LBB0_1
 
http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 27 10:18AM -0700

On Friday, May 27, 2016 at 12:18:46 PM UTC-4, David Brown wrote:
> for many modern languages there is an emphasis on reducing the cost in
> development even if it means lower efficiency at run-time. And for many
> purposes, this is a good trade-off.
 
C/C++ is desirable when maximum speed is required. Most code in most
applications doesn't need to be that fast though, and could do with the
extra type safety another language would provide.
 
I think a hybrid approach is needed in software. I think C/C++ should
evolve into a language which provides tiers, where the central high-speed
algorithms can compile in tier-1 with all safeties removed, while tier-2
code would run with some safeties, and tier-3 would be entirely type-safe.
 
In this way, only that 5% of your program that accounts for 95% of its
performance slowdown would require the tier-1 code. Everything else
could be tier-2 at least.
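
Today's C++ already has this split in miniature (my own illustration, not
a CAlive feature): std::vector's operator[] is the unchecked "tier-1"
access and .at() is the checked one; the tier model would make that kind
of choice per block or per source file rather than per call:

#include <cstddef>
#include <cstdio>
#include <vector>

int sum_fast(const std::vector<int>& v) {       // "tier-1" style: no checks, UB if misused
    int s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v[i];                               // unchecked access
    return s;
}

int sum_checked(const std::vector<int>& v) {    // "tier-2/3" style: checked access
    int s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v.at(i);                            // throws std::out_of_range on a bad index
    return s;
}

int main() {
    std::vector<int> v{1, 2, 3};
    std::printf("%d %d\n", sum_fast(v), sum_checked(v));
}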
 
I will consider this for CAlive, though I've recently defined a similar
tier model for the type of code that can exist in a source file or block.
I call them "compile classes," but that name can be changed. It relates
to normal programs being class-1, which are standard flow control (no
goto, no &function addresses, etc.). Class-2 introduces those features,
and class-3 allows for a completely free-form ability to manipulate and
use everything the compiler knows about the compiled program:
 
https://groups.google.com/forum/#!topic/caliveprogramminglanguage/bOksHC_-OE4
 
This evolution of adding type safety based on the compile tier would go a
step or two further, though, tightening the language down significantly.
 
It would also allow code to be compiled using tier-3 constraints to help
track down those bugs which would cause issues in the tier-1 model, but
which under tier-2 and tier-3 safety would be caught and automatically
routed to a runtime handler.
 
Best regards,
Rick C. Hodgin
scott@slp53.sl.home (Scott Lurndal): May 27 01:07PM


>Not when it's the only thing running on the mainframe. That's why it's
>run overnight.
 
>A cross-compiler on a non-mainframe would take much longer.
 
30 years ago, maybe. Today? High-end Xeon systems are performance
competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_
now all Intel CPUs in place of custom CMOS logic.
Jerry Stuckle <jstucklex@attglobal.net>: May 27 10:58AM -0400

On 5/27/2016 9:07 AM, Scott Lurndal wrote:
 
> 30 years ago, maybe. Today? High-end xeon systems are performance
> competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_ now all
> intel CPU's in place of custom CMOS logic.
 
But that's not the advantage of mainframes. The mainframe's advantage is
super-fast I/O access to huge datasets through multiple simultaneous
channels. Your Xeon systems can't compete.
 
If there were no advantage to mainframes, why would companies spend
millions of dollars on them?
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
scott@slp53.sl.home (Scott Lurndal): May 27 03:51PM


>But that's not the advantage of mainframes. Mainframe's advantage is
>super fast I/O access to huge datasets through multiple simultaneous
>channels. Your xeon systems can't compete.
 
For a compile? Yes, the Xeon systems can compete. Likely outcompete.

And there are high-end Xeon systems with "super fast I/O access".

The SoC I work with has 48 cores, two 40Gbit/s network controllers, along
with 16 SATA controllers and 40 lanes of Gen 3 PCI Express.

IBM is still stuck on 8 Gbit/sec Fibre Channel.
David Brown <david.brown@hesbynett.no>: May 27 06:10PM +0200

On 27/05/16 16:58, Jerry Stuckle wrote:
 
> But that's not the advantage of mainframes. Mainframe's advantage is
> super fast I/O access to huge datasets through multiple simultaneous
> channels. Your xeon systems can't compete.
 
That may be true - but compilation, even of large programs, does not
need such I/O. If you are working with such large software and trying
to get rebuild times as short as possible, you would have all the source
and object files in RAM.
multiple cores and multiple machines, at least until the final link
stages. Multi-core x86 systems, or blade racks if you want more power,
will outperform mainframes in compile-speed-per-dollar by an order of
magnitude at least. Even with the most compute-optimised mainframes,
you are paying significantly for the reliability and other mainframe
features that are unnecessary on a build server.
 
 
> If there were no advantage to mainframes, why would companies spend
> millions of dollars on them?
 
Security, reliability, backwards compatibility, guarantees of long-term
availability of parts, massive virtualisation, etc. There are plenty of
reasons for using mainframes - computational speed, however, is
certainly not one of them.
Jerry Stuckle <jstucklex@attglobal.net>: May 27 12:13PM -0400

On 5/27/2016 11:51 AM, Scott Lurndal wrote:
>> super fast I/O access to huge datasets through multiple simultaneous
>> channels. Your xeon systems can't compete.
 
> For a compile? Yes, the xeon systems can compete. Likely outcompete.
 
Not a chance. Compiling is quite I/O intensive, especially when
performing parallel compilations.
 
> And there are high-end xeon systems with "super fast I/O access".
 
Not even close to mainframes. But I can see you've never worked on
mainframes.
 
> The SoC I work with has 48 cores, two 40Gbit/s network controllers, along
> with 16 SATA controllers and 40 lanes of Gen 3 PCI Express.
 
So? It doesn't even come close to what a mainframe can handle. It
*might* be able to perform some of the I/O.
 
But even with your claims, what is the actual throughput on your
40Gbit/s network controllers? Or any of your other controllers?
 
 
> IBM is still stuck on 8Gbit/sec fiberchannel.
 
You mean 16 fiber channels (maybe more now, I'm not positive), all able
to run full speed concurrently. Transfer rates can easily exceed
100Gbps maintained.
 
So I ask you again - if your xeon systems are so great, why do companies
spend millions of dollars on mainframes?
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
scott@slp53.sl.home (Scott Lurndal): May 27 04:49PM


>> For a compile? Yes, the xeon systems can compete. Likely outcompete.
 
>Not a chance. Compiling is quite I/O intensive, especially when
>performing parallel compliations.
 
Actually, not so much, with intelligent operating systems using
spare memory as a file cache. Which all of them do.
 
 
>> And there are high-end xeon systems with "super fast I/O access".
 
>Not even close to mainframes. But I can see you've never worked on
>mainframes.
 
I designed and wrote mainframe operating systems for fourteen years. I
do know whereof I speak. And yes, in the 80's and 90's, mainframe
I/O capabilities were superior.
 
 
>But even with your claims, what is the actual throughput on your
>40Gbit/s network controllers? Or any of your other controllers?
 
Line rate, of course. Our customers would not purchase them otherwise.
 
 
>You mean 16 fiber channels (maybe more now, I'm not positive), all able
>to run full speed concurrently. Transfer rates can easily exceed
>100Gbps maintained.
 
As does the 2x 40Gbps + 16 6Gbps SATA, even leaving out the PCIe.
 
 
>So I ask you again - if your xeon systems are so great, why do companies
>spend millions of dollars on mainframes?
 
They're not _my_ xeon systems.
 
Mainframes still exist because customers have had them for decades and can't
afford to change them out and move to a different architecture.
 
The same reason that the Unisys Mainframes are still being built
(albeit emulated using Intel processors).
 
IBM has sold _VERY FEW_ Z-series to _new_ customers for a decade or two, with
one or two exceptions. IBM's sole advantages are in the robustness
of the hardware (MTBF) and the ability to change out hardware without
massive disruptions to running code (which supports the large MTBF).
System-Z Revenue (mainly from customer refreshes) in 2015 was down 23%
(as per the 2015 annual report).
 
"Performance in 2014 reflected year-to-year declines related to
the System Z product cycle".
ram@zedat.fu-berlin.de (Stefan Ram): May 26 10:57PM

>But for a pendulum T ~ 2 pi sqrt( L / g ), so the time
>changes with the /square root/ of the length L.
 
The quoted text is literally true, but still I believe
I might have been wrong, because I used this in the context
of a hypothetical change of more lengths than of just »L«,
and such changes also might possibly change »g«, and I
failed to take this into account. But since this is off topic
here anyway, I will not write more about this.
mrs@kithrup.com (Mike Stump): May 27 03:12AM

In article <7aksib9t25vto4al8ip637griillus9e7n@4ax.com>,
>>dynamically.
 
>and if the system has no memory what return "new"?
>i think return 0 to p...
 
It does not, nor can it and still call itself C++.
"Öö Tiib" <ootiib@hot.ee>: May 27 12:04AM -0700

On Friday, 27 May 2016 06:30:13 UTC+3, Mike Stump wrote:
 
> >and if the system has no memory what return "new"?
> >i think return 0 to p...
 
> It does not, nor, can it and call itself C++.
 
That is correct. It may be worth noting that after C++11 we have a special
non-throwing new:

int* p2 = new (std::nothrow) int(42);
 
It will return 0 if the system does not have memory for the int, since it
does not throw 'bad_alloc' like the usual new does.
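
So the caller checks the returned pointer instead of catching an
exception, something like:

#include <cstdio>
#include <new>        // std::nothrow

int main() {
    int* p2 = new (std::nothrow) int(42);
    if (p2 == nullptr) {                  // allocation failed, nothing was thrown
        std::puts("out of memory");
        return 1;
    }
    std::printf("%d\n", *p2);
    delete p2;
    return 0;
}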
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 27 09:17AM +0200

On 27.05.2016 09:04, Öö Tiib wrote:
 
> int* p2 = new (std::nothrow) int(42);
 
> It will return 0 if system does not have memory for int, since it may
> not throw 'bad_alloc' like usual new does.
 
`std::nothrow` was introduced in C++98, the first C++ standard.
 
 
Cheers & hth.,
 
- Alf
"Öö Tiib" <ootiib@hot.ee>: May 27 12:56AM -0700

On Friday, 27 May 2016 10:18:12 UTC+3, Alf P. Steinbach wrote:
 
> > It will return 0 if system does not have memory for int, since it may
> > not throw 'bad_alloc' like usual new does.
 
> `std::nothrow` was introduced in C++98, the first C++ standard.
 
Thanks for correcting.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 27 04:10AM +0200

On 27.05.2016 00:57, Stefan Ram wrote:
> and such changes also might possibly change »g«, and I
> failed to take this into account. But since this is off topic
> here anyways, I will not write more about this.
 
He he. I thought discussing it could be an amusing weekend activity.
 
Anyway. ;-)
 
 
Cheers!,
 
- Alf
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: May 27 01:40AM +0100

On 26/05/2016 23:39, Stefan Ram wrote:
 
> - Don't talk to strangers! (Demeter's Law)
 
Law of Demeter is bollocks.
 
The Law of Demeter, or Principle of Least Knowledge, is a simple design
style for developing software that purportedly results in lower coupling
between elements of a computer program. Lower coupling is beneficial as
it results in code which is easier to maintain and understand.
 
It is the opinion of this programmer, however, that the Law of Demeter is
in fact a load of old toot (rubbish), as it advocates invoking methods on
global objects: this is in contradiction to a "Principle of Least
Knowledge". "For pragmatic reasons" are weasel words. Global variables
(non-constants) are bad, period.
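
For reference, the "don't talk to strangers" part of the rule amounts to
the difference below (a toy example of mine, not from the posts above):

#include <string>

struct Engine {
    std::string status() const { return "running"; }
};

struct Car {
    Engine& engine() { return e; }                              // hands out a "stranger"
    std::string engine_status() const { return e.status(); }    // Demeter-friendly wrapper
private:
    Engine e;
};

// Talks to a stranger: reaches through Car into Engine.
std::string report_violating(Car& c)  { return c.engine().status(); }

// Talks only to its immediate collaborator.
std::string report_conforming(const Car& c) { return c.engine_status(); }

int main() {
    Car c;
    return report_violating(c) == report_conforming(c) ? 0 : 1;
}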
 
/Flibble
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
