Friday, May 27, 2016

Digest for comp.lang.c++@googlegroups.com - 18 updates in 4 topics

ram@zedat.fu-berlin.de (Stefan Ram): May 27 11:28PM

>C++ has become too big, too obese.
 
I thought so too. The committee even wrote (posted by
Bjarne Stroustrup):
 
»C++ is already too large and complicated for our taste«.
 
(This was when they were still posting to Usenet.)
 
But C++11, C++14 and C++17 miraculously managed to get
easier for beginners while getting even larger. The C++
Core Guidelines might be another step in this direction.
 
For example, one can compare C++98's
 
for( ::std::vector< T >::iterator it = jobs.begin(); it != jobs.end(); ++it )
 
with C++14's
 
for( auto const & job: jobs )
 
.
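A complete, minimal comparison might look like this (the Job type and its
name field are illustrative assumptions, not part of the original post):

#include <iostream>
#include <string>
#include <vector>

struct Job { std::string name; };   // illustrative element type

int main()
{
    std::vector< Job > jobs = { { "parse" }, { "codegen" } };

    // C++98 style: the iterator type has to be spelled out by hand
    for( ::std::vector< Job >::iterator it = jobs.begin(); it != jobs.end(); ++it )
        ::std::cout << it->name << '\n';

    // C++11 and later: range-based for with auto
    for( auto const & job: jobs )
        ::std::cout << job.name << '\n';
}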
Paavo Helde <myfirstname@osa.pri.ee>: May 27 09:20PM +0300

On 27.05.2016 19:13, Jerry Stuckle wrote:
 
> So I ask you again - if your xeon systems are so great, why do companies
> spend millions of dollars on mainframes?
 
Maybe because highly-paid consultants advise them to continue spending
millions of dollars on obsolete technologies? Maybe otherwise their
consulting fees might start sticking out?
Jerry Stuckle <jstucklex@attglobal.net>: May 27 04:37PM -0400

On 5/27/2016 12:49 PM, Scott Lurndal wrote:
>> performing parallel compilations.
 
> Actually, not so much, with intelligent operating systems using
> spare memory as a file cache. Which all of them do.
 
Sure, once the files are fetched from disk, they can be cached. But not
until. And only when you have sufficient memory available to provide
the cache.
 
 
> I designed and wrote mainframe operating systems for fourteen years. I
> do know whereof I speak. And yes, in the 80's and 90's, mainframe
> I/O capabilities were superior.
 
And they still are today. That's why mainframes are still popular,
despite the higher cost.
 
 
>> But even with your claims, what is the actual throughput on your
>> 40Gbit/s network controllers? Or any of your other controllers?
 
> Line rate, of course. Our customers would not purchase them otherwise.
 
That's not answering the question. No network operates at the
theoretical maximum speed except for possibly short bursts.
 
>> to run full speed concurrently. Transfer rates can easily exceed
>> 100Gbps maintained.
 
> As does the 2x 40Gbps + 16 6Gbps SATA, even leaving out the PCIe.
 
Wrong. 16 SATA ports cannot concurrently access memory, much less along
with the network and PCIe.
 
 
>> So I ask you again - if your xeon systems are so great, why do companies
>> spend millions of dollars on mainframes?
 
> They're not _my_ xeon systems.
 
No, because they know what their mainframes do, and what your xeon
system does. These companies don't make money by being stupid.
 
> Mainframes still exist because customers have had them for decades and can't
> afford to change them out and move to a different architecture.
 
Wrong again. Mainframes exist because they are still the fastest
around. And it would be much cheaper in the long run to change to your
xeon architecture if it were even as good. But they know their
mainframes still run rings around your architecture - despite your claims.
 
> (as per the 2015 annual report).
 
> "Performance in 2014 reflected year-to-year declines related to
> the System Z product cycle".
 
You mean they have sold Z-series to new customers, right? So much for
your "have had them for decades..." argument. And yes, the robustness
and ability to change out hardware are advantageous - but hardware is
pretty solid overall now at all levels.
 
And yes, System-Z revenue is down - but that's not just because of them
being mainframes. There are numerous economic reasons all over the
world. Sales of computers in general have been down for the last couple
of years.
 
Really - you come across as a salesman with only one argument - one that
doesn't recognize the advantages of the competition. Such a position
will NEVER work long term.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: May 27 04:37PM -0400

On 5/27/2016 2:20 PM, Paavo Helde wrote:
 
> Maybe because highly-paid consultants advise them to continue spending
> millions of dollars on obsolete technologies? Maybe otherwise their
> consulting fees might start sticking out?
 
ROFLMAO! That's the best one I've heard all week.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Paavo Helde <myfirstname@osa.pri.ee>: May 27 11:41PM +0300

On 27.05.2016 23:37, Jerry Stuckle wrote:
>> millions of dollars on obsolete technologies? Maybe otherwise their
>> consulting fees might start sticking out?
 
> ROFLMAO! That's the best one I've heard all week.
 
I'll take this as a compliment! :-)
Jerry Stuckle <jstucklex@attglobal.net>: May 27 04:42PM -0400

On 5/27/2016 12:10 PM, David Brown wrote:
> magnitude at least. Even with the most compute-optimised mainframes,
> you are paying significantly for the reliability and other mainframe
> features that are unnecessary on a build server.
 
Actually, it does. And no matter how much you try, you can't run it
from ram until you get it into ram. Plus, unless you have a terabyte or
more of ram, you aren't going to be able to run multiple compilations and
keep all of the intermediate files in ram.
 
Sure, you can do it, when you are compiling the 100-line programs you
write. But you have no idea what it takes to compile huge programs such
as the one I described.
 
> availability of parts, massive virtualisation, etc. There are plenty of
> reasons for using mainframes - computational speed, however, is
> certainly not one of them.
 
I won't address each one individually - just to say that every one of
your "reasons" is pure hogwash.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Ian Collins <ian-news@hotmail.com>: May 28 08:46AM +1200

On 05/28/16 02:58, Jerry Stuckle wrote:
 
> But that's not the advantage of mainframes. Mainframe's advantage is
> super fast I/O access to huge datasets through multiple simultaneous
> channels. Your xeon systems can't compete.
 
Correct - that is why they are used for transaction processing. When
transaction processing is scaled to the extreme (Google or Facebook for
example), custom Xeon-based hardware takes over.
 
> If there were no advantage to mainframes, why would companies spend
> millions of dollars on them?
 
There are advantages, but compiling C++ code isn't one of them.
 
--
Ian
Ian Collins <ian-news@hotmail.com>: May 28 08:49AM +1200

On 05/28/16 08:42, Jerry Stuckle wrote:
> from ram until you get it into ram. Plus, unless you have a terabyte or
> more of ram, you aren't going to be able to run multiple compilations and
> keep all of the intermediate files in ram.
 
You don't need to keep the intermediate files in RAM, just the generated
objects.
 
--
Ian
JiiPee <no@notvalid.com>: May 27 11:12PM +0100

On 25/05/2016 17:00, Jerry Stuckle wrote:
> Just because it doesn't affect YOU doesn't mean it's not a valid
> complaint. It's just not valid for YOU.
 
> But it is very valid for a lot of other programmers.
 
But "a lot" is not necessary a lot percentage- wise. 100 is "a lot" but
its not a lot if there is 100 for every 10000000.
Thats a general rule anyway: how many it affects *percentage-wise* if
normally or almost always which matter what comes to language-issue. For
example if feature XX helps 200 people in the world (which is "alot")
but it does not help the rest 8999999 , then most surely it is not added
to the language.
jacobnavia <jacob@jacob.remcomp.fr>: May 28 12:30AM +0200

On 25/05/2016 at 10:09, Juha Nieminen wrote:
> someone complains about C++, the "long compile times" argument is
> brought up, like it were some kind of crucial core flaw that affects
> every single C++ programmer.
 
Excuse me, but given that any C++ programmer must compile his/her code
sometime, it SURELY affects EVERY SINGLE C++ PROGRAMMER.
 
That this is no reason to switch languages is obvious. Most people are
simply saying that compile times go up, and even though machines are now
much more powerful, the compile times still go up.
 
That is (for you) not a reason for concern. The fact you yourself report
(that many people complain about compilation times) does not lead you to
reflect a bit and notice that the long compilation time is just a
symptom of a more serious condition:
 
General obesity.
 
C++ has become too big, too obese. Pushed by a McDonald's industry, every
single feature that anybody can conceive has been added to that
language, and the language is now a vast mass of FAT.
 
The language is now too complex even for its own creator: Bjarne
acknowledged that he just could not implement the latest feature he
wanted to add. The concept of what C++ should be has disappeared beneath
a confusion of features piled upon features.
 
I think a reflection is needed. We could re-create a leaner language
where all the features of C++ could be maintained (and many more added!)
if we open the language and let people program the compiler itself.
 
I will publish soon a document explaining this in detail.
Ian Collins <ian-news@hotmail.com>: May 28 10:59AM +1200

On 05/28/16 10:30, jacobnavia wrote:
 
> I think a reflection is needed. We could re-create a leaner language
> where all the features of C++ could be maintained (and many more added!)
> if we open the language and let people program the compiler itself.
 
Are you mistaking C++ for Java? What isn't "open" about the C++
development process? Anyone is free to work on one of the open-source
compilers. Is your lcc compiler source freely available for people to
modify?
 
> I will publish soon a document explaining this in detail.
 
That would be interesting.
 
--
Ian
David Brown <david.brown@hesbynett.no>: May 27 07:40PM +0200

On 27/05/16 19:18, Rick C. Hodgin wrote:
 
> C/C++ is desirable when maximum speed is required. Most code in most
> applications doesn't need to be that fast though, and could do with the
> extra type safety another language would provide.
 
Agreed, mostly - but you should be very wary of writing "C/C++" as
though it were one language. C++ offers a great deal more safety than
C. Some of it is zero cost (like type safety), some is low cost (like
exceptions), and some has more cost (like run-time checking of constraints).
 
If you don't like null pointers, try using references - at zero cost.
If you want to use pointer features but be sure that null dereferences
are caught, you can make your own pointer-like class which checks for
nulls at dereference time.
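A minimal sketch of such a pointer-like class, assuming we are willing to
pay one branch per dereference and to throw on a null (the name checked_ptr
is made up for this example):

#include <stdexcept>

template< typename T >
class checked_ptr
{
    T * p_;
public:
    explicit checked_ptr( T * p = nullptr ) : p_( p ) {}

    // Every dereference verifies the pointer first, so a null is
    // reported as an exception instead of undefined behaviour.
    T & operator*() const
    {
        if( !p_ ) throw std::logic_error( "null dereference" );
        return *p_;
    }
    T * operator->() const
    {
        if( !p_ ) throw std::logic_error( "null dereference" );
        return p_;
    }
};

// checked_ptr< int > p( &i );  *p = 42;   // fine
// checked_ptr< int > q;        *q = 42;   // throws instead of crashing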
 
C does not let you do that sort of thing without explicit conditionals,
macros or function calls. Of course, some people consider that to be a
strength of C - it is all explicit.
 
> algorithms can compile in tier-1 with all safeties removed, while tier-2
> code would run with some safeties, and tier-3 would be entirely type
> safe.
 
I disagree. Simply use C or C++ where appropriate, and other languages
where /they/ are appropriate. It does not help to try to make C++ also
do the job of assembler and of Python, Rust or Go.
 
 
> In this way, only that 5% of your program that accounts for 95% of its
> performance slowdown would require the tier-1 code. Everything else
> could be tier-2 at least.
 
For big programs, write different parts of the program in different
languages if that makes more sense. This is especially true for
languages like Go that can be compiled and linked along with C or C++.
 
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 27 10:57AM -0700

On Friday, May 27, 2016 at 1:40:44 PM UTC-4, David Brown wrote:
 
> For big programs, write different parts of the program in different
> languages if that makes more sense. This is especially true for
> languages like Go that can be compiled and linked along with C or C++.
 
I think having a single tool and source code base is desirable. C and
C++ aren't going away anytime soon, and I think having code written
with these tier settings for compilation would allow the one code base
to run with the safety of other languages, and the speed of C/C++ when
and where required, and without changing anything other than a few
flags.
 
For significant alternate coding needs, such as GUI and ad hoc data
processing, I would recommend something like what I'm developing with
Visual FreePro, Jr., an XBASE language with a powerful GUI. It can
handle the data side, while allowing integration with the other
compiler so the single compiler suite can handle both types of code
from a single source file using simple blocks:
 
_vjr {
// Visual FreePro, Jr. code goes here
}
 
_calive {
// CAlive code goes here
}
 
...and so on.
 
Best regards,
Rick C. Hodgin
legalize+jeeves@mail.xmission.com (Richard): May 27 07:34PM

[Please do not mail me a copy of your followup]
 
David Brown <david.brown@hesbynett.no> spake the secret code
>If you want to use pointer features but be sure that null dereferences
>are caught, you can make your own pointer-like class which checks for
>nulls at dereference time.
 
Midway between those two approaches is to use templates to decorate
raw pointers with intentions and then use a static analysis tool to
ensure that the raw pointers are used as intended.
 
This is what Stroustrup is talking about with respect to the C++ Core
Guidelines project on GitHub.
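A minimal sketch of that idea, loosely modelled on the owner<T> alias from
the Guidelines Support Library (the namespace and names here are
illustrative, not the real GSL):

#include <cstdio>

namespace gsl_like
{
    // The decoration adds no behaviour at all: it only records intent
    // in the type, for a static analysis tool (or a reviewer) to check.
    template< typename T >
    using owner = T;
}

// An analyzer can flag any path where an owner<FILE*> is dropped
// without being closed or handed on.
gsl_like::owner< FILE * > open_log( const char * name )
{
    return std::fopen( name, "w" );   // caller now owns the FILE*
}

void read_log( FILE * f );            // plain FILE*: borrowed, must not be closed here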
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Paavo Helde <myfirstname@osa.pri.ee>: May 27 11:36PM +0300

On 27.05.2016 20:18, Rick C. Hodgin wrote:
> algorithms can compile in tier-1 with all safeties removed, while tier-2
> code would run with some safeties, and tier-3 would be entirely type
> safe.
 
There is some sense here, but type safety is not one of the problems. In
C++ you can have type safety all the way down and sacrifice no
performance (if there is any difference, it is rather in the opposite
direction, because when the compiler knows the exact types it may be able
to produce better-optimized code). Templates become helpful if you need
to provide a low-level typesafe algorithm for any numeric type, for example.
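A minimal sketch of that point, using nothing beyond the standard library:
the same algorithm works for any arithmetic type, and the compiler still
sees the exact type at every call site.

#include <type_traits>

template< typename T >
T clamped_add( T a, T b, T limit )
{
    // Rejected at compile time for non-numeric types; no runtime cost.
    static_assert( std::is_arithmetic< T >::value,
                   "clamped_add requires an arithmetic type" );
    T sum = a + b;
    return sum > limit ? limit : sum;
}

// clamped_add( 3, 4, 5 )         -> int instantiation,    result 5
// clamped_add( 1.5, 2.0, 10.0 )  -> double instantiation, result 3.5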
 
> goto, no &function addresses, etc.). Class-2 introduces those features,
> and class-3 allows for a completely free-form ability to manipulate and
> use everything the compiler knows about the compiled program:
 
Why 'goto' and function addresses? Nobody is using these anyway. Goto
can always be avoided by adding extra functions (which would be
optimized away by a decent compiler); functors can be inlined better
than function addresses, so there is no point in using function
addresses if performance is important. In short, I cannot see how goto
or using function addresses would speed up anything in C++, so there
would be no point in having a special "tier" for them.
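For the inlining point, a small illustration (sort_both is just a
hypothetical test function):

#include <algorithm>
#include <vector>

bool less_by_ptr( int a, int b ) { return a < b; }

void sort_both( std::vector< int > & v, std::vector< int > & w )
{
    // Function pointer: the comparison is an indirect call that the
    // optimizer may or may not manage to inline.
    std::sort( v.begin(), v.end(), &less_by_ptr );

    // Functor/lambda: the closure has its own type, so the call is
    // resolved statically and is routinely inlined into the sort.
    std::sort( w.begin(), w.end(),
               []( int a, int b ){ return a < b; } );
}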
 
The actual dangerous parts in my experience are race conditions and
deadlocks. Come up with a language that avoids race conditions and
deadlocks automatically with minimal runtime overhead, and you might
score huge points.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 27 02:53PM -0700

On Friday, May 27, 2016 at 4:37:00 PM UTC-4, Paavo Helde wrote:
> > code would run with some safeties, and tier-3 would be entirely type
> > safe.
 
> There is some sense here, but type safety is not one of the problems.
 
When I wrote "entirely type safe" I meant that no type would be able
to cause a memory issue, all pointers would be validated before each
use, etc. It would essentially turn C/C++ into a different programming
language where the issues possible in C/C++ are checked, so they can be
safely recovered from.
 
> deadlocks. Come up with a language that avoids race conditions and
> deadlocks automatically with minimal runtime overhead, and you might
> score huge points.
 
My primary goals with CAlive and its parent framework (RDC, Rapid
Development Compiler) are edit-and-continue and a LiveCode ABI
which allows all aspects of a running application to be updated in
real time, including adding new local variables, parameters, or return
variables which may be in called locations on the stack, etc., and to
provide a host of IDE/GUI editor features which make developers' lives
more productive, with more information at their fingertips.
 
I want to do notably better than Microsoft Visual Studio, and maintain
that relative lead.
 
Best regards,
Rick C. Hodgin
Ramine <ramine@1.1>: May 27 04:17PM -0700

Hello,
 
The Visual C++ Runtime Memory Manager scales well in multithreaded
programs; read this:
 
"Visual C++ Runtime Memory Manager: Since one of our libraries written
in C++ already required Microsoft's memory manager, it seemed natural to
give it a try for other parts of the application as well. The results
were similar to Intel's: it scaled well (though a bit slower than
Intel's), but was exhibiting the same problems with fragmentation."
 
Read more here:
 
https://personalnexus.wordpress.com/2011/06/11/looking-for-the-perfect-memory-manager/
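
In C++ the usual hook for swapping in such a memory manager is to replace
the global allocation functions; a minimal sketch, with my_malloc/my_free
standing in for whichever allocator is being evaluated:

#include <cstdlib>
#include <new>

// Stand-ins for the allocator under test (Intel's, the VC++ runtime
// heap, ...); here they simply forward to malloc/free.
void * my_malloc( std::size_t n ) { return std::malloc( n ? n : 1 ); }
void   my_free( void * p )        { std::free( p ); }

// Replacing global operator new/delete routes every new/delete in the
// program through the chosen memory manager.
// (A full replacement would also cover operator new[]/delete[].)
void * operator new( std::size_t n )
{
    if( void * p = my_malloc( n ) )
        return p;
    throw std::bad_alloc();
}

void operator delete( void * p ) noexcept
{
    my_free( p );
}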
 
 
Thank you,
Amine Moulay Ramdane.