Tuesday, May 31, 2016

Digest for comp.lang.c++@googlegroups.com - 22 updates in 3 topics

JiiPee <no@notvalid.com>: May 31 01:50AM +0100

On 30/05/2016 23:43, JiiPee wrote:
> or C++11/14. But I think there is an essential difference as C++11 is
> much safer etc. So your code becomes less risky with better language,
> I think. So it does matter, at least somewhat.
 
Don't we agree that it's the old "the right tool for the right job"?
Sometimes it's VB, sometimes C++.
And there is also the personal-preference issue: I personally could
not imagine creating a complex game library with VB, ... at least for me
it matters. But maybe for somebody else VB would be better. People are
different... some languages fit my thinking better than others, I think.
JiiPee <no@notvalid.com>: May 31 01:58AM +0100

On 30/05/2016 15:29, Jerry Stuckle wrote:
 
>> --- news://freenews.netfront.net/ - complaints: news@netfront.net ---
 
> Nope, just when they are trolling. Like yours here.
 
> But I know I'm trying to teach the pig to sing.
 
What I am trying to say is that there might be other factors which are
more important than "getting a faster compiler". Like these:
 
http://stackoverflow.com/questions/1073384/what-strategies-have-you-used-to-improve-build-times-on-large-projects
 
1. Forward declaration
2. pimpl idiom
3. Precompiled headers
4. Parallel compilation (e.g. MPCL add-in for Visual Studio).
5. Distributed compilation (e.g. Incredibuild for Visual Studio).
6. Incremental build
7. Split the build into several "projects" so that not all the code is
compiled when not needed.
 
 
How about concentrating on those? It's possible they shorten the
compilation time much, much more than changing languages or compilers.
mrs@kithrup.com (Mike Stump): May 31 01:18AM

In article <nihk25$j8p$1@jstuckle.eternal-september.org>,
 
>> Some indefinite mysterious "factors"?
 
>Code complexity is the main one. In languages like C++, template
>complexity and instantiation is another one.
 
Very true; meta-programming, whether via cpp usage or templates, does
have a way of expanding compile times.
 
Additionally, headers for C++ tend to inflate compile times faster
than plain C. It is hoped for C++ that modules can help improve
compile times here; only time will tell if they do.
http://clang.llvm.org/docs/Modules.html for the interested.
Jerry Stuckle <jstucklex@attglobal.net>: May 30 09:42PM -0400

On 5/30/2016 6:43 PM, JiiPee wrote:
> everything... they have limited budject I think. so they chose what is
> more important and how many it affects. Thats how the C++ commitea also
> thinks I know.
 
You seem to be repeatedly disagreeing. And if you don't disagree, why
even bring it up?
 
> or C++11/14. But I think there is an essential difference as C++11 is
> much safer etc. So your code becomes less risky with better language, I
> think. So it does matter, at least somewhat.
 
I said nothing about C++11, C++14 or any other version. It's just
another of your red herrings.
 
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: May 30 09:42PM -0400

On 5/30/2016 8:50 PM, JiiPee wrote:
> not imagine creating a complex game library with VB, ... at least for me
> it matters. But maybe for somebody else VB would be better. People are
> different... some languages are better for my thinking than others I think.
 
Which is another of your red herrings. None of this has anything to do
with compile time for C++ programs.
 
Jerry Stuckle <jstucklex@attglobal.net>: May 30 09:47PM -0400

On 5/30/2016 8:58 PM, JiiPee wrote:
 
> what am trying to say, that there might be other factors which are more
> important than "getting a faster compiler". like these:
 
> http://stackoverflow.com/questions/1073384/what-strategies-have-you-used-to-improve-build-times-on-large-projects
 
No, let's be honest. What you're trying to say is that just because
compile times are not important to YOU, they are not important to
anyone else.

Everything else is just red herrings.
 
> needed.
 
> how about consentrating on those? its possible they shortens the
> compilation time much much more than changing languages or compilers.
 
They can, when applied properly. But most programmers use as many of
these as possible already. And we still have problems with compile time.

Intelligent programmers do not compile every module every time. We use
makefiles or the equivalent, for instance. We use precompiled headers
where possible. In many cases parallel compilation doesn't offer
significant advantages, and distributed compilation has its own
problems - like spending big bucks on multiple computers just to compile.
 
Every argument you have brought up is pure horse hockey. You're better
off just admitting that there are many programmers in this world who
work on complicated programs and do have compile time problems. The
fact you don't is really immaterial to the rest of the world.
 
jacobnavia <jacob@jacob.remcomp.fr>: May 31 10:30AM +0200

On 29/05/2016 at 12:20, Öö Tiib wrote:
 
> and pages and how it failed in some odd meta-programming trick somewhere
> that user of the "leaner language" is supposed to know nothing about.
 
> Do you have ideas how to mitigate that effect?
 
Errors
 
If an error occurs when compiling the template to be executed in the
compiler environment, an error message is issued with the line/reason of
the error. That is why it is better to use precompiled binaries to
associate the compile-time function with an event. That means faster
compile times, since the compiler has less work to do: it just loads
that function.
 
If an error occurs when compiling the generated code, the error message
can only be
 
Error when compiling code generated by the "xxx" compile time function.
 
If the generated code contains lines (and it is not only a huge
expression on a single line), a line number is given relative to the
start of the generated code.
"Öö Tiib" <ootiib@hot.ee>: May 31 05:29AM -0700

On Tuesday, 31 May 2016 11:30:33 UTC+3, jacobnavia wrote:
> error. That is why is better to use precompiled binaries to associate
> the compile time function to an event. That means faster compile times
> since the compiler has less work to do: just load that function.
 
The problem is not so much about compile time but relevance of
information contained in produced error messages.
 
For example "syntax error at line 42" is too few information for me.
I will stare at the line 42 in confusion for a while.
 
For another example 40 pages of log of how compiler got stuck after pile
of expanding (not written by me) macros, deducting arguments of (never
before seen by me) templates of trait classes and resolving ambiguity
between overloads (again not by me) is too lot of information for me.
I will also stare at the line I wrote that caused it all in confusion for a
while.
 
The 'static_assert' is tricky to put to some places and when the condition
is complex then it makes compilation slower too.
 
 
 
> If the generated code contains lines (and it is not only a huge
> expression in a single line), a line number is given relative to the
> start of the generated code
 
I do not understand the meaning of the above sentences. Somehow they feel
like partial expressions. There may be something obvious that I miss.
Can you please try to say the same in some other words?
JiiPee <no@notvalid.com>: May 30 08:08PM +0100

On 30/05/2016 15:28, Jerry Stuckle wrote:
> compiling is lost productivity.
 
> And I never said it was a problem for *all* C++ programmers. But *YOU*
> said it was not a problem for ANY C++ programmers.
 
I guess one question is how much faster, percentage-wise, other languages
compile the same project (done with the same structures)?
Also, I do not think we should think about only one factor (how quickly
it compiles). We should take ALL factors into consideration, like how
good the language is... how long it takes to develop the code, how easy
it is to maintain the code, etc. Compiling is only one factor among the
things that take time.

How much faster do other languages compile the same project?
jacobnavia <jacob@jacob.remcomp.fr>: May 31 02:52PM +0200

On 31/05/2016 at 14:29, Öö Tiib wrote:
 
 
> I do not understand the meaning of above sentences. Somehow those feel
> partial expressions. There may be something obvious that I miss. Can you please
> try to say same in some other words?
 
The compiler produces a stream of lexical events:
 
Start program, end program, start function, end function, etc.
You can tell the compiler to call a function that you have written at
any event generated by the compiler.
 
That function can generate either binary data or C code, like templates do.
 
There are several possibilities of errors:
 
If you associate a function's source code with some event, the compiler
will compile that function on the fly and execute it within the
compiler environment. There can be errors in the source code of the
function so that it is not loaded. In that case the compiler will tell
you the reason and the source-code line where the error was detected.
 
Now, if that function generates code in the form of C code (say), errors
can occur when compiling that generated code because of an error in the
generated C. That corresponds to an error when instantiating a template.
 
Note that even if there are superficial similarities between what I
propose and what exists now in the form of templates, there are vast
differences.
 
Since you can examine the entire compiler environment, you can implement
Bjarne's "concepts" idea very easily: you can check the types of any
variable or function, etc., and see immediately whether they correspond
to the types you want to support.
Jerry Stuckle <jstucklex@attglobal.net>: May 30 06:21PM -0400

On 5/30/2016 3:08 PM, JiiPee wrote:
> maintain the code etc etc. Compiling is only on factor in regards to
> things taking time.
 
> How much faster other languages compile the same project?
 
It makes no difference what other languages do. They are not being
used. And yes, compiling is one factor. But it is an important one
because it is wasted programmer time.
 
How good the language is is pretty unimportant, also. What IS important
is how good the programmer is.
 
JiiPee <no@notvalid.com>: May 30 11:43PM +0100

On 30/05/2016 23:21, Jerry Stuckle wrote:
 
> It makes no difference what other languages do. They are not being
> used. And yes, compiling is one factor. But it is an important one
> because it is wasted programmer time.
 
I don't think anybody disagrees. But as I said, Microsoft cannot do
everything... they have a limited budget, I think. So they choose what
is more important and how many people it affects. That's how the C++
committee also thinks, I know.
 
 
> How good the language is is pretty unimportant, also. What IS important
> is how good the programmer is.
 
In a way true. But on the other hand, if your statement were the only
truth, then it would not matter whether we use '90s C++
or C++11/14. But I think there is an essential difference, as C++11 is
much safer etc. So your code becomes less risky with a better language, I
think. So it does matter, at least somewhat.
 
 
 
scott@slp53.sl.home (Scott Lurndal): May 31 02:24PM


>> Line rate, of course. Our customers would not purchase them otherwise.
 
>That's not answering the question. No network operates at the
>theoretical maximum speed except for possibly short bursts.
 
The controllers will run at 40Gbps, which is line rate.
 
And yes, there are networks that operate at much faster than
40Gbps.
 
 
>> As does the 2x 40Gbps + 16 6Gbps SATA, even leaving out the PCIe.
 
>Wrong, 16 SATA cannot concurrently access memory, much less along with
>the network and PCIe.
 
You obviously have no clue as to what you're talking about. We've
designed the SoC to maintain full bandwidth on all external ports, and
we've tested that it does so. There's a nice crossbar switch with
sufficient bisectional bandwidth to support 48 cores and the aforementioned
I/O devices concurrently.
Jerry Stuckle <jstucklex@attglobal.net>: May 31 12:31PM -0400

On 5/31/2016 10:24 AM, Scott Lurndal wrote:
 
>> That's not answering the question. No network operates at the
>> theoretical maximum speed except for possibly short bursts.\
 
> The controllers will run at 40Gbps, which is line rate.
 
Continuously? And you expect me to believe that? Unlike you, I am not
an idiot. I actually *understand* data transfer.
 
> And yes, there are networks that operate at much faster than
> 40Gbps.
 
And exactly where are those speeds? Considering a single fiber can
handle 10 Gbps, and even then they don't claim continuous data transfer
at that rate.
 
> we've tested that it does so. There's a nice crossbar switch with
> sufficient bisectional bandwidth to support 48 cores and the aforementioned
> I/O devices concurrently.
 
Oh, I know EXACTLY what I'm talking about. And 16 SATAs cannot transfer
data concurrently at full speed.
 
But then, now I understand why your systems are less secure and less
reliable than mainframes. Smart people know your statements are full
of crap - and don't believe anything else you say about your systems,
either. They go elsewhere, where people don't try to bullshit them.
 
 
 
scott@slp53.sl.home (Scott Lurndal): May 31 05:12PM


>And exactly where are those speeds? Considering a single fiber can
>handle 10GBS, and even then they don't claim continuous data transfer at
>that rate.
 
https://en.wikipedia.org/wiki/100_Gigabit_Ethernet#100G_Port_Types
JiiPee <no@notvalid.com>: May 31 07:08PM +0100

On 25/05/2016 09:09, Juha Nieminen wrote:
> language. It would be one of the silliest possible reasons to change
> the entire programming language to something completely different and
> incompatible.
 
I kind of agree here. I, too, have worked in a software company and
also done other projects for 25 years, and compilation has never been a
major problem. It was a bit slow in the '90s, but today it's not slow
with the same (kind of) projects, as computers are very fast.

The point again: for most programmers, let's say > 95%, it's not
an issue at all. Maybe even 99%, dunno....
 
> be telling others to change their language because *you* have to
> work with such codebases. It's not a problem that affects
> everybody.
 
exactly.
Jerry Stuckle <jstucklex@attglobal.net>: May 31 03:24PM -0400

On 5/31/2016 1:12 PM, Scott Lurndal wrote:
>> handle 10GBS, and even then they don't claim continuous data transfer at
>> that rate.
 
> https://en.wikipedia.org/wiki/100_Gigabit_Ethernet#100G_Port_Types
 
Which does not answer the question. So you have no answer.
 
Sorry, your ignorance of reality is showing.
 
Ian Collins <ian-news@hotmail.com>: Jun 01 08:37AM +1200

On 06/ 1/16 04:31 AM, Jerry Stuckle wrote:
 
> And exactly where are those speeds? Considering a single fiber can
> handle 10GBS, and even then they don't claim continuous data transfer at
> that rate.
 
Fiber speeds are way higher than that and 40GbE can run over copper as
well as fiber. Intel 10/40 GbE chips are becoming mainstream on Xeon
server boards. I use 10 GbE copper in my home network which will
happily run at full speed.
 
>> I/O devices concurrently.
 
> Oh, I know EXACTLY what I'm talking about. And 16 SATAs cannot transfer
> data concurrently at full speed.
 
They can, common 8 port SAS/SATA controllers use 8xPCIe lanes which
provide ample bandwidth for 8 or 16 SATA drives.
 
--
Ian
legalize+jeeves@mail.xmission.com (Richard): May 31 10:14PM

[Please do not mail me a copy of your followup]
 
no@notvalid.com spake the secret code
>also done other projects 25 years and compilation has not been major
>problem never. Was a bit slow in 90 s but today its not slow with the
>same (kind of) projects as the computers are very fast.
 
It depends on how well you have cared for your build system over time.
 
It also depends on whether you use Windows or Linux as your development
environment. For the same code, it seems that edit-compile-test
cycles go faster on Linux than on Windows. This seems to come down to
some issues with the NTFS file system (that were addressed in Windows
8 and beyond) and the tool chain.
 
There are many issues that enter into "how long does my build take".
Some of these can be laid at the feet of the compiler. Some of these
are due to a poorly structured build. If you have a well structured
build, you might not notice compile times. If you have a poorly
structured build, you might be incorrectly blaming the language for
your experience.
 
Let's make this concrete with an example. I have worked on projects
where it was considered a "good thing" to put all extern function
declarations in a single header file. Because every global function
declaration is in this single header file, every source file includes
this header file. If you add a new global function, remove an
existing global function, or change the signature of an existing
global function -- including renaming it -- then every single source
file in your entire project needs to be recompiled, whether it used
that function or not.
 
Imagine a system created with these sorts of "rules" imposed on it
by its team. This is a system and a culture that creates unnecessarily
long compile times, yet it doesn't really have anything to do with C++.
Good modularity is not being practiced on such a team.
 
They almost had the right idea. They thought "group similar things
together", but instead of considering similarity on a semantic basis
("When you use one of these functions, you end up using most if not all of
them because they are all related"), they thought on a syntactical basis
("all of these identifiers are functions", "all of these identifiers
are global variables"). I covered some of these design principles for
C++ libraries in this talk:
<http://www.slideshare.net/LegalizeAdulthood/consuming-and-creating-libraries-in-c>
 
One aspect of TDD for C++ that I found enjoyable is that my
edit-compile-test cycle times were reduced dramatically while working
on a class. Compiling and linking together my tests often takes much
less time than compiling and linking the entire application. The
testing cycle time is automatic (run by the build) and doesn't require
me to manually fiddle with the application to verify that I've made
forward progress. Manual/automated integration testing still comes
into play but it comes in far fewer cycles and after my unit tests are
passing when I've already caught most of my bugs.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Jorgen Grahn <grahn+nntp@snipabacken.se>: May 31 06:39AM

On Sat, 2016-05-28, Öö Tiib wrote:
> Search it up, it must be in C++ textbook you use for learning it.
> The union objects can be elements of containers and arrays.
 
> Can you tell ... why you have such a great wish? What you need it for?
 
My guess (and it's only a guess) is that:
 
- He's used to Python.
- The right solution is on a higher level: to learn to design in C++
without that feature.
 
The good news is that I've used both, and don't feel limited by the
lack of the feature in C++. Mixed containers fit nicely with other
Python features, but are kind of alien in C++.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
ram@zedat.fu-berlin.de (Stefan Ram): May 31 01:15AM

>I guess one question is that how many percentage faster other languages
>compile the same project (done with same structures)?
 
Languages don't compile projects.
Compilers compile source code.
 
That being said, when a C++ source code set has been changed,
sometimes many files have to be recompiled. Compilers for other
languages possibly might compile the same amount of source code
even slower, but they might have to do so less often, because
»modules« in other languages might be more »loosely coupled« and
thus don't have to be completely recompiled that often.
 
ram@zedat.fu-berlin.de (Stefan Ram): May 31 01:41AM

>Very true, meta programming, either by cpp usage or templates does
>have a way of expanding compile times.
 
When one has a java.util.ArrayList<o> object in Java, and the objects
of type o get bigger, the classes storing these objects do not have
to be recompiled, because they never store the objects directly, just
pointers. And the size of the pointers has not changed.
 
An ::std::vector<o> in C++ stores the objects directly and thus would
have to be recompiled. That's why C++ vectors sometimes are faster, at
least when small objects are stored. When one introduces pimpl into the
objects to avoid recompilation, the advantage of the direct storage is
also lost.
 
A C++ programmer does not like to trade execution speed for compile time,
because otherwise he would have chosen Java in the first place.
 
A very sophisticated build system might use PIMPL for development to
reduce compile times and then turn PIMPL off for test and production
releases. But this would be a source of another level of complexity
and therefore error-prone.