Saturday, May 28, 2016

Digest for comp.lang.c++@googlegroups.com - 20 updates in 3 topics

Ian Collins <ian-news@hotmail.com>: May 28 11:40AM +1200

On 05/28/16 11:28, Stefan Ram wrote:
 
> for( ::std::vector< T >::iterator it = jobs.begin(); it != jobs.end(); ++it )
 
> with C++14's
 
> for( auto const & job: jobs )
 
I agree with you there! I'm currently working with a large code base
some of which is over 20 years old and modernising the code definitely
equals simplifying the code.
 
--
Ian
Jerry Stuckle <jstucklex@attglobal.net>: May 27 08:48PM -0400

On 5/27/2016 6:12 PM, JiiPee wrote:
> example if feature XX helps 200 people in the world (which is "a lot")
> but it does not help the remaining 8999999, then most surely it is not
> added to the language.
 
It doesn't matter what the "percentage" is. What matters is the real
numbers. And performance is not the same as language issues.
 
Another troll post.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: May 27 08:50PM -0400

On 5/27/2016 4:49 PM, Ian Collins wrote:
>> keep all of the intermediate files in ram.
 
> You don't need to keep the intermediate files in RAM, just the generated
> objects.
 
The generated objects ARE the intermediate files. But once again you
show your lack of knowledge of anything but the smallest (i.e. MSDOS
1.0) systems.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: May 27 08:52PM -0400

On 5/27/2016 4:46 PM, Ian Collins wrote:
 
> Correct - that is why they are used for transaction processing. When
> transaction processing is scaled to extreme (Google or Facebook for
> example), custom Xeon based hardware takes over.
 
Try again. It doesn't work.
 
>> If there were no advantage to mainframes, why would companies spend
>> millions of dollars on them?
 
> There are advantages, but compiling C++ code isn't one of them.
 
Right. So say you.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Ian Collins <ian-news@hotmail.com>: May 28 12:52PM +1200

On 05/28/16 12:50, Jerry Stuckle wrote:
 
>> You don't need to keep the intermediate files in RAM, just the generated
>> objects.
 
> The generated objects ARE the intermediate files.
 
Well then, RAM won't be that big a deal, will it?
 
--
Ian
Jerry Stuckle <jstucklex@attglobal.net>: May 27 09:01PM -0400

On 5/27/2016 8:52 PM, Ian Collins wrote:
>>> objects.
 
>> The generated objects ARE the intermediate files.
 
> Well then, RAM won't be that big a deal, will it?
 
So you say. But you obviously have no experience with large
compilations. Probably because your biggest program is around 100 LOC.
ROFLMAO!
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
JiiPee <no@notvalid.com>: May 28 02:50AM +0100

On 28/05/2016 01:48, Jerry Stuckle wrote:
> It doesn't matter what the "percentage" is. What matters is the real
> numbers. And performance is not the same as language issues.
 
> Another troll post.
 
It's not a troll post.
 
Surely it's the relative numbers that matter. Or is it true in voting
that if 200 people in a country want something, it becomes law? Surely
not, unless there are only 300 people in that country (then it would be
66%).
 
The VS team must do many things, so I am sure they prioritize what they
do. If only 0.01% of people need something and 30% of people need
something else, I am sure they would rather work on the 30% issue.
JiiPee <no@notvalid.com>: May 28 02:54AM +0100

On 28/05/2016 02:50, JiiPee wrote:
 
> The VS team must do many things, so I am sure they prioritize what
> they do. If only 0.01% of people need something and 30% of people
> need something else, I am sure they would rather work on the 30% issue.
 
I am not saying that fast compilation time is not important; surely it
is, but for most programmers it's not an issue, as it isn't for me.
 
 
I guess Microsoft prioritizes which tasks they improve.
Ian Collins <ian-news@hotmail.com>: May 28 02:01PM +1200

On 05/28/16 12:52, Jerry Stuckle wrote:
>> transaction processing is scaled to extreme (Google or Facebook for
>> example), custom Xeon based hardware takes over.
 
> Try again. It doesn't work.
 
Try what again?
 
Try reading:
 
https://code.facebook.com/posts/1538145769783718/open-compute-project-u-s-summit-2015-facebook-news-recap/
 
http://datacenterfrontier.com/facebook-open-compute-hardware-next-level/
 
--
Ian
Ian Collins <ian-news@hotmail.com>: May 28 02:09PM +1200

On 05/28/16 13:54, JiiPee wrote:
 
> I am not saying that fast compilation time is not important; surely it
> is, but for most programmers it's not an issue, as it isn't for me.
 
It is an issue for those of us who use TDD and therefore build and run
tests very often. The solution is two-fold:
 
1) throw more hardware at it
2) better modularise the code.
 
I do both!
 
Building build farms is part of my day job, so the first is relatively
easy for me, but I appreciate that it isn't an option for everyone.
 
The second is good engineering practice.
 
--
Ian
Rosario19 <Ros@invalid.invalid>: May 28 06:37AM +0200

On Sat, 28 May 2016 00:30:29 +0200, jacobnavia wrote:
 
>C++ has become too big, too obese. Pushed by a MacDonald industry, every
>single feature that anybody can conceive has been added to that
>language, and the language is now a vast mass of FAT.
 
the compile time is not a problem...
because compilation is a parallelizable process,
so even now one can imagine having 8 CPUs,
where each CPU runs its own compiler instance and compiles one of the
many files to be compiled,
or each CPU compiles its own part of a file, etc.
Robert Wessel <robertwessel2@yahoo.com>: May 28 04:32AM -0500

On Sat, 28 May 2016 06:37:14 +0200, Rosario19 <Ros@invalid.invalid>
wrote:
 
>where each CPU runs its own compiler instance and compiles one of the
>many files to be compiled,
>or each CPU compiles its own part of a file, etc.
 
 
Given that essentially all of my non-debug builds have LTCG turned on
(at least on those platforms where that's an option), the "link" step
is where quite a large chunk of the CPU time now gets burned, and it
doesn't parallelize.
Jerry Stuckle <jstucklex@attglobal.net>: May 28 08:34AM -0400

On 5/27/2016 9:50 PM, JiiPee wrote:
>> numbers. And performance is not the same as language issues.
 
>> Another troll post.
 
> its not troll.
 
Sorry, another troll post.
 
> that if 200 people in the country wants something that will become a
> law?? surely not, unless there are 300 people in that country! (then it
> would be 66%).
 
It would be if only 300 people voted.
 
> VS team must do many things so am sure they prioritize what they do. So
> if only 0.01% of people need something and 30% of people need something
> else, am sure they rather do that 30% issue.
 
That's only a very small part of it. It depends more on what that 30%
wants.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: May 28 08:35AM -0400

On 5/27/2016 9:54 PM, JiiPee wrote:
>> something else, am sure they rather do that 30% issue.
 
> I am not saying that fast compilation time is not important; surely it
> is, but for most programmers it's not an issue, as it isn't for me.
 
Citation?
 
 
> I guess Microsoft is prioritizing tasks what they improve.
 
It's a huge problem for a large number of programmers. Just because it
isn't for YOU does not mean it's not important to OTHERS. YOU ARE NOT
THE WHOLE WORLD.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: May 28 08:38AM -0400

On 5/27/2016 10:01 PM, Ian Collins wrote:
 
> Try reading:
 
> https://code.facebook.com/posts/1538145769783718/open-compute-project-u-s-summit-2015-facebook-news-recap/
 
> http://datacenterfrontier.com/facebook-open-compute-hardware-next-level/
 
Nice ability to Google. Now try some *reliable* references. Then ask
some of those big companies why they use big iron.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: May 28 05:24PM +0200

On 27/05/16 22:42, Jerry Stuckle wrote:
>> features that are unnecessary on a build server.
 
> Actually, it does. And no matter how much you try, you can't run it
> from ram until you get it into ram.
 
The entire Debian source code repository is about 1.1 GLoC, perhaps
something like 30 GB total. And that is an absurdly big software
project - /way/ bigger than any individual program or project (even in
the mainframe world). A solid SSD will give you something like 0.5 GB
per second. So reading /all/ of those files takes about a minute - and
that is if you don't want to splash out on a couple of disks in a RAID
system.
 
The disk speed is pretty much totally irrelevant in large compile times
(unless you are using a poor OS that has slow file access and bad
caching systems, like Windows - though Win8 is rumoured to be better).
 
The key issue is not the number of lines of code in the files, but the
number of lines of code /compiled/. You only need to read those
multi-MB header files once, but you need to compile them repeatedly.
Thus processor and memory are the issue, not disk speed.
 
> Plus, unless you have a terabyte or
> more of ram, you aren't going to be able to run multiple compilations
> and keep all of the intermediate files in ram.
 
 
Libreoffice is an example of a very large C++ project. From a report I
read, it took approximately an hour to build on an 8-core 1.4 GHz
Opteron system with 64 GB ram. Peak memory usage is only about 11 GB,
or 18 GB with link-time optimisation (which requires holding much more
in memory at a time).
 
The idea that you would need a TB or more of ram is just silly.
 
 
> Sure, you can do it, when you are compiling the 100 line programs you
> write. But you have no idea what it takes to compile huge programs such
> as the one I described.
 
You haven't described any programs, huge or otherwise.
 
(It is true that for the programs I write, compile times are rarely an
issue - in most cases, the code is written by a single person, for a
single purpose on a small embedded system. But sometimes I also have to
compile big code bases for other systems.)
 
>> certainly not one of them.
 
> I won't address each one individually - just to say that every one of
> your "reasons" is pure hogwash.
 
Are you telling me that mainframe customers are not interested in
security, or are you telling me that mainframes have poorer security
than the average Linux/Windows/Mac server? Are you telling me that
banks are not particularly concerned about reliability, or are you
telling me that they pick Dell servers over Z-Series because of Dell's
excellent reliability record?
 
Or could it be that you are simply talking nonsense again?
David Brown <david.brown@hesbynett.no>: May 28 06:24PM +0200

On 28/05/16 14:38, Jerry Stuckle wrote:
 
>> https://code.facebook.com/posts/1538145769783718/open-compute-project-u-s-summit-2015-facebook-news-recap/
 
>> http://datacenterfrontier.com/facebook-open-compute-hardware-next-level/
 
> Nice ability to Google. Now try some *reliable* references.
 
Okay, here are a few links. I did not use Google to find them. Do you
count these as reliable? If not, show us references that /you/ feel are
reliable - especially ones that show that Google and/or Facebook use
mainframes rather than Linux x86 servers for significant parts of their
computer centres, or ones that show anyone choosing a mainframe for a
build platform because of its greater performance. And since you
dislike Googling, I don't expect to see any "let me google that for you"
links - concrete references only.
 
<http://www.datacenterknowledge.com/archives/2016/04/28/guide-to-facebooks-open-source-data-center-hardware/>
 
<http://www.businessinsider.com/facebook-open-compute-project-history-2015-6?op=1%3fr=US&IR=T&IR=T>
 
<http://arstechnica.com/information-technology/2013/07/how-facebook-is-killing-the-hardware-business-as-we-know-it/>
 
 
> Then ask
> some of those big companies why they use big iron.
 
Why not ask IBM, since they have something like 90% of the mainframe market?
 
<https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_whousesmf.htm>
 
Apparently, long lifetimes, reliability, stability, security,
scalability, and compatibility with previous systems are the main
points, along with the ability to handle huge datasets and high
bandwidth communications.
 
I believe that list is quite close to the one I gave, which you labelled
as "hogwash".
 
And I don't see "build server" or "compilation of large C++ programs" on
IBM's list.
"Heinz-Mario Frühbeis" <Div@Earlybite.individcore.de>: May 28 12:23PM +0200

Hi,
 
I have spent a lot of time on this recently, because it is a great wish
of mine to have different types in one collection (or something like
that).
 
Now I have a solution... the first version uses an array, the second a
vector.
 
The array version works well; the output is:
1000
700
300
NOT FOUND
NOT FOUND
FOUND at 2
NOT FOUND
NOT FOUND
 
But the vector version causes problems... the output is:
PUSHED_BACK 0x7fff63053760 1
PUSHED_BACK 0x7fff63053770 1
PUSHED_BACK 0x7fff63053780 2
Das Programm ist abgestürzt. // The program has crashed
 
The (first) question is: why doesn't the vector have 3 elements?
 
Below you will find the source...
 
A second question is:
Is there another, better way to realize this? (no void*, no boost...)
 
Regards
Heinz-Mario Frühbeis
 
 
#include <iostream>
#include <vector>
 
using namespace std;
 
void To_Array();
void To_Vec();
 
class A{
public:
    int val;
};
 
class B{
public:
    int val;
};
 
template<typename t>
t* Add(int index, t *vVal = NULL, int vSet = 0){
    static t* Values[100];
    if(vSet == 1){
        Values[index] = vVal;
    }
    return Values[index];
}
 
template<typename t>
t* Add_Vec(int index, t *vVal = NULL, int vSet = 0){
    static std::vector <t*> vec;
    if(vSet == 1){
        vec.push_back(vVal);
        cout << "PUSHED_BACK " << vVal << " " << vec.size() << endl; cout.flush();
    }
    return vec[index];
}
 
int main(){
    To_Array(); // This is working
    //To_Vec(); // Crash
    return 0;
}
 
void To_Vec(){
    A c_A;
    B c_B;
    A c_C;
 
    c_A.val = 300;
    c_B.val = 700;
    c_C.val = 1000;
 
    Add_Vec <A> (0, &c_A, 1);
    Add_Vec <B> (1, &c_B, 1);
    Add_Vec <A> (2, &c_C, 1);
 
    cout << Add_Vec <A> (2)->val << endl; cout.flush();
    cout << Add_Vec <B> (1)->val << endl; cout.flush();
    cout << Add_Vec <A> (0)->val << endl; cout.flush();
 
    for(int i = 0; i < 3; i++){
        if( Add_Vec <A> (i) == &c_C){
            cout << "FOUND at " << i << endl; cout.flush();
        } else {
            cout << "NOT FOUND \n"; cout.flush();
        }
    }
}
 
void To_Array(){
    A c_A;
    B c_B;
    A c_C;
 
    c_A.val = 300;
    c_B.val = 700;
    c_C.val = 1000;
 
    Add <A> (0, &c_A, 1);
    Add <B> (1, &c_B, 1);
    Add <A> (2, &c_C, 1);
 
    cout << Add <A> (2)->val << endl; cout.flush();
    cout << Add <B> (1)->val << endl; cout.flush();
    cout << Add <A> (0)->val << endl; cout.flush();
 
    for(int i = 0; i < 5; i++){
        if( Add <A> (i) == &c_C){
            cout << "FOUND at " << i << endl; cout.flush();
        } else {
            cout << "NOT FOUND \n"; cout.flush();
        }
    }
}
"Öö Tiib" <ootiib@hot.ee>: May 28 04:37AM -0700

On Saturday, 28 May 2016 13:24:04 UTC+3, Heinz-Mario Frühbeis wrote:
> I have spent a lot of time on this recently, because it is a great wish
> of mine to have different types in one collection (or something like that).
 
Alternative types put into one type are called a 'union' in C and C++.
Look it up; it must be in the C++ textbook you are using for learning.
Union objects can be elements of containers and arrays.
 
Can you tell us why you have such a great wish? What do you need it for?
 
> PUSHED_BACK 0x7fff63053770 1
> PUSHED_BACK 0x7fff63053780 2
> Das Programm ist abgestürzt. // The program has crashed
 
The 'Add_Vec <A> (2)->val' likely crashes at 'return vec[index];', which
has undefined behavior. If you want defined behavior (though not much
more pleasant), then perhaps use 'return vec.at(index);', which will
throw instead of maybe crashing.
 
 
> The (first) question is: Why hasn't the vector 3 elements?
 
Which vector do you mean? The static vector in 'Add_Vec<A>' will have
2 elements and the static vector in 'Add_Vec<B>' will have one element,
if your code actually calls 'To_Vec' (the call is commented out in the
posted code, but it seems to have happened given the posted output).
 
 
> Below you will find the source...
 
> A first other question is:
> Is there another, better way to realize it? (no void*, no boost...)
 
To realize what? There is 'union'. If you want examples of reinventing
or extending it well, then there is 'boost::variant'. There are lots of
other mature "Variant" class implementations in various frameworks such
as COM, Qt, etc. There are also (less convenient) "Any" class
implementations. A search engine will additionally turn up plenty of
square wheels invented on this theme.
 
What you are attempting in particular does not work. You seemingly
assume that different instantiations of a template function somehow
share internal static variables. They do not.
"Öö Tiib" <ootiib@hot.ee>: May 27 10:53PM -0700

On Friday, 27 May 2016 23:37:00 UTC+3, Paavo Helde wrote:
> deadlocks. Come up with a language that avoids race conditions and
> deadlocks automatically with minimal runtime overhead, and you might
> score huge points.
 
It will most likely be another tool for a few geeks. Erlang, for
example: it does not have data races, and deadlocks are possible but
rare. Sounds great. However, it is functional and asynchronous, and
that means most programmers won't like the constraints involved.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
