Tuesday, December 15, 2015

Digest for comp.lang.c++@googlegroups.com - 9 updates in 3 topics

Juha Nieminen <nospam@thanks.invalid>: Dec 15 09:12AM

> runtime polymorphism more than I did before. I pay a small runtime
> penalty (function indirection) but gain testability and more explicit
> design abstractions than I did before.
 
A virtual function call is negligibly slower than a regular (non-inlined)
function call. (Granted, last time I tested this was something like
5 years ago, but I doubt things have changed on that front.)
Of course virtual functions might not be as inlineable as regular
functions, but that's seldom a problem.
 
The major problem with a more "pure" OOD is that you will naturally be
allocating individual objects and handling them via pointers. (Inheritance
and virtual functions do not necessitate this, but often it leads to this,
depending on your overall design.)
 
In C++, allocating individual objects is significantly more expensive and
consumes more memory than handling objects by value (which is usually what
template-based code does). *That* is the major drawback (rather than the
use of virtual functions).
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Dec 15 09:59AM

On Tue, 15 Dec 2015 09:12:16 +0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
[snip]
> and consumes more memory than handling objects by value (which is
> usually what template-based code does). *That* is the major drawback
> (rather than the use of virtual functions).
 
And it gives rise to more cache misses because the object's memory
will be on the heap, which hurts locality.
 
Curiously, that may be less of an issue for objects which have been
written to provide move semantics. With the advent of move semantics,
many "value" objects are in fact just wrappers for internals which have
been allocated on the heap to enable moving to be implemented by
pointer assignment/swapping; so much of their memory is scattered about
the heap anyway.
 
Chris
scott@slp53.sl.home (Scott Lurndal): Dec 15 02:12PM

>> (rather than the use of virtual functions).
 
>And it gives rise to more cache misses because the object's memory
>will be on the heap, so affecting locality.
 
This doesn't follow. There's no fundamental difference between
objects allocated on the heap and objects allocated in the
data section or stack with respect to cache-line occupancy or
hit rates, particularly when the object exceeds the native
cache line size.
 
And, in fact, in multithreaded code, you want objects in different
cache lines, generally, and often specific fields within the object as
well (see the nice utility pahole(1)).
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Dec 15 07:07PM

On Tue, 15 Dec 2015 14:12:33 GMT
> data section or stack with respect to cache-line occupancy or
> hit rates, particularly when the object exceeds the native
> cache line size.
 
I don't agree. You have to access such objects via a pointer, and the
pointer will invariably be allocated on the stack in a different
region of memory.

> And, in fact, in multithreaded code, you want objects in different
> cache lines, generally, and often specific fields within the object
> as well (see the nice utility pahole(1)).
 
Could you flesh that one out for me further?
 
Chris
jacobnavia <jacob@jacob.remcomp.fr>: Dec 16 12:33AM +0100

On 11/12/2015 12:23, Chris Vine wrote:
> in the 1990s you would
> have implemented it by inheritance and virtual functions, or in the
> simplest cases by using function pointers.
 
Example:
When a compiler starts up, it determines the CPU type it is running on.
If the program is targeting the current machine, it can choose, from a
set of different code generators, the model that corresponds best to
that machine.
 
The code generators offer the compiler a common set of actions and
produce different outputs.
 
Is this a case of "dependency injection" ?
 
Thanks
Geoff <geoff@invalid.invalid>: Dec 15 02:44PM -0800

On 13 Dec 2015 15:57:07 GMT, ram@zedat.fu-berlin.de (Stefan Ram)
wrote:
 
> return 0; }
 
>int main()
>{ return main1(); }
 
I couldn't get this to compile in VS2010; it doesn't like your
'using NUMBER' statement.
 
On OS X, compiled at -O3 with clang it spends all its time in the
enum_fast function, outputs results for n = 0, 1, 2, 3 rather quickly
then takes a very long time to output result for n = 4 or more.
Currently in 2h 12m at 98% CPU utilization and no result for n = 5
yet. I saw no evidence of any kernel time being consumed, the profiler
is telling me it's all in enum_fast at user level.
Vir Campestris <vir.campestris@invalid.invalid>: Dec 15 09:33PM

On 13/12/2015 23:18, David Brown wrote:
> flash). There are only a few devices around with more than 256K ram on
> board. But there are also plenty of chips with DDR interfaces of some
> sort giving pretty cheap support for 16+ MB ram.
 
Once we've gone for DDR it doesn't seem to increase the cost much to go
for a gig, not just a few MB. I'm getting this second hand though - I
write the software for the things, I don't buy them.
 
Andy
Gareth Owen <gwowen@gmail.com>: Dec 15 09:56PM


> Once we've gone for DDR it doesn't seem to increase the cost much to
> go for a gig, not just a few MB. I'm getting this second hand though -
> I write the software for the things, I don't buy them.
 
You pretty much can't get less than 128MB (1Gb) on a single DDR3 chip
these days, even if you wanted it. You might find some older stock
(particularly if you'll take DDR2), but nobody is actually making much
of the stuff.
David Brown <david.brown@hesbynett.no>: Dec 15 11:23PM +0100

On 15/12/15 22:56, Gareth Owen wrote:
> these days, even if you wanted it. You might find some older stock
> (particularly if you'll take DDR2), but nobody is actually making much
> of the stuff.
 
There are plenty of devices that are smaller, aimed at embedded systems.
These don't have the latest and greatest DDRxxx buses - they have
whatever was a reasonable choice when the chip family was designed, and
embedded processors and microcontrollers are designed to be available
for 10 years or more. In that world, DDR, DDR2, low power variants,
etc., are all perfectly normal.
 
A quick check on Digikey for the cheapest DDR (any version) memory
turns up an 8 MB (64 Mb) DDR chip at the top of the list.
 
 
So lots of people make these things, and lots of people buy them. They
cost more per MB than the newest devices - economies of scale, plus
smaller geometries, make a big difference. But they are still cheaper
in absolute terms if you only need a small size.
