Monday, February 2, 2015

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Geoff <geoff@invalid.invalid>: Jan 28 11:30AM -0800

On Wed, 28 Jan 2015 08:23:17 -0800 (PST), ghada glissa
 
>> Is there any predefined function that convert in to hex or octet string in c++.
 
>> Regards.
 
>No, i really need help,I'm new in C ++ and I can not easily handle its concepts.
 
You also seem to have trouble with the concept of how to reply to
other people on Usenet. DO NOT REPLY TO YOUR OWN POSTS.
Ian Collins <ian-news@hotmail.com>: Jan 28 12:16PM +1300

ghada glissa wrote:
 
>> Is there any predefined function that convert in to hex or octet string in c++.
 
>> Regards.
 
> Thank you for your reply.
 
Please reply to the correct message and quote some context!
 
> This function intToHex return a string or i want to get an array of char that will be easy to handle.
 
You asked for a function that converted to a string....
 
You can't return an array from a function, so how about:
 
#include <array>

template <typename T> std::array<unsigned char, sizeof(T)>
toChar( T n )
{
    constexpr auto size = sizeof(T);

    std::array<unsigned char, size> data;

    // Peel off one byte per iteration, most significant byte first.
    for( auto i = size; i; --i, n >>= 8 )
    {
        data[i-1] = static_cast<unsigned char>(n);
    }

    return data;
}
 
Which passes your test.
 
 
--
Ian Collins
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 27 10:09PM

On 27/01/2015 21:55, ghada glissa wrote:
> Dear all,
 
> Is there any predefined function that convert in to hex or octet string in c++.
 
> Regards.
 
std::ostringstream.
 
/Flibble
Ian Collins <ian-news@hotmail.com>: Jan 29 09:22PM +1300

Rosario193 wrote:
 
>> Regards.
 
> #include <stdio.h>
> #define u8 unsigned char
 
No!
 
--
Ian Collins
Lynn McGuire <lmc@winsim.com>: Jan 27 12:04PM -0600

On 1/27/2015 11:44 AM, Vlad from Moscow wrote:
> As it is known Dr. Dobb's ceased to exist. What other well-known journals do exist where articles on C++ are published?
 
Dr. Dobbs has gone virtual at:
http://www.drdobbs.com/cpp
 
MSDN magazine talks about C++ regularly.
https://msdn.microsoft.com/en-us/magazine/default.aspx
 
Lynn
"Tobias Müller" <troplin@bluewin.ch>: Jan 26 06:45AM


> The straightforward solution for this is to use a simple loop and
> std::bitset::test(). The implementation of std::bitset is all inline so
> the result could actually be pretty fast with a good optimizing compiler.
 
IME using operator[] is faster than test() because it omits bounds
checking.
It seems that the proxy object of operator[] can be optimized away more
easily than the bounds checking can.
 
Tobi
scott@slp53.sl.home (Scott Lurndal): Jan 26 03:10PM

>Hi there,
 
>how would you implement an efficient search for the first bit conforming to a particular bit_state (true or false) in a std::bitset<N> ?
 
In my projects that use gcc, I typically use __builtin_ffs or
__builtin_ctz depending on the sense of the search.
 
scott
David Brown <david.brown@hesbynett.no>: Jan 26 08:40AM +0100

On 26/01/15 08:13, Tobias Müller wrote:
> just as much an oversimplification.
 
> One other very important impact of virtual functions is that they are a
> hard optimization boundary at compile time, i.e. inlining is impossible.
 
Virtual functions are not a hard optimisation boundary if the compiler
can figure them out at compile time (or link time, if you are using
link-time optimisation). Compilers have been improving quite a lot
recently at devirtualisation optimisations precisely to avoid
unnecessary costs in virtual functions.
 
Of course, if you access an object through a pointer to a base class,
and the compiler doesn't have all the relevant code at the time, then it
must use the virtual call mechanisms. But if the type of the object is
fully known, then the virtual call is handled directly - or can even be
inlined.
Ian Collins <ian-news@hotmail.com>: Jan 27 08:09PM +1300

Christopher Pisz wrote:
 
> I suppose you could argue the one time allocation on the heap at
> construction time vs on the stack, if your class resides on the stack
> anyway. However, what are you really saving and what are you giving up?
 
Locality of reference?
 
--
Ian Collins
jak <please@nospam.tnx>: Jan 27 05:38PM +0100

Il 27/01/2015 16:40, Christopher Pisz ha scritto:
> scenario and like I said, I suspect the OP was doing what he was doing
> just because "Me like meat. STL bad....C gud.", but looks like he lost
> interest in his topic.
 
I have not lost interest; indeed, I am following you with attention. My
problem was to serialize a resource. I use the static field in the class
to save the state of the resource, so that even when I declare a new
variable of that class in the various functions I always know the
situation of the resource.
 
PS:
I had to cut the rest of the discussion because my application gave me a
sending error. sorry.
David Brown <david.brown@hesbynett.no>: Jan 27 11:58PM +0100

On 27/01/15 16:28, Christopher Pisz wrote:
 
> Well, this is what I am getting at. If there is _one_ allocation, there
> is no need to apply a rule of "I won't use any STL container, because
> allocation!", which is often what I hear from C programmers.
 
Certainly for a one-off allocation, the time taken for the malloc (or
whatever "new" uses) is irrelevant. But issues 2 and 3 below still apply.
 
I am trying to give /valid/ reasons here - I agree with you that some
programmers will give invalid or mythical reasons for not using the STL
(or any other feature of C++ or the library).
 
>> compile-time checking and (if necessary) faster run-time checks to help
>> catch errors sooner.
 
> What exactly does "amenable to compile time checking" mean?
 
In the case of an array, this would mean spotting some out-of-bounds
accesses at compile-time.
 
The general rule of compile-time checking (and also optimisation) is to
give the compiler as much information as possible, make as much as
possible static and constant, and keep scopes to a minimum. In this
respect, static allocation is always better than dynamic allocation.
 
 
> "Run time checks to catch errors sooner?" What run time check is going
> to occur on a c-array? Run time checks would be a reason to use an STL
> container in the first place. Of course none is faster than some.
 
Run-time checks on a C array need to be implemented manually (unless you
have a compiler that supports them as an extension of some sort). They
could of course be added in a class that wraps a C array. But they will
be more efficient than for a vector, because the size is fixed and known
at compile-time.
 
Note that the new std::array<> template gives many of the advantages of
C arrays combined with the advantages of vectors, especially when the
array<> object is allocated statically (or at least on the stack). The
point here is static allocation, not the use of C arrays.
 
> contiguous block of memory on the stack vs a contiguous block of memory
> on the heap being more likely to be in the cache. Have anything to
> reference?
 
Think about how a stack works - especially if we are talking about small
arrays. The code will regularly be accessing the data on the stack, so
the stack will be in the cpu caches. Data allocated on the heap will,
in general, not be in the cache before.
 
It may seem that this does not matter - after all, your program will
write to the new array before reading it, and thus it does not matter if
the old contents of the memory is in the cache. But unless you are
writing using very wide vector stores (or using cache zeroing
instructions), when you write your first 32-bit or 64-bit value, the
rest of that cache line has to be read in from main memory before that
new item is written to the cache. And even if you are using wide stores
that avoid reads to the cache, allocating the new cache line means
pushing out existing cache data - if that's dirty data, it means writing
it to memory.
 
How big a difference this makes will depend on usage patterns, cpu type,
cache policies, etc.
 
Regarding instruction and register usage, it's fairly clear that a
dynamically allocated structure is going to need one more pointer and
one more level of indirection than a stack allocated or (preferably) a
statically allocated structure.
 
 
> background where it was relevant, but then they adopt silly rules like
> "don't ever use an STL container, because 'allocation'" in places where
> the difference is negligible.
 
Regardless of speed issues (which are often not relevant), there are
code correctness and safety issues in using dynamic memory. With C++
used well, many of these issues are solved by using resource container
classes (including the STL), smart pointers, etc., rather than the
"naked" pointers of C. Programmers should be more wary of dynamic
memory in C than in C++. But no matter what the language, programmers
should be aware of the costs and benefits of particular constructs, and
use them appropriately.
 
 
> You do, but I have my doubts that the OP did. I am willing to bet he was
> doing what he did, "just because", but we'll never know.
 
Indeed.
 
legalize+jeeves@mail.xmission.com (Richard): Jan 27 09:17PM

[Please do not mail me a copy of your followup]
 
David Brown <david.brown@hesbynett.no> spake the secret code
 
>And allocating a block statically takes no instructions at all.
 
Another takeaway from talking to the game guys that I didn't mention was
that they simply used fixed-size arrays to hold all their level data.
They adjust the fixed size to handle the largest level in their shipping
product. Simple resource management that retains locality of reference.
 
If for some reason you don't like C arrays, there is std::array that
has the same benefits.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Christopher Pisz <nospam@notanaddress.com>: Jan 26 04:30PM -0600

On 1/12/2015 8:30 AM, alessio211734 wrote:
 
> ...
> static NearData nearCells[32];
 
> };
 
 
 
After all the discussion of profiling and virtual functions in child
threads, ....I still want to know why in the world anyone would want a
static C-Array in their C++ class.
 
I'd sure like to know how the author intended to use this class, because
I question both making it static and using the c-array vs a stl container.
Christopher Pisz <nospam@notanaddress.com>: Jan 27 09:40AM -0600

On 1/26/2015 6:57 PM, Chris Vine wrote:
>> up?
 
> That is ridiculous. Allocation on the heap is a heavy-weight
> operation compared with allocation on the stack,
 
No one is arguing otherwise.
 
The thing I have a problem with is that I often end up working with
people who use that as an excuse to never use an STL container at all,
regardless of how often allocation may occur.
 
 
> relatively speaking,
> particularly in multi-threaded programs which require thread-safe
> allocation and deallocation.
 
std::vector<int>(10) is going to be the same whether my program is
multithreaded or not.
 
If there are thread safety concerns, then most likely the container
would be inside some class and the same locking mechanisms are going to
be added whether it is a std::vector, a c-array, or a snuffleupagus.
 
> Dynamic allocation also reduces cache
> locality.
 
Someone else said this too. Not disagreeing, but I've never read it. Got
a reference?
 
> initial size calls the default constructor for each element, so you
> sometimes have to reserve a size and then pushback onto it to avoid
> that.
 
but again, how long does it take to construct an int? It depends on the
scenario and like I said, I suspect the OP was doing what he was doing
just because "Me like meat. STL bad....C gud.", but looks like he lost
interest in his topic.
 
 
 
 
 
 
 
 
 
 
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jan 27 07:51PM

On Tue, 27 Jan 2015 09:40:12 -0600
> would be inside some class and the same locking mechanisms are going
> to be added whether it is a std::vector, a c-array, or a
> snuffleupugus.
 
That's not the point. The heap is a global resource and the _heap
manager_ has to deal with concurrency in a multi-threaded program. This
has nothing to do with concurrent access to the same container (if you
are going to have concurrent access to the same container you need to at
least look at std::list if you are going to have a lot of contention).
The fact that the heap manager has to cope with concurrency is one of
the reasons why dynamic allocation has overhead.
 
> > locality.
 
> Someone else said this too. Not disagreeing, but I've never read it.
> Got a reference?
 
I cannot come up with a study (although I would be surprised if there
wasn't one), but it must do so, because memory allocated on the heap
will not be in the same cache line as the memory for the object itself.

> the scenario and like I said, I suspect the OP was doing what he was
> doing just because "Me like meat. STL bad....C gud.", but looks like
> he lost interest in his topic.
 
It's the cost of unnecessary zero initialization of built in types.
Small, but it is there and if you can avoid it then why not?
 
I just come back to the point that failing to make use of an array (or
std::array) instead of std::vector where you have a statically
(constexpr) sized container of built in types, just because you don't
like C programming practices, seems crazy to me. It's a bit like
failing to call reserve() on a vector when you actually know
programmatically at run time what the final size of the vector will be,
and instead just relying on the vector's normal algorithm to allocate
memory in multiple enlarging steps as it is pushed onto. The code will
still work but you will be doing unnecessary allocations of memory and
copying or moving of vector elements. Don't do it if you can avoid it.
 
Take the low hanging fruit. It is not premature optimization to follow
a few simple rules which will make all your programs faster without any
extra effort.
 
Chris
Juha Nieminen <nospam@thanks.invalid>: Jan 27 08:21AM

> After all the discussion of profiling and virtual functions in child
> threads, ....I still want to know why in the world anyone would want a
> static C-Array in their C++ class.
 
Because it's enormously more efficient.
 
It's faster to allocate and deallocate (allocating memory for the array
doesn't take any additional time beyond allocating the instance of
the class the array is inside). It consumes less memory. It does not
contribute to memory fragmentation.
 
Sure, in a class that's instantiated a few times it doesn't matter.
However, in a class that's instantiated tens of thousands, or even
millions of times, it matters quite a lot.
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
Christopher Pisz <nospam@notanaddress.com>: Jan 27 11:04AM -0600

On 1/27/2015 10:38 AM, jak wrote:
 
> PS:
> I had to cut the rest of the discussion because my application gave me a
> sending error. sorry.
 
 
Oh, I thought some fellow named Alessio was the OP. You are using two
different accounts then, I suppose.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jan 27 12:57AM

On Mon, 26 Jan 2015 18:43:56 -0600
Christopher Pisz <nospam@notanaddress.com> wrote:
[snip]
> construction time vs on the stack, if your class resides on the stack
> anyway. However, what are you really saving and what are you giving
> up?
 
That is ridiculous. Allocation on the heap is a heavy-weight
operation compared with allocation on the stack, relatively speaking,
particularly in multi-threaded programs which require thread-safe
allocation and deallocation. Dynamic allocation also reduces cache
locality. Furthermore std::vector constructed with a particular
initial size calls the default constructor for each element, so you
sometimes have to reserve a size and then pushback onto it to avoid
that.
 
Are you seriously saying that in your code you always use std::vector
instead of an array even when the size is known statically? And why do
you think std::array is in C++11?
 
The case for using std::vector even with statically known sizes is when
you may carry out move operations, as moving vectors just requires
swapping pointers. But you would still need to profile.
 
I find your question disturbing.
 
Chris
legalize+jeeves@mail.xmission.com (Richard): Jan 26 09:20PM

[Please do not mail me a copy of your followup]
 
Tobias Müller <troplin@bluewin.ch> spake the secret code
>> simplistic bugaboos about particular language features.
 
>You are reducing the impact of virtual functions to cache hotness which is
>just as much an oversimplification.
 
No, I am summarizing a real discussion with game developers.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jan 27 12:25AM

On Mon, 26 Jan 2015 16:30:24 -0600
 
> I'd sure like to know how the author intended to use this class,
> because I question both making it static and using the c-array vs a
> stl container.
 
Clearly there is a use case for a statically sized container with
contiguous storage or std::array would not be in C++11. And if the
array is a non-static class member, it makes little difference whether
you use a plain array or std::array because both have the same compiler
generated copy constructor and assignment operator. That use case
generally involves efficiency, because unlike std::vector arrays are
not dynamically allocated (unless you explicitly new them, which is
stupid because std::vector is available). For the cases I have
mentioned, I use a plain array, partly because that is what I have done
for years, and partly because ... why not?
 
std::array permits a zero size and a plain array does not, but I have
never written code requiring a zero size array.
 
Whether it should be static member or not is orthogonal. If it is a
static member, it is of even less importance whether it is a plain
array or a std::array type. And maybe of course the OP was using
C++98/03.
 
Chris
David Brown <david.brown@hesbynett.no>: Jan 27 11:23AM +0100

On 27/01/15 01:43, Christopher Pisz wrote:
> hand, and if you know the size before hand, you can create the vector
> with that size, so what are we really optimizing away in the name of
> efficiency?
 
1.
 
Allocating a block of data on the stack takes a couple of instructions -
allocating it in heap can mean system calls, locks, threading issues,
etc. (Typically a new/malloc implementation holds a local pool that can
be quickly allocated, and only needs a system malloc to re-fill the
local pool - so allocation times can vary between "not much" to "a lot",
depending on the state of the local pool.)
 
And allocating a block statically takes no instructions at all.
 
If you are doing this once, dynamic allocation of a vector means wasting
a few microseconds, which is obviously not a concern. But if you are
doing it thousands of times, it adds up.
 
2.
 
Static allocation, or at least stack allocation, is far more amenable to
compile-time checking and (if necessary) faster run-time checks to help
catch errors sooner.
 
3.
 
Statically allocated data, or at least stack allocated data, requires
fewer instructions and fewer registers to access, making it faster. It
also has better locality of reference and is more likely to be in the
cache, which can make a huge difference (this depends on the size of the
data and the access patterns, of course).
 
 
Coming from an embedded background, where this is often much more
relevant than on desktop systems (partly because memory fragmentation on
heaps is a serious issue when you don't have virtual memory), there is a
clear golden rule that all allocations should be static if possible,
with stack allocation as a second-best. In many embedded systems,
dynamic allocation is not allowed at all - it is certainly never encouraged.
 
Christopher Pisz <nospam@notanaddress.com>: Jan 27 09:28AM -0600

On 1/27/2015 4:23 AM, David Brown wrote:
 
> If you are doing this once, dynamic allocation of a vector means wasting
> a few microseconds, which is obviously not a concern. But if you are
> doing it thousands of times, it adds up.
 
 
Well, this is what I am getting at. If there is _one_ allocation, there
is no need to apply a rule of "I won't use any STL container, because
allocation!", which is often what I hear from C programmers. More often
than not A) They are not even aware you can specify size at construction
time for a vector or reserve size after if you wish B) They haven't even
considered how often the container or class that owns the container is
being constructed.
 
 
> Static allocation, or at least stack allocation, is far more amenable to
> compile-time checking and (if necessary) faster run-time checks to help
> catch errors sooner.
 
What exactly does "amenable to compile time checking" mean?
 
"Run time checks to catch errors sooner?" What run time check is going
to occur on a c-array? Run time checks would be a reason to use an STL
container in the first place. Of course none is faster than some.
 
 
> also has better locality of reference and is more likely to be in the
> cache, which can make a huge difference (this depends on the size of the
> data and the access patterns, of course).
 
Not disagreeing with you, but I've never read any evidence of a
contiguous block of memory on the stack vs a contiguous block of memory
on the heap being more likely to be in the cache. Have anything to
reference?
 
> clear golden rule that all allocations should be static if possible,
> with stack allocation as a second-best. In many embedded systems,
> dynamic allocation is not allowed at all - it is certainly never encouraged.
 
Well, that's the thing. I often work with C programmers whom came from a
background where it was relevant, but then they adopt silly rules like
"don't ever use an STL container, because 'allocation'" in places where
the difference is negligible.
 
You do, but I have my doubts that the OP did. I am willing to bet he was
doing what he did, "just because", but we'll never know.
 
"Tobias Müller" <troplin@bluewin.ch>: Jan 26 07:13AM

> functions!" that is their simplistic takeaway from the real advice of
> "keep your cache hot". The latter is what you need to remember, not
> simplistic bugaboos about particular language features.
 
You are reducing the impact of virtual functions to cache hotness which is
just as much an oversimplification.
 
One other very important impact of virtual functions is that they are a
hard optimization boundary at compile time, i.e. inlining is impossible.
 
Tobi
Christopher Pisz <nospam@notanaddress.com>: Jan 26 06:43PM -0600

On 1/26/2015 6:25 PM, Chris Vine wrote:
> stupid because std::vector is available). For the cases I have
> mentioned, I use a plain array, partly because that is what I have done
> for years, and partly because ... why not?
 
I often hear that same argument from C programmers: "std::vector
allocates." If you are using a c-array, you must know the size before
hand, and if you know the size before hand, you can create the vector
with that size, so what are we really optimizing away in the name of
efficiency?
 
I suppose you could argue the one time allocation on the heap at
construction time vs on the stack, if your class resides on the stack
anyway. However, what are you really saving and what are you giving up?
 
 
Wouter van Ooijen <wouter@voti.nl>: Jan 28 08:52PM +0100


> My question particularly pertains to methods of using C++ in the automation industry with regards to patterns, frameworks, practices, uses, etc.
 
> Think of it like this: with the rise of platforms like the Raspberry Pi, Beaglebone Black, Wandboard, Udoo, Odroid, etc. the use of C++ in industrial hardware and embedded projects is more viable than ever before.
 
> But the use of C++ for such embedded projects will be quite different from a "normal desktop program".
 
Different in what sense? The platforms you mention are Linux-level
systems, suited to heavy work, but not so much to fast (ms and faster)
real-time work. The methods to program such systems don't differ that
much from desktop systems.
 
For programming hard resource-constrained systems (small memory, not a
lot of CPU power to spare, hard real-time requirements) C++ can (and IMO
should) be used, but in some aspects not as it is used on a desktop.
For instance:
 
- no heap (unpredictable timing)
- no exceptions (see http://www.voti.nl/blog/?p=40 for more explanation)
- no RTTI
- (depending on the hardware) no floating point
 
Most of the standard and third-party C++ libraries don't adhere to these
limitations, so they are not usable. Unfortunately, this is seldom clear
in advance.
 
Some C++ features can, when used correctly, be a big asset in such a
resource-constrained situation:
 
- templates (but note that the wrong use can spell a disaster)
- constexpr
 
Wouter van Ooijen
