Tuesday, January 27, 2015

Digest for comp.lang.c++@googlegroups.com - 11 updates in 4 topics

ghada glissa <ghadaglissa@gmail.com>: Jan 27 01:55PM -0800

Dear all,
 
Is there any predefined function that converts an int to a hex or octet string in C++?
 
Regards.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 27 10:09PM

On 27/01/2015 21:55, ghada glissa wrote:
> Dear all,
 
> Is there any predefined function that converts an int to a hex or octet string in C++?
 
> Regards.
 
std::ostringstream.
 
/Flibble
Ian Collins <ian-news@hotmail.com>: Jan 28 11:12AM +1300

ghada glissa wrote:
> Dear all,
 
> Is there any predefined function that converts an int to a hex or octet string in C++?
 
It's easy with a stringstream:
 
#include <sstream>
#include <string>

std::string intToHex( int n )
{
    std::ostringstream os;

    os << std::hex << n;

    return os.str();
}

Or more generally:

template <typename T> std::string integerToHex( T n )
{
    std::ostringstream os;

    os << std::hex << n;

    return os.str();
}
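
For what it's worth, a quick usage sketch (the values are just illustrative):

#include <iostream>

int main()
{
    std::cout << intToHex( 255 ) << '\n';               // prints ff
    std::cout << integerToHex( 0x12f5e188u ) << '\n';   // prints 12f5e188
}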
 
--
Ian Collins
ghada glissa <ghadaglissa@gmail.com>: Jan 27 02:42PM -0800

On Tuesday, January 27, 2015 at 10:55:23 PM UTC+1, ghada glissa wrote:
> Dear all,
 
> Is there any predefined function that converts an int to a hex or octet string in C++?
 
> Regards.
 
Thank you for your reply.
This function intToHex returns a string, but I want to get an array of char, which will be easier to handle.
 
For example, res = intToHex(X) = 12f5e188;
I want it as:
res[0]=12
res[1]=f5
res[2]=e1
res[3]=88
 
Regards.
Ian Collins <ian-news@hotmail.com>: Jan 28 12:16PM +1300

ghada glissa wrote:
 
>> Is there any predefined function that converts an int to a hex or octet string in C++?
 
>> Regards.
 
> Thank you for your reply.
 
Please reply to the correct message and quote some context!
 
> This function intToHex returns a string, but I want to get an array of char, which will be easier to handle.
 
You asked for a function that converted to a string....
 
You can't return an array from a function, so how about:
 
#include <array>

template <typename T> std::array<unsigned char, sizeof(T)>
toChar( T n )
{
    constexpr auto size = sizeof(T);

    std::array<unsigned char, size> data;

    // Fill from the last element backwards, so the most significant
    // byte of n ends up in data[0].
    for( auto i = size; i; --i, n >>= 8 )
    {
        data[i-1] = static_cast<unsigned char>(n);
    }

    return data;
}
 
Which passes your test.
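
For example, a minimal usage sketch (the input is just your example value):

#include <cstdio>

int main()
{
    auto res = toChar( 0x12f5e188u );

    for( auto byte : res )
        std::printf( "%02x\n", static_cast<unsigned>( byte ) );   // prints 12, f5, e1, 88
}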
 
 
--
Ian Collins
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jan 27 07:51PM

On Tue, 27 Jan 2015 09:40:12 -0600
> would be inside some class and the same locking mechanisms are going
> to be added whether it is a std::vector, a c-array, or a
> snuffleupugus.
 
That's not the point. The heap is a global resource and the _heap
manager_ has to deal with concurrency in a multi-threaded program. This
has nothing to do with concurrent access to the same container (if you
are going to have concurrent access to the same container with a lot of
contention, you need to at least look at std::list).
The fact that the heap manager has to cope with concurrency is one of
the reasons why dynamic allocation has overhead.
 
> > locality.
 
> Someone else said this too. Not disagreeing, but I've never read it.
> Got a reference?
 
I cannot come up with a study (although I would be surprised if there
wasn't one), but it must do so, because memory allocated on the heap
will not be in the same cache line as the memory for the object itself.

> the scenario and like I said, I suspect the OP was doing what he was
> doing just because "Me like meat. STL bad....C gud.", but looks like
> he lost interest in his topic.
 
It's the cost of unnecessary zero initialization of built-in types.
Small, but it is there, and if you can avoid it then why not?
 
I just come back to the point that failing to use an array (or
std::array) instead of std::vector where you have a statically
(constexpr) sized container of built-in types, just because you don't
like C programming practices, seems crazy to me. It's a bit like
failing to call reserve() on a vector when you actually know
programmatically at run time what the final size of the vector will be,
and instead just relying on the vector's normal algorithm to allocate
memory in multiple enlarging steps as elements are pushed onto it. The
code will still work, but you will be doing unnecessary allocations of
memory and copying or moving of vector elements. Don't do it if you can
avoid it.
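
For example, a minimal sketch of the reserve() point (the function name is made up):

#include <cstddef>
#include <vector>

std::vector<int> make_squares( std::size_t n )
{
    std::vector<int> v;
    v.reserve( n );    // one allocation up front, sized from what we already know

    for( std::size_t i = 0; i != n; ++i )
        v.push_back( static_cast<int>( i * i ) );   // no reallocation or copying here

    return v;
}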
 
Take the low hanging fruit. It is not premature optimization to follow
a few simple rules which will make all your programs faster without any
extra effort.
 
Chris
legalize+jeeves@mail.xmission.com (Richard): Jan 27 09:17PM

[Please do not mail me a copy of your followup]
 
David Brown <david.brown@hesbynett.no> spake the secret code
 
>And allocating a block statically takes no instructions at all.
 
Another takeaway from talking to the game guys that I didn't mention was
that they simply used fixed-sized arrays to hold all their level data.
They adjust the fixed size to handle the largest level in their shipping
product. Simple resource management that retains locality of reference.
 
If for some reason you don't like C arrays, there is std::array that
has the same benefits.
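
Something along these lines, say (a hypothetical sketch; the names and the
size are made up):

#include <array>
#include <cstddef>

struct Entity { float x, y, z; int type; };

// Sized for the largest level in the shipping product.
constexpr std::size_t kMaxLevelEntities = 4096;

// Statically allocated, contiguous, and no heap traffic.
std::array<Entity, kMaxLevelEntities> levelEntities;
std::size_t levelEntityCount = 0;   // how many entries are actually in use
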
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
David Brown <david.brown@hesbynett.no>: Jan 27 11:58PM +0100

On 27/01/15 16:28, Christopher Pisz wrote:
 
> Well, this is what I am getting at. If there is _one_ allocation, there
> is no need to apply a rule of "I won't use any STL container, because
> allocation!", which is often what I hear from C programmers.
 
Certainly for a one-off allocation, the time taken for the malloc (or
whatever "new" uses) is irrelevant. But issues 2 and 3 below still apply.
 
I am trying to give /valid/ reasons here - I agree with you that some
programmers will give invalid or mythical reasons for not using the STL
(or any other feature of C++ or the library).
 
>> compile-time checking and (if necessary) faster run-time checks to help
>> catch errors sooner.
 
> What exactly does "amenable to compile time checking" mean?
 
In the case of an array, this would mean spotting some out-of-bounds
accesses at compile-time.
 
The general rule of compile-time checking (and also optimisation) is to
give the compiler as much information as possible, make as much as
possible static and constant, and keep scopes to a minimum. In this
respect, static allocation is always better than dynamic allocation.
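
For instance, a minimal sketch: with a std::array, std::get checks its index
at compile time, so an out-of-range access is rejected outright.

#include <array>

int main()
{
    std::array<int, 4> a{ 1, 2, 3, 4 };

    int ok = std::get<3>( a );       // fine: index validated at compile time
    // int bad = std::get<4>( a );   // does not compile: index out of range

    return ok;
}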
 
 
> "Run time checks to catch errors sooner?" What run time check is going
> to occur on a c-array? Run time checks would be a reason to use an STL
> container in the first place. Of course none is faster than some.
 
Run-time checks on a C array need to be implemented manually (unless you
have a compiler that supports them as an extension of some sort). They
could of course be added in a class that wraps a C array. But they will
be more efficient than for a vector, because the size is fixed and known
at compile-time.
 
Note that the new std::array<> template gives many of the advantages of
C arrays combined with the advantages of vectors, especially when the
array<> object is allocated statically (or at least on the stack). The
point here is static allocation, not the use of C arrays.
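
For example, a minimal sketch of checked versus unchecked access on a
stack-allocated std::array:

#include <array>
#include <iostream>
#include <stdexcept>

int main()
{
    std::array<int, 4> a{};   // statically sized, lives on the stack here

    a[2] = 42;                // unchecked access, same cost as a C array

    try
    {
        a.at( 7 ) = 1;        // checked access: throws std::out_of_range
    }
    catch( const std::out_of_range& e )
    {
        std::cerr << e.what() << '\n';
    }
}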
 
> contiguous block of memory on the stack vs a contiguous block of memory
> on the heap being more likely to be in the cache. Have anything to
> reference?
 
Think about how a stack works - especially if we are talking about small
arrays. The code will regularly be accessing the data on the stack, so
the stack will be in the CPU caches. Data allocated on the heap will,
in general, not be in the cache beforehand.
 
It may seem that this does not matter - after all, your program will
write to the new array before reading it, and thus it does not matter if
the old contents of the memory are in the cache. But unless you are
writing using very wide vector stores (or using cache zeroing
instructions), when you write your first 32-bit or 64-bit value, the
rest of that cache line has to be read in from main memory before that
new item is written to the cache. And even if you are using wide stores
that avoid reads to the cache, allocating the new cache line means
pushing out existing cache data - if that's dirty data, it means writing
it to memory.
 
How big a difference this makes will depend on usage patterns, cpu type,
cache policies, etc.
 
Regarding instruction and register usage, it's fairly clear that a
dynamically allocated structure is going to need one more pointer and
one more level of indirection than a stack allocated or (preferably) a
statically allocated structure.
 
 
> background where it was relevant, but then they adopt silly rules like
> "don't ever use an STL container, because 'allocation'" in places where
> the difference is negligible.
 
Regardless of speed issues (which are often not relevant), there are
code correctness and safety issues in using dynamic memory. With C++
used well, many of these issues are solved by using resource container
classes (including the STL), smart pointers, etc., rather than the
"naked" pointers of C. Programmers should be more wary of dynamic
memory in C than in C++. But no matter what the language, programmers
should be aware of the costs and benefits of particular constructs, and
use them appropriately.
 
 
> You do, but I have my doubts that the OP did. I am willing to bet he was
> doing what he did, "just because", but we'll never know.
 
Indeed.
 
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 27 10:23PM

On Mon, 2015-01-26, Tobias Müller wrote:
>> Qsort [not qsorted. ..] etc
 
> My guess is that he actually meant functions in the mathematical sense
> (pure, without side effects).
 
Or in the Pascal sense: they have a return value. Pike doesn't
really define the terms (perhaps it wasn't necessary in the 1980s,
when Pascal was big?) but he says "Functions are used in expressions
[...]".
 
And then he goes on to give examples:
 
if(checksize(x)) // bad
if(validsize(x)) // good
 
and it strikes me that 25 years later, too many people haven't
even gotten /that/ right.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Robert Wessel <robertwessel2@yahoo.com>: Jan 27 01:55PM -0600

On Tue, 27 Jan 2015 12:54:38 -0600, Christopher Pisz
>or
>Should the thread terminate on its own?
 
>What is expected to happen if this occurs?
 
 
That is firmly in the realm of undefined behavior as far as C++ is
concerned, so no help there.
 
Windows will by default terminate a process if any of its threads
abend. But the application can have code to catch* (not in the C++
sense) traps, and if it does, pretty much anything can happen again.
 
Other OSs are usually similar (*nix, for example), although there are
variances (using the traditional "ATTACH" mechanism to implement
threads in MVS, for example, creates something that's halfway between
a process and a thread, and the attached process/thread can abend
without the other processes/threads getting killed, although there is
a notification).
 
 
*One possibility is Windows' structured exception handling (SEH).
Robert Wessel <robertwessel2@yahoo.com>: Jan 27 02:02PM -0600

On Tue, 27 Jan 2015 13:34:46 -0600, Paavo Helde
 
>Access violation is most often caused by the program invoking Undefined
>Behavior, so anything can happen. Typically, in the Linux world you could be
>pretty certain the process is killed on the spot by the OS.
 
 
Well, unless the application set up a SIGSEGV or SIGBUS handler.
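
For example, a rough POSIX sketch of such a handler (only async-signal-safe
calls inside it; the fault at the end is deliberate, just to trigger it):

#include <signal.h>
#include <unistd.h>

extern "C" void on_segv( int )
{
    const char msg[] = "caught SIGSEGV\n";
    write( STDERR_FILENO, msg, sizeof msg - 1 );   // async-signal-safe
    _exit( 1 );                                    // don't return into the faulting code
}

int main()
{
    struct sigaction sa {};
    sa.sa_handler = on_segv;
    sigemptyset( &sa.sa_mask );
    sigaction( SIGSEGV, &sa, nullptr );

    int* p = nullptr;
    *p = 42;    // undefined behaviour: deliberately provoke the access violation
}
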
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
