Wednesday, February 4, 2015

Digest for comp.lang.c++@googlegroups.com - 9 updates in 1 topic

Christopher Pisz <nospam@notanaddress.com>: Feb 04 04:08PM -0600

On 2/4/2015 3:57 PM, Luca Risolia wrote:
>> feature? Compile time?
 
> It's more a concept than a feature. There are many pre-C++11
> implementations of static assertions.
 
Yeah, I see Boost has it. I'll read up on it there. Maybe I can use that
at work.
ghada glissa <ghadaglissa@gmail.com>: Feb 04 02:40PM -0800

> lowest to highest address.
 
> --
> Ben.
 
I'm obliged to use memcpy to set the nonce value, not only to print the bytes in order, as specified in this function:
 
void set_nonce(uint8_t *nonce,
               long unsigned int extended_source_address,
               uint32_t counter, uint8_t seclevel)
{
    /* 8 bytes || 4 bytes || 1 byte */
    /* extended_source_address || frame_counter || sec_lvl */

    memcpy(nonce, &extended_source_address, 8);
    memcpy(nonce + 8, &counter, 4);
    nonce[12] = seclevel;
}
jt@toerring.de (Jens Thoms Toerring): Feb 04 10:43PM

> If you want to use a unsigned int then use an unsigned int. There is no
> purpose at all to use a typedefed renaming when you intend on it being
> an specific type anyway. If you want to know the size, then use sizeof.
 
Sounds like you haven't had to deal much with programming close
to the hardware, where you often need to know exactly how many
bytes a type has (e.g. you need to write to a 16-bit wide
register of some memory-mapped device) and can't just use an
unsigned short int, hoping that it will have 2 bytes everywhere.
And that the relevant types in <stdint.h> aren't guaranteed to
exist everywhere is really a problem only with some rather unusual
systems - there you have a number of extra problems anyway. And if
there's no stdint.h file at all, using a replacement header
file, created before compilation with some helper program,
is much cleaner than sprinkling the whole code with checks
for sizeof.
 
And it's not only a problem with hardware-near programming
- I just had to write something for reading and writing
binary MATLAB (a popular programming system for numerical
computations in science) files. All the types in there
are defined as being either 1, 2, 4 or 8 bytes wide, so to
properly read and write them you need to know the exact sizes
of your types. Again, doing it all with endlessly repeated
checks of sizeof would have been a mess. So, obviously, I
don't agree with your statement that "there's no purpose at
all" for these types.
 
Of course, any idiot can mis-use them - but what feature of
C++ can't be mis-used? I've seen a lot of code where people
used operator overloading to produce an incomprehensible mess.
And there are a number of people that use this as an argument
that operator overloading is "evil" per se. But used reasonably
it can make code a lot simpler to write and understand and thus
to maintain.
 
I agree that in normal run-of-the-mill applications these
types are unnecessary - and their use can be a warning
sign that the author didn't really understand what he or she
was doing. But there are lots of legitimate uses for them
in other fields.
 
> stdint.h was a feeble attempt at cross-platform guarentees that aren't
> guarentees at all and the idea is a source of bugs time and again.
 
I couldn't disagree more - I had hoped for something like
that for quite some time before C99. <stdint.h> in itself, of
course, can't make a program magically cross-platform (or
more precisely, cross-architecture) safe. But judicious use
of it can help a lot in achieving this goal.
 
> }
 
> // carry on using unsigned char for your project
> // without modern programmers having to sort through your 5000 typedefs.
 
Well, that's hardly an option when you have to support such
"strange" architectures...
 
> OS, C++ implementation, or maintainability of the code in the future has
> even been a consideration, much less what the type is in relation to
> what he wants it to be and how it related to the other types he is using.
 
Perhaps you should consider that the OP is rather new to
programming (at least in this field) and is learning and
experimenting - I'd rather doubt that this is for some serious
project you might have to maintain some day;-) I've been there
myself, made lots of mistakes, but because of this I think that
I nowadays know how to use memcpy() (or std::copy()) properly.
And I'd much prefer people who are making the effort of trying
to understand what it is all about over those who just blindly
copy some (bad) habits from blogs, written by someone who
doesn't understand what he's writing about, and then do
cargo-cult programming for the rest of their lives.
 
But don't let that get in the way of a good rant, I enjoyed it;-)
Best regards, Jens
--
\ Jens Thoms Toerring ___ jt@toerring.de
\__________________________ http://toerring.de
Christian Gollwitzer <auriocus@gmx.de>: Feb 04 11:56PM +0100

On 04.02.15 at 22:31, Christopher Pisz wrote:
> If you want to use a unsigned int then use an unsigned int. There is no
> purpose at all to use a typedefed renaming when you intend on it being
> an specific type anyway. If you want to know the size, then use sizeof.
 
Fixed-width integers are very useful types in many fields of computing.
In fact, I claim that they are more useful than the "polymorphic" int,
long, short nonsense. For example, to implement the Mersenne twister you
need unsigned 64-bit arithmetic. Or to read image files from
disk into memory, you need unsigned 8-bit or unsigned 16-bit integer
types. Extremely easy if you have either cstdint or inttypes, but
extremely annoying when you need to remember that long on Windows is
always 32 bit regardless of pointer size, whereas it is 64 bit on 64-bit
Linux and 32 bit on 32-bit Linux, etc. The names of these typedefs are
also widespread, and there is cstdint, added in C++11 for exactly that.
I can't understand why you think these are not useful, and I don't see
why you think it is a C problem rather than a C++ one.
 
OTOH, using the generic "int" makes the program behave differently on
different platforms. For example, a program computing the factorial
using integer arithmetic overflows for different inputs on different
platforms if just int or unsigned is used. "int" should REALLY mean an
integer, i.e. something that never overflows. Too late to get that into
core C++, though.
 
Christian
Christopher Pisz <nospam@notanaddress.com>: Feb 04 05:07PM -0600

On 2/4/2015 4:43 PM, Jens Thoms Toerring wrote:
> file, created before compilation with some helper program,
> is much cleaner than sprinkling the whole code with checks
> for sizeof.
 
No, thank goodness. That's where C programmers lurk and I don't get
along with them at all. Imagine that!
 
Even so, don't you have an entry point where one check can be made for
the entire project, whether it be an executable, a DLL, or a library?
 
You know beforehand what architecture you are targeting, I'm sure.
 
 
 
> checks of sizeof would have been a mess. So, obviouslym, I
> don't agree with your statement that "there's no purpose at
> all" for these types.
 
Well, again, I am not seeing, why you need to use sizeof more than once.
 
Regardless, do you really prefer to maintain and decipher lines like:
 
int4_t void(uint8_t x)
{
BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
unsigned char y = (byte[1] & x);
return (int)y;
}
 
?
 
because that is inevitably what I've ended up having to sort out when
more than one programmer gets his hands on it over the years. Granted,
it isn't usually as directly obvious as the example; it's usually quite indirect.
 
> that operator overloading is "evil" per se. But used reasonably
> it can make code a lot simpler to write and understand and thus
> to maintain.
 
Given the option to rename any type 3 or more times vs the option for it
to always have the same name, I think the latter is without question
simpler. If there is a specific reason to do otherwise, then that's
another matter. Like we said, the OP has no specific reason.
 
> ming for the rest of their lifes.
 
> But don't let that get in the way of a good rant, I enjoyed it;-)
> Best regards, Jens
 
He isn't learning, because he has over and over posted C-style code
chock-full of bad habits and errors as a result, with no justification
for their use. If the OP has any reason at all to use uint8_t then I'd
love for him to say what it is.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 11:15PM

On 04/02/2015 21:31, Christopher Pisz wrote:
> OS, C++ implementation, or maintainability of the code in the future has
> even been a consideration, much less what the type is in relation to
> what he wants it to be and how it related to the other types he is using.
 
You have just made the age-old Usenet mistake of, rather than admitting
your mistake, ranting and writing a load of absolute bollocks,
digging an even bigger hole for yourself. Sorry mate but you are a n00b.
 
/Flibble
jt@toerring.de (Jens Thoms Toerring): Feb 04 11:18PM

> Thank you Mr Jens for your help, i used printf( "%02x", nonce[ i ] ) and it
> improves the display of values less than 0x10, i get this result
> 010000000048deac but i can't resolve the problem of order with memcpy.
 
There is no problem with memcpy(); as others and I have pointed
out, it's behaving exactly as it should. The problem is with what
you expect. Let's look at it again. Consider a system where
an int has two bytes (as was not uncommon a few years ago).
Now you write
 
unsigned int x = 256 + 5; // or 0x0105
 
There are obviously two bytes in that value: a more-significant
byte, 256, represented by 0x01, and a less-significant byte, 5 =
0x05 (where "more-significant" means it has a larger impact on
the resulting value - if you add or subtract 1 from the less-
significant byte, the value of 'x' changes by just 1, but if
you add or subtract 1 from the more-significant byte, 'x'
changes by +/-256).
 
Now: how is this stored in memory? There are two possibilities.
You can have a machine where the more-significant byte (0x01) is
stored at a lower address than the less-significant byte (called
"big-endian"). And you can have a system where it's done just the
other way round ("little-endian").
 
If you now do
 
unsigned char *z = reinterpret_cast< unsigned char * >( &x );
 
then 'z' points to the address where 'x' starts. On a big-
endian system doing
 
printf( "%02x", z[ 0 ] );
 
prints out "01". On a little-endian system the exact same code
will print out "05". And, of course,
 
printf( "%02x", z[ 1 ] );
 
prints either "05" (big-endian) or "01" (little-endian).
 
So what you see here is not a defect of memcpy() or anything
else going wrong, but purely the effect of how integers with
more than a single byte get stored on your machine. Obviously,
you're using a little-endian machine, and thus the byte with
the least significance (the "lowest digits" of the value)
gets stored in memory at the lowest address. That's why it's the
one you get first when you look at the individual bytes via
your 'nonce' pointer. And when you then look at the higher
addresses via 'nonce' you get the more and more significant
bytes. But if you had a computer with a big-endian CPU, the
result would be the exact reverse.
Regards, Jens
--
\ Jens Thoms Toerring ___ jt@toerring.de
\__________________________ http://toerring.de
Christopher Pisz <nospam@notanaddress.com>: Feb 04 05:22PM -0600

On 2/4/2015 4:56 PM, Christian Gollwitzer wrote:
> extremely annoying when you need to remember that long on Windows is
> always 32bit regardless of pointer size, whereas it is 64 bit on a 64bit
> Linux and 32 bit on a 32bit Linux etc.
 
How is remembering that a long is 32 bits on Windows and 64 bits on
64-bit Linux harder than remembering what the hell a uint8_t is and
that you'd better be damn sure that _no one ever_ assigns anything but
another uint8_t to it, directly or indirectly?
 
> also widespread and there is cstdint, added in C++11 for that. I can't
> understand why you think these are not useful, and I don't see why you
> think it is a C problem rather than a C++ one
 
I've already explained why, more than once:
hours and hours of debugging.
 
It's a C problem because those who program C-style are the ones who
use it. It's on my list of craptastic habits I run across on the job
from bug-generating programmers... that, and the header says so.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 11:25PM

On 04/02/2015 22:56, Christian Gollwitzer wrote:
> platforms, if just int or unsigned is used. "int" should REALLY mean an
> integer, i.e. something that never overflows. Too late to get that into
> the core C++, though.
 
I couldn't agree with you more. The MISRA safety-critical C++ coding
standard even bans the use of 'int', 'char' etc. and enforces the use of
the sized integer typedefs to help ensure that there are no nasty
surprises causing planes to fall out of the sky etc.
 
Personally, when I am not using iterators to loop I use std::size_t to
index (not 'int') and the sized integer typedefs nearly everywhere else.
IMO it is important to know what the valid range of values for a
scalar type is in any algorithm, and the sized integer typedefs allow
this. IMO 'int' should really only be used as the return type for main()!
 
/Flibble
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
