Thursday, February 5, 2015

Digest for comp.lang.c++@googlegroups.com - 25 updates in 7 topics

"Öö Tiib" <ootiib@hot.ee>: Feb 04 07:07PM -0800

On Thursday, 5 February 2015 03:13:29 UTC+2, Christopher Pisz wrote:
 
> Nope.
> Is it difficult to understand that they aren't guaranteed by that very
> standard?
 
That "not guaranteed" means that when you have <cstdint> included but
'uint8_t' does not compile then the system does not have 8 bit unsigned
bytes. You may achieve same effect with:
 
#include <limits>
static_assert( std::numeric_limits<unsigned char>::digits == 8,
               "should have 8 bit unsigned bytes");
 
I prefer to use 'uint8_t' where I need 8-bit bytes; it achieves the
same effect, documents the intent, and is a lot shorter to type.
 
> Is it also difficult to understand that they aren't needed without a
> _very_ specific scenario?
 
In practice you need them whenever you read or store data in some
binary format or communicate over some binary protocol.
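
For example, a minimal sketch of reading a fixed-layout binary header
with the fixed-width types (the record layout and field names here are
invented for illustration):

    #include <cstdint>

    // Hypothetical 8-byte record header: 2-byte magic, 2-byte version,
    // 4-byte payload length, all little-endian on the wire.
    struct Header {
        std::uint16_t magic;
        std::uint16_t version;
        std::uint32_t length;
    };

    // Assemble multi-byte fields explicitly so the code depends neither
    // on host endianness nor on struct padding.
    Header parse_header(const std::uint8_t* buf)
    {
        Header h;
        h.magic   = std::uint16_t(buf[0] | (buf[1] << 8));
        h.version = std::uint16_t(buf[2] | (buf[3] << 8));
        h.length  = std::uint32_t(buf[4])        | (std::uint32_t(buf[5]) << 8)
                  | (std::uint32_t(buf[6]) << 16) | (std::uint32_t(buf[7]) << 24);
        return h;
    }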
 
> I mean, by your and other poster's logic, we might as well take unsigned
> char, char, int, unsigned int, and all the other primitives right out of
> the language, no?
 
Silver bullets are always wrong.
 
Let me look. I use 'int' when the value can't logically exceed 30 000,
'long' when the value can't logically exceed 2 000 000 000, and
'long long' I don't use because 'int64_t' is slightly less to type.
I never use 'signed char' or 'short' because those make sense only when
I care about bits.
I never use any 'unsigned' variants. They are a lot to type, and if I
don't need bits but the value is unsigned then it is either 'size_t' or
'wchar_t' anyway.
In the rest of the cases I care about bits, and so 'int8_t' or
'uint32_t' document it a lot better.
 
The types that I never use do not bother me; maybe someone else needs
them, so let them stay in the language and people may do what they want.
 
> but they sure like to create 5000 line headers to make their own.
 
> Perhaps, if I wrote drivers for a living, I'd run across it more, but
> guess what? I don't, nor do hundreds of thousands of other C++ programmers.
 
Argumentum ad populum still remains logically fallacious by definition.
I do not care how many millions do it. If it does not suit my needs
or style then I don't do it.
 
Christian Gollwitzer <auriocus@gmx.de>: Feb 05 07:52AM +0100

Am 05.02.15 um 00:22 schrieb Christopher Pisz:
> harder than remembering what the hell a uint8_t is and that you better
> be damn sure that _noone ever_ assigns anything but another uint8_t to
> it directly or indirectly?
 
Do you have the same problem remembering what the hell the sin()
function from cmath means, and how do you assure that nobody ever
reassigns it to the cosine function? uint8_t etc. are defined in
standard C++; they are part of the language, and they are defined with
a precise meaning.
 
 
> It's a C problem because those who program C-style are the ones whom
> use it. It's on my list of craptastic habits I run across on the job
> from bug generating programmers...that and the header says so.
 
Those who need fixed-width integers are the ones who use them. If you
don't have fixed-width arithmetic but need it, it is very ugly to
express with lots of &0xFF masks littered through the code. And why do
that, when every current platform provides hardware support for these
types anyway?
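
For illustration, a sketch of what 8-bit wraparound addition looks like
with and without the fixed-width type (the function names are invented):

    #include <cstdint>

    // Without uint8_t: emulate 8-bit wraparound on a wider type by
    // masking after every operation.
    unsigned add8_masked(unsigned a, unsigned b)
    {
        return (a + b) & 0xFFu;
    }

    // With uint8_t: unsigned arithmetic wraps modulo 256 by definition,
    // so no masking is needed.
    std::uint8_t add8(std::uint8_t a, std::uint8_t b)
    {
        return std::uint8_t(a + b);  // cast back after integer promotion
    }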
 
Christian
Ian Collins <ian-news@hotmail.com>: Feb 05 08:24PM +1300

Christopher Pisz wrote:
 
> It's a C problem because those who program C-style are the ones whom
> use it. It's on my list of craptastic habits I run across on the job
> from bug generating programmers...that and the header says so.
 
To paraphrase Flibble: utter bollocks mate.
 
A good percentage of the C++ code I write relies on fixed width types,
and the code is about as far from "C-style" as you can get. Maybe you
are lucky enough to code in a bubble that doesn't interface with the
real world. Many of us don't, and that most certainly does not make us
"C-style" programmers.
 
--
Ian Collins
David Brown <david.brown@hesbynett.no>: Feb 05 10:02AM +0100

On 05/02/15 01:41, Christopher Pisz wrote:
>> macro.
 
> I wouldn't think so. C++ is a big huge changing beast. No-one has
> everything memorized. That's why we get the big bucks, eh?
 
No arguments there - C++ is big, and getting bigger.
 
But static assertions are such an important part of good development
practice (as I see it) that many people have used them heavily long
before they became part of the standard. There are lots of pre-C++11
implementations around (google is your friend) - a simple one like this
works fine for most uses:
 
#define STATIC_ASSERT_NAME_(line) STATIC_ASSERT_NAME2_(line)
#define STATIC_ASSERT_NAME2_(line) assertion_failed_at_line_##line
/* A failing claim gives the char array a negative size, forcing a
   compile-time error. __COUNTER__ is a common compiler extension
   (gcc, clang, MSVC) that keeps the generated names unique. */
#define static_assert(claim, warning) \
    typedef struct { \
        char STATIC_ASSERT_NAME_(__COUNTER__) [(claim) ? 2 : -2]; \
    } STATIC_ASSERT_NAME_(__COUNTER__)
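
For example (hypothetical uses - the failing case produces an error
that mentions the generated assertion_failed_at_line_<N> name):

    static_assert(sizeof(long) >= 4, "long is too small"); /* compiles */
    static_assert(sizeof(char) == 2, "cannot happen");     /* error:
        negative array size */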
 
 
The key difference with the language support for static_assert is that
error messages are clearer.
 
> Never had to use it, or went looking for such a mechanism, because,
> again, never needed it. But sure, I come across things all the time,
> especially with C++11, that sound neat.
 
Static assertions let you make your assumptions explicit, and let the
compiler check those assumptions at zero cost.
 
 
> I fully realize this. I was demonstrating how silly the concern was
> about whether or not a char is one byte. I was saying, if you really
> feel the need to check if it is one byte, then do sizeof.
 
Unfortunately for you, with all the other mistakes you have been making
with these posts, the assumption was that you were making another one.
 
> don't see anywhere in his code where the number of bits matters. It sure
> looks like he is concerned with bytes or nibbles, but who knows...his
> code is nowhere near a working example.
 
No, it does not look like anything of the sort. It looks like he wants
to examine the individual 8-bit bytes that make up the long integer he
has. And when you want to look at 8-bit bytes, uint8_t is the /correct/
type to use - anything else is wrong.
 
It is true that there were several errors in his code. One of them is
the use of "long unsigned int" instead of "uint64_t", since the size of
"long unsigned int" varies between platforms - of the common PC
systems, only 64-bit Linux (and other *nix) has 64-bit longs.
 
> platforms where types can be different lengths and contain different
> ranges. The OP stated no such requirement and is, has, and I bet will
> continue to, write code in this style without any logical reason.
 
No, it is about being explicit and using the types you want, rather than
following bad Windows programmers' practice of picking a vaguely defined
type that looks like it works today, and ignoring any silent corruption
you will get in the future.
 
 
> I still say, and always will say that you know your architecture up
> front. In most cases, you have to in order to even compile with the
> correct options.
 
An increasing proportion of code is used on different platforms. Even
if you stick to the PC world, there are four major targets - Win32,
Win64, Linux32 and Linux64, which can have subtle differences. I agree
that one should not go overboard about portability - for example, code
that accesses the Windows API will only ever run on Windows. But it is
always better to be clear and explicit about what you are doing - if you
want a type that is 8 bits, or 64 bits, then say so.
 
The alternative is to say "I want a type to hold this information, but I
don't care about the details" - in C++11, that is written "auto". It is
not written "int".
 
When you write "int", you are saying "I want a type that can store at
least 16-bit signed integers, and is fast". Arguably you might know
your code will run on at least a 32-bit system, and then it means "at
least 32-bit signed integer" - but it does /not/ mean /exactly/ 32 bits.
"Long int" is worse - on some current and popular systems it is 32
bits, on others it is 64 bits.
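
As a concrete illustration, a sketch of how the same declarations do or
do not pin down the width:

    #include <cstdint>

    int          a;  // at least 16 bits, usually fast; width varies
    long         b;  // 32 bits on Win64, 64 bits on 64-bit Linux
    std::int32_t c;  // exactly 32 bits, or the code does not compile
    std::int64_t d;  // exactly 64 bits, or the code does not compile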
 
 
> To truly be "platform independent" you have to jump through all manner
> of disgusting hoops and I have yet in 20 years ever come across any real
> life code, no matter how trivial, that was truly platform independent.
 
Perhaps you live in a little old-fashioned Windows world - in the *nix
world there is a vast array of code that is portable across a wide range
of systems. Many modern programs are portable across Linux, Windows and
Mac systems. And in the embedded world, there is a great deal of code
that can work fine on cpus from 8 bits to 64 bits, with big and little
endian byte ordering.
 
Of course, portable code like this depends (to a greater or lesser
extent) on non-portable abstraction layers to interact with hardware or
the OS.
 
And there are usually limits in portability - it is common to have some
basic requirements such as 8-bit chars (though there are real-world
systems with 16-bit and 32-bit chars) or two's complement arithmetic.
Writing code that is portable across systems that fall outside of those
assumptions is often challenging, interesting, and pointless.
 
 
> You will, I guarantee, break your project at some point using typedefs
> for primitive types. I've seen it and wasted hours upon hours on it,
> when the target platform was known, singular, and Windows no less.
 
You can guarantee that you will break your project when you use the
/wrong/ types. And this attitude is particularly prevalent amongst
Windows programmers - it's just "64K should be enough for anyone" all
over again. It means that programs are pointlessly broken when changes
are made or the target /does/ change - such as from Win32 to Win64 -
because some amateur decided that they were always going to work on
Windows, and you can always store a pointer in a "long int".
David Brown <david.brown@hesbynett.no>: Feb 05 10:41AM +0100

On 05/02/15 02:13, Christopher Pisz wrote:
 
> Nope.
> Is it difficult to understand that they aren't guaranteed by that very
> standard?
 
They are guaranteed by the standards to exist and function exactly the
way they say, as long as the hardware supports it. So by using them,
you get what you want - or you get a clear compile-time failure. And
unless you are working with portability across rather specialised
architectures, the types /will/ exist. There are very few modern cpus
that don't use two's complement integers, and the only modern cpus that
don't have 8-bit chars are specialised DSPs. So for almost all
practical purposes, the types /do/ exist.
 
 
> Is it also difficult to understand that they aren't needed without a
> _very_ specific scenario?
 
The OP needed a type big enough to hold 0xacde480000000001. Using "long
unsigned int" is wrong, because it is not big enough on many platforms.
Using "uint64_t" is correct, because it will always do the job.
 
The fixed-size integers are not specialised - they are standard, and
form part of good programming practice when you need a particular size.
 
<http://www.cplusplus.com/reference/cstdint/>
 
 
> I mean, by your and other poster's logic, we might as well take unsigned
> char, char, int, unsigned int, and all the other primitives right out of
> the language, no?
 
Yes.
 
If C and C++ (this is common to both languages) were being re-created
today, types such as "unsigned char" and "long int" /would/ be taken out
of the language. The fundamental types would be defined as fixed size,
two's complement - support for other types of signed integer would not
exist in a language designed today. It is likely that they would be
defined with some general mechanism (looking a bit like templates), but
that's just a detail.
 
Types named "int", "long_int", etc., might be defined with typedefs for
convenience. The definitions would be:
 
typedef int_fast16_t int;
typedef int_fast32_t long_int;
 
etc.
 
Personally, I would not bother with those in a new language - especially
when "auto" exists.
 
I would also include ranged types as a key feature - so that you would
use something like "range<0, 999_999>" for an integer of a specific
range, which would be implemented (in this case) as a uint32_t with
compile-time checking of values when possible.
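
Nothing like that is in the standard; a rough C++11 sketch of the idea,
with all the names invented:

    #include <cstdint>

    // Hypothetical ranged integer: stores a uint32_t and rejects
    // out-of-range constants at compile time.
    template <std::uint32_t Lo, std::uint32_t Hi>
    class range {
    public:
        template <std::uint32_t V>
        static range make()
        {
            static_assert(Lo <= V && V <= Hi, "value out of range");
            return range(V);
        }
        std::uint32_t value() const { return v_; }
    private:
        explicit range(std::uint32_t v) : v_(v) {}
        std::uint32_t v_;
    };

    // usage: auto x = range<0, 999999>::make<42>();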
 
 
(There are complications involving type compatibility that I am not
considering here - such as the difference between an int* and a long
int* even on platforms with 32-bit int and long int.)
 
 
>> almost universally understood by C and C++ programmers alike.
 
> Nope. I've worked with an awful lot of C++ programmers. Somewhere
> around 5k, maybe? Never seen one use any of these types. Not once.
 
I think you work with Windows C++ programmers, who are used to the mess
that MS has made of types. Outside that, the types will be more common.
 
I also said that the types would be understood - that does not mean they
are commonly used (in the Windows world).
 
 
> I've also worked with a lesser number of C programmers on C++ projects,
> who inevitably created 80% of the bugs. They didn't use these either,
> but they sure like to create 5000 line headers to make their own.
 
Bad programming practice is not restricted to C programmers - C++
programmers do it too. And the inability of some C programmers to write
good C++ code does not mean there is a problem with an important part
of the C++ language standard just because it was first used in C.
 
> Perhaps, if I wrote drivers for a living, I'd run across it more, but
> guess what? I don't, nor do hundreds of thousands of other C++ programmers.
 
Certainly you would come across such types more in low-level programming
- drivers, interfaces, embedded systems, etc.
 
But if you have ever used a binary file format, or any other situation
where you transferred data into or out of your program in a binary
format, then you should have used these types.
 
>> illogical and inconsistent set of typedefs for the Windows APIs.
 
> Oh, you mean the typedefs they invented to alias already existing types?
> You are correct sir, quite illogical!
 
I mean code that uses home-made, non-standard typedefs that are often
vague (such as "WORD") - instead of using the well-known, well-defined,
well-supported standard fixed-size types that are part of the C and C++
languages.
David Brown <david.brown@hesbynett.no>: Feb 05 10:49AM +0100

On 05/02/15 02:30, Christopher Pisz wrote:
 
>> The fact that with every post you are inventing a new way to get things
>> wrong demonstrates /exactly/ why the fixed-size types were standardised.
 
> It does?
 
Yes.
 
Rather than making random and incorrect guesses as to the size of an
"int" on different platforms as you did here, people interested in
writing clear, correct and portable code will write "int32_t" when they
want a 32-bit signed integer, and "int64_t" when they want a 64-bit
signed integer. Then they won't have trouble with real differences
(such as the size of "long int" on different platforms) or imagined
differences (such as your mistake about "int" on 64-bit Linux).
Juha Nieminen <nospam@thanks.invalid>: Feb 05 11:28AM

> but i can't resolve the problem of order with memcpy.
 
You seem to think that memcpy is reversing your bytes. It doesn't do
anything like that. It just copies bytes verbatim.
 
Your "problem" of little-endianess is in the original value as well.
Just try printing its bytes directly and you'll see. memcpy isn't
changing anything.
 
The "problem" is in your hardware, not in memcpy or the compiler.
The hardware internally represents integral values with least-significant
bytes first, and that's what you are seeing.
 
If you want to print the bytes in the order you want, you have to
print them in reverse. (Although if you want your program to be
portable, you'll have to check the endianness first. This is an
easy check with a simple trick.)
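
One common form of the trick, as a sketch:

    #include <cstdint>
    #include <cstring>

    // True on a little-endian host: the low-order byte of the 16-bit
    // value 1 is stored first in memory.
    bool is_little_endian()
    {
        std::uint16_t probe = 1;
        unsigned char first;
        std::memcpy(&first, &probe, 1);
        return first == 1;
    }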
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Feb 05 12:52PM

On Thu, 05 Feb 2015 10:02:17 +0100
> wants to examine the individual 8-bit bytes that make up the long
> integer he has. And when you want to look at 8-bit bytes, uint8_t is
> the /correct/ type to use - anything else is wrong.
 
Well, sort of. He appeared to be using uint8_t* to alias individual
bytes within an unsigned long. Such aliasing is only standard
conforming if in fact uint8_t is a typedef for unsigned char, which
gives you something of a chicken and egg situation.
 
That does not mean that I do not agree with the general point that,
subject to aliasing issues, if you mean "my code only works with 8-bit
2's complement char types without padding", then int8_t is a perfectly
reasonable way of documenting the fact and obtaining useful compiler
errors (namely, a missing type) if the requirements are not met.
 
Chris
David Brown <david.brown@hesbynett.no>: Feb 05 02:06PM +0100

On 05/02/15 13:52, Chris Vine wrote:
> bytes within an unsigned long. Such aliasing is only standard
> conforming if in fact uint8_t is a typedef for unsigned char, which
> gives you something of a chicken and egg situation.
 
The only possible way to implement uint8_t (on an architecture that
supports it, of course) is as a "#define" or typedef to either "char"
(if the target plain char is unsigned) or "unsigned char". There are no
other primitive types that it could be - and the standards require it to
be an alias for a primitive type.
 
Anyway, using memcpy works around aliasing - it always works as though
it copies the source to the destination using char* pointers.
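
For example, a sketch of examining the bytes of a long without any
aliasing trouble:

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    void dump_bytes(unsigned long value)
    {
        unsigned char bytes[sizeof value];
        // memcpy is defined to copy as if through unsigned char*,
        // so no strict-aliasing rule is violated.
        std::memcpy(bytes, &value, sizeof value);
        for (std::size_t i = 0; i < sizeof value; ++i)
            std::printf("%02x ", bytes[i]);
        std::printf("\n");
    }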
 
In general, of course, aliasing can be an issue - and you have to
consider that an int32_t and an int may be subject to different aliasing
even if they are both 32 bits (since the int32_t could be a typedef for
"long int" rather than "int").
 
> 2's complement char types without padding", then int8_t is a perfectly
> reasonable way of documenting the fact and obtaining useful compiler
> errors (namely, a missing type) if the requirements are not met.
 
It is also suitable for the weaker use "I am only considering simple
8-bit two's complement bytes without padding" - maybe it could work with
other sized bytes, but it's not something you care about.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Feb 05 02:03PM

On Thu, 05 Feb 2015 14:06:53 +0100
> (if the target plain char is unsigned) or "unsigned char". There are
> no other primitive types that it could be - and the standards require
> it to be an alias for a primitive type.
 
I guess that must follow from the fact that C requires char to be at
least 8 bits in size.
 
Given that identity, I guess it depends on what you are documenting with
your use of either char or int8_t in any particular piece of code - your
conformity with size expectations or with aliasing requirements.

> Anyway, using memcpy works around aliasing - it always works as though
> it copies the source to the destination using char* pointers.
 
Good point.
 
Chris
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Feb 05 02:15PM

On Thu, 5 Feb 2015 14:03:05 +0000
> with your use of either char or int8_t in any particular piece of
> code - your conformity with size expectations or with aliasing
> requirements.
 
Actually it occurs to me that I am not correct with respect to int8_t,
because that could be typedef'ed to signed char (even on an
implementation where char is signed), whereas the aliasing exception
only applies to the char or unsigned char types. So if you want to
alias, stick with char, unsigned char and uint8_t.
 
Chris
David Brown <david.brown@hesbynett.no>: Feb 05 04:09PM +0100

On 05/02/15 15:15, Chris Vine wrote:
>>> standards require it to be an alias for a primitive type.
 
>> I guess that must follow from the fact that C requires char to be at
>> least 8 bits in size.
 
For standard integer types, char must be at least 8 bits, while short
must be at least 16 bits - thus if there is a standard integer type that
uses 8 bits, it must be char (plus signed char and unsigned char).
 
I think in theory it is possible for an implementation to define an
additional unsigned 8-bit extended integer type and use that for uint8_t
rather than unsigned char - but I cannot imagine a compiler doing so.
 
>> with your use of either char or int8_t in any particular piece of
>> code - your conformity with size expectations or with aliasing
>> requirements.
 
Documentation - stating your intentions clearly in the code - is one of
the important reasons for using fixed-size types. If you use "char",
you are saying "something like a simple 7-bit ASCII character or letter,
or a little piece of memory". If you use "uint8_t", you are saying "a
standard 8-bit byte of memory following sensible rules for memory access
- or alternatively an unsigned integer between 0 and 255". "char" is a
vague, sort-of type whose details can vary between targets - its
signedness can even vary according to command-line options for some
compilers. "uint8_t" is clear and tightly defined.
 
I would not be averse to a standard type named something like "mem8_t",
as a type specifically aimed at treating data as raw memory contents,
thus leaving "uint8_t" for numbers (and with mem8_t, mem16_t, mem32_t,
etc., having the "alias everything" feature, which would then not apply
to uint8_t). But until the standards folk add something like that to
the standard headers, "uint8_t" is the best type for the purpose.
 
> implementation where char is signed), whereas the aliasing exception
> only applies to the char or unsigned char types. So if you want to
> alias, stick with char, unsigned char and uint8_t.
 
That is correct. "int8_t" is an integer number between -128 and 127,
taking exactly one standard byte of storage using two's complement
arithmetic. Oddly, this is slightly different from C (C11 standard),
where the aliasing exception also applies to signed char.
Christopher Pisz <nospam@notanaddress.com>: Feb 05 10:18AM -0600

On 2/4/2015 7:57 PM, Ian Collins wrote:
>> ex-driver writing guy put this crap in your code without any need for it.
 
> How then is this different from someone assigning a type that does not
> match a naked type?
 
Because it is hidden. It matched at one time and not at another.
scott@slp53.sl.home (Scott Lurndal): Feb 05 05:19PM


>It is a relic from C99. It is also not guaranteed to exist. It even says so.
 
[mindless rude rant elided]
 
 
>If you want to use a unsigned int then use an unsigned int. There is no
>purpose at all to use a typedefed renaming when you intend on it being
 
You have no clue about what requirements exist in programming,
do you?
 
There are many reasons to expect a specific size. Matching hardware
registers is a common reason in simulations, in operating systems,
in hypervisors, in embedded systems. That's why the explicitly sized
typedefs were added to the standards in the first place.
 
[more mindless elitist ranting elided]
scott@slp53.sl.home (Scott Lurndal): Feb 05 05:22PM

>> short int, hoping that it will have 2 byte everywhere).
 
>No, thank goodness. That's where C programmers lurk and I don't get
>along with them at all. Imagine that!
 
There are millions of lines of such code written in C++. I've personally
written two hypervisors, two full-system simulations and one
distributed operating system (for a massively parallel machine) in C++.
scott@slp53.sl.home (Scott Lurndal): Feb 05 05:27PM


>> I hope I never ever have to work on your code.
 
>The feeling is most definitely mutual mate; I wouldn't want you anywhere
>near my code.
 
I'll second this. Anyone who changes working existing code to match their
preferences (i.e. the comment about deleting usage of stdint.h types)
should be fired in any respectable development environment.
Christopher Pisz <nospam@notanaddress.com>: Feb 05 11:39AM -0600

On 2/5/2015 11:27 AM, Scott Lurndal wrote:
 
> I'll second this. Anyone who changes working existing code to match their
> preferences (i.e. the comment about deleting usage of stdint.h types)
> should be fired in any respectable development environment.
 
That's the thing. IT DOESN'T WORK!
 
Anyone who feels the need to use typedefed primitives on a project where
the target platform is known, and has been specified in the
requirements, and will never change, alongside programmers who will not,
should get hit by a bus.
 
Fibble clearly stated he will _always_ use typedefed primitives _everywhere_
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 05 06:15PM

On 05/02/2015 17:39, Christopher Pisz wrote:
> should get hit by a bus.
 
> Fibble clearly stated he will _always_ use typedefed primitives
> _everywhere_
 
Not everywhere: I also use the C++11 auto keyword.
 
/Flibble
Ian Collins <ian-news@hotmail.com>: Feb 06 08:13AM +1300

Christopher Pisz wrote:
> the target platform is known, and has been specified in the
> requirements, and will never change, alongside programmers who will not,
> should get hit by a bus.
 
Ah, so you do programme inside a bubble.
 
--
Ian Collins
alexo <alessandro@inwind.it>: Feb 05 08:11PM +0100

Hello all in the group :)
 
I just started learning the ncurses library but, even if it's not as
obscure as I wrongly thought, I can't get such a simple action as
showing a window on the console screen to work.
Could you please tell me what I missed in this example?
 
Thank you.
 
#include <ncurses.h>
#include <string.h>

void print_win(WINDOW *win, int, int);

int main(void)
{
    initscr();
    noecho();
    cbreak();
    curs_set(FALSE);

    WINDOW *my_win;

    int WIDTH = 15;
    int HEIGHT = 8;

    int winx = (COLS - WIDTH) / 2;
    int winy = (LINES - HEIGHT) / 2;

    my_win = newwin(HEIGHT, WIDTH, winy, winx);

    if (my_win == NULL)
        printw("memory error");

    print_win(my_win, HEIGHT, WIDTH);

    mvprintw(LINES - 1, (COLS - 11) / 2, "press a key");
    getch();

    delwin(my_win);
    endwin();

    return 0;
}

void print_win(WINDOW *win, int h, int w)
{
    char msg[] = "hello";

    /* if I explicitly pass NULL
     * I get the error strings,
     *
     * if I pass stdscr
     * I get the correct behaviour
     */

    if (box(win, 0, 0) == ERR)
        printw("window drawing error\n");

    if (mvwprintw(win, h/2, (w - strlen(msg)) / 2, msg) == ERR)
        printw("writing error\n");

    if (wrefresh(win) == ERR)
        printw("window rendering error\n");
}
woodbrian77@gmail.com: Feb 05 09:18AM -0800


> throw failure("No reply received. Is the CMWA running?");
> }catch(::std::exception const& ex){
> ::std::printf("%s: %s\n", argv[0],ex.what());
 
After reading the recent "convert int to hex" thread,
I decided to remove the ::std from the line above. Now
it's just:
 
::printf("%s: %s\n", argv[0],ex.what());
 
I recall Alf S. arguing against using headers like cstdio.
I've been following that advice for a few years now without
any problems.
 
 
> return EXIT_FAILURE;
> }
> }
 
Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net
woodbrian77@gmail.com: Feb 05 08:52AM -0800


> > M4
 
> I use async IO and multiple process instances to minimize the
> need for threads. I think this approach scales well.
 
Perhaps Dietmar Kuhl has similar views. He talks about
"concurrent processing without [much] use of threads."
 
http://accu.org/index.php/conferences/accu_conference_2015/accu2015_sessions#asynchronous_operations

 
Brian
Ebenezer Enterprises - Trust in the L-rd with all your
heart and lean not on your own understanding. In all
your ways acknowledge Him, and He will make your paths
straight. Proverbs 3:5,6
 
http://webEbenezer.net
DSF <notavalid@address.here>: Feb 05 04:00AM -0500

On Sun, 25 Jan 2015 14:28:05 -0500, DSF <notavalid@address.here>
wrote:
 
Hello, group!
 
 
> If it were a runtime error, I could track down the class involved.
>But as it stands, I'm stuck! Any ideas on how to track down this
>error?
 
Problem found and solved! In retrospect, there was an "if it was a
snake, it would've bitten you" clue in my post "Linker errors
involving template". It only became obvious since I found this error.
 
How did I find it? Through many sleepless nights and blurry-eyed
days. :o)
 
Until I got the idea about 45 minutes ago to use the command line
version of the compiler and see if it gave a more detailed answer. It
did:
 
Error e:\library\include\CPP/FAList.h 530: Illegal structure operation
in function FAList<HashIndex>::Find(const HashIndex &) const
 
Finally! A class to go with FAList! (The same class referred to in
"Linker errors involving template".)
 
The problem? Almost too embarrassing to relate...almost.
 
HashIndex has six members:
bool firsthash;
int64 hash;
FString firstname;
FString originalname;
FAList<AlternateNames> altnames;
uint duplicates;
 
It overloads all of the comparison operators through friends. The
comparisons are all based on the member "hash". For some now unknown
reason, that was what I concentrated on. So I had the following code:
(Only one comparison operator shown for brevity, but they were all
this way...Sigh!)
 
class HashIndex
{
public:
    ...
    friend bool operator == (int64 h1, int64 h2);
};
inline bool operator==(int64 h1, int64 h2)
    {return Cmp64To64(h1, h2) == 0;}
 
It finally occurred to me half an hour ago that you're supposed to
pass const references of the class to operator ==, not the member you
want to test! D'Oh!
 
So it became:
class HashIndex
{
public:
    ...
    friend bool operator==(const HashIndex& h1, const HashIndex& h2);
};
inline bool operator==(const HashIndex& h1, const HashIndex& h2)
    {return Cmp64To64(h1.hash, h2.hash) == 0;}
 
And all is peaceful again in Whoville!
 
DSF
"'Later' is the beginning of what's not to be."
D.S. Fiscus
JiiPee <no@notvalid.com>: Feb 01 06:34PM

I know that for a virtual call we say that the derived class function
is overriding the base class version if they have the same function.
But how about a case where it's a non-virtual situation:
 
class A
{
public:
    void foo() {}
};

class B : public A
{
public:
    void foo() {}
};

Then we use a B object:

B b;
b.foo();
 
In this case are we saying (/can we say) that B::foo is overriding
A::foo? I think we can say B::foo is "hiding" A::foo, or that we are
"redefining" foo, but is this also overriding? I am talking about
definitions. I have found many websites that call this "overriding"
(like tutorial web sites), see:
 
http://www.programiz.com/cpp-programming/function-overriding
http://www.techopedia.com/definition/24010/overriding
http://www.sciencehq.com/computing-technology/overriding-member-function-in-c.html
http://en.wikipedia.org/wiki/Method_overriding
http://xytang.blogspot.co.uk/2007/05/comparisons-of-c-and-java-iii-method.html
 
 
Also on wikipedia it says: "*Method overriding*, in object oriented
programming, is a language feature that allows a subclass or child
class to provide a specific implementation of a method that is
already provided by one of its superclasses or parent classes. ...
The version of a method that is executed will be determined by the
object that is used to invoke it."
So it does not say anything about polymorphism there... meaning that
my example is "overriding" according to wikipedia's definition of the
word. Or am I missing something?
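
To make the distinction concrete - the behavioural difference between
the two cases only shows up through a base pointer or reference; a
quick sketch:

    #include <iostream>

    struct A       { void foo()         { std::cout << "A::foo\n"; } };
    struct B : A   { void foo()         { std::cout << "B::foo\n"; } };

    struct VA      { virtual void foo() { std::cout << "VA::foo\n"; } };
    struct VB : VA { void foo()         { std::cout << "VB::foo\n"; } };

    int main()
    {
        B b;
        A* pa = &b;
        pa->foo();   // prints "A::foo": B::foo merely hides A::foo

        VB vb;
        VA* pva = &vb;
        pva->foo();  // prints "VB::foo": VB::foo overrides VA::foo
    }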
 
Where can I find the official definition of the word "overriding" in
programming? I tried to google but did not really find one, only this
wikipedia article.
Ian Collins <ian-news@hotmail.com>: Feb 01 09:51PM +1300

seeplus wrote:
> It was only when I recently noticed small errors in testing that this showed up.
 
> I dug out a backup of the module from before doing that Cppcheck
> run, and found the problem I had created with those fixes.
 
Version control is a wonderful thing....
 
--
Ian Collins
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
