Wednesday, February 4, 2015

Digest for comp.lang.c++@googlegroups.com - 25 updates in 3 topics

JiiPee <no@notvalid.com>: Feb 01 10:37PM

On 01/02/2015 22:19, Chris Vine wrote:
> provided YOU make it clear what you mean. But why not stick to common
> usage? And what has pushed your button so hard on this?
 
> Chris
 
OK, so overriding has a different meaning in C++ than in other
languages. I am not English, but as far as I understand, the word
"override" really means that instead of using something you are using
another thing. And that's what is happening here: when I write that foo
in B, then instead of using A's foo we are using B's foo, so it kind of
"overrides" A's foo, doesn't it? I am just talking about the English
language here, not really the C++ language. So it would be natural for
me to say that even a non-virtual function overrides.
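
For illustration, a minimal sketch of the case being discussed, using the
A/B/foo names from the thread (the bodies are made up):

#include <iostream>

struct A { void foo() { std::cout << "A::foo\n"; } };     // non-virtual
struct B : A { void foo() { std::cout << "B::foo\n"; } }; // redefines foo

int main()
{
    B b;
    b.foo();   // prints "B::foo" - looks like an "override"
    A* p = &b;
    p->foo();  // prints "A::foo" - no dynamic dispatch
}

Because a call through a base pointer still selects A::foo, the standard
reserves "override" for virtual functions and calls this name hiding.
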
JiiPee <no@notvalid.com>: Feb 01 10:56PM

On 01/02/2015 22:48, Chris Vine wrote:
>> natural for me to say that even a non-virtual function overrides.
> In english, the word "override" could well include what is referred to
> in C++ as name hiding.
 
But hiding is not exactly the same as overriding anyway. With hiding, the
parameter list and return value do not even need to match. And hiding is
in a way also happening when there is a virtual base class function
(polymorphism), because hiding is basically a new function definition
"blocking" some base class function from being called anymore.
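
For illustration, a hedged sketch of hiding with a mismatched signature
(the function name is made up; a using-declaration "using A::f;" in B
would bring the hidden overload back):

struct A { int f(int x) { return x; } };
struct B : A { void f() {} };  // hides A::f(int) despite a different
                               // parameter list and return type

int main()
{
    B b;
    // b.f(42);                // error: lookup finds only B::f()
    b.A::f(42);                // OK: explicit qualification
    static_cast<A&>(b).f(42);  // OK: through the base class
}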
 
> But that is not how the word is defined and
> used in the holy standard. So if you are discussing the C++ language
> rather than the english language, the holy standard is what applies.
But I still have not found where the standard says that. Somebody
showed me two places, but neither of them defines what "override" means
generally in C++. I am not saying you are not right, but I still have
not seen the definition. What "override" means in polymorphism has been
shown to me, but not what it means in the non-polymorphic case.
JiiPee <no@notvalid.com>: Feb 01 11:52PM

On 01/02/2015 23:22, Paavo Helde wrote:
 
> For C++ standard, see http://lmgtfy.com/?q=latest+c%2B%2B+standard+download
 
> Cheers
> Paavo
 
Yes, I checked too... OK then, I accept this, as it's never used for
non-virtual functions there.
JiiPee <no@notvalid.com>: Feb 01 11:58PM

On 01/02/2015 23:39, Öö Tiib wrote:
> languages (Javascript, Objective-C) you can override all methods
> of base class and in others (C++, C#, Delphi) you can override only
> functions that are marked as virtual.
 
OK, let it be like that then.
 
Some of those websites are confusing, then, by using "override" with
non-virtual functions. Maybe they are former Java programmers :)
 
>> only to polymorphism :).
> That logic I do not understand.
> Modifying and extending behavior of a class *is* polymorphism.
 
OK, but now I have more evidence, because I checked the whole standard.
 
JiiPee <no@notvalid.com>: Feb 01 10:59PM

On 01/02/2015 22:54, Chris Vine wrote:
>> You get my point?
> No. Read the whole of §10.3 and it is quite clear.
 
> Chris
 
Can you post it here, please? I don't know where to find it.
 
Again, I am talking about logic here. Yes, it's possible the standard has
not been written totally logically here... true, and that is another
issue. But anyway, that's what I am discussing here.
 
In a court of law, so far there is no evidence that "override" belongs
only to polymorphism :).
JiiPee <no@notvalid.com>: Feb 01 08:56PM

On 01/02/2015 20:53, JiiPee wrote:
>> (not only for C++). As for C++ 'override' is even a keyword in it.
>> Coding standards suggest to use that keyword for all overrides
>> to reduce defects by little deviations in virtual function signatures.
 
Yes, I know, the override keyword in C++ refers to virtual functions. But
here we are talking about the general use of the word "override" (not the
C++ keyword).
Ian Collins <ian-news@hotmail.com>: Feb 02 11:10AM +1300

DSF wrote:
 
> Are there any conditions in which a class can be ineligible for use
> in a template? HashIndex is the only class that exhibits this
> problem. *Every* other class I have used with FAList (so far) works.
 
None that I can think of.
 
<snip>
 
> program to debug it, which led to another bug found, yadda, yadda,
> yadda. In other words, it's been a while. So I've been correcting
> much of the program to match several library changes.
 
Roll back to a "working" version and re-apply changes until it breaks.
 
--
Ian Collins
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Feb 01 10:41PM

On Sun, 01 Feb 2015 16:55:19 -0500
 
> Are there any conditions in which a class can be ineligible for use
> in a template? HashIndex is the only class that exhibits this
> problem. *Every* other class I have used with FAList (so far) works.
 
No, but I suspect this might be connected to your having some member
function definitions outside the class template definition. You may
have made a mistake in a way which results in what you thought was a
"definition" not in fact defining, but instead specializing. Or maybe
you haven't specialized but defined a non-member function by mistake.
In either case, you could end up with a declaration without a
definition. I hope in addition that you haven't used the C++11
"extern" keyword applied to templates, which would do the same.
 
If the misbehaving function is not defined in the class definition, why
not move it there and see what happens, and then go from there?
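
For illustration, a hedged sketch of the kinds of slip being described
(FAList is the class from the thread, but the member here is invented):

template <typename T>
struct FAList {
    void add(const T& item);  // declared; must be defined somewhere
};

// Correct out-of-class definition of the member template:
template <typename T>
void FAList<T>::add(const T& item) { /* ... */ }

// Mistakes that leave the declaration above without a definition:
//
// template <>
// void FAList<int>::add(const int& item) { }  // specializes for int
//                                             // only; the primary
//                                             // template stays undefined
// template <typename T>
// void add(const T& item) { }                 // an unrelated free
//                                             // function, not the member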
 
Chris
Christopher Pisz <nospam@notanaddress.com>: Feb 04 05:41PM -0600

On 2/4/2015 5:25 PM, Mr Flibble wrote:
>> different platforms. For example a program computing the factorial using
>> integer arithmetics overflows for different input at different
>> platforms, if just int or unsigned is used.
 
Why are you computing factorials when there are well-tested, documented,
and widely used libraries to do this already?
 
Why don't you know what platform you are on?
 
 
> standard even bans the use of 'int', 'char' etc and enforces the use of
> the sized integer typedefs to help ensure that there are no nasty
> surprises causing planes to fall out of the sky etc.
 
I can find you a standard that says use GOTO as well, if you like.
 
> IMO it is important to know what the valid range of values for a
> scalar type is in any algorithm and the sized integer typedefs allow
> this.
 
No, they don't.
As stated before, the types in stdint.h _are not guaranteed_.
 
And no, it isn't. Who gives a flying poop how many bits the unsigned int
count is in:
 
for(unsigned count = 0; count < 100; ++count)
{
}
 
as long as 100 fits, and 100 _will always_ fit.
 
and as stated before, _you already know your architecture_ for your project.
 
People who do silly things to be "platform independent" when platform
independence is not even necessary are silly.
 
> IMO 'int' should really only be used as the return type for main()!
 
> /Flibble
 
I hope I never ever have to work on your code.
Melzzzzz <mel@zzzzz.com>: Feb 05 12:42AM +0100

On Wed, 4 Feb 2015 07:49:17 -0800 (PST)
> memcpy(nonce,&address,8);
 
> return the nonce value is : 1000048deac
 
> So is there any solution to keep the same order?
 
void* p = (void*)0xacde480000000001;
 
Ben Bacarisse <ben.usenet@bsb.me.uk>: Feb 04 11:51PM

>> lowest to highest address.
 
> I'm obliged to use memcpy to set the nonce value, not only to print
> the bytes in order,
 
My example was only supposed to shed some light on what you might be
puzzled by. If it doesn't help you understand, just ignore it.
 
<snip>
--
Ben.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 11:58PM

On 04/02/2015 23:41, Christopher Pisz wrote:
 
> I hope I never ever have to work on your code.
 
The feeling is most definitely mutual mate; I wouldn't want you anywhere
near my code.
 
/Flibble
jt@toerring.de (Jens Thoms Toerring): Feb 05 12:11AM

> You know beforehand what architecture you are targeting, I'm sure.
 
For the few Linux device drivers I've written I definitely did not
have the advantage of knowing what architecture they might end up
being used on ;-) That's one of the fields where you definitely have
to aim at being architecture-agnostic.
 
> > don't agree with your statement that "there's no purpose at
> > all" for these types.
 
> Well, again, I am not seeing, why you need to use sizeof more than once.
 
Consider the following: in that MATLAB file there's a uint16_t
field that tells me the type of the next field (and thus the
number of bytes it occupies). Now I have to decide on the type
I'm going to store it in (as a kind of 'variant') and how
to read it from the file. Here I've got to deal with a slew of
different types (besides 1, 2, 4, 8 byte signed and unsigned
integers, also float, double and complex). Ugly enough. Now, if I
also had another, orthogonal matching for the proper type of
integer that fits (lots of "if (sizeof(int) == 1)",
"if (sizeof(int) == 2)" etc.), this would become such an unholy
mess that I wouldn't understand it anymore 10 minutes later.
But, I guess, that description isn't clear enough - I'd have
to show you - in the end I had the feeling that proper use of
the types I knew were in the file saved me from a lot of
grief ;-) But, perhaps, if I look at it again in a few weeks
or months I'll see the errors of my way.
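
For illustration, a hedged sketch of the kind of dispatch described (the
tag values are invented here; the real MAT-file type codes differ):

#include <cstddef>
#include <cstdint>
#include <stdexcept>

// Size in bytes of the next field, given its 16-bit type tag.
std::size_t field_size(std::uint16_t tag)
{
    switch (tag) {
    case 1: return sizeof(std::int8_t);
    case 2: return sizeof(std::int16_t);
    case 3: return sizeof(std::int32_t);
    case 4: return sizeof(std::int64_t);
    case 5: return sizeof(float);
    case 6: return sizeof(double);
    default: throw std::runtime_error("unknown type tag");
    }
}

The fixed-width names keep each case self-describing; nested sizeof tests
against int/long/etc. would bury the same information.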
 
> return (int)y;
> }
 
> ?
 
As I wrote, stupid use of everything you can do isn't something I
consider a good idea. In your example there are several things
I haven't seen before: 'int4_t', 'BYTE' and 'T_' (in front
of the string "A"). So I wouldn't like too much having to
maintain that code ;-)
 
Just out of curiosity: is this something for Windows? My
private conviction (call it prejudice if you like) is that
this kind of use of made-up types and (often completely
unnecessary) casts etc. is some Windows thing, where people
actually haven't been exposed much to the existence of
different CPU architectures, and types and casts have become
some kind of cargo-cult thingy which they believe makes
them look smarter. I consider every cast in a program a
warning flag that there's something going on that needs
close attention, because the author basically says
"I know better than the compiler" - which, unfortunately,
often isn't the case. So, I agree that this is ugly code where
someone obviously didn't know what he (or, maybe, she) was
doing. But I am still convinced that the types from stdint.h
exist for a good reason - it's just that you shouldn't use a
knife as a screwdriver unless there's no alternative.
 
>> chock-full of bad habits and errors as a result, with no justification
> for their use. If the OP has any reason at all to use uint8_t then I'd
> love for him to say what it is.
 
I hadn't noticed him before, so I gave him the benefit of
the doubt. Wouldn't be the first time I've been suckered ;-)
 
Best regards, Jens
--
\ Jens Thoms Toerring ___ jt@toerring.de
\__________________________ http://toerring.de
David Brown <david.brown@hesbynett.no>: Feb 05 01:19AM +0100

On 04/02/15 22:53, Christopher Pisz wrote:
 
> I am still catching up to the latest C++ standard. Doesn't appear to be
> in my particular compiler at work. Will try when I get home. I can like
> to learn new stuff too!
 
It looks like you have a lot of catching up to do. Static assertions
have been a useful tool for C and C++ programming since the languages
were created - the only difference with C++11 is that static_assert() is
now part of the C++ standards, rather than having to be defined by a macro.
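
One common pre-C++11 idiom, sketched here from memory rather than from
any particular library: a negative array size forces a compile error.

// Fails to compile when cond is false. (Real macros usually mix in
// __LINE__ so that two uses don't produce the same typedef name.)
#define STATIC_ASSERT(cond) \
    typedef char static_assertion_failed[(cond) ? 1 : -1]

STATIC_ASSERT(sizeof(long) >= 4);  // usable at namespace scope

// C++11 turns this into a language feature:
static_assert(sizeof(long) >= 4, "long must be at least 32 bits");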
 
And sizeof(unsigned char) is always 1 - it is guaranteed by the
standards, even if a char is 16-bit or anything else. "sizeof" returns
the size of something in terms of char, not in terms of 8-bit bytes.
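
For illustration (a minimal sketch): sizeof counts in units of char, and
<climits> exposes the actual bit width.

#include <climits>

static_assert(sizeof(unsigned char) == 1, "true by definition");
static_assert(CHAR_BIT >= 8, "guaranteed, but may be more than 8");
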
David Brown <david.brown@hesbynett.no>: Feb 05 01:26AM +0100

On 05/02/15 00:07, Christopher Pisz wrote:
 
> return (int)y;
> }
 
> ?
 
Is it really so difficult to comprehend that the standard size-specific
types defined in <cstdint> (or <stdint.h>) are called "standard" because
they are standardised by the language standards? Types like uint8_t are
almost universally understood by C and C++ programmers alike. People
who write things like "BYTE" are either dealing with ancient code (from
the pre-C99 days, before the types were standardised), or using Windows'
illogical and inconsistent set of typedefs for the Windows APIs.
David Brown <david.brown@hesbynett.no>: Feb 05 01:35AM +0100

On 05/02/15 00:22, Christopher Pisz wrote:
> harder than remembering what the hell a uint8_t is and that you better
> be damn sure that _noone ever_ assigns anything but another uint8_t to
> it directly or indirectly?
 
The fact that with every post you are inventing a new way to get things
wrong demonstrates /exactly/ why the fixed-size types were standardised.
 
Just to save you from further embarrassment, the fixed-width types are
defined as:
 
int8_t, int16_t, int32_t, int64_t, uint8_t, uint16_t, uint32_t, uint64_t
 
(There are additional types defined too, but let's keep it simple for now.)
 
These are extremely simple to understand - each is a signed (two's
complement) or unsigned integer of the given bit size. If the hardware
does not support such an integer type, the typedef does not exist on that
platform.
 
So any time you need to use an integer that is exactly 32 bits in size,
you use an "int32_t" from <cstdint> (or <stdint.h>). The code will work
on any platform from the smallest 8-bit device to the largest 64-bit
system - or if the hardware is not able to support such integers, you
get a nice clear compile-time error. And since they are so clear and
simple, they also clearly document your intentions.
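
A small sketch of the pattern being described (the struct is invented):

#include <cstdint>

// A record with a fixed on-the-wire layout: each field is exactly the
// size the format specifies, on every platform that compiles this.
struct Header {
    std::uint32_t magic;
    std::uint16_t version;
    std::uint16_t flags;
};

// On a platform with no suitable types, std::uint32_t is simply not
// declared, and this file fails to compile - a clear, early error.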
 
With modern C++, there is little use for "int" any more - if you are
working with some sort of structure or file format, and therefore need
to know the /correct/ size rather than guessing, you should use the
/standard/ fixed-size types. Apart from that, "auto" can replace a good
many uses of "int".
 
Christopher Pisz <nospam@notanaddress.com>: Feb 04 06:41PM -0600

On 2/4/2015 6:19 PM, David Brown wrote:
> have been a useful tool for C and C++ programming since the languages
> were created - the only difference with C++11 is that static_assert() is
> now part of the C++ standards, rather than having to be defined by a macro.
 
I wouldn't think so. C++ is a big, huge, changing beast. No one has
everything memorized. That's why we get the big bucks, eh?
 
Never had to use it, or gone looking for such a mechanism, because,
again, I never needed it. But sure, I come across things all the time,
especially with C++11, that sound neat.
 
> And sizeof(unsigned char) is always 1 - it is guaranteed by the
> standards, even if a char is 16-bit or anything else. "sizeof" returns
> the size of something in terms of char, not in terms of 8-bit bytes.
 
I fully realize this. I was demonstrating how silly the concern was
about whether or not a char is one byte. I was saying, if you really
feel the need to check if it is one byte, then do sizeof. I may have
misunderstood earlier that the OP wants to check for bits, but I still
don't see anywhere in his code where the number of bits matters. It sure
looks like he is concerned with bytes or nibbles, but who knows...his
code is nowhere near a working example.
 
This entire topic has degenerated into edge cases about mutant
platforms where types can be different lengths and contain different
ranges. The OP stated no such requirement and is, has, and I bet will
continue to, write code in this style without any logical reason.
 
I still say, and always will say that you know your architecture up
front. In most cases, you have to in order to even compile with the
correct options.
 
To truly be "platform independent" you have to jump through all manner
of disgusting hoops and I have yet in 20 years ever come across any real
life code, no matter how trivial, that was truly platform independent. I
have however come across several ugly projects littered with #if define
of the year, do windows #if define of the year, do linux, or #if define
of the year, do windows version X, but again you know your architecture.
 
If you are creating something so entirely trivial and amazingly small
that you never need to call another library or call the OS, then I still
hold fast to my claim that you can check sizeof at your entry point, and
as Victor pointed out earlier get the number of bits using numeric limits.
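
For illustration, the kind of entry-point check being suggested (a
sketch; "digits" is the number of value bits, per the numeric_limits
suggestion above):

#include <cstdlib>
#include <iostream>
#include <limits>

int main()
{
    if (std::numeric_limits<unsigned char>::digits != 8) {
        std::cerr << "unsupported platform: char is not 8 bits\n";
        return EXIT_FAILURE;
    }
    // ... rest of the program ...
}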
 
You will, I guarantee, break your project at some point using typedefs
for primitive types. I've seen it and wasted hours upon hours on it,
when the target platform was known, singular, and Windows no less.
David Brown <david.brown@hesbynett.no>: Feb 05 01:42AM +0100

On 04/02/15 23:40, ghada glissa wrote:
> memcpy(nonce+8,&counter,4);
> nonce[12] = seclevel;
 
> }
 
Your problem is that you don't understand about endianness:
 
<http://en.wikipedia.org/wiki/Endianness>
 
Once you have read that, you should understand why you are getting the
results you get.
 
The next step is to figure out what results you actually /want/, and if
that is different from what you get, you need to figure out how to get
it. People will be happy to help with that too, but you have to tell us
what you are really trying to do rather than posting small
non-compilable lines of code.
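
For illustration, one common way to pin the byte order down explicitly
instead of copying the in-memory representation (a sketch; nonce and
address are the OP's variable names):

#include <cstdint>

// Store a 64-bit value as 8 bytes, most significant byte first
// (big-endian), independent of the host's byte order.
void store_be64(unsigned char* out, std::uint64_t v)
{
    for (int i = 7; i >= 0; --i) {
        out[i] = static_cast<unsigned char>(v & 0xFF);
        v >>= 8;
    }
}

// e.g. store_be64(nonce, address); rather than memcpy(nonce, &address, 8)
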
Christopher Pisz <nospam@notanaddress.com>: Feb 04 07:01PM -0600

On 2/4/2015 6:11 PM, Jens Thoms Toerring wrote:
> ferent CPU-architectures and types and casts have become
> some kind of cargo-cult thingy, which they believe makes
> them look smarter.
 
Yeah, it isn't exactly what I came across, but I think it gets the point
across. I dunno if that would even compile.
 
Windows made a define for BYTE. Some use it, some don't. I'd rather never
see it and use unsigned char, because unsigned char is a byte and is
part of the language itself already. Similar to the stdint.h argument.
 
Windows also defines all manner of other types sometimes in more than 5
different ways. I suppose there was historical need to do so.
 
_bstr_t comes from COM (now with 'safety', pfft), and often I run across
programmers who mistakenly think that _bstr_t is some Windows type that
should always be used to represent a string, "because we're going to
need to turn it into one anyway when we go over COM!", which is poopy
doopy. Separate the Windows-specific crap, separate out the COM crap,
into tidy interfaces, and convert before you go over the wire and when
you get things back, to keep things maintainable and testable. There are
probably ten other type defines for strings. MFC programmers love to
use all of them and make my life hell where 90% of processing is dealing
with blasted strings.
 
_T is a Windows macro that resolves to... LPCSTR, I think? Or LPCWSTR,
depending on whether you set your compiler for Unicode or multibyte.
I have a hard time arguing this, but I hate that macro. I want to deal
with std::string and std::wstring respectively. I'll be more than happy
to explicitly call the A or W version of a Windows API method in the
implementation of my interface that does the Windows-specific stuff.
It's there for convenience so you can switch your compiler options.
The problem is, just like the others, that someone somewhere is going to
count on it being what the define resolves to at the time, and it isn't
going to work anyway when you switch from multibyte to Unicode, should
that rare occasion occur - and you're probably going to have more
problems if it does.
 
 
> I consider every cast in a program as
> doing. But I still am convinced that the types from stdint.h
> exist for a good reason - it's just that you shouldn't use a
> knife as a srewdriver unless there's no alternative.
 
Now agreed.
 
Robert Wessel <robertwessel2@yahoo.com>: Feb 04 07:13PM -0600

On Wed, 04 Feb 2015 15:31:55 -0600, Christopher Pisz
>> quite a while now, although I think its presence in the C++ standard
>> is rather newer (although support has been pretty universal).
 
>It is a relic from C99. It is also not guaranteed to exist. It even says so.
 
 
The header is guaranteed* to exist, but the *type* is not, which is
the point: in this case (since the output is apparently being sent
over the network), it's a way of enforcing that that's actually the
format being prepared - and failure to compile on a platform not
supporting the type is certainly an improvement over producing
incorrect results.
 
 
*FSVO "guaranteed"
 
 
>Try to compile that listing on Visual Studio 2003-2008 for example.
>Being part of C and not C++ Microsoft also deemed it a relic and dropped
>it. Later they brought it back. Granted MS loves to make their own rules.
 
 
It's been back since at least VS2010, and it was made part of C++11
(and was in almost all C++ compilers before that).
 
 
>it when you make a project wide decision to support Joe's new OS.
>Otherwise, be a realist and know that you are targeting Windows, Linux,
>OSX, Android, or just one of the above.
 
 
Again, it's not an unsigned int, it's an unsigned char.
 
FWIW, we don't make much use of stdint.h, although we do usually have
a project wide header file that defines various types for use in
places where layout and sizes are important, including exact layout of
structures headed for other platforms, sizes of elementary items in
the bignum library, the line ending pattern for the platform, some
code assumes ASCII, etc... We do try to limit the use of those to
places where they're strictly needed, and we have some other platform
constraints that we require (8 bit chars, two's-complement, no padding
in a structure between arrays of chars, conventional short/int/long
sizes etc.), and we have code that checks those (as well as providing
validation for the types header). We'd probably have been able to
base some of that on stdint.h, but much of that code predates that.
 
 
>stdint.h was a feeble attempt at cross-platform guarantees that aren't
>guarantees at all and the idea is a source of bugs time and again.
 
 
In this case it guarantees that there is an 8-bit, two's-complement
type (exactly). It's not like the uint_fast or uint_least types.
 
 
 
>When you start counting on types being of a certain size without
>checking sizeof, and start shuffling bits or bytes around, you are
>asking for trouble.
 
 
The whole point of the exact sized type is that it does that check
implicitly. If the platform does not have a two's-complement, 8-bit
(exactly) type, uint8_t will not be defined, and a program using it
will not compile. And while I mostly agree with you, this question is
clearly not about a Windows platform (where longs are not 64 bits).
 
 
>}
 
>// carry on using unsigned char for your project
>// without modern programmers having to sort through your 5000 typedefs.
 
 
sizeof(unsigned char) is *always* 1. You'd probably want to check
that CHAR_BIT and UCHAR_MAX are exactly 8 and 255 instead. Although
I'm not sure both are actually required.
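
For illustration, both checks sketched out; in fact unsigned char may
have no padding bits, so CHAR_BIT == 8 already implies UCHAR_MAX == 255:

#include <climits>

static_assert(CHAR_BIT == 8, "this code assumes 8-bit chars");
static_assert(UCHAR_MAX == 255, "this code assumes unsigned char max 255");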
 
 
 
>People who grab a byte of some other type or bitmask or count on bitwise
>operators just 'cause, pee me off. There had better be a specific
>reason for it.
 
 
By definition, something going out on a TCP/IP network involves the
manipulation of bytes, although that may be hidden in a library you're
using. And my interpretation of the presented code is that that's what's
intended. I may, of course, be wrong, but it looks like the setup for
an encryption* header, at which point the exact byte format will be an
issue. But one would hope that the OP's knowledge of encryption is
better than his knowledge of C++!
 
 
*I can't think of a single case where I've encountered the term
"nonce" that was not in a security/encryption context. Although a 64
bit nonce is on the short side in some cases.
 
 
 
>OS, C++ implementation, or maintainability of the code in the future has
>even been a consideration, much less what the type is in relation to
>what he wants it to be and how it related to the other types he is using.
 
 
Obviously I agree, I just think your rant against uint8_t is a bit
stronger than justified.
Christopher Pisz <nospam@notanaddress.com>: Feb 04 07:13PM -0600

On 2/4/2015 6:26 PM, David Brown wrote:
 
> Is it really so difficult to comprehend that the standard size-specific
> types defined in <cstdint> (or <stdint.h>) are called "standard" because
> they are standardised by the language standards?
 
Nope.
Is it difficult to understand that they aren't guaranteed by that very
standard?
 
Is it also difficult to understand that they aren't needed without a
_very_ specific scenario?
 
I mean, by your and other posters' logic, we might as well take unsigned
char, char, int, unsigned int, and all the other primitives right out of
the language, no?
 
> Types like uint8_t are
> almost universally understood by C and C++ programmers alike.
 
Nope. I've worked with an awful lot of C++ programmers. Somewhere
around 5k, maybe? Never seen one use any of these types. Not once.
 
I've also worked with a lesser number of C programmers on C++ projects,
who inevitably created 80% of the bugs. They didn't use these either,
but they sure liked to create 5000-line headers to make their own.
 
Perhaps, if I wrote drivers for a living, I'd run across it more, but
guess what? I don't, nor do hundreds of thousands of other C++ programmers.
 
 
> who write things like "BYTE" are either dealing with ancient code (from
> the pre-C99 days, before the types were standardised), or using Windows
> illogical and inconsistent set of typedefs for the Windows APIs.
 
Oh, you mean the typedefs they invented to alias already existing types?
You are correct, sir, quite illogical!
Robert Wessel <robertwessel2@yahoo.com>: Feb 04 07:30PM -0600


>> return the nonce value is : 1000048deac
 
>> So is the there any solution to keep the same order.
 
>void* p = (void*)0xacde480000000001;
 
 
How does that help?
Christopher Pisz <nospam@notanaddress.com>: Feb 04 07:30PM -0600

On 2/4/2015 6:35 PM, David Brown wrote:
>> it directly or indirectly?
 
> The fact that with every post you are inventing a new way to get things
> wrong demonstrates /exactly/ why the fixed-size types were standardised.
 
It does?
 
> defined as:
 
> int8_t, int16_t, int32_t, int64_t, uint8_t, uint16_t, uint32_t, uint64_t
 
> (There are additional types defined too, but let's keep it simple for now.)
 
I'm not embarrassed. Never used them, never will, and if I see them they
are going to be deleted. I work on a Windows platform and I write
standard native C++.
 
> you use an "int32_t" from <cstdint> (or <stdint.h>). The code will work
> on any platform from the smallest 8-bit device to the largest 64-bit
> system
 
Wrong.
 
The code will cease to behave as expected as soon as someone assigns a
type that does not match what the define resolves to at the time. Which
may or may not result in a compiler warning, which may or may not be
completely ignored, which may or may not cause countless hours of
debugging, which may or may not result in finding out the C-style
ex-driver writing guy put this crap in your code without any need for it.
 
> - or if the hardware is not able to support such integers, you
> to know the /correct/ size rather than guessing, you should use the
> /standard/ fixed-size types. Apart from that, "auto" can replace a good
> many uses of "int".
 
When they remove int, unsigned int, float, and double from the language,
I'll be happy to use your stdint.h types.
 
If I am working with a file format, I let the standard file IO take care
of it for me. If I need to do something fancy, I'll still use the
standard file IO and write my own classes ala "Standard C++ IOStreams
and Locales: Advanced."
 
If I need to work with a structure that needs a fixed number of bytes,
then I am going to question why I am doing that in 2015, given that
I do not work on hardware drivers and there are all manner of
standardized communication mechanisms out there.
 
If I am doing anything that requires me to know the exact size in bits,
I am going to stop and ask myself what the hell I am doing and whether
it is indeed necessary in 2015. Most likely I am trying to reinvent
something that is already done, well tested, and widely used, and I am
going to create more unmaintainable, buggy messes. Again, given that I
do not work on hardware drivers.
 
 
Christopher Pisz <nospam@notanaddress.com>: Feb 04 07:36PM -0600

On 2/4/2015 7:30 PM, Robert Wessel wrote:
 
>> void* p = (void*)0xacde480000000001;
 
> How does that help?
 
I think he is mocking :P
Ian Collins <ian-news@hotmail.com>: Feb 05 02:57PM +1300

Christopher Pisz wrote:
 
> I'm not embarrassed. Never used them, never will, and if I see them they
> are going to be deleted. I work on a Windows platform and I write
> standard native C++.
 
So how do you interface with networking functions in your perfect, pure
Windows world?
 
> completely ignored, which may or may not cause countless hours of
> debugging, which may or may not result in finding out the C-style
> ex-driver writing guy put this crap in your code without any need for it.
 
How then is this different from someone assigning a type that does not
match a naked type?
 
--
Ian Collins
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
