Thursday, April 28, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Christopher Pisz <nospam@notanaddress.com>: Apr 28 11:57AM -0500

On 4/27/2016 3:39 AM, Jens Thoms Toerring wrote:
> IXmlParser class? Otherwise simply drop these two dec-
> larations.
> Regards, Jens
 
Thanks. I just had to change T to typename and it works as expected.
For the methods FromString and ToString, yeah, I want to overload them,
because the domain object being returned depends on what kind of file we
are parsing, which in turn is the reason for trying to make this
generic in the first place.
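 
A minimal sketch of that kind of interface, with hypothetical domain
types standing in for the real ones (only FromString/ToString come from
the discussion above, everything else is illustrative):
 
#include <string>
 
// Hypothetical domain objects standing in for the real ones; which one
// is returned depends on the kind of file being parsed.
struct InvoiceDocument { /* ... */ };
struct CustomerDocument { /* ... */ };
 
// One generic parser interface; the template parameter selects the
// domain object that FromString/ToString work with.
template <typename T>
class XmlParser
{
public:
    T FromString(const std::string & xml)
    {
        T result{};
        // ... real parsing of 'xml' would go here ...
        static_cast<void>(xml);
        return result;
    }
 
    std::string ToString(const T & document)
    {
        static_cast<void>(document);
        return "<todo/>";   // placeholder serialization
    }
};
 
// Usage:
//   XmlParser<InvoiceDocument> parser;
//   InvoiceDocument doc = parser.FromString(xmlText);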
 
 
--
I have chosen to troll filter/ignore all subthreads containing the
words: "Rick C. Hodgins", "Flibble", and "Islam"
So, I won't be able to see or respond to any such messages
---
Jerry Stuckle <jstucklex@attglobal.net>: Apr 27 09:05PM -0400

On 4/27/2016 3:48 PM, Gareth Owen wrote:
>>> that. It was merely asser
 
>> So that wasn't an answer to my question, was it. Just another red herring.
 
> Huh?
 
Yea, right.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Apr 27 09:07PM -0400

On 4/27/2016 4:00 PM, Gareth Owen wrote:
>> just concatenating files. Otherwise, why would you even need ar?
 
> It's concatenating with the addition of headers for locating the objects
> within the archive. There is no change to the structure of the objects.
 
When you use the gcc compiler and tools, that is.
 
> adb30c5f32e1dccb926beee7faca54a5 -
 
> So, that'll be 140 bytes of header, followed by the object file,
> unmodified in its entirety.
 
Yup, when you use the gcc compiler and tools.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Apr 27 09:08PM -0400

On 4/27/2016 4:20 PM, Ian Collins wrote:
>> library. In fact, they have to be as, among other things external
>> references need to be satisfied.
 
> It looks like the night shift has already disproved this.
 
Yup, a shift you obviously have no inkling of.
 
>> But some people are SO DENSE.
 
> and you have ably demonstrated this...
 
Yes, you definitely have.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Apr 27 09:11PM -0400

On 4/27/2016 4:07 PM, Christian Gollwitzer wrote:
> incompatible format, say a gcc with a.out backend would do. But the ones
> I list above are the important players today, and those are compatible.
 
> Christian
 
Ah, but people have said that any .o files from any compiler are
compatible - see all of the posts in this thread.
 
And "important players" is a matter of opinion. A compiler is quite
important if you need to use it for a specific instance.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Apr 27 09:12PM -0400

On 4/27/2016 4:20 PM, Gareth Owen wrote:
 
> And even then, if you had an incompatible .o file, running "ar" on it
> would not, as Jerry seems to suggest, magically fix up those
> incompatibilities.
 
It does if you use the ar from the appropriate toolkit.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Apr 27 09:15PM -0400

On 4/27/2016 4:06 PM, Robert Wessel wrote:
> loader). But if you don't, you end up not being able to use all of
> the various tools and that can normally interoperate because they use
> the same object file format. Again, more common on Windows.
 
Which is exactly what I have been saying, but people have been arguing.
.o files don't necessarily need to be compatible, as long as the library
and executable files which are generated are compatible.
 
Object file formats do NOT need to be anywhere near the same as library
(static or dynamic) file formats, as long as there are tools which
convert them to OS-dependent standard formats.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Öö Tiib" <ootiib@hot.ee>: Apr 27 06:33PM -0700

On Thursday, 28 April 2016 04:08:04 UTC+3, Jerry Stuckle wrote:
 
> > So, that'll be 140 bytes of header, followed by the object file,
> > unmodified in its entirety.
 
> Yup, when you use the gcc compiler and tools.
 
Jerry, you go too far with that denial here. 'ar' is a Unix utility
(much like 'tar'), not some sort of gcc tool.
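 
For what it's worth, the common System V / GNU 'ar' layout really is
nothing more than a magic string, fixed-size text headers, and the
unmodified member files. A rough sketch of a member lister under that
assumption (field offsets as commonly documented for ar(5); a sketch,
not a definitive implementation):
 
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>
 
// Lists the members of a static library, assuming the common System V /
// GNU ar layout: an 8-byte "!<arch>\n" magic, then for each member a
// 60-byte ASCII header followed by the member's bytes, unmodified.
int main(int argc, char ** argv)
{
    if (argc < 2) { std::cerr << "usage: lsar libfoo.a\n"; return 1; }
    std::ifstream in(argv[1], std::ios::binary);
 
    char magic[8];
    if (!in.read(magic, 8) || std::memcmp(magic, "!<arch>\n", 8) != 0)
    {
        std::cerr << "not an ar archive\n";
        return 1;
    }
 
    char hdr[60];
    while (in.read(hdr, 60))
    {
        std::string name(hdr, 16);                          // blank-padded member name
        long size = std::stol(std::string(hdr + 48, 10));   // decimal size in bytes
        std::cout << name << size << " bytes\n";
        in.seekg(size + (size % 2), std::ios::cur);         // members are padded to even offsets
    }
}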
Ian Collins <ian-news@hotmail.com>: Apr 28 04:21PM +1200

On 04/28/16 13:12, Jerry Stuckle wrote:
>> would not, as Jerry seems to suggest, magically fix up those
>> incompatibilities.
 
> It does if you use the ar from the appropriate toolkit.
 
The only "toolkit" ar comes from is the operating system.
 
--
Ian Collins
Ian Collins <ian-news@hotmail.com>: Apr 28 04:27PM +1200

On 04/28/16 16:21, Ian Collins wrote:
>>> incompatibilities.
 
>> It does if you use the ar from the appropriate toolkit.
 
> The only "toolkit" ar comes from is the operating system.
 
http://pubs.opengroup.org/onlinepubs/9699919799/
 
--
Ian Collins
Gareth Owen <gwowen@gmail.com>: Apr 28 05:52AM +0100


> Object file formats do NOT need to be anywhere near the same as library
> (static or dynamic) file formats, as long as there are tools which
> convert them to OS-dependent standard formats.
 
They do not *have* to be -- no-one ever said they had to be, because we
all know that on Windows, they are not.
 
On Unix, they *are*, and you -- despite your claims of great experience
-- have yet to state a single unix platform / compiler toolchain on
which this is not the case. Nor have you named a single toolchain on
which 'ar' performs any non-trivial modification of the object format.
 
So put up or shut up - from your vast claimed experience -- on what Unix
platform did 'ar' make such transformations, and what was the nature of
those transformations?
 
Name a platform.
Gareth Owen <gwowen@gmail.com>: Apr 28 05:53AM +0100


> Ah, but people have said that any .o files from any compiler are
> compatible - see all of the posts in this thread.
 
On which Unix platform is this not true?
 
> And "important players" is a matter of opinion. A compiler is quite
> important if you need to use it for a specific instance.
 
Which Unix compiler - important or otherwise - produces .o files
incompatible from the native compiler for that platform?
 
Name the compiler.
David Brown <david.brown@hesbynett.no>: Apr 28 10:13AM +0200

On 27/04/16 21:42, Jerry Stuckle wrote:
>> where it refers to any sort of calling conventions at all.
 
> It's not required by the C standard - but it is required by the OS for
> language-neutral libraries.
 
On /windows/, Pascal-like calling conventions are used for the Win16 and
Win32 APIs. Thus on /windows/, it is the calling convention used by
/dynamic/ libraries, and programming languages and their compilers that
want to directly call functions in /dynamic/ libraries must support it.
And we have already established that on /windows/, there is no standard
for static libraries or object code formats, with every tool picking
their own methods.
 
On *nix, C like calling conventions are the standard, and are used by
dynamic libraries, static libraries and object code. After all, the OS
kernel is written in C, the standard libraries are written in C, and a
fair percentage of the applications are written in C. Why would you
possibly imagine that the ABI would specify something other than C
calling conventions for libraries? How would printf work? (Variable
argument functions are easily handled with C calling conventions, but
not with Pascal conventions.)
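 
As a small illustration of that point, here is a minimal C-style
variadic function (sum_ints is just an illustrative name); only the
call site knows how many arguments it actually passed, which fits
naturally with caller-cleans-up C conventions:
 
#include <cstdarg>
#include <cstdio>
 
// Sums 'count' ints. With C-style (cdecl-like) conventions the caller
// pops the arguments, so it does not matter that the callee cannot know
// the argument count on its own -- the same property printf relies on.
int sum_ints(int count, ...)
{
    va_list args;
    va_start(args, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int);
    va_end(args);
    return total;
}
 
int main()
{
    std::printf("%d\n", sum_ints(3, 1, 2, 3));   // prints 6
}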
 
 
> those libraries can be called by any language. And although I hate to
> use Wikipedia as a reference, it is somewhat accurate in this case:
> https://en.wikipedia.org/wiki/X86_calling_conventions
 
If you try reading that page, you will perhaps notice that there is no
such thing as "the OS" - there are many operating systems, and each may
have its own calling conventions for its APIs and standard libraries.
*nix systems (including Linux) use a single C calling convention for the
API and within object files. On Windows, there is one calling
convention ("stdcall") for the main APIs, but there are a variety of
other conventions in common use, and different tools use different
conventions by default.
 
This is why people have been discussing *nix systems, where these things
are specified and different tools all cooperate, rather than the Windows
world (and its DOS predecessor) where there is a jumble of local
conventions, often designed specifically for vendor lock-in rather than
cooperation.
 
> was used by a lot of companies (and taught in the universities) in the
> 70's and 80's. But companies went away from it for a number of (valid)
> reasons.
 
Yes, there have been other Pascals. But the users of Delphi and its
predecessors Borland Pascal and Turbo Pascal probably outweigh the
combined number of users for every other Pascal by a factor of a hundred.
 
And any Pascal implementation that is capable of accessing external
library code directly (rather than through wrappers provided by the
implementation) will have to support the calling conventions of the
target OS. On Windows, that means "stdcall" - on *nix, that means C
calling conventions. I don't know what calling conventions were used on
UCSD P-code systems - I was perhaps 9 or 10 when I used that on a VAX,
and did not get far beyond a "Hello, world" program.
 
 
> Nope, you have a habit of skipping answers which conflict with your
> limited knowledge, and then claim you don't know what someone is talking
> about.
 
The only thing I can guess you meant was:
 
"""
Then why aren't object files compatible across compilers and operating
systems? If it were generated with gcc on Linux, I should be able to
link it in using a Microsoft compiler on Windows.
"""
 
But that has already been answered by others.
 
 
>> wrong, and unable and unwilling to think or learn from anything anybody
>> else says.
 
> Of course not. It would conflict with your limited knowledge.
 
Assuming I have guessed the correct question, Scott's answer does not
"conflict with my limited knowledge" - it matches perfectly with what I
know.
 
(And yes, my knowledge /is/ limited, as is everybody's. But the
difference between you and everyone else in this thread is that we all
/know/ that our knowledge is limited, and are happy to look up
references, ask questions, state our uncertainties or correct our
mistakes. You, on the other hand, seem quite happy to pontificate with
a total confidence even when your claims are pure fantasy, and everyone
around you is laughing. I am sure you would make an excellent poker
player, but I am very glad I don't have to deal with you professionally.)
 
 
>> your obvious knowledge and experience (after all, you worked for IBM!).
 
> Did you ever figure out how to determine at compile time whether a
> program was being built on a big endian or a little endian machine?
 
Um, what /exactly/ does that have to do with the topic under discussion
here?
 
There is no implementation-independent way to do this in C or C++. I
have never claimed otherwise. There are many implementation-dependent
ways - most compilers that target multiple platforms have pre-defined
macros that can be tested, and there are libraries (such as from boost)
that combine a range of these tests for convenience.
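 
For example, a sketch using the gcc/clang predefined macros -- purely
implementation-specific, not something any C or C++ standard provides:
 
#include <cstdio>
 
// Implementation-specific: __BYTE_ORDER__ and friends are gcc/clang
// predefined macros, not part of any C or C++ standard.
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
  #define TARGET_LITTLE_ENDIAN 1
#elif defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
  #define TARGET_LITTLE_ENDIAN 0
#else
  #error "Endianness not known for this compiler; add a test for it here"
#endif
 
int main()
{
#if TARGET_LITTLE_ENDIAN
    std::puts("little endian target");
#else
    std::puts("big endian target");
#endif
}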
 
With C++11 or C++14 constexpr functions, it is possible that there is a
way to do it - but that would only give you limited capabilities. In
particular, to be really useful, you need the endianness to be testable
in the preprocessor.
 
 
> I thought not. So much for your "expertise".
 
If you can post a completely implementation-independent method of
determining the endianness of a target at compile time, using nothing
but the features required in one of the C or C++ standards (pick any one
you like), I will happily admit to being wrong in this case. The result
must be a "constant expression" - legally usable for things like
template instantiation. It is not sufficient to give an expression that
a compiler can often optimise away at compile time. Bonus points if it
is testable in the pre-processor.
 
You have raised this challenge - now it is up to you. If you can post
such code, your standing on Usenet will increase significantly.
Certainly I will publicise your success in c.l.c and c.l.c++, and it
will be seen even by the many people who have killfiled you. If not,
then your reputation - low as it is - will drop further.
Christian Gollwitzer <auriocus@gmx.de>: Apr 28 10:34AM +0200

Am 28.04.16 um 10:13 schrieb David Brown:
> is testable in the pre-processor.
 
> You have raised this challenge - now it is up to you. If you can post
> such code, your standing on Usenet will increase significantly.
 
Be careful with such a challenge. He might fight the referee. After
posting a near miss at a solution, he will then refuse to admit that
he is wrong.
 
For instance, a practical test would be
 
const int dummy = 1;
bool little_endian = *(reinterpret_cast<const char *>(&dummy));
 
Of course, this test is not covered by any standard, so it fails your
challenge, but it will still give the answer most people want on the
platforms where endianness is a reasonable question to ask. What people
really want to know, IMHO, is the binary layout of native data types
when accessed through a char array, in order to do binary I/O; otherwise
there is no need to care about memory layout at all. A program that
needs to query endianness will probably never run on an exotic platform
where sizeof(int) == sizeof(char), or where two's complement isn't
used, etc.
 
Christian
"Öö Tiib" <ootiib@hot.ee>: Apr 28 02:08AM -0700

On Thursday, 28 April 2016 11:34:23 UTC+3, Christian Gollwitzer wrote:
 
> For instance, a practical test would be
 
> const int dummy = 1;
> bool little_endian = *(reinterpret_cast<char *> (&dummy));
 
Read the challenge carefully.
That code fails the challenge because 'little_endian' is not a compile-
time constant. A compile-time version would be something like this:
 
// FAIL, does not compile anywhere
constexpr int dummy = 1;
constexpr bool little_endian = *(reinterpret_cast<char *> (&dummy));
 
 
> what people really want to know IMHO, is the binary serialization of
> native datatypes, when you access it from a char array, in order to do
> binary I/O.
 
Yes, read more carefully and try to express yourself with shorter sentences.
 
 
David Brown <david.brown@hesbynett.no>: Apr 28 11:33AM +0200

On 28/04/16 10:34, Christian Gollwitzer wrote:
> never have a chance to run on exotic platforms where sizeof(int) ==
> sizeof(char), or two's complement isn't used etc.
 
> Christian
 
As has been pointed out by Öö, this is not determined at compile time.
It's easy to determine endianness at run time in this manner (and it is
possible to make it more robust in the face of awkward platforms, such
as by using "unsigned long long" rather than "int" - though I am not
sure if this is valid for every platform).
 
Sometimes knowing endianness at runtime (especially if the compiler can
optimise the test) is good enough - but often it is not.
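 
A sketch of a runtime test that avoids the cast (and that optimising
compilers will usually fold to a constant, though nothing guarantees it):
 
#include <cstdint>
#include <cstdio>
#include <cstring>
 
// Runtime endianness test: look at the first byte of a known 32-bit value.
// memcpy avoids the aliasing and const issues of the reinterpret_cast
// version; optimising compilers usually reduce the whole test to a constant.
inline bool is_little_endian()
{
    const std::uint32_t one = 1;
    unsigned char first_byte = 0;
    std::memcpy(&first_byte, &one, 1);
    return first_byte == 1;
}
 
int main()
{
    std::puts(is_little_endian() ? "little endian" : "big endian");
}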
scott@slp53.sl.home (Scott Lurndal): Apr 28 01:57PM

>On 4/27/2016 1:37 PM, Scott Lurndal wrote:
 
 
>Believe I've been working with unix for probably a lot longer than you
>have - back around 1979 (although not continuously since that time)
 
1977.
scott@slp53.sl.home (Scott Lurndal): Apr 28 02:01PM


>> That assertion is false.
 
>Wrong answer, Gareth. There is much more that needs to be done than
>just concatenating files. Otherwise, why would you even need ar?
 
Use the source, Luke.
 
http://ftp.gnu.org/gnu/binutils/binutils-2.26.tar.gz
JiiPee <no@notvalid.com>: Apr 28 12:51PM +0100

What is the difference between arguments:
 
void foo(char a[]) {}
 
and
 
void foo(char* a) {}
 
Can foo use the array the same way in both?
 
char k[] = "hello";
 
creates memory for the k array and copies "hello" there. But I can see that
char a[] does not make a new array and copy the caller's values there... it
seems like it's only a pointer. Is this correct? If I call foo("John"),
I can see that I cannot modify a's values, so it's clearly a pointer to
the "John" literal. Are both of those just pointer arguments?
Michael Tsang <miklcct@gmail.com>: Apr 28 08:26PM +0800

JiiPee wrote:
 
> seems like its a pointer only. Is this correct? Is I call foo("John") ,
> i can see that I cannot modify a's values, so its clearly a pointer to
> John-literal. Are both of those just pointer arguments?
 
They are the same (pass by pointer), as defined by the specification.
--
Sent from KMail
JiiPee <no@notvalid.com>: Apr 28 01:28PM +0100

On 28/04/2016 13:26, Michael Tsang wrote:
> They are the same (pass by pointer) as defined by the specification
 
oh ok thanks
Victor Bazarov <v.bazarov@comcast.invalid>: Apr 28 09:24AM -0400

On 4/28/2016 7:51 AM, JiiPee wrote:
> seems like its a pointer only. Is this correct? Is I call foo("John") ,
> i can see that I cannot modify a's values, so its clearly a pointer to
> John-literal. Are both of those just pointer arguments?
 
Built-in arrays are not first-class objects in C++ (and never have
been). You can't pass them by value (as you seem to expect from the
former declaration). You can pass them by reference, but you need to
know the exact size of the array. What book are you reading that
doesn't explain that?
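 
A small sketch showing the difference (the function names are just
illustrative):
 
#include <cstddef>
#include <iostream>
 
void by_pointer(char * a) { a[0] = 'J'; }        // a is a pointer, size unknown
 
void by_reference(char (&a)[6]) { a[0] = 'J'; }  // exact size is part of the type
 
template <std::size_t N>
void by_template_reference(char (&a)[N])         // size deduced, still a reference
{
    std::cout << "array of " << N << " chars\n";
}
 
int main()
{
    char k[] = "hello";       // char[6], including the terminating '\0'
    by_pointer(k);            // k decays to char*
    by_reference(k);          // only accepts a char[6]
    by_template_reference(k); // N deduced as 6
}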
 
V
--
I do not respond to top-posted replies, please don't ask
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Apr 28 03:37PM +0200

On 28.04.2016 13:51, JiiPee wrote:
 
> and
 
> void foo(char* a) {}
 
> Can the foo use the array the same way in both?
 
They are the same because for a formal argument type, array decays to
pointer.
 
There is a similar decay for a function type as a formal argument type:
it also decays to a pointer.
 
For functions you can use a function reference as the formal argument
type, in order to express the intent that your function is not called
with a null pointer as the actual argument.
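 
A minimal sketch of that, with purely illustrative names:
 
#include <iostream>
 
// A callback taken by reference cannot be null, unlike a function
// pointer parameter.
void run_twice(void (&callback)())
{
    callback();
    callback();
}
 
void say_hello() { std::cout << "hello\n"; }
 
int main()
{
    run_twice(say_hello);     // fine: a function lvalue binds to the reference
    // run_twice(nullptr);    // error: nullptr cannot bind to a function reference
}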
 
For raw arrays this is AFAIK not possible for unknown array size,
because a reference to array of unknown bounds is just invalid.
 
In other contexts than formal argument type the type itself doesn't
change, but an expression of one of these types has implicit conversion
to pointer when the context requires a pointer.
 
 
> char k[] = "hello";
 
This is very different, because it's not a formal argument type.
 
This results in k getting the type `char[6]`.
 
 
Cheers & hth.,
 
- Alf
SG <s.gesemann@gmail.com>: Apr 28 04:08AM -0700

On Wednesday, April 27, 2016 at 4:04:25 PM UTC+2, Heinz-Mario Frühbeis wrote:
 
> I was searching for a way to move an iterator to a position in a
> vector and found: move_iterator.
> E.g.: <http://www.cplusplus.com/reference/iterator/move_iterator/>
 
That move_iterator probably doesn't do what you think it does.
 
> vec.push_back("Beispiel");
> it = vec.begin();
> std::move_iterator<it>(2);
 
First of all "it" is not a type. So, std::move_iterator<it> doesn't
make any sense. Secondly, The move_iterator<> constructor doesn't
take a number. It takes an iterator to wrap.
 
What you probably meant to write is this:
 
auto it = vec.begin() + 2;
 
or this:
 
auto it = std::next(vec.begin(), 2);
 
There is also the older std::advance:
 
auto it = vec.begin();
std::advance(it, 2);
 
This gives you an iterator that points to "bin". The + operator works
because a vector iterator is a random access iterator and every random
access iterator supports this kind of "iterator arithmetic".
 
std::advance is a little more general in that it also supports other
iterators (like bidirectional ones).
 
std::next is the "functional version" of std::advance that is
supported since C++11.
 
> But this gives me an error:
> 'move_iterator' is not a member of 'std'
 
> What can I do?
 
move_iterator is also a C++11 feature. So, if you want to make use
of a C++11 feature, you should make sure that the compiler operates
in C++11 mode (or beyond).
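 
For completeness, what move_iterator is actually for is wrapping another
iterator so that dereferencing yields an rvalue, letting algorithms move
elements out of a range rather than copy them. A small sketch:
 
#include <iterator>
#include <string>
#include <vector>
 
int main()
{
    std::vector<std::string> src { "eins", "zwei", "drei" };
 
    // Moves the strings instead of copying them; src's elements are left
    // in a valid but unspecified state afterwards.
    std::vector<std::string> dst(std::make_move_iterator(src.begin()),
                                 std::make_move_iterator(src.end()));
}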
 
Cheers!
SG
"Heinz-Mario Frühbeis" <Div@Earlybite.individcore.de>: Apr 28 08:24AM +0200

Am 16.04.2016 um 22:10 schrieb Kalle Olavi Niemitalo:
> the best solution, as long as you're using X11 core text.
> Then your program doesn't itself have to convert the strings
> to the encoding of the font.
 
Hi,
 
what I noticed is that the printing of the umlauts depends on the font name.
E.g. I currently use a different font name for my Label area than for my
Textbox area.
Umlauts in the Label come out as odd characters, but the Textbox
prints 'Ü', 'ü', 'ß', etc. correctly.
 
So it depends on the given/selected font.
 
Til then
Heinz-Mario Frühbeis