Saturday, October 25, 2014

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

David Brown <david.brown@hesbynett.no>: Oct 22 11:32PM +0200

On 22/10/14 18:31, Scott Lurndal wrote:
>> languages. Of course it is possible to be good at both - but then you
>> are a "C and C++ programmer", not a "C/C++ programmer".
 
> IME a good programmer is good regardless of the language programmed in.
 
That is mostly true. I should have written "/experienced/ C++
programmer" (etc.) rather than "/good/ C++ programmer".
"Öö Tiib" <ootiib@hot.ee>: Oct 25 04:25AM -0700

On Saturday, 25 October 2014 02:02:19 UTC+3, Christopher Pisz wrote:
> side not to be too bad. I hate COM, but sometimes I just need to talk to
> C# and it's better than writing something from scratch to do the job.
> Pros outweigh cons sometimes.
 
My impression is that COM is fine. It can be used for inter-process
communication and even for RPC (but that RPC part feels vulnerable,
it opens more attack vectors, so it is better to disable). The C++ end
is tolerable, considering that it works across processes and in a
language-neutral way.
 
> This still leads to debugging problems as described. As well as memory
> leak detection problems. However, if your COM layer is just a simple
> pass through, then that's not as big of an issue.
 
Yes, it requires some discipline and skill, but that seems worth it.
Debugging is simpler if you keep the modules written in different
languages in different processes. Then we just have to attach a separate
debugger to each process.
 
The only memory management issues I have seen on the C++ side were when
someone did not use the ATL COM smart pointers and/or called
'IUnknown::AddRef' or 'IUnknown::Release' directly for whatever reason.
That indeed screws everything up. :D
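
For illustration, a minimal sketch of the smart-pointer version, assuming
COM is already initialised; the interface 'IMyThing' and 'CLSID_MyThing'
are made-up names, not a real component. With 'CComPtr' the reference
counting happens automatically:

#include <atlbase.h>   // CComPtr (ATL, Windows only)

// Hypothetical interface and CLSID, for illustration only.
void use_component()
{
    CComPtr<IMyThing> thing;
    HRESULT hr = thing.CoCreateInstance(CLSID_MyThing);
    if (FAILED(hr))
        return;
    thing->DoWork();   // no manual AddRef needed...
}                      // ...and Release runs automatically here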
 
The only thing I don't like about COM is its dependency on the Windows
registry. That is the fault of the registry; a database of everything
is good for nothing.
"Tobias Müller" <troplin@bluewin.ch>: Oct 25 12:19PM

> anyone welcome that? C and C++ can with ease interface with script
> languages like Python. What is oh so special about C# that it needs
> a vomit package?
 
Did you actually ever use it or do you always complain like that about
things you don't know yourself?
 
Interop from C# is actually quite painless. I like it much more than
Java/JNI. I don't know if it's the same on Android, though.
 
The primary C# interop construct is not C++/CLI but the so-called P/Invoke
(platform invoke). It's similar to the FFI in most languages: define a
prototype of a C function in C# and then call it.
For most cases this is enough and you don't need C++/CLI.
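
As a minimal sketch of that (the library name and function name here are
made up): the native side is just a plain C-callable export, and the C#
side declares it with [DllImport] and calls it like any static method.

// native.cpp -- hypothetical example, built into native.dll
// C# side would declare:
//   [DllImport("native.dll")] static extern int add_numbers(int a, int b);
extern "C" __declspec(dllexport)
int add_numbers(int a, int b)
{
    return a + b;
}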
 
Use C++/CLI only for the complex cases. There you can almost freely mix
managed/unmanaged code with managed/unmanaged data.
 
Tobi
"Öö Tiib" <ootiib@hot.ee>: Oct 25 06:17AM -0700

On Thursday, 23 October 2014 19:33:18 UTC+3, Christopher Pisz wrote:
> on an every day desktop system where 100s of processes are running, what
> are the chances you are ever going to be in L1 or L2 anyway unless your
> process has the highest priority?
 
The 100s of processes is true, but most of those are idle most of the
time. Also, a modern desktop system has several processor cores; one has
to have skill to keep all cores occupied on a desktop system. The cores
do not share L1 or L2. The few processes/threads that do work usually run
on different cores.
 
So we will be in cache most of the time, unless we spread the data
structure thinly over a large memory area (with the likes of 'std::list',
'std::set' or 'std::map'). Replacing those with 'boost::intrusive::?' or
'boost::flat_?' (note that which is better depends on circumstances)
typically results in a program that is 2 times faster, sometimes 4 or 8
times.
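
A rough way to see the effect (a minimal sketch; absolute numbers vary by
machine and compiler): sum the same values stored once in a 'std::vector'
and once in a 'std::list' and compare the timings.

#include <chrono>
#include <iostream>
#include <list>
#include <numeric>
#include <vector>

template <class Container>
long long timed_sum(const Container& c)
{
    auto t0 = std::chrono::steady_clock::now();
    long long sum = std::accumulate(c.begin(), c.end(), 0LL);
    auto t1 = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(
                     t1 - t0).count() << " ms\n";
    return sum;
}

int main()
{
    std::vector<int> v(10000000, 1);          // contiguous, cache friendly
    std::list<int>   l(v.begin(), v.end());   // one node per element, scattered
    timed_sum(v);
    timed_sum(l);
}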

> need, I find it hard to believe that I could get better performance
> using a vector, since the whole vector has to move because of its
> guarantee to be contiguous.
 
"Never" is always wrong to say. There are plenty of cases where
'std::vector' alone performs worse than 'std::list'. Suggestion to
*always* use 'vector' instead of 'list' is plain wrong.
 
The best-performing function of 'std::list' is likely 'splice'. It is
rarely used (or even mentioned) when solving real problems, and usually
something that uses 'boost::intrusive::list' beats it in tests.
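
For completeness, a minimal sketch of what 'splice' does: it moves nodes
between lists in O(1), without copying elements or invalidating iterators.

#include <iostream>
#include <list>

int main()
{
    std::list<int> a = {1, 2, 3};
    std::list<int> b = {10, 20, 30};

    // Move all of b to the end of a: no element is copied or moved,
    // only the internal node pointers are relinked.
    a.splice(a.end(), b);

    for (int x : a) std::cout << x << ' ';   // prints: 1 2 3 10 20 30
    std::cout << '\n';                       // b is now empty
}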
 
> I'd be interested in seeing some test code to profile that show results
> that conflict with the way we usually think.
 
Discussing the performance of containers in general would take a fat
book, not a Usenet article, a short video or a blog post.
With 'std::list' it is simple. Just produce a test case that achieves
anything reasonable (the result of the algorithm has a point) with a long
'list' and challenge people to write the algorithm with something else.
If they can't, then we have finally found a case where 'std::list' wins. ;)
"Öö Tiib" <ootiib@hot.ee>: Oct 25 09:40AM -0700

On Saturday, 25 October 2014 15:20:05 UTC+3, Tobias Müller wrote:
> > a vomit package?
 
> Did you actually ever use it or do you always complain like that about
> things you don't know yourself?
 
What is "it"? I have indeed participated in writing several
applications in C++ for various platforms and also few in C#. Where
C++/CLI has been ever used there it was most broken part. Even
microsoft's own intellisense could not sort it out what the code means
and it had to be replaced, typically with COM interop.
 
> For most cases this enough and you don't need C++/CLI.
 
> Only for complex cases use C++/CLI. You can almost freely mix
> managed/unmanaged code with managed/unmanaged data.
 
That was the *whole* *point* of my complaint: C++/CLI is a good-for-nothing
and needed-for-nothing vomit of a language whose only purpose is to
discredit the C++ language with its misleading name.
woodbrian77@gmail.com: Oct 25 10:30AM -0700

On Saturday, October 25, 2014 8:18:12 AM UTC-5, Öö Tiib wrote:
 
> Most well-performing function of 'std::list' is likely 'splice'. It
> is rarely used (or even mentioned) when solving real problems and
> usually something that uses 'boost::intrusive::list' wins it in tests.
 
I still dislike having the same name, list, in ::std::list
and ::boost::intrusive::list. I think we see the unfortunate
consequences of the standards process making the container we
know as ::std::list part of the standard. If ::std::list
hadn't been added to the standard, we would have an easier
time today sorting this stuff out and giving the name "list"
to a more useful container. Things added to the standard
could have more conservative names. If they had called
::std::list "proposed_list", we wouldn't have this problem.
 
Brian
Ebenezer Enterprises
http://webEbenezer.net
"Öö Tiib" <ootiib@hot.ee>: Oct 25 12:40PM -0700

> > usually something that uses 'boost::intrusive::list' wins it in tests.
 
> I still dislike having same name of list in ::std::list
> and ::boost::intrusive::list.
 
Why? These are names in different namespaces. That is enough
to tell them apart.
 
> hadn't been added to the standard, we would have easier
> time today sorting this stuff out and giving name of "list"
> to a more useful container.
 
The problem with list is neither its name nor its design. A linked list
is a sequence. The order of the linked elements in the sequence is
important, but it should not be the result of comparing the elements (or
else a set or priority queue performs better), nor the result of comparing
something related to the elements (or else a map performs better), nor the
result of adding/removing elements at the container ends (or else a deque
or ring buffer performs better).
 
That basically narrows the usefulness of list down to situations where
the sequence is produced by 'splice'ing sub-sequences between different
lists. It is hard to imagine an example of such a situation.
There is not much difference whether it is 'std::list',
'boost::intrusive::slist' or 'boost::intrusive::list'. All of those are
doomed by being optimal for a rather unusual situation.
 
> Things added to the standard could have more conservative names.
> If they had called ::std::list "proposed_list", we wouldn't have
> this problem.
 
I think it is not about names. For most people there are already
too many containers in the standard library, and they already have
difficulty picking one. Adding more detail there likely makes it
worse, not better.
jacob navia <jacob@spamsink.net>: Oct 25 11:13PM +0200

On 20/10/2014 22:20, Christopher Pisz wrote:
> Spoken like a true C developer. If it works it's good. Bah!
 
Well, obviously you think that if it doesn't work it is much better!
 
Surely a C++ "aficionado".
jacob navia <jacob@spamsink.net>: Oct 25 11:24PM +0200

On 21/10/2014 18:20, Christopher Pisz wrote:
 
> It's more a matter of seeing C as a cult of relic programmers whom have
> twisted values built into them from years of programming with 4k of
> memory, that do not apply anymore, and are out to ruin C++ projects,
 
C programmers often prefer simplicity in coding. They tend to dislike
the utter bloat and the performance hit that come with fat code.
 
Now, within the C++ community, insulting the C programmers is a very
old trend, going back to the inception of this language. The myth of the
"old C programmer" has been vomited for at least 20 years by the C++
heads when they try to justify their new lambdas, templates, what have
you...
 
> fiasco, making retarded interfaces that take string arguments and then
> parsing them using lucky charms encoder rings to get real arguments and
> call other functions and on and on and on.
 
How easy it is to insult people without even the shadow of some data to
justify these insults.
 
I can do the same:
 
> wasn't responsible for more than 80% of the bugs being tracked. Whom
> didn't argue until veins popped out of their forehead about things like
> how to use templates until nobody understand what the code does,
using lambdas, tiny classes all over the friggin place "because its
easier', ignoring the documentation fiasco, making retarded interfaces
that take string arguments and then
> parsing them using lucky charms encoder rings to get real arguments and
> call other functions and on and on and on.
 
How about that? Isn't it nice to insult other people while riding your
high horse?
 
> My response was to your statement, "If it works, then it is good code",
> for which you should be shot.
 
Of course, code that doesn't work is better! Is that the reasoning
of the C++ programmers?
 
> And the tie in to "spoken like a true C programmer" is because that
> statement has been said in so many arguments in the past by the very C
> programmers of which I speak.
 
Surely C++ programmers do not follow the principle:
 
IF IT AIN'T BROKE DO NOT FIX IT IDIOT!
jacob navia <jacob@spamsink.net>: Oct 25 11:34PM +0200

On 22/10/2014 18:35, Christopher Pisz wrote:
> different idea of what "good" is and it is often incorrect, because the
> definition has changed quite a bit.
 
> The quote that started this subthread is a perfect example.
 
Yes, perfect
 
C++ has been able to hide the loss of performance and efficiency by
riding on hardware advances. Modern processors do get bogged down
compiling template-ridden code, but it is masked by machines with 8/16
processors and 32 GB of memory.
 
Bad luck for C++: people are starting to realize that all those language
features DO HAVE an important COST.
 
Not only in embedded systems where each kilobyte counts but ALSO in more
advanced environments.
 
Cost of code bloat, that destroys performance
 
Cost of code bloat that destroys readability
 
Cost of portability issues between compilers and between versions of the
language.
 
Nobody is able to understand 100% of all C++. Not even Stroustrup, who
after years of effort was forced to acknowledge that he could not add
an essential new feature to the language ("concepts", anyone?).
 
The people here give proof of this situation with every slightly
complex question:
 
Why doesn't this compile?
 
The answers are often:
 
It compiles with compiler "xyz"
It doesn't compile with compiler 'zzz'
 
Nobody is able to follow the language text and deduce why it should
compile or not!
 
Look, C++ has its strengths but also a lot of weaknesses; the
principal one is its sheer SIZE.
 
Of course (as you point out) you can go to the computer store and buy
more RAM and accommodate an even bigger language.
 
But there aren't any stores to buy a BRAIN EXTENSION to accommodate the
ever increasing amount of C++ trivia we are supposed to swallow!
 
Yours truly
 
jacob, a C programmer
Ian Collins <ian-news@hotmail.com>: Oct 26 10:59AM +1300

jacob navia wrote:
 
> Not only in embedded systems where each kilobyte counts but ALSO in more
> advanced environments.
 
> Cost of code bloat, that destroys performance
 
So don't write bloated code, this isn't rocket science.
 
> Cost of code bloat that destroys readability
 
So don't do it.
 
> Cost of portability issues between compilers and between versions of the
> language.
 
So I can compile my strictly C99 code using the Microsoft compiler?
 
> Of course (as you point out) you can go to the computer store and buy
> more RAM and accomodate an even bigger language.
 
The only reason to do that is to load the pdf of the standard....
 
> ever increasing amount of C++ trivia we are supposed to swallow!
 
> Yours truly
 
> jacob, a C programmer
 
So you don't use your own proprietary C extensions?
 
--
Ian Collins
Andreas Dehmel <blackhole.8.zarquon42@spamgourmet.com>: Oct 25 03:41PM +0200

On Fri, 24 Oct 2014 21:41:33 +0100
> > standard has at least as big a part in this mess as Microsoft does.
 
> It is not for the C++ standard to mandate codesets for filesystems,
> that would be absurd.
 
I didn't say codesets for filesystems but a working UTF-8 locale,
that's different. That would merely ensure a _lossless_ encoding
to use in communicating with the standard library, which will still
have to translate that to whatever the system uses for its various
components (just as it does now).
 
 
> And it isn't a mess - it was you trying to make something of it. I
> have never written code which depends on a particular narrow character
> codeset for the target machine.
 
I'm sure you have. Pretty much everybody has because the encoding
of compiler-generated string literals is implementation-defined
and we've all used those in one form or another. It being some sort
of ASCII-superset is merely a very common convention. Only C++11
u8-literals have a well-defined encoding and these will take a long
time to appear in actual code.
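
For reference, a small sketch of that difference: only the u8-literal is
guaranteed to be UTF-8; the plain literal uses whatever narrow encoding
the implementation chose.

// C++11
const char* portable = u8"Grüße";  // guaranteed UTF-8, byte for byte
const char* whatever = "Grüße";    // implementation-defined narrow encoding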
 
 
[...]
> > anaemic.
 
> Care to elaborate on the "crap" that std::string pulls in that QString
> does not?
 
Well, looking at the string-includes for GCC-4.7 on Linux:
 
 
#include <bits/c++config.h>
#include <bits/stringfwd.h>
#include <bits/char_traits.h> // NB: In turn includes stl_algobase.h
#include <bits/allocator.h>
#include <bits/cpp_type_traits.h>
#include <bits/localefwd.h> // For operators >>, <<, and getline.
#include <bits/ostream_insert.h>
#include <bits/stl_iterator_base_types.h>
#include <bits/stl_iterator_base_funcs.h>
#include <bits/stl_iterator.h>
#include <bits/stl_function.h> // For less
#include <ext/numeric_traits.h>
#include <bits/stl_algobase.h>
#include <bits/range_access.h>
#include <bits/basic_string.h>
#include <bits/basic_string.tcc>
 
 
... and it only gets worse from there. While some of these are obviously
needed, I wouldn't call pulling in everything from iostreams to (later
on) hash functions particularly memorable design. YMMV.
 
 
 
 
Andreas
--
Dr. Andreas Dehmel Ceterum censeo
FLIPME(ed.enilno-t@nouqraz) Microsoft esse delendam
http://www.zarquon.homepage.t-online.de (Cato the Much Younger)
"Öö Tiib" <ootiib@hot.ee>: Oct 25 10:09AM -0700

On Saturday, 25 October 2014 17:09:25 UTC+3, Andreas Dehmel wrote:
> of ASCII-superset is merely a very common convention. Only C++11
> u8-literals have a well-defined encoding and these will take a long
> time to appear in actual code.
 
Everybody uses string literals for ASCII (of which UTF-8 is a superset).
People who use string literals where ASCII is not sufficient
(like i18n) will soon regret it anyway. No one wants those as
string literals. Translation company? No. Coder? No. End user? No.
So a non-ASCII string literal is basically a red herring and a bear trap
for the naive novice developer.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 25 10:05PM +0100

On Sat, 25 Oct 2014 15:41:05 +0200
> to use in communicating with the standard library, which will still
> have to translate that to whatever the system uses for its various
> components (just as it does now).
 
That is pointless. There is no point in providing a lossless channel
to a lossy system encoding. You just stay with the system encoding,
whatever it happens to be.
 
> of ASCII-superset is merely a very common convention. Only C++11
> u8-literals have a well-defined encoding and these will take a long
> time to appear in actual code.
 
I have not, and the same applies as I have set out above. Unless you
are writing to a machine whose narrow or wide locale encoding is
guaranteed to support unicode, you can't write string literals covering
the full unicode range in any context which uses narrow or wide
encoding. I do not write programs for such constrained cases, and I do
not think you could find any program which does, except in very
specialized cases. In practice I pass all such strings dynamically
through a text and encoding translation layer, such as gettext (gettext
will translate both), but there are a number of similar systems
available. Translation and encoding failure is then soft and
recoverable dynamically at run time (say with untranslated ASCII or
substitute characters), which it has to be, because the system locale
on which a binary runs is a runtime property. Incidentally, Qt does
the same for its strings.
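
As a minimal sketch of such a layer using GNU gettext (the domain name
"myapp" and the locale directory are made-up values):

#include <libintl.h>
#include <clocale>
#include <cstdio>

#define _(s) gettext(s)   // common shorthand for the translation lookup

int main()
{
    std::setlocale(LC_ALL, "");                    // locale is a runtime property
    bindtextdomain("myapp", "/usr/share/locale");  // hypothetical domain/catalog dir
    textdomain("myapp");

    // Looked up at run time; falls back to the untranslated string
    // if no translation (or no matching locale) is available.
    std::printf("%s\n", _("Hello, world"));
}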
 
> obviously needed, I wouldn't call pulling in everything from
> iostreams to (later on) hash functions particularily memorable
> design. YMMV.
 
Frankly, that is pathetic. These headers are all very basic stuff,
separated out for the purposes of modularity, and are no indication of
either complexity or "crap" content.
 
You started off with what I regard as a bizarre proposition (to another
poster) that "Could it also be that you haven't realized that your
suggestion of delegating string operations to ICU (and implicitly
everything file-related to yet another library or platform-specific
CRT) is just a roundabout way of saying the standard library has NO
string support? Amusing", and then added to it in a way which led me to
believe that you did not have a clear understanding of how Qt strings
are implemented internally, nor of unicode nor the standard library.
However, you are a very accomplished linguist (and that is a
compliment).
 
Chris
Chandrasekhar Thumuluru <chandra.thumuluru@gmail.com>: Oct 25 02:17AM -0700

On Friday, October 24, 2014 11:14:34 PM UTC+5:30, Paavo Helde wrote:
> exactly, mangled by extern "C" conventions) was probably the simplest way
> to ensure it works with all linkers.
 
> So, why do you care? What problem are you trying to solve?
 
Thanks for your response. I buy the answer. In fact, my understanding was the same.
 
Here was the confusion I had:
I wrote the above sample and tried to change the program entry point from
main() to main1(). With that I was expecting the entry method to be
main1() and not main(), and hence that main() should be name-mangled and
not main1(). To my surprise that was not the case with the g++ compiler
I used.
 
After posting this question I read further and realized that #pragma
support is compiler-dependent. I used the -Wunknown-pragmas g++ flag and
realized g++ doesn't support #pragma entry. That explains the mangling
part.
Truth <ItIs@IsTruth.com>: Oct 25 09:44AM -0700

On 10/24/2014 10:39 AM, Victor Bazarov wrote:
> even mentioned by the Standard). What difference does it make whether
> it's done, or how it's done?
 
> V
 
The "difference" is knowledge and understanding. Something that was
loathsome to "The Russian Bunch" who flooded this group and the idiots
who followed them here back in the late-nineties and early two
thousands. And I see the last one left is still at it. Exactly and
precisely the same, no change at all. He's learned nothing.
 
I remember when Bjarne himself would post here.
 
I warned repeatedly and loudly back then to stop with that crap and
treating people like crap or this newsgroup would become as dead as
these jerks' social lives. And look at this group now. It's a burned-out
stinking fetid ghetto. Extremist freaks arguing... nothing. I was 100
percent right.
 
Argue against the fact this group is pathetic now and you show how
delusional one can be. Sausages Leigh? For fucks sakes.
 
But I'm talking to the Continuum aren't I.
Truth <ItIs@IsTruth.com>: Oct 25 09:54AM -0700

On 10/24/2014 10:44 AM, Paavo Helde wrote:
> exactly, mangled by extern "C" conventions) was probably the simplest way
> to ensure it works with all linkers.
 
> So, why do you care? What problem are you trying to solve?
 
There was a time when professionals simply answered a question. If they
didn't know, they'd say that or not respond in a newsgroup. They'd leave
it to the professionals. The best engineers I've ever known always
wonder and try to understand their craft, in all its aspects, and would
share the knowledge they worked hard to obtain. Knowledge that may not
seem pertinent at the moment, but it's always useful down the road.
 
The worst, the absolute worst "engineers" are the ones who say; "Why do
you need to know that? Oh, you don't. Why do you care?" The worst. Avoid
these people. And of course, the irony is these people never even
recognize this trait.
 
Watch and listen. Here it comes now.
"Öö Tiib" <ootiib@hot.ee>: Oct 25 10:49AM -0700

On Saturday, 25 October 2014 19:44:39 UTC+3, Truth wrote:
 
> Argue against the fact this group is pathetic now and you show how
> delusional one can be. Sausages Leigh? For fucks sakes.
 
> But I'm talking to the Continuum aren't I.
 
Read your own post above. It is the best example of recursion, since it
complains about the very problem it represents. No information, only
empty trash talk.
Paavo Helde <myfirstname@osa.pri.ee>: Oct 25 02:06PM -0500

Chandrasekhar Thumuluru <chandra.thumuluru@gmail.com> wrote in
 
> Here was the confusion I had:
> I wrote the above sample and tried to change the program entry point
> from main() to main1().
 
You can probably do that, but it's kind of swimming against the current.
Basically you need to tell the linker the new entry point name and throw
out or redirect any components which are expecting 'main'. See e.g.
http://stackoverflow.com/questions/7494244/how-to-change-the-entry-point-in-gcc
and make the needed modifications for C++ (like declaring main1 as
extern "C"). Lots of hassle with no gain.
 
Cheers
Paavo
Paavo Helde <myfirstname@osa.pri.ee>: Oct 25 02:09PM -0500

> do you need to know that? Oh, you don't. Why do you care?" The worst.
> Avoid these people. And of course, the irony is these people never
> even recognize this trait.
 
I didn't say he does not need to know. I asked because I suspected (and
rightly so) that his actual worries were elsewhere. Indeed, he wanted to
rename main, but got confused by the name mangling along the way.
 
Cheers
Paavo
agent@drrob1.com: Oct 25 08:33AM -0400

Thanks. That is very clear and helpful.
 
 
On Thu, 23 Oct 2014 05:00:45 CST, "James K. Lowden"
Rosario193 <Rosario@invalid.invalid>: Oct 25 07:34AM -0600


>Does this have something to do with make?
 
>Thanks,
>Rob
 
there was someone who said that it is not good to break code up into
many files unless it is a DLL...
 
 
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
scott@slp53.sl.home (Scott Lurndal): Oct 21 04:18PM

>When someone cannot accept something (like news or new information) they
>just put you in their kill file? That's hilarious! And cowardly to boot!
 
(1) this is a C++ group, not a political group
(2) What does climate change (regardless of what I think of it) have to
do with C++ or with training replacement C++ programmers?
(3) I noted that my experience was anecdotal. Did you miss that?
(4) I don't use kill files.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Oct 22 08:58PM

On Sat, 2014-10-18, Paavo Helde wrote:
> for any case. This is just confusing, even (and maybe especially) if the
> latter (i.e. former) value is not used. Why write down an expression
> preserving a special value which is not used?
 
But we do that all the time, e.g. when we do an assignment. There are
all kinds of values being created, ignored, and optimized away ...
 
Someone here, a few years ago, convinced me that basically "if ++i is
more efficient in that context than i++, your iterator type is strange
and probably broken".
 
Since I had forced myself for a long time to type ++i and /still/
didn't feel comfortable reading it, I bought the argument and switched
back.
 
(Of course now we have ranged-for, lambdas and so on, so you don't
have to deal with the issue that often.)
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
JiiPee <no@notvalid.com>: Oct 22 10:20PM +0100

On 22/10/2014 21:58, Jorgen Grahn wrote:
 
> (Of course now we have ranged-for, lambdas and so on, so you don't
> have to deal with the issue that often.)
 
> /Jorgen
 
I used i++ for like 15 years. Then I forced myself to use ++i and now I
don't have much problem doing it. Our brains get rewired by doing new
things... I feel good when seeing ++i because I know I am writing faster
code now :)
