Thursday, October 23, 2014

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 23 04:40PM +0100

On Thu, 23 Oct 2014 03:30:03 -0700 (PDT)
> libraries. Authors of C++ JSON parsers, for example, have to support
> conversions between UTF8 and codepoints, and they typically implement
> that themselves. That shouldn't be necessary.
 
The complaint was not that the C++ standard does not make provision for
codeset conversions, but that this amounts to it having "NO string
support". I think that is a very strange way of looking at it, and
wrong.
 
The C++ standard library does not come equipped with functions to carry
out conversions between different character sets. However I do not see
that as an impediment. There are bazillions of libraries available to
do that for you. If that is not good enough it would be
straightforward to make a proposal to the C++ standard committee for
them to be added (but which might well be rejected).
 
The actual complaint was that "you're horribly wrong in thinking
encoding doesn't matter for the majority of cases, it's the complete
opposite. Not even something seemingly simple like concatenation is
encoding-agnostic because concatenating strings with different
encodings will result in complete and utter garbage. String comparisons
_always_ depend on the encoding, not just "certain types" ...". The
further proposition was that the functions to enable this should be
part of the string interface.
 
I disagree with that also. You don't need specific unicode support to
provide string concatenation - a C++ string will concatenate anything
you offer it provided that it has the correct formal type (char for
std::string, wchar_t for std::wstring, char16_t for std::u16string or
char32_t for std::u32string). Ditto comparisons. It is up to the user
to get the codesets right when concatenating or comparing. Whether the
function which is doing the conversion in order to perform the
concatenation or comparison is internal to the string or external to it
is beside the point. std::string is enough of a monster already. Note
also that there are a number of systems whose wide character codeset is
neither UTF-16 nor UTF-32, and some systems still using narrow
encodings other than UTF-8. (ISO 8859-1 and its derivatives still seem
quite popular, as well as coinciding with the first 256 Unicode code
points.)
 
The last point to make is that the poster's exemplar of QString does not
actually seem to perform the function that he thinks it does. You still
have to offer QString its input in a codeset it recognises (UTF-8 or
UTF-16), for obvious reasons; for anything else the user has to make
her own conversions using fromLatin1(), fromLocal8Bit() or some
external conversion function, and if you don't it will fail. And you
still have to ensure that if you are comparing strings they are
correctly normalized (to use or not use combining characters). And
QString carries out comparisons of two individual characters using
QChar, which only covers characters in the basic multilingual plane.
And its character access functions also only return QChar and not a 32
bit type capable of holding a unicode code point. Indeed, as far as I
can tell (but I stand ready to be corrected) it appears that all
individual character access functions in QString only correctly handle
the BMP, including its iterators and the way it indexes for its other
methods such as chop(), indexOf() and size(). It even appears to allow
a QString to be modified by indexed 16 bit code units. If so, that is
hopeless.
 
I used to make frequent use of a string class which was designed for UTF-8.
In the end, I stopped using it because I found it had little actual
advantage over std::string. You still had to validate what went into
this string class to ensure that it really was UTF-8, and convert if
necessary. The class provided an operator[]() method which returned a
whole unicode code point which was nice (and which QString appears not
to), but in the end I made my own iterator class for std::string which
iterates over the string by whole code points (and dereferences to a
32 bit type), and in practice I found that was just as good.
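 
To illustrate, a stripped-down sketch of that kind of iterator (the
class name is invented for the example; it assumes the string already
holds valid UTF-8 and does no error checking):
 
#include <string>
 
class u8_code_point_iterator {
    std::string::const_iterator pos;
public:
    explicit u8_code_point_iterator(std::string::const_iterator p) : pos(p) {}
    char32_t operator*() const {
        unsigned char c = *pos;
        if (c < 0x80) return c;                       // single-byte sequence
        int len = (c >= 0xF0) ? 4 : (c >= 0xE0) ? 3 : 2;
        char32_t cp = c & (0x7F >> len);              // payload bits of the lead byte
        for (int i = 1; i < len; ++i)
            cp = (cp << 6) | (pos[i] & 0x3F);         // 6 bits per continuation byte
        return cp;
    }
    u8_code_point_iterator& operator++() {
        unsigned char c = *pos;
        pos += (c < 0x80) ? 1 : (c >= 0xF0) ? 4 : (c >= 0xE0) ? 3 : 2;
        return *this;
    }
    bool operator!=(const u8_code_point_iterator& rhs) const { return pos != rhs.pos; }
};
 
// e.g. for (u8_code_point_iterator it(s.begin()), end(s.end()); it != end; ++it) ...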
 
Chris
Nobody <nobody@nowhere.invalid>: Oct 23 06:34PM +0100

On Thu, 23 Oct 2014 16:40:45 +0100, Chris Vine wrote:
 
> The C++ standard library does not come equipped with functions to carry
> out conversions between different character sets.
 
Why doesn't std::codecvt qualify?
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 23 07:45PM +0100

On Thu, 23 Oct 2014 18:34:14 +0100
 
> > The C++ standard library does not come equipped with functions to
> > carry out conversions between different character sets.
 
> Why doesn't std::codecvt qualify?
 
Good point. It is easy to overlook C++11's built-in conversion
specializations for UTF-8 <-> UTF-16 and UTF-8 <-> UTF-32. There must
be at least some compilers that now support it (presumably gcc-4.9 and
clang-3.4 do). While std::codecvt has in the past been more closely
associated with file streams, there is no reason why it cannot be used
with strings, and there is std::wstring_convert to help you do it.
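 
For example, a UTF-8 <-> UTF-32 round trip might look like this (a
minimal sketch; it assumes your standard library actually ships
<codecvt>, which at the time of writing not all do):
 
#include <codecvt>
#include <locale>
#include <string>
 
int main()
{
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
    std::u32string cps = conv.from_bytes("gr\xc3\xbc\xc3\x9f");  // "grüß" in UTF-8
    std::string bytes = conv.to_bytes(cps);                      // and back again
}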
 
So does this mean that C++11 now does provide "string support" after
all? I suppose those that hold that line would have to say that it
does. (Of course, I say string support and conversion support are two
different things.)
 
Chris
Andreas Dehmel <blackhole.8.zarquon42@spamgourmet.com>: Oct 23 09:30PM +0200

On Wed, 22 Oct 2014 23:52:50 +0100
> purport to) represent an entire unicode code point. So I would have
> thought it would be very difficult if not impossible to provide O(1)
> random access.
 
Where is that documented? Because I looked through the Qt4 documentation
of QChar and QString and while they mention a 16-bit base type, there's
nothing there about the internal encoding being UTF-16. There are
certainly conversion functions for UTF-16 and several others in the
respective interfaces of these classes, just like for UTF-8 and Latin-1,
but that has nothing to do with the internal storage format.
 
 
 
Andreas
--
Dr. Andreas Dehmel Ceterum censeo
FLIPME(ed.enilno-t@nouqraz) Microsoft esse delendam
http://www.zarquon.homepage.t-online.de (Cato the Much Younger)
Andreas Dehmel <blackhole.8.zarquon42@spamgourmet.com>: Oct 23 09:25PM +0200

On Thu, 23 Oct 2014 16:40:45 +0100
> for codeset conversions, but that this amounts to it having "NO string
> support". I think that is a very strange way of looking at it, and
> wrong.
 
I think it boils down to this, though. Pretty much the only thing
the standard lib provides is a buffer with a zero at the end as its
only non-implementation-defined property, and I think that's not even
remotely sufficient to seriously claim "string support". Never mind
that most functions taking "string" arguments go even lower level
with a raw char*...
 
 
[...]
> has to make her own conversions using fromLatin1(), fromLocal8bit()
> or use some external conversion function, and if you don't it will
> fail.
 
Of course. I never claimed QString will magically guess the encoding.
However, sorting out the encoding during construction is a much
cleaner, less error-prone way than keeping the encoding implicitly
on the side as in std::string and working on a raw byte buffer (where
strictly speaking the standard doesn't even say it's a byte).
Components making errors during construction (e.g. another library
which, like so many, just dumps local encoding into everything) are
typically revealed much sooner that way, within those components
near the error, rather than completely elsewhere where they'll make
a _real_ mess.
Conversions usually don't require external tools, BTW; the methods
in the QString interface are just conveniences for the most popular
encodings, and a much wider range is available via QTextCodec.
 
 
> the way it indexes for its other methods such as chop(), indexOf()
> and size(). It even appears to allow a QString to be modified by
> indexed 16 bit code units. If so, that is hopeless.
 
That depends on what you use it for. When using it to communicate
with the OS, e.g. file system, console, GUI etc., I have yet to see
this become an issue (which includes Asian countries), and that's one
of the biggest advantages QString has in my eyes: it's not the class
itself, it's the entire environment it's integrated in, and integrated
well. I've mentioned this several times, but it's obviously a point
everybody painstakingly avoids addressing, for fear of admitting that
pretty much every function in the standard library that goes beyond
manipulating a mere byte buffer (i.e. pretty much everything interfacing
with the system and the environment, or in other words everything you
can't just as well implement yourself in portable user space without
resorting to platform-specific extensions) can't handle UTF-8 except
by accident. And while QString may not be perfect, it's several orders
of magnitude better in these areas than anything the standard provides,
which is basically undefined behaviour for anything not restricted to
7 bits of some unqualified encoding (good luck trying to feed a
UTF-8 encoded filename into fopen() on e.g. Windows...)
 
 
> not to), but in the end I made my own iterator class for std::string
> which iterates over the string by whole code points (and dereferences
> to a 32 bit type), and in practice I found that was just as good.
 
So in conclusion:
1) you initially used a string class not in the standard library
2) you then extended the standard library for string handling
3) you need external libraries for string transcoding so you can
even get started using the "std::string has UTF-8 encoding" convention
4) you need external libraries to actually do anything with these
strings that goes beyond the simplest forms of buffer manipulation
5) you need other external libraries if you want these "strings"
to actually interface with something like the file system. Which
to me is really beyond insanity.
 
Yet people insist the standard has string support and as far as I'm
concerned that simply does not compute. We can agree to disagree.
 
 
 
Andreas
--
Dr. Andreas Dehmel Ceterum censeo
FLIPME(ed.enilno-t@nouqraz) Microsoft esse delendam
http://www.zarquon.homepage.t-online.de (Cato the Much Younger)
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Oct 23 08:56PM +0100

On 23/10/2014 20:25, Andreas Dehmel wrote:
[snip]
> Yet people insist the standard has string support and as far as I'm
> concerned that simply does not compute. We can agree to disagree.
 
QString is terrible in comparison to std::string because UTF-16 is
terrible in comparison to UTF-8. A UTF-16 QString does NOT have O(1)
random access to code points, as you don't know how many surrogate
pairs are in the string. The fact that QString "works well" with
Windows (which also erroneously embraces UTF-16) is irrelevant, as Qt
is supposed to be cross-platform.
 
Bottom line: UTF-8 and UTF-32 FTW; UTF-16 if you are mental.
 
/Flibble
Paavo Helde <myfirstname@osa.pri.ee>: Oct 23 03:13PM -0500

Andreas Dehmel <blackhole.8.zarquon42@spamgourmet.com> wrote in
> 5) you need other external libraries if you want these "strings"
> to actually interface with something like the file system. Which
> to me is really beyond insanity.
 
You are speaking as if Qt were not an external library.
 
Besides, communicating UTF-8 to the OS does not need any external
libraries on common targets. On Linux/Mac no conversion is needed (de
facto). On Windows you have the Windows SDK functions
MultiByteToWideChar() and WideCharToMultiByte() which do the work. Just
wrap them up in 3-line C++ wrapper functions and there you go.
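 
Something along these lines, say (the name is mine and the error
handling is only sketched):
 
#include <windows.h>
#include <string>
 
std::wstring Utf8ToUtf16(const std::string& s)
{
    int n = MultiByteToWideChar(CP_UTF8, 0, s.c_str(), -1, NULL, 0); // count includes the NUL
    if (n <= 0) return std::wstring();
    std::wstring w(n, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, s.c_str(), -1, &w[0], n);
    w.resize(n - 1);                                   // drop the embedded terminator
    return w;
}
 
After which something like _wfopen(Utf8ToUtf16(name).c_str(), L"rb")
works where plain fopen() would not.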
 
Besides, UTF-8 to UTF-16 or UTF-32 conversion is not actually rocket
science; it would be just a 10 or 20 line function in C++. No need to
incorporate a huge external library.
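 
The only part of the UTF-16 side that regularly trips people up is the
surrogate encoding, and even that is a handful of lines (a sketch,
assuming a valid code point is passed in):
 
#include <string>
 
void append_utf16(char32_t cp, std::u16string& out)
{
    if (cp < 0x10000) {
        out += (char16_t)cp;                        // BMP: one code unit
    } else {
        cp -= 0x10000;                              // 20 bits remain
        out += (char16_t)(0xD800 + (cp >> 10));     // high surrogate
        out += (char16_t)(0xDC00 + (cp & 0x3FF));   // low surrogate
    }
}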
 
Cheers
Paavo
Emanuel Berg <embe8573@student.uu.se>: Oct 18 02:15AM +0200

> than the concrete language constructs: you will use
> most C language constructs in your C++ programs over
> the years, but not in the way a C programmer would.)
 
Well, now you are introducing the "C programmer". Some
guy who did almost nothing but C for decades and
during this time acquired all the good habits as well
as all the hacks. And those are the exact same that
some other guy acquired from doing C (almost nothing
but C) at some other facility. And then when they
switch to C++...
 
There might certainly be some common culture and some
common methods to deal with particular problems within
a group that does one piece of technology - but it is
too schematic for me. Most people didn't do C at AT&T
porting UNIX from B...
 
I don't think you will be a worse C++ programmer from
doing C. On the contrary, I think you will be a better
C++ programmer the more you do C. And I think you'll
be a better C programmer the more you do C++. The more
you do it, the better, and those are fairly close, all
things considered.
 
But consider this: learning Latin will make it easier for you to learn
Italian - not just words, morphology, and theory terminology as in
grammar, etc., but also work-habits: how to learn a foreign language.
Even so, you will *not* be that much helped if you do tons of Latin
and then go to Italy and try to communicate or read the paper. (Or you
will be, but it won't be proportional by far, like in far-far.)
Compare this to a guy who is fluent in the C syntax and very familiar
with the libraries and stuff. OK, take this person to C++. Oh, mama,
it'll tear down the tower of Pisa - and con estilo at that.
 
--
underground experts united
Paavo Helde <myfirstname@osa.pri.ee>: Oct 18 04:55PM -0500

Emanuel Berg <embe8573@student.uu.se> wrote in
 
> ++i (instead of i++) looks backward and it is
> confusing as it doesn't serve a clear purpose (or
> any?).
 
Nope. ++i is straightforward: it increments i and nothing more. OTOH,
i++ says: increment i, but keep the original value around just in case.
This is just confusing, even (and maybe especially) when that original
value is never used. Why write down an expression preserving a special
value which is not used?
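 
The distinction is easiest to see in how the two operators are
conventionally written for a class type:
 
struct Counter {
    int value;
    Counter& operator++() {      // ++i: increment, return the new value
        ++value;
        return *this;
    }
    Counter operator++(int) {    // i++: copy, increment, return the old copy
        Counter old = *this;
        ++value;
        return old;              // this copy is made whether you use it or not
    }
};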
 
Cheers
P
JiiPee <no@notvalid.com>: Oct 18 11:25PM +0100

On 18/10/2014 22:55, Paavo Helde wrote:
> preserving a special value which is not used?
 
> Cheers
> P
 
Yes, and it is clear also: the ++ first says "increment first", and
then the i after it says to return the current (incremented) value of
i. For me it looks very clear.
 
And as for using auto: I think it's just a matter of getting used to
it (after all these years of using old 1990s C++). The old for loop
might look easier still now, but maybe that's just because we are not
used to the new way yet! It was the same with ++i and i++: I used i++
for like 15 years, then changed to ++i... it was difficult at first,
but now I am getting used to it. The brain learns new things :), but
it takes some time.
Emanuel Berg <embe8573@student.uu.se>: Oct 19 03:21AM +0200


> The context is that someone wants to learn C++ --
> surely no promotion is needed?
 
OK, if someone wants to learn C++ he or she should do
C++.
 
> mention object orientation -- why? It was popular in
> the 1990s, but today noone forces you to do it, not
> in C++ at least.
 
I'm old-school! :) But I agree you can do everything
with C++. It doesn't have to be Haskell or Lisp or SML
to be functional programming. But to do OO in C or
some SQL-like data-oriented programming in C++ (or C)
will just be a lot of unmotivated work. But I don't
see much you could do in C that you couldn't do in
C++. And that's logical, as C++ is an extension of C.
 
> safety, exceptions and the standard library: good C
> programs have to make do with arrays or home-grown
> linked lists, and manual memory management.
 
No, good programs solve a problem - the intended,
whole problem, and none other. Many programs that run
every day and have done so for ages aren't what any of
us would call good programs if we were to look at the
source. Just compile the Linux kernel and you get
hundreds of warnings from the compiler. Take a look at
the source code and you might immediately dislike the
indentation style and naming conventions and...
whatever. It doesn't matter to you unless you are writing it, in which
case of course you want a program whose source conforms with your
views and style.
 
> program, but rarely see a need for run-time
> polymorphism, abstract interfaces, design patterns
> and so on. YMMV.
 
No, that is CS schoolbook stuff that will only make
your head spin. In practice old tricks are the best.
 
--
underground experts united
Luca Risolia <luca.risolia@linux-projects.org>: Oct 19 03:15AM +0200

Il 11/10/2014 13:24, JiiPee ha scritto:
 
> What I said is that first learn proper C++ way, then after that C... so
> the other way around :). But I guess one can learn both at the same time
> when just being careful understanding that not using C ways in C++ code.
 
If you want to be able to appreciate and write elegant code, start to
learn C++ from the beginning, don't waste your time on C.
Emanuel Berg <embe8573@student.uu.se>: Oct 19 03:33AM +0200


> But it is not a proper subset, and neither is it
> "huge".
 
I used that word informally, and I don't think it will
be worthwhile to somehow do a weighted quantification
to find out, but I'm still rather confident that some
techno-historian one thousand years or so from now
will come to the conclusion that C was a huge part of
C++ both in terms of technology and as regards the
human aspects of it.
 
> ... so one would need to learn the usage of basic
> looping constructs twice
 
All the better!
 
> inside function also has unnessecary and harmful
> restrictions in C, so learning the C way first is
> actually harmful when learning C++.
 
No, the more you learn, the better. Manny Pacquiao and
the Klitschko brothers are all boxing champions. What
sport did they start with? Kickboxing! (or Muay Thai)
So, training kickboxing didn't make them worse boxers
even though in boxing you are not allowed to kick.
They went on to become boxing champions nonetheless.
Learning is never harmful, it is always positive *for
everything*!
 
Don't worry about it!
 
--
underground experts united
Emanuel Berg <embe8573@student.uu.se>: Oct 19 06:25AM +0200

> size(&str) - it is not object oriented programming.
> You could get the same effect in C by adding nothing
> more than function overloading.
 
Well then, what is your definition of OO? The one I
like the best is the coupling of data and the
algorithms that modify that same data. Which is the
case here. It doesn't really matter if that is
implemented as syntactic sugar or not. OO is all about
interfaces anyway. (Well, all good programming is, I
would argue.)
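 
In other words (Str is a made-up type just for illustration), these
two spell exactly the same coupling:
 
#include <cstddef>
 
struct Str {
    std::size_t len;
    std::size_t size() const { return len; }       // the function travels with the data
};
 
std::size_t size(const Str* s) { return s->len; }  // same algorithm as a free function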
 
--
underground experts united
Robert Wessel <robertwessel2@yahoo.com>: Oct 17 11:10PM -0500

On 18 Oct 2014 02:05:40 GMT, ram@zedat.fu-berlin.de (Stefan Ram)
wrote:
 
>sizeoffirst );
 
> C++ might not guarantee this, but your implementation might
> handle it this way.
 
 
Unfortunately realloc() is not required to shrink allocations in
place. It may allocate the new smaller area, copy over the data that
will fit, deallocate the original area, and return the pointer to the
new area.
 
If the application is depending on addresses in the allocated block,
this will likely break them.
 
OTOH, most implementations do (usually) shorten blocks in-place, but
they are not required to, and it would not surprise me at all if they
do not in all cases - for example, many allocators go to the OS for
very large allocations; if you realloc'd a multi-megabyte allocation
down to a few bytes, I could easily see that allocation being moved
into the "normal" heap.
 
That being said, there may be implementation-dependent ways of doing
that. For example, on Windows you could VirtualAlloc() a region (this
has page granularity, but that would seem not to be an issue for the
OP's problem), and then VirtualFree() specific pages from within that
allocation later. Alternatively, the OS heap functions might be
enough: HeapReAlloc() allows you to specify
HEAP_REALLOC_IN_PLACE_ONLY, which will cause the function to fail if
it cannot do the reallocation in-place (which would seem to be a
reasonable fallback for the OP's problem - things continue, just the
shrink didn't occur).
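 
A sketch of that fallback (the wrapper name is mine; it assumes p was
allocated with HeapAlloc() from the process heap):
 
#include <windows.h>
 
void* shrink_in_place(void* p, SIZE_T newSize)
{
    void* q = HeapReAlloc(GetProcessHeap(), HEAP_REALLOC_IN_PLACE_ONLY, p, newSize);
    return q ? q : p;  // on failure keep the original block - things continue, just no shrink
}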
BGB <cr88192@hotmail.com>: Oct 18 09:11AM -0500

On 10/17/2014 9:05 PM, Stefan Ram wrote:
> sizeoffirst );
 
> C++ might not guarantee this, but your implementation might
> handle it this way.
 
expecting resizing in-place, and subsequent mallocs to allocate said
adjacent memory, is *very* unsafe and non-portable.
 
 
if applicable, it is possible to just treat them as a single big
block, for example:
 
    void *pBoth, *pFirst, *pSecond;
    pBoth = malloc(szFirst + szSecond);
    pFirst = pBoth;
    pSecond = (void *)((char *)pBoth + szFirst); //*
 
*: while GCC and friends allow pointer arithmetic on 'void *', this is
an extension, and it is better to cast to/from 'char *' or similar for
the arithmetic.
 
 
however, it is necessary to free both at the same time in this case.
 
 
as a practical example, this is commonly done with VBOs in OpenGL, since
generally you have to do everything in a given DrawArrays or
DrawArrayElements call with a single VBO bound (which is essentially a
memory buffer holding several parallel arrays).
 
 
or, in a particularly ugly example:
dumping most of the terrain geometry into a single massive set of
parallel arrays (generally holding around 1M vertices / 340k tris in
stats). As the player moves, the contents of these arrays are repacked
and uploaded to GL, mostly so that the terrain rendering can be cut
down to a more modest number of draw-calls.
 
so, say, one allocates 64MB for their temporary vertex arrays (using
mostly packed formats), fills them up, then copies some memory to
"collapse" any unused space, then sends the whole mess to GL.
Melzzzzz <mel@zzzzz.com>: Oct 19 02:59AM +0200

On Sat, 18 Oct 2014 17:17:00 -0700 (PDT)
 
> Seems like 'memory collapse' would do the job, or some way to force
> allocation of two memory blocks to be exactly next to each other in
> the first place.
 
Try to allocate with mmap directly.
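 
For example (a Linux-flavoured sketch: sizes must be page multiples,
and MAP_ANONYMOUS is not strictly POSIX, though BSD/macOS have
MAP_ANON):
 
#include <sys/mman.h>
#include <stddef.h>
 
int main()
{
    size_t total = (size_t)64 << 20, keep = (size_t)16 << 20;
    void* p = mmap(NULL, total, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    /* ... fill the block; later only the first `keep` bytes are needed ... */
    munmap((char*)p + keep, total - keep);  /* tail goes back to the OS, p stays valid */
    return 0;
}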
 
 
 
--
Manjaro all the way!
http://manjaro.org/
Robert Hutchings <rm.hutchings@gmail.com>: Oct 18 09:32AM -0500

Given this code:
 
#include <iostream>
#include <set>
 
using namespace std;
 
// ---------------- Observer interface -----------------
class Observer {
public:
    virtual void Notify() = 0;
};
 
// ---------------- Observable object -------------------
class Observable {
    static Observable* instance;
    set<Observer*> observers;
    Observable() { }
public:
    static Observable* GetInstance();
    void AddObserver(Observer& o);
    void RemoveObserver(Observer& o);
    void NotifyObservers();
    void Trigger();
};
 
Observable* Observable::instance = NULL;
 
Observable* Observable::GetInstance()
{
    if (instance == NULL) {
        instance = new Observable();
    }
    return instance;
}
 
void Observable::AddObserver(Observer& o)
{
    observers.insert(&o);
}
 
void Observable::RemoveObserver(Observer& o)
{
    observers.erase(&o);
}
 
void Observable::NotifyObservers()
{
    set<Observer*>::iterator itr;
    for (itr = observers.begin(); itr != observers.end(); itr++)
        (*itr)->Notify();
}
 
// TEST METHOD TO TRIGGER
// IN THE REAL SCENARIO THIS IS NOT REQUIRED
void Observable::Trigger()
{
    NotifyObservers();
}
 
// ------ Concrete class interested in notifications ---
class MyClass : public Observer {
 
    Observable* observable;
 
public:
    MyClass() {
        observable = Observable::GetInstance();
        observable->AddObserver(*this);
    }
 
    ~MyClass() {
        observable->RemoveObserver(*this);
    }
 
    void Notify() {
        cout << "Received a change event" << endl;
    }
};
 
int main()
{
    Observable* observable = Observable::GetInstance();
    MyClass* obj = new MyClass();
    observable->Trigger();
}
 
 
 
What if I don't want to use a set? What is the advantage of using
pointers with "new", as opposed to NOT using pointers?
Paavo Helde <myfirstname@osa.pri.ee>: Oct 18 09:49AM -0500

Robert Hutchings <rm.hutchings@gmail.com> wrote in news:m1ttlj$3m3$1
> }
 
> What if I don't to use a SET? What is the advantage of using pointers
> with "new", as apposed to NOT using pointers?
 
Which pointers exactly? You have a lot of them here.
 
"MyClass* obj = new MyClass()" is not needed (and you have a memory
leak caused by it); you could equally well do
 
MyClass obj;
 
Dynamic allocation is needed only if the lifetime of the object is not
bound to a single scope.
 
You also have a lot of pointers to Observable. Most of them could be
replaced by a reference for a bit better syntax.
 
You don't need the member variable MyClass::observable, instead you can
call Observable::GetInstance() whenever needed.
 
But inside the Observable, you need a std::set or some other container of
pointers though (because you cannot put references into a container, as
they are not objects, unlike pointers).
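 
Putting those suggestions together, the test code could shrink to
something like:
 
int main()
{
    MyClass obj;                           // registers itself, unregisters in ~MyClass
    Observable::GetInstance()->Trigger();  // no new, no delete, no leak
}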
 
 
hth
Paavo
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Oct 18 09:45PM +0100

On 18/10/2014 21:12, Öö Tiib wrote:
 
> his 'neosigslot' some time ago here http://i42.co.uk/stuff/neosigslot.htm
> Leigh is trolling here as "Mr Flibble", but don't worry, C++ he knows well
> and can write finely.
 
Don't break the fourth wall!
 
/Flibble
 
P.S. Sausages.
Christopher Pisz <nospam@notanaddress.com>: Oct 23 11:33AM -0500

> Ebenezer Enterprises - Unless the L-rd builds the house,
> they labor in vain that build it. Psalms 127:1
 
> http://webEbenezer.net
 
This is an interesting video. A coworker raised a point, though: if
you're on an everyday desktop system where 100s of processes are
running, what are the chances you are ever going to be in L1 or L2
anyway, unless your process has the highest priority?
 
His advice to _never_ use a linked list is a little hard to swallow. If
I am constantly inserting and don't know beforehand how much memory I
need, I find it hard to believe that I could get better performance
using a vector, since the whole vector has to move because of its
guarantee to be contiguous.
 
I'd be interested in seeing some test code to profile that shows
results conflicting with the way we usually think.
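 
For a starting point, here is the kind of experiment that advice
usually comes from: keep N random ints in sorted order, inserting each
at its position (names and the value of n are mine; measure on your
own hardware before believing either side):
 
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <list>
#include <vector>
 
template <class Container>
long long timed_sorted_inserts(int n)
{
    using namespace std::chrono;
    std::srand(42);                             // same input sequence for both containers
    Container c;
    auto t0 = steady_clock::now();
    for (int i = 0; i < n; ++i) {
        int v = std::rand();
        auto it = c.begin();
        while (it != c.end() && *it < v) ++it;  // linear search in both cases
        c.insert(it, v);                        // vector shifts elements; list relinks nodes
    }
    return duration_cast<milliseconds>(steady_clock::now() - t0).count();
}
 
int main()
{
    const int n = 20000;
    std::printf("vector: %lld ms\n", timed_sorted_inserts<std::vector<int> >(n));
    std::printf("list:   %lld ms\n", timed_sorted_inserts<std::list<int> >(n));
}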
Bo Persson <bop@gmb.dk>: Oct 23 07:01PM +0200

On 2014-10-23 18:33, Christopher Pisz wrote:
> need, I find it hard to believe that I could get better performance
> using a vector, since the whole vector has to move because of its
> guarantee to be contiguous.
 
You don't need to know *exactly* how much memory you will need, even a
ballpark figure will help.
 
After, say, a v.reserve(1000000) the vector will hardly ever reallocate.
 
 
Bo Persson
Paavo Helde <myfirstname@osa.pri.ee>: Oct 23 12:01PM -0500

Christopher Pisz <nospam@notanaddress.com> wrote in
> you're on an every day desktop system where 100s of processes are
> running, what are the chances you are ever going to be in L1 or L2
> anyway unless your process has the highest priority?
 
Thankfully enough, most of these 100 processes are sleeping and waiting
for something like a mouse click. Also, your everyday desktop system
already has multiple cores (together with multiple L1 and to some extent
also L2 and L3 caches), so it can run several processes truly in parallel.
 
We have done lots of profiling with memory-intensive applications and the
effect of caches is clearly visible. For example, when adding more
calculation threads one-by-one, the overall performance goes up
linearly up to 6 threads, then starts to decline and levels off to a
plateau, although the machine has 12 physical cores (and 24 logical
ones with HT). Reason? There were only 6 L3 caches. If the caches were
of no use, nothing spectacular would happen at 6 threads.
 

> of its guarantee to be contiguous.
 
> I'd be interested in seeing some test code to profile that show
> results that conflict with the way we usually think.
 
It all depends on the task. std::list is in general indeed pretty
useless; std::deque or std::vector are often a better fit. But it all
depends on circumstances - one cannot apply a generic wisdom like
"never use something!" here. If some data structure were really not
useful at all for anything, it would not be in the standard! And
performance is only one of the goals and must be balanced against all
the others.
 
Cheers
Paavo
Robert Wessel <robertwessel2@yahoo.com>: Oct 17 11:14PM -0500

On Fri, 17 Oct 2014 17:44:11 +0000 (UTC), drew@furrfu.invalid (Drew
Lawson) wrote:
 
>something in much less time than it would take to read a 800 character
>message that is drowning in angle brackets.
 
>At least that is my experience with Visual Studio and gcc.
 
 
I've often thought some better formatting of that sort of message would
make them considerably more readable. Just a decent indenting scheme
to show the nesting of references ought to go a long way. One could
even imagine a fancy graphical representation, although there are
(obvious) reasons for wanting to keep the error messages reasonably
pure text.
MikeCopeland <mrc2323@cox.net>: Oct 17 10:03PM -0700

> This won't compile. I first tried that, as I use it with other STL
> containers. However, it doesn't compile for my std::vector. (Perhaps I
> have some other error nearby...<sigh>)
 
Yes, I had another problem: in my actual code I had declared dIter as
a "const_iterator". This is (one of) the causes of the C2678 error I
was getting. Thus, even though I tried using the above code, with my
particular declaration the compile still failed...
 