Wednesday, September 17, 2014

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

comp.lang.c++@googlegroups.com Google Groups
Unsure why you received this message? You previously subscribed to digests from this group, but we haven't been sending them for a while. We fixed that, but if you don't want to get these messages, send an email to comp.lang.c+++unsubscribe@googlegroups.com.
Lynn McGuire <lmc@winsim.com>: Sep 17 11:45AM -0500

"The C++14 Standard: What You Need to Know"
http://www.drdobbs.com/cpp/the-c14-standard-what-you-need-to-know/240169034
 
Nice, thorough article with good explanation on auto.
 
I miss Dr. Dobbs magazine.
 
Lynn
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 17 10:05AM -0700

On Wednesday, September 17, 2014 12:45:51 PM UTC-4, Lynn McGuire wrote:
 
> Nice, thorough article with good explanation on auto.
> I miss Dr. Dobbs magazine.
> Lynn
 
A quote from the article:
---[ Begin ]---
It is obvious to both me and the compiler that the function is returning a double. So in C++14, I can define the function return type as auto instead of double:
 
auto getvalue() {
return 1.4;
}
---[ End ]---
 
I think this is potentially very dangerous. Suppose I meant to type 1.4f
but forgot the f? This will not catch that error, and it may only generate
a warning elsewhere, which another developer sees as "oh, I just need to
cast it to double to get rid of that ridiculous warning," inserts a
new bug, and never corrects the original flaw which would've actually
fixed the first bug and prevented the second.
 
I believe strongly in an explicit conveyance of developer information
through syntax. For example, getvalue() should be declared thusly:
 
double getvalue(void) {
return 1.4;
}
 
In this case the return type was explicitly given, and you explicitly
indicated that you, as the developer, purposefully had no parameters,
rather than leaving a potential question here: Did the developer
intend to pass any parameters? When "void" is used, there is no
ambiguity, as it's much harder to accidentally type "void" than it is
to accidentally leave off parameters in an otherwise valid syntax
such as "double getvalue()".
 
I also think we should do away with .h files when they're not necessary,
and instead provide something like:
 
#includeh "myclass.cpp"
 
Which is interpreted by the compiler as "read myclass.cpp, but only
derive its information to create what we need to access it, as would
otherwise be provided by a header."
 
In a similar manner, an IDE should be able to divine this information
for you and present a read-only, on-the-fly created header file that
can be logically wielded, examined, printed, etc., yet stems
directly back to the actual cpp source code rather than existing as a
separate file. It also saves disk space. (LOL had to throw that in there :-))
 
Oh I can't wait to get my compiler written. :-) All of these life and
living requirements make it painful to pause as the months roll by. :-)
 
Best regards,
Rick C. Hodgin
Victor Bazarov <v.bazarov@comcast.invalid>: Sep 17 01:41PM -0400

On 9/17/2014 1:05 PM, Rick C. Hodgin wrote:
> A quote from the article:
> ---[ Begin ]---
> It is obvious to both me and the compiler that the function is
returning a double. So in C++14, I can define the function return type
as auto instead of double:
> ---[ End ]---
 
> I think this is potentially very dangerous. Suppose I meant to type 1.4f
> but forgot the f? This will not catch that error, [..]
 
Even a very good compiler is not the panacea against the programmer's
stupidity... <ahem> ...forgetfulness.
 
It's not an error the compiler is supposed to catch.
 
int foo(int* pi)
{
return *pi;
}
 
int main()
{
return foo(nullptr);
}
 
I "forgot" that the function 'foo' dereferences the pointer without
checking, so not until I run my program (and only if I'm lucky and the
execution environment provides the mechanism for catching that type of
situation) shall I know that I've blundered.
 
There are some tools that can perform static code analysis and point out
the potential trouble in such a case. Omitting a suffix in a literal is
much more difficult to diagnose as unintentional.
 
If your compiler when you finish it can "do what I mean, not what I
write", let us know.
 
V
--
I do not respond to top-posted replies, please don't ask
Paavo Helde <myfirstname@osa.pri.ee>: Sep 17 02:04PM -0500

"Rick C. Hodgin" <rick.c.hodgin@gmail.com> wrote in
> ---[ End ]---
 
> I think this is potentially very dangerous. Suppose I meant to type
> 1.4f but forgot the f
 
This feature is not obligatory; one is not forced to use it. Most shops
have coding guidelines which ought to spell out when and how 'auto' is
acceptable.
 
> necessary, and instead provide something like:
 
> #includeh "myclass.cpp"
 
> Oh I can't wait to get my compiler written. :-)
 
The module system support for C++ is underway: http://www.open-
std.org/jtc1/sc22/wg21/docs/papers/2014/n4047.pdf
 
It's long overdue, but will still probably be finished before your
compiler is ready ;-)
 
Cheers
Paavo
Christopher Pisz <nospam@notanaddress.com>: Sep 16 07:00PM -0500

On 9/16/2014 5:30 PM, David Brown wrote:
 
> It's like looking at a rainbow. You can say that one bit is red, and
> another bit is orange, but you can't look at the blur between them and
> say /that/ bit there is the first red colour.
 
So, is it your opinion that there cannot be any living thing that
is...we'll make up our own metric here...1e-999999 evolutionary units
ahead or behind the rest of its scientific classification?
 
While it occurring very slowly would justify tiny and conveniently
unobservable differences, it does not explain the synchronization.
Why not have one group of poor suckers somewhere that didn't keep up
over the last million years? Sure the strong survive, but the weak don't
necessarily die. Look at our cat and dog population :P
 
Which is another point. If we all evolved from some ancient single
celled organism, which was still brought to "life" by some unknown
means, how did we diverge so greatly into so many forms of life? What
caused this divergence and why did we conveniently end up on top?
 
Then there is the more philosophical question of consciousness.
If our sense of self and our thoughts are nothing but chemical reactions
that just happened to come together perfectly out of sheer random
chance, then why is it that my consciousness is in this body and yours in
your body? It just happened by chance? I am just trapped here in this
pile of mechanical flesh and chemical reactions out of sheer randomness?
What sparked my mental self to come into being in this machinery?
Furthermore, wouldn't the science-fictional idea of transferring my
consciousness into another flesh sack be possible? Or perhaps something
less complicated like that of a kitten. But we cannot seem to do that.
 
We are also talking about more than luck here. To have all of these
things come into place just perfectly out of sheer random luck. Even
over billions of years, how many times does the universe win the
lottery? And these laws, the physics, the math - why do the laws make
such sense? How is it that we have just enough gravity, just enough sun
light, just enough of everything, at just the right time? If we can win
the lottery 1e999999999999... times, how can it be any more far fetched
to think some intelligence rigged the lottery?
David Brown <david.brown@hesbynett.no>: Sep 17 02:04AM +0200

On 16/09/14 20:27, Mr Flibble wrote:
 
> I have never used "edit and continue" and probably never will: it just
> doesn't seem that useful to me.
 
Edit and continue requires (as I understand it) that you use completely
unoptimised code. I am not sure I get the point either, assuming that's
true - if code size and speed matters so little that you never use it,
then why not use a real dynamic language like Python? That certainly
makes more sense for many gui's.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 16 05:22PM -0700

On Tuesday, September 16, 2014 8:04:27 PM UTC-4, David Brown wrote:
> Edit and continue requires (as I understand it) that you use completely
> unoptimised code.
 
It does not require it.
 
> true - if code size and speed matters so little that you never use it,
> then why not use a real dynamic language like Python? That certainly
> makes more sense for many gui's.
 
The purpose is development. You don't use edit-and-continue on production
code. You use it while you're working on algorithms. You can make changes
immediately without restarting and getting back to the point where you
just were when the code change was needed.
 
With edit-and-continue you are in the debugger, you see where the error
is in your code, you change it, hit apply changes (or press a key which
runs or steps into executable code again), and you're continuing.
 
In the cases of logic and new code it almost always compiles without
error. You can add new variables, delete existing ones, etc. There are
several things it won't do well though ... but in those cases you
simply fall back on the model that non-edit-and-continue compilers
use today (you stop debugging, make the change, recompile, and restart
the app and get back to the point you needed to debug).
 
It's a developer man-hour time saver. It makes debugging algorithms
faster because you don't have to keep restarting.
 
I honestly believe many developers don't recognize the value because
they haven't had it available to them before. Were it available, I
think many would see overnight the improvement it offers.
 
For example, Apple thought enough about the facility to add it to
their tools back in the mid/late 2000s. Microsoft has had it in every
tool since the late 90s.
 
It is a valuable asset. It just hasn't been typically available in
general, and especially to non-Windows users, in C/C++. As such,
it's a tool where most people don't have experience or reference.
 
Best regards,
Rick C. Hodgin
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 16 05:39PM -0700

On Tuesday, September 16, 2014 8:04:27 PM UTC-4, David Brown wrote:
> Edit and continue requires (as I understand it) that you use completely
> unoptimised code.
 
I will add this comment about using optimized code ... the optimizing
compiler will often merge local variables, do away with them completely
as they become register variables, remove entire lines of source code,
and more. As such, the ability to perform general debugging is diminished,
let alone doing edit-and-continue changes.

It does work, however.
 
There have been many times I've done that by accident, not
understanding why my code wasn't breaking on a particular source code
line, only to put a hard "_asm int 3" in there (an x86
opcode for a hard debugger break), single-step through, and
realize it bypasses that source code line. Then I look up and see that
I'm compiling in "Release" mode rather than "Debug" mode. Switch,
recompile, and everything is back to wonderfulness. :-)
 
Best regards,
Rick C. Hodgin
Lynn McGuire <lmc@winsim.com>: Sep 16 08:32PM -0500

On 9/16/2014 7:04 PM, David Brown wrote:
 
> Edit and continue requires (as I understand it) that you use completely unoptimised code. I am not sure I get the point either,
> assuming that's true - if code size and speed matters so little that you never use it, then why not use a real dynamic language like
> Python? That certainly makes more sense for many gui's.
 
I have measured the speed increase between the Smalltalk
version of our app and the unoptimized C++ version. The
unoptimized C++ version is 100 times faster than the
Smalltalk version. 100X!
 
And the C++ version does not run out of pointers. Our
Smalltalk version ran out of pointers all the time.
 
And the C++ version does not spend precious seconds of
time garbage collecting while the user looks at the
hourglass. Our Smalltalk version did.
 
And the C++ version is type safe. The Smalltalk version,
well, lets hope that you've always got the method for
that object.
 
I cannot imagine that Python is any faster than Smalltalk.
 
Lynn
Ian Collins <ian-news@hotmail.com>: Sep 17 06:01PM +1200

Scott Lurndal wrote:
>> which shows variables related to the nearby source lines, and much
>> more.
 
> Not applicable to embedded development with a non-windows target.
 
Eh? I use NetBeans all the time for debugging on embedded targets.
 
--
Ian Collins
David Brown <david.brown@hesbynett.no>: Sep 17 11:14AM +0200

On 17/09/14 02:00, Christopher Pisz wrote:
 
> So, is it your opinion that there cannot be any living thing that
> is...we'll make up our own metric here...1e-999999 evolutionary units
> ahead or behind the rest of its scientific classification?
 
Yes, that is correct.
 
Evolution is not a linear progression. Gradual changes in genetics are
spread across the population, and mixed in successive generations.
 
Occasionally, there is a "jump" due to a serious genetic mutation that
happens to be beneficial. (Most evolution in sexual species is due to
mixing of genes, rather than major mutations - mutations are uncommon,
and most have either no serious effect or lead to an early death.) You
could pick one such genetic mutation and say that the first person to
have that, was the first human. (A prime contender would be the
mutation that allowed us to produce a much wider range of vocal sounds,
leading to verbal communication - but we already had complex
communication using simpler sounds and sign language.) Any such choice
of the definition of "human" is arbitrary, however.
 
Another possible way to define a "first human" would be to look for
historic bottlenecks and minimums in the population of our ancestors.
On several occasions, they were nearly extinct - with populations down
to a few thousand or even mere hundreds. And - due to luck - there were
points during such periods that all future generations descended from
a single male or a single female. Approximate dates for such cases have
been calculated based on genetic mutation rates and the spread of genes
in modern humans - a so-called "Mitochondrial Eve" and "Y-chromosomal
Adam". But while these were common ancestors for future generations,
they were by no means the /only/ man or woman at the time, there could
have been many such cases throughout history (we can only use genetic
tracing back to the most recent male and female), they probably lived
something around 100,000 years apart, and they were not significantly
different from their parents or their children.
 
> Why not have one group of poor suckers somewhere that didn't keep up
> over the last million years? Sure the strong survive, but the weak don't
> necessarily die. Look at our cat and dog population :P
 
Sorry - I can't figure out what you are trying to say here.
 
(Though I would say that evolution is not "survival of the fittest" - it
is "survival of the ones most likely to produce viable offspring that
continue the line". And there is plenty of luck involved too - it
doesn't matter how "fit" your group is if you happen to get hit by a
volcano.)
 
> celled organism, which was still brought to "life" by some unknown
> means, how did we diverge so greatly into so many forms of life? What
> caused this divergence and why did we conveniently end up on top?
 
Evolution requires four basic things - reproduction, a mechanism for
encoding characteristics (DNA in most modern lifeforms, RNA in simpler
ones, and perhaps other systems before RNA evolved), a mechanism for
modifying these characteristics between generations, and a mechanism for
selecting which characteristics are passed on. Once these are
established within appropriate bounds, evolution is inevitable.
 
(The development of the first protocells is another issue, called
"biogenesis". We could go into the science of that too, but perhaps we
should leave it to keep our focus here.)
 
There are many ways in which genes can be modified, mutated, mixed,
swapped, added, deleted, and replicated. Some ways are dramatic and
lead to major changes, most are gradual.
 
Why did humans end up on top? Probably at some point in our past, women
started taking a fancy to smart men rather than just men with big
muscles, and evolutionary pressure made us smarter (the key is not the
chance of survival, but the chance of having kids).
 
> Furthermore, wouldn't the science-fictional idea of transferring my
> consciousness into another flesh sack be possible? Or perhaps something
> less complicated like that of a kitten. But we cannot seem to do that.
 
"Consciousness" is what is known as an "emergent" property. It is not
something with a specific place, but is a feature of a system as a
whole. You can liken it to culture - you cannot look at a country and
point to its "culture", but you can clearly see that one country has a
different culture from another, and that different aspects of the
culture are "concentrated" in different physical places.
 
(In case you get any wrong ideas here, "consciousness" is not a uniquely
human feature, though we have a significantly more advanced and complex
consciousness than any other animal. It is certainly not an
all-or-nothing concept.)
 
> things come into place just perfectly out of sheer random luck. Even
> over billions of years, how many times does the universe win the
> lottery?
 
Often enough, it seems.
 
> light, just enough of everything, at just the right time. If we can win
> the lottery 1e999999999999... times, how can it be any more far fetched
> to think some intelligence rigged the lottery?
 
Why are the laws of physics the way they are? Why are the universal
constants so neatly tuned to make the universe work so well? These are
big questions, and for the most part, the scientific answer is "we don't
know yet". There are plenty of ideas and theories. One corollary of
string theory (which has in no way been proven) is that there is a vast
number of universes with slightly different universal constants - we
live in one that is "just right" because in most of the others, life
could not exist. This is known as the "anthropic principle" -
scientists consider it a weak argument, but often better than nothing.
scott@slp53.sl.home (Scott Lurndal): Sep 17 01:41PM

>. Just goes to
>show that I'm an old fashioned guy ... not just in that I like old
>John Deere tractors
 
I'll take a Farmall super M over any JD anyday :-)
drew@furrfu.invalid (Drew Lawson): Sep 17 02:00PM

In article <lvaivr$t9n$1@dont-email.me>
 
>So, is it your opinion that there cannot be any living thing that
>is...we'll make up our own metric here...1e-999999 evolutionary units
>ahead or behind the rest of its scientific classification?
 
Just to pick at a common rhetorical nit -- evolution is not forward
or backward, it is just change. The mutation that improves
survivability in one situation might reduce it in others.
 
>Why not have one group of poor suckers somewhere that didn't keep up
>over the last million years? Sure the strong survive, but the weak don't
>necessarily die. Look at our cat and dog population :P
 
Ever hear of the coelacanth?
 
But I suspect that you mean why isn't there a pocket of pseudo-humans
somewhere. My view is that it has the same reason as the lack of
ground sloths and giant kangaroos -- intelligent, tool wielding
humans are good at slaughtering things.
 
>celled organism, which was still brought to "life" by some unknown
>means, how did we diverge so greatly into so many forms of life? What
>caused this divergence and why did we conveniently end up on top?
 
Time. Get a survivable mutation every generation or three, and
repeat that for billions of years.
 
--
Drew Lawson | "Look! A big distracting thing!"
| -- Crow T. Robot.
|
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 17 08:26AM -0700

On Wednesday, September 17, 2014 9:42:00 AM UTC-4, Scott Lurndal wrote:
> >show that I'm an old fashioned guy ... not just in that I like old
> >John Deere tractors
 
> I'll take a Farmall super M over any JD anyday :-)
 
I remember seeing a tractor show on RFD-TV where the Farmall guys
and JD guys were going back and forth with jibes, disputing over
their preference. A JD guy said something like, "Trucks are red.
Tractors are green. Everything else is just in-between."
 
:-)
 
Actually I like the old Farmalls too, along with most all of the
old style tractors. They were for communities, able to farm only
a few hundred acres each max, meaning more families were tied to
farming, to the communities, less corporate concerns, more people
concerns.
 
I like to watch the Amish plow their fields. Many here in Indiana
still use horse and pull two to four bottom plows.
 
What I really like though are the old steam engine tractors, such
as the Oil Pulls. Just awesome power in those otherwise relatively
silent packages.
 
Here's five of them pulling a 66-bottom plow!!
That's 13.2 bottoms each! :-)
http://www.youtube.com/watch?v=vLNQyJqcE5I
 
Here's a single 120hp (old hp rating)
Peerless Z3 pulling a 20-bottom plow!!
http://www.youtube.com/watch?v=WNS38av6LcQ
 
By comparison, here's a huge modern tractor, 95,000 lbs,
1000 hp (new hp rating) V16 Detroit diesel (two stroke
diesel) pulling a 15-bottom plow:
http://www.youtube.com/watch?v=L5N9t_z7c6o
 
Best regards,
Rick C. Hodgin
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Sep 17 05:39PM +0100

On 16/09/2014 23:20, David Brown wrote:
 
> Again, your logic is flawed. If Adam didn't exist, then Abraham and
> Moses cannot be descended from him - but that in no way implies that
> Abraham and Moses did not exist.
 
My logic is not flawed: if Adam didn't exist then Adam's son didn't
exist either, and so on for all descendants. If Moses of the OT was
*based* on a real person then they are two entirely different people. I
will even go a step further: humans *of the bible* never existed as
humans *of the bible* are all descended from Adam. The OT is a complete
fiction and it is quite possible to *base* a fictional character on a
real person.
 
 
> The non-existence of a first man falsifies /some parts/ of the OT - but
> certainly not all of it.
 
It falsifies all of it; none of Adam's descendants as described in the OT
existed.
 
/Flibble
Dombo <dombo@disposable.invalid>: Sep 17 09:05PM +0200

Op 17-Sep-14 2:22, Rick C. Hodgin schreef:
 
 
> I honestly believe many developers don't recognize the value because
> they haven't had it available to them before. Were it available, I
> think many would see overnight the improvement it offers.
 
The value of edit-and-continue is strongly related to your work flow. If
you put most of your thought into how you are going to tackle a
particular problem before you start writing code, you don't really need
edit-and-continue and will rarely need a debugger. If your work flow is
trial-and-error style coding, I can understand why shortening the
build-test-analyze-modify cycle with the edit-and-continue feature is
valuable to you.
 
Though I have worked with IDEs that support edit-and-continue for over
15 years, I have as of yet never had a need for it. My experience is that
thinking things through in advance leads to better results in less time
than the trial-and-error development style.
Robert Hutchings <rm.hutchings@gmail.com>: Sep 17 11:59AM -0700

My manager gave me this code today, and I am not sure I understand the Patterns code. I know ABOUT design patterns (GoF, etc.), but has anyone seen code like this before?
 
/*********************************************************************************
Abstract Instrument class
This class is purely abstract and defines the interface of concrete instruments
which will be derived from this one. It implements the Observable interface.
**********************************************************************************/
class Instrument : public Patterns::Observer, public Patterns::Observable
{
public:
    Instrument(const std::string& isinCode, const std::string& description)
        : NPV_(0.0), isExpired_(false), isinCode_(isinCode),
          description_(description), calculated(false) {}
    virtual ~Instrument() {}
    // inline definitions
    // returns the ISIN code of the instrument, when given.
    inline std::string isinCode() const {
        return isinCode_;
    }
    // returns a brief textual description of the instrument.
    inline std::string description() const {
        return description_;
    }
    // returns the net present value of the instrument.
    inline double NPV() const {
        calculate();
        return (isExpired_ ? 0.0 : NPV_);
    }
    // returns whether the instrument is still tradable.
    inline bool isExpired() const {
        calculate();
        return isExpired_;
    }
    // updates dependent instrument classes
    inline void update() {
        calculated = false;
        notifyObservers();
    }
protected:
    // lazy evaluation: derived classes compute NPV_ and isExpired_ here.
    // (These members are mutable so the const accessors can trigger
    // recalculation; their declarations were missing from the snippet as
    // posted, as was the closing semicolon.)
    virtual void calculate() const = 0;
    mutable double NPV_;
    mutable bool isExpired_;
    std::string isinCode_;
    std::string description_;
    mutable bool calculated;
};
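For reference, here is a guess at the minimal Patterns::Observer / Patterns::Observable interfaces such code usually assumes (the member names and signatures are conjecture in the style of QuantLib-like designs, not taken from the posted code):

```cpp
#include <set>

namespace Patterns {

// something that wants to be told when an Observable changes
class Observer {
public:
    virtual ~Observer() {}
    virtual void update() = 0;   // called on each notification
};

// something that Observers can register with
class Observable {
public:
    virtual ~Observable() {}
    void registerObserver(Observer* o)   { observers_.insert(o); }
    void unregisterObserver(Observer* o) { observers_.erase(o); }
protected:
    // walk the registered observers and notify each one
    void notifyObservers() {
        for (std::set<Observer*>::iterator it = observers_.begin();
             it != observers_.end(); ++it)
            (*it)->update();
    }
private:
    std::set<Observer*> observers_;
};

} // namespace Patterns
```

With interfaces like these, Instrument acts as both: it observes its market inputs (update() invalidates the cached NPV) and is itself observable by anything pricing off it.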
Lynn McGuire <lmc@winsim.com>: Sep 17 11:05AM -0500

"Why Java and C++ developers should sleep well at night"
http://www.itworld.com/big-data/436286/why-java-and-c-developers-should-sleep-well-night
 
Nice! And nice picture of a dude sleeping under his
desk.
 
Yes, the programming world is continuing to fragment
but C / C++ are needful for large or compute time
intensive software packages.
 
Lynn
woodbrian77@gmail.com: Sep 17 10:49AM -0700

On Wednesday, September 17, 2014 11:05:28 AM UTC-5, Lynn McGuire wrote:
 
> http://www.itworld.com/big-data/436286/why-java-and-c-developers-should-sleep-well-night
 
> Nice! And nice picture of a dude sleeping under his
> desk.
 
I sometimes sleep on the floor of my office too.
 
> Yes, the programming world is continuing to fragment
> but C / C++ are needful for large or compute time
> intensive software packages.
 
I'm not surprised C++ is doing well, but can't explain
Java's doing well.
 
Here's an example of an interesting development in
C++ realm:
 
http://bannalia.blogspot.com/2014/05/fast-polymorphic-collections-with.html
 
I don't think it has been mentioned here yet.
 
 
 
Brian
Ebenezer Enterprises - So far G-d has helped us.
http://webEbenezer.net
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Sep 17 11:23AM -0700

> I'm not surprised C++ is doing well, but can't explain
> Java's doing well.
 
I've had this thought... this ranking is based on search engine queries.
Don't people go to search engines when they have a problem? And wouldn't
this stat reflect more how hard it is to work with any given language
than with its overall popularity? :-)
 
C is tops -- lots of issues with it, needs lots of online help.
 
Java is near the top, but dropping -- used to be much harder, but
with so many open source projects now it's easier to find examples
at GitHub.
 
And so on... :-)
 
I can say this ... I don't work with C#, but I've gone to search
engines many times to look up something in C# syntax just to see
what some code example I was looking at did. Am I somehow
inadvertently giving C# a boost even though I don't use it?
I sure hope not. :-)
 
Best regards,
Rick C. Hodgin
Jorgen Grahn <grahn+nntp@snipabacken.se>: Sep 17 12:42PM

On Mon, 2014-09-15, Victor Bazarov wrote:
...
> "Down with newsgroup bullying!!!" -- I can just see CNN spending hours a
> day chewing fat on that... <sigh>
 
Replace "newsgroup" with "social media", and I'm pretty sure they are
doing just that. Here in .se they have (to confuse things) even
appropriated the term "troll" for those bullies ...
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
arnuld <sunrise@invalid.address>: Sep 17 05:38AM

WANT: print stack elements
PROBLEM: No way to print
 
This is the code from section 12.1.2 of Nicolai's 2nd edition:
 
#include <iostream>
#include <stack>
 
int main()
{
std::stack<int> st;

st.push(10);
st.push(11);
st.push(12);
 
std::cout << ":STACK:" << std::endl;
std::cout << "size = " << st.size() << std::endl;

while(!st.empty())
{
std::cout << st.top() << "\n";
st.pop();
}
 
std::cout << std::endl;

 
return 0;
}

 
 
It prints the elements but empties the stack too. Is there no way to print
stack elements without pop()-ing them out?
Christian Gollwitzer <auriocus@gmx.de>: Sep 17 08:01AM +0200

Am 17.09.14 07:38, schrieb arnuld:
> WANT: print stack elements
> PROBLEM: No way to print
 
Stacks do not provide the facility to iterate over the elements. But you
can use a std::vector instead of a stack easily:
 
> This is the code from section 12.1.2 of Nicolai's 2nd edition:
 
Adapted:
 
#include <iostream>
#include <vector>
 
int main()
{
std::vector<int> st;
 
st.push_back(10);
st.push_back(11);
st.push_back(12);
 
std::cout << ":STACK:" << std::endl;
std::cout << "size = " << st.size() << std::endl;
 
std::cout<<"Stack content: "<<std::endl;
// print non-destructively
for (auto &x: st) {
std::cout<<x<<std::endl;
}
 
std::cout<<"Popping down stack:"<<std::endl;
// now popdown stack
while(!st.empty())
{
std::cout << st.back() << "\n";
st.pop_back();
}
 
std::cout << std::endl;
 
 
return 0;
}
 
Christian
Jorgen Grahn <grahn+nntp@snipabacken.se>: Sep 17 10:34AM

On Wed, 2014-09-17, Christian Gollwitzer wrote:
 
> int main()
> {
> std::vector<int> st;
 
That's not a stack in the OP's sense -- but perhaps that was your
point?
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Victor Bazarov <v.bazarov@comcast.invalid>: Sep 17 07:52AM -0400

On 9/17/2014 1:38 AM, arnuld wrote:
> }
 
> It prints the elements but empties the stack too. Is there no way to print
> stack elements without pop()-ing them out?
 
Stack is not a container over which you can iterate without altering it.
In that sense it's similar to streams.
 
V
--
I do not respond to top-posted replies, please don't ask
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
