Tuesday, March 24, 2015

Digest for comp.lang.c++@googlegroups.com - 16 updates in 4 topics

Ian Collins <ian-news@hotmail.com>: Mar 24 05:29PM +1300

Doug Mika wrote:
> cin>>fname;
> out.open(fname);
 
> Why not, how do I get around it?
 
Use a newer compiler. The missing open member and constructor have been
added in C++11.
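
For example, a minimal sketch (assuming a C++11 standard library):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::string fname;
    std::cin >> fname;
    std::ofstream out(fname);   // C++11: constructor taking std::string
    // or, after default construction: out.open(fname);
    if (!out)
        std::cerr << "could not open " << fname << '\n';
}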
 
--
Ian Collins
Vir Campestris <vir.campestris@invalid.invalid>: Mar 24 09:39PM

On 24/03/2015 04:29, Ian Collins wrote:
> Use a newer compiler. The missing open member and constructor have been
> added in C++11.
 
Ooh. Does this mean you can now open Unicode filenames?
 
Andy
woodbrian77@gmail.com: Mar 23 07:25PM -0700

On Monday, March 23, 2015 at 5:24:11 AM UTC-5, Juha Nieminen wrote:
 
> What exactly do you think you are doing?
 
Matthew 5:3-10 says:
 
Blessed are the poor in spirit, for theirs is the kingdom of heaven.
Blessed are those who mourn, for they shall be comforted.
Blessed are the gentle, for they shall inherit the earth.
Blessed are those who hunger and thirst for righteousness,
for they shall be satisfied.
Blessed are the merciful, for they shall receive mercy.
Blessed are the pure in heart, for they shall see God.
Blessed are the peacemakers, for they shall be called sons of God.
Blessed are those who have been persecuted for the sake of
righteousness, for theirs is the kingdom of heaven.
 
Blessed are you when people insult you and persecute you,
and falsely say all kinds of evil against you because of Me.
Rejoice and be glad, for your reward in heaven is great; for
in the same way they persecuted the prophets who were before you.
-------------------------------------------------------------------
 
The future of C++ belongs to the poor in spirit, those who
mourn, the gentle, those who hunger and thirst for righteousness,
the merciful, the pure in heart, the peacemakers, and those
persecuted for the sake of righteousness.
 
Ebenezer Enterprises is a small company, but we have a
big G-d.
 
 
Brian
Ebenezer Enterprises
http://webEbenezer.net
David Brown <david.brown@hesbynett.no>: Mar 24 08:37AM +0100

On 23/03/15 22:21, Glen Stark wrote:
 
>> --- news://freenews.netfront.net/ - complaints: news@netfront.net ---
 
> I can't imagine that the post was anything but a joke. I mean c'mon, get
> a sense of humor. I found it pretty funny.
 
Brian takes his religion far too seriously to make jokes about it.
 
On comp.lang.c, someone has been forging posts as though they were
written by another regular poster - exaggerating his style and opinions
in order to mock them. It is not inconceivable that the same immature
bully is at work in this group, posting in Brian's name - Brian has
never hidden his beliefs or his conviction that his work is "guided by
Jesus", but I don't remember him making posts that are merely mindless
Biblical quotations.
 
Would the real Brian please stand up?
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Mar 24 07:13PM

> persecuted for the sake of righteousness.
 
> Ebenezer Enterprises is a small company, but we have a
> big G-d.
 
Give it a fucking rest mate. Your bible is a lie and your god doesn't
exist.
 
/Flibble
drew@furrfu.invalid (Drew Lawson): Mar 24 08:32PM

In article <863c9f30-8b17-4de6-851d-323413d1ed5c@googlegroups.com>
 
>> What exactly do you think you are doing?
 
>Matthew 5:3-10 says:
 
>Blessed are the poor in spirit, for theirs is the kingdom of heaven.
 
Blessed are the cheesemakers.
 
 
--
In Dr. Johnson's famous dictionary patriotism is defined as the
last resort of the scoundrel. With all due respect to an enlightened
but inferior lexicographer I beg to submit that it is the first.
-- Ambrose Bierce
red floyd <no.spam@its.invalid>: Mar 24 01:49PM -0700

On 3/24/2015 1:32 PM, Drew Lawson wrote:
 
>> Matthew 5:3-10 says:
 
>> Blessed are the poor in spirit, for theirs is the kingdom of heaven.
 
> Blessed are the cheesemakers.
 
It's a metaphor for ALL dairy products....
Jorgen Grahn <grahn+nntp@snipabacken.se>: Mar 24 06:11PM

On Mon, 2015-03-23, Bonnedav wrote:
>> >> How do I make a console text editor in C++?
 
>> > Wow ok. I just want to make a simple text editor for my database program.
>> > And by database I mean a folder with a bunch of files in it, each
...
>> set. If so, invoke that tool. If nothing is set, invoke vi. If that does
>> not work, write an error.
 
> How do i do it on windows?
 
It might be harder there. Sorry, don't know.
 
On Unix, what Christian describes is the de facto standard way of doing
it. It's what my newsreader is doing right now, for example -- slrn
has no text editor of its own (which means I can choose the one I
prefer).
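
A minimal sketch of that Unix convention (the function name and the
naive quoting are my own, not anything standardised; real code should
check system()'s return value):

#include <cstdlib>
#include <string>

// Launch the user's preferred editor on a file, falling back to vi.
void edit_file(const std::string& path)
{
    const char* editor = std::getenv("EDITOR");
    if (!editor || !*editor)
        editor = "vi";
    std::string cmd = std::string(editor) + " '" + path + "'";
    std::system(cmd.c_str());
}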
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
kennethadammiller@gmail.com: Mar 23 05:21PM -0700

I think it's vital to some objectives of mine to understand what's happening at the hardware level when C++11's memory ordering is used.
 
It sounds to me like memory ordering imposes some specifics on what the compiler generates as far as cache flushing goes, which is *very* important to me. In fact, I'd like to be able to control exactly when the cache is flushed or reloaded, and to be able to prevent memory operations from triggering cache reads.
 
Can someone tell me more about how C++ 11's new memory order is implemented at a low level? And can someone tell me how to achieve that goal of cache control?
Richard Damon <Richard@Damon-Family.org>: Mar 24 12:15AM -0400


> Can someone tell me more about how C++ 11's new memory order is
> implemented at a low level? And can someone tell me how to achieve
> that goal of cache control?
 
Fundamentally, C++11's new memory ordering doesn't say anything about
"cache": it doesn't mention a cache at all, and it doesn't even require
there BE a cache. The standard speaks not in terms of a cache, but in
terms of when changes can't/may/must become visible to another piece of code.
 
Practically, implementing the imposed semantics on a processor with a
cache will require certain things to be done, but exactly what is fairly
processor and implementation specific.
 
If you really need to know/control the details of what is happening, you
are going to need to get down into assembly and/or details of the
implementation.
kennethadammiller@gmail.com: Mar 23 10:44PM -0700

On Tuesday, March 24, 2015 at 12:15:32 AM UTC-4, Richard Damon wrote:
 
> If you really need to know/control the details of what is happening, you
> are going to need to get down into assembly and/or details of the
> implementation.
 
I'll be on x86/64 and ARM, but that's all; first x86/64, then ARM. ARM is a second, distant priority.
Christian Gollwitzer <auriocus@gmx.de>: Mar 24 09:06AM +0100

>> are going to need to get down into assembly and/or details of the
>> implementation.
 
> I'll be on x86/64 and ARM, but that's all; first x86/64, then ARM. ARM is a second, distant priority.
 
If you have a good understanding of the corresponding assembly, maybe
just compiling test programs and looking at the assembly output will
help? E.g. with gcc, do g++ -S test.cpp and inspect test.s. Play with
the optimization flags; at least -O1 should eliminate many trivial
function calls into template code.
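
For instance, a tiny test file such as this (only an illustration)
gives one function per memory order to compare in the generated
assembly:

// atomic_test.cpp -- compile with: g++ -std=c++11 -O1 -S atomic_test.cpp
#include <atomic>

std::atomic<int> flag(0);

void store_relaxed(int v) { flag.store(v, std::memory_order_relaxed); }
void store_release(int v) { flag.store(v, std::memory_order_release); }
void store_seq_cst(int v) { flag.store(v, std::memory_order_seq_cst); }
int  load_acquire()       { return flag.load(std::memory_order_acquire); }

On x86/64 the seq_cst store typically shows up as an xchg (or mov plus
mfence) while the relaxed and release stores are plain movs; on ARM you
will see dmb barriers (or stlr/ldar on ARMv8) instead.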
 
Christian
Nobody <nobody@nowhere.invalid>: Mar 24 08:17AM

On Mon, 23 Mar 2015 17:21:24 -0700, kennethadammiller wrote:
 
 
> Can someone tell me more about how C++ 11's new memory order is
> implemented at a low level? And can someone tell me how to achieve that
> goal of cache control?
 
Memory order is typically unrelated to caching. What it actually does is
to:
 
a) cause certain operations to be atomic, either by using atomic CPU
instructions (if such an instruction exists for the operation in question)
or by guarding non-atomic instructions or instruction sequences with
mutual exclusion primitives,
 
b) prevent the compiler from re-ordering instructions in a way that would
be safe for single-threaded code but unsafe for multi-threaded code, and
 
c) prevent the CPU from re-ordering instructions in a way that would be
safe for single-threaded code but unsafe for multi-threaded code.
 
Caching only comes into the picture if you have multiple caches which are
not automatically synchronised (e.g. via bus snooping). In that case,
the compiler may need to place atomic variables in uncached memory, or add
explicit flushes.
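
As an illustration of points (b) and (c) (only a sketch), the usual
message-passing pattern:

#include <atomic>

int payload = 0;                  // ordinary, non-atomic data
std::atomic<bool> ready(false);

void producer()
{
    payload = 42;                                    // (1)
    ready.store(true, std::memory_order_release);    // (2)
}

void consumer()
{
    while (!ready.load(std::memory_order_acquire))   // (3)
        ;                                            // spin, illustration only
    // payload is guaranteed to be 42 here; with relaxed ordering the
    // compiler or CPU could legally move (1) past (2), or hoist reads
    // of payload above (3).
}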
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Mar 24 01:20PM

On Mon, 23 Mar 2015 17:21:24 -0700 (PDT)
 
> Can someone tell me more about how C++ 11's new memory order is
> implemented at a low level? And can someone tell me how to achieve
> that goal of cache control?
 
You cannot control the instruction caches or data caches in the way you
hope for, and cache flushing can occur quite independently of memory
ordering between threads, such as by doing memory operations which cause
a cache miss, or by having a context switch.
 
Even if you use relaxed memory ordering you cannot control the time at
which cores synchronize (if that is what you really mean) because even
with relaxed memory ordering C++11 requires visibility "within a
reasonable amount of time" (§29.3/13), whatever that may be taken to
imply.
 
You can of course do things which will provoke memory synchronization,
such as by issuing fence instructions or by carrying out a C++11
acquire and release operation (note caches are always coherent anyway,
in x86/64 at least). But to the best of my knowledge there is no
instruction to _prevent_ caches doing their thing.
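
A sketch of the fence form of such an operation (the names are mine,
just for illustration):

#include <atomic>

int data = 0;
std::atomic<bool> flag(false);

void writer()
{
    data = 1;
    std::atomic_thread_fence(std::memory_order_release); // prior writes ordered
    flag.store(true, std::memory_order_relaxed);          // before this store
}

void reader()
{
    if (flag.load(std::memory_order_relaxed)) {
        std::atomic_thread_fence(std::memory_order_acquire);
        // data == 1 is now guaranteed to be visible
    }
}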
 
You might want to read this:
 
http://mechanical-sympathy.blogspot.co.uk/2013/02/cpu-cache-flushing-fallacy.html
 
Chris
kennethadammiller@gmail.com: Mar 24 07:48AM -0700

On Tuesday, March 24, 2015 at 9:24:16 AM UTC-4, Chris Vine wrote:
 
> You might want to read this:
 
> http://mechanical-sympathy.blogspot.co.uk/2013/02/cpu-cache-flushing-fallacy.html
 
> Chris
 
Oh ok. I had hoped that by making my worker threads locally proximal to one another, caches could be flushed in a coherent way extremely quickly. Say threads one and two share a portion of memory: before that memory goes out, what if another core could do a data-structure-coherent merge of the data? I thought that if I postured the problem so that work was associative and commutative, then no matter the order of thread precedence, the final result would be correct.
 
Now it seems that cache control isn't something you manage directly; it is done by the processor for you, and you can only try to get the compiler to select specific instructions that are suited to what you want and hopefully reduce unnecessary processor work as a result of those operations.
 
Please let me know your thoughts :)
"Öö Tiib" <ootiib@hot.ee>: Mar 24 08:53AM -0700

> postured the problem so that work was associative and commutative,
> then no matter the order of thread precedence, the final result would
> be correct.
 
If your algorithm involves several threads sharing and mutating the
same portion of memory then it is not a scalable algorithm, and if
you want to improve that then the easiest fix is to reduce such
sharing.
 
> want and hopefully reduce unnecessary processor work as a result
> of those operations.
 
> Please let me know your thoughts :)
 
Profile, and use the usual techniques for reducing cache misses (such as
improving locality, merging arrays, loop interchange, loop fusion), and
leave the rest to the compiler and hardware to deal with.
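
For example, a classic loop-interchange sketch (assuming a row-major
C++ array; actual numbers will vary by machine):

const int N = 1024;
double a[N][N];   // row-major, as in C++

// Cache-unfriendly: the inner loop strides N doubles per step.
double sum_column_major()
{
    double total = 0;
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            total += a[i][j];
    return total;
}

// Loops interchanged: the inner loop walks memory contiguously.
double sum_row_major()
{
    double total = 0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            total += a[i][j];
    return total;
}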
