Sunday, October 2, 2022

Digest for comp.lang.c++@googlegroups.com - 15 updates in 5 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 02 12:44PM -0700

On 9/28/2022 2:11 PM, Scott Lurndal wrote:
 
> The processor fabric forwards the operation to the point of coherency
> (e.g. the L2/LLC) for cachable memory locations and to the endpoint for
> uncachable memory locations (e.g. a PCIexpress or CXL endpoint).
 
Oh yeah! I remember reading about this over on comp.arch a while back. I
wonder if I can find the post. Thanks Scott. :^)
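 
For illustration, a minimal C++-level sketch of the kind of atomic RMW
being discussed (names are hypothetical; whether it executes in the
core or at the point of coherency is entirely up to the hardware):
 
#include <atomic>
#include <cstdint>
 
std::atomic<std::uint64_t> counter{0};
 
void hit()
{
    // A single atomic read-modify-write. On ARMv8.1+ this can compile
    // to LDADD, which the fabric may execute "far", at the point of
    // coherency (e.g. the L2/LLC), instead of pulling the cache line
    // into this core's L1.
    counter.fetch_add(1, std::memory_order_relaxed);
}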
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 02 12:46PM -0700

On 9/30/2022 9:26 PM, Bonita Montero wrote:
>> LDSMAX (signed maximum), LDUMAX (unsigned maximum), LDSMIN, LDUMIN.
 
> Eh, RMW can be emulated with LL/SC but not vice versa.
> A CAS emulated by LL/SC isn't slower than a native CAS.
 
Using LL/SC can be tricky. You really need to isolate the reservation
granule...
 
 
> But atomic increments, decrements, ands, ors or whatever
> emulated with LL/SC is sometimes slower.
 
How many times do you spin on an SC failure before you get pissed off?
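 
For illustration, a minimal sketch of such a retry loop (hypothetical
names, standard C++11 atomics); compare_exchange_weak is allowed to
fail spuriously precisely because, on LL/SC machines, any disturbance
of the reservation granule kills the SC:
 
#include <atomic>
 
// Atomic increment written as a CAS retry loop. On LL/SC hardware
// (e.g. ARM LDXR/STXR) the weak compare-exchange may fail spuriously
// whenever the reservation granule is disturbed, hence the spin.
void increment(std::atomic<int>& a)
{
    int expected = a.load(std::memory_order_relaxed);
    while (!a.compare_exchange_weak(expected, expected + 1,
                                    std::memory_order_relaxed)) {
        // on failure, expected has been reloaded with the current
        // value, so we simply try again
    }
}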
Michael S <already5chosen@yahoo.com>: Oct 02 03:32PM -0700

On Saturday, October 1, 2022 at 8:02:05 PM UTC+3, Scott Lurndal wrote:
 
> [*] For your edification, CAS on modern architectures isn't
> handled by the CPU, but rather by the point of coherency (LLC
> or PCI-Express/CXL endpoint).
 
I don't think so.
IMHO, a typical implementation is that the CPU acquires ownership
of the location and refuses all attempts by other agents to take it back
until both parts of the CAS are completed and committed to L1$.
What you describe is an idea that floats around widely but has never
been implemented on general-purpose CPUs, mostly because for the
workloads that run on general-purpose CPUs it's a very bad idea.
 
Maybe on some network processor it works the way you suggest,
but I wouldn't call the architectures of those processors "modern".
 
David Brown <david.brown@hesbynett.no>: Oct 02 01:38PM +0200

On 30/09/2022 19:20, Kaz Kylheku wrote:
 
> It's true whether you're toggling switches on a panel to produce
> a binary program directly in memory, or whether you're using OCaml
> or Prolog.
 
Yes. And it is not uncommon in many fields to blame the tools for
mistakes of the users. Sometimes such blame is fair, of course - but
usually not.
 
> don't have the same safety, and we evolve things accordingly.
 
> Speaking of returning the cross-posting to comp.lang.c++, that language
> as of C++17 has chosen to define some evaluation orders.
 
Yes. Primarily it is for the case of "cout << a() << b();". This is a
situation where specific ordering really does help the programmer.
 
Language changes are sometimes a good idea, but there are almost always
trade-offs to consider. Things that one person sees as clearly a good
idea, will sound crazy and be a pain for someone else.
 
> argument space: with just the classic sequencing of function calls not
> being interleaved, this is still undefined:
 
> f(g(i++), h(i++))
 
There are always going to be things that are undefined or unspecified.
There are always going to be ways to write things that make no sense, or
at least no well-defined and consistent sense. A language can aim to
reduce these possibilities, but it comes at a big cost. There are three
disadvantages, as I see them. One is that it tells programmers that
even though a piece of code is unclear to humans, it is valid code - and
then some people write code like "f(i++, i++)" that is hard for others
to comprehend. Another is that it reduces the ability of compilers and
tools to help you write good, clear code (compilers can warn on
"f(i++, i++)" when it is not defined behaviour). The third is that it
reduces optimisation possibilities for perfectly good code.
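 
A small sketch of the conventional workaround (f, g, h hypothetical):
hoist the side effects into separately sequenced statements.
 
int g(int x) { return x + 1; }
int h(int x) { return x * 2; }
int f(int a, int b) { return a + b; }
int i = 0;
 
void demo()
{
    // Instead of f(g(i++), h(i++)) - whose evaluation order is
    // unspecified - sequence the increments explicitly:
    int v1 = i++;
    int v2 = i++;
    f(g(v1), h(v2));
}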
 
 
> of i. But, I think, we can infer that that function which is called
> first receives the original value of i, and i is reliably incremented
> twice. Baby steps in the right direction, in any case.
 
No - that is the /wrong/ message to take from the C++17 changes. It is
not "baby steps in the right direction" - it is a case of minimal
changes needed to give defined behaviour to common incorrect code that
has been used in practice for years. The key motivation is that a lot
of code has been written that assumes "cout << f() << g();" calls "f()"
first, then "g()".
 
It is not a step in a direction towards more general changes, because
that would mean making at least some existing correct code less efficient.
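 
A minimal example of the guarantee in question (f and g hypothetical;
before C++17 the call order in the last line was unspecified):
 
#include <iostream>
 
int i = 0;
int f() { return ++i; }
int g() { return i * 10; }
 
int main()
{
    // C++17 sequences the operands of << left to right, so f() is
    // called before g() and this reliably prints "1 10". Before
    // C++17, g() could run first and the output could be "1 0".
    std::cout << f() << ' ' << g() << '\n';
}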
 
> example where you have a side effect in A that B depends on. What I'm
> likely missing that it's probably not the built-in arithmetic << that is
> of concern, but overloads.
 
Correct.
 
>> it's the programmers' fault.
 
> It's not a game of blame, but of reducing the unfortunate situations in
> which someone has a reason to look for something to blame.
 
Fair enough.
 
> I can't look at a large body of code (that I perhaps didn't write: so no
> responsibility of mine) and easily know whether there is a problem due
> to eval order.
 
Analysing the correctness of other people's code is never an easy job!
 
However, I don't think a specified evaluation order would help. If you
see "f(g(), h());" in the code, you are concerned that the programmer
might be assuming that "g()" is evaluated before "h()". But if you know
the programmer was good at his/her job, you will know that he/she would
have used temporaries and run g() and h() before f() if the order
mattered. So you know the code is correct, and the order does not matter.
 
However, if the order were specified by the language, then you still
know the code is correct but you don't know if the order of the calls
matters (and the programmer relied on the evaluation order), or if it
does not matter.
 
The lack of specification in the language gives you /more/ information,
not less.
 
And if you can't rely on the original programmer's competence, you can't
rely on anything in the code without more detailed checking anyway.
 
 
Sometimes tighter specification for things like evaluation order gives
you benefits, but not always - it can just as well be a disadvantage.
 
 
Ben Bacarisse <ben.usenet@bsb.me.uk>: Oct 02 01:00PM +0100


> On 30/09/2022 19:20, Kaz Kylheku wrote:
<cut>
>> as of C++17 has chosen to define some evaluation orders.
 
> Yes. Primarily it is for the case of "cout << a() << b();". This is
> a situation where specific ordering really does help the programmer.
 
I'm curious. How does a specific ordering help the programmer here?
Even with no specified order of evaluation, the output must look as if
the result of b() was appended to the stream resulting from the
left-most << operator. All the ordering does is cause any side effects
in a() and b() to be predictably ordered, and that might look more like
helping the programmer to write dodgy code.
 
--
Ben.
David Brown <david.brown@hesbynett.no>: Oct 02 04:20PM +0200

On 02/10/2022 14:00, Ben Bacarisse wrote:
> left-most << operator. All the ordering does is cause any side effects
> in a() and b() to be predictably ordered, and that might look more like
> helping the programmer to write dodgy code.
 
Yes, it is a matter of side-effects. Without side-effects in "a()" or
"b()", the result is clear regardless of the order of evaluation.
 
And if there are side-effects, and order matters, then programmers can
(and should, at least until C++17) write along the lines of :
 
const auto res_a = a();
const auto res_b = b();
cout << res_a << res_b;
 
But it seems that many programmers have been assuming that using
iostreams for output like this has a specified order - even when they
don't make such assumptions about other kinds of expressions. I suppose
it is something about the appearance of the code that makes it look like
there is an order to the evaluation.
 
Maybe it is not accurate to say this "helps the programmer", and it is
more correct to say that it makes what used to be dodgy code into
correct code. But if it means their old dodgy code continues to work as
they intended even when they use newer compilers that do more
re-ordering optimisations, then it helps them.
 
A disadvantage of this change is that it might make some programmers
think ordering is more tightly specified for other kinds of expression
as well, and rely on such incorrect assumptions.
 
(I'm not convinced that this evaluation order change in C++17 is a good
idea, or worth the complications of having this special case. I am just
trying to say why it is different from a more general evaluation order
specification, and why the C++ committee thought it was worth changing.)
Tim Rentsch <tr.17687@z991.linuxsc.com>: Oct 02 07:24AM -0700

> left-most << operator. All the ordering does is cause any side effects
> in a() and b() to be predictably ordered, and that might look more like
> helping the programmer to write dodgy code.
 
For this sub-thread I think the posting should have been limited
to comp.lang.c++, and not included comp.lang.c.
Tim Rentsch <tr.17687@z991.linuxsc.com>: Oct 02 07:31AM -0700

>> principal means of getting anything done is a poor technical situation.
 
> That must be some other language. I have never written any C++ code
> where side effects were the principal means of getting something done.
 
That claim is very hard to believe.
 
> If anything, C++ in general has moved towards more functional style
> over the years, with fewer mutable objects, not to speak about objects
> mutated by side effects.
 
I feel obliged to mention that comments about C++ are not topical
in comp.lang.c, and the Newsgroups: line should have been
adjusted accordingly.
Tim Rentsch <tr.17687@z991.linuxsc.com>: Oct 02 12:57PM -0700

>> are). Someone seeing just "convert(...)" in the code can't have any
>> idea what it's doing.
 
> It sounds like an argument against C++-style static polymorphism.
 
Do you mean ad hoc polymorphism, aka function overloading? Or do
you mean something else, e.g., template-related?
Tim Rentsch <tr.17687@z991.linuxsc.com>: Oct 02 12:59PM -0700


> Non-GC is definitely not for embedded programming only.
> I have seen a pretty large project for digital imaging workstations
> fail because of the non deterministic nature of GC.
 
Garbage collection is not inherently non-deterministic.
Manfred <noname@add.invalid>: Oct 02 10:05PM +0200

On 10/2/2022 4:20 PM, David Brown wrote:
 
>     const auto res_a = a();
>     const auto res_b = b();
>     cout << res_a << res_b;
 
I disagree that this change is to be interpreted as "giving some help to
bad programmers".
 
The key point is that this is about the insertion operator, not the
binary shift operator (here the C++ syntax indeed does not help).
Insertion operators, and specifically in combination with streams, have
been explicitly designed /from day one/ to allow for:
 
cout <<
    value_1 <<
    value_2 <<
    value_3 <<
    value_4 <<
    value_5 <<
    value_6;
 
Along the same lines:
 
cout <<
    f_1() <<
    f_2() <<
    f_3() <<
    f_4() <<
    f_5() <<
    f_6();
 
This is just the expected semantics, which good language design should account for.
 
I am pretty confident that the construct that you suggest as "the right
way" was not the intention of the language designers - think e.g. IO
manipulators.
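 
A small illustration of that point about manipulators (values are
hypothetical): a manipulator takes effect at its position in the
chain, which a rewrite into separately evaluated temporaries would
obscure.
 
#include <iomanip>
#include <iostream>
 
int main()
{
    double value_1 = 3.14159, value_2 = 2.71828;
    // setprecision and setw take effect exactly where they appear in
    // the chain; setw applies only to the next insertion.
    std::cout << std::fixed << std::setprecision(2)
              << value_1 << ' '
              << std::setw(10) << value_2 << '\n';
}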
 
 
Michael S <already5chosen@yahoo.com>: Oct 02 01:53PM -0700

On Sunday, October 2, 2022 at 10:57:21 PM UTC+3, Tim Rentsch wrote:
 
> > It sounds like an argument against C++-style static polymorphism.
 
> Do you mean ad hoc polymorphism, aka function overloading? Or do
> you mean something else, e.g., template-related?
 
Mostly the former, but I'd rather do without the latter as well,
with very few exceptions.
The only static polymorphism I wholeheartedly approve of is
polymorphism of arithmetic operators for built-in types.
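 
For context, a hypothetical overload set of the kind being objected
to: at the call site nothing reveals which convert is selected, since
that depends only on the static type of the argument.
 
#include <string>
 
std::string convert(int x)    { return std::to_string(x); }
std::string convert(double x) { return std::to_string(x); }
 
template <typename T>
std::string convert(const T& x) { return x.to_string(); }  // fallback
 
// Someone reading just this line cannot tell which overload runs:
// auto s = convert(v);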
James Kuyper <jameskuyper@alumni.caltech.edu>: Oct 02 04:08PM -0400

On 10/1/22 15:30, Lynn McGuire wrote:
 
> > Subscripted using hkl(3,i)? I thought this was supposed to be C++
> > code, not Fortran?
 
> They are C++ objects acting like Fortran arrays.
 
They have an object type with an operator() overload that takes the
indices for the array access? If so, I think that describing them as
"arrayed objects" is a bit misleading.
Tim Rentsch <tr.17687@z991.linuxsc.com>: Oct 02 12:37PM -0700

>> of converting one or more operands.
 
> Or explicitly say in [expr.add] that the promotion is performed on
> the integer or enumeration operand,
 
Seems like a band-aid. I think it would be better to add a case
to the usual arithmetic conversions that covers the case of a
pointer and a non-pointer.
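 
For reference, the case under discussion (hypothetical names): the
integer or enumeration operand is converted before the addition, while
the pointer operand is left alone.
 
#include <cstdio>
 
enum Step { Two = 2 };
 
int main()
{
    int a[4] = {10, 20, 30, 40};
    int* p = a;
    // The enumeration operand undergoes the integral promotion; the
    // addition then advances the pointer by that many elements.
    int* q = p + Two;
    std::printf("%d\n", *q);  // prints 30
}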
olcott <polcott2@gmail.com>: Oct 02 01:25PM -0500

On 10/2/2022 1:06 PM, Richard Damon wrote:
> actual input based on the actual behavior of the input (at least when
> you fix the meaning of the behavior of an input, which for a HALT
> DECIDER, is DEFINED by the behavior of the machine the input represents).
 
That is merely your weasel-worded (thus deceptive) way of referring to
the behavior of the non-input: int main() { P(P); }
 
typedef void (*ptr)();
int H(ptr x, ptr y);  // the halt decider, defined elsewhere (Halt7.c)
 
void P(ptr x)
{
    int Halt_Status = H(x, x);
    if (Halt_Status)   // if H(P,P) reports that its input halts
    HERE: goto HERE;   // P loops and never halts
    return;            // else P halts
}
 
For the infinite set of H/P pairs such that H correctly simulates or
directly executes 1 to ∞ steps of its input, P never reaches its final
state and halts.
 
On 7/24/2022 6:19 PM, Paul N wrote:
> simulate its input until it *correctly* matches a non-
> halting behaviour pattern then this SHD is correct when it
> aborts its simulation and reports non-halting.
 
The H/P pair meeting the above requirements is fully operational in this
system.
 
Complete halt deciding system (Visual Studio Project)
(a) x86utm operating system
(b) Complete x86 emulator adapted from libx86emu to compile under Windows
(c) Several halt deciders and their sample inputs contained within Halt7.c
https://liarparadox.org/2022_09_07.zip
 
 
--
Copyright 2022 Pete Olcott "Talent hits a target no one else can hit;
Genius hits a target no one else can see." Arthur Schopenhauer
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
