Wednesday, December 8, 2021

Digest for comp.lang.c++@googlegroups.com - 10 updates in 3 topics

Lynn McGuire <lynnmcguire5@gmail.com>: Dec 08 04:11PM -0600

"Modernizing your code with C++20" by Phil Nash
https://blog.sonarsource.com/modernizing-your-code-with-cpp20
 
"C++20 is here! In fact, as we head towards 2022, it's been here a
while. It may surprise some, but we're only a few months from a freeze
on new proposals for C++23! But let's not get ahead of ourselves. C++20
is a big release - at least the biggest since C++11 - some have said
it's the biggest since the first standard in 1998!"
 
"Another possible surprise is that support for C++20 is currently better
in GCC and MSVC++ than in Clang. Nonetheless, significant chunks of the
new language and library features are widely available across the three
major compilers, already. Many of them, including some less well known
features, are there to make common things safer and easier. So we've
been hard at work implementing analyzer rules to help us all take full
advantage of the latest incarnation of "Modern C++". This is just the
start, but we already have 28 C++20-specific rules in the latest
releases of all our products (with many more in development)."
 
Lynn
David Brown <david.brown@hesbynett.no>: Dec 08 09:08AM +0100

On 07/12/2021 20:14, Scott Lurndal wrote:
 
> If it's already in the L1 cache, then the processor will
> automatically treat it as a near-atomic, this is expected
> to be a rare case with correctly designed atomic usage.
 
This is such an obvious improvement that I am constantly amazed at how
long it has taken to be implemented. Using ordinary memory for atomic
operations, locks, etc., is massively inefficient compared to a
dedicated hardware solution.
 
I've used a multi-core embedded microcontroller with a semaphore block,
consisting of a number (16, IIRC) of individual semaphores. Each of
these was made of two 16-bit parts - the lock tag and the value. You
can only change the value if you have the lock, and you get the lock by
writing a non-zero tag when the tag is currently 0 (unlocked). You
release it by writing your tag with the high bit set. It is all very
simple, and extremely fast - no need to go through caches, snooping, or
any of that nonsense because it is dedicated and connected close to the
cpu's core buses.
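The tag/value protocol above can be sketched roughly in C++ as a software simulation (the struct layout and function names are my assumptions for illustration; on the real part the write rules are enforced by the hardware itself, not by code like this):

```cpp
#include <cstdint>

// Hypothetical model of one hardware semaphore: a 16-bit lock tag and a
// 16-bit value.  Tag 0 means unlocked.  The hardware's write rules are
// simulated in software here.
struct Semaphore {
    uint16_t tag;
    uint16_t value;
};

// Acquire: writing a non-zero tag only "sticks" when the tag is currently 0.
bool try_lock(Semaphore& s, uint16_t my_tag) {
    if (s.tag == 0 && my_tag != 0) {
        s.tag = my_tag;
        return true;
    }
    return false;
}

// Release: write your own tag with the high bit set; the semaphore clears
// its tag only when the low bits match the current holder's tag.
void unlock(Semaphore& s, uint16_t my_tag) {
    uint16_t w = my_tag | 0x8000;              // high bit signals release
    if (static_cast<uint16_t>(w & 0x7FFF) == s.tag)
        s.tag = 0;
}
```

On the real device these would be volatile accesses to memory-mapped registers, and the check-then-write in try_lock would be a single hardware-arbitrated operation rather than two separate steps.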
 
Obviously in a "big" system you need to handle more than two cores (and
with Gen-Z and CXL systems, other bus masters), support larger
numbers of locks, and security is a rather different matter! But the
principle of having dedicated hardware, memory mapped but not passing
through caches and slow external memory, is the same.
 
Atomic operations carried out by the core on memory in the L1 caches
will be fast as long as there are no conflicts, but you wouldn't bother
with atomics unless there /were/ a risk of conflict. And then they get
slow. With a "far atomics" solution, you should be able to get much
more consistent timings and efficient results.
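The contended case in question is essentially the shared-counter pattern. In C++ the source is identical whether the hardware executes the operation as a near or a far atomic; this is just an illustrative sketch, not tied to any particular CPU:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Many threads doing read-modify-write on one atomic: with near atomics
// the cache line ping-pongs between cores' L1 caches under contention,
// while a far-atomic implementation can perform the add at the
// interconnect or memory controller instead.  Same C++ either way.
long contended_count(int n_threads, int iters_per_thread) {
    std::atomic<long> counter{0};
    std::vector<std::thread> threads;
    for (int t = 0; t < n_threads; ++t) {
        threads.emplace_back([&counter, iters_per_thread] {
            for (int i = 0; i < iters_per_thread; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    }
    for (auto& th : threads)
        th.join();
    return counter.load();
}
```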
 
(At least, that is my understanding of it, without having actually used
them!)
Bonita Montero <Bonita.Montero@gmail.com>: Dec 08 09:50AM +0100

Am 07.12.2021 um 20:25 schrieb Scott Lurndal:
>> to be a rare case with correctly designed atomic usage.
 
> In case you need a public reference for a shipping processor:
 
> https://developer.arm.com/documentation/102099/0000/L1-data-memory-system/Instruction-implementation-in-the-L1-data-memory-system
 
That's not a processor implementing this Gen-Z interconnect and
its atomic facilities. This is just an optimization for a special
kind of processor architecture.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Dec 08 12:59AM -0800

On 12/7/2021 11:25 AM, Scott Lurndal wrote:
>> to be a rare case with correctly designed atomic usage.
 
> In case you need a public reference for a shipping processor:
 
> https://developer.arm.com/documentation/102099/0000/L1-data-memory-system/Instruction-implementation-in-the-L1-data-memory-system
 
You have encountered the rabbit hole of Bonita! I have proved her/it
wrong several times. No good, goes nowhere.
Bonita Montero <Bonita.Montero@gmail.com>: Dec 08 03:41PM +0100

Am 08.12.2021 um 09:59 schrieb Chris M. Thomasson:
 
>> https://developer.arm.com/documentation/102099/0000/L1-data-memory-system/Instruction-implementation-in-the-L1-data-memory-system
 
> You have encountered the rabbit hole of Bonita! I have proved her/it
> wrong several times. No good, goes nowhere.
 
What he links isn't proof of what he says.
The above CPU doesn't implement the mentioned interconnect. It's
just a minor improvement for this special kind of CPU architecture
to speed up lock-flipping between concurrent cores.
scott@slp53.sl.home (Scott Lurndal): Dec 08 03:48PM


>> You have encountered the rabbit hole of Bonita! I have proved her/it
>> wrong several times. No good, goes nowhere.
 
>What he links isn't a proof for what he says.
 
As you note, Chris, Christof/Bonita cannot admit he
was wrong.
 
>The above CPU doesn't implement the mentioned interconnect.
 
Of course not, ARM doesn't make CPUs. They provide the IP
used to make real CPUs; for example the Amazon AWS Graviton 2 and 3.
 
Yet, ARM does provide interconnect IP which fully supports
near and far atomics.
 
Some of the current Neoverse N2 licensees are listed here:
 
https://www.design-reuse.com/news/49872/arm-neoverse-n2-v1-platform.html
Bonita Montero <Bonita.Montero@gmail.com>: Dec 08 05:35PM +0100

Am 08.12.2021 um 16:48 schrieb Scott Lurndal:
> used to make real CPUs; for example the Amazon AWS Graviton 2 and 3.
 
> Yet, ARM does provide interconnect IP which fully supports
> near and far atomics.
 
They're not far in the sense of the mentioned interconnect.
"Öö Tiib" <ootiib@hot.ee>: Dec 08 10:57AM -0800

On Wednesday, 8 December 2021 at 10:59:51 UTC+2, Chris M. Thomasson wrote:
 
> > https://developer.arm.com/documentation/102099/0000/L1-data-memory-system/Instruction-implementation-in-the-L1-data-memory-system
 
> You have encountered the rabbit hole of Bonita! I have proved her/it
> wrong several times. No good, goes nowhere.
 
But that is what comp.lang.c++ is. Whenever you come here, BM is
present, wrong (or not even wrong), desperately trying to obscure
it by snipping out of context, removing attributions, misrepresenting
what others wrote, moving goalposts, etc. Keeping hearth and home
warm. ;-D
"Öö Tiib" <ootiib@hot.ee>: Dec 07 04:39PM -0800

On Tuesday, 7 December 2021 at 20:00:13 UTC+2, Alf P. Steinbach wrote:
 
> It was idiotic. It was simple blunders. But in both cases, as I recall,
> they tried to cover up the blunder by writing a rationale; they took the
> blunders to heart and made them into great obstacles, to not lose face.
 
If the C and/or C++ committees had standardized that wchar_t means
precisely "UTF-16 LE code unit" and nothing else, then it would be
something different on Windows by now.
 
In the case of Microsoft, the only way to make it change its idiotic
"existing practices" appears to be to standardize them. Once an idiotic
practice of Microsoft's is standardized, Microsoft finds the resources
to switch from it to some reasonable one (as their "innovative"
extension).
David Brown <david.brown@hesbynett.no>: Dec 08 09:18AM +0100

On 07/12/2021 18:59, Alf P. Steinbach wrote:
 
>> The C++ standard explicitly addresses that point, though the C standard
>> does not.
 
> Happy to hear that but some more specific information would be welcome.
 
My understanding is that at that time, the Windows wide character set
was UCS2, not UTF-16. Thus a 16-bit wchar_t was sufficient to encode
all wide characters.
 
It turned out that UCS2 was a dead-end, and now UTF-16 is a hack-job
that combines all the disadvantages of UTF-8 with all the disadvantages
of UTF-32, and none of the benefits of either. We can't blame MS for
going for UCS2 - they were early adopters and Unicode was 16-bit, so it
was a good choice at the time. They, and therefore their users, were
unlucky (along with Java, Qt, Python, and no doubt others). Changing is
not easy - you have to make everything UTF-8 and yet still support a
horrible mix of wchar_t, char16_t, UCS2, and UTF-16 for legacy.
 
But as far as I can see, the C and C++ standards were fine with 16-bit
wchar_t when they were written. I have heard, but have no reference or
source, that the inclusion of 16-bit wchar_t in the standards was
promoted by MS in the first place.
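For example, any code point outside the Basic Multilingual Plane takes two 16-bit code units in UTF-16 (a surrogate pair), which is exactly why a 16-bit wchar_t stopped being a one-unit-per-character encoding. The sketch below uses char16_t to show it portably; U+1F600 is my example, not from the discussion:

```cpp
#include <string>

// U+1F600 lies above U+FFFF, so UTF-16 encodes it as a surrogate pair:
// high surrogate 0xD83D followed by low surrogate 0xDE00 - two code
// units for one character.  A 16-bit wchar_t cannot hold it in one unit.
std::u16string non_bmp_example() {
    return u"\U0001F600";
}
```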
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
