Saturday, October 9, 2021

Digest for comp.lang.c++@googlegroups.com - 17 updates in 3 topics

Branimir Maksimovic <branimir.maksimovic@icloud.com>: Oct 09 07:29AM


> Fwiw, here is a decent description of DLA:
 
> https://en.wikipedia.org/wiki/Diffusion-limited_aggregation
 
> Thanks.
Works! thanks!
 
--
 
7-77-777
Evil Sinner!
with software, you repeat same experiment, expecting different results...
Bonita Montero <Bonita.Montero@gmail.com>: Oct 09 09:38AM +0200

Am 07.10.2021 um 03:22 schrieb Chris M. Thomasson:
 
>> Or more precisely: practically yes. Because other uses aren't
>> practicable because of the drawbacks of lock-free algorithms.
 
> think outside of the box.
 
Any algorithm having a slow path isn't
lock-free, because the slow path locks.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 09 12:29PM -0700

On 10/9/2021 12:38 AM, Bonita Montero wrote:
 
>> think outside of the box.
 
> Any algorithm having a slow path isn't
> lock-free, because the slow path locks.
 
One can even choose to follow the slow-path or not. It would be in the
try_* variety of functions... ;^)
 
So allowing a fast and a slow path gives the best of both worlds.
 
:^)
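The try_* idea above can be sketched roughly like this (a minimal illustration; the function and variable names are mine, not from any real API): the caller first attempts a non-blocking try_lock fast path and only falls back to the blocking slow path when contention is detected.

```cpp
#include <atomic>
#include <mutex>

std::mutex slow_path;                      // the blocking slow path
std::atomic<long> fast_hits{0}, slow_hits{0};
long shared_value = 0;

// The caller chooses: first attempt the non-blocking try_ path, and
// only take the blocking slow path when the fast path reports contention.
void update(long v) {
    if (slow_path.try_lock()) {            // fast path: no blocking occurred
        shared_value = v;
        slow_path.unlock();
        fast_hits.fetch_add(1, std::memory_order_relaxed);
    } else {                               // slow path: block until acquired
        std::lock_guard<std::mutex> g(slow_path);
        shared_value = v;
        slow_hits.fetch_add(1, std::memory_order_relaxed);
    }
}
```

In an uncontended run the fast path is always taken; whether the combined thing still counts as "lock-free" is exactly what is under dispute in this thread.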
Bonita Montero <Bonita.Montero@gmail.com>: Oct 09 09:59PM +0200

> So allowing a fast and a slow path gives the best of both worlds.
 
Am I talking to a complete idiot here?
 
We're not discussing what's the best while synchronizing threads
but what's lock-free and what's not. And having a slow path makes
an algorithm not lock-free.
 
Read the WP-article: https://en.wikipedia.org/wiki/Non-blocking_algorithm :
"In computer science, an algorithm is called non-blocking
if failure or suspension of any thread cannot cause failure
_or suspension of another thread_".
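The quoted definition can be made concrete with two counter implementations (my own minimal sketch): in the mutex version, a thread suspended while holding the lock suspends every other incrementing thread, so it is blocking; in the atomic version no thread's suspension can delay another.

```cpp
#include <atomic>
#include <mutex>

// Non-blocking: a single atomic read-modify-write; a thread suspended
// mid-increment cannot make any other thread wait.
std::atomic<long> lf_counter{0};
void lf_increment() { lf_counter.fetch_add(1, std::memory_order_relaxed); }

// Blocking: if the OS suspends a thread between lock and unlock, every
// other caller of locked_increment() is suspended along with it.
std::mutex m;
long locked_counter = 0;
void locked_increment() { std::lock_guard<std::mutex> lk(m); ++locked_counter; }
```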
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 09 01:13PM -0700

On 10/9/2021 12:59 PM, Bonita Montero wrote:
> "In computer science, an algorithm is called non-blocking
>  if failure or suspension of any thread cannot cause failure
>  _or suspension of another thread_".
 
You think lock-free is crap. You wrote such things. So, whatever Bonita.
Paavo Helde <myfirstname@osa.pri.ee>: Oct 09 10:47AM +0300

08.10.2021 23:47 Steve Keller kirjutas:
> definition in x.cc so that it is
 
> extern const int a = 42;
 
> the object 'a' is created in x.o. Can someone explain this?
 
Yes, this is working as designed in C++. I believe this is one of the
differences between C and C++.
 
Such global const definitions normally appear in a common header file
and would just cause duplicate linker symbols if they had external
linkage. Now they can be just optimized away in translation units which
do not use them.
 
I believe the original motivation was to have an easy replacement of C
macros in header files, i.e. instead of
 
#define a 42
 
one can easily use
 
const int a = 42;
 
which would function almost exactly like a #define, plus it honors C++
namespaces and cannot be undefined.
 
They messed it up, though, with class static const members, which for some
reason did not follow the same rules and caused a lot of randomly
appearing, obscure linker errors.
"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Oct 09 03:05PM +0200

On 8 Oct 2021 22:47, Steve Keller wrote:
> access it in another:
 
> --------- x.cc ---------
> const int a = 42
 
Assuming the code actually has a semicolon here, so that it compiles.
Copy and paste code to avoid such typos.
 
 
 
> movl a(%rip), %eax
> ret
 
> But the object file x.o does not contain anything. Why?
 
Because without `extern` the definition in `x.cpp` has internal linkage,
as noted by Red Floyd in his response.
 
And since it's not visible outside the translation unit, and is not used
within it, and its initialization has no side effects, it's optimized away.
 
 
> definition in x.cc so that it is
 
> extern const int a = 42;
 
> the object 'a' is created in x.o. Can someone explain this?
 
See above.
 
A good way to avoid these problems is to place the declaration of `a`,
with an `extern`, in a header file that you include both in the defining
source file and in the using source file.
 
Because, when the compiler has seen the `extern`-ness of `a` once in a
translation unit, then in any subsequent encounter of `a` that
`extern`-ness is implied.
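A single-file sketch of that layout (in real code these would be three files; the names are illustrative):

```cpp
// ---- a.h: declaration only, included by every user ----
extern const int a;      // external linkage, no definition

// ---- x.cc: the one definition ----
// The compiler has already seen the `extern` declaration above, so this
// definition keeps external linkage, even though a namespace-scope
// `const` would otherwise default to internal linkage in C++.
const int a = 42;

// ---- main.cc: any other translation unit just includes a.h ----
int get_a() { return a; }
```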
 
 
- Alf
scott@slp53.sl.home (Scott Lurndal): Oct 09 02:32PM


>They messed it up though with class static const members, which by some
>reason did not follow the same rules and caused a lot of randomly
>appearing obscure linker errors.
 
I would not characterize the linker errors as either random or obscure.
 
Annoying, yes. But the solution is simple, if not necessarily pretty.
Juha Nieminen <nospam@thanks.invalid>: Oct 09 05:40PM

> definition in x.cc so that it is
 
> extern const int a = 42;
 
> the object 'a' is created in x.o. Can someone explain this?
 
This is one key difference between C and C++.
 
In C, const variables at the global namespace level have external linkage
by default. In other words, they are implicitly 'extern'.
 
In C++, const variables at the namespace level (including the global
namespace) have internal linkage by default. In other words, they are
implicitly 'static'.
 
In C, if you want a const variable in the global namespace to have
internal linkage, you need to explicitly specify 'static' in its
declaration.
 
In C++, if you want a const variable at a namespace level (including
the global namespace) to have external linkage, you need to explicitly
specify 'extern' in its declaration.
 
(In C++ this is true for *any* type, not just basic types.)
 
Previously this was a minor nuisance in C++ because a 'const' variable
in a header would be duplicated in each object file that used it, if
the compiler couldn't optimize it away (i.e. "inline" it). If you wanted
only one instance of that const variable, you had to 'extern' it manually.
 
C++17 added support for "inline const" variables, which does this for
you automatically (in the same way as 'inline' does with functions).
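A minimal sketch of the C++17 feature described above (header name and identifiers are mine): marking the variable `inline` gives you a single shared instance across all translation units, with no separate `extern` declaration/definition pair.

```cpp
// ---- constants.h (illustrative): safe to include in many .cc files ----
// C++17: `inline` merges all definitions into one entity, the same way
// it does for inline functions, so &max_size is identical in every TU.
inline const int max_size = 1024;

int half_max() { return max_size / 2; }
```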
Paavo Helde <myfirstname@osa.pri.ee>: Oct 09 09:39PM +0300

09.10.2021 17:32 Scott Lurndal kirjutas:
>> reason did not follow the same rules and caused a lot of randomly
>> appearing obscure linker errors.
 
> I would not characterize the linker errors as either random or obscure.
 
If linker errors start to appear after some straightforward code
rewrite, like replacing an if() with the ternary operator, and only with
some compilers and only with some optimization options, they will surely
look random and obscure if you have not encountered such issues before.
 
Yes, they are not obscure once you already understand the issue.
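A sketch of the kind of rewrite that used to surface these errors (identifiers are mine): the conditional operator binds references to both operands and thereby odr-uses the static const members, while an if/else that copies the values need not.

```cpp
struct Config {
    static const int Small = 1;   // in-class initializers only declare
    static const int Big   = 2;
};

// Out-of-line definitions. Once the members are odr-used, these are
// required (for non-constexpr members this is still true today);
// omitting them is what produced the link-time errors.
const int Config::Small;
const int Config::Big;

int pick(bool b) {
    // The conditional operator yields an lvalue here, binding references
    // to both members and odr-using them - unlike a plain if/else that
    // returns the values by copy.
    return b ? Config::Big : Config::Small;
}
```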
 
> Annoying, yes. But the solution is simple, if not necessarily pretty.
 
IIRC at one point one had to use different solutions for different
mainstream compilers, as there was no code variant accepted by all. But
that was long ago.
David Brown <david.brown@hesbynett.no>: Oct 09 10:23AM +0200

On 08/10/2021 20:23, Bonita Montero wrote:
>> decade or so before that, ColdFire was one of the most popular
>> architectures in small network devices such as SOHO NAT routers, ...
 
> MIPS was dominant in routers before it was replaced with ARM.
 
MIPS was dominant in high-end routers and fast switches (with PowerPC
being the main competitor). ColdFire had a substantial share of the
small device market for quite a while, as uClinux became a popular
choice for the system, though MIPS was also used there (mostly with
proprietary OSes). Later, MIPS was also used with Linux in such
routers, and as you say, ARM is now the most common choice (though MIPS
is still used in many low-end and high-end network devices, and PowerPC
is still found in some big systems).
 
> 68K had a straighter instruction-set than x86, but the two times
> indirect addressing-modes introduced with the 68020 were totally
> brain-damaged.
 
68k was a better ISA than x86 in almost every way imaginable. But it
did get a few overly complicated addressing modes - some of these were
dropped in later 68k devices. And the /implementation/ in the 68k
family was not as good - Motorola didn't have as many smart people or
as big a budget as Intel or even AMD. On the 68030, IIRC, someone
discovered that a software division routine worked faster than the
hardware division instruction.
 
The ColdFire was a clean slate re-implementation of a simplified and
modernised 68k ISA. It dropped the more advanced addressing modes and
some of the rarely used complex instructions, but added a few new useful
ones and streamlined the primary ones. It was marketed as a "variable
instruction length RISC architecture".
 
When you look back at the original 68000 compared to the 8086, it is
clear that technically the 68000 ISA was a modern and forward-looking
architecture while the 8086 was outdated and old-fashioned before the
first samples were made. The story of the IBM PC shows how technical
brilliance is not enough to succeed - the worst cpu architecture around,
combined with the worst OS ever hacked together, ended up dominant.
Bonita Montero <Bonita.Montero@gmail.com>: Oct 09 10:50AM +0200

Am 09.10.2021 um 10:23 schrieb David Brown:
 
>> MIPS was dominant in routers before it was replaced with ARM.
 
> MIPS was dominant in high-end routers and fast switches (with PowerPC
> being the main competitor). ...
 
I don't know about former high-end routers, but MIPS was by far
the most dominant architecture on SOHO-Routers before ARM. 68k
played almost no role then. E.g. in Germany almost everyone uses
the Fritz!Box routers, and they were all MIPS-based before AVM
switched to ARM.
Branimir Maksimovic <branimir.maksimovic@icloud.com>: Oct 09 09:02AM

> played almost no role then. E.g. in Germany almost everyone uses
> the Fritz!Box routers, and they were all MIPS-based before AVM
> switched to ARM.
 
Well, I have a router with 1GB of RAM and 4 cores :P
running as a server :P
 
--
 
7-77-777
Evil Sinner!
with software, you repeat same experiment, expecting different results...
Bart <bc@freeuk.com>: Oct 09 12:16PM +0100

On 09/10/2021 09:23, David Brown wrote:
> clear that technically the 68000 ISA was a modern and forward-looking
> architecture while the 8086 was outdated and old-fashioned before the
> first samples were made.
 
That was my initial impression when I first looked at 68000 in the 80s.
 
Until I had a closer look at the instruction set, with a view to
generating code for it from a compiler. Then it had almost as much lack
of orthogonality as the 8086.
 
The obvious one is having two lots of integer registers, 8 Data
registers and 8 Address registers, instead of just 16 general registers,
so that you are constantly thinking about which register set your
operands and intermediate results should go in.
 
(I never got round to using the chip; my company had moved on to doing
stuff for the IBM PC, instead of developing its own hardware products.)
 
> The story of the IBM PC shows how technical
> brilliance is not enough to succeed - the worst cpu architecture around,
> combined with the worst OS ever hacked together, ended up dominant.
 
I was looking forward to the Zilog Z80000, but unfortunately that never
happened.
David Brown <david.brown@hesbynett.no>: Oct 09 03:10PM +0200

On 09/10/2021 13:16, Bart wrote:
 
> Until I had a closer look at the instruction set, with a view to
> generating code for it from a compiler. Then it had almost as much lack
> of orthogonality as the 8086.
 
It is not as orthogonal as most RISC architectures, but a world ahead of
x86.
 
> registers and 8 Address registers, instead of just 16 general registers,
> so that you are constantly thinking about which register set your
> operands and intermediate results should go in.
 
Sure. But you have just two sorts of registers - A0..7 and D0..7. Some
common instructions can use either set, while ALU and address operations
tend to be limited to one set. On the 8086, every register - A, B, C,
D, SI, DI, BP, SP - has wildly different uses and instructions. (Later
x86 devices got more regular.)
 
scott@slp53.sl.home (Scott Lurndal): Oct 09 02:57PM

>played almost no role then. E.g. in Germany almost everyone uses
>the Fritz!Box routers and they all were MIPS-based before AVM
>switched to ARM.
 
There are still very large numbers of MIPS-based chips being produced (at
legacy nodes like 65, 45 and 22nm) today. Not new designs, mind,
but rather ten to fifteen year old designs.
 
A third-generation ARM processor for high-end routers is now sampling:
 
https://semiaccurate.com/2021/06/28/marvell-announces-their-5nm-octeon-10-dpu/
https://www.theregister.com/2021/10/06/marvell_ai_chip/
red floyd <no.spam.here@its.invalid>: Oct 09 10:26AM -0700

On 10/9/2021 4:16 AM, Bart wrote:
> I was looking forward to the Zilog Z80000, but unfortunately that never
> happened.
 
 
Same here. The Z8k had a great instruction set. I was really hoping
the Z80000 would succeed.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
