Wednesday, February 3, 2021

Digest for comp.lang.c++@googlegroups.com - 3 updates in 2 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 03 02:35PM -0800

On 1/31/2021 12:17 AM, Jorgen Grahn wrote:
>> touch google groups with a ten-foot pole.
 
> Also see <https://lwn.net/Articles/827233/>, which seems to be from
> when comp.lang.lisp got the same treatment back in July.
 
DAMN!
"Öö Tiib" <ootiib@hot.ee>: Feb 02 11:58PM -0800

On Tuesday, 2 February 2021 at 07:52:02 UTC+2, Pavel wrote:
> threads and shared ptrs are dirty hacks, indicate a lack of design and slow down my
> servers. Don't need'em stinky threads; not on Linux anyway. :-)
 
> Does anybody believe I am serious and do not intend to start any flames?
 
Maybe you want to, but multi-processing instead of multi-threading is rather
viable when you manage to keep inter-process communication low. The
interfaces between modules tend to be better thought through, undefined
behavior in some rarely exercised module has fewer ways to screw up the work
of other modules, it is easier to scale to multiple systems, different modules
are easier to write in different programming languages, and little modules
are easier to reason about separately, to black-box test and/or to profile.
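 
A minimal sketch of that split on POSIX, assuming the whole inter-process
protocol is just one request and one reply over a pair of pipes (the names
and the toy computation are made up for illustration):

#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int to_child[2], to_parent[2];
    if (pipe(to_child) != 0 || pipe(to_parent) != 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                        // worker process
        close(to_child[1]); close(to_parent[0]);
        int x = 0;
        read(to_child[0], &x, sizeof x);   // receive the request
        int y = x * x;                     // the "module" works in isolation
        write(to_parent[1], &y, sizeof y); // send the reply
        return 0;
    }
    close(to_child[0]); close(to_parent[1]);
    int req = 7, ans = 0;
    write(to_child[1], &req, sizeof req);  // cheap, infrequent IPC
    read(to_parent[0], &ans, sizeof ans);
    std::printf("worker computed %d\n", ans);
    waitpid(pid, nullptr, 0);
}

A crash or UB in the worker cannot corrupt the parent's memory; the parent
only sees a closed pipe or an abnormal status from waitpid.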
 
But the usual pressure is speed to market and whatever hacks it takes.
Marcel Mueller <news.5.maazl@spamgourmet.org>: Feb 03 09:00AM +0100

Am 02.02.21 um 21:58 schrieb Chris M. Thomasson:
>> likely to work on any real hardware.
 
> Yeah... It probably does have a UB issue. However, like you said, it's
> going to work on the vast majority of hardware.
 
I am not 100% sure about the UB. std::atomic<uintptr_t> might solve the
UB problem with stolen bits. I did not test this, because with a defined
set of target platforms there was no UB.
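 
A sketch of what the stolen-bit idea looks like with std::atomic<uintptr_t>,
assuming the pointee type is aligned so that bit 0 is always free; the
pointer <-> uintptr_t round trip and the masking are only meaningful on the
usual flat-address targets, which is the "defined set of target platforms"
caveat above:

#include <atomic>
#include <cassert>
#include <cstdint>

struct Node { int value; };              // alignof(Node) >= 2, so bit 0 is free

std::atomic<std::uintptr_t> slot{0};

// Pack a pointer and a one-bit flag into a single atomic word.
void store_tagged(Node* p, bool flag) {
    auto bits = reinterpret_cast<std::uintptr_t>(p);
    assert((bits & 1u) == 0);            // alignment provides the spare bit
    slot.store(bits | (flag ? 1u : 0u), std::memory_order_release);
}

// Unpack them again; the mask assumes the representation round-trips.
Node* load_tagged(bool& flag) {
    std::uintptr_t bits = slot.load(std::memory_order_acquire);
    flag = (bits & 1u) != 0;
    return reinterpret_cast<Node*>(bits & ~std::uintptr_t(1));
}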
 
 
> back on comp.programming.threads where somebody was trying to convince
> me that strong thread safety is hardly ever needed. Iirc, my response
> was that it's not needed, until it is needed. ;^)
 
There are really /many/ use cases for strong thread safety. It simplifies
programming significantly: fewer race conditions, no deadlocks, no
priority inversion. Basically it is required for any pointer-like object
to fit into a std::atomic<T>.
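 
One concrete form of that, as a sketch: since C++20 there is
std::atomic<std::shared_ptr<T>>, i.e. a strongly thread-safe, pointer-like
handle (library implementations may still use an internal lock for it):

#include <atomic>
#include <memory>
#include <string>

// Any thread may load, store or exchange this handle concurrently
// without external locking.
std::atomic<std::shared_ptr<const std::string>> config{
    std::make_shared<const std::string>("initial")};

void writer() {
    config.store(std::make_shared<const std::string>("updated"));
}

std::string reader() {
    auto snap = config.load();   // never a torn read
    return *snap;                // the snapshot stays alive while we hold it
}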
 
In conjunction with immutable objects it enables large-scale applications
without locks, e.g. in-memory databases with deduplication. They have
amazing performance. The most interesting part is the scaling, which is
less than linear, because the more data you load into memory, the more
effectively deduplication compresses it.
Without strong safety any read access needs to be protected by a lock,
which is almost impossible without introducing a bottleneck.
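 
A toy sketch of that pattern, assuming a single writer and C++20's
std::atomic<std::shared_ptr<T>> (the container and names are made up, not
the application described below): readers take an immutable snapshot
without any lock, the writer publishes a new snapshot by copy-and-swap.

#include <atomic>
#include <map>
#include <memory>
#include <string>

using Snapshot = std::map<std::string, std::string>;   // treated as immutable

std::atomic<std::shared_ptr<const Snapshot>> db{
    std::make_shared<const Snapshot>()};

// Readers: lock-free from the application's point of view.
std::string lookup(const std::string& key) {
    auto snap = db.load();
    auto it = snap->find(key);
    return it != snap->end() ? it->second : std::string{};
}

// Single writer: copy-on-write, then publish the new snapshot atomically.
// Concurrent writers would need compare_exchange or a writer-side mutex.
void update(const std::string& key, const std::string& value) {
    auto newSnap = std::make_shared<Snapshot>(*db.load());  // copy old state
    (*newSnap)[key] = value;
    db.store(std::move(newSnap));      // readers switch to the new snapshot
}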
 
BTDT. I have written such an application in the past. It has been running
for about ten years now. A 20 GB database typically fits into less than
2 GB of memory. This results in amazingly high RAM cache efficiency.
Deduplication not only saves memory; it effectively also increases the
memory throughput. The typical CPU load of the server running this
application is 5% on a dual-core VM with several dozen concurrent users.
(I/O is negligible with an in-memory DB anyway.)
The efficiency was so high that we decided to keep the full change
history of ten years. It takes only a little in additional resources.
 
 
 
> https://groups.google.com/g/comp.lang.c++/c/KwreljoeJ0k/m/KdQVJocuCQAJ
 
> (oh shit! a google group link... The google might ban comp.lang.c++
> next... shit)
 
:-)
 
More likely they will simply discard Google Groups in the future. There is
no profit in this service.
 
 
Marcel
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
