Sunday, October 13, 2019

Digest for comp.lang.c++@googlegroups.com - 10 updates in 3 topics

woodbrian77@gmail.com: Oct 13 03:32PM -0700

On Tuesday, October 8, 2019 at 7:31:48 AM UTC-5, Alf P. Steinbach wrote:
 
> Not as far as I know, but if you want to experiment then Boost has a
> library only thing that provides close to the same functionality.
 
> <url: https://boostorg.github.io/outcome/experimental.html>
 
I watched Phil Nash's "The Dawn of a New Error" talk at CppCon
2019. It was helpful and I recommend it. He mentions how
Swift has had static exceptions for years. He also mentions
Outcome.
 
Anyone want to trade Coroutines and Concepts for static exceptions?

 
Brian
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 12 09:40PM -0700

On 10/12/2019 2:14 PM, Manfred wrote:
 
> Bottom line is that the properties random_device are implementation
> dependent, so the authoritative source is the implementation, rather
> than the standard.
 
Agreed. I think a per-thread PRNG seeded from a std::random_device is
fine. The thread would take its seed from a single instance of
std::random_device at startup, before passing control to the user. It
can even mix its thread id into the seed as an experiment. After that,
a thread can use its own per-thread context to generate a pseudo-random
number without any synchronization.
 
basic pseudo-code, abstract on certain things:


struct rnd_dev
{
    a_true_random_ng rdev; // assuming a non-thread-safe TRNG
    std::mutex mtx;

    seed_type generate()
    {
        std::lock_guard<std::mutex> lock(mtx); // RAII: unlocks on every path
        return rdev.generate();
    }
};
 
 
static rnd_dev g_rnd_dev;
 
 
struct per_thread
{
    prng rseq; // thread-local PRNG

    void startup()
    {
        // seed with the main TRNG
        rseq.seed(g_rnd_dev.generate());
    }

    // generate a thread-local pseudo-random number
    rnd_type generate()
    {
        return rseq.next();
    }
};
 
 
Just for fun: each thread can actually use a different PRNG algorithm. I
remember doing an experiment where there was a global static table of
function pointers to different PRNGs and a thread could choose between them.
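A sketch of that kind of table (a hypothetical reconstruction, not the original code; the fixed seeds are for illustration only, a real version would seed from the shared device):

```cpp
#include <cstdint>
#include <random>

using gen_fn = std::uint32_t (*)();

// Each entry pulls one number from a different standard engine;
// thread_local gives every thread its own copy of the engine state.
static std::uint32_t from_mt19937() { static thread_local std::mt19937       g(1); return g(); }
static std::uint32_t from_minstd()  { static thread_local std::minstd_rand   g(1); return g(); }
static std::uint32_t from_ranlux()  { static thread_local std::ranlux24_base g(1); return g(); }

static gen_fn g_table[] = { from_mt19937, from_minstd, from_ranlux };

// A thread picks its algorithm, e.g. by its index:
std::uint32_t generate_for(unsigned thread_idx)
{
    return g_table[thread_idx % 3]();
}
```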
 
 
Btw, can this be an implementation of a random device wrt its output:
 
https://groups.google.com/d/topic/comp.lang.c++/7u_rLgQe86k/discussion
 
;^)
 
 
 
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 12 12:58PM -0700

On 10/12/2019 12:40 AM, Bonita Montero wrote:
 
> Is there any guarantee the standard makes that random_device shouldn't
> start at the same internal state for all new threads, i.e. the output
> of the following code is likely to be different on both lines?
 
I don't think so; however, the term "non-deterministic" is used:
 
https://en.cppreference.com/w/cpp/numeric/random/random_device
 
A random device per thread should be okay. Although, one can create a
PRNG per thread and use a single random device to obtain the individual
per-thread seeds.
 
 
 
Manfred <noname@invalid.add>: Oct 13 06:26PM +0200

On 10/13/19 6:40 AM, Chris M. Thomasson wrote:
 
> Btw, can this be an implementation of a random device wrt its output:
 
> https://groups.google.com/d/topic/comp.lang.c++/7u_rLgQe86k/discussion
 
> ;^)
 
Funny, however I wonder how reliable it can be - there is no guarantee
of an actual race condition.
 
Anyway, for a few years now Intel has been shipping CPUs with rdrand,
and GCC's random_device does use it, as far as I can see. Still, I get
entropy() = 0.
Bonita Montero <Bonita.Montero@gmail.com>: Oct 13 06:28PM +0200

> Anyway, for a few years now Intel has been shipping CPUs with rdrand,
> and GCC's random_device does use it, ...
 
Really? When I constantly read from random_device I get 100% load
on the core, but almost only kernel time. I.e. random_device must
be fed largely by the kernel.
Manfred <noname@invalid.add>: Oct 13 06:36PM +0200

On 10/13/19 6:28 PM, Bonita Montero wrote:
 
> Really? When I constantly read from random_device I get 100% load
> on the core, but almost only kernel time. I.e. random_device must
> be fed largely by the kernel.
 
In my case disassembly shows that it does - of course it can be that on
other systems it doesn't.
Bonita Montero <Bonita.Montero@gmail.com>: Oct 13 06:39PM +0200


> Really? When I constantly read from random_device I get 100% load
> on the core, but almost only kernel time. I.e. random_device must
> be fed largely by the kernel.
 
When I compile and run this ...
 
#include <random>
int main()
{
    std::random_device rd;
    for( unsigned u = 10'000'000; u; --u )
        rd();
}
 
... this is the result when being compiled with g++ and run with
"Windows Subsystem for Linux 2.0" ...
 
real 0m25.452s
user 0m0.563s
sys 0m24.875s
 
And this is the result when being compiled with MSVC and run under
Windows 10 ...
 
real 531.25ms
user 531.25ms
sys 0.00ms
 
Maybe random_device doesn't make a kernel call for each ()-call, but the
kernel calls might be a thousand times slower than when the app is
fed from userland state.
 
I think it's a big mistake to build random_device on top of /dev/random
or /dev/urandom. Standard-library random-number generators simply don't
need to give high-quality randomness.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Oct 13 01:51PM -0700

On 10/13/2019 9:26 AM, Manfred wrote:
 
>> ;^)
 
> Funny, however I wonder how reliable it can be - there is no guarantee
> of an actual race condition.
 
Yes there is. I am implementing an atomic fetch-and-add operation in a
very standard yet racy way; well, let's call it the full-blown wrong way
wrt the fetch-and-add...
_______________________________
// assumes: std::atomic<unsigned int> g_racer;
void racer(unsigned int n)
{
    for (unsigned int i = 0; i < n; ++i)
    {
        // Race-infested fetch-and-add op
        unsigned int r = g_racer.load(std::memory_order_relaxed);
        r = r + 1;
        g_racer.store(r, std::memory_order_relaxed);
    }
}
_______________________________
 
This can be influenced by simple moves of the mouse. It's funny to me.
There is no race condition wrt the standard; however, wrt the rules of
fetch-and-add, it's totally foobar!
 
 
Joseph Hesse <joeh@gmail.com>: Oct 13 09:33AM -0500

On 10/3/19 9:03 AM, Joseph Hesse wrote:
> that it has to stop when it sees a 0?
 
> Thank you,
> Joe
 
This is similar to my last post.
How can one use a range-based for loop over a built-in array type
inside a function? The following program will not compile to an object
file.
 
#include <iostream>
using namespace std;

void func(int a[])
{
    for(const auto &i : a)
        cout << i << endl;
}
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 13 04:11PM +0100

On Sun, 13 Oct 2019 09:33:56 -0500
> for(const auto &i : a)
> cout << i << endl;
> }
 
You have to prevent pointer decay so as to enable the array to retain its
size. One way to do that is to take the array as a reference so that
the size remains part of its type:
 
void func(int (&a)[5]) {
    for(const auto &i : a)
        cout << i << endl;
}
 
Another way is to use std::array, which has its size as its second
template parameter.
