Monday, February 26, 2018

Digest for comp.programming.threads@googlegroups.com - 9 updates in 7 topics

dave@boostpro.com: Feb 25 11:09AM -0800

Hi,
 
I want to dynamically allocate memory, then once, on demand, thread-safely initialize part of that memory, and potentially free the memory. Then later I want to repeat the process with what is potentially the same memory block (and thus the same address for the pthread_once_t) that has been returned from the allocator. Does this:
 
a. "just work" as long as it is set up with PTHREAD_ONCE_INIT, or
b. do I need to insert some kind of fence to ensure that all threads see the re-initialized memory, or
c. never work; don't do that!
d. something else I haven't thought of?
 
I ask because all the examples I can find store the pthread_once_t in static storage and never re-initialize it.
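 
For concreteness, here is roughly the pattern I have in mind (the struct and function names are made up purely for illustration):
 
#include <pthread.h>
#include <stdlib.h>

/* Illustrative only -- names are made up to show the shape of the question. */
typedef struct {
    pthread_once_t once;     /* lives in dynamic storage                */
    int            payload;  /* the part to be lazily initialized       */
} block_t;

static const pthread_once_t fresh_once = PTHREAD_ONCE_INIT;
static block_t *current;     /* set before the block is handed to threads;
                                assume only one live block at a time     */

static void init_payload(void)
{
    current->payload = 42;   /* "expensive" one-time initialization */
}

block_t *make_block(void)    /* may return memory a previous block used */
{
    block_t *b = malloc(sizeof *b);
    b->once = fresh_once;    /* re-arm the once control in that memory  */
    current = b;
    return b;
}

void use_block(block_t *b)   /* called concurrently from many threads   */
{
    pthread_once(&b->once, init_payload);
    /* ... read b->payload ... */
}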
 
Thanks in advance,
Dave
Kaz Kylheku <217-679-0842@kylheku.com>: Feb 26 02:29AM

> b. do I need to insert some kind of fence to ensure that all threads see the re-initialized memory, or
> c. never work; don't do that!
> d. something else I haven't thought of?
 
It must work.
 
Firstly, POSIX says that PTHREAD_ONCE_INIT is a constant.
Thus it may be used in assignment. This is important because, by
contrast, the PTHREAD_MUTEX_INITIALIZER is described as a macro
that can be used for initialization, and not a constant.
 
Secondly, there is a storage restriction on the pthread_once_t
control variable: namely, it may not be defined in automatic storage
(i.e. as a non-static local variable). There is no restriction against
pthread_once_t being in dynamic storage.
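 
Concretely, something like this should be permitted (just a sketch, not tested):
 
#include <pthread.h>
#include <stdlib.h>

/* Sketch of the dynamic case.  Since POSIX calls PTHREAD_ONCE_INIT a
   constant, assigning it to re-arm a heap-allocated control variable
   should be allowed; on an implementation where the macro expands to a
   braced initializer, copy from a static const object instead. */
pthread_once_t *alloc_once(void)
{
    pthread_once_t *oncep = malloc(sizeof *oncep);

    if (oncep != NULL)
        *oncep = PTHREAD_ONCE_INIT;
    return oncep;
}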
 
> I ask because all the examples I can find store the pthread_once_t in
> static storage and never re-initialize it.
 
However, if that is in a shared library, it's dynamic anyway. The
dynamic case is effectively exercised any time a shared library is
dlopen'ed that has a file scope pthread_once_t.
dave@boostpro.com: Feb 25 10:10PM -0800

On Sunday, February 25, 2018 at 6:29:49 PM UTC-8, Kaz Kylheku wrote:
> > c. never work; don't do that!
> > d. something else I haven't thought of?
 
> It must work.
 
Well, I realize that's what the docs seem to imply, but…
 
> Thus it may be used in assignment. This is important because, by
> contrast, the PTHREAD_MUTEX_INITIALIZER is described as a macro
> that can be used for initialization, and not a constant.
 
Sure, but when I assign it, I am doing nothing to ensure that the assignment is visible to all threads before they call pthread_once on it. Therefore it seemed plausible that another core's cache could still contain some other value for that memory location, and a thread running on that core and calling pthread_once would see that stale value instead of PTHREAD_ONCE_INIT, skip calling the initialization function, and cause a problem…
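 
In other words, it seems like I'd have to publish the freshly re-armed block through some synchronizing operation anyway. Something along these lines is what I had in mind (just a sketch using C11 atomics, not anything the docs prescribe; the names are made up):
 
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct {
    pthread_once_t once;
    int            payload;
} block_t;

static const pthread_once_t fresh_once = PTHREAD_ONCE_INIT;
static _Atomic(block_t *) shared;     /* how other threads find the current block */

void publish_new_block(void)
{
    block_t *b = malloc(sizeof *b);   /* possibly the same address as before */
    if (b == NULL)
        return;
    b->once = fresh_once;             /* re-arm the once control             */
    /* release store: the re-initialization above becomes visible to any
       thread that obtains the pointer via the acquire load below          */
    atomic_store_explicit(&shared, b, memory_order_release);
}

block_t *get_block(void)
{
    return atomic_load_explicit(&shared, memory_order_acquire);
}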
 
 
> However, if that is in a shared library, it's dynamic anyway. The
> dynamic case is effectively exercised any time a shared library is
> dlopen'ed that has a file scope pthread_once_t.
 
Hm, you make a good point. But you know how tricky races are; I wouldn't count on all that exercise having flushed out a bug if this issue hadn't been carefully considered.
computer45 <computer45@cyber.com>: Feb 25 06:36PM -0500

Hello..
 
Read this..
 
 
Here is a very interesting video:
 
MIT AGI: Future of Intelligence (Ray Kurzweil)
 
https://www.youtube.com/watch?v=9Z06rY3uvGY
 
 
Thank you,
Amine Moulay Ramdane.
computer45 <computer45@cyber.com>: Feb 25 04:42PM -0500

Hello..
 
Read this:
 
 
The UK Prime Minister is criticizing the far right and the far left for
their inability to recognize the benefits of economic globalization,
etc. Please listen to the UK Prime Minister in the following video
(because she is really smart) to understand more:
 
Theresa May Speaks Artificial Intelligence at World Economic Summit in
Davos, Switzerland
 
https://www.youtube.com/watch?v=tNhsm0oDkSw
 
 
Thank you,
Amine Moulay Ramdane.
computer45 <computer45@cyber.com>: Feb 25 03:56PM -0500

Hello...
 
Read this:
 
 
2018 Isaac Asimov Memorial Debate: Artificial Intelligence
 
Read more here:
 
https://www.youtube.com/watch?v=gb4SshJ5WOY
 
 
Thank you,
Amine Moulay Ramdane.
computer45 <computer45@cyber.com>: Feb 25 03:20PM -0500

Hello..
 
Read this:
 
AI Developers Are Getting Desks Next to Their Bosses
 
Read more here:
 
https://www.developer.com/daily_news/ai-developers-are-getting-desks-next-to-their-bosses.html
 
 
Thank you,
Amine Moulay Ramdane.
computer45 <computer45@cyber.com>: Feb 25 03:08PM -0500

Hello...
 
Read this:
 
 
More precision about my efficient Threadpools that scale very well:
my Threadpools are much more scalable than Microsoft's. On the worker
side I am using scalable counting networks to distribute work over the
many queues or stacks, so it is scalable on the worker side. On the
consumer side I am also using lock striping to be able to scale very
well, so it is scalable on those parts too. For the other part, which
is work stealing, I am also using scalable counting networks, so
globally they scale very well. And since work stealing is "rare",
I think that my efficient Threadpools that scale very well are really
powerful; they are much more optimized, the scalable counting networks
eliminate false sharing, and they work with Windows and Linux.
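 
Here is a rough generic sketch of what lock striping means (an illustration in C only, not taken from the actual Threadpool code):
 
#include <pthread.h>
#include <stddef.h>

/* Lock striping: one mutex per stripe, chosen by hashing the key, so
   threads that touch different stripes do not contend with each other. */
#define NSTRIPES 16

typedef struct {
    pthread_mutex_t locks[NSTRIPES];
} striped_lock_t;

void striped_init(striped_lock_t *s)
{
    for (size_t i = 0; i < NSTRIPES; i++)
        pthread_mutex_init(&s->locks[i], NULL);
}

static size_t stripe_of(size_t key)
{
    return key % NSTRIPES;
}

void striped_lock(striped_lock_t *s, size_t key)
{
    pthread_mutex_lock(&s->locks[stripe_of(key)]);
}

void striped_unlock(striped_lock_t *s, size_t key)
{
    pthread_mutex_unlock(&s->locks[stripe_of(key)]);
}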
 
And I have updated the HTML tutorials inside the zip files; please read
them.
 
You can download them from:
 
https://sites.google.com/site/aminer68/an-efficient-threadpool-engine-that-scales-very-well
 
and from:
 
https://sites.google.com/site/aminer68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
 
 
Thank you,
Amine Moulay Ramdane.
computer45 <computer45@cyber.com>: Feb 25 03:03PM -0500

Hello....
 
 
Regression Testing Strategies: an Overview
 
Read more here:
 
https://www.infoq.com/articles/regression-testing-strategies
 
 
 
Thank you,
Amine Moulay Ramdane.
