Wednesday, May 2, 2018

Digest for comp.programming.threads@googlegroups.com - 2 updates in 2 topics

Sky89 <Sky89@sky68.com>: May 01 07:52PM -0400

Hello..
 
 
Be careful with lock-free algorithms
 
Note that they can be much slower than lock-based algorithms under
contention; read the following excerpt from ACM Queue to see why:
 
==
 
In practice, however, lock-free algorithms may not live up to these
performance expectations. Consider, for example, Michael and Scott's
lock-free queue algorithm [11]. This algorithm implements a queue using a
linked list, with items enqueued to the tail and removed from the head
using CAS loops. (The exact details are not as important as the basic
idea, which is similar in spirit to the example in figure 4.) Despite
this, as figure 6a shows, the lock-free algorithm fails to scale beyond
four threads and eventually performs worse than the two-lock queue
algorithm.
 
The reason for this poor performance is CAS failure: as the amount of
concurrency increases, so does the chance that a conflicting CAS gets
interleaved in the middle of a core's read-compute-update CAS region,
causing its CAS to fail. CAS operations that fail in this way pile
useless work on the critical path. Although these failing CASes do not
modify memory, executing them still requires obtaining exclusive access
to the variable's cache line. This delays the time at which later
operations obtain the cache line and complete successfully (see figure
5b, in which only two operations complete in the same time that three
operations completed in figure 5a).
 
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=2991130
 
==
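
To make the CAS-failure mechanism concrete, here is a minimal C++11
sketch of the same read-compute-update CAS pattern the excerpt
describes, using a Treiber-style lock-free stack push rather than the
article's queue (the names push, cas_failures, and the thread/iteration
counts are illustrative, not from the article):
 
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>
 
    struct Node {
        int value;
        Node* next;
    };
 
    std::atomic<Node*> top{nullptr};
    std::atomic<long> cas_failures{0};
 
    void push(int value) {
        Node* n = new Node{value, nullptr};
        for (;;) {
            Node* old_top = top.load(std::memory_order_relaxed); // read
            n->next = old_top;                                   // compute
            if (top.compare_exchange_weak(old_top, n,
                                          std::memory_order_release,
                                          std::memory_order_relaxed))
                break;                                           // update won
            // A conflicting CAS from another thread (or a spurious
            // failure) got in first: the failed CAS still had to own
            // the cache line exclusively, and now the loop retries.
            cas_failures.fetch_add(1, std::memory_order_relaxed);
        }
    }
 
    int main() {
        constexpr int kThreads = 8;
        constexpr int kPushesPerThread = 100000;
        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t)
            workers.emplace_back([] {
                for (int i = 0; i < kPushesPerThread; ++i)
                    push(i);
            });
        for (auto& w : workers)
            w.join();
        std::printf("failed CAS attempts: %ld\n", cas_failures.load());
        // Nodes are deliberately leaked: safe memory reclamation
        // (hazard pointers, epochs) is a separate hard problem in
        // lock-free code and is out of scope for this sketch.
        return 0;
    }
 
Every failed CAS counted above is wasted work on the shared cache
line, which is the effect the article measures. For comparison, here
is a sketch of the two-lock queue the excerpt says eventually wins,
following Michael and Scott's lock-based variant: one mutex per end,
with a dummy node so enqueuers and dequeuers never touch the same
pointer (again a sketch, assuming C++11 and a default-constructible T):
 
    #include <mutex>
    #include <utility>
 
    template <typename T>
    class TwoLockQueue {
        struct QNode { T value; QNode* next = nullptr; };
        QNode* head;                 // always points at the dummy node
        QNode* tail;
        std::mutex head_lock, tail_lock;
    public:
        TwoLockQueue() { head = tail = new QNode{}; }
 
        void enqueue(T v) {
            QNode* n = new QNode{std::move(v)};
            std::lock_guard<std::mutex> g(tail_lock);
            tail->next = n;
            tail = n;
        }
 
        bool dequeue(T& out) {
            std::lock_guard<std::mutex> g(head_lock);
            QNode* next = head->next;
            if (next == nullptr) return false;  // queue empty
            out = std::move(next->value);
            delete head;                        // retire old dummy
            head = next;                        // successor becomes dummy
            return true;
        }
    };
 
Each operation here takes one uncontended-or-short lock and completes;
it never loops, which is why it scales past the point where the CAS
retries of the lock-free queue start to dominate.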
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: May 01 02:55PM -0400

Hello,
 
Read this:
 
 
Neural networks everywhere
 
New chip reduces neural networks' power consumption by up to 95 percent,
making them practical for battery-powered devices.
 
MIT researchers have developed a special-purpose chip that increases the
speed of neural-network computations by three to seven times over its
predecessors, while reducing power consumption 93 to 96 percent. That
could make it practical to run neural networks locally on smartphones or
even to embed them in household appliances.
 
 
Read more here:
 
https://www.sciencedaily.com/releases/2018/02/180214145854.htm
 
 
Thank you,
Amine Moulay Ramdane.
