Sunday, January 8, 2017

Digest for comp.programming.threads@googlegroups.com - 11 updates in 9 topics

Ramine <toto@toto.net>: Jan 07 06:41PM -0800

Hello,
 
 
About SemaMonitor 2.02...
 
When you use semaphores and you release (signal) after the maximum count has
already been attained, the signal (the release) is lost, and this is no good.
This is why I have implemented my new algorithm in my SemaMonitor version
2.02: if you want the signal(s) not to be lost, you can configure my
SemaMonitor by passing a parameter to the constructor, and it works great.
This has helped me implement my Concurrent FIFO Queue 1 and my Concurrent
FIFO Queue 2 without using condition variables.
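To illustrate, here is a small hedged sketch in C++, based on the SemaMonitor
C++ interface described further down in this digest; the header name and the
thread code are my assumptions for illustration only, not part of the library.

--
// Sketch only: assumes a header named "SemaMonitor.h" exposing the C++
// SemaMonitor interface documented later in this digest.
#include "SemaMonitor.h"
#include <thread>
#include <cstdio>

int main()
{
    // First constructor parameter set to true: the signal is kept like a
    // semaphore release, so it is NOT lost even if no thread is waiting yet.
    SemaMonitor sema(true);

    std::thread producer([&sema]{
        sema.signal();                  // may run before the consumer waits
    });

    std::thread consumer([&sema]{
        sema.wait();                    // still sees the earlier signal
        std::printf("consumer woke up\n");
    });

    producer.join();
    consumer.join();
    return 0;
}
--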
 
 
You can download the concurrent FIFO queues from:
 
 
https://sites.google.com/site/aminer68/concurrent-fifo-queue-1
 
and from:
 
https://sites.google.com/site/aminer68/concurrent-fifo-queue-2
 
 
And you can download my Lightweight SemaMonitor and my SemaCondvar
version 2.02 from:
 
Lightweight SemaCondvar & SemaMonitor version 2.02
 
https://sites.google.com/site/aminer68/light-weight-semacondvar-semamonitor
 
And:
 
SemaCondvar & SemaMonitor version 2.02
 
https://sites.google.com/site/aminer68/semacondvar-semamonitor
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 07:20PM -0800

On 1/7/2017 6:41 PM, Ramine wrote:
> Hello,
 
> About SemaMonitor 2.02...
 
> When you use Semaphores , and when you release and you have attainef the
 
I mean: attained, not attainef
 
Ramine <toto@toto.net>: Jan 07 05:26PM -0800

Hello....
 
 
Yet more about all my projects...
 
The four major components of efficiency are:
 
1- User efficiency:
 
The amount of time and effort users will spend to learn how to use
the program, how to prepare the data, and how to interpret and use the
output.
 
2- Maintenance Efficiency:
 
The amount of time and effort maintenance programmers will spend
reading a program and its accompanying technical documentation
in order to understand it well enough to make any necessary
modifications.
 
3- Algorithmic complexity:
 
The inherent efficiency of the method itself, regardless of which
machine we run it on or how we code it.
 
4- Coding efficiency:
 
This is the traditional efficiency measure. Here we are concerned
with how much processor time and memory space a computer program
requires to produce a correct answer.
 
Twenty years ago, the most expensive aspect of programming was computer
cost; consequently, we tended to "optimize for the machine." Today,
the most expensive aspect of programming is programmer cost,
because today programmers cost more money than hardware.
 
Computer programs should be written with these goals in mind:
 
1- To be correct and reliable
 
2- To be easy to use for its intended end-user population
 
3- To be easy to understand and easy to change.
 
Here are, among other things, the key aspects of end-user efficiency:
 
1- Program robustness
2- Program generality
3- Portability
4- Input/Output behavior
5- User documentation.
 
Here are the key points in achieving maintenance efficiency:
 
1- A clear, readable programming style
2- Adherence to structured programming
3- A well-designed, functionally modular solution
4- A thoroughly tested and verified program with built-in debugging
and testing aids
5- Good technical documentation.
 
You have to know that I have used a Top-Down methodology to design my
projects. The Top-Down methodology begins with the overall goals of the
program - what we wish to achieve instead of how - and after that it
moves on to more detail and how to implement it.
 
And I have taken care that my objects and modules have the following
characteristics:
 
- Logical coherence
 
- Independence:
 
It is like making more pure functions, as in functional programming, to avoid
side effects and to ease the maintenance and testing steps.
 
- Object oriented design and coding
 
- And also structured design and coding with sequence, iteration and
conditionals.
 
And about the testing phase read the following:
 
Alexandre Machado wrote:
 
>- You don't have both, unit and performance tests
>Have you ever considered this? I'm sure that it would make
>make it easier for other Delphi devs to start using it, no?
 
You have to know that I have also used the following method of testing,
called black box testing:
 
https://en.wikipedia.org/wiki/Black-box_testing
 
This is why I have written this:
 
I have thoroughly tested and further stabilized my parallel archiver over
many years, and now I think that it is more stable and efficient, so I
think that you can be more confident with it.
 
This is also true for all my other projects; I have followed black box
testing with them as well...
 
As for race conditions, I think that for an experienced programmer in
parallel programming like me, avoiding race conditions is not such a
difficult task.
 
For sequential consistency I have also written this:
 
I have implemented my inventions with the FreePascal and Delphi compilers,
which don't reorder loads and stores even with compiler optimization, and
this is less error prone than C++, which follows a relaxed memory model
when compiled with optimization. So I have finally compiled my algorithm
implementations with FreePascal into Dynamic Link Libraries that are used
by C++ in the form of my C++ Object Synchronization Library.
 
So it is much easier to achieve correct sequential consistency with
Delphi and FreePascal because they are less error prone.
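To illustrate the reordering concern above, here is a small C++ sketch that
is not taken from my libraries; it only shows the kind of store reordering an
optimizing C++ compiler is allowed to do on plain variables, and how C++11
atomics can forbid it.

--
// Illustration only (not code from my libraries): why compiler reordering
// matters for correctness in C++ under its relaxed memory model.
#include <atomic>

int data = 0;
bool ready_plain = false;              // plain flag: no ordering guarantee
std::atomic<bool> ready_atomic{false}; // atomic flag: ordered publication

void publish_plain() {
    data = 42;
    ready_plain = true;   // an optimizing C++ compiler may reorder these two
                          // plain stores, so a reader could observe the flag
                          // set while 'data' is still 0.
}

void publish_safe() {
    data = 42;
    // The release store forbids moving the store to 'data' below it.
    ready_atomic.store(true, std::memory_order_release);
}

int reader() {
    // A reader that sees the flag with an acquire load also sees data == 42.
    if (ready_atomic.load(std::memory_order_acquire))
        return data;
    return -1;
}
--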
 
Other than that, you have to know that I am also an experienced programmer
in parallel programming, so I think that my projects are more stable and
fast.
 
You can download all my projects from:
 
https://sites.google.com/site/aminer68/
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 04:18PM -0800

Hello,
 
Parallel Compression Library was updated to version 3.44.
 
If you want to compile it for Delphi XE versions, just uncomment the
define option called XE inside defines.inc, which is inside the zip.
 
 
You can download it from:
 
https://sites.google.com/site/aminer68/parallel-compression-library
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 03:18PM -0800

Hello,
 
 
Parallel archiver was updated to version 3.9; I have further optimized some
parts of the code.
 
 
You can download the new Parallel archiver version 3.9 from:
 
https://sites.google.com/site/aminer68/parallel-archiver
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 12:23PM -0800

Hello,
 
I worked all day yesterday modifying my SemaCondvar and SemaMonitor
algorithms a little, and now I think they are working properly. I have
ensured that they are more stable now by testing them thoroughly again and
again, and now they are powerful, so I think you can be more confident with
them.
 
You can download my Lightweight SemaMonitor and my SemaCondvar version
2.02 from:
 
Lightweight SemaCondvar & SemaMonitor version 2.02
 
https://sites.google.com/site/aminer68/light-weight-semacondvar-semamonitor
 
And:
 
SemaCondvar & SemaMonitor version 2.02
 
https://sites.google.com/site/aminer68/semacondvar-semamonitor
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 11:59AM -0800

Hello...
 
 
My C++ synchronization objects library was just updated; I now think that my
DRWLock and my SemaMonitor are working properly, since I have just tested
them thoroughly.
 
You can download it from:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
Author: Amine Moulay Ramdane
 
Email: aminer@videotron.ca
 
Description:
 
This library contains 7 synchronization objects:
 
1- My scalable SeqlockX, a variant of Seqlock that eliminates the weakness of
Seqlock, namely the "livelock" of the readers when there are many writers.
 
2- My scalable MLock, which is a scalable lock.
 
3- My SemaMonitor, which combines all the characteristics of a semaphore, an
eventcount, a windows Manual-reset event and a windows Auto-reset event.
 
4- My scalable DRWLock, a scalable reader-writer lock that is
starvation-free and does spin-wait.
 
5- My scalable DRWLockX, a scalable reader-writer lock that is
starvation-free and doesn't spin-wait, but waits on the Event objects and my
SemaMonitor, so it is energy efficient.
 
6- My LW_Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer Mutex
that uses a technique that looks like Seqlock without looping on the reader
side like Seqlock, which has permitted the reader side to be costless; it is
FIFO fair on the writer side and FIFO fair on the reader side, it is of
course starvation-free, and it does spin-wait.
 
7- My Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer Mutex
that uses a technique that looks like Seqlock without looping on the reader
side like Seqlock, which has permitted the reader side to be costless; it is
FIFO fair on the writer side and FIFO fair on the reader side, it is of
course starvation-free, and it does not spin-wait, but waits on my
SemaMonitor, so it is energy efficient.
 
My scalable Asymmetric Reader-Writer Mutex calls the windows
FlushProcessWriteBuffers() just one time for each writer.
 
I have implemented my inventions with the FreePascal and Delphi compilers,
which don't reorder loads and stores even with compiler optimization, and
this is less error prone than C++, which follows a relaxed memory model
when compiled with optimization. So I have finally compiled my algorithm
implementations with FreePascal into Dynamic Link Libraries that are used
by C++ in the form of my C++ Object Synchronization Library.
 
If you take a look at the zip file, you will notice that it contains the
DLLs' Object Pascal source code. To compile those dynamic link libraries'
source code, you will have to download my SemaMonitor Object Pascal source
code and my SeqlockX Object Pascal source code and my scalable MLock Object
Pascal source code and my scalable DRWLock Object Pascal source code
from here:
 
https://sites.google.com/site/aminer68/
 
I have compiled and included the 32 bit and 64 bit windows Dynamic Link
Libraries inside the zip file; if you want to compile the dynamic link
libraries for Unix, Linux and OSX on (x86), please download the source code
of my SemaMonitor and my scalable SeqlockX and my scalable MLock and my
scalable DRWLock and compile them yourself.
 
SemaMonitor is a new and portable synchronization object. SemaMonitor
combines some of the characteristics of a semaphore and all the
characteristics of an eventcount, of a windows Manual-reset event and of a
windows Auto-reset event, and if you want the signal(s) not to be lost, you
can configure it by passing a parameter to the constructor. It only uses an
event object and a very fast, very efficient and portable lock, so it is
fast, it is FIFO fair, and it is portable to Windows, Linux and OSX on the
x86 architecture. Here is its C++ interface:
 
class SemaMonitor{
public:
 
  SemaMonitor(bool state, long3 InitialCount1=0, long3 MaximumCount1=MaxInt1);
  ~SemaMonitor();
 
  bool checkRange(long1 x);
  void wait(signed long mstime=INFINITE);
  void signal();
  void signal_all();
  bool signal(long1 nbr);
  void setSignal();
  void resetSignal();
  long2 WaitersBlocked();
};
 
When you set the first parameter of the constructor to true, the signal
will not be lost if the threads are not waiting on the SemaMonitor object;
but when you set the first parameter of the constructor to false, the
signal will be lost if the threads are not waiting on the SemaCondvar or
SemaMonitor.
 
The parameters InitialCount1 and MaximumCount1 are the semaphore's
InitialCount and MaximumCount.
 
The wait() method is for the threads to wait on the SemaMonitor or
SemaCondvar object for the signal to be signaled. If wait() fails, it can be
because the number of waiters is greater than the maximum value of an
unsigned long.
 
The signal() method will signal one waiting thread on the SemaMonitor
object.
 
The signal_all() method will signal all the waiting threads on the
SemaMonitor object.
 
The signal(long1 nbr) method will signal nbr waiting threads.
 
The setSignal() and resetSignal() methods behave like the windows Event
object's methods setEvent() and resetEvent().
 
And WaitersBlocked() will return the number of waiting threads on the
SemaMonitor object.
 
As you have noticed, my SemaMonitor is a powerful synchronization object.
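To make the interface above concrete, here is a small usage sketch; the
header name, the thread code and the printed messages are my assumptions for
illustration, while the methods and their meaning are the ones documented
above.

--
// Sketch only: assumes a header named "SemaMonitor.h" exposing the interface
// shown above; with state = true, signals issued before a thread waits are
// kept, so none of the three workers can miss its wakeup.
#include "SemaMonitor.h"
#include <thread>
#include <vector>
#include <cstdio>

int main()
{
    SemaMonitor sema(true);          // true: semaphore-like, no lost signals

    std::vector<std::thread> workers;
    for (int i = 0; i < 3; ++i)
        workers.emplace_back([&sema, i]{
            sema.wait();             // block until a signal is available
            std::printf("worker %d woke up\n", i);
        });

    std::printf("waiters blocked: %ld\n", (long)sema.WaitersBlocked());

    sema.signal();                   // wake one waiting thread
    sema.signal(2);                  // wake two more waiting threads

    for (auto &w : workers) w.join();
    return 0;
}
--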
 
Please read the readme files inside the zip file to know more about them.
 
Here is my new invention, my new algorithm:
 
I have invented a new algorithm for my scalable Asymmetric Distributed
Reader-Writer Mutex, and this one is costless on the reader side: it doesn't
use any atomic operations and/or StoreLoad-style memory barriers on the
reader side. My new algorithm adds a technique that looks like Seqlock, but
this technique doesn't loop like Seqlock does. Here is my algorithm:
 
On the reader side we have this:
 
--
procedure TRWLOCK.RLock(var myid:integer);
var
  myid1:integer;
  id:long;
begin
  myid1:=0;
  // Grab a copy of the writer's Seqlock-like counter.
  id:=FCount5^.fcount5;
  // An even counter means the writer side has not yet modified FCount3,
  // so mark this reader slot with 1; otherwise mark it with 2.
  if (id mod 2)=0
    then FCount1^[myid1].fcount1:=1
    else FCount1^[myid1].fcount1:=2;
  // Fast path: no writer is active (FCount3 is 0), the counter did not
  // change since it was read, and our mark is still 1, so the writer side
  // is guaranteed to block on fcount1=1; the read lock is taken costlessly.
  if ((FCount3^.fcount3=0) and (id=FCount5^.fcount5) and
      (FCount1^[myid1].fcount1=1))
  then
    // nothing to do: the read lock is acquired
  else
    begin
      // Slow path: a writer is active or arrived concurrently.
      // Register this reader as blocked ...
      LockedExchangeAdd(nbr^.nbr,1);
      // ... undo the mark that was set above ...
      if FCount1^[myid1].fcount1=2
        then LockedExchangeAdd(FCount1^[myid1].fcount1,-2)
      else if FCount1^[myid1].fcount1=1
        then LockedExchangeAdd(FCount1^[myid1].fcount1,-1);
      // ... wait until the writer side releases the readers ...
      event2.wait;
      // ... then re-mark the slot and deregister as blocked.
      LockedExchangeAdd(FCount1^[myid1].fcount1,1);
      LockedExchangeAdd(nbr^.nbr,-1);
    end;
end;
--
 
The writer side increments FCount5^.fcount5 as a Seqlock does, and the
reader side grabs a copy of FCount5^.fcount5 into the id variable. If
(id modulo 2) is equal to zero, that means the writer side has not yet
modified FCount3^.fcount3; the reader side then tests again that
FCount3^.fcount3 equals 0, that id=FCount5^.fcount5 didn't change, and that
the FCount1^[myid1].fcount1 that we assigned before didn't change, and that
means we are sure that the writer side will block on FCount1^[myid1].fcount1
equal to 1.
 
And notice that I am not looping as in Seqlock.
 
And the rest of my algorithm is easy to understand.
 
This technique, which looks like Seqlock but without looping like Seqlock,
allows us to be sure that although the x86 architecture may reorder the
loads inside the reader critical section, those loads will not go beyond the
load of FCount5^.fcount5, and this allows my algorithm to work correctly.
 
My algorithm is FIFO fair on the writer side and FIFO fair on the
reader side, and of course it is starvation-free, so it is
suitable for real-time critical systems.
 
My Asym_RWLockX and LW_Asym_RWLockX algorithms work the same.
 
You will find the source code of my new algorithm here:
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
 
It is version 2 that is my own algorithm.
 
And you can download the source code of my Asym_RWLockX and
LW_Asym_RWLockX algorithms, which work the same way, from here:
 
 
https://sites.google.com/site/aminer68/scalable-rwlock
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 12:06PM -0800

Hello,
 
A correction; please read the Description again:
 
 
Description:
 
This library contains 7 synchronization objects:
 
1- My scalable SeqlockX, a variant of Seqlock that eliminates the weakness of
Seqlock, namely the "livelock" of the readers when there are many writers.
 
2- My scalable MLock, which is a scalable lock.
 
3- My SemaMonitor, which combines some of the characteristics of a semaphore
and all the characteristics of an eventcount, a windows Manual-reset event
and a windows Auto-reset event; and if you want the signal(s) not to be
lost, you can configure it by passing a parameter to the constructor.
 
4- My scalable DRWLock, a scalable reader-writer lock that is
starvation-free and does spin-wait.
 
5- My scalable DRWLockX, a scalable reader-writer lock that is
starvation-free and doesn't spin-wait, but waits on the Event objects and my
SemaMonitor, so it is energy efficient.
 
6- My LW_Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer Mutex
that uses a technique that looks like Seqlock without looping on the reader
side like Seqlock, which has permitted the reader side to be costless; it is
FIFO fair on the writer side and FIFO fair on the reader side, it is of
course starvation-free, and it does spin-wait.
 
7- My Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer Mutex
that uses a technique that looks like Seqlock without looping on the reader
side like Seqlock, which has permitted the reader side to be costless; it is
FIFO fair on the writer side and FIFO fair on the reader side, it is of
course starvation-free, and it does not spin-wait, but waits on my
SemaMonitor, so it is energy efficient.
 
 
You can download it from:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 11:47AM -0800

Hello,
 
Scalable Parallel C++ Conjugate Gradient Linear System Solver Library
was updated to version 1.55.
 
You can download it from:
 
https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a scalable parallel implementation of a Conjugate
Gradient dense linear system solver that is NUMA-aware and cache-aware, and
it also contains a scalable parallel implementation of a Conjugate Gradient
sparse linear system solver that is cache-aware.
 
Please download the zip file and read the readme file inside the
zip to know how to use it.
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 11:36AM -0800

Hello,
 
 
My C++ synchronization objects library was just updated; I now think that my
DRWLock and my SemaMonitor are working properly, since I have just tested
them thoroughly.
 
 
You can download it from:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
Author: Amine Moulay Ramdane
 
Email: aminer@videotron.ca
 
Description:
 
This library contains 7 synchronization objects:
 
1- My scalable SeqlockX, a variant of Seqlock that eliminates the weakness of
Seqlock, namely the "livelock" of the readers when there are many writers.
 
2- My scalable MLock, which is a scalable lock.
 
3- My SemaMonitor, which combines all the characteristics of a semaphore, an
eventcount, a windows Manual-reset event and a windows Auto-reset event.
 
4- My scalable DRWLock, a scalable reader-writer lock that is
starvation-free and does spin-wait.
 
5- My scalable DRWLockX, a scalable reader-writer lock that is
starvation-free and doesn't spin-wait, but waits on the Event objects and my
SemaMonitor, so it is energy efficient.
 
6- My LW_Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer Mutex
that uses a technique that looks like Seqlock without looping on the reader
side like Seqlock, which has permitted the reader side to be costless; it is
FIFO fair on the writer side and FIFO fair on the reader side, it is of
course starvation-free, and it does spin-wait.
 
7- My Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer Mutex
that uses a technique that looks like Seqlock without looping on the reader
side like Seqlock, which has permitted the reader side to be costless; it is
FIFO fair on the writer side and FIFO fair on the reader side, it is of
course starvation-free, and it does not spin-wait, but waits on my
SemaMonitor, so it is energy efficient.
 
My scalable Asymmetric Reader-Writer Mutex calls the windows
FlushProcessWriteBuffers() just one time for each writer.
 
I have implemented my inventions with the FreePascal and Delphi compilers,
which don't reorder loads and stores even with compiler optimization, and
this is less error prone than C++, which follows a relaxed memory model
when compiled with optimization. So I have finally compiled my algorithm
implementations with FreePascal into Dynamic Link Libraries that are used
by C++ in the form of my C++ Object Synchronization Library.
 
If you take a look at the zip file, you will notice that it contains the
DLLs' Object Pascal source code. To compile those dynamic link libraries'
source code, you will have to download my SemaMonitor Object Pascal source
code and my SeqlockX Object Pascal source code and my scalable MLock Object
Pascal source code and my scalable DRWLock Object Pascal source code
from here:
 
https://sites.google.com/site/aminer68/
 
I have compiled and included the 32 bit and 64 bit windows Dynamic Link
Libraries inside the zip file; if you want to compile the dynamic link
libraries for Unix, Linux and OSX on (x86), please download the source code
of my SemaMonitor and my scalable SeqlockX and my scalable MLock and my
scalable DRWLock and compile them yourself.
 
The SemaMonitor of my C++ synchronization objects library is easy to use; it
combines all the characteristics of a semaphore, an eventcount, a windows
Manual-reset event and a windows Auto-reset event. Here is its C++
interface:
 
class SemaMonitor{
public:
 
  SemaMonitor(bool state, long1 InitialCount1=0, long1 MaximumCount1=INFINITE);
  ~SemaMonitor();
 
  void wait(signed long mstime=INFINITE);
  void signal();
  void signal_all();
  void signal(long1 nbr);
  void setSignal();
  void resetSignal();
  long2 WaitersBlocked();
};
 
So when you set the first parameter of the constructor, state, to true, it
adds the characteristic of a semaphore to the eventcount, so the signal will
not be lost if the threads are not waiting on the SemaMonitor object. But
when you set the first parameter of the constructor to false, it will not
behave like a semaphore, because if the threads are not waiting on the
SemaCondvar or SemaMonitor, the signal will be lost.
 
The parameters InitialCount1 and MaximumCount1 are the semaphore's
InitialCount and MaximumCount.
 
The wait() method is for the threads to wait on the SemaMonitor
object for the signal to be signaled.
 
The signal() method will signal one waiting thread on the SemaMonitor
object.
 
The signal_all() method will signal all the waiting threads on the
SemaMonitor object.
 
The signal(long1 nbr) method will signal nbr waiting threads.
 
The setSignal() and resetSignal() methods behave like the windows Event
object's methods setEvent() and resetEvent().
 
And WaitersBlocked() will return the number of waiting threads on the
SemaMonitor object.
 
As you have noticed, my SemaMonitor is a powerful synchronization object.
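And here is a different small sketch, this time using setSignal() and
resetSignal() in the manual-reset-event style described above; the header
name, the constructor argument choice and the thread code are my assumptions
for illustration only.

--
// Sketch only: assumes a header named "SemaMonitor.h" exposing the interface
// shown above, and that setSignal()/resetSignal() behave like SetEvent()/
// ResetEvent() on a windows manual-reset event, as documented above.
#include "SemaMonitor.h"
#include <thread>
#include <cstdio>

int main()
{
    SemaMonitor gate(true);   // the 'true' here is just a choice for the sketch

    // Open the gate: like a manual-reset event that has been set, a thread
    // that calls wait() afterwards is not blocked.
    gate.setSignal();

    std::thread t([&gate]{
        gate.wait();          // passes through because the gate is set
        std::printf("passed the open gate\n");
    });
    t.join();

    // Close the gate again: a later wait() would block until the next
    // signal(), signal_all() or setSignal().
    gate.resetSignal();
    return 0;
}
--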
 
Please read the readme files inside the zip file to know more about them.
 
Here is my new invention, my new algorithm:
 
I have invented a new algorithm for my scalable Asymmetric Distributed
Reader-Writer Mutex, and this one is costless on the reader side: it doesn't
use any atomic operations and/or StoreLoad-style memory barriers on the
reader side. My new algorithm adds a technique that looks like Seqlock, but
this technique doesn't loop like Seqlock does. Here is my algorithm:
 
On the reader side we have this:
 
--
procedure TRWLOCK.RLock(var myid:integer);
var
  myid1:integer;
  id:long;
begin
  myid1:=0;
  // Grab a copy of the writer's Seqlock-like counter.
  id:=FCount5^.fcount5;
  // An even counter means the writer side has not yet modified FCount3,
  // so mark this reader slot with 1; otherwise mark it with 2.
  if (id mod 2)=0
    then FCount1^[myid1].fcount1:=1
    else FCount1^[myid1].fcount1:=2;
  // Fast path: no writer is active (FCount3 is 0), the counter did not
  // change since it was read, and our mark is still 1, so the writer side
  // is guaranteed to block on fcount1=1; the read lock is taken costlessly.
  if ((FCount3^.fcount3=0) and (id=FCount5^.fcount5) and
      (FCount1^[myid1].fcount1=1))
  then
    // nothing to do: the read lock is acquired
  else
    begin
      // Slow path: a writer is active or arrived concurrently.
      // Register this reader as blocked ...
      LockedExchangeAdd(nbr^.nbr,1);
      // ... undo the mark that was set above ...
      if FCount1^[myid1].fcount1=2
        then LockedExchangeAdd(FCount1^[myid1].fcount1,-2)
      else if FCount1^[myid1].fcount1=1
        then LockedExchangeAdd(FCount1^[myid1].fcount1,-1);
      // ... wait until the writer side releases the readers ...
      event2.wait;
      // ... then re-mark the slot and deregister as blocked.
      LockedExchangeAdd(FCount1^[myid1].fcount1,1);
      LockedExchangeAdd(nbr^.nbr,-1);
    end;
end;
--
 
The writer side increments FCount5^.fcount5 as a Seqlock does, and the
reader side grabs a copy of FCount5^.fcount5 into the id variable. If
(id modulo 2) is equal to zero, that means the writer side has not yet
modified FCount3^.fcount3; the reader side then tests again that
FCount3^.fcount3 equals 0, that id=FCount5^.fcount5 didn't change, and that
the FCount1^[myid1].fcount1 that we assigned before didn't change, and that
means we are sure that the writer side will block on FCount1^[myid1].fcount1
equal to 1.
 
And notice that I am not looping as in Seqlock.
 
And the rest of my algorithm is easy to understand.
 
This technique, which looks like Seqlock but without looping like Seqlock,
allows us to be sure that although the x86 architecture may reorder the
loads inside the reader critical section, those loads will not go beyond the
load of FCount5^.fcount5, and this allows my algorithm to work correctly.
 
My algorithm is FIFO fair on the writer side and FIFO fair on the
reader side, and of course it is starvation-free, so it is
suitable for real-time critical systems.
 
My Asym_RWLockX and LW_Asym_RWLockX algorithms work the same.
 
You will find the source code of my new algorithm here:
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
 
It is version 2 that is my own algorithm.
 
And you can download the source code of my Asym_RWLockX and
LW_Asym_RWLockX algorithms, which work the same way, from here:
 
 
https://sites.google.com/site/aminer68/scalable-rwlock
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Jan 07 11:32AM -0800

Hello,
 
 
I have just updated my following projects:
 
 
An efficient Threadpool engine that scales well
 
https://sites.google.com/site/aminer68/an-efficient-threadpool-engine-that-scales-well
 
 
An efficient Threadpool engine with priorities that scales well
 
https://sites.google.com/site/aminer68/an-efficient-threadpool-engine-with-priorities-that-scales-well
 
 
Threadpool engine
 
https://sites.google.com/site/aminer68/threadpool
 
 
Threadpool engine with priorities
 
https://sites.google.com/site/aminer68/threadpool-with-priorities
 
 
Concurrent FIFO Queue 1
 
https://sites.google.com/site/aminer68/concurrent-fifo-queue-1
 
 
Concurrent FIFO queue 2
 
https://sites.google.com/site/aminer68/concurrent-fifo-queue-2
 
 
Concurrent SkipList
 
https://sites.google.com/site/aminer68/concurrent-skiplist
 
 
Parallel implementation of Conjugate Gradient Sparse Linear System
Solver library
 
 
https://sites.google.com/site/aminer68/parallel-implementation-of-conjugate-gradient-sparse-linear-system-solver
 
 
Scalable Parallel implementation of Conjugate Gradient Linear System
Solver library that is NUMA-aware and cache-aware
 
https://sites.google.com/site/aminer68/scalable-parallel-implementation-of-conjugate-gradient-linear-system-solver-library-that-is-numa-aware-and-cache-aware
 
 
Parallel archiver
 
https://sites.google.com/site/aminer68/parallel-archiver
 
 
Light Weight SemaCondvar & SemaMonitor
 
https://sites.google.com/site/aminer68/light-weight-semacondvar-semamonitor
 
 
SemaCondvar & SemaMonitor
 
https://sites.google.com/site/aminer68/semacondvar-semamonitor
 
 
Scalable Distributed Reader-Writer Mutex
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
 
Scalable RWLock
 
https://sites.google.com/site/aminer68/scalable-rwlock
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
