Sunday, November 2, 2014

Digest for comp.programming.threads@googlegroups.com - 6 updates in 6 topics

Ramine <ramine@1.1>: Nov 01 05:01PM -0700

Hello,
 
 
The concurrency area is very interesting, so I will explain some more concepts to you, so that you see why I have invented my scalable synchronization algorithms.

For example, when a mutating integer variable (a variable that changes) is used from multiple threads, the content of this variable has to be transferred from one core to another core, and this generates cache-line transfers between the cores, and a cache-line transfer is expensive. So you have to optimize your algorithms to reduce the number of variables that generate cache-line transfers, because those variables will generate more contention and will slow your algorithm down. And if you are spin-waiting for a specific value of a mutating variable, you have to minimize the cache-coherence traffic efficiently: that means you have to do your best to transform your algorithm so that this spin-waiting on a specific value of a variable generates fewer cache-line transfers. This is what my algorithms do: my scalable synchronization algorithms and my efficient concurrent FIFO queues are efficient in the sense that they reduce the cache-coherence traffic and they use fewer variables that generate cache-line transfers.
 
 
I will also add something important:
 
If you look at my very fast concurrent FIFO queue here:
 
https://sites.google.com/site/aminer68/concurrent-fifo-queue-1
 
 
As you have noticed, I am not using one lock around push() and another lock around pop(), and you have to understand that, for example, the "setObject(lastHead,tm)" call will be executed in parallel: each thread writes the tm variable into its write-back cache, which means it does not write the content of the variable to memory immediately, but writes it later. This parallelism raises the throughput, and this is what I have noticed in my benchmarks: I got more throughput than the two-lock algorithm. But when you are using a lock around push() and a lock around pop(), you do not get this parallelism, so the throughput drops... And it is the same for the pop() method: the "if fcount1^[lastTail and fMask].flag=1" test can be executed in parallel, and the inter-core communication can be done in parallel, so this raises the throughput... This is why I have told you that my new algorithm for a very fast concurrent FIFO queue has more parallelism than the two-lock algorithm, and it has much better throughput, and it will scale better with more and more cores.
 
 
Please take a look at my other projects here:
 
https://sites.google.com/site/aminer68
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Nov 01 04:51PM -0700

Hello,
 
 
The technique for Petri net modeling that I have invented is the following:

If you want to model a semaphore, a critical section, or the Windows WaitForSingleObject() or WaitForMultipleObjects(), before this it was difficult for an engineer to model a parallel program easily with Petri nets. But with the new technique that I have invented, I have added a layer of expressiveness that eases the modeling of parallel applications for us: I have transformed Petri net modeling so that we reason about parallel programs using if-then-else statements. So I have modeled the semaphore, the critical section, and the Windows WaitForSingleObject() and WaitForMultipleObjects() with if-then-else statements, and those if-then-else statements have eased the modeling and the reasoning for us. And the way to translate a parallel application with those if-then-else statements into Petri nets is much easier than without them.. this is my invention.
 
 
Please take a look at the new technique that I have invented here:
 
https://sites.google.com/site/aminer68/how-to-analyse-parallel-applications-with-petri-nets
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Nov 01 04:40PM -0700

Hello,
 
 
You have seen me talking about C and C++, and I think that C and C++ are elitist tools. They were designed with the same spirit as the Unixes, and this spirit makes things difficult so that only the elites can use C and C++ and the Unixes effectively... So you have to understand me, dear programmers: I am a leftist and I am a socialist, and I think that the most important improvement that socialism brings is that it tries to minimize social darwinism as much as possible. This is why I have tried to minimize social darwinism by avoiding those complicated and difficult tools such as C and C++ and the Unixes, and this is why I have decided to defend Object Pascal, because I think that Object Pascal is a kind of socialist tool that eases things for the programmer, so it is in accordance with my philosophy of "intelligent easiness"..

C and C++ and the Unixes are wild beasts that hurt the weaker humans among us. Those wild beasts can be powerful, but even though they can be powerful, they hurt the weaker humans among us who find C and C++ and the Unixes too difficult and too complex. So since C and C++ and the Unixes are complex and difficult, they will likely create something like a darwinian filter that does not let the weaker humans among us cross or climb the social ladder, and we call that social darwinism. So the complexity and difficulty of the C and C++ languages do contribute to social darwinism, and I think this is bad.
 
I hope you have understood my final words about C and C++.
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Nov 01 04:37PM -0700

Hello,
 
 
 
I am politically a leftist, a socialist, but my socialism is a kind of improved socialism, because I have added some improvements to socialism so that it becomes better. I think that the most important improvement that socialism brings is that it tries to minimize social darwinism as much as possible. This is why you have seen me on this forum talking about social darwinism and talking about scalable synchronization algorithms. I think that Dmitry Vyukov and Chris Thomasson have both failed on the criterion that I have talked to you about before, which is "intelligent easiness". I have defined "intelligent easiness" as easing things and processes and life as much as possible while keeping at the same time a high level of "quality". I have followed the spirit of Dmitry Vyukov and Chris Thomasson and I have not liked their spirit, because what they do almost all the time is give only their algorithms as source code, without explaining their algorithms correctly and without easing the explanation of their algorithms, so that the weaker humans among us can understand those algorithms and learn them easily, which would give the weaker humans among us a better chance to understand parallel programming better and a better chance to climb higher and higher up the social ladder. This is why I have tried to stay in accordance with my philosophy of "intelligent easiness", and this is why I have tried to meet this criterion of "intelligent easiness" by explaining in an "EASY" way the "how" and the "why" my scalable synchronization algorithms are scalable or wait-free, etc.
 
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Nov 01 03:59PM -0700

Hello,
 
 
I have come to an interesting subject...

I have often read comments about parallel programming, object-oriented programming, and functional programming, and I will bring my own comments about them...

You will read about parallel programming that it is hard and difficult, because parallel programming adds more layers on top of serial programming: in those added layers, we have to think not only about the serial algorithmics and such, but also about parallel correctness and parallel performance... And I have heard comments saying that proving parallel correctness is a difficult task in complex systems: you have to prove that your parallel program is free of deadlocks, free of race conditions, free of starvation, etc., and this is what makes parallel programming a difficult task. But as you have seen me saying on this forum, the "quality" of the learning process of the "what is" and the "how to" is also a good tool that enhances the safety of parallel systems, the safety of parallel real-time critical systems, and the performance. Hence a good way of doing things is also to use external parallel libraries that are implemented by people who "know" how to implement synchronization algorithms correctly and who know how to implement parallel programs correctly, and I think that this way of doing things is an effective way to avoid deadlocks, race conditions, etc. And on the criterion of performance it is the same: I think you have to use external libraries that are written by people who know how to optimize parallel algorithms, to be sure that the parallel libraries meet the requirement of good performance.

As you have seen me talking about functional programming and object-oriented programming, I have also said that Bartosz Milewski here: http://bartoszmilewski.com/ is trying to convert C++ people and programmers to functional programming. But I have commented on the writings of Bartosz Milewski and said that his way of thinking is an extremist way... What he is trying to do is analyze programming languages from points of view and criteria such as composability and ease of maintenance, etc., and what Bartosz Milewski has said is that functional programming, such as Haskell, permits greater composability and improves the criteria of maintainability and reliability, etc. And what I have noticed is that Bartosz Milewski is saying that functional programming is something that we cannot avoid, hence it is like something mandatory. This is why I have said that it is an extremist point of view, because I am convinced that the way to improve reliability and performance and maintainability is not only a consequence of the quality of a programming language, but is also a consequence of good-quality reusability. So what I want to emphasize in my post is that I am a fan of this criterion that we call reusability, and I am convinced that to improve reliability and performance, etc., we have to use good external libraries and good external objects and parallel objects written by people who know how to implement correct parallel programs and fast parallel programs. So in my opinion, that is also a good way to improve parallel programming and programming in general.
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Nov 01 03:21PM -0700

Hello,
 
 
Read it all and understand it all by reading this:
 
"After decades of progress in making programming languages easier for
humans to read and understand, functional programming syntax turns back
the clock."
 
 
Read more here:
 
http://www.javaworld.com/article/2078610/java-concurrency/functional-programming--a-step-backward.html
 
 
 
Thank you,
Amine Moulay Ramdane.
