Tuesday, July 2, 2019

Digest for comp.programming.threads@googlegroups.com - 25 updates in 25 topics

Horizon68 <horizon@horizon.com>: Jul 01 05:11PM -0700

Hello..
 
 
Reread my final corrected poem to know more..
 
 
Do i also know you ? i can see you clearly now..
 
Because i am also "special" like you..
 
 
Look at this singer to know more, who is he ?:
 
Prince - I Wanna Be Your Lover
 
https://www.youtube.com/watch?v=Rp8WL621uGM
 
 
 
So here is my new poem about the beautiful Superstars:
 
 
I "know" i am talking to the stars !
 
Because i am climbing in front of you like a superstar
 
Because i love to be a fast Ferrari car
 
Since my Love is like coming from Mars and the beautiful Stars
 
This is why my Love knows about "you" my "beautiful" stars
 
And since my love is not a "barbar"
 
Thus my love is like growing like a beautiful seminar
 
This is why i love to be as you are beautiful superstars
 
Because look at this beautiful Superstar
 
He is rolling like my beautiful desire
 
Look at this beautiful Superstar
 
He is beautifully escaping from the evil fire
 
Look at this beautiful Superstar
 
He is building a big empire
 
Look at this beautiful Superstar
 
He is richness that also knows how to retire
 
Look at this beautiful Superstar
 
Because he knows how to play beautifully the guitar
 
Look at this beautiful Superstar
 
He is like someone who we admire
 
And this is why I want to be like a Superstar
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 01 03:24PM -0700

Hello..
 
 
 
I was just listening to the following song:
 
Tiny Dancer
 
https://www.youtube.com/watch?v=KBWfUc5jKiM
 
And i have just decided to write a poem about
you, the beautiful dancer! Here it is:
 
 
My so beautiful dancer !
 
You are coming to me with all your splendor
 
My so beautiful dancer !
 
You are growing inside me like a so beautiful flower !
 
My so beautiful dancer !
 
It is also about the right dose and the right answer
 
My so beautiful dancer !
 
Because my way is not to be a gangster
 
My so beautiful dancer !
 
Because i am like "wisdom" that is like the master
 
My so beautiful dancer !
 
Because i want to go beautifully and faster
 
My so beautiful dancer !
 
As you can see, I am not a barbar or a Tiger !
 
My so beautiful dancer !
 
Because my way is Love and Wisdom that is not the inferior
 
My so beautiful dancer !
 
It is why i am like you a beautiful dancer !
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 01 02:33PM -0700

Hello..
 
 
 
Read again, i correct a last typo because i write fast..
 
 
Do i also know you ? i can see you clearly now..
 
Because i am also "special" like you..
 
 
Look at this singer to know more:
 
Michael Jackson - You Rock My World
 
https://www.youtube.com/watch?v=C1kHeeEMe-s
 
 
 
So here is my new poem about the beautiful Superstars:
 
 
I "know" i am talking to the stars !
 
Because i am climbing in front of you like a superstar
 
Because i love to be a fast Ferrari car
 
Since my Love is like coming from Mars and the beautiful Stars
 
This is why my Love knows about "you" my "beautiful" stars
 
And since my love is not a "barbar"
 
Thus my love is like growing like a beautiful seminar
 
This is why i love to be as you are beautiful superstars
 
Because look at this beautiful Superstar
 
He is rolling like my beautiful desire
 
Look at this beautiful Superstar
 
He is beautifully escaping from the evil fire
 
Look at this beautiful Superstar
 
He is building a big empire
 
Look at this beautiful Superstar
 
He is richness that also knows how to retire
 
Look at this beautiful Superstar
 
Because he knows how to play beautifully the guitar
 
Look at this beautiful Superstar
 
He is like someone who we admire
 
And this is why I want to be like a Superstar
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 30 03:41PM -0700

Hello..
 
 
Here is my next extended poem of Love, i have just
thought it fast and written it fast and extended it fast:
 
I explain more my poem below:
 
When i say the following:
 
"So i will ask, is "specialization" innocence ?"
 
"innocence" means "naiveté" in the context, it is like
political philosophy, because we have to know how to be
"tolerance" over "specialization" too, because
when you want to survive you can be more specialization , like
doing your own job without knowing how to do other jobs or/and without
being more awareness of political philosophy or/and without awareness of
science etc. think about it more, so we have to know
how to be tolerance on those realities.
 
 
Here is my extended new poem of Love:
 
 
The beauty of your eyes is beauty coming
from above
 
Like the words of the right dose
 
Like the words of God
 
So i am coming with a beautiful "smile"
 
That says to you i am the one of the "right" side
 
So come to me beautiful that does not hide
 
Since i am here to write to you about my way that guides
 
So as you see i am "love" that is immense and wide
 
Since beautifulness is also like the almight
 
And do you hear the sound of tolerance ?
 
And do you hear the sound of patience ?
 
So why tolerance and why patience ?
 
So i will ask, is "specialization" innocence ?
 
Do you understand the essence ?
 
Because from where comes abundance ?
 
It is the right "dose" that "advance" !
 
Hence I am dancing with you a beautiful dance
 
A beautiful dance of romance and common sense
 
So i am happy to dance this beautiful dance
 
So come to me my beautiful patience
 
Because i am here waiting for you my beautiful tolerance.
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 30 03:23PM -0700

Hello..
 
 
 
Here is another poem of mine that i think is wise:
 
 
I was just listening to the following song of Elvis Presley:
 
Elvis Presley - Don't Be Cruel
 
https://www.youtube.com/watch?v=ViMF510wqWA
 
 
So i have decided to write a poem, here it is:
 
 
Our many constrains
 
Is like the important and the main
 
But we need to grow to know how to restrain
 
Because knowing how to restrain is like avoiding the pain
 
Because it is like a beautiful flow not in vain
 
Because when you think Morocco and Spain
 
Do you think violent past or what's remain ?
 
Because violent past is like a pain that forms an evil chain
 
Do you understand to be able to break the evil chains ?
 
So you have to train and to train
 
To be able to go fast as a plane
 
Because is it not important as the brain ?
 
So you have to know how to transcend without complain !
 
So rain and rain beautiful rain !
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 30 02:59PM -0700

Hello..
 
 
About the beautiful dance..
 
I was just listening to these beautiful songs that make
you dance a "beautiful" dance, so i share them with you:
 
Jive Bunny - Rockabilly & 60's Oldies Monstermix
 
https://www.youtube.com/watch?v=AU84rm39smU
 
 
And here is my new poem that speaks about the beautiful dance:
 
 
I am being a beautiful dance
 
Since war is calling peace and confidence
 
I am being a beautiful dance
 
So is the beautifulness of romance
 
I am being a beautiful dance
 
To be able to avoid the bad chance
 
I am being a beautiful dance
 
To be able to know it in advance
 
I am being a beautiful dance
 
Since i need to advance
 
I am being a beautiful dance
 
To be able to attain the "immense"
 
I am being a beautiful dance
 
To be able to grow the plants
 
I am being a beautiful dance
 
To be able to help the small Ants
 
I am being a beautiful dance
 
Because it is my poetry without expense !
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 30 01:24PM -0700

Hello..
 
 
Read again, i correct a typo because i write fast...
 
More about computing and parallel computing..
 
The important guarantees of Memory Safety in Rust are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
I think i have solved Null Pointer Dereferences and also solved Dangling
Pointers and also solved memory leaks for Delphi and Freepascal by
inventing my "scalable" reference counting with efficient support for
weak references and i have implemented it in Delphi and Freepascal, and
reference counting in Rust and C++ is "not" scalable.
 
About the (3) above that is Buffer Overruns, read here about Delphi
and Freepascal:
 
What's a buffer overflow and how to avoid it in Delphi?
 
http://delphi.cjcsoft.net/viewthread.php?tid=49495
 
 
About Deadlock and Race conditions in Delphi and Freepascal:
 
I have ported DelphiConcurrent to Freepascal, and i have
also extended them with the support of my scalable RWLocks for Windows
and Linux and with the support of my scalable lock called MLock for
Windows and Linux and i have also added the support for a Mutex for
Windows and Linux, please look inside the DelphiConcurrent.pas and
FreepascalConcurrent.pas files inside the zip file to understand more.
 
You can download DelphiConcurrent and FreepascalConcurrent for Delphi
and Freepascal from:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
DelphiConcurrent and FreepascalConcurrent by Moualek Adlene is a new way
to build Delphi applications which involve parallel executed code based
on threads, like application servers. DelphiConcurrent provides
programmers with the internal mechanisms to write safer multi-threaded
code while taking special care of performance and genericity.
 
In concurrent applications a DEADLOCK may occur when two or more
threads try to lock two or more shared resources in a different order.
With DelphiConcurrent and FreepascalConcurrent, a DEADLOCK is detected
and automatically skipped - before it occurs - and the programmer gets
an explicit exception describing the multi-thread problem instead of a
blocking DEADLOCK which freezes the application with no output log (and
perhaps also the linked client sessions if we talk about an application
server).
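
Here is a minimal FreePascal sketch of the general idea behind this
kind of lock-order deadlock detection (an illustration only of mine,
not the actual DelphiConcurrent code): every lock gets a fixed rank,
each thread remembers the highest rank that it currently holds, and
acquiring a lock whose rank is not higher raises an explicit exception
instead of risking a deadlock:

uses
  SysUtils, SyncObjs;

type
  ELockOrderViolation = class(Exception);

  { a critical section with a fixed rank that enforces ranked acquisition }
  TOrderedLock = class
  private
    FLock: TCriticalSection;
    FRank: Integer;
  public
    constructor Create(ARank: Integer);
    destructor Destroy; override;
    function Acquire: Integer;            { returns the previous rank }
    procedure Release(PrevRank: Integer); { pass back what Acquire returned }
  end;

threadvar
  HeldRank: Integer; { highest rank currently held by this thread, 0 = none }

constructor TOrderedLock.Create(ARank: Integer);
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FRank := ARank;
end;

destructor TOrderedLock.Destroy;
begin
  FLock.Free;
  inherited Destroy;
end;

function TOrderedLock.Acquire: Integer;
begin
  { acquiring out of rank order could deadlock, so fail loudly instead }
  if FRank <= HeldRank then
    raise ELockOrderViolation.CreateFmt(
      'lock of rank %d requested while holding rank %d', [FRank, HeldRank]);
  FLock.Acquire;
  Result := HeldRank;
  HeldRank := FRank;
end;

procedure TOrderedLock.Release(PrevRank: Integer);
begin
  HeldRank := PrevRank; { restore the rank that Acquire saved }
  FLock.Release;
end;

So if thread A locks rank 1 then rank 2 while thread B tries to lock
rank 2 then rank 1, thread B gets an ELockOrderViolation exception
instead of a silent deadlock.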
 
Amine Moulay Ramdane has extended them with the support of his scalable
RWLocks for Windows and Linux and with the support of his scalable lock
called MLock for Windows and Linux and he has also added the support for
a Mutex for Windows and Linux, please look inside the
DelphiConcurrent.pas and FreepascalConcurrent.pas files to understand more.
 
And please read the html file inside to learn more about how to use it.
 
 
About race conditions now:
 
My scalable Adder is here..
 
As you have noticed i have just posted previously my modified versions
of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in
parallel programs.
 
But i have just read the following about how to avoid race conditions in
Parallel programming in most cases..
 
Here it is:
 
https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/
 
This is why i have invented my following powerful scalable Adder to help
you do the same as the above, please take a look at its source code to
understand more, here it is:
 
https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal
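
And here is a minimal FreePascal sketch of mine of the general idea
that such a scalable adder is built on (an illustration only, not my
actual implementation): each thread increments its own cache-line
padded slot, so the hot path shares nothing between the cores, and a
read sums all the slots:

const
  CacheLineSize = 64;
  NbSlots = 64; { for illustration: at least the number of threads }

type
  { pad each counter to its own cache line to avoid false sharing }
  TPaddedCounter = record
    Value: Int64;
    Padding: array[0..CacheLineSize - SizeOf(Int64) - 1] of Byte;
  end;

  TScalableAdder = class
  private
    FSlots: array[0..NbSlots - 1] of TPaddedCounter;
  public
    procedure Add(SlotIndex: Integer; Delta: Int64);
    function Sum: Int64;
  end;

{ SlotIndex must be a per-thread number so that no two threads ever
  write to the same slot, which is what removes the race condition }
procedure TScalableAdder.Add(SlotIndex: Integer; Delta: Int64);
begin
  Inc(FSlots[SlotIndex mod NbSlots].Value, Delta);
end;

function TScalableAdder.Sum: Int64;
var
  i: Integer;
begin
  Result := 0; { an approximate snapshot while the writers are running }
  for i := 0 to NbSlots - 1 do
    Inc(Result, FSlots[i].Value);
end;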
 
Other than that, about composability of lock-based systems now:
 
Design your systems to be composable. Among the more galling claims of
the detractors of lock-based systems is the notion that they are somehow
uncomposable: "Locks and condition variables do not support modular
programming," reads one typically brazen claim, "building large programs
by gluing together smaller programs[:] locks make this impossible."9 The
claim, of course, is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and operating
systems into larger systems that remain entirely unaware of lower-level
locking.
 
There are two ways to make lock-based systems completely composable, and
each has its own place. First (and most obviously), one can make locking
entirely internal to the subsystem. For example, in concurrent operating
systems, control never returns to user level with in-kernel locks held;
the locks used to implement the system itself are entirely behind the
system call interface that constitutes the interface to the system. More
generally, this model can work whenever a crisp interface exists between
software components: as long as control flow is never returned to the
caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in
per-instance state, and it must be up to consumers of the subsystem to
assure that they do not access their instance in parallel. By leaving
locking up to the client of the subsystem, the subsystem itself can be
used concurrently by different subsystems and in different contexts. A
concrete example of this is the AVL tree implementation used extensively
in the Solaris kernel. As with any balanced binary tree, the
implementation is sufficiently complex to merit componentization, but by
not having any global state, the implementation may be used concurrently
by disjoint subsystems—the only constraint is that manipulation of a
single AVL tree instance must be serialized.
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
 
And about Message Passing Process Communication Model and Shared Memory
Process Communication Model:
 
An advantage of shared memory model is that memory communication is
faster as compared to the message passing model on the same machine.
 
However, shared memory model may create problems such as synchronization
and memory protection that need to be addressed.
 
Message passing's major flaw is the inversion of control - it is a
moral equivalent of gotos in unstructured programming (it's about time
somebody said that message passing is considered harmful).
 
Also some research shows that the total effort to write an MPI
application is significantly higher than that required to write a
shared-memory version of it.
 
And more about my scalable reference counting with efficient support
for weak references:
 
My invention that is my scalable reference counting with efficient
support for weak references version 1.35 is here..
 
Here i am again, i have just updated my scalable reference counting with
efficient support for weak references to version 1.35, I have just added
a TAMInterfacedPersistent that is a scalable reference counted version,
and now i think i have just made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But i don't agree with the writing of the author of the above web
page, because i think you have to understand the "spirit" of Delphi,
here is why:
 
A component is supposed to be owned and destroyed by something else,
"typically" a form (and "typically" means in english: in "most" cases,
and this is the most important thing to understand). In that scenario,
reference count is not used.
 
If you pass a component as an interface reference, it would be very
unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
Also because i have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just
replace TInterfacedObject with my TAMInterfacedObject that is the
scalable reference counted version, and replace TInterfacedPersistent
with my TAMInterfacedPersistent that is the scalable reference counted
version. You will find both my TAMInterfacedObject and my
TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To know
how to use weak references, please take a look at the demo that i have
included called example.dpr and look inside my zip file at the tutorial
about weak references, and to know how to use delegation, take a look
at the demo that i have included called test_delegation.pas and at the
tutorial inside my zip file that teaches you how to use delegation.
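
Here is a minimal usage sketch of that substitution (it assumes only
what is described above, that is, that the AMInterfacedObject.pas unit
from the zip file is on your unit path; the weak references and
delegation APIs are shown in the included tutorials, so they are not
reproduced here):

uses
  AMInterfacedObject; { TAMInterfacedObject and TAMInterfacedPersistent }

type
  IWorker = interface
    procedure DoWork;
  end;

  { before: TWorker = class(TInterfacedObject, IWorker) }
  TWorker = class(TAMInterfacedObject, IWorker)
  public
    procedure DoWork;
  end;

procedure TWorker.DoWork;
begin
  { the reference counting behind this interface is now the scalable one }
end;

var
  W: IWorker;
begin
  W := TWorker.Create; { counted by the scalable reference counting }
  W.DoWork;
end. { W is released here, decrementing the count scalably }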
 
I think my Scalable reference counting with efficient support for weak
references is stable and fast, and it works on both Windows and Linux,
and my scalable reference counting scales on multicore and NUMA systems,
and you will not find it in C++ or Rust, and i don't think you will find
it anywhere, and you have to know that this invention of mine solves
the problem of dangling pointers and it solves the problem of memory
leaks and my scalable reference counting is "scalable".
 
And please read the readme file inside the zip file that i have just
extended to make you understand more.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.35 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 30 11:22AM -0700

Hello..
 
 
About SC and TSO and RMO hardware memory models..
 
I have just read the following webpage about the performance difference
between the SC, TSO and RMO hardware memory models.
 
I think TSO is better: it is just around 3% ~ 6% less performance than
RMO and it is a simpler programming model than RMO. So i think ARM must
support TSO to be compatible with x86, which is TSO.
 
Read more here to notice it:
 
https://infoscience.epfl.ch/record/201695/files/CS471_proj_slides_Tao_Marc_2011_1222_1.pdf
 
 
About memory models and sequential consistency:
 
As you have noticed i am working with x86 architecture..
 
Even though x86 gives up on sequential consistency, it's among the most
well-behaved architectures in terms of the crazy behaviors it allows.
Most other architectures implement even weaker memory models.
 
ARM memory model is notoriously underspecified, but is essentially a
form of weak ordering, which provides very few guarantees. Weak ordering
allows almost any operation to be reordered, which enables a variety of
hardware optimizations but is also a nightmare to program at the lowest
levels.
 
Read more here:
 
https://homes.cs.washington.edu/~bornholt/post/memory-models.html
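
To make the difference concrete, here is the classic "store buffering"
litmus test (my own illustration, not from the above page) that
separates sequential consistency from TSO, with two shared variables
X = Y = 0:

  Thread 1:          Thread 2:
    X := 1;            Y := 1;
    r1 := Y;           r2 := X;

Under sequential consistency at least one of r1 and r2 must read 1,
but under TSO (so on x86) both can read 0, because each store can
still be sitting in its core's store buffer while the following load
executes; putting a full memory barrier between the store and the load
in each thread restores the sequentially consistent outcome.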
 
 
Memory Models: x86 is TSO, TSO is Good
 
Essentially, the conclusion is that x86 in practice implements the old
SPARC TSO memory model.
 
The big take-away from the talk for me is that it confirms the
observation made many times before that SPARC TSO seems to be the
optimal memory model. It is sufficiently understandable that
programmers can write correct code without having barriers everywhere.
It is sufficiently weak that you can build fast hardware
implementations that can scale to big machines.
 
Read more here:
 
https://jakob.engbloms.se/archives/1435
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 30 11:17AM -0700

Hello..
 
 
Transactional Memory Everywhere: I/O Operations
 
One can execute I/O operations within a lock-based critical section,
and, at least in principle, from within an RCU read-side critical
section. What happens when you attempt to execute an I/O operation from
within a transaction?
 
The underlying problem is that transactions may be rolled back, for
example, due to conflicts. Roughly speaking, this requires that all
operations within any given transaction be idempotent, so that executing
the operation twice has the same effect as executing it once.
Unfortunately, I/O is in general the prototypical non-idempotent
operation, making it difficult to include general I/O operations in
transactions.
 
Read more here:
 
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/Answers/TransactionalMemoryEverywhere/IO.html
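
Here is a small illustration of that (mine, not from the linked page),
written as hypothetical transactional Pascal-like pseudocode:

  atomic begin                    { hypothetical transactional block }
    Inc(AccountBalance, Amount);  { memory effect: rolled back on abort }
    WriteLn('customer charged');  { I/O: cannot be rolled back }
  atomic end

If the transaction aborts after the WriteLn because of a conflict and
is then retried, the memory update still happens exactly once, but the
line is printed twice, which is exactly the non-idempotence problem
described above.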
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 03:23PM -0700

Hello..
 
 
About the Active object pattern..
 
I think the proxy and scheduler of the Active object pattern are
embellishments, not essential. The core of the idea is simply a queue
of closures executed on a different thread (or threads) from that of
the client, and here you are noticing that you can do the same thing as
the Active object pattern and more by using my powerful "invention"
that is: an efficient Threadpool engine with priorities that scales
very well, which you can download from here:
 
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
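
Here is a minimal FreePascal sketch of that core idea (a queue of
closures drained by a worker thread); it is an illustration only of
mine, my Threadpool above being the full scalable version of it:

uses
  Classes, SyncObjs;

type
  TJobProc = procedure(Arg: Pointer);

  TJob = record
    Proc: TJobProc;
    Arg: Pointer;
  end;

  TActiveObject = class(TThread)
  private
    FLock: TCriticalSection;
    FWakeUp: TEvent;
    FQueue: array of TJob;
  protected
    procedure Execute; override;
  public
    constructor Create;
    procedure Post(Proc: TJobProc; Arg: Pointer);
  end;

constructor TActiveObject.Create;
begin
  FLock := TCriticalSection.Create;
  FWakeUp := TEvent.Create(nil, False, False, '');
  inherited Create(False);
end;

{ called by the clients: enqueue the closure, never run it on the caller }
procedure TActiveObject.Post(Proc: TJobProc; Arg: Pointer);
begin
  FLock.Acquire;
  try
    SetLength(FQueue, Length(FQueue) + 1);
    FQueue[High(FQueue)].Proc := Proc;
    FQueue[High(FQueue)].Arg := Arg;
  finally
    FLock.Release;
  end;
  FWakeUp.SetEvent;
end;

procedure TActiveObject.Execute;
var
  Job: TJob;
  HaveJob: Boolean;
begin
  while not Terminated do
  begin
    FWakeUp.WaitFor(100); { short timeout keeps shutdown simple }
    repeat
      FLock.Acquire;
      try
        HaveJob := Length(FQueue) > 0;
        if HaveJob then
        begin
          Job := FQueue[0];
          FQueue := Copy(FQueue, 1, Length(FQueue) - 1);
        end;
      finally
        FLock.Release;
      end;
      if HaveJob then
        Job.Proc(Job.Arg); { executed on this thread, not the client's }
    until not HaveJob;
  end;
end;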
 
 
This Threadpool of mine is really powerful because it scales very well
on multicore and NUMA systems, and it also comes with a ParallelFor()
that scales very well on multicore and NUMA systems.
 
Here is the explanation of my ParallelFor() that scales very well:
 
I have also implemented a ParallelFor() that scales very well, here is
the method:
 
procedure ParallelFor(nMin, nMax: integer; aProc: TParallelProc;
                      GrainSize: integer = 1; Ptr: pointer = nil;
                      pmode: TParallelMode = pmBlocking;
                      Priority: TPriorities = NORMAL_PRIORITY);
 
 
nMin and nMax parameters of the ParallelFor() are the minimum and
maximum integer values of the variable of the ParallelFor() loop, aProc
parameter of ParallelFor() is the procedure to call, and GrainSize
integer parameter of ParallelFor() is the following:
 
The grainsize sets a minimum threshold for parallelization.
 
A rule of thumb is that grainsize iterations should take at least
100,000 clock cycles to execute.
 
For example, if a single iteration takes 100 clocks, then the grainsize
needs to be at least 1000 iterations. When in doubt, do the following
experiment:
 
1- Set the grainsize parameter higher than necessary. The grainsize is
specified in units of loop iterations.
If you have no idea of how many clock cycles an iteration might take,
start with grainsize=100,000.
 
The rationale is that each iteration normally requires at least one
clock per iteration. In most cases, step 3 will guide you to a much
smaller value.
 
2- Run your algorithm.
 
3- Iteratively halve the grainsize parameter and see how much the
algorithm slows down or speeds up as the value decreases.
 
A drawback of setting a grainsize too high is that it can reduce
parallelism. For example, if the grainsize is 1000 and the loop has 2000
iterations, the ParallelFor() method distributes the loop across only
two processors, even if more are available.
 
And you can pass a parameter to ParallelFor() in Ptr as a pointer, and
you can set the pmode parameter to pmBlocking so that ParallelFor() is
blocking, or to pmNonBlocking so that ParallelFor() is non-blocking,
and the Priority parameter is the priority of ParallelFor(). Look
inside the test.pas example to see how to use it.
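
And here is a small usage sketch (it assumes that the threadpool
instance is called TP and that TParallelProc receives the loop index
and the Ptr argument; the exact declarations are in the threadpool
unit and in the test.pas example, so check them there):

var
  Total: Int64 = 0;

{ the body of the loop: processes one index; Ptr carries shared context }
procedure ProcessItem(Index: Integer; Ptr: Pointer);
begin
  { work on the item number Index here }
end;

{ TP: the threadpool instance, created as shown in the library's examples }
begin
  { iterations 1..1000000 with a grainsize of 10000 so that each task
    is big enough, in blocking mode so that this call returns only
    when the whole loop is done }
  TP.ParallelFor(1, 1000000, @ProcessItem, 10000, @Total, pmBlocking,
                 NORMAL_PRIORITY);
end.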
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 02:53PM -0700

Hello...
 
 
My Universal Scalability Law for Delphi and FreePascal was updated to
version 3.22
 
I have implemented and enhanced this powerful tool.
 
I have included 32 bit and 64 bit Windows and Linux executables called
usl.exe and usl_graph.exe inside the zip; please read the readme file
to know how to use them, it is a very powerful tool.
 
Now about the Beta and Alpha coefficients of USL:
 
Coefficient Alpha is: the contention
 
And
 
Coefficient Beta is: the coherency.
 
Contention and coherency are measured as the fraction of the sequential
execution time. A value of 0 means that there is no effect on
performance. A contention factor of 0.2, for instance, means that 20% of
the sequential execution time cannot be parallelized. A coherency factor
of 0.01 means that the time spent in the synchronization between each
pair of processes is 1% of the sequential execution time.
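
For reference, these two coefficients plug into the standard USL
formula for the relative capacity C(N) at N processes:

  C(N) = N / (1 + Alpha*(N - 1) + Beta*N*(N - 1))

For example, with Alpha = 0.2 and Beta = 0.01 the predicted speedup at
N = 8 is 8 / (1 + 0.2*7 + 0.01*8*7) = 8 / 2.96, that is about 2.7.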
 
 
Also there is something very important to know, and here it is:
 
So to better optimize the criterion of the cost for a better QoS, you
have to choose a good delta(y)/delta(x) for your system and you have to
balance better between the performance and the cost. You can read about
my powerful tool and download it from:
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 12:36PM -0700

Hello,
 
 
About implicit type conversions..
 
The more implicit type conversions a language supports the weaker its
type system is said to be. C++ supports more implicit conversions than
Ada or Delphi. Implicit conversions allow the compiler to silently
change types, especially in parameters to yet another function call -
for example automatically converting an int into some other object type.
If you accidentally pass an int into that parameter the compiler will
"helpfully" silently create the temporary for you, leaving you perplexed
when things don't work right. Sure we can all say "oh, I'll never make
that mistake", but it only takes one time debugging for hours before one
starts thinking maybe having the compiler tell you about those
conversions is a good idea.
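
Here is a small Delphi/FreePascal sketch of mine that illustrates the
point (Pascal makes you spell the conversion out, where a language
with more implicit conversions would silently accept the call):

type
  TColor = (Red, Green, Blue);

procedure Paint(C: TColor);
begin
  WriteLn(Ord(C));
end;

var
  i: Integer = 1;
begin
  { Paint(i); }     { rejected: the compiler refuses the implicit conversion }
  Paint(TColor(i)); { compiles: the conversion is explicit and visible }
end.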
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 12:28PM -0700

Hello,
 
 
About composability of lock-based systems..
 
 
Design your systems to be composable. Among the more galling claims of
the detractors of lock-based systems is the notion that they are somehow
uncomposable: "Locks and condition variables do not support modular
programming," reads one typically brazen claim, "building large programs
by gluing together smaller programs[:] locks make this impossible."9 The
claim, of course, is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and operating
systems into larger systems that remain entirely unaware of lower-level
locking.
 
There are two ways to make lock-based systems completely composable, and
each has its own place. First (and most obviously), one can make locking
entirely internal to the subsystem. For example, in concurrent operating
systems, control never returns to user level with in-kernel locks held;
the locks used to implement the system itself are entirely behind the
system call interface that constitutes the interface to the system. More
generally, this model can work whenever a crisp interface exists between
software components: as long as control flow is never returned to the
caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in
per-instance state, and it must be up to consumers of the subsystem to
assure that they do not access their instance in parallel. By leaving
locking up to the client of the subsystem, the subsystem itself can be
used concurrently by different subsystems and in different contexts. A
concrete example of this is the AVL tree implementation used extensively
in the Solaris kernel. As with any balanced binary tree, the
implementation is sufficiently complex to merit componentization, but by
not having any global state, the implementation may be used concurrently
by disjoint subsystems—the only constraint is that manipulation of a
single AVL tree instance must be serialized.
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
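
Here is a minimal FreePascal sketch of the first approach (a sketch of
mine, not code from the article): the locking is kept entirely
internal to the subsystem, and no method ever returns to the caller
with the lock held, so the callers can compose calls freely:

uses
  SyncObjs;

type
  TSafeCounter = class
  private
    FLock: TCriticalSection;
    FCount: Integer;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Increment;
    function Value: Integer;
  end;

constructor TSafeCounter.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
end;

destructor TSafeCounter.Destroy;
begin
  FLock.Free;
  inherited Destroy;
end;

procedure TSafeCounter.Increment;
begin
  FLock.Acquire;
  try
    Inc(FCount);
  finally
    FLock.Release; { never return to the caller with the lock held }
  end;
end;

function TSafeCounter.Value: Integer;
begin
  FLock.Acquire;
  try
    Result := FCount;
  finally
    FLock.Release;
  end;
end;

The second approach is the opposite discipline: the structure itself
takes no locks at all, like the AVL tree above, and each client
serializes the access to its own instance.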
 
And now about Message Passing Process Communication Model and Shared
Memory Process Communication Model:
 
 
An advantage of shared memory model is that memory communication is
faster as compared to the message passing model on the same machine.
 
However, shared memory model may create problems such as synchronization
and memory protection that need to be addressed.
 
Message passing's major flaw is the inversion of control - it is a
moral equivalent of gotos in unstructured programming (it's about time
somebody said that message passing is considered harmful).
 
Also some research shows that the total effort to write an MPI
application is significantly higher than that required to write a
shared-memory version of it.
 
 
 
Thank you,
Amine Moulay Ramdane.
 
Horizon68 <horizon@horizon.com>: Jun 29 12:26PM -0700

Hello,
 
 
More about Energy efficiency..
 
You have to be aware that parallelization of the software
can lower power consumption, and here is the formula
that permits you to calculate the power consumption of
"parallel" software programs:
 
Power consumption of the total cores = (The number of cores) *
(1/(Parallel speedup))^3 * (Power consumption of the single core).
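
For example, under this formula, if a program reaches a parallel
speedup of 4 on 4 cores and you use that speedup to scale the
frequency (and thus each core's power) down accordingly, the total
power becomes 4 * (1/4)^3 * P = P/16, where P is the power consumption
of the single core at full speed.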
 
 
Also read the following about energy efficiency:
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not
always equate energy efficiency. Compiled languages like C, C++, Rust,
and Ada ranked as some of the most energy efficient languages out there,
and Java and FreePascal are also good at Energy efficiency.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow, relative to CPUs
 
And "memory" usage efficiency is important for mobile devices.
 
So the Delphi and FreePascal compilers are also still "useful" for
mobile devices, because Delphi and FreePascal are good if you are
considering time and memory or energy and memory. The Pascal results in
the following benchmark were obtained with FreePascal, and the
benchmark shows that C, Go and Pascal do rather better if you're
considering languages based on time and memory or energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 12:25PM -0700

Hello..
 
 
What about garbage collection?
 
Read what said this serious specialist called Chris Lattner:
 
"One thing that I don't think is debatable is that the heap compaction
behavior of a GC (which is what provides the heap fragmentation win) is
incredibly hostile for cache (because it cycles the entire memory space
of the process) and performance predictability."
 
"Not relying on GC enables Swift to be used in domains that don't want
it - think boot loaders, kernels, real time systems like audio
processing, etc."
 
"GC also has several *huge* disadvantages that are usually glossed over:
while it is true that modern GC's can provide high performance, they can
only do that when they are granted *much* more memory than the process
is actually using. Generally, unless you give the GC 3-4x more memory
than is needed, you'll get thrashing and incredibly poor performance.
Additionally, since the sweep pass touches almost all RAM in the
process, they tend to be very power inefficient (leading to reduced
battery life)."
 
Read more here:
 
https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html
 
 
Here is Chris Lattner's Homepage:
 
http://nondot.org/sabre/
 
And here is Chris Lattner's resume:
 
http://nondot.org/sabre/Resume.html#Tesla
 
 
This is why i have invented the following scalable algorithm and its
implementation that makes Delphi and FreePascal more powerful:
 
My invention that is my scalable reference counting with efficient
support for weak references version 1.35 is here..
 
Here i am again, i have just updated my scalable reference counting with
efficient support for weak references to version 1.35, I have just added
a TAMInterfacedPersistent that is a scalable reference counted version,
and now i think i have just made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But i don't agree with the writing of the author of the above web
page, because i think you have to understand the "spirit" of Delphi,
here is why:
 
A component is supposed to be owned and destroyed by something else,
"typically" a form (and "typically" means in english: in "most" cases,
and this is the most important thing to understand). In that scenario,
reference count is not used.
 
If you pass a component as an interface reference, it would be very
unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
Also because i have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just
replace TInterfacedObject with my TAMInterfacedObject that is the
scalable reference counted version, and replace TInterfacedPersistent
with my TAMInterfacedPersistent that is the scalable reference counted
version. You will find both my TAMInterfacedObject and my
TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To know
how to use weak references, please take a look at the demo that i have
included called example.dpr and look inside my zip file at the tutorial
about weak references, and to know how to use delegation, take a look
at the demo that i have included called test_delegation.pas and at the
tutorial inside my zip file that teaches you how to use delegation.
 
I think my Scalable reference counting with efficient support for weak
references is stable and fast, and it works on both Windows and Linux,
and my scalable reference counting scales on multicore and NUMA systems,
and you will not find it in C++ or Rust, and i don't think you will find
it anywhere, and you have to know that this invention of mine solves
the problem of dangling pointers and it solves the problem of memory
leaks and my scalable reference counting is "scalable".
 
And please read the readme file inside the zip file that i have just
extended to make you understand more.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.35 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 12:19PM -0700

Hello..
 
 
An NP-hard problem is one for which no known algorithm can find a
solution in polynomial time, so the time to find a solution grows
exponentially with problem size. Although it has not been definitively
proven that there is no polynomial algorithm for solving NP-hard
problems, many eminent mathematicians have tried to find one and
failed.
 
Race condition detection is NP-hard
 
Read more here:
 
https://pages.mtu.edu/~shene/NSF-3/e-Book/RACE/difficult.html
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 12:17PM -0700

Hello,
 
 
Here are the advantages and disadvantages of automation:
 
Following are some of the advantages of automation:
 
1. Automation is the key to the shorter workweek. Automation will allow
the average number of working hours per week to continue to decline,
thereby allowing greater leisure hours and a higher quality life.
 
2. Automation brings safer working conditions for the worker. Since
there is less direct physical participation by the worker in the
production process, there is less chance of personal injury to the worker.
 
3. Automated production results in lower prices and better products. It
has been estimated that the cost to machine one unit of product by
conventional general-purpose machine tools requiring human operators may
be 100 times the cost of manufacturing the same unit using automated
mass-production techniques. The electronics industry offers many
examples of improvements in manufacturing technology that have
significantly reduced costs while increasing product value (e.g., colour
TV sets, stereo equipment, calculators, and computers).
 
4. The growth of the automation industry will itself provide employment
opportunities. This has been especially true in the computer industry;
as the companies in this industry have grown (IBM, Digital Equipment
Corp., Honeywell, etc.), new jobs have been created. These new jobs
include not only workers directly employed by these companies, but also
computer programmers, systems engineers, and others needed to use and
operate the computers.
 
5. Automation is the only means of increasing the standard of living.
Only through productivity increases brought about by new automated
methods of production is it possible to advance the standard of living.
Granting wage increases without a commensurate increase in productivity
will result in inflation. To afford a better society, it is a must to
increase productivity.
 
Following are some of the disadvantages of automation:
 
1. Automation will result in the subjugation of the human being by a
machine. Automation tends to transfer the skill required to perform work
from human operators to machines. In so doing, it reduces the need for
skilled labour. The manual work left by automation requires lower skill
levels and tends to involve rather menial tasks (e.g., loading and
unloading workpart, changing tools, removing chips, etc.). In this
sense, automation tends to downgrade factory work.
 
2. There will be a reduction in the labour force, with resulting
unemployment. It is logical to argue that the immediate effect of
automation will be to reduce the need for human labour, thus displacing
workers.
 
3. Automation will reduce purchasing power. As machines replace workers
and these workers join the unemployment ranks, they will not receive the
wages necessary to buy the products brought by automation. Markets will
become saturated with products that people cannot afford to purchase.
Inventories will grow. Production will stop. Unemployment will reach
epidemic proportions and the result will be a massive economic depression.
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 11:41AM -0700

Hello,
 
 
Beyond Limits: Rethinking the next generation of AI
 
Read more here:
 
https://www.computerworld.com/article/3405897/beyond-limits-rethinking-the-next-generation-of-ai.html
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 11:26AM -0700

Hello..
 
 
My Parallel archiver and my Parallel Compression Library were updated
 
My Parallel archiver was updated to version 5.0
 
And my Parallel Compression Library was updated to version 4.41
 
The Zstandard Dynamic Link Libraries for Windows and the Zstandard
Shared Libraries for Linux were updated to the newer versions.
 
And i think that both my Parallel Compression Library and my Parallel
archiver are stable and fast.
 
I have done a quick calculation of the scalability prediction for my
Parallel Compression Library and my Parallel archiver, and i think it's
good: they can scale beyond 100X on NUMA systems.
 
You can read about them and download them from:
 
https://sites.google.com/site/scalable68/parallel-archiver
 
And from:
 
https://sites.google.com/site/scalable68/parallel-compression-library
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 28 02:40PM -0700

Hello..
 
 
About the logical consistency of my writing about abstraction and
efficiency:
 
I wrote previously the following:
 
"You can feel it by seeing that mathematically a+a=2*a, it is also
about abstraction, it is like commutative, a+a abstract 2*a
and 2*a abstract a+a"
 
 
Is it logically consistent to say so ?
 
Yes, because you have to read what follows, here is what follows:
 
"And you can also run the abstraction of 2*a or a+a in your brain and if
your brain contains the consciousness of the understanding of the
abstractions, then the understanding of the abstractions will come to
you quickly, so then, is the understanding of the abstractions is part
of the process that we call abstraction ? i think it is
a more appropriate philosophy, so i think we can answer with
a "yes"."
 
 
So as you are noticing i am "including" in my philosophy, that i think
is more appropriate, the understanding of the abstractions in the
process that we call abstraction. So there is no logical inconsistency.
 
 
So read again my previous post:
 
 
About abstraction and efficiency..
 
 
When you abstract and say or write mathematically:
 
a+a= 2*a
 
I think this is the most important part of the philosophy
of computing or parallel computing, i think you have to be a wise type
of person like me to see it clearly..
 
Philosophy about computing is something really important,
what are we doing in computing or parallel computing ? i mean how to
abstract the answer to feel it much more correctly ?
 
You can feel it by seeing that mathematically a+a=2*a, it is also
about abstraction, it is like commutative, a+a abstract 2*a
and 2*a abstract a+a, and you can also run the abstraction
of 2*a or a+a in your brain and if your brain contains the
consciousness of the understanding of the abstractions,
then the understanding of the abstractions will come to you quickly,
so then, is the understanding of the abstractions part
of the process that we call abstraction ? i think it is
a more appropriate philosophy, so i think we can answer with
a "yes".
 
So now by analogy you are feeling more how to abstract much more
correctly the philosophy of computing, it becomes clearer
that in computing or parallel computing we are abstracting more and more
towards higher level of abstractions, and we are organizing those
abstractions like a "language" to be executed in computers, and the
understanding of the abstractions must be part of the process of
abstracting in computing or parallel computing, and the abstractions
must be "efficient" and then we are also running those higher level
abstractions in our computers.
 
This is why you have previously seen me posting the following, read it
carefully:
 
Analogy with parallel computing..
 
My personality is more complex, you have to understand me more,
when i say i am also a gay like Chevy Chase because i am more
humoristic, you have to understand this "abstraction" of saying
humoristic, i am humoristic like Chevy Chase because i am more
positive and i want the others to be more positive, so i can
be humoristic to make you positive, but my humoristic way
of doing is more "smart", because i can use a sophisticated humoristic
manner to teach you more about morality and about life and i am
more intellectual in doing so.
 
And speaking about "abstractions", i think it is a good subject
of philosophy, because i think you have to be capable
of philosophy about computing, i think one of the main part
of computing is also about abstracting, but it is not only
about abstracting but you have to abstract and be sophisticated
in it by making your abstractions "efficient". I give you an example:
 
As you know i am an inventor of many scalable algorithms, and
one of my latest inventions is a Fast Mutex that is adaptive,
so i have extracted the abstractions from my Fast Mutex,
and those abstractions are like a language or like an automaton
that is also like a protocol that is constituted of a language,
so when i execute the abstraction that is the Enter() method, it will
enter the Fast Mutex, and when i execute the abstraction that is
the Leave() method, it will leave the Fast Mutex, but you have
to be more smart, because it is "not" enough to abstract, you
have to be more efficient, i mean that i am thinking like a researcher
when i have invented my last Fast Mutex by taking into account
the following characteristics, so i have made my new Fast Mutex powerful
by making it as the following:
 
 
1- Starvation-free
2- Good fairness
3- It efficiently keeps the cache coherence traffic very low
4- Very good fast path performance (it has the same performance as the
scalable MCS lock when there is contention.)
5- And it has a good preemption tolerance.
 
 
I think that you will not find anywhere this new invention of mine.
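
And to make the Enter() and Leave() abstractions above more concrete,
here is a minimal FreePascal sketch of the general shape of such a
fast mutex (an illustration of the abstraction only, not my actual
adaptive algorithm): spin briefly on the fast path, then fall back to
a kernel event under contention:

uses
  SyncObjs;

type
  TSimpleFastMutex = class
  private
    FState: LongInt; { 0 = free, 1 = held }
    FWaitEvent: TEvent;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Enter;
    procedure Leave;
  end;

constructor TSimpleFastMutex.Create;
begin
  inherited Create;
  FState := 0;
  FWaitEvent := TEvent.Create(nil, False, False, '');
end;

destructor TSimpleFastMutex.Destroy;
begin
  FWaitEvent.Free;
  inherited Destroy;
end;

procedure TSimpleFastMutex.Enter;
var
  Spin: Integer;
begin
  { fast path: try to grab the mutex with bounded spinning }
  for Spin := 1 to 100 do
    if InterlockedCompareExchange(FState, 1, 0) = 0 then
      Exit;
  { slow path: block on the event; the short timeout keeps this sketch
    free of the classic lost-wakeup subtlety }
  while InterlockedCompareExchange(FState, 1, 0) <> 0 do
    FWaitEvent.WaitFor(1);
end;

procedure TSimpleFastMutex.Leave;
begin
  InterlockedExchange(FState, 0);
  FWaitEvent.SetEvent; { wake a waiter, if any }
end;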
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 28 12:25PM -0700

Hello...
 
 
I am gay (more positive and more humoristic or more funny) like this
beautiful song:
 
https://www.youtube.com/watch?v=dqUdI4AIDF0
 
I am in real life more positive, it is genetic in me, i am positive.
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
