Tuesday, February 25, 2020

Digest for comp.programming.threads@googlegroups.com - 11 updates in 10 topics

Autist <autist69@gmail.com>: Feb 19 07:31AM +0100

> Hello,
> Don't worry about my poems, because i am not crazy; i am a gentleman type of person and have just posted a few poems of Love of mine, and i will stop it, and from now on i will post about parallel programming and such.
 
You are crazy, and you can't assess yourself, so you will
continue to post "poems".
Autist <autist69@gmail.com>: Feb 17 08:22PM +0100

You should get proper psychiatric treatment. Otherwise you
will still be posting here in 30 years, when Usenet is dead.
aminer68@gmail.com: Feb 24 02:58PM -0800

Hello,
 
 
More about arabs..
 
 
I am a white Arab, and I think Arabs are smart people.
The Babylonians of Iraq were racially Arabs; read about them here:
 
3,700-year-old Babylonian tablet rewrites the history of maths - and shows the Greeks did not develop trigonometry
 
Read more here:
 
https://www.telegraph.co.uk/science/2017/08/24/3700-year-old-babylonian-tablet-rewrites-history-maths-could/
 
 
Also read the following about Arabs:
 
 
Research: Arab Inventors Make the U.S. More Innovative
 
It turns out that the U.S. is a major home for Arab inventors. In the five-year period from 2009 to 2013, there were 8,786 U.S. patent applications in our data set that had at least one Arab inventor. Of the total U.S. patent applications, 3.4% had at least one Arab inventor, despite the fact that Arab inventors represent only 0.3% of the total population.
 
Read more here:
 
https://hbr.org/2017/02/arab-inventors-make-the-u-s-more-innovative
 
 
Even Steve Jobs, the co-founder of Apple, had a Syrian immigrant father called Abdulfattah Jandali.
 
 
Read more about it here:
 
https://www.macworld.co.uk/feature/apple/who-is-steve-jobs-syrian-immigrant-father-abdul-fattah-jandali-3624958/
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 24 02:55PM -0800

Hello,
 
 
Arabs are moving ahead; even Palestine is moving ahead. Read the following to notice it:
 
Palestinian students, US experts co-create car application to reduce vehicle emissions
 
https://www.globaltimes.cn/content/1180210.shtml
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 24 11:42AM -0800

Hello,
 
 
About Memory Ordering and Atomicity..
 
Read the following; it says:
 
"The performance gain from allowing memory reordering is small, and it doesn't make up for the extra headaches that come from difficult-to-find failures."
 
Read more here:
 
http://www.informit.com/articles/article.aspx?p=1676714&seqNum=5
 
 
So i think what i wrote previously about it is true; read it again carefully:
 
Here is another problem with ARM processors..
 
About SC and TSO and RMO hardware memory models..
 
I have just read the following webpage about the performance difference
between the SC, TSO, and RMO hardware memory models.
 
I think TSO is better: it costs only around 3% to 6% in performance
compared to RMO, and it is a simpler programming model than RMO. So i think ARM
must support TSO to be compatible with x86, which is TSO.
 
Read more here to notice it:
 
https://infoscience.epfl.ch/record/201695/files/CS471_proj_slides_Tao_Marc_2011_1222_1.pdf
 
About memory models and sequential consistency:
 
As you have noticed, i am working with the x86 architecture..
 
Even though x86 gives up on sequential consistency, it's among the most
well-behaved architectures in terms of the crazy behaviors it allows.
Most other architectures implement even weaker memory models.
 
The ARM memory model is notoriously underspecified, but it is essentially a
form of weak ordering, which provides very few guarantees. Weak ordering
allows almost any operation to be reordered, which enables a variety of
hardware optimizations but is also a nightmare to program at the lowest
levels.
 
Read more here:
 
https://homes.cs.washington.edu/~bornholt/post/memory-models.html
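 
As a small illustration that i am adding here (an assumed example, not taken from the links above), here is the classic "store buffering" litmus test in C++: with relaxed atomics (and on hardware weaker than SC, where a store can be delayed past a later load), both threads can read 0, while making every access memory_order_seq_cst forbids that outcome:

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void thread1() {
    x.store(1, std::memory_order_relaxed);   // store to x
    r1 = y.load(std::memory_order_relaxed);  // may be reordered before the store
}

void thread2() {
    y.store(1, std::memory_order_relaxed);   // store to y
    r2 = x.load(std::memory_order_relaxed);  // may be reordered before the store
}

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join();
    t2.join();
    // r1 == 0 && r2 == 0 is possible with relaxed ordering; it becomes
    // impossible if all four accesses use std::memory_order_seq_cst.
    std::printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}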
 
 
Memory Models: x86 is TSO, TSO is Good
 
Essentially, the conclusion is that x86 in practice implements the old
SPARC TSO memory model.
 
The big take-away from the talk for me is that it confirms the
observation made many times before that SPARC TSO seems to be the optimal
memory model. It is sufficiently understandable that programmers can
write correct code without having barriers everywhere. It is
sufficiently weak that you can build fast hardware implementations that
can scale to big machines.
 
Read more here:
 
https://jakob.engbloms.se/archives/1435
 
 
Thank you,
Amine Moulay Ramdane.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 24 09:21PM +0100

> "The performance gain from allowing memory reordering is small, and it doesn't make up for the extra headaches that come from difficult-to-find failures."
 
It's impossible to estimate this gain because you would
need an identical CPU with and without memory-reordering.
aminer68@gmail.com: Feb 24 11:25AM -0800

Hello,
 
 
More precision about my previous post about Turing completeness and parallel programming..
 
You have to know that a Turing-complete system can be proven mathematically to be capable of performing any computation that a Turing machine can perform, that is, any computable function or computer program.
 
So now you understand the power of an "expressiveness" that is
Turing-complete.
 
For example, i am working with the tool called "Tina" (read about it below). It is a powerful tool that permits working on Petri nets and knowing about the boundedness and liveness of Petri nets. For example, Tina supports Timed Petri nets, which are Turing-complete, so the power of their expressiveness is Turing-complete. I think this level of expressiveness is good for parallel programming and such, but it is not an efficient high-level expressiveness. Still, Petri nets are good for parallel programming.
 
Read the rest of my previous thoughts to know more:
 
About deadlocks and race conditions in parallel programming..
 
I have just read the following paper:
 
Deadlock Avoidance in Parallel Programs with Futures
 
https://cogumbreiro.github.io/assets/cogumbreiro-gorn.pdf
 
So as you are noticing, you can have deadlocks in parallel programming
by introducing circular dependencies among tasks waiting on future values,
or by introducing circular dependencies among tasks waiting on Windows
event objects or other such synchronisation objects. So you have to have a general tool that detects deadlocks. But notice that the Valgrind tool for C++ can detect only deadlocks coming from Pthread locks; read the following to notice it:
 
http://valgrind.org/docs/manual/hg-manual.html#hg-manual.lock-orders
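 
As a small illustration that i am adding here (an assumed C++ sketch, not taken from the paper above), here is a deadlock created by a circular dependency between two tasks waiting on each other's future values; no Pthread lock is involved, so a lock-order checker alone would not flag it:

#include <future>
#include <thread>

int main() {
    std::promise<int> pa, pb;
    std::future<int>  fa = pa.get_future();
    std::future<int>  fb = pb.get_future();

    // Task A waits on B's value before producing its own.
    std::thread a([&] { int v = fb.get(); pa.set_value(v + 1); });
    // Task B waits on A's value before producing its own: circular wait.
    std::thread b([&] { int v = fa.get(); pb.set_value(v + 1); });

    a.join();   // never returns: both tasks block forever
    b.join();
    return 0;
}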
 
So this is not good; you have to have a general way that permits
detecting deadlocks on locks and mutexes, and also deadlocks from introducing
circular dependencies among tasks waiting on future values, or deadlocks
from introducing circular dependencies among tasks
waiting on Windows event objects or such other synchronisation objects, etc.
This is why i have talked before about this general way that detects
deadlocks, and here it is; read about it in my following thoughts:
 
Yet more precision about the invariants of a system..
 
I was just thinking about Petri nets, and i have studied them some more.
They are useful for parallel programming, and what i have noticed by studying them is that there are two methods to prove that there is no deadlock in the system: structural analysis with the place invariants that you have to find mathematically, or the reachability tree. But we have to notice that the structural analysis of Petri nets teaches you more, because it permits you to prove that there is no deadlock in the system, and the place invariants are calculated mathematically from the following system of the given Petri net:
 
Transpose(vector) * Incidence matrix = 0
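 
As a small illustration that i am adding here (my own trivial example, not taken from the tutorial below), consider a Petri net with two places p1 and p2 and two transitions: t1 moves a token from p1 to p2, and t2 moves it back. Its incidence matrix, with one row per place and one column per transition, is:

          t1   t2
   p1 [   -1   +1 ]
   p2 [   +1   -1 ]

The vector (1, 1) satisfies the equation above, so m(p1) + m(p2) is a place invariant: the total number of tokens in p1 and p2 never changes, whatever the firing sequence.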
 
So you apply Gaussian elimination or the Farkas algorithm to the incidence matrix to find the place invariants. And as you will notice, those place-invariant calculations of Petri nets look like Markov chains in mathematics, with their vector of probabilities and their transition matrix of probabilities; using Markov chains you can mathematically calculate where the vector of probabilities will "stabilize", and that gives you very important information. You can do it by solving the following mathematical system:
 
Unknown vector1 of probabilities * transition matrix of probabilities = Unknown vector1 of probabilities.
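 
As a small numeric illustration that i am adding here (my own assumed example), take the transition matrix of probabilities:

   P = [ 0.9   0.1 ]
       [ 0.5   0.5 ]

Solving (a, b) * P = (a, b) with a + b = 1 gives 0.9a + 0.5b = a, so b = 0.2a, and therefore a = 5/6 and b = 1/6: the vector of probabilities stabilizes at about 83.3% and 16.7%, whatever the starting vector.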
 
Solving this system of equations is very important in economics and other fields, and you can notice that it is like calculating the invariants, because the invariant in the system above is the vector of probabilities that is obtained. This invariant, like the invariants of the structural analysis of Petri nets, gives you very important information about the system, such as where market shares will stabilize, which is calculated this way in economics.
 
About reachability analysis of a Petri net..
 
As you have noticed in my Petri nets tutorial example (read below), i am analysing the liveness of the Petri net, because there is a rule that says:
 
If a Petri net is live, that means that it is deadlock-free.
 
Reachability analysis of a Petri net with Tina gives you the necessary information about the boundedness and liveness of the Petri net. So if it tells you that the Petri net is "live", then there is no deadlock in it.
 
Tina and Partial order reduction techniques..
 
With the advancement of computer technology, highly concurrent systems are
being developed. The verification of such systems is a challenging task, as
their state space grows exponentially with the number of processes.
Partial order reduction is an effective technique to address this problem.
It relies on the observation that the effect of executing transitions
concurrently is often independent of their ordering.
 
Tina is using "partial-order" reduction techniques aimed at preventing
combinatorial explosion, read more here to notice it:
 
http://projects.laas.fr/tina/papers/qest06.pdf
 
About modeling and detection of race conditions and deadlocks in parallel programming..
 
I have just taken a further look at the following project in Delphi,
called DelphiConcurrent, by an engineer called Moualek Adlene from France:
 
https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas
 
And i have just taken a look at the following webpage of Dr Dobb's journal:
 
Detecting Deadlocks in C++ Using a Locks Monitor
 
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644
 
And i think that both of them are using techniques that are not as good as
analysing deadlocks with Petri nets in parallel applications. For example,
the above two methods address only locks, mutexes, or reader-writer
locks; they do not address semaphores, event objects, and such other synchronization objects, so they are not good. This is why i have written a tutorial that shows my methodology of analysing and detecting deadlocks in parallel applications with Petri nets. My methodology is more sophisticated because it is a generalization and it models with Petri nets a broader range of synchronization objects, and in my tutorial i will soon add other synchronization objects. You have to look at it; here it is:
 
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
 
You have to get the powerful Tina software to run the Petri net examples inside my tutorial; here is the Tina software:
 
http://projects.laas.fr/tina/
 
Also, to detect race conditions in parallel programming, you have to take a look at the following new tutorial that uses the powerful Spin tool:
 
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
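 
As a small illustration that i am adding here (an assumed C++ example, not taken from the tutorial above), this is the kind of race condition such tools are meant to catch: two threads updating a shared counter without synchronization, so increments can be lost:

#include <iostream>
#include <thread>

int counter = 0;   // shared and unsynchronized: this is the bug

void work() {
    for (int i = 0; i < 100000; ++i)
        ++counter;             // racy read-modify-write
}

int main() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    // Expected 200000, but because of the data race the result is usually
    // smaller and changes from run to run. Declaring counter as
    // std::atomic<int> (or protecting it with a mutex) removes the race.
    std::cout << counter << std::endl;
    return 0;
}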
 
Here is how to install the Spin Model Checker on Windows:
 
I invite you to look at this video to learn how to install Spin model checker and iSpin:
 
https://www.youtube.com/watch?v=MGzmtWi4Oq0
 
I have installed and configured them correctly, and i am working with them in parallel programming to detect race conditions etc.

This is how you will get much more professional at detecting deadlocks
and race conditions in parallel programming.
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 24 11:20AM -0800

Hello,
 
 
More precision about the Haskell programming language..
 
 
As you have noticed in my previous post, i have just talked about the importance
of knowing the advantages and disadvantages of this or that programming language. For example, Haskell has disadvantages: the only way to
implement fairness in Haskell is to abandon "composability", and
we know that fairness is an important characteristic needed by realtime safety-critical systems; read more carefully here to notice it:
 
https://books.google.ca/books?id=wSkRAAAAQBAJ&pg=PA192&lpg=PA192&dq=fairness+and+haskell+and+stm&source=bl&ots=ow4i3rBcTG&sig=ACfU3U22X6q3zLmEeKOWHA3oHBMWBzqoQg&hl=en&sa=X&ved=2ahUKEwjHkKWC8OrnAhUplXIEHV9LACEQ6AEwBHoECAoQAQ#v=onepage&q=fairness%20and%20haskell%20and%20stm&f=false
 
 
And here is another disadvantage of Haskell:
 
Also, I have just taken a look at Haskell, and i think
i am quite capable and Haskell is easy for me to learn.
But to be more efficient, here is what i have also just discovered;
take a look at the following about the MVars of Haskell:
 
http://neilmitchell.blogspot.com/2012/06/flavours-of-mvar_04.html
 
 
It is with this Haskell primitive called MVar that you construct
higher-level abstractions, for example to make a FIFO queue that
is "energy" efficient. It also permits composing synchronization objects that use signals to signal other processes or threads. But as you have noticed in my previous post, this can introduce circular dependencies among tasks waiting on this kind of signal-based synchronization objects,
and this can cause deadlocks, so
Haskell is not immune to deadlocks.
 
Also, there is another disadvantage of Haskell; read the following:
 
Functional programming: A step backward
 
Unlike imperative code, functional code doesn't map to simple language constructs. Rather, it maps to mathematical constructs.
 
We've gone from wiring to punch cards to assembler to macro assembler to C (a very fancy macro assembler) and on to higher-level languages that abstract away much of the old machine complexity. Each step has taken us a little closer to the scene in "Star Trek IV" where a baffled Mr. Scott tries to speak instructions into a mouse. After decades of progress in making programming languages easier for humans to read and understand, functional programming syntax turns back the clock.
 
Functional programming addresses the concurrency problem of state but often at a cost of human readability. Functional programming may be entirely appropriate for many circumstances. Ironically, it might even help bring computer and human languages closer together indirectly through defining domain-specific languages. But its difficult syntax makes it an extremely poor fit for general-purpose application programming. Don't jump on this bandwagon just yet, especially for risk-averse projects.
 
 
Read more here:
 
https://www.javaworld.com/article/2078610/functional-programming--a-step-backward.html
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 24 10:25AM -0800

Hello,
 
 
I correct: i mean "Haskell", not Haskel; read again..
 
What is it to be smart?
 
Am i like a genius? Am i smarter? Am i a wise type of person?
 
 
I will not answer those questions in this post; instead i will talk about an important subject, because it is one of the most important: to be efficient, you first have to be efficient at knowing the advantages and disadvantages of this or that tool, and second you have to know that a tool is also the efficient knowledge that best simplifies knowledge. Those are important requirements for avoiding the stupid approach.. And now you are quickly noticing that when you become more knowledgeable of the advantages and disadvantages of this or that tool, you will become less idealistic, like Chad of comp.programming; this Chad looks like the following PhD here:
 
https://bartoszmilewski.com/
 
 
You will notice that both of them are idealistic and are neglecting the criterion of "adaptability". Notice that Bartosz Milewski in the above web link is idealistic: he wants us to like programming only with Haskell. So he is idealistic, but his idealism lacks pragmatism, since we have to take into account the criterion of adaptability. To be able to adapt efficiently we have to be efficient at selecting the right tools, and by reading my posts here you are going to notice that even Rust or C++ or Haskell have advantages and disadvantages, so the
best way to be pragmatic is to know about the advantages and disadvantages
of this or that tool. And now you are noticing that, so as not to be archaic,
you have to know that today we have to be capable
of being efficiently multicultural; that means, for example, that you have to be able to notice that languages like Delphi have their advantages, C++ has its advantages, and Rust has its advantages, etc. So then you will be capable of seeing that to adapt efficiently you have to be like multicultural and know how to use Delphi or C++ or Rust etc. for the right job.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 24 06:25AM -0800

Hello,
 
 
Quadsort: Introduction to a new stable sorting algorithm faster than quicksort
 
Read more here:
 
https://github.com/scandum/quadsort
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 24 04:08AM -0800

Hello,
 
 
Read again, i correct a last typo, because i think fast and i write my posts fast..
 
 
What is it to be smart ?
 
Am i like a genius ? am i more smart ? am i a wise type of person ?
 
 
I will not answer those questions in the following post, but in the following post i will talk about an important subject because it is one of the the most important, and it is that so that to be efficient you have first to be able at being efficient at knowing the advantages and disadvantages of this or that tool, and second you have to know that the tool is also the efficient knowledge that simplify at best the knowledge, those are important requirements so that to avoid the stupid approach.. and now you are quickly noticing that when you become more knowledgeable of the advantages and disadvantages of this or that tool you will become less idealistic like Chad of comp.programming , this Chad looks like the following PhD here:
 
https://bartoszmilewski.com/
 
 
You will notice that both of them are idealistic and they are neglecting the criterion of "adaptability", since notice that Bartosz Milewski in the above web link is idealistic and he is wanting from us to like programming only with Haskel, so he is idealistic, but his idealism lacks pragmatism, since we have to take into account the criterion of adaptability, so to be able to adapt efficiently we have to be able at being efficient at selecting the right tools, and now by reading my posts here , you are going to notice that even Rust or C++ or Haskel have disadvantages and disadvantages, so the
best way to be pragmatic is to know about the advantages and disadvantages
of this or that tool, and now you are noticing that so that
to not be archaism you have to know that today we have to be capable
of being efficiently multicultural, that means that for example you have to be able to notice that the languages like Delphi has its advantage and C++ has its advantage and Rust has its advantage etc., so then you will be capable of seeing that to be able to be efficient adaptability you have to be like multicultural and know how to use Delphi or C++ or Rust etc. for the right job.
 
 
 
Thank you,
Amine Moulay Ramdane.