Monday, February 24, 2020

Digest for comp.programming.threads@googlegroups.com - 12 updates in 12 topics

aminer68@gmail.com: Feb 23 07:45PM -0800

Hello,
 
 
What is it to be smart ?
 
Am i like a genius? Am i smarter? Am i a wise type of person?
 
 
I will not answer those questions in this post. Instead i will talk about an important subject, one of the most important: to be efficient you first have to be efficient at knowing the advantages and disadvantages of this or that tool, and second you have to know that a tool is also the efficient knowledge that best simplifies knowledge. Those are important requirements for avoiding the stupid approach. And you will quickly notice that when you become more knowledgeable about the advantages and disadvantages of this or that tool, you will become less idealistic than Chad of comp.programming; this Chad looks like the following PhD here:
 
https://bartoszmilewski.com/
 
 
You will notice that both of them are idealistic and neglect the criterion of "adaptability". Bartosz Milewski at the above web link is idealistic and wants us to like programming only with Haskell, but his idealism lacks pragmatism, since we have to take into account the criterion of adaptability. To adapt efficiently we have to be efficient at selecting the right tools, and by reading my posts here you will notice that even Rust or C++ or Haskell have advantages and disadvantages. So the best way to be pragmatic is to know the advantages and disadvantages of this or that tool. And to avoid being archaic you have to be capable today of being efficiently multicultural: you have to be able to notice that languages like Delphi have their advantages, C++ has its advantages, Rust has its advantages, etc. Then you will see that to be efficiently adaptable you have to be multicultural and know how to use Delphi or C++ or Rust etc. for the right job.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 06:32PM -0800

Hello,
 
 
I am commenting and writing more about my posts so that you understand better why i am posting them. As you have noticed in my previous posts, i was saying that there is a need for an efficient high-level top-down approach that makes us much more efficient at selecting the right tools. Notice in my current post that i am in accordance with my logical reasoning, because here is how to be efficient at choosing between Lock versus Lock-free; read my following writing:
 
Lock Versus Lock-Free..
 
 
The class of problems that can be solved by lock-free approaches is limited.
 
Furthermore, lock-free approaches can require restructuring a problem.
As soon as multiple shared data-structures are modified simultaneously, the only practical approach is to use a lock.
 
All lock-free dynamic-size data-structures using such CAA (Compare-and-assign) require some form of garbage collector to lazily delete storage when it is no longer referenced. In languages with garbage collection, this capability comes for free (at the cost of garbage collection). For languages without garbage collection, the code is complex and error prone in comparison with locks, requiring epoch-based reclamation, read-copy-update (RCU), or hazard pointers.
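To make the CAA (CAS) retry idea concrete, here is a minimal illustrative FreePascal sketch (the AtomicAdd name is made up for this example): a lock-free update snapshots the old value, computes the new one, and publishes it only if nothing changed in between. Notice that a simple counter like this needs no memory reclamation; it is the node-based dynamic-size structures that need epoch-based reclamation, RCU or hazard pointers as described above.

program CasRetryDemo;
{$mode objfpc}

var
  Counter: LongInt = 0;

// A minimal CAS retry loop: snapshot the value, compute the update,
// and publish it only if the snapshot is still current; otherwise
// another thread won the race, so retry.
procedure AtomicAdd(var Target: LongInt; Delta: LongInt);
var
  Old: LongInt;
begin
  repeat
    Old := Target;
    // InterlockedCompareExchange returns the previous value of Target;
    // the swap succeeded only if that previous value equals our snapshot.
  until InterlockedCompareExchange(Target, Old + Delta, Old) = Old;
end;

begin
  AtomicAdd(Counter, 5);
  WriteLn(Counter);  // prints 5
end.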
 
While better performance is claimed for lock-free data-structures, there is no long-term evidence to support this claim. Many high-performance locking situations, e.g., operating-system kernels and databases, continue to use locking in various forms, even though a broad class of lock-free data-structures is readily available.
 
While lock-free data-structures cannot deadlock, there is seldom deadlock when using locks for the simple class of problems solvable by lock-free approaches. For example, protecting basic data-structure operations with locks is usually very straightforward. Normally deadlock occurs when accessing multiple resources simultaneously, which is not a class of problems dealt with by lock-free approaches. Furthermore, disciplined lock usage, such as ranking locks to avoid deadlock, works well in practice and is not onerous for the programmer. Finally, some static-analysis tools are helpful for detecting deadlock scenarios.
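And to show how easy disciplined lock ranking is, here is a small FreePascal sketch (LockA, LockB and the procedure name are illustrative): every thread that needs both locks takes them in the same global order, so a wait cycle, and hence deadlock, can never form between them.

program LockRankDemo;
{$mode objfpc}

uses
  SyncObjs;

var
  LockA, LockB: TCriticalSection;  // the ranking rule: A before B, always

procedure UpdateBothStructures;
begin
  LockA.Acquire;          // rank 1 first...
  try
    LockB.Acquire;        // ...then rank 2, in every thread
    try
      // ... modify both shared data-structures here ...
    finally
      LockB.Release;
    end;
  finally
    LockA.Release;
  end;
end;

begin
  LockA := TCriticalSection.Create;
  LockB := TCriticalSection.Create;
  UpdateBothStructures;
  LockB.Free;
  LockA.Free;
end.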
 
Lock-free approaches have thread-kill tolerance, meaning no thread owns a lock, so any thread can terminate at an arbitrary point without leaving a lock in the closed state. However, within an application, thread kill is an unusual operation and thread failure means an unrecoverable error or major reset.
 
A lock-free approach always allows progress of other threads, whereas locks can cause delays if the lock owner is preempted. However, this issue is a foundational aspect of preemptive concurrency, and there are ways to mitigate it for locks using scheduler-activation techniques. Lock-free is not immune to delays either: if a page containing part of the lock-based or lock-free data is evicted, there is a delay. Hence, lock-free is no better than lock-based if the page fault occurs on frequently accessed shared data. Given the increasing number of processors and the large amount of memory on modern computers, neither of these delays should occur often.
 
Lock-free approaches are reentrant, and hence can be used in signal handlers, which are implicitly concurrent; locking approaches cannot deal with this issue. Lock-free approaches are claimed not to have priority inversion. However, inversion can occur because of the spinning required with atomic instructions like CAA, as the hardware does not provide a bound for spinning threads. Hence, a low-priority thread can barge ahead of a high-priority thread because the low-priority thread just happens to win the race at the CAA instruction. Essentially, priority inversion is a foundational aspect of preemptive concurrency and can only be mitigated.
 
The conclusion is that for unmanaged programming languages (i.e., no garbage collection), using classical locks is simple, efficient, general, and causes issues only when the problem scales to multiple locks. For managed programming languages, lock-free data-structures are easier to implement, but they only handle a specific set of problems, and the programmer must accept other idiosyncrasies, like pauses in execution for garbage collection.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 06:09PM -0800

Hello,
 
 
About how to be efficient at selecting the right tools..
 
You have to understand me more. Why am i posting the way i am posting? Because i have noticed that people have to be more efficient at selecting the right tools, and this is why i am writing about criteria like: which is better, garbage collection or no garbage collection; and i am writing about energy efficiency and memory efficiency, and about memory safety and such. All this is to help others be more efficient at selecting the right tools, and it is also a good way of becoming smarter.
 
So now you are seeing more clearly the why of my posting here; i will explain more and more about my writing here, so that you understand me better.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 05:49PM -0800

Hello,
 
 
About the full paper on the Swarm chip and more..
 
I have just read the following full PhD paper from MIT about the new Swarm chip:
 
https://people.csail.mit.edu/sanchez/papers/2015.swarm.micro.pdf
 
I think there are disadvantages with this chip. First, it is using the same mechanisms as Transactional memory, but those mechanisms of Transactional memory are not so efficient (read below to know more), and we already have Intel hardware Transactional memory. And i don't think it is globally faster on parallelism than actual hardware and software; look at the benchmarks in the paper and you will understand more.
 
And about Transactional memory and more read my following thoughts:
 
About Hardware Transactional Memory and my invention that is my powerful Fast Mutex:
 
"As someone who has used TSX to optimize synchronization primitives, you can expect to see a ~15-20% performance increase, if (big if) your program is heavy on disjoint data access, i.e. a lock is needed for correctness, but conflicts are rare in practice. If you have a lot of threads frequently writing the same cache lines, you are probably going to see worse performance with TSX as opposed to traditional locking. It helps to think about TSX as transparently performing optimistic concurrency control, which is actually pretty much how it is implemented under the hood."
 
Read more here:
 
https://news.ycombinator.com/item?id=8169697
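To make the "optimistic concurrency control" remark concrete, here is a small FreePascal sketch of the same idea done in software: a version-counter (seqlock-style) read. This is only a software analogue of what TSX does in hardware, not TSX itself, and the names are made up for the example:

program OptimisticReadDemo;
{$mode objfpc}

uses
  SyncObjs;

var
  Version: LongInt = 0;         // even = stable, odd = a writer is active
  SharedX, SharedY: LongInt;
  WriteLock: TCriticalSection;  // writers still exclude each other

// Readers take no lock at all: they read optimistically and retry if
// the version shows a writer intervened (like an aborted transaction).
procedure OptimisticRead(out X, Y: LongInt);
var
  V1, V2: LongInt;
begin
  repeat
    V1 := Version;
    ReadWriteBarrier;           // FPC's full memory barrier
    X := SharedX;
    Y := SharedY;
    ReadWriteBarrier;
    V2 := Version;
  until (V1 = V2) and not Odd(V1);
end;

procedure LockedWrite(NewX, NewY: LongInt);
begin
  WriteLock.Acquire;
  try
    Inc(Version);               // odd: concurrent readers will retry
    ReadWriteBarrier;
    SharedX := NewX;
    SharedY := NewY;
    ReadWriteBarrier;
    Inc(Version);               // even again: the snapshot is stable
  finally
    WriteLock.Release;
  end;
end;

var
  A, B: LongInt;
begin
  WriteLock := TCriticalSection.Create;
  LockedWrite(1, 2);
  OptimisticRead(A, B);
  WriteLn(A, ' ', B);           // prints 1 2
  WriteLock.Free;
end.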
 
So as you are noticing, HTM (hardware transactional memory) and TM cannot replace locks when doing IO and for highly contended critical sections, and this is why i have invented my following powerful Fast Mutex:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
 
I am understanding this video, but i have to explain my work:
 
I am not like the techlead in the video above, because i am also an "inventor" who has invented many scalable algorithms and their implementations, and i am also inventing effective abstractions. I give you an example:
 
Read the following from the senior research scientist who is called Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you are noticing, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why i have just invented a new Fast Mutex that is adaptive and that is much, much better; i think mine is the "best", and i think you will not find it anywhere. My new Fast Mutex has the following characteristics:
 
1- Starvation-free
2- Good fairness
3- It efficiently keeps its cache-coherence traffic very low
4- Very good fast path performance
5- And it has a decent preemption tolerance.
 
This is how i am an "inventor", and i have also invented other scalable algorithms, such as a scalable reference counting with efficient support for weak references, a fully scalable Threadpool, and a fully scalable FIFO queue, and i have also invented other scalable algorithms and their implementations, and i think i will sell some of them to Microsoft or Google or Embarcadero or such software companies.
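So that you understand the idea of "adaptive", here is a generic spin-then-yield lock sketch in FreePascal. Note well: this is not my Fast Mutex algorithm, and it has none of the five characteristics above (it is neither starvation-free nor fair); it only illustrates what adapting between spinning and yielding means:

program AdaptiveLockDemo;
{$mode objfpc}

uses
  SysUtils;

const
  SpinLimit = 100;        // an arbitrary tuning threshold for this sketch

var
  LockFlag: LongInt = 0;  // 0 = free, 1 = held

procedure AdaptiveAcquire;
var
  Spins: Integer;
begin
  Spins := 0;
  while InterlockedCompareExchange(LockFlag, 1, 0) <> 0 do
  begin
    Inc(Spins);
    if Spins <= SpinLimit then
      ThreadSwitch        // spin briefly: good when hold times are short
    else
      Sleep(1);           // owner preempted or heavy contention: stop burning CPU
  end;
end;

procedure AdaptiveRelease;
begin
  InterLockedExchange(LockFlag, 0);  // atomic store releases the lock
end;

begin
  AdaptiveAcquire;
  WriteLn('in the critical section');
  AdaptiveRelease;
end.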
 
And about composability of lock-based systems now:
 
Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:
 
"Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.
 
There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.
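Here is a minimal FreePascal sketch of that first model (TSafeCounter is a made-up name for the example): the lock lives entirely inside the subsystem and is never held when control returns to the caller, so the component stays composable.

program ComposableDemo;
{$mode objfpc}

uses
  SyncObjs;

type
  // The lock is private to the subsystem and is never held across the
  // interface boundary, so callers can compose this with anything.
  TSafeCounter = class
  private
    FLock: TCriticalSection;
    FValue: Int64;
  public
    constructor Create;
    destructor Destroy; override;
    function Increment: Int64;
  end;

constructor TSafeCounter.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
end;

destructor TSafeCounter.Destroy;
begin
  FLock.Free;
  inherited Destroy;
end;

function TSafeCounter.Increment: Int64;
begin
  FLock.Acquire;
  try
    Inc(FValue);
    Result := FValue;
  finally
    FLock.Release;   // always released before returning to the caller
  end;
end;

var
  C: TSafeCounter;
begin
  C := TSafeCounter.Create;
  WriteLn(C.Increment);   // prints 1
  C.Free;
end.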
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized.
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
 
 
About deadlocks and race conditions in parallel programming..
 
I have just read the following paper:
 
Deadlock Avoidance in Parallel Programs with Futures
 
https://cogumbreiro.github.io/assets/cogumbreiro-gorn.pdf
 
So as you are noticing, you can have deadlocks in parallel programming by introducing circular dependencies among tasks waiting on future values, or by introducing circular dependencies among tasks waiting on Windows event objects or other such synchronisation objects. So you have to have a general tool that detects deadlocks. But notice that the tool called Valgrind for C++ can only detect deadlocks arising from Pthread locks; read the following to notice it:
 
http://valgrind.org/docs/manual/hg-manual.html#hg-manual.lock-orders
 
So this is not good. You have to have a general way that permits you to detect deadlocks on locks and mutexes, and deadlocks from circular dependencies among tasks waiting on future values, or among tasks waiting on Windows event objects or other such synchronisation objects, etc. This is why i have talked before about this general way that detects deadlocks, and here it is; read my following thoughts:
 
Yet more precision about the invariants of a system..
 
I was just thinking about Petri nets, and i have studied Petri nets more. They are useful for parallel programming, and what i have noticed by studying them is that there are two methods to prove that there is no deadlock in the system: there is the structural analysis with place invariants that you have to find mathematically, or you can use the reachability tree. But we have to notice that the structural analysis of Petri nets teaches you more, because it permits you to prove that there is no deadlock in the system, and the place invariants are calculated mathematically from the following system of the given Petri net:
 
Transpose(vector) * Incidence matrix = 0
 
So you apply Gaussian Elimination or the Farkas algorithm to the incidence matrix to find the place invariants, and as you will notice, those place-invariant calculations of Petri nets look like Markov chains in mathematics, with their vector of probabilities and their transition matrix of probabilities. Using Markov chains you can mathematically calculate where the vector of probabilities will "stabilize", which gives you very important information, and you can do it by solving the following mathematical system:
 
Unknown vector1 of probabilities * transition matrix of probabilities = Unknown vector1 of probabilities.
 
Solving this system of equations is very important in economics and other fields, and you can notice that it is like calculating the invariants, because the invariant in the system above is the vector1 of probabilities that is obtained. This invariant, like the invariants of the structural analysis of Petri nets, gives you very important information about the system, for example where market shares will stabilize, which is calculated this way in economics.
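Here is a small worked example of both calculations (the tiny net and the numbers are made up, just for illustration). Take a Petri net with two places P1, P2 and two transitions t1: P1 -> P2 and t2: P2 -> P1, so the incidence matrix C has one row per place and one column per transition; in LaTeX notation:

\[
C = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
x^{T} C = 0 \ \text{for} \ x = (1,1),
\]

so M(P1) + M(P2) is constant in every reachable marking: that is a place invariant. And for a 2-state Markov chain:

\[
\pi \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix} = \pi, \quad
\pi_1 + \pi_2 = 1
\ \Rightarrow\ 0.1\,\pi_1 = 0.5\,\pi_2
\ \Rightarrow\ \pi = (5/6,\ 1/6),
\]

so the chain "stabilizes" at 5/6 of the time in state 1 and 1/6 in state 2, which is exactly the kind of invariant information i am talking about above.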
 
About reachability analysis of a Petri net..
 
As you have noticed in my Petri nets tutorial example (read below),
i am analysing the liveness of the Petri net, because there is a rule
that says:
 
If a Petri net is live, that means that it is deadlock-free.
 
Because reachability analysis of a Petri net with Tina gives you the necessary information about boundedness and liveness of the Petri net. So if it tells you that the Petri net is "live", then there is no deadlock in it.
 
Tina and Partial order reduction techniques..
 
With the advancement of computer technology, highly concurrent systems
are being developed. The verification of such systems is a challenging
task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering.
 
Tina uses "partial-order" reduction techniques aimed at preventing combinatorial explosion; read more here to notice it:
 
http://projects.laas.fr/tina/papers/qest06.pdf
 
About modelizations and detection of race conditions and deadlocks
in parallel programming..
 
I have just taken a further look at the following project in Delphi, called DelphiConcurrent, by an engineer called Moualek Adlene from France:
 
https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas
 
And i have just taken a look at the following webpage of Dr Dobb's journal:
 
Detecting Deadlocks in C++ Using a Locks Monitor
 
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644
 
And i think that both of them are using techniques that are not as good as analysing deadlocks with Petri nets in parallel applications. For example, the above two methods only address locks, mutexes, or reader-writer locks; they do not address semaphores or event objects and other such synchronization objects, so they are not good. This is why i have written a tutorial that shows my methodology of analysing and detecting deadlocks in parallel applications with Petri nets. My methodology is more sophisticated because it is a generalization: it models with Petri nets a broader range of synchronization objects, and i will soon add other synchronization objects to my tutorial. You have to look at it; here it is:
 
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
 
You have to get the powerful Tina software to run my Petri Net examples
inside my tutorial, here is the powerful Tina software:
 
http://projects.laas.fr/tina/
 
Also, to detect race conditions in parallel programming, take a look at the following new tutorial that uses the powerful Spin tool:
 
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
 
This is how you will get much more professional at detecting deadlocks
and race conditions in parallel programming.
 
 
About Java and Delphi and Freepascal..
 
I have just read the following webpage:
 
Java is not a safe language
 
https://lemire.me/blog/2019/03/28/java-is-not-a-safe-language/
 
 
But as you have noticed the webpage says:
 
- Java does not trap overflows
 
But Delphi and Freepascal do trap overflows.
 
And the webpage says:
 
- Java lacks null safety
 
But Delphi has null safety, since i have just posted about it by saying the following:
 
Here is MyNullable library for Delphi and FreePascal that brings null safety..
 
Java lacks null safety. When a function receives an object, this object might be null. That is, if you see 'String s' in your code, you often have no way of knowing whether 's' contains an actual String unless you check at runtime. Can you guess whether programmers always check? They do not, of course. In practice, mission-critical software does crash without warning due to null values. We have two decades of examples. In Swift or Kotlin, you have safe calls or optionals as part of the language.
 
Here is MyNullable library for Delphi and FreePascal that brings null safety, you can read the html file inside the zip to know how it works, and you can download it from my website here:
 
https://sites.google.com/site/scalable68/null-safety-library-for-delphi-and-freepascal
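And to illustrate the null-safety idea itself, here is a generic FreePascal sketch with made-up names; it is not the actual MyNullable API (read the html file inside the zip for the real usage). The point is that the value cannot be read without either checking for presence or supplying a default:

program NullableDemo;
{$mode objfpc}
{$modeswitch advancedrecords}

type
  TNullableInt = record
  private
    FHasValue: Boolean;
    FValue: Integer;
  public
    procedure Assign(AValue: Integer);
    function HasValue: Boolean;
    function ValueOr(ADefault: Integer): Integer;
  end;

procedure TNullableInt.Assign(AValue: Integer);
begin
  FValue := AValue;
  FHasValue := True;
end;

function TNullableInt.HasValue: Boolean;
begin
  Result := FHasValue;
end;

function TNullableInt.ValueOr(ADefault: Integer): Integer;
begin
  if FHasValue then
    Result := FValue
  else
    Result := ADefault;   // no crash, unlike dereferencing a null
end;

var
  N: TNullableInt;
begin
  N := Default(TNullableInt);  // starts as "null" (no value present)
  WriteLn(N.ValueOr(-1));      // prints -1
  N.Assign(42);
  if N.HasValue then
    WriteLn(N.ValueOr(-1));    // prints 42
end.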
 
 
And the webpage says:
 
- Java allows data races
 
But for Delphi and Freepascal i have just written about how to prevent such concurrency problems; read again my thoughts above titled "Yet more precision about the invariants of a system..".
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 05:37PM -0800

Hello,
 
 
I correct a last typo, read again..
 
About Case Sensitivity..
 
The trend in computer languages has been to be case-sensitive for a long time now. Java, JavaScript, C#, C++ are all case-sensitive languages. This, in my opinion, is one of the most boneheaded design trends to come out of a group of people who are supposed to be "smart". You can argue semantics and personal preference to the ends of the earth, but I'll always win any debate by asking this one question that those of you who think that case-sensitivity is a good thing for language design simply cannot answer:
 
"How does complaining when I type 'int32' instead of 'Int32' make your compiler a better, more-productive, and useful 'tool' than if you simply just let it slide with a warning or hint?"
 
To me, computer languages are tools, not rules. You can evaluate the usefulness of an everyday household tool with some fairly objective and quantifiable metrics and chances are, people who have nail-guns are going to be more productive than people who use hammers.
 
Similar things can be applied to computer languages: Languages that help you get the right answer will yield higher productivity than compilers that simply complain when you get it wrong. On a completely different subject — file systems: they shouldn't be case sensitive either…. seriously why does my grandma care if she named her file "recipies.txt" or "Recipies.txt" and even as an engineer/power-user, why on earth would I want to have the privilege of having both?
 
 
Delphi is not case-sensitive.
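For example, the following compiles fine with FreePascal (and the same holds in Delphi, without the mode directive), because Pascal identifiers are case-insensitive; the three spellings below all name the same variable:

program CaseDemo;
{$mode objfpc}

var
  MyValue: Integer;

begin
  myvalue := 42;      // same identifier as MyValue
  WriteLn(MYVALUE);   // and again: prints 42
end.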
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 05:19PM -0800

Hello,
 
 
What about garbage collection?
 
Read what this serious specialist called Chris Lattner said:
 
"One thing that I don't think is debatable is that the heap compaction
behavior of a GC (which is what provides the heap fragmentation win) is
incredibly hostile for cache (because it cycles the entire memory space
of the process) and performance predictability."
 
"Not relying on GC enables Swift to be used in domains that don't want
it - think boot loaders, kernels, real time systems like audio
processing, etc."
 
"GC also has several *huge* disadvantages that are usually glossed over:
while it is true that modern GC's can provide high performance, they can
only do that when they are granted *much* more memory than the process
is actually using. Generally, unless you give the GC 3-4x more memory
than is needed, you'll get thrashing and incredibly poor performance.
Additionally, since the sweep pass touches almost all RAM in the
process, they tend to be very power inefficient (leading to reduced
battery life)."
 
Read more here:
 
https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html
 
Here is Chris Lattner's Homepage:
 
http://nondot.org/sabre/
 
And here is Chris Lattner's resume:
 
http://nondot.org/sabre/Resume.html#Tesla
 
 
This is why i have invented the following scalable algorithm and its implementation that make Delphi and FreePascal more powerful:
 
My invention that is my scalable reference counting with efficient support for weak references version 1.37 is here..
 
Here i am again: i have just updated my scalable reference counting with efficient support for weak references to version 1.37. I have just added a TAMInterfacedPersistent that is a scalable reference-counted version, and now i think i have made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But i don't agree with the writing of the author of the above web page, because i think you have to understand the "spirit" of Delphi. Here is why:
 
A component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means: in "most" cases, and this is the most important thing to understand). In that scenario, reference counting is not used.
 
If you pass a component as an interface reference, it would be very unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
Also because i have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just replace TInterfacedObject with my TAMInterfacedObject, which is the scalable reference-counted version, and replace TInterfacedPersistent with my TAMInterfacedPersistent, which is the scalable reference-counted version. You will find both my TAMInterfacedObject and my TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To know how to use weak references, please take a look at the demo that i have included, called example.dpr, and look inside my zip file at the tutorial about weak references. To know how to use delegation, take a look at the demo that i have included, called test_delegation.pas, and at the tutorial inside my zip file that teaches you how to use delegation.
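For example, here is a sketch of the replacement (IGreeter and TGreeter are made-up names for the example, and i am assuming a parameterless constructor as with TInterfacedObject; AMInterfacedObject.pas and TAMInterfacedObject are from my library as described above):

program RefCountDemo;
{$mode objfpc}

uses
  AMInterfacedObject;     // my unit, from the zip file

type
  IGreeter = interface    // a made-up interface, just for the example
    procedure Greet;
  end;

  // The only change from ordinary Delphi/FreePascal code: inherit from
  // TAMInterfacedObject instead of TInterfacedObject.
  TGreeter = class(TAMInterfacedObject, IGreeter)
    procedure Greet;
  end;

procedure TGreeter.Greet;
begin
  WriteLn('Hello');
end;

var
  G: IGreeter;
begin
  G := TGreeter.Create;   // reference counting is now the scalable version
  G.Greet;
end.  // G is released here; the object is freed when the count reaches 0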
 
I think my scalable reference counting with efficient support for weak references is stable and fast. It works on both Windows and Linux, and it scales on multicore and NUMA systems. You will not find it in C++ or Rust, and i don't think you will find it anywhere, and you have to know that this invention of mine solves the problem of dangling pointers and the problem of memory leaks, and my scalable reference counting is "scalable".
 
And please read the readme file inside the zip file that i have just extended to make you understand more.
 
You can download my new scalable reference counting with efficient support for weak references version 1.38 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 05:14PM -0800

Hello,
 
 
More about Energy efficiency..
 
You have to be aware that parallelization of the software can lower power consumption, and here is the formula that permits you to calculate the power consumption of "parallel" software programs:
 
Power consumption of the total cores = (The number of cores) * (1/(Parallel speedup))^3 * (Power consumption of the single core).
 
For example, with 4 cores and a parallel speedup of 4, the total power is 4 * (1/4)^3 = 1/16 of the single-core power.
 
 
Also read the following about energy efficiency:
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not always equate to energy efficiency. Compiled languages like C, C++, Rust, and Ada ranked as some of the most energy-efficient languages out there, and Java and FreePascal are also good at energy efficiency.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow, relative to CPUs
 
And "memory" usage efficiency is important for mobile devices.
 
So the Delphi and FreePascal compilers are also still "useful" for mobile devices, because Delphi and FreePascal are good if you are considering time and memory, or energy and memory. The following Pascal benchmark was done with FreePascal, and it shows that C, Go and Pascal do rather better if you're ranking languages by time and memory, or energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 05:11PM -0800

Hello,
 
 
About C++ and Delphi and Java..
 
In C++, the #include keyword is basically single-handedly responsible for murdering the productivity of C++ programmers around the world. I would guesstimate that billions and billions of dollars have been wasted waiting for C++ programs to build, and it is largely the fault of the #include keyword. Include is different from Delphi's "uses" and C#'s "using" keywords in that, in the "uses/using" paradigm, the files being used are considered to be precompiled. "Include" does not allow the included file to be precompiled (without essentially tricking the compiler using precompiled headers). Things that happen in the file before the #include can affect things that happen in the included file (essentially, all the included files are first merged into one large file before being compiled).
 
As a result of this silly and rarely useful rule, the same file often gets recompiled hundreds or even thousands of times in a single build operation. If you have hundreds of files, you get to waste lots of time by the company water cooler or check up on your facebook buddies. I have one embedded C app, designed for a device that has a measly 96k of RAM, that takes 5+ minutes to compile with all 8 VCores of my i7 machine pegged at 100%. This is for a target that has a measly 96k of RAM and 512K of flash! This is unacceptable. This has always been unacceptable. This will always be unacceptable, and it is another reason why C++ can't be the language of the 21st century.
 
 
More about compile time and build time..
 
Look here about Java it says:
 
 
"Java Build Time Benchmarks
 
I'm trying to get some benchmarks for builds and I'm coming up short via Google. Of course, build times will be super dependent on a million different things, but I'm having trouble finding anything comparable.
 
Right now: We've got ~2 million lines of code and it takes about 2 hours for this portion to build (this excludes unit tests).
 
What do your build times look like for similar sized projects and what did you do to make it that fast?"
 
 
Read here to notice it:
 
https://www.reddit.com/r/java/comments/4jxs17/java_build_time_benchmarks/
 
 
So 2 million lines of code of Java takes about 2 hours to build.
 
 
And how long do you think 2 million lines of code take Delphi to build?
 
Answer: Just about 20 seconds.
 
 
Here is the proof from Embarcadero, read and look at the video to be convinced about Delphi:
 
https://community.idera.com/developer-tools/b/blog/posts/compiling-a-million-lines-of-code-with-delphi
 
C++ also takes "much" more time to compile than Delphi.
 
 
This is why i said previously the following:
 
 
I think Delphi is a single-pass compiler, and it is very fast at compile time; i think C++, Java and C# are multi-pass compilers that are much slower than Delphi in compile time, but i think that the generated executable code of Delphi is still fast, and faster than C#.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 03:17PM -0800

Hello,
 
 
I have just answered the following in the C++ newsgroup; read it all, it will make you understand better:
 
https://groups.google.com/forum/#!topic/comp.lang.c++/4SM-df-BsbI
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 02:27PM -0800

Hello,
 
 
More about my posts..
 
You are being immature, because you are not capable of thinking correctly. What do you think i have done by posting a few posts in the C++ or ADA newsgroups? You think that i am a spammer, but you are wrong, because you have to understand what i was doing. Since i am an inventor of many scalable algorithms, i decided to post about them to help others understand more of my scalable algorithms in parallel computing and synchronization. So as you are noticing, i am a good person who tried to help. Also, if you have noticed, i have posted about my EasyList, because i said to myself that i wanted to help by making you understand my parallel sort algorithm that is very interesting; read about it here:
 
https://sites.google.com/site/scalable68/easylist-for-delphi-and-freepascal
 
 
And as you have noticed, i have posted a few of my poems of Love in comp.programming and comp.programming.threads, to make you understand something, by writing the following in comp.programming and comp.programming.threads; here it is:
 
 
==========
Why am i writing poems of Love ?
 
I think you know me more by my writing, and you have to know that i am a gentleman type of person, but you have to know that i am using my smartness efficiently. So what is my smartness saying to me? It is saying to me that hate is an easy thing, because we can easily hate ourselves or hate our world and be much more destructive, and this is not the right way and it is not the wise way. The wise way is to be more constructive and more transcendent, which needs more responsibility and more capability from us; this is why i am showing that i am capable of transcending by writing beautiful poems and by showing that i am capable of beautiful thinking.
============
 
Those were just a few of my poems of Love that i have posted, and i stopped quickly, so it is not spam.
 
And as you have noticed i have posted my post titled: "About minimizing at best complexity by maximizing at best efficiency !"
 
I posted it to make you understand more precisely one of my previous posts about my scalable algorithms.
 
So how am i a spammer? As you are noticing, i am not a spammer.
 
And what i also want you to understand is the following:
 
More explanation..
 
I think i am understanding more: i have posted just a very few posts on the ADA and C++ newsgroups, and i have noticed that they have become very aggressive and hateful towards me. I think they are hating me because i am a Delphi and Freepascal developer and they are ADA or C++ developers; this is why they have become very aggressive towards me even though i have posted just a very few posts on the C++ and ADA newsgroups, and this is why you have noticed that i got angry in some of my previous posts here. So from now on i will post just on-topic posts in comp.programming and comp.programming.threads, and i will not post on the C++ or ADA newsgroups.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Feb 23 01:16PM -0800

Hello,
 
 
More explanation..
 
I think i am understanding more: i have posted just a very few posts on the ADA and C++ newsgroups, and i have noticed that they have become very aggressive and hateful towards me. I think they are hating me because i am a Delphi and Freepascal developer and they are ADA or C++ developers; this is why they have become very aggressive towards me even though i have posted just a very few posts on the C++ and ADA newsgroups, and this is why you have noticed that i got angry in some of my previous posts here. So from now on i will post just on-topic posts in comp.programming and comp.programming.threads, and i will not post on the C++ or ADA newsgroups.
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
