Monday, January 20, 2020

Digest for comp.programming.threads@googlegroups.com - 5 updates in 5 topics

aminer68@gmail.com: Jan 19 05:12PM -0800

Hello,
 
 
We have to be smarter, and I think there is a big problem
with programming languages in the era of parallel programming.
Here is the problem:
 
"However under x86-TSO, the stores are cached in the store buffers,
a load consults only the shared memory and the store buffer of the given thread, which means it can load data from memory and ignore values from
the other thread."
 
Read more here:
 
https://books.google.ca/books?id=C2R2DwAAQBAJ&pg=PA127&lpg=PA127&dq=immediately+visible+and+m+fence+and+store+buffer+and+x86&source=bl&ots=yfGI17x1YZ&sig=ACfU3U2EYRawTkQmi3s5wY-sM7IgowDlWg&hl=en&sa=X&ved=2ahUKEwi_nq3duYPkAhVDx1kKHYoyA5UQ6AEwAnoECAgQAQ#v=onepage&q=immediately%20visible%20and%20m%20fence%20and%20store%20buffer%20and%20x86&f=false
 
 
 
So I think that programming languages are not designed correctly,
because the variables that are not local and that are created in the main thread have to be local to the main thread and not accessible
by the other threads, and they have to be passed to the other threads with, for example, a queue that issues a memory barrier to flush
the store buffer, which solves the above safety problem.
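To make the problem concrete, here is a minimal C++11 sketch (an illustration of mine, not taken from the book) of the classic store-buffer litmus test, together with the full fence that such a queue would have to issue to flush the store buffer:

// Store-buffer litmus test: with relaxed operations and no fences, both
// threads can read 0, because each store may still sit in its core's
// store buffer when the other thread's load executes. The seq_cst fence
// (MFENCE on x86) drains the store buffer and forbids that outcome.
#include <atomic>
#include <cassert>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1, r2;

void thread1() {
    x.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst); // flush store buffer
    r1 = y.load(std::memory_order_relaxed);
}

void thread2() {
    y.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst); // flush store buffer
    r2 = x.load(std::memory_order_relaxed);
}

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join(); t2.join();
    // With the fences in place, r1 == 0 && r2 == 0 is impossible;
    // without them, x86-TSO allows it.
    assert(!(r1 == 0 && r2 == 0));
    std::printf("r1=%d r2=%d\n", r1, r2);
}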
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jan 19 03:38PM -0800

Hello,
 
 
I was just reading this interesting webpage about:
"Everything that's Wrong with the C++ and Object Pascal Languages":
 
https://blog.digitaltundra.com/?p=403
 
And as you may have noticed, it says the following about Delphi:
 
"A [volatile] attribute was quietly slipped into the newest versions of delphi, but AFAIK it likely only affects the iOS/Android compilers, and its effects are not really well documented… see this white paper for more)."
 
and it also says the following:
 
"Again, this isn't a problem in Delphi only because non-local memory is always considered volatile"
 
 
So from the above you can see that Delphi, on Windows, keeps compatibility with the older Windows Delphi compilers that don't support volatile.
 
And FreePascal is the same, as I explain in the following that I wrote:
 

I have just read the following about FreePascal:
 
----
 
The old volatile discussions with Thaddy...
 
In my understanding:
 
Volatile guarantees that the variable accesses in statements are not optimized into registers etc. In FPC there was no such concept (I heard it was introduced lately), so all non-local variables have to be treated as volatile, especially in multithreading applications. A prominent example where "volatility" is needed is the TThread.Terminated property (boolean). A strong optimization of the loop
while not terminated do
could recognize that it isn't modified within the loop, so for a non-volatile "terminated" it could be optimized to
if not terminated then
Of course this would break the code. In the case of FPC the volatility is implicit; as said, all non-local variables are volatile. But this prevents such optimizations for all variables, including those that don't need volatile behaviour.
 
I see a problem in the introduction of a volatile keyword: backward compatibility. The keyword only makes sense if optimizations are implemented that recognize the keyword. If that is the case, then old code missing the keyword gets broken.
 
This means that in FPC you can simply leave the [volatile] out.
 
Read more here:
 
https://forum.lazarus.freepascal.org/index.php?topic=46083.0
 
---
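The same hoisting problem can be shown outside Pascal. Here is a minimal C++ sketch of it (my illustration, not from the quoted post), where std::atomic plays the role of the missing volatile:

#include <atomic>
#include <chrono>
#include <thread>

// With a plain bool, the compiler may hoist the load of 'terminated' out
// of the loop, turning it into an infinite loop; std::atomic<bool>
// forbids that optimization.
std::atomic<bool> terminated{false};

void worker() {
    while (!terminated.load(std::memory_order_acquire)) {
        // do some work...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    terminated.store(true, std::memory_order_release); // worker will see this
    t.join();
}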
 
 
So I have also written the following:
 
I have just read the following about volatile from Barry Kelly:
 
"Specifically, volatile prevents the compiler from changing the number of reads or writes to or from a location from the number that are
explicitly indicated in the source. That doesn't mean that the reads or
writes to volatile locations will act as memory barriers.
 
To my knowledge, current implementations of Win32 Delphi never change
the number or order of reads or writes to global variables or fields of
objects."
 
Read more here:
 
http://borland.newsgroups.archived.at/public.delphi.language.delphi.win32/200606/0606232337.html
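To make the first half of that point concrete, here is a small C++ sketch (my illustration, using a hypothetical device register): volatile pins the number of reads and writes the compiler emits, but it inserts no hardware memory barrier.

volatile int device_reg = 0; // hypothetical memory-mapped register

void pulse() {
    device_reg = 1; // without volatile, the compiler could merge these
    device_reg = 0; // two stores and emit only the last one
}

int poll_twice() {
    int a = device_reg; // volatile forces two real loads here; with a
    int b = device_reg; // plain int, one load could be reused for both
    return a + b;
}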
 
So I think the old Delphi compilers that don't support volatile never change the number or order of reads or writes to global variables or fields of objects, and since FreePascal in Delphi mode is compatible with those old Delphi compilers, FreePascal in Delphi mode also never changes the number or order of reads or writes to global variables or fields of objects.

So I think my parallel software projects are OK with FreePascal and
with Delphi on Windows, and they work on both Linux and Windows.
 
 
 
You can download them from:
 
https://sites.google.com/site/scalable68/
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jan 19 02:59PM -0800

Hello,
 
 
Lock Versus Lock-Free..
 
 
The class of problems that can be solved by lock-free approaches is limited.
 
Furthermore, lock-free approaches can require restructuring a problem.
As soon as multiple shared data-structures are modified simultaneously, the only practical approach is to use a lock.
 
All lock-free dynamic-size data-structures using CAA (compare-and-assign, i.e., compare-and-swap) require some form of garbage collection to lazily delete storage when it is no longer referenced. In languages with garbage collection, this capability comes for free (at the cost of garbage collection). For languages without garbage collection, the code is complex and error-prone in comparison with locks, requiring epoch-based reclamation, read-copy-update (RCU), or hazard pointers.
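For instance, here is a minimal C++ sketch (my illustration, not from the text) of a lock-free Treiber-stack push using compare_exchange, the compare-and-swap form of CAA; pop is omitted because safely freeing a popped node is exactly the reclamation problem (hazard pointers, epochs, RCU) described above:

#include <atomic>

struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

void push(int v) {
    Node* n = new Node{v, head.load(std::memory_order_relaxed)};
    // Retry until no other thread changes 'head' between our read of it
    // and our attempt to install the new node. On failure,
    // compare_exchange_weak reloads the current head into n->next.
    while (!head.compare_exchange_weak(n->next, n,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
        // just retry
    }
}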
 
While better performance is claimed for lock-free data-structures, there is no long-term evidence to support this claim. Many high-performance locking situations, e.g., operating-system kernels and databases, continue to use locking in various forms, even though a broad class of lock-free data-structures is readily available.
 
While lock-free data-structures cannot have deadlock, there is seldom deadlock using locks for the simple class of problems solvable using lock-free approaches. For example, protecting basic data-structure operations with locks is usually very straightforward. Normally deadlock occurs when accessing multiple resources simultaneously, which is not a class of problems
dealt with by lock-free approaches. Furthermore, disciplined lock usage, such as ranking locks to avoid deadlock, works well in practice and
is not onerous for the programmer. Finally, some static-analysis tools are helpful for detecting deadlock scenarios.
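For instance, C++17's std::scoped_lock packages such disciplined usage as a library feature, acquiring several mutexes with a deadlock-avoidance algorithm (a sketch of mine, not from the text):

#include <mutex>

struct Account {
    std::mutex m;
    long balance = 0;
};

void transfer(Account& from, Account& to, long amount) {
    // Locks both mutexes as if in a globally consistent order, so two
    // threads transferring in opposite directions cannot deadlock.
    std::scoped_lock lock(from.m, to.m);
    from.balance -= amount;
    to.balance += amount;
}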
 
Lock-free approaches have thread-kill tolerance, meaning no thread owns a lock, so any thread can terminate at an arbitrary point without leaving a lock in the closed state. However, within an application, thread kill is an unusual operation and thread failure means an unrecoverable error or major reset.
 
A lock-free approach always allows progress of other threads, whereas locks can cause delays if the lock owner is preempted. However, this issue is a foundational aspect of preemptive concurrency, and there are ways to mitigate it for locks using scheduler-activation techniques. However, lock-free is not immune to delays. If a page containing part of the lock-based or lock-free data is evicted, there is a delay. Hence, lock-free is no better than lock-based if the page
fault occurs on frequently accessed shared data. Given the increasing number of processors and large amount of memory on modern computers, neither of these delays should occur often.
 
Lock-free approaches are reentrant, and hence can be used in signal handlers, which are implicitly concurrent. Locking approaches cannot deal with this issue. Lock-free approaches are claimed not to have priority inversion. However, inversion can occur because of the spinning required with atomic instructions like CAA, as the hardware does not provide a bound for spinning threads. Hence, a low-priority thread can barge ahead of a high-priority thread because the low-priority thread just happens to win the race at the CAA instruction. Essentially,
priority inversion is a foundational aspect of preemptive concurrency and can only be mitigated.
 
The conclusion is that for unmanaged programming languages (i.e., no garbage collection), using classical locks is simple, efficient, and general, and causes issues only when the problem scales to multiple locks. For managed programming languages, lock-free data-structures are easier to implement, but they only handle a specific set of problems, and the programmer must accept other idiosyncrasies, like pauses in
execution for garbage collection.
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jan 19 02:54PM -0800

Hello,
 
 
As you have noticed, I have just posted previously about
safe languages like Ada and unsafe languages like C and C++;
now I will write more about safe and unsafe languages:
 
About safe and unsafe languages..
 
We use the term safe to refer to languages that automatically
perform runtime checks to prevent programs from violating the
bounds of allocated memory. Safe languages must provide two
properties to ensure that programs respect allocation bounds:
memory safety and type safety.
 
Memory safety is the real goal: it means that the program
will not read or write data outside the bounds of allocated
regions. To achieve memory safety, a language must also enforce
type safety so that it can keep track of the memory allocation
bounds. Without type safety, any arbitrary value could be used
as a reference into memory.
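A short C++ illustration (mine, not from the text) of the difference: an unchecked access can silently corrupt memory, while a checked access enforces the allocation bounds at runtime, which is what a safe language does for every access:

#include <cstdio>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v(4, 0);

    // v[10] = 1; // unchecked: out-of-bounds write, undefined behavior

    try {
        v.at(10) = 1; // checked: throws instead of corrupting memory
    } catch (const std::out_of_range& e) {
        std::printf("bounds violation caught: %s\n", e.what());
    }
}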
 
Beyond the possibility of buffer overflow, unsafe languages,
such as C and C++, value compile-time optimization and concise
expression over safety and comprehensibility, which are key
features of safe languages. This difference in priorities is
evidenced by the fact that most unsafe languages allow programs
to directly access low-level system resources, including memory.
In contrast, safe languages must explicitly control the ways
programs are allowed to access resources, to prevent violations
of the properties that they guarantee.
 
Fundamental to the trade-off between safe and unsafe languages
is the concept of trust. Unsafe languages implicitly trust the
programmer, while safe languages explicitly limit the operations
that they allow in exchange for the capability to prevent programs
from making potentially damaging mistakes. The result is that
unsafe languages are more powerful with respect to the operations
that can be performed, while safe languages provide greater reusable
functionality with built-in protections that often make programmers
more efficient. Another side-effect of a small set of low-level
operations is that complex problems can typically be solved more
concisely in unsafe languages, which is often seen as another
advantage over safe languages.
 
Many of the distinctions that often accompany the difference
between safe and unsafe languages are technically unnecessary.
It is possible to implement a safe language that provides a
small instruction set and low-level access to nonmemory
resources, such as the network and filesystem. However,
because the additional record keeping and checks required
to make a language safe degrade the performance of compile-time
optimization strategies, memory-safe languages have typically
been deemed unacceptable for certain types of programs.
 
Recently, security concerns have prompted limited reconsideration
of these tradeoffs. Safe languages designed with performance and
flexibility in mind have been created in academic circles and
have been shown to effectively prevent buffer overflow
vulnerabilities, albeit at a performance cost. The section on
safe C dialects gives an overview of two of the more complete
implementations.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jan 19 02:47PM -0800

Hello..
 
 
About Ada, C++ and C..
 
Ada was designed to support a high level of abstraction and, in particular, safe abstractions. The C philosophy is to keep the design of the language simple and the runtime overhead minimal, at the cost of safety. The C++ philosophy, centered on the concept of "zero-cost abstractions," is that as few safeguards as possible are added by the compiler, especially if those safeguards imply a runtime cost. Ada's philosophy tends to be at the opposite end of the spectrum, favoring safety over other criteria.
 
And what are zero-cost abstractions in programming languages?
 
Zero-cost abstractions are exemplified by C++.
 
For example, a C++ std::vector object is almost like a C array, except that it can grow and shrink - it has a more abstract and powerful interface than a C array.
 
However, the runtime performance of a vector is identical to that of an array - the additional features of a std::vector cost nothing at runtime if you replace a dynamically allocated C array with a vector.
 
Most C++ features are like this - in fact it's one of the primary considerations for a feature to be added to the language.
 
In C++, operator overloading, static inheritance, constexpr, templates, deterministic destruction and references are all features that introduce a more powerful abstraction at no runtime cost.
You could try writing alternative code without using these features, and it would not be any faster.
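Here is a small sketch (my illustration) of that claim: after optimization, the std::vector version compiles to essentially the same loops as the hand-managed C array, while also freeing its memory automatically and being able to grow:

#include <cstdlib>
#include <vector>

long sum_c_array(std::size_t n) {
    int* a = static_cast<int*>(std::malloc(n * sizeof(int)));
    for (std::size_t i = 0; i < n; ++i) a[i] = static_cast<int>(i);
    long s = 0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    std::free(a); // manual cleanup
    return s;
}

long sum_vector(std::size_t n) {
    std::vector<int> a(n); // one heap allocation, like malloc above
    for (std::size_t i = 0; i < n; ++i) a[i] = static_cast<int>(i);
    long s = 0;
    for (int x : a) s += x;
    return s; // destructor frees the memory automatically
}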
 
 
Thank you,
Amine Moulay Ramdane.