Wednesday, July 10, 2019

Digest for comp.programming.threads@googlegroups.com - 25 updates in 25 topics

Horizon68 <horizon@horizon.com>: Jul 01 03:24PM -0700

Hello..
 
 
 
I was just listening to the following song:
 
Tiny Dancer
 
https://www.youtube.com/watch?v=KBWfUc5jKiM
 
And I have just decided to write a poem about
you, the beautiful dancer! Here it is:
 
 
My so beautiful dancer !
 
You are coming to me with all your splendor
 
My so beautiful dancer !
 
You are growing inside me like a so beautiful flower !
 
My so beautiful dancer !
 
It is also about the right dose and the right answer
 
My so beautiful dancer !
 
Because my way is not to be a gangster
 
My so beautiful dancer !
 
Because i am like "wisdom" that is like the master
 
My so beautiful dancer !
 
Because i want to go beautifully and faster
 
My so beautiful dancer !
 
As you can see, I am not a barbar or a Tiger !
 
My so beautiful dancer !
 
Because my way is Love and Wisdom that is not the inferior
 
My so beautiful dancer !
 
It is why i am like you a beautiful dancer !
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 01 02:26PM -0700

Hello...
 
 
 
Do I also know you? I can see you clearly now..
 
Because I am also "special" like you..
 
 
Look at this singer to know more:
 
Michael Jackson - You Rock My World
 
https://www.youtube.com/watch?v=C1kHeeEMe-s
 
 
 
So here is my new poem about the beautiful Superstars:
 
 
I "know" i am talking to the stars !
 
Because i am climbing in front of you like a superstar
 
Because i love to be a fast Ferrari car
 
Since my Love is like coming from Mars and the beautiful Stars
 
This is why my Love knows about "you" my "beautiful" stars
 
And since my love is not a "barbar"
 
Thus love is like growing like a beautiful seminar
 
This is why i love to be as you are beautiful superstars
 
Because look at this beautiful Superstar
 
He is rolling like my beautiful desire
 
Look at this beautiful Superstar
 
He is beautifully escaping from the evil fire
 
Look at this beautiful Superstar
 
He is building a big empire
 
Look at this beautiful Superstar
 
He is richness that also knows how to retire
 
Look at this beautiful Superstar
 
Because he knows how to play beautifully the guitar
 
Look at this beautiful Superstar
 
He is like someone who we admire
 
And this is why I want to be like a Superstar
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 30 11:22AM -0700

Hello..
 
 
About SC and TSO and RMO hardware memory models..
 
I have just read the following webpage about the performance difference
between: SC and TSO and RMO hardware memory models
 
I think TSO is the better trade-off: it is only around 3% to 6% slower
than RMO, and it is a simpler programming model than RMO. So I think ARM
should support TSO to be compatible with x86, which is TSO.
 
Read more here to notice it:
 
https://infoscience.epfl.ch/record/201695/files/CS471_proj_slides_Tao_Marc_2011_1222_1.pdf
 
 
About memory models and sequential consistency:
 
As you have noticed, I am working with the x86 architecture..
 
Even though x86 gives up on sequential consistency, it's among the most
well-behaved architectures in terms of the crazy behaviors it allows.
Most other architectures implement even weaker memory models.
 
The ARM memory model is notoriously underspecified, but it is
essentially a form of weak ordering, which provides very few guarantees.
Weak ordering allows almost any operation to be reordered, which enables
a variety of hardware optimizations but is also a nightmare to program
at the lowest levels.
 
Read more here:
 
https://homes.cs.washington.edu/~bornholt/post/memory-models.html
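As an illustration (not from the linked posts), the classic store-buffering litmus test captures the difference: under sequential consistency the outcome r1 = r2 = 0 is impossible, but TSO's store buffers can delay both stores past the loads and make it observable. The Python sketch below enumerates every sequentially consistent interleaving:

```python
from itertools import permutations

# Store-buffering litmus test:
#   Thread 1: x = 1; r1 = y      Thread 2: y = 1; r2 = x
# Under SC, every execution is some interleaving of the two program
# orders, so (r1, r2) = (0, 0) never appears below.
T1 = [("write", "x"), ("read", "y", "r1")]
T2 = [("write", "y"), ("read", "x", "r2")]

def sc_outcomes():
    outcomes = set()
    # Every interleaving that preserves each thread's program order:
    # choose which 2 of the 4 execution slots belong to thread 1.
    for slots in permutations([0, 0, 1, 1]):
        mem = {"x": 0, "y": 0}
        regs = {}
        idx = [0, 0]
        for t in slots:
            op = (T1, T2)[t][idx[t]]
            idx[t] += 1
            if op[0] == "write":
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

# (0, 0) is absent under SC; TSO additionally permits (0, 0) because
# each core can read the other's variable before its own store drains.
print(sorted(sc_outcomes()))
```

On a TSO machine you forbid the (0, 0) outcome by placing a full fence (e.g. MFENCE on x86) between each store and the following load.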
 
 
Memory Models: x86 is TSO, TSO is Good
 
Essentially, the conclusion is that x86 in practice implements the old
SPARC TSO memory model.
 
The big take-away from the talk for me is that it confirms the
observation, made many times before, that SPARC TSO seems to be the
optimal memory model. It is sufficiently understandable that programmers
can write correct code without putting barriers everywhere. It is
sufficiently weak that you can build fast hardware implementations that
scale to big machines.
 
Read more here:
 
https://jakob.engbloms.se/archives/1435
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 29 02:53PM -0700

Hello...
 
 
My Universal Scalability Law for Delphi and FreePascal was updated to
version 3.22
 
I have implemented and enhanced this powerful tool.
 
I have included 32-bit and 64-bit Windows and Linux executables called
usl.exe and usl_graph.exe inside the zip; please read the readme file
to learn how to use this very powerful tool.
 
Now about the Alpha and Beta coefficients of USL:
 
Coefficient Alpha is the contention,
 
and
 
Coefficient Beta is the coherency.
 
Contention and coherency are measured as the fraction of the sequential
execution time. A value of 0 means that there is no effect on
performance. A contention factor of 0.2, for instance, means that 20% of
the sequential execution time cannot be parallelized. A coherency factor
of 0.01 means that the time spent in the synchronization between each
pair of processes is 1% of the sequential execution time.
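For reference, these two coefficients plug into Gunther's standard USL capacity formula, C(N) = N / (1 + alpha*(N-1) + beta*N*(N-1)). A small Python sketch (illustrative only, not part of the tool) shows how contention and coherency cap the speedup:

```python
def usl_capacity(n, alpha, beta):
    """Relative capacity (speedup) of n processors under the USL:
    alpha is the contention coefficient, beta the coherency coefficient."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# With zero contention and coherency cost, scaling is linear:
print(usl_capacity(8, 0.0, 0.0))   # 8.0

# With a contention factor of 0.2 and a coherency factor of 0.01
# (the example values discussed above), speedup saturates and then
# goes retrograde as n grows:
for n in (1, 4, 16, 64):
    print(n, round(usl_capacity(n, 0.2, 0.01), 2))
```

Fitting alpha and beta to measured throughput, as the tool does, then tells you where adding cores stops paying off.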
 
 
Also there is something very important to know, and here it is:
 
So, to further optimize the cost criterion for a better QoS, you have to
choose a good delta(y)/delta(x) to optimize the cost criterion of your
system, and you have to strike a better balance between performance and
cost. You can read about my powerful tool and download it from:
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 05 03:16PM -0700

Hello...
 
 
More about Energy efficiency..
 
You have to be aware that parallelizing software can lower power
consumption; here is the formula that lets you calculate the power
consumption of "parallel" software programs:
 
Power consumption of the total cores = (Number of cores) *
(1/(Parallel speedup))^3 * (Power consumption of a single core).
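As a quick illustration of the formula above (the wattage numbers below are hypothetical, chosen only to show the effect):

```python
def parallel_power(n_cores, speedup, single_core_power):
    """Total power of n_cores running the parallelized program, per the
    formula above: the clock (and per-core power, which scales roughly
    with frequency cubed) can be lowered by the parallel speedup."""
    return n_cores * (1.0 / speedup) ** 3 * single_core_power

# Baseline: one core at a hypothetical 10 W.
baseline = parallel_power(1, 1.0, 10.0)   # 10.0 W
# Four cores with a 3x parallel speedup, frequency scaled down:
parallel = parallel_power(4, 3.0, 10.0)   # ~1.48 W
print(baseline, round(parallel, 2))
```

The cubic term is why spreading the same work over more, slower cores can cut total energy even though more silicon is active.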
 
 
Also read the following about energy efficiency:
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not
always equate to energy efficiency. Compiled languages like C, C++,
Rust, and Ada ranked as some of the most energy-efficient languages out
there, and Java and FreePascal are also good at energy efficiency.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow, relative to CPUs
 
And "memory" usage efficiency is important for mobile devices.
 
So the Delphi and FreePascal compilers are also still "useful" for
mobile devices, because Delphi and FreePascal are good if you are
considering time and memory, or energy and memory. The following Pascal
benchmark was done with FreePascal, and it shows that C, Go, and Pascal
do rather better if you're ranking languages by time and memory or by
energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 05 02:51PM -0700

Hello,
 
 
About my ParallelFor() that scales very well, which uses my efficient
Threadpool that scales very well:
 
With ParallelFor() you have to:
 
1- Ensure Sufficient Work
 
Each iteration of a loop involves a certain amount of work,
so you have to ensure there is a sufficient amount of work;
read below about the "grainsize" parameter that I have implemented.
 
2- In OpenMP we have that:
 
Static and Dynamic Scheduling
 
One basic characteristic of a loop schedule is whether it is static or
dynamic:
 
• In a static schedule, the choice of which thread performs a particular
iteration is purely a function of the iteration number and number of
threads. Each thread performs only the iterations assigned to it at the
beginning of the loop.
 
• In a dynamic schedule, the assignment of iterations to threads can
vary at runtime from one execution to another. Not all iterations are
assigned to threads at the start of the loop. Instead, each thread
requests more iterations after it has completed the work already
assigned to it.
 
 
But my ParallelFor() that scales very well, since it is using my
efficient Threadpool that scales very well, uses round-robin scheduling
together with work stealing, so I think that this is sufficient.
 
Read the rest:
 
My Threadpool engine with priorities that scales very well is really
powerful because it scales very well on multicore and NUMA systems, also
it comes with a ParallelFor() that scales very well on multicores and
NUMA systems.
 
You can download it from:
 
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
 
 
Here is the explanation of my ParallelFor() that scales very well:
 
I have also implemented a ParallelFor() that scales very well, here is
the method:
 
procedure ParallelFor(nMin, nMax: integer; aProc: TParallelProc;
                      GrainSize: integer = 1; Ptr: pointer = nil;
                      pmode: TParallelMode = pmBlocking;
                      Priority: TPriorities = NORMAL_PRIORITY);
 
nMin and nMax parameters of the ParallelFor() are the minimum and
maximum integer values of the variable of the ParallelFor() loop, aProc
parameter of ParallelFor() is the procedure to call, and GrainSize
integer parameter of ParallelFor() is the following:
 
The grainsize sets a minimum threshold for parallelization.
 
A rule of thumb is that grainsize iterations should take at least
100,000 clock cycles to execute.
 
For example, if a single iteration takes 100 clocks, then the grainsize
needs to be at least 1000 iterations. When in doubt, do the following
experiment:
 
1- Set the grainsize parameter higher than necessary. The grainsize is
specified in units of loop iterations.
 
If you have no idea of how many clock cycles an iteration might take,
start with grainsize=100,000.
 
The rationale is that each iteration normally requires at least one
clock cycle. In most cases, step 3 will guide you to a much smaller
value.
 
2- Run your algorithm.
 
3- Iteratively halve the grainsize parameter and see how much the
algorithm slows down or speeds up as the value decreases.
 
A drawback of setting a grainsize too high is that it can reduce
parallelism. For example, if the grainsize is 1000 and the loop has 2000
iterations, the ParallelFor() method distributes the loop across only
two processors, even if more are available.
 
And you can pass a parameter in Ptr as a pointer to ParallelFor(), and
you can set the pmode parameter to pmBlocking so that ParallelFor() is
blocking, or to pmNonBlocking so that ParallelFor() is non-blocking, and
the Priority parameter is the priority of ParallelFor(). Look inside the
test.pas example to see how to use it.
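The ParallelFor() above is Delphi/FreePascal; as a rough, language-neutral illustration of the grainsize idea (splitting [nMin, nMax] into chunks of at least GrainSize consecutive iterations so each task amortizes scheduling overhead), here is a hypothetical Python analogue, not the author's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(n_min, n_max, proc, grainsize=1, workers=4):
    """Run proc(i) for every i in [n_min, n_max], handing out chunks
    of at least `grainsize` consecutive iterations per task."""
    chunks = [(lo, min(lo + grainsize - 1, n_max))
              for lo in range(n_min, n_max + 1, grainsize)]

    def run_chunk(bounds):
        lo, hi = bounds
        for i in range(lo, hi + 1):
            proc(i)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run_chunk, chunks))   # wait for all chunks

# Usage: square 1..1000 with grainsize 100, i.e. ten chunks.
results = [0] * 1001
parallel_for(1, 1000, lambda i: results.__setitem__(i, i * i),
             grainsize=100)
print(results[7])   # 49
```

With grainsize 100 and 1000 iterations there are only ten tasks, which mirrors the drawback noted above: too large a grainsize limits how many workers can run in parallel.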
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 05 11:49AM -0700

Hello..
 
 
Scalability Modeling using Universal Scalability Law (USL)
 
Read more here:
 
https://wso2.com/blog/research/scalability-modeling-using-universal-scalability-law
 
 
And my Universal Scalability Law for Delphi and FreePascal was updated
to version 3.22
 
I have implemented and enhanced this powerful tool.
 
I have included 32-bit and 64-bit Windows and Linux executables called
usl.exe and usl_graph.exe inside the zip; please read the readme file
to learn how to use this very powerful tool.
 
Now about the Alpha and Beta coefficients of USL:
 
Coefficient Alpha is the contention,
 
and
 
Coefficient Beta is the coherency.
 
Contention and coherency are measured as the fraction of the sequential
execution time. A value of 0 means that there is no effect on
performance. A contention factor of 0.2, for instance, means that 20% of
the sequential execution time cannot be parallelized. A coherency factor
of 0.01 means that the time spent in the synchronization between each
pair of processes is 1% of the sequential execution time.
 
 
Also there is something very important to know, and here it is:
 
So, to further optimize the cost criterion for a better QoS, you have to
choose a good delta(y)/delta(x) to optimize the cost criterion of your
system, and you have to strike a better balance between performance and
cost. You can read about my powerful tool and download it from:
 
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 04 02:44PM -0700

Hello,
 
 
About the Active object pattern..
 
I think the proxy and scheduler of the Active Object pattern are
embellishments, not essentials. The core of the idea is simply a queue
of closures executed on a different thread (or threads) from the
client's, and here you are noticing that you can do the same thing as
the Active Object pattern, and more, by using my powerful "invention":
an efficient Threadpool engine with priorities that scales very well,
which you can download from here:
 
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
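That core idea, a queue of closures drained by a worker thread other than the client's, can be sketched minimally in Python; this is an illustrative sketch, not the Threadpool linked above:

```python
import threading, queue

class ActiveObject:
    """Minimal active object: clients enqueue closures; a dedicated
    worker thread executes them one at a time, off the caller's thread."""
    def __init__(self):
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            closure = self._queue.get()
            if closure is None:          # shutdown sentinel
                break
            closure()

    def submit(self, closure):
        self._queue.put(closure)

    def shutdown(self):
        self._queue.put(None)
        self._worker.join()

# Usage: the appends run on the worker thread, serialized by the queue.
counter = []
ao = ActiveObject()
for i in range(5):
    ao.submit(lambda i=i: counter.append(i))
ao.shutdown()
print(counter)   # [0, 1, 2, 3, 4]
```

A threadpool generalizes this by draining one queue with several workers and adding scheduling policies such as priorities.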
 
 
This Threadpool of mine is really powerful because it scales very well
on multicore and NUMA systems, also it comes with a ParallelFor()
that scales very well on multicores and NUMA systems.
 
Here is the explanation of my ParallelFor() that scales very well:
 
I have also implemented a ParallelFor() that scales very well, here is
the method:
 
procedure ParallelFor(nMin, nMax: integer; aProc: TParallelProc;
                      GrainSize: integer = 1; Ptr: pointer = nil;
                      pmode: TParallelMode = pmBlocking;
                      Priority: TPriorities = NORMAL_PRIORITY);
 
nMin and nMax parameters of the ParallelFor() are the minimum and
maximum integer values of the variable of the ParallelFor() loop, aProc
parameter of ParallelFor() is the procedure to call, and GrainSize
integer parameter of ParallelFor() is the following:
 
The grainsize sets a minimum threshold for parallelization.
 
A rule of thumb is that grainsize iterations should take at least
100,000 clock cycles to execute.
 
For example, if a single iteration takes 100 clocks, then the grainsize
needs to be at least 1000 iterations. When in doubt, do the following
experiment:
 
1- Set the grainsize parameter higher than necessary. The grainsize is
specified in units of loop iterations.
If you have no idea of how many clock cycles an iteration might take,
start with grainsize=100,000.
 
The rationale is that each iteration normally requires at least one
clock cycle. In most cases, step 3 will guide you to a much smaller
value.
 
2- Run your algorithm.
 
3- Iteratively halve the grainsize parameter and see how much the
algorithm slows down or speeds up as the value decreases.
 
A drawback of setting a grainsize too high is that it can reduce
parallelism. For example, if the grainsize is 1000 and the loop has 2000
iterations, the ParallelFor() method distributes the loop across only
two processors, even if more are available.
 
And you can pass a parameter in Ptr as a pointer to ParallelFor(), and
you can set the pmode parameter to pmBlocking so that ParallelFor() is
blocking, or to pmNonBlocking so that ParallelFor() is non-blocking, and
the Priority parameter is the priority of ParallelFor(). Look inside the
test.pas example to see how to use it.
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 04 02:40PM -0700

Hello..
 
 
What about garbage collection?
 
Read what this serious specialist, Chris Lattner, has said:
 
"One thing that I don't think is debatable is that the heap compaction
behavior of a GC (which is what provides the heap fragmentation win) is
incredibly hostile for cache (because it cycles the entire memory space
of the process) and performance predictability."
 
"Not relying on GC enables Swift to be used in domains that don't want
it - think boot loaders, kernels, real time systems like audio
processing, etc."
 
"GC also has several *huge* disadvantages that are usually glossed over:
while it is true that modern GC's can provide high performance, they can
only do that when they are granted *much* more memory than the process
is actually using. Generally, unless you give the GC 3-4x more memory
than is needed, you'll get thrashing and incredibly poor performance.
Additionally, since the sweep pass touches almost all RAM in the
process, they tend to be very power inefficient (leading to reduced
battery life)."
 
Read more here:
 
https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html
 
Here is Chris Lattner's Homepage:
 
http://nondot.org/sabre/
 
And here is Chris Lattner's resume:
 
http://nondot.org/sabre/Resume.html#Tesla
 
 
This is why I have invented the following scalable algorithm and its
implementation, which makes Delphi and FreePascal more powerful:
 
My invention that is my scalable reference counting with efficient
support for weak references version 1.35 is here..
 
Here I am again: I have just updated my scalable reference counting with
efficient support for weak references to version 1.35. I have just added
a TAMInterfacedPersistent that is a scalable reference counted version,
and now I think I have made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But I don't agree with what the author of the above web page wrote,
because I think you have to understand the "spirit" of Delphi. Here is why:
 
A component is supposed to be owned and destroyed by something else,
"typically" a form (and "typically" means: in "most" cases, and this is
the most important thing to understand). In that scenario, the reference
count is not used.
 
If you pass a component as an interface reference, it would be very
unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
Also because i have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just
replace TInterfacedObject with my TAMInterfacedObject, the scalable
reference counted version, and replace TInterfacedPersistent with my
TAMInterfacedPersistent, the scalable reference counted version. You
will find both my TAMInterfacedObject and my TAMInterfacedPersistent
inside the AMInterfacedObject.pas file. To learn how to use weak
references, please take a look at the demo that I have included called
example.dpr and at the tutorial about weak references inside my zip
file; and to learn how to use delegation, take a look at the demo that I
have included called test_delegation.pas and at the tutorial inside my
zip file that teaches you how to use delegation.
 
I think my scalable reference counting with efficient support for weak
references is stable and fast. It works on both Windows and Linux, it
scales on multicore and NUMA systems, and you will not find it in C++ or
Rust; I don't think you will find it anywhere. And you have to know that
this invention of mine solves the problem of dangling pointers and the
problem of memory leaks, and my reference counting is "scalable".
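Whatever the language, weak references exist to break ownership cycles that plain reference counting can never reclaim. A minimal illustration using Python's weakref module (the library above is for Delphi/FreePascal; this is just the general idea):

```python
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self.parent = None    # back-pointer, held weakly below
        self.children = []    # strong references: a parent owns children

root = Node("root")
child = Node("child")
root.children.append(child)
child.parent = weakref.ref(root)   # weak back-pointer: no ref cycle

print(child.parent().name)   # "root" while the referent is alive
del root                     # drop the only strong reference
print(child.parent())        # None: the weak ref did not keep it alive
```

Had child.parent been a strong reference, root and child would have kept each other's counts above zero forever, which is exactly the leak weak references prevent.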
 
And please read the readme file inside the zip file, which I have just
extended to help you understand more.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.35 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 04 02:24PM -0700

Hello..
 
 
More about Hardware transactional memory, and now about the
disadvantages of Intel TSX:
 
Here is also something interesting to read about hardware transactional
memory that is Intel TSX:
 
TSX does not guarantee forward progress, so there must always be a
fallback non-TSX pathway. (Complex transactions might always abort, even
without any contention, because they overflow the speculation buffer.
Even transactions that could run in theory might livelock forever if you
don't have the right pauses to allow forward progress, so the fallback
path is needed then too.)
 
TSX works by keeping a speculative set of registers and processor state.
It tracks all reads done in the speculation block, and enqueues all
writes to be delayed until the transaction ends. The memory tracking of
the transaction is currently done using the L1 cache and the standard
cache line protocols. This means contention is only detected at cache
line granularity, so you have the standard "false sharing" issue.
 
If your transaction reads a cache line, then any write to that cache
line by another core causes the transaction to abort. (reads by other
cores do not cause an abort).
 
If your transaction writes a cache line, then any read or write by
another core causes the transaction to abort.
 
If your transaction aborts, then any cache lines written are evicted
from L1. If any of the cache lines involved in the transaction are
evicted during the transaction (eg. if you touch too much memory, or
another core locks that line), the transaction is aborted.
 
TSX seems to allow quite a large working set (up to size of L1 ?).
Obviously the more memory you touch the more likely to abort due to
contention.
 
Obviously you will get aborts from anything "funny" that's not just
plain code and memory access. Context switches, IO, kernel calls, etc.
will abort transactions.
 
At the moment, TSX is quite slow, even if there's no contention and you
don't do anything in the block. There's a lot of overhead. Using TSX
naively may slow down even threaded code. Getting significant
performance gains from it is non-trivial.
 
Read more here:
 
http://cbloomrants.blogspot.ca/2014/11/11-12-14-intel-tsx-notes.html
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 04 02:16PM -0700

Hello...
 
 
Read the following paper about the disadvantages of Transactional memory:
 
 
"Hardware-only (HTM) suffers from two major impediments:
high implementation and verification costs lead to design
risks too large to justify on a niche programming model;
hardware capacity constraints lead to significant performance
degradation when overflow occurs, and proposals for managing overflows
(for example, signatures) incur false positives that add
complexity to the programming model.
 
Therefore, from an industrial perspective, HTM designs have to provide
more benefits for the cost, on a more diverse set of workloads (with
varying transactional characteristics) for hardware designers to
consider implementation."
 
etc.
 
"We observed that the TM programming model itself, whether implemented
in hardware or software, introduces complexities that limit the expected
productivity gains, thus reducing the current incentive for migration to
transactional programming, and the justification at present for anything
more than a small amount of hardware support."
 
 
Read more here:
 
http://pages.cs.wisc.edu/~cain/pubs/cascaval_cacm08.pdf
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 04 01:49PM -0700

Hello..
 
 
More about computing and parallel computing..
 
The important guaranties of Memory Safety in Rust are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
I think I have solved null pointer dereferences, dangling pointers, and
memory leaks for Delphi and FreePascal by inventing my "scalable"
reference counting with efficient support for weak references, and I
have implemented it in Delphi and FreePascal; reference counting in Rust
and C++ is "not" scalable.
 
About point (3) above, buffer overruns, read here about Delphi
and FreePascal:
 
What's a buffer overflow and how to avoid it in Delphi?
 
http://delphi.cjcsoft.net/viewthread.php?tid=49495
 
 
About Deadlock and Race conditions in Delphi and Freepascal:
 
I have ported DelphiConcurrent to FreePascal, and I have also extended
both with support for my scalable RWLocks for Windows and Linux, with
support for my scalable lock called MLock for Windows and Linux, and I
have also added support for a Mutex for Windows and Linux. Please look
inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files
inside the zip file to understand more.
 
You can download DelphiConcurrent and FreepascalConcurrent for Delphi
and Freepascal from:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
DelphiConcurrent and FreepascalConcurrent by Moualek Adlene is a new way
to build Delphi applications which involve parallel executed code based
on threads like application servers. DelphiConcurrent provides to the
programmers the internal mechanisms to write safer multi-thread code
while taking a special care of performance and genericity.
 
In concurrent applications, a DEADLOCK may occur when two or more
threads try to lock two or more consecutive shared resources in
different orders. With DelphiConcurrent and FreepascalConcurrent, a
DEADLOCK is detected and automatically skipped (before it occurs), and
the programmer gets an explicit exception describing the multi-thread
problem instead of a blocking DEADLOCK which freezes the application
with no output log (and perhaps also the linked client sessions, if we
are talking about an application server).
 
Amine Moulay Ramdane has extended them with support for his scalable
RWLocks for Windows and Linux and his scalable lock called MLock for
Windows and Linux, and he has also added support for a Mutex for Windows
and Linux; please look inside the DelphiConcurrent.pas and
FreepascalConcurrent.pas files to understand more.
 
And please read the html file inside to learn more how to use it.
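One common way to implement the detect-and-raise behavior described above is lock ranking: refuse, with an explicit exception, any acquisition that inverts the established order. This hypothetical Python sketch illustrates the idea only; it is not the actual DelphiConcurrent code:

```python
import threading

class OrderedLock:
    """Locks carry a fixed rank; taking a lower-ranked lock while
    holding a higher-ranked one is the classic deadlock recipe, so we
    raise an explicit exception instead of risking a silent freeze."""
    _held = threading.local()   # per-thread stack of held ranks

    def __init__(self, rank):
        self.rank = rank
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(OrderedLock._held, "ranks", [])
        if held and held[-1] >= self.rank:
            raise RuntimeError(
                f"lock-order violation: rank {self.rank} after {held[-1]}")
        self._lock.acquire()
        OrderedLock._held.ranks = held + [self.rank]

    def release(self):
        self._lock.release()
        OrderedLock._held.ranks = OrderedLock._held.ranks[:-1]

a, b = OrderedLock(1), OrderedLock(2)
a.acquire(); b.acquire()          # consistent order: fine
b.release(); a.release()

caught = None
try:
    b.acquire(); a.acquire()      # inverted order: refused up front
except RuntimeError as e:
    caught = str(e)
    print("detected:", caught)
finally:
    b.release()
```

The payoff is the one described above: the thread gets a diagnosable exception at the faulty acquisition instead of a frozen application.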
 
 
About race conditions now:
 
My scalable Adder is here..
 
As you have noticed i have just posted previously my modified versions
of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in
parallel programs.
 
But i have just read the following about how to avoid race conditions in
Parallel programming in most cases..
 
Here it is:
 
https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/
 
This is why i have invented my following powerful scalable Adder to help
you do the same as the above, please take a look at its source code to
understand more, here it is:
 
https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal
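The source of the author's Adder is linked above; as a general illustration, the striped-counter idea behind scalable adders (per-thread cells summed on read, modeled loosely on designs like Java's LongAdder) can be sketched in Python:

```python
import threading

class StripedAdder:
    """Each thread increments its own cell under its own lock, so
    threads rarely contend; the total is the sum of all cells."""
    def __init__(self, stripes=16):
        self._cells = [0] * stripes
        self._locks = [threading.Lock() for _ in range(stripes)]

    def add(self, delta):
        # Pick a cell from the thread id so a given thread keeps
        # hitting the same, mostly uncontended, stripe.
        i = threading.get_ident() % len(self._cells)
        with self._locks[i]:
            self._cells[i] += delta

    def total(self):
        return sum(self._cells)

# Usage: 4 threads, 10,000 increments each; no lost updates.
adder = StripedAdder()
threads = [threading.Thread(
               target=lambda: [adder.add(1) for _ in range(10_000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(adder.total())   # 40000
```

Striping trades a slightly more expensive read (summing the cells) for writes that no longer serialize every thread on one counter, which is the property that makes such an adder "scalable".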
 
Other than that, about composability of lock-based systems now:
 
Design your systems to be composable. Among the more galling claims of
the detractors of lock-based systems is the notion that they are somehow
uncomposable: "Locks and condition variables do not support modular
programming," reads one typically brazen claim, "building large programs
by gluing together smaller programs[:] locks make this impossible."9 The
claim, of course, is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and operating
systems into larger systems that remain entirely unaware of lower-level
locking.
 
There are two ways to make lock-based systems completely composable, and
each has its own place. First (and most obviously), one can make locking
entirely internal to the subsystem. For example, in concurrent operating
systems, control never returns to user level with in-kernel locks held;
the locks used to implement the system itself are entirely behind the
system call interface that constitutes the interface to the system. More
generally, this model can work whenever a crisp interface exists between
software components: as long as control flow is never returned to the
caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in
per-instance state, and it must be up to consumers of the subsystem to
assure that they do not access their instance in parallel. By leaving
locking up to the client of the subsystem, the subsystem itself can be
used concurrently by different subsystems and in different contexts. A
concrete example of this is the AVL tree implementation used extensively
in the Solaris kernel. As with any balanced binary tree, the
implementation is sufficiently complex to merit componentization, but by
not having any global state, the implementation may be used concurrently
by disjoint subsystems—the only constraint is that manipulation of a
single AVL tree instance must be serialized.
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
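The first pattern quoted above, keeping locking entirely internal to the subsystem and never returning to the caller with a lock held, is easy to sketch:

```python
import threading

class Registry:
    """Composable subsystem: the lock is private, and every public
    method releases it before returning, so callers never see it held
    and can compose this object with their own locking freely."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def put(self, key, value):
        with self._lock:            # released before control returns
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

r = Registry()
r.put("x", 1)
print(r.get("x"))   # 1
```

The second pattern in the quote is the mirror image: the subsystem holds no lock at all and keeps only per-instance state, leaving serialization of each instance to its client.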
 
And about Message Passing Process Communication Model and Shared Memory
Process Communication Model:
 
An advantage of shared memory model is that memory communication is
faster as compared to the message passing model on the same machine.
 
However, shared memory model may create problems such as synchronization
and memory protection that need to be addressed.
 
Message passing's major flaw is the inversion of control: it is the
moral equivalent of gotos in unstructured programming (it's about time
somebody said that message passing is considered harmful).
 
Also some research shows that the total effort to write an MPI
application is significantly higher than that required to write a
shared-memory version of it.
 
And more about my scalable reference counting with efficient support
for weak references:
 
My invention that is my scalable reference counting with efficient
support for weak references version 1.35 is here..
 
Here I am again: I have just updated my scalable reference counting with
efficient support for weak references to version 1.35. I have just added
a TAMInterfacedPersistent that is a scalable reference counted version,
and now I think I have made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But I don't agree with what the author of the above web page wrote,
because I think you have to understand the "spirit" of Delphi. Here is why:
 
A component is supposed to be owned and destroyed by something else,
"typically" a form (and "typically" means: in "most" cases, and this is
the most important thing to understand). In that scenario, the reference
count is not used.
 
If you pass a component as an interface reference, it would be very
unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
Also because i have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just
replace TInterfacedObject with my TAMInterfacedObject, the scalable
reference counted version, and replace TInterfacedPersistent with my
TAMInterfacedPersistent, the scalable reference counted version. You
will find both my TAMInterfacedObject and my TAMInterfacedPersistent
inside the AMInterfacedObject.pas file. To learn how to use weak
references, please take a look at the demo that I have included called
example.dpr and at the tutorial about weak references inside my zip
file; and to learn how to use delegation, take a look at the demo that I
have included called test_delegation.pas and at the tutorial inside my
zip file that teaches you how to use delegation.
 
I think my scalable reference counting with efficient support for weak
references is stable and fast. It works on both Windows and Linux, and
it scales on multicore and NUMA systems. You will not find it in C++ or
Rust, and I don't think you will find it anywhere else. You have to know
that this invention of mine solves both the problem of dangling pointers
and the problem of memory leaks, and it is "scalable".
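The safety property that weak references provide can be illustrated in
any language that has them. Here is a minimal Python sketch of the idea
(this is not the Delphi library itself, just the general mechanism it
builds on):

```python
import weakref

# A weak reference does not keep its target alive, and dereferencing it
# after the target is gone yields None instead of a dangling pointer.
class Node:
    pass

strong = Node()              # a strong (counted) reference
weak = weakref.ref(strong)   # a weak reference to the same object

alive = weak() is strong     # target still alive while 'strong' exists
del strong                   # drop the last strong reference
gone = weak() is None        # safe access afterwards: no dangling pointer
print(alive, gone)
```

In CPython the object is collected as soon as its last strong reference
is dropped, so the weak reference immediately reads back as None.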
 
And please read the readme file inside the zip file; I have just
extended it to help you understand more.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.35 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 04 01:45PM -0700

Hello..
 
 
Disadvantages of Actor model:
 
1- Not all languages easily enforce immutability
 
Erlang, the language that first popularized actors, has immutability at
its core, but Java and Scala (actually, the JVM) do not enforce
immutability.
 
 
2- Still pretty complex
 
Actors are based on an asynchronous model of programming which is not so
straightforward and easy to apply in all scenarios; it is particularly
difficult to handle errors and failure scenarios.
 
 
3- Does not prevent deadlock or starvation
 
Two actors can each be waiting for a message from the other; you then
have a deadlock just as with locks, although it is much easier to debug.
With transactional memory, however, you are guaranteed freedom from
deadlock.
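That mutual-wait situation is easy to reproduce with two toy
mailbox-based "actors". Below is a minimal Python sketch (the actor
names and the timeout are made up for illustration); a short timeout
stands in for the hang that a real deadlock would cause:

```python
import queue
import threading

def actor(my_box, other_box, results, name):
    """Each actor waits for a message from the other before replying."""
    try:
        my_box.get(timeout=0.2)   # blocks: the other actor never sends first
        other_box.put("reply")
        results[name] = "ok"
    except queue.Empty:
        results[name] = "deadlocked"

box_a, box_b = queue.Queue(), queue.Queue()
results = {}
ta = threading.Thread(target=actor, args=(box_a, box_b, results, "A"))
tb = threading.Thread(target=actor, args=(box_b, box_a, results, "B"))
ta.start(); tb.start()
ta.join(); tb.join()
print(results)   # both actors time out: neither ever sends first
```

Since each actor only sends after it has received, neither mailbox ever
gets a message and both sides report the mutual wait.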
 
 
4- Not so efficient
 
Because of enforced immutability, and because many actors have to take
turns running on the same thread, actors won't be as efficient as
lock-based concurrency.
 
 
Conclusion:
 
Lock-based concurrency is the most efficient.
 
 
More about Message Passing Process Communication Model and Shared Memory
Process Communication Model:
 
 
An advantage of shared memory model is that memory communication is
faster as compared to the message passing model on the same machine.
 
However, the shared memory model may create problems, such as the need
for synchronization and memory protection, that must be addressed.
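That cost difference shows up even inside a single process: handing
every value through a synchronized mailbox is far more expensive than
writing it into shared memory directly. A rough Python sketch (the
iteration count is arbitrary):

```python
import queue
import timeit

N = 50_000

def via_message_passing():
    # every value goes through a lock-protected queue
    q = queue.Queue()
    for i in range(N):
        q.put(i)
        q.get()

shared = [0]

def via_shared_memory():
    # every value is written straight into shared storage
    for i in range(N):
        shared[0] = i

t_msg = timeit.timeit(via_message_passing, number=1)
t_shm = timeit.timeit(via_shared_memory, number=1)
print(t_msg > t_shm)   # message passing pays for the synchronization
```

The queue version pays a lock acquire and release per transfer, which
is exactly the overhead that shared-memory communication avoids.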
 
Message passing's major flaw is the inversion of control: it is the
moral equivalent of goto in unstructured programming (it's about time
somebody said that message passing is considered harmful).
 
Also, some research shows that the total effort to write an MPI
application is significantly higher than the effort required to write a
shared-memory version of it.
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 03 11:45AM -0700

Hello,
 
 
Don't worry guys! My previous poem was the last poem that
I will post here..
 
 
From now on i will write just about parallel programming.
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 03 09:45AM -0700

Hello..
 
 
I was just listening at the following beautiful song
of Into the Mystic of Van Morrison, listen to it , here it is:
 
https://www.youtube.com/watch?v=PZ59spYH9mk
 
 
So i have just decided to write another poem of Love, here it is:
 
 
 
Feeling like i am feeling the beautiful wind
 
Feeling like i am feeling the beautiful stars
 
Feeling like i am feeling the beautiful sky
 
It is like wanting to fly with you high
 
Feeling like i am feeling the beautiful sky
 
It is like the beautiful that is not wanting to die
 
Feeling like i am feeling the beautiful sky
 
It is like not wanting to be the bad guy
 
Feeling like i am feeling the beautiful sky
 
It is like the beautiful Gold of Versailles
 
Feeling like i am feeling the beautiful sky
 
It is like don't cry my baby, don't cry !
 
Feeling like i am feeling the beautiful sky
 
It is like never saying to love a Goodbye !
 
Feeling like i am feeling the beautiful sky
 
It is like being far from war and the cry !
 
Feeling like i am feeling the beautiful sky
 
It is also like my great love that want to mystify
 
So let me fly with you baby, let me fly !
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 02 07:14PM -0700

Hello..
 
 
 
I was just listening at the following song of Barry White:
 
Barry White - Just the way you are
 
https://www.youtube.com/watch?v=X9iFEkNqHac&t=34s
 
 
And here is one movie of Bogart:
 
https://www.youtube.com/watch?v=lB2ckiy_qlQ
 
 
 
So i have just decided to write again another poem of Love of mine, here
it is:
 
 
My baby, just the way you are !
 
Since i am loving you like in a beautiful movie of Bogart
 
My baby, just the way you are !
 
Since my love is like the music of Mozart
 
My baby, just the way you are !
 
Since even if i am not so strong as Napoleon Bonaparte
 
My baby, just the way you are !
 
Since I am coming to you with class and elegance my lovely heart
 
My baby, just the way you are !
 
Since you are feeling my love in my beautiful art !
 
My baby, just the way you are !
 
Because our love is like a so beautiful light from the start !
 
My baby, just the way you are !
 
Since our love will never come apart !
 
My baby, just the way you are !
 
This way, my baby, our love is thus so beautifully smart !
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 02 04:40PM -0700

Hello..
 
 
 
I was just listening at this beautiful song:
 
https://www.youtube.com/watch?v=zwDjJP_l5AY
 
 
So i have just decided to write this new poem of Love of mine, here it is:
 
 
Your Love Is King
 
It's like the beautiful jewels and a beautiful spring
 
Your Love Is King
 
It is like the Lord looking at everything
 
Your Love Is King
 
It's like your so beautiful wings that are making me sing
 
Your Love Is King
 
It's making my words of Love beautifully swing
 
Your Love Is King
 
It's like a beautiful cup of wine that we drink
 
Your Love Is King
 
It is like Wisdom and Love that i bring
 
Your Love Is King
 
It is like my ring around your beautiful finger of a King
 
Your Love Is King
 
It's like the beautiful melody of Jazz and the Swing
 
Your Love Is King
 
It is like our Love that we are worshiping
 
 
 
Thank you,
Amine Moulay Ramdane
Horizon68 <horizon@horizon.com>: Jul 02 01:43PM -0700

Hello,
 
 
Read my final corrected poem of Love:
 
 
I was just listening to the following song of George Michael
 
FREEDOM - George Michael
 
https://www.youtube.com/watch?v=j8h-Ltha_9w
 
 
So i have just decided to write a new poem of Love of mine, here it is:
 
 
Play to me this lovely song of you
 
Because it makes me feel you like a beautiful bijou
 
Play to me this lovely song of you
 
Because i am night and day searching for you
 
Play to me this lovely song of you
 
Since my Tattoo of love is making it a beautiful news
 
Play to me this lovely song of you
 
Since it is like a transfuse of love of my beautiful views
 
Play to me this lovely song of you
 
Since my love is a beautiful ocean trip that we call a "cruise"
 
Play to me this lovely song of you
 
Because i love to see you washing your hair with a beautiful shampoo
 
Play to me this lovely song of you
 
Because it's my peace and love and not the fight of the Kung fu
 
Play to me this lovely song of you
 
Because it's like my course of love coming from the universities of MIT
and Waterloo !
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 02 10:52AM -0700

Hello..
 
 
I was just listening to the following song of George Michael
 
FREEDOM - George Michael
 
https://www.youtube.com/watch?v=j8h-Ltha_9w
 
 
So i have just decided to write a new poem of Love of mine, here it is:
 
 
Play to me this lovely song of you
 
Because it makes feel you like a beautiful bijou
 
Play to me this lovely song of you
 
Because i am night and day searching for you
 
Play to me this lovely song of you
 
Since my Tattoo of love is making it a beautiful news
 
Play to me this lovely song of you
 
Since it is like a transfuse of love of my beautiful views
 
Play to me this lovely song of you
 
Since my love is a beautiful ocean trip that we call a "cruise"
 
Play to me this lovely song of you
 
Because i love to see you washing your hair with a beautiful shampoo
 
Play to me this lovely song of you
 
Because it's my peace and love and not the fight of the Kung fu
 
Play to me this lovely song of you
 
Because it's like my course of love coming from the universities of MIT
and Waterloo !
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jul 02 09:38AM -0700

Hello..
 
 
 
I was just listening at the following song:
 
Aretha Franklin - I say a little prayer
 
https://www.youtube.com/watch?v=KtBbyglq37E
 
 
So i have just decided to write this new poem of Love of mine:
 
 
 
Falling In Love With You
 
Is the way of my beautiful wisdom and not the Marabout
 
Falling In Love With You
 
Is peace and love that is not a fight with you
 
Falling In Love With You
 
It is like my beautiful desire that i want to always renew
 
Falling In Love With You
 
Is like a game of love that i am playing like Chess and Sudoku
 
Falling In Love With You
 
Is like i am being a hero of a beautiful breakthrough
 
Falling In Love With You
 
Is because i am wanting to stay forever with you.
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 28 11:27AM -0700

Hello..
 
 
Analogy with parallel computing..
 
My personality is more complex; you have to understand me more. When I
say I am also "gay" like Chevy Chase, meaning humoristic, you have to
understand this "abstraction" of saying humoristic: I am humoristic like
Chevy Chase because I am more positive and I want others to be more
positive, so I can be humoristic to make you positive. But my humoristic
way of doing things is more "smart", because I can use a sophisticated
humoristic manner to teach you more about morality and about life, and I
am more intellectual in doing so.
 
And speaking about "abstractions", I think it is a good subject for
philosophy, because I think you have to be capable of philosophizing
about computing. I think one of the main parts of computing is
abstracting, but it is not only about abstracting: you have to abstract
and be sophisticated about it by making your abstractions "efficient".
I will give you an example:
 
As you know, I am an inventor of many scalable algorithms, and one of my
last inventions is a Fast Mutex that is adaptive. I have extracted the
abstractions from my Fast Mutex, and those abstractions are like a
language, or like an automaton that is also like a protocol constituted
of a language: when I execute the abstraction that is the Enter()
method, it will enter the Fast Mutex, and when I execute the abstraction
that is the Leave() method, it will leave the Fast Mutex. But you have
to be smarter, because it is "not" enough to abstract; you also have to
be efficient. I mean that I was thinking like a researcher when I
invented my last Fast Mutex, taking into account the following
characteristics, so I made my new Fast Mutex powerful by giving it the
following properties:
 
 
1- Starvation-free
2- Good fairness
3- It keeps efficiently and very low the cache coherence traffic
4- Very good fast path performance (it has the same performance as the
scalable MCS lock when there is contention.)
5- And it has a good preemption tolerance.
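The Enter()/Leave() abstraction described above can be sketched in a few
lines. The following Python stand-in is built on a plain threading.Lock
(the scalable, adaptive fast mutex itself is not reproduced here; the
class and method names merely mirror the protocol):

```python
import threading

class MutexSketch:
    """Minimal Enter()/Leave() protocol over an ordinary lock."""
    def __init__(self):
        self._lock = threading.Lock()

    def Enter(self):
        self._lock.acquire()   # enter the critical section

    def Leave(self):
        self._lock.release()   # leave the critical section

counter = 0
m = MutexSketch()

def worker():
    global counter
    for _ in range(10_000):
        m.Enter()
        counter += 1           # protected read-modify-write
        m.Leave()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000: the mutex serializes every increment
```

Whatever lock implementation sits behind Enter() and Leave(), the
callers see the same two-word protocol; that is the point of the
abstraction.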
 
 
I think that you will not find this new invention of mine anywhere else.
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 27 01:30PM -0700

Hello..
 
 
 
My invention that is my scalable reference counting with efficient
support for weak references version 1.35 is here..
 
Here I am again: I have just updated my scalable reference counting with
efficient support for weak references to version 1.35. I have added
TAMInterfacedPersistent, a scalable reference-counted version of
TInterfacedPersistent, and I now think the library is complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But I don't agree with what the author of the above web page writes,
because I think you have to understand the "spirit" of Delphi. Here is why:
 
A component is supposed to be owned and destroyed by something else,
"typically" a form ("typically" meaning in most cases, which is the most
important thing to understand). In that scenario, the reference count is
not used.
 
If you pass a component as an interface reference, it would be very
unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
That is also why I have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just
replace TInterfacedObject with my TAMInterfacedObject and replace
TInterfacedPersistent with my TAMInterfacedPersistent; these are the
scalable reference-counted versions, and you will find both inside the
AMInterfacedObject.pas file. To learn how to use weak references, take a
look at the included demo called example.dpr and at the tutorial about
weak references inside the zip file. To learn how to use delegation,
take a look at the included demo called test_delegation.pas and at the
tutorial about delegation inside the zip file, which teaches you how to
use delegation.
 
I think my scalable reference counting with efficient support for weak
references is stable and fast. It works on both Windows and Linux, and
it scales on multicore and NUMA systems. You will not find it in C++ or
Rust, and I don't think you will find it anywhere else. You have to know
that this invention of mine solves both the problem of dangling pointers
and the problem of memory leaks, and it is "scalable".
 
And please read the readme file inside the zip file; I have just
extended it to help you understand more.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.35 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 27 01:07PM -0700

Hello..
 
 
Disadvantages of functional programming:
 
 
- Immutable values combined with recursion might lead to a reduction in
performance
- In some cases, writing pure functions causes a reduction in the
readability of the code
- Though writing pure functions is easy, combining the same with the
rest of the application as well as the I/O operations is tough
- Writing programs in recursive style in place of using loops for the
same can be a daunting task
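The first point is easy to demonstrate: rebuilding an immutable value on
every step turns a linear accumulation quadratic. A small Python sketch
(sizes are arbitrary):

```python
import timeit

def immutable_build(n):
    """Functional style: a brand-new tuple is created on every step."""
    acc = ()
    for i in range(n):
        acc = acc + (i,)      # copies the whole accumulator each time
    return acc

def mutable_build(n):
    """Imperative style: the list is extended in place."""
    acc = []
    for i in range(n):
        acc.append(i)
    return acc

t_immutable = timeit.timeit(lambda: immutable_build(2000), number=5)
t_mutable = timeit.timeit(lambda: mutable_build(2000), number=5)
print(t_immutable > t_mutable)   # the copying version is measurably slower
```

The immutable version does O(n) copying per step, O(n^2) total, while
the in-place append is amortized O(1) per step.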
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 26 04:07PM -0700

Hello..
 
 
 
About my new PERT++ and JNI Wrapper..
 
 
I am also posting this time about my JNI Wrapper for Delphi and
FreePascal, which I have enhanced much more, and I think it is stable
and complete; you will notice that the JNI Wrapper now automatically
configures itself to support new versions of Oracle Java.
 
Please download the new JNI Wrapper from:
 
https://sites.google.com/site/scalable68/jni-wrapper-for-delphi-and-freepascal
 
 
And now here is my new PERT++:
 
 
PERT++ (An enhanced edition of the program or project evaluation and
review technique that includes Statistical PERT) in Delphi and FreePascal
 
Version: 1.37
 
Authors: Amine Moulay Ramdane (who implemented PERT), Robert Sedgewick,
Kevin Wayne.
 
Email: aminer68@gmail.com
 
Description:
 
This program (or project) evaluation and review technique includes
Statistical PERT. It is a statistical tool used in project management,
designed to analyze and represent the tasks involved in completing a
given project.
 
PERT++ also permits you to calculate:

- The longest path of planned activities to the end of the project

- The earliest and latest times that each activity can start and finish
without making the project longer

- The "critical" activities (those on the longest path)

- The priorities among activities, for effective management and for
shortening the planned critical path of a project by:

- Pruning critical path activities

- "Fast tracking" (performing more activities in parallel)

- "Crashing the critical path" (shortening the durations of critical
path activities by adding resources)

- And it permits you to give the risk for each output PERT formula
 
PERT is a method of analyzing the tasks involved in completing a given
project, especially the time needed to complete each task, and to
identify the minimum time needed to complete the total project. It
incorporates uncertainty by making it possible to schedule a project
while not knowing precisely the details and durations of all the
activities. It is more of an event-oriented technique rather than start-
and completion-oriented, and is used more in projects where time is the
major factor rather than cost. It is applied to very large-scale,
one-time, complex, non-routine infrastructure and Research and
Development projects.
 
PERT and CPM are complementary tools, because CPM employs one time
estimate and one cost estimate for each activity; PERT may utilize three
time estimates (optimistic, most likely, and pessimistic) and no costs
for each activity. Although these are distinct differences, the term
PERT is applied increasingly to all critical path scheduling. This PERT
library uses a CPM algorithm that uses Topological sorting to render CPM
a linear-time algorithm for finding the critical path of the project, so
it's fast.
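The topological-sort idea that makes CPM linear-time can be sketched
briefly. The following Python sketch is not the library's Java CPM
solver; the job durations and precedence edges are made-up sample data:

```python
from collections import deque

# durations of four jobs and, for each job, the jobs that must follow it
duration = {0: 4.0, 1: 6.0, 2: 2.0, 3: 3.0}
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}

# count the predecessors of each job
indeg = {u: 0 for u in duration}
for u in succ:
    for v in succ[u]:
        indeg[v] += 1

earliest = {u: 0.0 for u in duration}            # earliest start times
q = deque(u for u in duration if indeg[u] == 0)  # jobs with no predecessor
while q:
    u = q.popleft()
    for v in succ[u]:
        # a job can only start after every predecessor has finished
        earliest[v] = max(earliest[v], earliest[u] + duration[u])
        indeg[v] -= 1
        if indeg[v] == 0:
            q.append(v)

finish = max(earliest[u] + duration[u] for u in duration)
print(finish)   # critical path 0 -> 1 -> 3: 4 + 6 + 3 = 13
```

Each edge is relaxed exactly once in topological order, which is why
the longest (critical) path falls out in linear time on a DAG.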
 
You have to have a java compiler, and you have first to compile the java
libraries with the batch file compile.bat, and after that compile the
Delphi and Freepascal test1.pas program.
 
Here is the procedure to call for PERT:
 
procedure solvePERT(filename:string;var info:TCPMInfo;var
finishTime:system.double;var criticalPathStdDeviation:system.double);
 
The arguments are:
 
filename: the file to pass. It is organized as follows:
 
The first line is the number of jobs; each of the remaining lines
contains the three time estimates for the job (optimistic, expected, and
pessimistic), then the number of precedence constraints, then the
precedence constraints, which specify that the job has to be completed
before certain other jobs are begun.
 
info: the returned information; you can get the job number and the start
and finish times of the job in info[i].job, info[i].start, and
info[i].finish. Please look at the test.pas example to understand.
 
finishTime: the finish time.
 
criticalPathStdDeviation: the critical path standard deviation.
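For context, the classic three-point PERT arithmetic behind an expected
finish time and a critical path standard deviation looks like this (a
generic Python sketch with made-up activities, not the library's own
code):

```python
import math

def pert_expected(o, m, p):
    """Expected duration from optimistic, most-likely, pessimistic."""
    return (o + 4 * m + p) / 6.0

def pert_variance(o, p):
    """Variance of a single activity: ((p - o) / 6) squared."""
    return ((p - o) / 6.0) ** 2

# three activities assumed to lie on the critical path
critical_path = [(2, 4, 6), (3, 5, 13), (1, 2, 3)]

finish_time = sum(pert_expected(o, m, p) for o, m, p in critical_path)
std_dev = math.sqrt(sum(pert_variance(o, p) for o, _, p in critical_path))
print(finish_time)            # 4 + 6 + 2 = 12
print(round(std_dev, 3))
```

The expected durations add along the path, while the variances (not the
standard deviations) add and are then square-rooted.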
 
I have also provided you with three other functions, here they are:
 
function NormalDistA (const Mean, StdDev, AVal, BVal: Extended): Single;
 
function NormalDistP (const Mean, StdDev, AVal: Extended): Single;
 
function InvNormalDist(const Mean, StdDev, PVal: Extended; const Less:
Boolean): Extended;
 
For NormalDistA() or NormalDistP(), you pass the best estimate of
completion time as Mean and the critical path standard deviation as
StdDev, and you get the probability at the value AVal or the probability
between the values AVal and BVal.

For InvNormalDist(), you pass the best estimate of completion time as
Mean and the critical path standard deviation as StdDev, and you get the
length of the critical path at probability PVal; when Less is TRUE, you
obtain a cumulative distribution.
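What these helper functions compute can be mirrored with Python's
statistics.NormalDist; the mean and standard deviation below are made-up
values, and this sketch is an illustration of the idea, not the
library's implementation:

```python
from statistics import NormalDist

mean, std_dev = 12.0, 1.826          # best estimate and critical-path sigma
dist = NormalDist(mean, std_dev)

# probability that the project finishes within 14 time units (cumulative)
p_by_14 = dist.cdf(14.0)

# deadline that gives a 95% chance of on-time completion (inverse CDF)
deadline_95 = dist.inv_cdf(0.95)

print(round(p_by_14, 3))
print(round(deadline_95, 2))
```

The cumulative distribution answers "what is the chance we finish by
date X", and the inverse answers "what date gives us probability P".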
 
I have also included 32-bit and 64-bit Windows executables called
PERT32.exe and PERT64.exe (which take a file, in the file format that I
specified above, as an argument) inside the zip; it is a very powerful
tool. You need to compile CPM.java with compile.bat before running them.
 
I have also included 32-bit and 64-bit Windows executables called
CPM32.exe and CPM64.exe (which take a file, in the file format that I
specified in the Readme.CPM file, as an argument) inside the zip; they
run the CPM solver that you use with the Statistical PERT that I have
included inside the zip file. You need to compile CPM.java with
compile.bat before running them.
 
The very important things to know about PERT are these:
 
1- PERT works best in projects where previous experience can be relied
on to accurately make predictions.
 
2- To avoid underestimating project completion time, especially if
delays cause the critical path to shift around, you have to rely on
point number 1 above, and/or management time and resources can be
applied to make sure that the optimistic, most likely, and pessimistic
time estimates of the activities are accurate.
 
The PERT++ zip file also includes the powerful Statistical PERT.
Statistical PERT is inside the Statistical_PERT_Beta_1.0.xlsx Microsoft
Excel workbook; you can use LibreOffice or Microsoft Office to run it.
After that, pass the output data of Statistical PERT to the CPM library.
Please read Readme.CPM to learn how to use the CPM library, and please
read and learn about Statistical PERT on the internet.
 
Please read about Statistical PERT here:
 
http://www.statisticalpert.com/What_is_Statistical_PERT.pdf
 
Have fun with it !
 
Language: FPC Pascal v2.2.0+ / Delphi 7+: http://www.freepascal.org/
 
Operating Systems: Windows
 
Required FPC switches: -O3 -Sd
 
-Sd for delphi mode....
 
Required Delphi switches: -$H+ -DDelphi
 
For Delphi XE-XE7 and Delphi Tokyo use the -DXE switch
 
 
 
You can download it from:
 
https://sites.google.com/site/scalable68/pert-an-enhanced-edition-of-the-program-or-project-evaluation-and-review-technique-that-includes-statistical-pert-in-delphi-and-freepascal
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 26 01:58PM -0700

Hello..
 
 
My GUI components are here..
 
Why am I implementing GUI components, and the process by which you
organize them into more interesting and interactive GUI applications?

You can still use Qt GUI libraries or the like, but what I am doing is
teaching the Delphi and FreePascal community how to design and implement
the following GUI components with the simple primitives of Wingraph,
which is like the graph unit of Turbo Pascal:
 
- Button
- Label
- TextBox
- MessageBox
- Memo
- Panel
- RadioButton
- CheckBox
- ProgressBar
- TreeMenu
- Calendar
 
You have to know that "using" GUI libraries is much easier, but being
able to understand the "inside" of how to implement sophisticated GUI
components from simple graphical primitives is better and good to know.
 
About my next TreeMenu GUI component: I think I will also soon and
easily implement a TreeMenu component that is like TreeView but is built
from two of my Winmenus and from my StringTree. I think you will
appreciate it, because it will be "powerful", using a much enhanced
version of my Winmenus.
 
More explanation of the GUI components..
 
Now the following GUI components are supported:
 
- Button
- Label
- TextBox
- MessageBox
- CheckBox
- Winmenus
 
 
I will soon add a more powerful GUI component called TreeMenu; it will
look like the GUI of a file manager, but you will be able to click on
the callbacks and to search, etc.

Also, I will soon provide you with a Calendar GUI component and with a
CheckBox component.
 
I have corrected a small bug in the event loop; now I think everything
is working correctly and my units are stable. Here are the units
included inside the zip file of my new Winmenus version 1.22: my Graph3D
unit for 3D graphics, which looks like the graph unit of Turbo Pascal;
my enhanced Winmenus GUI component using Wingraph; the GUI unit that
contains the other GUI components; and of course Wingraph itself, which
looks like the graph unit of Turbo Pascal but is for Delphi and
FreePascal.
 
About my software project..
 
As you have noticed i have extended the GUI components using Wingraph,
now i have added a checkbox GUI component, now here is the GUI
components that are supported:
 
- Button
- Label
- TextBox
- MessageBox
- CheckBox
- Winmenus
 
 
But I think that the Memo GUI component can be "emulated" with my
Winmenus GUI component, so I think that my Winmenus GUI component is
powerful and complete now. I will soon add a more powerful GUI component
called TreeMenu that will look like the GUI of a file manager, but you
will be able to click on the callbacks and to search, etc.; it will be
implemented with two of my Winmenus and with my powerful StringTree here:
 
https://sites.google.com/site/scalable68/stringtree
 
 
And soon i will also implement a Calendar GUI component and
a ProgressBar GUI component.
 
But you have to understand me: I am implementing those GUI components so
that you will be able to design and implement more interactive GUI
applications for graphical programs with Wingraph and the like.
 
 
You can download Winmenus using wingraph that contains all the units
above that i have implemented from:
 
https://sites.google.com/site/scalable68/winmenus-using-wingraph
 
 
I have also implemented the text mode WinMenus, here it is:
 
 
Description

 
Drop-Down Menu Widget using the Object Pascal CRT unit
 
Please look at the test.pas example inside the zip file. Use the
'Delete' key on the keyboard to delete items, the 'Insert' key to insert
items, 'Up', 'Down', 'PageUp', and 'PageDown' to scroll, the 'Tab' key
to switch between the drop-down menus, 'Enter' to select an item, 'Esc'
to exit, 'F1' to delete all the items from the list, and the right and
left arrow keys to scroll right or left.

 
You can search with SearchName() and NextSearch() methods and now the
search with wildcards inside the Widget is working perfectly.
 
Winmenus is event driven, i have to explain all to you to understand
more...
 
At first you have to create your Widget menu by executing something like
this:
 
Menu1:=TMenu.create(5,5);
 
This will create a Widget menu at the coordinate (x,y) = (5,5)
 
After that, you have to set your callbacks, because my Winmenus is event
driven; you do it like this:
 
Menu1.SetCallbacks(insert,updown);
 
The SetCallbacks() method will set your callbacks. The first parameter
is the callback that will be executed when the Insert key is pressed
(here the insert() function), and the second is the callback that will
be called when the Up and Down keys are pressed (here the function
updown). The remaining callbacks that you can assign are for the
following keys: Delete and F1 to F12.
 
After that you can add your items and the callbacks to the Menu by
calling the AddItem() method like this:
 
Menu1.AddItem(inttostr(i),test1);
 
test1 is a callback that you add with the AddItem() method.
 
After that, you will enter a loop; the template of this loop must look
like the following, which is not difficult to understand:
 
Here it is:
 
===
repeat

  textbackground(blue);
  clrscr;
  menu2.execute(false);
  menu1.execute(false);

  case i mod 2 of
    1: ret:=Menu1.Execute(true);
    0: ret:=Menu2.Execute(true);
  end;
  if ret=ctTab then inc(i);

until ret=ctExit;

menu1.free;
menu2.free;

end.
===
 
When you execute menu1.execute(false), with a parameter equal to false,
my Winmenus widget will draw your menu without waiting for your input
and events; when you set the parameter of the execute() method to true,
it will wait for your input and events. If the parameter of the
execute() method is true and its returned value is ctTab, that means you
have pressed the Tab key; if the returned value is ctExit, that means
you have pressed the Escape key to exit.
 
 
You can download my text mode Winmenus from:
 
https://sites.google.com/site/scalable68/winmenus
 
 
And my units are working with Delphi and FreePascal and C++Builder.
 
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
