Saturday, November 30, 2019

Digest for comp.lang.c++@googlegroups.com - 24 updates in 3 topics

"Öö Tiib" <ootiib@hot.ee>: Nov 29 04:41PM -0800

On Friday, 29 November 2019 18:35:35 UTC+2, Mr Flibble wrote:
> > revisit that decision after every release. What is reasonable
> > now may become backwards few years later.
 
> Utter tosh. if constexpr is far more than a "performance optimization"; it is especially useful in template code.
 
Utter jive; it was warning C4127 under discussion, not the usefulness
of if constexpr. What is the value of a warning C4127 that says
"consider hinting me that we can performance-optimize here"?
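 
For reference, and with invented function names: MSVC's C4127 is
"conditional expression is constant". It fires on an ordinary constant
condition like the one from this thread, while if constexpr marks the
condition as deliberately compile-time and selects code per instantiation:
 
    #include <cstdio>
 
    void poll()
    {
        while (2 + 2 == 4) {        // constant condition - C4127 territory
            std::puts("tick");
            break;                  // just so the example terminates
        }
    }
 
    template <bool Verbose>
    void trace(const char* msg)
    {
        if constexpr (Verbose)      // deliberately constant; no "hint" needed
            std::puts(msg);
    }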
"Öö Tiib" <ootiib@hot.ee>: Nov 29 04:52PM -0800

On Friday, 29 November 2019 18:33:53 UTC+2, Mr Flibble wrote:
 
> VC++ is correct to warn about while(2+2 == 4) and not for(;;) the later being the only acceptable why to do an endless loop (I don't agree that while(true) is just as good as it is syntactically less clean).
 
It was while(1) versus for(;;) the last time I saw anyone bother to
discuss those, and they are still just as apparently equivalent as
they were back then.
"Öö Tiib" <ootiib@hot.ee>: Nov 29 05:15PM -0800

On Friday, 29 November 2019 10:38:18 UTC+2, David Brown wrote:
 
> <https://en.cppreference.com/w/cpp/language/if>
 
> One other point is that since the discarded part is discarded, any
> identifiers it uses do not need to be defined or linked in with the code.
 
I'm in doubt about that other point. I noticed long ago that the
implementations do not link in unused functions. That was back in
2004 or maybe 2007.
 
I was deleting unused functions after a refactoring, and since there
were tens of them I used the impossibility of setting breakpoints in
them as a heuristic that they were now unused.
 
However, my program became ill-formed because I had erased one that
was optimized out not because it wasn't called, but because it was
called only in a branch that was impossible to take.
 
So compilers did not need if constexpr for that back then ... why
have they suddenly started to need it now?
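 
A minimal sketch of the situation being described (names invented for
illustration): with a plain if, the call in the impossible branch must
still compile even though the optimizer drops it, whereas in a template
an if constexpr branch that is discarded is never instantiated, so the
function it names need never be defined or linked:
 
    void rarely_used();          // erase this declaration and the plain-if
                                 // version below stops compiling
 
    void work(int mode)
    {
        if (mode == 1 && mode == 2)   // always false, but only the optimizer
            rarely_used();            // knows; the linker usually never sees it
    }
 
    template <bool Special>
    void work2()
    {
        if constexpr (Special)
            rarely_used();       // discarded (not instantiated) when
    }                            // Special == false, so no definition needed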
Paavo Helde <myfirstname@osa.pri.ee>: Nov 30 08:19AM +0200

On 30.11.2019 3:15, Öö Tiib wrote:
> was called only in branch that was impossible to take.
 
> So compilers did not need if constexpr back then for it ... why
> these suddenly started to need it now?
 
You answered that yourself: if constexpr is needed to make your ill-formed
program well-formed again.
Ned Latham <nedlatham@woden.valhalla.oz>: Nov 30 12:24AM -0600

Paavo Helde wrote:
> > these suddenly started to need it now?
 
> You answered that yourself: constexpr is needed to make your ill-formed
> program well-formed again.
 
No. What would make his program well-formed is the removal of that
"branch that [is] impossible to take".
 
(HTF does one *write* such a branch?)
"Öö Tiib" <ootiib@hot.ee>: Nov 30 02:13AM -0800

On Saturday, 30 November 2019 08:24:49 UTC+2, Ned Latham wrote:
> > program well-formed again.
 
> No. What would make his program well formed is the removal of that
> "branch that [is] impossible to take".
 
+1 That is indeed what fixed it.
 
> (HTF doesd one *write* such a branch?)
 
It was the effect of evolution. Roughly: one developer wrote a new
feature, another fixed some issue in it, and a third implemented an
improvement. The details of the if condition changed during the
process until it turned into always false. Neither those programmers
nor the reviewers of their pull requests noticed that. Perhaps there
was some milestone pressing, as there often is. Then it was released,
real data arrived (on which its performance was weak) and that
refactoring took place.
Ned Latham <nedlatham@woden.valhalla.oz>: Nov 30 04:56AM -0600

Öö Tiib wrote:
 
> > No. What would make his program well formed is the removal of that
> > "branch that [is] impossible to take".
 
> +1 That did make it.
 
Glad to help. You've erased the function it called as well?
 
> so it turned into always false. Neither those programmers nor reviewers
> of their pull requests did notice that. Perhaps there was some mile-
> stone pressing as it often is.
 
I've been there. The commercial environment is hostile to thoroughness.
 
> Then it was released, real data arrived
> (with what performance of it was weak) and that refactoring took place.
 
I have some difficulty with the mindset behind program development
these days; IDEs provide features that make routine programming
easier, but get in the way of the unorthodox approach; compilers
do the same; even operating systems can be a nuisance.
 
I cut my teeth on assembly-language programming on CP/M machines;
thoroughness is productive in that environment, and it translated
well for me into HLLs.
David Brown <david.brown@hesbynett.no>: Nov 30 12:03PM +0100

On 30/11/2019 01:52, Öö Tiib wrote:
 
> It was while(1) versus for(;;) last I saw anyone bothering to discuss
> those and these are still as apparently equivalent as these were back
> then.
 
I always use "while (true)" - and all my programs have that at least
once, being small embedded systems.
 
I have never liked the look of "for (;;)" - and I fear that some email
programs will turn it into a rather weird smiley!
"Öö Tiib" <ootiib@hot.ee>: Nov 30 03:20AM -0800

On Saturday, 30 November 2019 12:56:59 UTC+2, Ned Latham wrote:
> > > "branch that [is] impossible to take".
 
> > +1 That did make it.
 
> Glad to help. You've erased the function it called as well?
 
That happened in the previous steps:
1) The compiler left it out of the executable,
2) I erased it from the code since a breakpoint was impossible to set in it,
3) I erased the worthless branch that then stopped compiling.
 
 
> I cut my teeth on assembly-language programming on CP/M machines;
> thoroughness is productive in that environment, and it translated
> well for me into HLLs.
 
Yes, it has become harder: all the tools have become easier to use,
but the targets are also more complicated. The attempted solution is
to add more programmers, but synchronizing their effort is itself an
effort. In the end everybody's vision of what is going on is naive,
but things somehow get done.
Ned Latham <nedlatham@woden.valhalla.oz>: Nov 30 05:38AM -0600

Öö Tiib wrote:
> Ned Latham wrote:
 
----snip----
 
> by adding more programmers but synchronizing their effort is also
> an effort. At the end everybody's vision of what is going on is
> naive but things get somehow done.
 
That "somehow" bothers me. It's what started the over-complication
of the programming environment in the first place, IMO, and it
continues to influence development in all areas.
"Öö Tiib" <ootiib@hot.ee>: Nov 30 03:47AM -0800

On Saturday, 30 November 2019 13:04:07 UTC+2, David Brown wrote:
> > then.
 
> I always use "while (true)" - and all my programs have that at least
> once, being small embedded systems.
 
For me it is just style, and it is not the business of the compiler
to regulate style.
Especially annoying is the warning when someone has wrapped a
function-like macro's body into do { } while(false) and the compiler
then starts to whine about the while(false).
As long as there are features available only in the preprocessor we
need to use macros in places, and so also the safe idioms that go
with them (see the sketch below).
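 
A minimal sketch of that idiom (macro and function names invented here):
the do { } while(false) wrapper turns the macro body into a single
statement so that it composes with if/else, and it is exactly that
while(false) which the warning then complains about:
 
    #include <cstdio>
 
    #define LOG_TWICE(msg)      \
        do {                    \
            std::puts(msg);     \
            std::puts(msg);     \
        } while (false)         /* the while(false) the compiler whines about */
 
    void report(bool verbose)
    {
        if (verbose)
            LOG_TWICE("details follow");  // expands to one statement,
        else                              // so the else still binds correctly
            std::puts("quiet");
    }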
 
> I have never liked the look of "for (;;)" - and I fear that some email
> programs will turn it into a rather weird smiley!
 
Pasting into modern chats does it more likely than not. I just
thought that with woodbrian's style of prefixing std with :: we
can get the <:: sequence in vector<::std::string>, where some
elderly compilers still treat <: as the digraph for [.
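 
A small illustration of that point (hedged: the exact behaviour depends
on the compiler and language mode): under maximal munch a pre-C++11
compiler reads vector<::std::string> as vector[ :std::string, and the
traditional cure was an extra space:
 
    #include <string>
    #include <vector>
 
    ::std::vector<::std::string> names;    // fine in C++11 and later
    ::std::vector< ::std::string> names2;  // the space keeps elderly compilers happy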
"Öö Tiib" <ootiib@hot.ee>: Nov 30 04:57AM -0800

On Saturday, 30 November 2019 13:39:12 UTC+2, Ned Latham wrote:
 
> That "somehow" bothers me. It's what started the over-complication
> of the programming environment in the first place, IMO, and it
> continues to influence development in all areas.
 
It has been going on for a long time. Notice that the project was in
the mid 2000's. Currently teams of teams of programmers of random
skill levels are doing all kinds of (even embedded) projects. It is
more or less the norm. So piles of such somehow-made stuff are what
our economies consist of.
 
But what can we do? Imagine that we organize some kind of "order of
masters" to fight against garbage and for the virtue of mastery.
Imagine that we gain power and popularity. Then some monster (Google,
Microsoft, Amazon, SAP, Oracle, Apple or a combo of such) will figure
out how to weaken it, buy it, fill it with weasels and noobs and use
it for producing and marketing even more garbage. ;)
It is hard to build something that is tricky to hack with the power
of monsters. But maybe worth a try? :D
David Brown <david.brown@hesbynett.no>: Nov 30 03:08PM +0100

On 30/11/2019 12:47, Öö Tiib wrote:
>> once, being small embedded systems.
 
> For me it is just style and it is not business of compiler to
> regulate style.
 
Agreed. Some prefer "while (true)", some prefer "for (;;)", and some
prefer other variations.
 
Still, it is not unreasonable for a compiler to suppose that "while (2 +
2 == 4)" is more likely to be a programmer mistake. Sometimes the line
between "unusual, inconsistent or lazy style" and "probable mistake" is
not sharp, and a warning can be helpful - like a compiler warning on
inconsistent indentation on nested if/else statements.
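 
For example, something in the spirit of the following (function names
invented; GCC's -Wmisleading-indentation and Clang's equivalent warn
about it):
 
    void start();
    void log_result();
 
    void run(bool ready)
    {
        if (ready)
            start();
            log_result();   // runs unconditionally, despite the indentation
    }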
 
 
> Especially annoying is the warning when someone had wrapped
> function-like macro's body into do { } while(false) and
> then compiler starts to whine about while(false).
 
That one would quickly get annoying.
 
Ned Latham <nedlatham@woden.valhalla.oz>: Nov 30 09:46AM -0600

Öö Tiib wrote:
> for producing and marketing of even more garbage. ;)
> It is hard to do something that is tricky to hack with power of
> monsters. But maybe worth to try? :D
 
Funny you should mention that. I have a project XBNF, which is an
eXtended Backus Naur Formalism (like EBNF but extended a little
more), and a co-project xbnf, which is a machine reader for XBNF.
 
If you give xbnf the XBNF definition of the syntax of a programming
language and the XBNF definition of syntax errors in that language,
and matching error messages, it will check the syntax for you and
emit error messages. You can give it other XBNF definitions too;
matching program commands and High Level Language Productions (HLLP),
to emit the first stage of a compilation.
 
XBNF can also define the syntax matching HLLPs with Assembly Language
Productions, and OS Productions, and Machine Architecture Productions;
with all that xbnf becomes a full-blown compiler capable of running on
any machine it can be compiled for, and capable of compiling code in
any programming language whatever, for any OS whatever, to run on
pretty much any machine whatever.
 
Configuration (writing all those syntax files) will be a bitch without
a share setup, and as a compiler, it'll be pretty simple, but you'll
be able to define (say) C++ with or without things like constexpr as
suits you. We'll be free of the tyranny of the compiler-writers. It's
a hacker's wet dream.
 
And it's open source. Anyone who gets a copy is an obstacle to the
corporates that would sabotage it.
 
If I can get it done.
"Öö Tiib" <ootiib@hot.ee>: Nov 30 11:09AM -0800

On Saturday, 30 November 2019 17:47:15 UTC+2, Ned Latham wrote:
> any machine it can be compiled for, and capable of compiling code in
> any programming language whatever, for any OS whatever, and to run on
> pretty much on any machine whatever.
 
Sounds like you are also claiming to be doing what Mr Flibble claims
to be doing. Very interesting. If neither Mr Flibble nor you is just
trolling then maybe you should try to cooperate or to compete or
something like that.
 
> be able to define (say) C++ with or without things like constexpr as
> suits you. We'll be free of the tyranny of the compiler-writers. It's
> a hacker's wet dream.
 
However, producing a translator for a sophisticated modern programming
language (whether by programming or by configuring something) will
take years of work by decent specialists.
 
> And it's open source. Anyone who gets a copy is an obstacle to the
> corporates that would sabotage it.
 
> If I can get it done.
 
Yeah ... and then there is the "one ring" of Sauron, open sourced. :D
By "master" I meant people who are willing to learn to be aware of
what they have brought into this world and also to learn the
capability to carry the weight of responsibility for it. Since the
monsters apparently don't care about it.
David Brown <david.brown@hesbynett.no>: Nov 30 12:00PM +0100

On 29/11/2019 18:18, Scott Lurndal wrote:
 
>> Once your LTO-optimised binary is stripped of all extra symbols, it will
>> be very difficult to follow by examination of the object code.
 
> Until you run it under QEMU and get a full instruction/register trace.
 
It will still be difficult to follow there. LTO generally results in a
lot of inlining, including partial inlining - code from all over the
project can get merged together into huge blocks of assembly with no
clear structure. With -O3 you get unrolling, re-arrangement of loops,
function cloning, and more.
 
Certainly QEMU or similar tools are a big aid to tracing the
functionality of code (and they pretty much neutralise things like code
encryption), but they won't make it an easy job here. Even for more
complex obfuscation techniques, like self-modifying code, extra jumps
added all over, extra dummy code, etc., there are tools available for
picking out the real code flow - though they may be hard to find and
expensive to get.
 
That is why complex obfuscation is pointless here, but LTO is a simple
and easy way to block casual inspection of the code.
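 
For example, something along these lines with GCC or Clang (the file
names are placeholders):
 
    g++ -O3 -flto a.cpp b.cpp main.cpp -o prog
    strip prog     # drop the remaining symbol table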
scott@slp53.sl.home (Scott Lurndal): Nov 30 07:03PM

>project can get merged together into huge blocks of assembly with no
>clear structure. With -O3 you get unrolling, re-arrangement of loops,
>function cloning, and more.
 
I spend every day debugging processors that run such code; for which
it is necessary to understand what the code is trying to accomplish,
often without source. Simulation tools and in-circuit emulation can
be quite powerful.
Frederick Gotham <cauldwell.thomas@gmail.com>: Nov 30 02:37AM -0800

> symbol prompted them to be included)
> from an archive is in the link map
> (assuming you ask for one).  -M
 
 
Thank you. Looks like I need to tell gcc to pass the "--print-map" flag to ld.
 
    gcc -Wl,--print-map main.c -o prog -l:libmonkey.a
Frederick Gotham <cauldwell.thomas@gmail.com>: Nov 30 02:38AM -0800

Scott said:
> An archive (.a) is simply a 'folder' of object files.
 
I thought everyone called these .a files "static libraries".
"Öö Tiib" <ootiib@hot.ee>: Nov 30 02:48AM -0800

On Saturday, 30 November 2019 12:39:00 UTC+2, Frederick Gotham wrote:
> > An archive (.a) is simply a 'folder' of object files.
 
> I thought everyone called.
> these .a files "static libraries".
 
That is a fancy term for some .o (object) files combined into an
.a (archive) file.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Nov 30 05:37PM

On Sat, 2019-11-30, Öö Tiib wrote:
>> these .a files "static libraries".
 
> That is fancy term about some .o (object) files combined into
> .a (archive) file.
 
I don't know about "fancy"; as far as I can tell "static library"
is the term people use for libraries which aren't shared libraries.
 
But they /are/ implemented as archive files on Unix: something similar
to a tar archive containing object files, and often a symbol table for
faster linking.
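 
For instance (using binutils; the library name is just an example):
 
    ar rcs libmonkey.a foo.o bar.o   # create the archive plus its symbol index
    ar t libmonkey.a                 # list the object files it contains
    nm -s libmonkey.a                # show the archive index and symbols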
 
Anyway, the main point upthread was that they aren't called "static
shared libraries".
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
David Brown <david.brown@hesbynett.no>: Nov 30 07:12PM +0100

On 30/11/2019 11:38, Frederick Gotham wrote:
>> An archive (.a) is simply a 'folder' of object files.
 
> I thought everyone called.
> these .a files "static libraries".
 
Yes - but not "static /shared/ libraries". "Shared libraries" generally
means libraries that are shared at run time - ".so" files on Linux,
".dll" files on Windows.
"Öö Tiib" <ootiib@hot.ee>: Nov 30 10:25AM -0800

On Saturday, 30 November 2019 19:37:16 UTC+2, Jorgen Grahn wrote:
> > .a (archive) file.
 
> I don't know about "fancy"; as far as I can tell "static library"
> is the term people use for libraries which aren't shared libraries.
 
Correct.
 
> But they /are/ implemented as archive files on Unix: something similar
> to a tar archive containing object files, and often a symbol table for
> faster linking.
 
So these can be distributed 1) as a folder of source code, or 2) as
that archive file plus a folder of header files, or even 3) as just a
folder of header files (then it is a "header-only statically-linked
library" or just a "header-only library").
 
> Anyway, the main point upthread was that they aren't called "static
> shared libraries".
 
I fully agree with that. I was attempting to add a pedantic point
to the mix, in the sense that calling that .a file alone (IOW without
headers) a "shared library" is becoming a bit extravagant. Sorry for
the confusion if I caused any.
scott@slp53.sl.home (Scott Lurndal): Nov 30 07:00PM

>> An archive (.a) is simply a 'folder' of object files.
 
>I thought everyone called.
> these .a files "static libraries".
 
That's not the term you used. I'm not sure why you felt it necessary
to snip that part of your response.
 
Frederick Gotham <cauldwell.thomas@gmail.com> writes:
 
>I compile this library to produce a static shared library. All of the
>object files are gathered together into an archive (In Linux, this is a
>".a" file).
 
Careful with your terminology. An archive file is not a 'static shared library'.

Digest for comp.programming.threads@googlegroups.com - 14 updates in 14 topics

aminer68@gmail.com: Nov 29 04:50PM -0800

Hello,
 
 
More political philosophy about "Achieving Your Potential"..
 
 
I have just looked at the following video of
Garry Kasparov:
 
Garry Kasparov on "Achieving Your Potential"
 
https://www.youtube.com/watch?v=NPT0vg_Jl8Q
 
 
And i think that, in this video, Garry Kasparov is not so efficient
in his thinking, because, as in psychology or philosophy,
he must show the essence, i mean he must show the primitives
that permit one to "achieve your potential", so i will explain:
 
First you have to know how to take care of your "image" , and
taking care of your image is also an engine that pushes you
forward towards more perfection, but there is also my Rule of:
"More perfection brings satisfaction" that is also an engine
that pushes you forward towards more and more perfection,
since also i said that this satisfaction is double satisfaction
as i explained it to you by saying:
 
"When you are preparing and cooking a beautiful Moroccan couscous and eating it, you will feel doubly satisfied by being satisfaction of being this more perfection of preparing and cooking the beautiful Moroccan couscous and you will also be satisfaction of eating it even if it comes with the "difficulty" of preparing and cooking and of learning how to prepare and to cook a beautiful Moroccan couscous. That's an efficient philosophy. And it is also my spirit."
 
 
But be smart and notice that the result of my Rule that is this satisfaction is also a balance of the individual satisfaction and the satisfaction of the society and the world, so it constrains your image to be in accordance with morality that takes care correctly of the society and the world, so i think that it becomes more clear that those two engines are really powerful too and they permit to achieve your potential.
 
Read my following thoughts of my political philosophy to
understand more:

More political philosophy about political correctness..
 
Now in political philosophy can we ask the question of:
 
Do we have to be political correctness or not ?
 
I think to be smart you have to define political correctness..
 
So then can we in political philosophy ask what is political correctness ?
 
I think that you have to know about morality to be able to answer this question, and you will notice in my logical proof below that i think a correct abstraction of what is morality is: "perfection at best", so why i am choosing such abstraction ? because we have to be perfection in action by also being an effort of being more perfection, so then you are noticing that morality must be pushed forward towards absolute perfection, but perfection at best is also that we can for example say: Perfection is perfection of not helping the weakest members of our society and it is also perfection of helping the weakest members of the society, and we can notice that each side has a weight of importance, since we can not say that helping the weakest members of the society has no importance, because it is also a perfection that has a weight of importance, and logically we have not to neglect it, and as you have noticed that i also said that: "Independently" of Democracy, we can say that my Rule of: "More perfection brings satisfaction" comes with a difficulty or with an effort of being more perfection that brings satisfaction, so then now you are noticing that morality is also this effort of helping others and it is also perfection in action towards absolute perfection that is absolute happiness, so then you are noticing that independently of democracy we can now notice that morality is also this act of balancing between perfection of helping others and perfection of moving forward towards absolute perfection that is absolute happiness, and now you are noticing that when you look at today world you are feeling that our today world is converging fast towards this essence of morality of human beings, and now you are finally noticing that you are able to see that this essence of morality
also dictates to us a kind of political correctness. So read the rest of my previous thoughts of my political philosophy to understand
more:
 
 
 
More about compassion in political philosophy..
 
And what about compassion in political philosophy ?
 
"Independently" of Democracy, we can say that my Rule of: "More perfection brings satisfaction" comes with a difficulty or with an effort of being more perfection that brings satisfaction, and you have noticed that i said the following so that to abstract it:
 
"When you are preparing and cooking a beautiful Moroccan couscous and eating it, you will feel doubly satisfied by being satisfaction of being this more perfection of preparing and cooking the beautiful Moroccan couscous and you will also be satisfaction of eating it even if it comes with the "difficulty" of preparing and cooking and of learning how to prepare and to cook a beautiful Moroccan couscous. That's an efficient philosophy. And it is also my spirit."
 
So then compassion is also "mechanical" that comes from my Rule of: "More perfection brings satisfaction", since perfection is also
helping others, so then a more appropriate morality is to not neglect
helping others and to know how to help others.
 
So you are noticing that i am saying: "since perfection is also helping others"
 
But i have to logically explain the big picture, here is how:
 
Since we can for example say: Perfection is perfection of not helping the weakest members of our society and it is also perfection of helping the weakest members of the society, and we can notice that each side has a weight of importance, since we can not say that helping the weakest members of the society has no importance, because it is also a perfection that has a weight of importance, and logically we have not to neglect it, and as you have noticed that i also said that: "Independently" of Democracy, we can say that my Rule of: "More perfection brings satisfaction" comes with a difficulty or with an effort of being more perfection that brings satisfaction.
 
 
More political philosophy of: What is morality ?
 
Am i crazy by saying that you have to follow my Rule of: "More perfection brings satisfaction" ?
 
No, i am not crazy , since i am like a wise man type of person, so
i will explain more:
 
When i say that you have to follow the Rule of:
 
""More perfection brings satisfaction"
 
 
You have to understand that i have abstracted the definition of morality to: "Perfection at best" (read my proof below of it), and the "at best"
must be defined more, and you will notice in my below writing that
i have defined more what is the "at best" of "perfection at best" so
that you understand that morality is the act of perfectionning at best that is constrained by some constraints, so to not be extremism or radicalism of perfection, such as neo-nazism, that is problematic, you have to read about those constraints in my below writing, so then you are understanding that my Rule of: "More perfection brings satisfaction" is also constrained by the same constraints of morality of today such as under democracy etc., so read all my following writing so that you
understand my political philosophy..
 
 
More political philosophy about life..
 
 
I think that there is many engines that push us forward,
but i don't think that we have to return to racial white nationalism,
because it can bring with it many problems such as hate between different races, so i think that the best engine is to follow my Rule of: "More perfection brings satisfaction..", this is i think the smartest way to follow, so read about it in my following writing to understand more:
 
More political philosophy about morality..
 
 
More about the efficient spirit..
 
 
When you are preparing and cooking a beautiful Moroccan couscous and eating it, you will feel doubly satisfied by being satisfaction of being this more perfection of preparing and cooking the beautiful Moroccan couscous and you will also be satisfaction of eating it even if it comes with the "difficulty" of preparing and cooking and of learning how to prepare and to cook a beautiful Moroccan couscous. That's an efficient philosophy. And it is also my spirit.
 
Read my previous thoughts to understand:
 
More political philosophy about the efficient spirit..
 
I am like a wise man type of person, and now i will speak more
about what is a beautiful spirit..
 
When you look at the beautiful, like a beautiful moon in the night,
you will notice that it is like a satisfaction of looking at the beautiful, but at the same time the moon is also a wild place and
dangerous place, so by logical analogy, if you only work for money , it it is like thinking at the other side of the moon that is dangerous for us and that is a wild place and logically inferring that the world is not beautiful and that this or that thing can look beautiful but our world is not beautiful and that this spirit makes you like hating our world, but saying so and being so is like working only for money and it is a "corruption" of the mind, so you have to change your "perception" and "conception" and to think differently by adhering to my Rule of: "More perfection brings satisfaction", it means that more perfection brings a satisfaction of being more perfection, and this Rule is an engine that pushes you forward towards more and more perfection, and this is also an efficient spirit and it is also my spirit. And please
read my below thoughts of my political philosophy to understand more:
 
 
And now more political philosophy..
 
 
I think it is a beautiful day, what do i mean by beautiful day?,
i am like a wise man type of person and i am able to notice it,
notice that Diversity has brought perfection, but you have to notice more, so look at my following poem of Love to notice:
 
 
==========================
 
You're In My Heart
 
Like the long way from the start
 
That brings the beautiful "insight"
 
You're In My Heart
 
Like the beautiful light of the day and "night"
 
That makes us see in the dark
 
You're In My Heart
 
Because it is far from the evil fight
 
Because it is like the words of the almight
 
You're In My Heart
 
Because we are beautiful from the inside
 
Because our Love is right !
 
So I want to take you in my arms
 
Like peace and love that disarm
 
I want to take you in my arms
 
Since i am here to make it fine
 
I want to take you in my arms
 
Since love is also like a beautiful cup of wine
 
I want to take you in my arms
 
Since the light of my heart is making our love shines
 
I want to take you in my arms
 
Since our destiny is a beautiful Day of Valentine !
 
=======================
 
 
 
Notice in my above poem of Love that i am saying the following:
 
"I want to take you in my arms
Since our destiny is a beautiful Day of Valentine !"
 
When i say: "Our destiny is the beautiful Day Of Valentine"
 
That doesn't mean that our destiny is only one Day Of Valentine,
Because that also symbolically means that our destiny is the "celebration" of the Day of Valentine, that means our destiny will be Love and Romance.
 
And notice in my above poem of Love that i am saying:
 
"You're In My Heart
Like the beautiful light of the day and "night"
That makes us see in the dark"
 
So as you are noticing the day and night means also symbolically
the night of humanity, that means the bad of our humanity, and both
the day and night of humanity have allowed us also to see in the dark,
so you have to understand my poem of Love, this is the basis
of the principle of: the right Diversity gives perfection,
but you have to be more smart, today we are able to adapt
quickly to the bad of our humanity and be much more wisdom
because we are much more aware and much conscious and much more
educated because of the sophistication of education and the sophistication of internet and such, this is why i think that
arab countries will adapt quickly, so i think that we have to be more
optimistic about arabs and about our world.
 
 
Read the rest of my thoughts of my political philosophy to understand more:
 
 
More political philosophy about beautifulness..
 
I am like a wise man type of person, so i am a special person like
a wise man, and i am a gentleman type of person, so i am beautiful
from the inside, since you have also noticed that i wrote many
poems in front of you.. but today i will continue to speak
about an important subject, the one of beautifulness,
so i will begin it by asking the question of:
 
 
Is my rule of: "The effort of being more perfection brings satisfaction"
beautiful or not ?
 
 
So here you will notice that you have to be a wise man to notice
that this rule is so important but it also brings much more beautifulness !
 
Since you can feel it by also noticing the following principle of: the right Diversity brings perfection, and you can notice it in the following video of a beautiful indian song:
 
https://www.youtube.com/watch?v=vnCIjfkPooo
 
Look at how they are being beautiful like beautiful westerners,
since this diversity is becoming perfection ! and this is what is bringing exponential progress and the law of accelerating returns, you can feel more this important principle by my following writing:
 
 
Now i think there is something really important about the essence of humanity, i think that it is "related" to morality, i said that morality is perfection at best (read below my proof of it), and the goal to attain is the goal of life that is to attain absolute perfection or absolute happiness, so morality that is perfection at best is pushed towards absolute perfection or absolute happiness, but we have to do more philosophy to understand better the essence of human evolution, i think that morality of past history has needed more "diversity" to be able for humans to survive and to be more quality, and diversity has given "immensity" or big "quantity", and you can notice it inside the evolution of life, that life has needed a greater number of monkeys and many tests and failures by evolution on them to evolve towards quality and smartness and so that the monkey become human and smartness of human. So as you are noticing "diversity" has given "immensity" or "big" "quantity" and that both diversity and immensity or big quantity have given quality and smartness(read below about the essence of smartness to notice it). This is too how "morality" has evolved, morality has needed diversity and immensity or big quantity so that to be more perfection, this is why morality too of today is needing diversity and immensity or big quantity so that to be perfection, and morality of today knows that perfection of today is also having the right "imperfections" (that are also diversity) to be
able to be the right perfection(read what i wrote about neo-nazism below to notice it).
 
More political philosophy about tolerance..
 
I am like a "wise" man type of person, and i am a gentleman type of person..
 
Today i will do more political philosophy about tolerance,
i will start it by asking the following smart question:
 
What is this Statue of Liberty in USA and what is Liberty ?
 
 
You can be inferiority, and think that Liberty is just like being free from constraints ! but a great philosopher knows Liberty comes with the right constraints
so that to be the right Liberty that is the right morality , so here
again we notice that Liberty too is constrained by morality, because
morality is like the King, and now you are noticing more and more that
a great philosopher says that political philosophy must be a transcendence with a sophisticated thinking ! and since i am like
a wise man, notice with me that the main point of Buddha is the Path of "Serenity" and "Insight", from this "Insight" you get more knowledge of yourself by more "hindsight" and more "introspection" that permits you to guide your more savage
aminer68@gmail.com: Nov 29 03:41PM -0800

Hello,
 
 
I invite you to look at this very interesting video of
the smart Garry Kasparov, i have watched many
of his videos and i think he is a gentleman and i think
he is both really educated and smart, so look at his following
interesting video:
 
Garry Kasparov on "Achieving Your Potential"
 
https://www.youtube.com/watch?v=NPT0vg_Jl8Q
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 03:06PM -0800

Hello,
 
 
I think smartness is so important for our world,
so i invite you to look at the following video of an 11-year-old
boy who has passed the Mensa tests with an IQ of 162:
 
Meet the 11-year old with a higher IQ than Albert Einstein and Stephen Hawking
 
https://www.youtube.com/watch?v=aglfFlFrwgM
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 02:30PM -0800

Hello,
 
 
Look at this interesting video:
 
Garry Kasparov: Chess, Deep Blue, AI, and Putin | Artificial Intelligence (AI) Podcast
 
https://www.youtube.com/watch?v=8RVa0THWUWw
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 01:56PM -0800

Hello,
 
 
More about arabs..
 
I am a white arab, and I think arabs are smart people;
the Babylonians of Iraq were racially arabs, read about them here:
 
3,700-year-old Babylonian tablet rewrites the history of maths - and shows the Greeks did not develop trigonometry
 
Read more here:
 
https://www.telegraph.co.uk/science/2017/08/24/3700-year-old-babylonian-tablet-rewrites-history-maths-could/
 
 
Also read the following about Arabs:
 
 
Research: Arab Inventors Make the U.S. More Innovative
 
It turns out that the U.S. is a major home for Arab inventors. In the five-year period from 2009 to 2013, there were 8,786 U.S. patent applications in our data set that had at least one Arab inventor. Of the total U.S. patent applications, 3.4% had at least one Arab inventor, despite the fact that Arab inventors represent only 0.3% of the total population.
 
Read more here:
 
https://hbr.org/2017/02/arab-inventors-make-the-u-s-more-innovative
 
 
Even Steve Jobs, the founder of Apple, had a Syrian immigrant father called Abdul Fattah Jandali.
 
 
Read more here about it:
 
https://www.macworld.co.uk/feature/apple/who-is-steve-jobs-syrian-immigrant-father-abdul-fattah-jandali-3624958/
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 01:52PM -0800

Hello,
 
 
Look at my "sportive" spirit..
 
 
About my scalable algorithms inventions..
 
 
I am a white arab, and i am a gentleman type of person,
and i think that you know me too by my poetry that i wrote
in front of you and that i posted here, but i am
also a more serious computer developer, and i am also
an inventor who has invented many scalable algorithms, read about
them on my writing below:
 
 
Here is my last scalable algorithm invention, read
what i have just responded in comp.programming.threads:
 
About my LRU scalable algorithm..
 
On 10/16/2019 7:48 AM, Bonita Montero on comp.programming.threads wrote:
> in locked mode in very rare cases. And as I said inserting and
> flushing is conventional locked access.
> So the quest is for you: Can you guess what I did?
 
 
And here is what i have just responded:
 
 
I think i am also smart, so i have just quickly found a solution that is scalable and that is not your solution, so it needs my hashtable that is scalable and it needs my fully scalable FIFO queue that i have invented. And i think i will not patent it. But my solution is not Lockfree, it uses locks like in a Lock striping manner and it is scalable.
 
 
And read about my other scalable algorithms inventions on my writing below:
 
 
About the buffer overflow problem..
 
I wrote yesterday about buffer overflow in Delphi and Freepascal..
 
I think there is a "higher" abstraction in Delphi and Freepascal
that does the job very well of avoiding buffer overflow, and it is
the TMemoryStream class, since it behaves also like a pointer
and it supports reallocmem() and freemem() on the pointer but
with a higher level abstraction. Look for example at my
following example in Delphi and Freepascal; you will notice
that, contrary to raw pointers, the memory stream grows as needed with writebuffer() without the need of reserving the memory in advance, and this is why it avoids the buffer overflow problem. Read the following example to notice how i am using it with a PAnsiChar type:
 
========================================
 
 
Program test;
 
uses system.classes, system.sysutils;
 
var
  P: PAnsiChar;
  mem: TMemoryStream;   // this declaration was missing from the original listing
 
Begin
 
  P := 'Amine';
 
  mem := TMemoryStream.Create;
 
  mem.Position := 0;
 
  // Write the 5 characters plus the terminating #0; the stream grows by itself
  mem.WriteBuffer(Pointer(P)^, 6);
 
  mem.Position := 0;
 
  Writeln(PAnsiChar(mem.Memory));
 
  mem.Free;
 
end.
 
 
===================================
 
 
And since Delphi and Freepascal also detect buffer overflows on dynamic
arrays, i think that Delphi and Freepascal are powerful tools.
 
 
Read my previous thoughts below to understand more:
 
 
And I have just read the following webpage about "Fearless Security: Memory safety":
 
https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/
 
Here are the memory safety problems:
 
1- Misusing Free (use-after-free, double free)
 
I have solved this in Delphi and Freepascal by inventing a "Scalable" reference counting with efficient support for weak references. Read below about it.
 
 
2- Uninitialized variables
 
This can be detected by the compilers of Delphi and Freepascal.
 
 
3- Dereferencing Null pointers
 
I have solved this in Delphi and Freepascal by inventing a "Scalable" reference counting with efficient support for weak references. Read below about it.
 
4- Buffer overflow and underflow
 
This has been solved in Delphi by using madExcept, read here about it:
 
http://help.madshi.net/DebugMm.htm
 
You can buy it from here:
 
http://www.madshi.net/
 
 
And about race conditions and deadlocks problems and more, read my following thoughts to understand:
 
 
I will reformulate more smartly the point about race condition detection in Rust, so read it carefully:
 
You can think of the borrow checker of Rust as a validator for a locking system: immutable references are shared read locks and mutable references are exclusive write locks. Under this mental model, accessing data via two independent write locks is not a safe thing to do, and modifying data via a write lock while there are readers alive is not safe either.
 
So as you are noticing, the "mutable" references in Rust follow the Read-Write Lock pattern, and this is not good, because it does not allow the more fine-grained parallelism that permits us to run the writes in "parallel" and gain more performance from parallelizing the writes.
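 
A rough C++ sketch of that mental model (this illustrates the analogy
only, not Rust itself; the class name is invented): shared locks play
the role of the immutable references, and the exclusive lock plays the
role of the single mutable reference:
 
    #include <shared_mutex>
    #include <string>
 
    class Guarded {
        mutable std::shared_mutex m_;
        std::string data_;
    public:
        std::string read() const {
            std::shared_lock lock(m_);   // many concurrent "immutable borrows"
            return data_;
        }
        void write(std::string s) {
            std::unique_lock lock(m_);   // one exclusive "mutable borrow"
            data_ = std::move(s);
        }
    };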
 
 
Read more about Rust and Delphi and my inventions..
 
I think the spirit of Rust is like the spirit of ADA, they are especially designed for the very high standards of safety, like those of ADA, "but" i don't think we have to fear race conditions that Rust solve, because i think that race conditions are not so difficult to avoid when you are a decent knowledgeable programmer in parallel programming, so you have to understand what i mean, now we have to talk about the rest of the safety guaranties of Rust, there remain the problem of Deadlocks, and i think that Rust is not solving this problem, but i have provided you with an enhanced DelphiConcurrent library for Delphi and Freepascal that detects deadlocks, and there is also the Memory Safety guaranties of Rust, here they are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
But notice that I have solved the number 1 and number 2 by inventing my
scalable reference counting with efficient support for weak references
for Delphi and Freepascal, read below to notice it, and for number 3 read my following thoughts to understand:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
 
I am understanding this video, but i have to explain my work:
 
I am not like this techlead in the video above, because i am also an "inventor" that has invented many scalable algorithms and their implementations, and i am also inventing effective abstractions; i give you an example:
 
Read the following of the senior research scientist that is called Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you are noticing he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics, this is why i have just invented a new Fast Mutex that is adaptive and that is much much better and i think mine is the "best", and i think you will not find it anywhere, my new Fast Mutex has the following characteristics:
 
1- Starvation-free
2- Good fairness
3- It keeps efficiently and very low the cache coherence traffic
4- Very good fast path performance (it has the same performance as the
scalable MCS lock when there is contention.)
5- And it has a decent preemption tolerance.
 
 
this is how i am an "inventor", and i have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, and i have invented a fully scalable Threadpool, and i have also invented a Fully scalable FIFO queue, and i have also invented other scalable algorithms and their implementations, and i think i will sell some of them to Microsoft or to
Google or Embarcadero or such software companies.
 
 
Read my following writing to know me more:
 
More about computing and parallel computing..
 
The important guaranties of Memory Safety in Rust are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
I think i have solved Null Pointer Dereferences and also solved Dangling Pointers and also solved memory leaks for Delphi and Freepascal by inventing my "scalable" reference counting with efficient support for weak references and i have implemented it in Delphi and Freepascal (Read about it below), and reference counting in Rust and C++ is "not" scalable.
 
About the (3) above that is Buffer Overruns, read here about Delphi and Freepascal:
 
What's a buffer overflow and how to avoid it in Delphi?
 
read my above thoughts about it.
 
 
About Deadlock and Race conditions in Delphi and Freepascal:
 
I have ported DelphiConcurrent to Freepascal, and i have
also extended them with the support of my scalable RWLocks for Windows and Linux and with the support of my scalable lock called MLock for Windows and Linux and i have also added the support for a Mutex for Windows and Linux, please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files inside the zip file to understand more.
 
You can download DelphiConcurrent and FreepascalConcurrent for Delphi and Freepascal from:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
DelphiConcurrent and FreepascalConcurrent by Moualek Adlene is a new way to build Delphi applications which involve parallel executed code based on threads like application servers. DelphiConcurrent provides to the programmers the internal mechanisms to write safer multi-thread code while taking a special care of performance and genericity.
 
In concurrent applications a DEADLOCK may occur when two threads or more try to lock two consecutive shared resources or more but in a different order. With DelphiConcurrent and FreepascalConcurrent, a DEADLOCK is detected and automatically skipped - before it occurs - and the programmer has an explicit exception describing the multi-thread problem instead of a blocking DEADLOCK which freezes the application with no output log (and perhaps also the linked client sessions if we talk about an application server).
 
Amine Moulay Ramdane has extended them with the support of his scalable RWLocks for Windows and Linux and with the support of his scalable lock called MLock for Windows and Linux and he has also added the support for a Mutex for Windows and Linux, please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files to
understand more.
 
And please read the html file inside to learn more how to use it.
 
 
About race conditions now:
 
My scalable Adder is here..
 
As you have noticed i have just posted previously my modified versions of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in parallel programs.
 
But i have just read the following about how to avoid race conditions in Parallel programming in most cases..
 
Here it is:
 
https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/
 
This is why i have invented my following powerful scalable Adder to help you do the same as the above, please take a look at its source code to understand more, here it is:
 
https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal
 
Other than that, about composability of lock-based systems now:
 
Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:
 
"Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.
 
There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized.
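 
A minimal C++ sketch of the first approach (the class is invented for
illustration): every public operation takes and releases the internal
lock, and control never returns to the caller with the lock held, so
the component composes with whatever locking its callers do:
 
    #include <mutex>
    #include <string>
    #include <unordered_map>
 
    class Registry {
        std::mutex m_;
        std::unordered_map<std::string, int> table_;
    public:
        void put(const std::string& key, int value) {
            std::lock_guard<std::mutex> lock(m_);   // released before returning
            table_[key] = value;
        }
        bool get(const std::string& key, int& out) {
            std::lock_guard<std::mutex> lock(m_);
            auto it = table_.find(key);
            if (it == table_.end()) return false;
            out = it->second;
            return true;
        }
    };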
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
 
And about Message Passing Process Communication Model and Shared Memory Process Communication Model:
 
An advantage of shared memory model is that memory communication is faster as compared to the message passing model on the same machine.
 
Read the following to notice it:
 
Why did Windows NT move away from the microkernel?
 
"The main reason that Windows NT became a hybrid kernel is speed. A microkernel-based system puts only the bare minimum system components in the kernel and runs the rest of them as user mode processes, known as servers. A form of inter-process communication (IPC), usually message passing, is used for communication between servers and the kernel.
 
Microkernel-based systems are more stable than others; if a server crashes, it can be restarted without affecting the entire system, which couldn't be done if every system component was part of the kernel. However, because of the overhead incurred by IPC and context-switching, microkernels are slower than traditional kernels. Due to the performance costs of a microkernel, Microsoft decided to keep the structure of a microkernel, but run the system components in kernel space. Starting in Windows Vista, some drivers are also run in user mode."
 
 
More about message passing..
 
An advantage of shared memory model is that memory communication is faster as compared to the message passing model on the same machine.
 
Read the following to notice it:
 
"One problem that plagues microkernel implementations is relatively poor performance. The message-passing layer that connects
different operating system components introduces an extra layer of
machine instructions. The machine instruction overhead introduced
by the message-passing subsystem manifests itself as additional
execution time. In a monolithic system, if a kernel component needs
to talk to another component, it can make direct function calls
instead of going through a third party."
 
However, shared memory model may create problems such as synchronization and memory protection that need to be addressed.
 
Message passing's major flaw is the inversion of control–it is a moral equivalent of gotos in un-structured programming (it's about time somebody said that message passing is considered harmful).
 
Also some research shows that the total effort to write an MPI application is significantly higher than that required to write a shared-memory version of it.
 
And more about my scalable reference counting with efficient support for weak references:
 
My invention that is my scalable reference counting with efficient support for weak references version 1.37 is here..
 
Here i am again, i have just updated my scalable reference counting with
efficient support for weak references to version 1.37, I have just added a TAMInterfacedPersistent that is a scalable reference counted version,
and now i think i have just made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
aminer68@gmail.com: Nov 29 01:32PM -0800

Hello,
 
 
More about my way of doing..
 
As you have noticed i am a white arab; i have lived in Quebec, Canada
since 1989.
 
Now if you ask me how i am making "money" so that to be able to live..
 
You have to understand my way of doing, I have gotten my Diploma in
Microelectronics and informatics in 1988, it is not a college level
diploma, my Diploma is a university level Diploma, it looks like an
Associate degree or the french DEUG.
 
Read here about the Associate degree:
 
https://en.wikipedia.org/wiki/Associate_degree
 
And after i had gotten my Diploma, I also completed one year of
pure "mathematics" at the university level.
 
So i have studied and succeeded 3 years at the university level..
 
Now after that i have come to Canada in year 1989 and i have
started to study more software computing and to study network
administration in Quebec Canada, and after that i have started to work
as a network administrator for many years, after that around years 2001
and 2002 i have started to implement some of my softwares like PerlZip
that looked like PkZip of PKware software company, but i have
implemented it for Perl , and i have implemented the Dynamic Link
Libraries of my PerlZip that permits to compress and decompress etc.
with the "Delphi" compiler, so my PerlZip software product was very fast
and very efficient, in year 2002 i have posted the Beta version on
internet, and as a proof , please read about it here:
 
http://computer-programming-forum.com/52-perl-modules/ea157f4a229fc720.htm
 
And after that i have sold the release version of my PerlZip
product to many many companies and to many individuals around the world,
and i have even sold it to many Banks in Europe, and with that i have
made more money.
 
And after that i have started to work like a software developer
consultant, the name of my company was and is CyberNT Communications,
here it is:
 
Here is my company in Quebec(Canada) called CyberNT Communications,
i have worked as a software developer and as a network administrator,
read the proof here:
 
https://opencorporates.com/companies/ca_qc/2246777231
 
 
Also read the following part of a somewhat old book of O'Reilly called Perl for System Administration by David N. Blank-Edelman, and you will notice that it contains my name and it speaks about some of my Perl modules:
 
https://www.oreilly.com/library/view/perl-for-system/1565926099/ch04s04.html
 
 
 
And here is one of my new software projects, my powerful Parallel Compression Library, which was updated to version 4.4.
 
You can download it from:
 
https://sites.google.com/site/scalable68/parallel-compression-library
 
 
And read more about it below:
 
 
Author: Amine Moulay Ramdane
 
Description:
 
Parallel Compression Library implements Parallel LZ4 , Parallel LZMA , and Parallel Zstd algorithms using my Thread Pool Engine.
 
- It supports memory streams, file streams and files
 
- 64 bit support - lets you create archive files over 4 GB; supports archives up to 2^63 bytes, compresses and decompresses files up to 2^63 bytes.
 
- Parallel compression and parallel decompression are extremely fast
 
- Now it supports processor groups on windows, so that it can use more than 64 logical processors and it scales well.
 
- It's NUMA-aware and NUMA efficient on windows (it parallelizes the reads and writes on NUMA nodes)
 
- It minimizes efficiently the contention so that it scales well.
 
- It supports both compression and decompression rate indicator
 
- You can test the integrity of your compressed file or stream
 
- It is thread-safe, that means that the methods can be called from multiple threads
 
- Easy programming interface
 
- Full source codes available.
 
Now my Parallel compression library is optimized for NUMA (it parallelizes the reads and writes on NUMA nodes) and it supports processor groups on windows and it uses only two threads that do the IO (and they are not contending) so that it reduces at best the contention, so that it scales well. Also now the process of calculating the CRC is much more optimized and is fast, and the process of testing the integrity is fast.
 
I have done a quick calculation of the scalability prediction for my Parallel Compression Library, and i think it's good: it can scale beyond 100X on NUMA systems.
 
The Dynamic Link Libraries for Windows and Dynamic shared libraries for Linux of the compression and decompression algorithms of my Parallel Compression Library and for my Parallel archiver were compiled from C with the optimization level 2 enabled, so they are very fast.
 
Here are the parameters of the constructor:
 
First parameter is: the number of cores you specify to run the compression algorithm in parallel.
 
Second parameter is: a boolean parameter, processorgroups, to support processor groups on windows; if it is set to true it will enable you to scale beyond 64 logical processors and it will be NUMA efficient.
 
Just look at the Easy Compression Library, for example; as you may have noticed, it is not a parallel compression library:
 
http://www.componentace.com/ecl_features.htm
 
And look at its pricing:
 
http://www.componentace.com/order/order_product.php?id=4
 
My parallel compression library costs you $0, and it is a parallel compression library.
 
My Parallel Compression Library was updated: i have ported the Parallel LZ4 compression algorithm (one of the fastest in the world) to 64-bit Windows, so the Parallel LZ4 compression algorithm now works perfectly on Windows 32-bit and 64-bit. If you want to use the Windows 64-bit Parallel LZ4, just copy the lz4_2.dll inside the LZ4_64 directory (that you find inside the zip file) to your current directory or to the c:\windows\SysWow64 directory; if you want to use the Windows 32-bit Parallel LZ4, use the lz4_2.dll inside the LZ4_32 directory.
 
If you want to use the Windows 64-bit Parallel LZMA on Windows 64-bit, just copy LZMAStream1.dll from the LZMA_fpc64 directory and LZMAStream2.dll from the LZMA_dcc64 directory to your current directory or to the c:\windows\SysWow64 directory; if you want to use the Windows 32-bit Parallel LZMA, copy LZMAStream1.dll from the LZMA_fpc32 directory and LZMAStream2.dll from the LZMA_dcc32 directory to your current directory or to the c:\windows\system32 directory.
 
Operating systems: Windows, Linux (x86)
 
Language: FPC Pascal v2.2.0+ / Delphi 7+: http://www.freepascal.org/
 
Required FPC switches: -O3 -Sd
 
-Sd for delphi mode....
 
Required Delphi switches: -$H+ -DDelphi32
 
Required Delphi XE-XE5 switches: -DXE
 
{$DEFINE CPU32} and {$DEFINE Windows32} for 32 bit systems
 
{$DEFINE CPU64} and {$DEFINE Windows64} for 64 bit systems
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 01:16PM -0800

Hello,
 
 
 
Read the following webpage:
 
Concurrency and Parallelism: Understanding I/O
 
https://blog.risingstack.com/concurrency-and-parallelism-understanding-i-o/
 
 
So you have to know that my Parallel Compression Library and my
Parallel Archiver are very efficient in I/O.
 
 
 
You can read more about my Parallel Compression Library and download it from my website here:
 
https://sites.google.com/site/scalable68/parallel-compression-library
 
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 11:17AM -0800

Hello,
 
 
Look at this interesting video:
 
Are Highly Creative People More Intelligent?
 
https://www.youtube.com/watch?v=syFTN2OOA38
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 08:36AM -0800

Hello,
 
 
After AI, Fashion and Shopping Will Never Be the Same
 
Read more here:
 
https://singularityhub.com/2019/11/29/after-ai-fashion-and-shopping-will-never-be-the-same/
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 07:38AM -0800

Hello,
 
 
My GUI components are here..
 
Why am i implementing GUI components, and what is the process by which
you organize them into more interesting and interactive GUI applications?
 
You can still use Qt GUI libraries or the like, but what
i am doing is teaching the Delphi and FreePascal community
how to design and implement the following GUI components
with the simple primitives of Wingraph, which is like the graph
unit of Turbo Pascal:
 
- Button
- Label
- TextBox
- MessageBox
- Memo
- Panel
- RadioButton
- CheckBox
- ProgressBar
- TreeMenu
- Calendar
 
You have to know that "using" GUI libraries is much easier,
but being able to understand the "inside" of how to implement
sophisticated GUI components from simple graphical primitives
is better and is good to know.
 
About my next TreeMenu GUI component: i think i will soon and easily implement a TreeMenu component that is like a TreeView but is built from two of my Winmenus and from my StringTree; i think you will appreciate it, because it will be powerful, since it will use a much enhanced version of my Winmenus.
 
More explanation of the GUI components..
 
Now the following GUI components are supported:
 
- Button
- Label
- TextBox
- MessageBox
- CheckBox
- Winmenus
 
 
I will soon add a more powerful GUI component called TreeMenu;
it will look like the GUI of a file manager, but you will
be able to click on the callbacks, to search, etc.
 
Also i will soon provide you with a calendar GUI component and with a checkbox component.
 
I have corrected a small bug in the event loop, and now i think everything is working correctly and my units are stable. Here are the units included inside the zip file of my new Winmenus version 1.23: my Graph3D unit for 3D graphics, which looks like the graph unit of Turbo Pascal; my enhanced Winmenus GUI component using Wingraph; the GUI unit that contains the other GUI components; and of course Wingraph itself, which looks like the graph unit of Turbo Pascal but is for Delphi and FreePascal.
 
About my software project..
 
As you have noticed i have extended the GUI components using Wingraph, now i have added a checkbox GUI component, now here is the GUI components that are supported:
 
- Button
- Label
- TextBox
- MessageBox
- CheckBox
- Winmenus
 
 
But i think that the Memo GUI component can be "emulated" with my Winmenus GUI component, so i think that my Winmenus GUI component is powerful and complete now. I will soon add a more powerful GUI component called TreeMenu that will look like the GUI of a file manager, but you will be able to click on the callbacks, to search, etc.; it will be implemented with two of my Winmenus and with my powerful StringTree here:
 
https://sites.google.com/site/scalable68/stringtree
 
 
And soon i will also implement a Calendar GUI component and
a ProgressBar GUI component.
 
But you have to understand me: i am implementing those
GUI components so that you will be able to design and implement
more interactive GUI applications for graphical programs
with Wingraph and the like.
 
 
You can download Winmenus using Wingraph, which contains all the units
above that i have implemented, from:
 
https://sites.google.com/site/scalable68/winmenus-using-wingraph
 
 
I have also implemented the text mode WinMenus, here it is:
 
 
Description
 
Drop-Down Menu Widget using the Object Pascal CRT unit
 
Please look at the test.pas example inside the zip file.
 
Keyboard controls:
 
- 'Delete' deletes the items
- 'Insert' inserts items
- 'Up', 'Down', 'PageUp' and 'PageDown' scroll
- 'Tab' switches between the drop-down menus
- 'Enter' selects an item
- 'Esc' exits
- 'F1' deletes all the items from the list
- right arrow and left arrow scroll to the left or to the right
 
You can search with the SearchName() and NextSearch() methods, and now the search with wildcards inside the widget is working perfectly.
 
Winmenus is event driven; let me explain it all so that you understand it better...
 
At first you have to create your Widget menu by executing something like this:
 
Menu1:=TMenu.create(5,5);
 
This will create a Widget menu at the coordinate (x,y) = (5,5)
 
After that you have to set your callbacks, because my Winmenus is event driven; you do it like this:
 
Menu1.SetCallbacks(insert,updown);
 
The SetCallbacks() method will set your callbacks: the first parameter is the callback that will be executed when the Insert key is pressed (here the insert() function), and the second is the callback that will be called when the up and down keys are pressed (here the updown function); the remaining callbacks that you can assign are for the following keys: Delete and F1 to F12.
 
After that you can add your items and the callbacks to the Menu by calling the AddItem() method like this:
 
Menu1.AddItem(inttostr(i),test1);
 
test1 is a callback that you add with the AddItem() method.
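 
Putting the calls above together, a minimal setup might look like the sketch below; the callback shape used here (plain parameterless procedures) and the uses clause are my assumptions, so check the test.pas example inside the zip for the exact prototypes expected by SetCallbacks() and AddItem():
 
===
{ the unit name for the uses clause is whatever the zip file provides; see test.pas }
 
procedure insert;
begin
  writeln('Insert key pressed');   { assumed callback shape }
end;
 
procedure updown;
begin
  writeln('Up/Down key pressed');  { assumed callback shape }
end;
 
procedure test1;
begin
  writeln('Item selected');        { assumed callback shape }
end;
 
var
  Menu1: TMenu;
begin
  Menu1 := TMenu.create(5,5);          { widget menu at coordinate (x,y) = (5,5) }
  Menu1.SetCallbacks(insert, updown);  { Insert key callback, Up/Down keys callback }
  Menu1.AddItem('Item 1', test1);      { add an item together with its callback }
  { ... the event loop shown below goes here ... }
  Menu1.free;
end.
===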
 
After that you will enter a loop; the template of this loop must look like the following, which is
not difficult to understand:
 
Here it is:
 
===
repeat
 
  textbackground(blue);
  clrscr;
 
  { draw both menus without waiting for input or events }
  menu2.execute(false);
  menu1.execute(false);
 
  { the menu whose turn it is waits for input and events }
  case i mod 2 of
    1: ret := Menu1.Execute(true);
    0: ret := Menu2.Execute(true);
  end;
 
  { Tab switches the focus to the other menu }
  if ret = ctTab then inc(i);
 
until ret = ctExit;  { Escape exits the loop }
 
menu1.free;
menu2.free;
 
end.
===
 
When you execute menu1.execute(false), with the parameter equal to false, my Winmenus widget will draw your menu without waiting for your input and events. When you set the parameter of the execute() method to true, it will wait for your input and events. If the parameter of the execute() method is true and the returned value is ctTab, that means you have pressed the Tab key; if the returned value is ctExit, that means you have pressed the Escape key to exit.
 
 
You can download my text mode Winmenus from:
 
https://sites.google.com/site/scalable68/winmenus
 
 
And my units work with Delphi, FreePascal and C++Builder.
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 07:17AM -0800

Hello,
 
 
My new PERT++ and JNI Wrapper are here..
 
I am also posting this time about the JNI Wrapper for Delphi and FreePascal that i have enhanced much more; i think it is stable and complete, and you will notice that the JNI Wrapper now automatically configures itself to support new versions of Oracle Java.
 
Please download the new JNI Wrapper from:
 
https://sites.google.com/site/scalable68/jni-wrapper-for-delphi-and-freepascal
 
 
And now here is my new PERT++:
 
 
PERT++ (An enhanced edition of the program or project evaluation and review technique that includes Statistical PERT) in Delphi and FreePascal
 
Version: 1.38
 
Authors: Amine Moulay Ramdane (who implemented PERT), Robert Sedgewick, Kevin Wayne.
 
Email: aminer68@gmail.com
 
Description:
 
This program (or project) evaluation and review technique includes Statistical PERT; it is a statistical tool used in project management that was designed to analyze and represent the tasks involved in completing a given project.
 
PERT++ also permits you to calculate:
 
- The longest path of planned activities to the end of the project
 
- The earliest and latest that each activity can start and finish without making the project longer
 
- The "critical" activities (the ones on the longest path)
 
- How to prioritize activities for effective management and how to shorten the planned critical path of a project by:
 
  - Pruning critical path activities
 
  - "Fast tracking" (performing more activities in parallel)
 
  - "Crashing the critical path" (shortening the durations of critical path activities by adding resources)
 
- And it permits giving the risk for each output PERT formula
 
PERT is a method of analyzing the tasks involved in completing a given project, especially the time needed to complete each task, and to identify the minimum time needed to complete the total project. It incorporates uncertainty by making it possible to schedule a project while not knowing precisely the details and durations of all the activities. It is more of an event-oriented technique rather than start- and completion-oriented, and is used more in projects where time is the major factor rather than cost. It is applied to very large-scale, one-time, complex, non-routine infrastructure and Research and Development projects.
 
PERT and CPM are complementary tools, because CPM employs one time estimate and one cost estimate for each activity; PERT may utilize three time estimates (optimistic, most likely, and pessimistic) and no costs for each activity. Although these are distinct differences, the term PERT is applied increasingly to all critical path scheduling. This PERT library uses a CPM algorithm that uses Topological sorting to render CPM a linear-time algorithm for finding the critical path of the project, so it's fast.
 
You need a Java compiler; you first have to compile the Java libraries with the batch file compile.bat, and after that compile the Delphi and FreePascal test1.pas program.
 
Here is the procedure to call for PERT:
 
procedure solvePERT(filename:string;var info:TCPMInfo;var finishTime:system.double;var criticalPathStdDeviation:system.double);
 
The arguments are:
 
The filename: is the file to pass; it is organized as follows:
 
The first line is the number of jobs; each of the remaining lines contains the three time estimates for the job (optimistic, expected, and pessimistic), then the number of precedence constraints, and then the precedence constraints themselves, which specify that this job has to be completed before certain other jobs can begin.
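 
As a purely illustrative example of that layout (the numbers are made up, and i am assuming here that the jobs are numbered starting from 0; compare with the sample file shipped inside the zip), a file for three jobs could look like this:
 
===
3
2.0 4.0 6.0 2 1 2
1.0 2.0 3.0 1 2
3.0 5.0 9.0 0
===
 
The first line says there are 3 jobs; the next line says that job 0 has the estimates 2.0/4.0/6.0 and has 2 precedence constraints (it must be completed before jobs 1 and 2); job 1 must be completed before job 2; and job 2 has no precedence constraints.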
 
info: is the returned information; you can get the job number and the start and finish times of the job in info[i].job, info[i].start and info[i].finish; please look at the test.pas example to understand.
 
finishTime: is the finish time.
 
criticalPathStdDeviation: is the critical path standard deviation.
 
I have also provided you with three other functions, here they are:
 
function NormalDistA (const Mean, StdDev, AVal, BVal: Extended): Single;
 
function NormalDistP (const Mean, StdDev, AVal: Extended): Single;
 
function InvNormalDist(const Mean, StdDev, PVal: Extended; const Less: Boolean): Extended;
 
For NormalDistP(), you pass the best estimate of completion time to Mean and the critical path standard deviation to StdDev, and you get the probability of the value AVal; for NormalDistA(), you get the probability between the values AVal and BVal.
 
For InvNormalDist(), you pass the best estimate of completion time to Mean and the critical path standard deviation to StdDev, and you get the length of the critical path for the probability PVal; when Less is TRUE, you obtain a cumulative distribution.
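 
Here is a minimal sketch of how these pieces might fit together (i am assuming that the declarations above are in scope; the file name 'project.txt', the example deadline of 30 time units and the output formatting are only illustrative, and the exact unit names for the uses clause are in the test.pas example):
 
===
var
  info: TCPMInfo;
  finishTime, criticalPathStdDeviation: system.double;
  probOnTime: Single;
begin
  { solve the PERT network described in the input file }
  solvePERT('project.txt', info, finishTime, criticalPathStdDeviation);
 
  writeln('Best estimate of completion time: ', finishTime:0:2);
  writeln('Critical path standard deviation: ', criticalPathStdDeviation:0:2);
 
  { probability of finishing within 30 time units (illustrative deadline) }
  probOnTime := NormalDistP(finishTime, criticalPathStdDeviation, 30.0);
  writeln('Probability of finishing by 30: ', probOnTime:0:3);
end.
===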
 
I have also included 32-bit and 64-bit Windows executables called PERT32.exe and PERT64.exe (which take the file, with the file format that i specified above, as an argument) inside the zip; it is a very powerful tool, and you need to compile CPM.java with compile.bat before running them.
 
I have also included 32-bit and 64-bit Windows executables called CPM32.exe and CPM64.exe (which take the file, with the file format that i specified in the Readme.CPM file, as an argument) inside the zip; they run the CPM solver that you use with the Statistical PERT that i have included inside the zip file, and you need to compile CPM.java with compile.bat before running them.
 
The very important things to know about PERT are these:
 
1- PERT works best in projects where previous experience can be relied on to accurately make predictions.
 
2- To avoid underestimating the project completion time, especially if delays cause the critical path to shift around, you have to rely on point number 1 above and/or apply management time and resources to make sure that the optimistic, most likely and pessimistic time estimates of the activities are accurate.
 
Also, the PERT++ zip file includes the powerful Statistical PERT. Statistical PERT is inside the Statistical_PERT_Beta_1.0.xlsx Microsoft Excel workbook; you can use LibreOffice or Microsoft Office to run it, and after that you pass the output data of Statistical PERT to the CPM library. Please read the Readme.CPM to learn how to use the CPM library, and please read and learn about Statistical PERT on the internet.
 
Please read about Statistical PERT here:
 
http://www.statisticalpert.com/What_is_Statistical_PERT.pdf
 
 
Have fun with it !
 
Language: FPC Pascal v2.2.0+ / Delphi 7+: http://www.freepascal.org/
 
Operating Systems: Windows
 
Required FPC switches: -O3 -Sd
 
-Sd for delphi mode....
 
Required Delphi switches: -$H+ -DDelphi
 
For Delphi XE-XE7 and Delphi tokyo use the -DXE switch
 
You can download my PERT++ from:
 
https://sites.google.com/site/scalable68/pert-an-enhanced-edition-of-the-program-or-project-evaluation-and-review-technique-that-includes-statistical-pert-in-delphi-and-freepascal
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 07:06AM -0800

Hello,
 
 
My Object oriented Stackful coroutines library for Delphi and FreePascal was updated to version 2.11
 
Author: Amine Moulay Ramdane
 
Description:
 
This is an object-oriented library that provides stackful coroutines for FreePascal and Delphi; it comes with a semaphore and a mutex,
and it works on both Windows and Linux.
 
Threads alone are expensive on Windows, because each thread can take one megabyte of stack and the thread context
switch is expensive, so you can use this library with my Thread Pool Engine, which scales well, to be able to serve a great number of internet connections or TCP/IP socket connections; and of course you can do many other things with it, such as simulations, etc.
 
Here are the use cases of this object-oriented coroutines library:
 
- Context switching is expensive with threads, with coroutines it is
really fast.
 
- The mutex and semaphore of my coroutines library are much, much faster than the mutex and semaphore used with processes and threads.
 
Main features:
 
Very small RAM overhead; provides a semaphore and a mutex.
 
Note: I have not saved the XMM registers on 64 bit compilers in my assembler routines, so with fastcall calling convention you have to pass the floating-point arguments by reference or use dynamic memory.
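 
A tiny illustration of that note (this is plain Pascal, not part of the library's API): on a 64-bit target, prefer passing a floating-point value by reference (a var parameter) or through dynamically allocated memory rather than by value:
 
===
procedure ScaleByValue(factor: double);    { floating-point passed by value: avoid here }
begin
  writeln(factor * 2.0:0:2);
end;
 
procedure ScaleByRef(var factor: double);  { passed by reference: only a pointer crosses the call }
begin
  writeln(factor * 2.0:0:2);
end;
===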
 
I have come to an interesting subject...
 
Is my Object coroutines library still useful ?
 
Can threads do the same job as coroutines ?
 
I think that coroutines are still useful, because a lock or a cache-line transfer is expensive for threads running on multicores: a cache-line transfer between cores is around 800 CPU cycles on x86, which is much too expensive compared to coroutines, so the semaphore and mutex of my coroutines library, and the contention on them, are much less expensive. Other than that, coroutines can yield to other functions or procedures from inside a function or procedure, so i think that coroutines are still useful.
 
 
You can download it from:
 
https://sites.google.com/site/scalable68/object-oriented-stackful-coroutines-library-for-delphi-and-freepascal
 
 
Look at the defines.inc include file, you can configure it like this:
 
{$DEFINE XE} for Delphi XE compilers and Delphi tokyo.
 
{$DEFINE FPC} for FreePascal.
 
{$DEFINE Delphi} for Delphi 7 to Delphi 2007 compilers.
 
{$DEFINE CPU32} for 32 bit systems
 
{$DEFINE CPU64} for 64 bit systems
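 
For example, a defines.inc for a 64-bit FreePascal build would simply combine one compiler define with one bitness define from the list above:
 
===
{$DEFINE FPC}
{$DEFINE CPU64}
===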
 
Language: FPC Pascal v2.2.0+ / Delphi5+:
http://www.freepascal.org/
 
Required FPC switches: -O3 -Sd
 
-Sd for delphi mode....
 
Required Delphi switches: -DMSWINDOWS -$H+ -DDelphi
 
Required Delphi XE-XE7 switch: -$H+ -DXE
 
For Delphi use -DDelphi
 
Operating Systems: Win, Mac OS X and Linux on (x86).
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Nov 29 06:40AM -0800

Hello,
 
 
My Parallel C++ Conjugate Gradient Linear System Solver Library, which scales very well, was updated to version 1.76 and is here..
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a parallel implementation of a Conjugate Gradient dense linear system solver that is NUMA-aware and cache-aware and scales very well, and it also contains a parallel implementation of a Conjugate Gradient sparse linear system solver that is cache-aware and scales very well.
 
Sparse linear system solvers are ubiquitous in high performance computing (HPC) and are often the most computationally intensive parts of scientific computing codes. A few of the many applications relying on sparse linear solvers include fusion energy simulation, space weather simulation, climate modeling, environmental modeling, the finite element method, and large-scale reservoir simulations used by the oil and gas industry to enhance oil recovery.
 
Conjugate Gradient is known to converge to the exact solution in n steps for a matrix of size n, and was historically first seen as a direct method because of this. However, after a while people figured out that it works really well if you just stop the iteration much earlier - often you will get a very good approximation after much fewer than n steps. In fact, we can analyze how fast Conjugate gradient converges. The end result is that Conjugate gradient is used as an iterative method for large linear systems today.
 
Please download the zip file and read the readme file inside the zip to know how to use it.
 
 
You can download it from:
 
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.