Friday, January 31, 2020

Digest for comp.lang.c++@googlegroups.com - 15 updates in 2 topics

Juha Nieminen <nospam@thanks.invalid>: Jan 31 08:25PM

>> Anyway, a smiley in such a message is extremely ambiguous.
 
> Maybe for autists, but normal people recognize the meaning,
> even more with an additional "hrhr".
 
Why are you being such an asshole? For what purpose?
Sam <sam@email-scan.com>: Jan 31 07:06AM -0500

Bonita Montero writes:
 
 
>> You probably can't.  The value of std::type_info.name() is
>> implementation defined.
 
> I know, but probably there's a compiler-switch.
 
No, there isn't. There is no valid reason to have such a compiler option in
the first place.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 31 01:38PM +0100


>> I know, but probably there's a compiler-switch.
 
> No, there isn't. There is no valid reason to have such a compiler option
> in the first place.
 
There is a reason: depending on how typeid(x).name() is used, you want
either the internal type representation for efficient handling or the
readable representation for debugging purposes.
David Brown <david.brown@hesbynett.no>: Jan 31 02:03PM +0100

On 31/01/2020 13:38, Bonita Montero wrote:
 
> There is a reason: depending on how typeid(x).name() is used, you want
> either the internal type representation for efficient handling or the
> readable representation for debugging purposes.
 
No, there is no reason to have such a switch - name() is not intended to
give human-readable results, merely an identifier. So compilers handle
it in different ways.
 
Anyway, a few seconds googling gives this:
 
<https://en.cppreference.com/w/cpp/types/type_info/name>
 
<https://gcc.gnu.org/onlinedocs/libstdc++/manual/ext_demangling.html>
 
Hopefully that should give you what you need.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 31 02:15PM +0100

> No, there is no reason to have such a switch - name() is not intended to
> give human-readable results, ...
 
I told you the reason: it would be good for debugging purposes.
It may not be to your taste how this might be implemented, but there
are good reasons to make it work the way I described.
Paavo Helde <myfirstname@osa.pri.ee>: Jan 31 04:01PM +0200

On 31.01.2020 14:38, Bonita Montero wrote:
 
> There is a reason: depending on how typeid(x).name() is used, you want
> either the internal type representation for efficient handling or the
> readable representation for debugging purposes.
 
The typeid names are implementation specific anyway, so nothing has
stopped the gcc folks from generating more verbose names. The fact that
they have not done so indicates that they thought about it and selected
the current solution as "the best". Why should they add another,
inconsistent behavior which would not be "the best"? And maybe you want
both representations in the same TU; a compiler switch would not help
with that at all.
 
Moreover, if there were such a compiler switch, it might cause glitches
or inconsistent behavior when linking together libraries compiled with
different options. Any potential benefit would not outweigh the loss of
consistency, especially considering that it is trivial to add an
abi::__cxa_demangle() call when needed.
Bart <bc@freeuk.com>: Jan 31 02:11PM

On 31/01/2020 14:01, Paavo Helde wrote:
> different options. Any potential benefit would not outweigh the loss of
> consistency, especially considering that it is trivial to add an
> abi::__cxa_demangle() call when needed.
 
Not so trivial since none of the following seems to work, 'abi' being
undefined in every case:
 
#include <abi>
#include <abi.h>
abi::__cxa_demangle(typeid(x).name())
Bonita Montero <Bonita.Montero@gmail.com>: Jan 31 03:15PM +0100

> The typeid names are implementation specific anyway, ...
 
But they could be verbose for debugging purposes.
 
> have not done so indicates that they have thought about it and selected
> the current solution as "the best". Why should they add another
> inconsistent behavior which would not be "the best"?
 
Getting the type name isn't good for anything other than debugging
purposes. So there should at least be an option to make it verbose.
You can completely disable typeinfo; I think the main reason for that
is to make it harder to reverse-engineer code. If such an option exists,
the option I have in mind could exist as well.
 
> Moreover, if there was such a compiler switch, it might cause glitches
> or inconsistent behavior when linking together libraries compiled with
> different options.
 
There are also "glitches" if you disable typeid with a compiler switch,
and no one cares about that. That's not a real issue.
Paavo Helde <myfirstname@osa.pri.ee>: Jan 31 04:27PM +0200

On 31.01.2020 16:11, Bart wrote:
 
> #include <abi>
> #include <abi.h>
> abi::__cxa_demangle(typeid(x).name())
 
Just include the correct header:
 
#include <iostream>
#include <typeinfo>
#include <cstdlib>
#include <cxxabi.h>

int main() {
    int x = 42;
    int status;
    char* name = abi::__cxa_demangle(typeid(x).name(), 0, 0, &status);
    std::cout << (status == 0 ? name : typeid(x).name()) << "\n";
    std::free(name);   // __cxa_demangle allocates the buffer with malloc
}
James Kuyper <jameskuyper@alumni.caltech.edu>: Jan 31 09:30AM -0500

On 1/31/20 8:03 AM, David Brown wrote:
 
> No, there is no reason to have such a switch - name() is not intended to
> give human-readable results, merely an identifier. So compilers handle
> it in different ways.
 
I've reached the same conclusion, based upon precisely the opposite
premises. There's nothing that can usefully be done with the string
returned by name() EXCEPT print it out for humans to read. Its only
purpose is to be human-readable.
 
However, the issues of precisely what the string should say were
sufficiently complicated that the committee elected to leave them for
each implementation to decide. If an implementation doesn't produce
results that are as easy to understand as you'd like, that's because the
implementors decided that more useful results were not sufficiently
valuable to justify the (considerable) difficulty of producing them. A
compiler switch to enable more useful results therefore makes no sense.
In order to support that switch, they'd need to put in that extra
effort, which is what they were trying to avoid. If and when they do end
up deciding to put in that effort, why provide an option which, if not
selected, allows for the less useful results?
"Öö Tiib" <ootiib@hot.ee>: Jan 31 06:41AM -0800

On Friday, 31 January 2020 14:38:43 UTC+2, Bonita Montero wrote:
 
> There is a reason: depending on how typeid(x).name() is used, you want
> either the internal type representation for efficient handling or the
> readable representation for debugging purposes.
 
Most fruitful is to add text serialization support to the data.
Serialization can be useful for a lot of things other than debugging.
The work needed depends on the serialization library used; in the better
cases it can be conditionally compiled out in release builds, and the
result will look something like:
 
std::cout /* or my::logger */ << json::make_from(x) << '\n';
Bonita Montero <Bonita.Montero@gmail.com>: Jan 31 04:09PM +0100

>     std::cout << abi::__cxa_demangle(typeid(x).name(), 0, 0, &status)
> << "\n";
> }
 
That's rather a workaround for me.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 31 04:12PM +0100

> Most fruitful is to add text serialization support to data.
 
That doesn't make sense, since the extracted name from typeid(x).name()
wouldn't help you when deserializing. Mapping type_index( typeid(x) )
through a hashtable to a deserialization function would be a possible
solution.
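 
A minimal sketch of that idea (the Shape/Circle types and the registry
below are made up for illustration; in real code the lookup key would
come from whatever type tag the serializer wrote):
 
#include <functional>
#include <iostream>
#include <istream>
#include <memory>
#include <sstream>
#include <typeindex>
#include <unordered_map>
 
struct Shape { virtual ~Shape() = default; };
struct Circle : Shape { double r = 0; };
 
// Hashtable mapping a type to its deserialization function.
using Factory = std::function<std::unique_ptr<Shape>(std::istream&)>;
std::unordered_map<std::type_index, Factory> registry;
 
int main()
{
    registry[std::type_index(typeid(Circle))] = [](std::istream& in) {
        auto c = std::make_unique<Circle>();
        in >> c->r;                              // payload for a Circle
        return std::unique_ptr<Shape>(std::move(c));
    };
 
    std::istringstream payload("2.5");
    auto obj = registry.at(std::type_index(typeid(Circle)))(payload);
    std::cout << static_cast<Circle&>(*obj).r << "\n";   // prints 2.5
}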
David Brown <david.brown@hesbynett.no>: Jan 31 04:31PM +0100

On 31/01/2020 15:30, James Kuyper wrote:
> premises. There's nothing that can usefully be done with the string
> returned by name() EXCEPT print it out for humans to read. It's only
> purpose is to be human-readable.
 
If I understand it correctly (from reading the links I posted), gcc and
clang specifically choose to give the names using the mangling format
specified by the Itanium C++ ABI. (Why the Itanium C++ ABI? I don't
know, but I suppose it's as good as any.) So they are giving a
well-defined format that can be used by other software that follows the
same standard. That includes the symbol names generated by the tools.
Assuming I understand this all correctly, it means you can get the
mangled symbol name for "foo(T)" by combining "foo" and
"std::type_info.name(T)". This could have many more uses than just
human-readable output.
 
And you can get a more readable version from the abi::__cxa_demangle
function.
 
So gcc and clang give a more useful and flexible solution here, though
it requires a little more effort to use it for human debugging.
 
> effort, which is what they were trying to avoid. If and when they do end
> up deciding to put in that effort, why provide an option which, if not
> selected, allows for the less useful results?
 
I can't imagine it would have been terribly difficult for gcc to pass
the type info strings through abi::__cxa_demangle to get a more
human-readable version of name(). The abi:: machinery is already used in
the compiler. But any switch here changing the output of name() would
break other uses of it, and so would be a terrible idea.
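 
For what it's worth, a small wrapper along those lines can look like the
following (just a sketch for GCC/Clang; abi::__cxa_demangle allocates
its result with malloc, so it has to be freed):
 
#include <cxxabi.h>    // abi::__cxa_demangle (GCC/Clang only)
#include <cstdlib>     // std::free
#include <iostream>
#include <string>
#include <typeinfo>
 
// Returns the demangled name, or the mangled one if demangling fails.
std::string demangled(const std::type_info& ti)
{
    int status = 0;
    char* raw = abi::__cxa_demangle(ti.name(), nullptr, nullptr, &status);
    std::string result = (status == 0 && raw) ? raw : ti.name();
    std::free(raw);
    return result;
}
 
int main()
{
    long long x = 0;
    std::cout << demangled(typeid(x)) << "\n";   // "long long" with g++
}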
Manfred <noname@add.invalid>: Jan 31 05:15PM +0100

On 1/31/2020 4:31 PM, David Brown wrote:
> know, but I suppose its as good as any.) So they are giving a
> well-defined format that can be used by other software that follows the
> same standard.
 
I believe the keyword here is well-defined (possibly together with
efficiency). Using the mangled name results in a 1:1 relationship
between name() and the C++ type, whereas a human-readable name would end
up with strings like "int unsigned", "unsigned int" and "unsigned" all
denoting the same type. Assuming the priority is to provide a name
somehow usable by software, this kind of 1:1 equivalence is valuable.
Moreover, some mangled names need to end up in the executable for
dynamic linking anyway, so having name() return the same encoding can
save some duplication.
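 
A quick check of that 1:1 property (any compiler; the two spellings must
compare equal because they name the same type, and the "j" mentioned in
the comment is the Itanium mangling for unsigned int):
 
#include <iostream>
#include <typeinfo>
 
int main()
{
    // "unsigned" and "unsigned int" are the same type, so their
    // type_info objects compare equal and name() is identical.
    std::cout << (typeid(unsigned) == typeid(unsigned int)) << "\n";  // 1
    std::cout << typeid(unsigned).name() << "\n";   // "j" with g++/clang
}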
 
 
> That includes the symbol names generated by the tools.
> human-readable output.
 
> And you can get a more readable version from the abi::__cxa_demangle
> function.
 
Which is perfectly suitable for debugging purposes.
 

Digest for comp.programming.threads@googlegroups.com - 1 update in 1 topic

aminer68@gmail.com: Jan 30 01:13PM -0800

Hello,
 
 
Here is how to install the Spin model checker on Windows:
 
I invite you to watch this video to learn how to install the Spin model
checker and iSpin:
 
https://www.youtube.com/watch?v=MGzmtWi4Oq0
 
I have installed and configured them correctly, and I am working with
them in parallel programming to detect race conditions etc.; please read
the rest of my thoughts to understand more:
 
Yet more precision about the invariants of a system..
 
I have been thinking about Petri nets and have studied them further;
they are useful for parallel programming. What I have noticed is that
there are two methods to prove that there is no deadlock in a system:
structural analysis with place invariants, which you have to find
mathematically, or the reachability tree. Note that the structural
analysis of Petri nets teaches you more, because it lets you prove that
there is no deadlock in the system, and the place invariants are
calculated from the following system of the given Petri net:
 
Transpose(vector) * Incidence matrix = 0
 
So you apply Gaussian elimination or the Farkas algorithm to the
incidence matrix to find the place invariants. As you will notice, these
place-invariant calculations for Petri nets look like Markov chains in
mathematics, with their vector of probabilities and their transition
matrix of probabilities; using Markov chains you can mathematically
calculate where the vector of probabilities will "stabilize", which
gives you very important information, and you can do it by solving the
following mathematical system:
 
Unknown vector1 of probabilities * transition matrix of probabilities = Unknown vector1 of probabilities.
 
Solving this system of equations is very important in economics and
other fields, and you can see that it is like calculating the
invariants, because the invariant in the system above is the vector of
probabilities that is obtained. This invariant, like the invariants of
the structural analysis of Petri nets, gives you very important
information about the system, for example where market shares will
stabilize, which is calculated this way in economics.
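 
As a concrete illustration of that last system (the 2x2 transition
matrix below is made up purely for illustration), repeatedly multiplying
a probability vector by the transition matrix converges to the invariant
(stationary) vector:
 
#include <array>
#include <cstdio>
 
int main()
{
    // Made-up 2-state transition matrix P (each row sums to 1).
    const double P[2][2] = { {0.9, 0.1},
                             {0.5, 0.5} };
    std::array<double, 2> v = {1.0, 0.0};   // initial probability vector
 
    // Power iteration: v <- v * P until it stops changing.
    for (int step = 0; step < 1000; ++step) {
        std::array<double, 2> next = {
            v[0] * P[0][0] + v[1] * P[1][0],
            v[0] * P[0][1] + v[1] * P[1][1]
        };
        v = next;
    }
    // v now satisfies v * P = v (about 0.833 and 0.167 for this matrix).
    std::printf("%f %f\n", v[0], v[1]);
}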
 
About reachability analysis of a Petri net..
 
As you have noticed in my Petri nets tutorial example (read below),
I am analysing the liveness of the Petri net, because there is a rule
that says:
 
If a Petri net is live, then it is deadlock-free.
 
Reachability analysis of a Petri net with Tina gives you the necessary
information about the boundedness and liveness of the Petri net. So if
it reports that the Petri net is "live", there is no deadlock in it.
 
Tina and Partial order reduction techniques..
 
With the advancement of computer technology, highly concurrent systems
are being developed. The verification of such systems is a challenging
task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering.
 
Tina uses "partial-order" reduction techniques aimed at preventing
combinatorial explosion; read more about it here:
 
http://projects.laas.fr/tina/papers/qest06.pdf
 
About modelling and detecting race conditions and deadlocks
in parallel programming..
 
I have taken a further look at the following Delphi project,
DelphiConcurrent, by an engineer called Moualek Adlene from France:
 
https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas
 
And I have just taken a look at the following webpage from Dr. Dobb's Journal:
 
Detecting Deadlocks in C++ Using a Locks Monitor
 
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644
 
And I think that both of them use techniques that are not as good as
analysing deadlocks with Petri nets in parallel applications. For
example, the two methods above only address locks, mutexes or
reader-writer locks; they do not address semaphores, event objects and
other such synchronization objects. This is why I have written a
tutorial that shows my methodology for analysing and detecting deadlocks
in parallel applications with Petri nets. My methodology is more
sophisticated because it is a generalization that models the broader
range of synchronization objects with Petri nets, and I will soon add
other synchronization objects to the tutorial. Have a look at it here:
 
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
 
You need the Tina software to run the Petri net examples in my
tutorial; here it is:
 
http://projects.laas.fr/tina/
 
Also, to detect race conditions in parallel programming, take a look at
the following tutorial, which uses the Spin tool:
 
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
 
This is how you will become much more professional at detecting
deadlocks and race conditions in parallel programming.
 
 
Thank you,
Amine Moulay Ramdane.

Thursday, January 30, 2020

With 8,000-Plus Deaths in U.S. Alone, 💥Flu Far More Deadly Than Coronavirus | The Weather Channel

Although the Wuhan novel coronavirus looks fierce, there are currently only very few cases in the US. Those of us living in the US 💥 must not overlook the flu, which has already caused 8,000+ deaths here 😷!

Even though a flu shot is available, more than eight thousand people have still died, and flu season is at its peak.

They say the vaccine does not cover all of this year's flu strains, so washing your hands and gargling is essential! Everyone take care.

https://weather.com/health/cold-flu/news/2020-01-28-flu-more-deadly-than-coronavirus


Sent from my iPad

Digest for comp.lang.c++@googlegroups.com - 5 updates in 5 topics

James Kuyper <jameskuyper@alumni.caltech.edu>: Jan 29 09:19PM -0500

On 1/29/20 2:51 AM, Juha Nieminen wrote:
 
>> Are you autistic and don't understand emoticons ?
 
> It's a smiley, not an "emoticon" (which would be a picture. No pictures
> here).
 
That distinction has grown fuzzy. For instance, my newsreader
(Thunderbird) automatically converts ;-) into a picture of a smiley face.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jan 30 03:12AM +0100

On 29.01.2020 15:47, Bonita Montero wrote:
> x
> PA_i
 
> Does anyone know how to make gcc more expressive like MSVC?
 
If you're only interested in understanding the short output of a little
g++ test program, then you can use the `c++filt -t` command.
 
A clean type name utility needs to clean up not only the g++ result but
also the Visual C++ result, and maybe also other schemes used by other
compilers.
 
Code for the two mentioned compilers can go like the following, which is
available at <url:
https://github.com/alf-p-steinbach/cppx-core-language/blob/master/source/cppx-core-language/type-checking/type_name_from.hpp>:
 
 
#pragma once    // Source encoding: UTF-8 with BOM (π is a lowercase Greek "pi").
#include <cppx-core-language/assert-cpp/is-c++17-or-later.hpp>
 
#include <cppx-core-language/syntax/collection-util/Sequence_.hpp>   // cppx::Sequence_
#include <cppx-core-language/syntax/types/type-builders.hpp>         // cppx::Type_
#include <cppx-core-language/syntax/declarations.hpp>                // CPPX_USE_...
#include <cppx-core-language/text/ascii-character-util.hpp>          // cppx::ascii::*
#include <cppx-core-language/tmp/Type_carrier_.hpp>                  // cppx::Type_carrier_
#include <cppx-core-language/tmp/type-modifiers.hpp>                 // cppx::As_referent_
 
#include <functional>   // std::invoke
#include <stdlib.h>     // free
#include <string>       // std::string
#include <typeinfo>     // std::type_info
#include <utility>      // std::forward
 
#ifdef __GNUC__
#   include <cxxabi.h>

Wednesday, January 29, 2020

Digest for comp.lang.c++@googlegroups.com - 11 updates in 2 topics

Bonita Montero <Bonita.Montero@gmail.com>: Jan 29 03:47PM +0100

With this:
 
#include <iostream>
#include <typeinfo>
 
using namespace std;
 
int main()
{
    int A, C;
    float B;
    long long D;
    int (*E)[];
    cout << typeid(A + B).name() << endl;
    cout << typeid(A + C).name() << endl;
    cout << typeid(A + D).name() << endl;
    cout << typeid(E).name() << endl;
}
 
... I get ...
 
float
int
__int64
int (* __ptr64)[0]
 
... with MSVC.
 
With g++ I get ...
 
f
i
x
PA_i
 
Does anyone know how to make gcc more expressive like MSVC?
red floyd <no.spam@its.invalid>: Jan 29 06:58AM -0800

On 1/29/20 6:47 AM, Bonita Montero wrote:
> x
> PA_i
 
> Does anyone know how to make gcc more expressive like MSVC?
 
You probably can't. The value of std::type_info.name() is
implementation defined.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 29 03:59PM +0100

>> Does anyone know how to make gcc more expressive like MSVC?
 
> You probably can't.  The value of std::type_info.name() is
> implementation defined.
 
I know, but probably there's a compiler-switch.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 29 04:44PM +0100

That's a workaround:
 
#include <iostream>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>
#include <string>
 
using namespace std;
 
int main()
{
    unordered_map<type_index, string> typeMappings;
    typeMappings[type_index( typeid(char) )] = "char";
    typeMappings[type_index( typeid(unsigned char) )] = "unsigned char";
    typeMappings[type_index( typeid(signed char) )] = "signed char";
    typeMappings[type_index( typeid(short) )] = "short";
    typeMappings[type_index( typeid(unsigned short) )] = "unsigned short";
    typeMappings[type_index( typeid(int) )] = "int";
    typeMappings[type_index( typeid(unsigned int) )] = "unsigned int";
    typeMappings[type_index( typeid(long) )] = "long";
    typeMappings[type_index( typeid(unsigned long) )] = "unsigned long";
    typeMappings[type_index( typeid(long long) )] = "long long";
    typeMappings[type_index( typeid(unsigned long long) )] = "unsigned long long";
    typeMappings[type_index( typeid(float) )] = "float";
    typeMappings[type_index( typeid(double) )] = "double";
    typeMappings[type_index( typeid(long double) )] = "long double";
 
    int a, c;
    float b;
    long long d;
    cout << typeMappings[type_index( typeid(a + b) )] << endl;
    cout << typeMappings[type_index( typeid(a + c) )] << endl;
    cout << typeMappings[type_index( typeid(a + d) )] << endl;
}
Melzzzzz <Melzzzzz@zzzzz.com>: Jan 29 04:05PM


>> You probably can't.  The value of std::type_info.name() is
>> implementation defined.
 
> I know, but probably there's a compiler-switch.
I can't remember but there is function to demangle typeid name...
 
--
press any key to continue or any other to quit...
U ničemu ja ne uživam kao u svom statusu INVALIDA -- Zli Zec
Svi smo svedoci - oko 3 godine intenzivne propagande je dovoljno da jedan narod poludi -- Zli Zec
Na divljem zapadu i nije bilo tako puno nasilja, upravo zato jer su svi
bili naoruzani. -- Mladen Gogala
Melzzzzz <Melzzzzz@zzzzz.com>: Jan 29 04:08PM

>>> implementation defined.
 
>> I know, but probably there's a compiler-switch.
> I can't remember but there is function to demangle typeid name...
 
Here it is:
https://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-html-USERS-4.3/a01696.html
--
press any key to continue or any other to quit...
U ničemu ja ne uživam kao u svom statusu INVALIDA -- Zli Zec
Svi smo svedoci - oko 3 godine intenzivne propagande je dovoljno da jedan narod poludi -- Zli Zec
Na divljem zapadu i nije bilo tako puno nasilja, upravo zato jer su svi
bili naoruzani. -- Mladen Gogala
boltar@nowhere.org: Jan 29 05:16PM

On Wed, 29 Jan 2020 15:47:43 +0100
>x
>PA_i
 
>Does anyone know how to make gcc more expressive like MSVC?
 
It's compiler dependent, so no.
 
FWIW Clang gives the same output as gcc.
boltar@nowhere.org: Jan 29 05:17PM

On Wed, 29 Jan 2020 16:44:15 +0100
>That's a workaround:
 
Not sure you'd define creating a map with the long format as a "workaround" :)
Juha Nieminen <nospam@thanks.invalid>: Jan 29 07:51AM


>> Crap. "Similiar" mathematical techniques were first published in the
>> 17th century. CORDIC itself wasn't even conceived until the 20th.
 
> Are you autistic and don't understand emoticons ?
 
It's a smiley, not an "emoticon" (which would be a picture. No pictures
here).
 
Anyway, a smiley in such a message is extremely ambiguous. It could
quite well mean that you aren't saying the thing seriously, or it
could also mean that you are mocking the other person. It's impossible
to tell which, from the smiley alone.
 
Consider, for example, these two sentences:
 
"That's what she said. ;-)"
 
"Yeah, whatever you say, pal. And the Moon is made of cheese. ;-)"
 
One is in jest. The other is mockery. In this case it's clearer which is
which. However, it's not always so clear-cut.
"Öö Tiib" <ootiib@hot.ee>: Jan 29 12:58AM -0800

On Wednesday, 29 January 2020 09:52:08 UTC+2, Juha Nieminen wrote:
 
> It's a smiley, not an "emoticon" (which would be a picture. No pictures
> here).
 
Interesting ... why do people not use emoticons on Usenet? 🤔 😝
Bonita Montero <Bonita.Montero@gmail.com>: Jan 29 10:34AM +0100


>> Are you autistic and don't understand emoticons ?
 
> It's a smiley, not an "emoticon" (which would be a picture. No pictures
> here).
 
https://en.wikipedia.org/wiki/Emoticon#/media/File:Emoticon_Smile_Face.svg
 
> Anyway, a smiley in such a message is extremely ambiguous.
 
Maybe for autists, but normal people recognize the meaning,
even more with an additional "hrhr".

Digest for comp.programming.threads@googlegroups.com - 2 updates in 1 topic

aminer68@gmail.com: Jan 28 02:11PM -0800

Hello,
 
 
Embarcadero Launches LearnDelphi.org ...
 
As you have noticed, I also program in Delphi and FreePascal, and now I
invite you to read the following news about Delphi:
 
Embarcadero Launches LearnDelphi.org, a Delphi-Centric Learning Ecosystem, to Promote Delphi Education
 
Read more here:
 
https://apnews.com/Business%20Wire/0524383998734d05a34d5900fdfa0058
 
And here is the new website of LearnDelphi.org:
 
https://www.learndelphi.org/
 
 
NASA is also using Delphi, read about it here:
 
https://community.embarcadero.com/blogs/entry/want-moreexploration-40857
 
 
The European Space Agency is also using Delphi, read about it here:
 
https://community.embarcadero.com/index.php/blogs/entry/delphi-s-involvement-with-the-esa-rosetta-comet-spacecraft-project-1
 
 
 
Thank you,
Amine Moulay Ramdane.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 29 04:02AM +0100

> Hello,
> Embarcadero Launches LearnDelphi.org ...
 
Decades too late. Delphi is more than dead.

Tuesday, January 28, 2020

Digest for comp.lang.c++@googlegroups.com - 6 updates in 2 topics

Puppet_Sock <puppet_sock@hotmail.com>: Jan 28 07:38AM -0800

The new version of "Large Scale C++" by Lakos is finally out. Well, volume 1 is out.
 
https://www.amazon.com/Large-Scale-Architecture-Addison-Wesley-Professional-Computing/dp/0201717069/
 
I am about 70 pages in. So far, I'm liking it.
 
Anybody else read it? Thoughts?
Robert Wessel <robertwessel2@yahoo.com>: Jan 27 07:37PM -0600

>// endofouterloop:
 
>the statements will jump to completely different points. The break
>actually goes to an implicit statement after the loop.
 
 
I think the slight oddity of target location when the label is used
for both a break/continue and a goto is pretty minor. I suppose we
could come up with some alternative way to specify a "statement name",
but I don't see a big advantage. If we can have a dozen meanings of
static, why not this.
 
 
 
>as it pinpoints exactly where the code will end up next. With loads of
>nested }s floating around, many nothing to do with the loops, the end of
>the desired loop is not always easy to find.
 
 
I think that's the wrong principle. If you want to target a specific
location, use a goto. The idea is to break/continue a loop/switch, so
it is the *loop/switch* that should be identified.
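 
For concreteness, the goto-based way of leaving a nested loop that is
being contrasted with a hypothetical labelled break/continue looks like
this in current C++ (the found() condition is just a stand-in):
 
#include <iostream>
 
bool found(int i, int j) { return i == 3 && j == 7; }   // stand-in test
 
int main()
{
    int hit_i = -1, hit_j = -1;
    for (int i = 0; i < 10; ++i) {
        for (int j = 0; j < 10; ++j) {
            if (found(i, j)) {
                hit_i = i; hit_j = j;
                goto after_outer;        // leaves both loops at once
            }
        }
    }
after_outer:
    std::cout << hit_i << ' ' << hit_j << "\n";
}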
Robert Wessel <robertwessel2@yahoo.com>: Jan 27 09:06PM -0600

On Sun, 26 Jan 2020 22:15:06 +0100, David Brown
>author). As you say, the context of that paper is different from the
>situation now, and I think we can all agree that spaghetti programming
>is a bad idea.
 
 
Let me emphasize, I didn't intend to accuse you of that sort of closed
mindedness, and if it came across that way, I apologize. There are
many people, though, who have heard a rule, and gosh darn it, they're
going to apply it, no matter what. Unfortunately it seems to be
prevalent with people who are in charge of writing and enforcing
coding standards.
 
That being said, I don't think we're really disagreeing on all that
much.
 
 
>or perhaps you need to change the problem.
 
>I am sure there are some types of coding situations where goto is
>considered a reasonable choice - I just don't meet these situations.
 
 
Different people will disagree about issues of style and clarity.
That's inevitable. Take the endless brace-style discussions
(please!).
 
But if I come work on one of your projects, I'll follow your project's
standards. And perhaps I might struggle a bit in some of the
(occasional) cases where I might consider a goto, or if your standards
for routine length might be a bit shorter than ours, or whatever,
simply because that somewhat changes the idioms I'm used to using
every day.
 
And if you come work on one of mine, I'll trust you not to run off and
refactor everything that doesn't meet your preferences.
 
And we can both (good naturedly) grumble about the wrong-headedness of
the other approach at the water cooler.
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Jan 27 07:15PM -0800

> On Mon, 27 Jan 2020 21:38:26 +0000, Bart <bc@freeuk.com> wrote:
[...]
 
> I think that's the wrong principle. If you want to target a specific
> location, use a goto. The idea is to break/continue a loop/switch, so
> it should be the *loop/switch* that should be identified.
 
Personally, I like the way Ada does this. You can apply a label to a
loop, and it's the name of the loop ("--" introduces a comment):
 
   OUTER: while condition loop
      INNER: while condition loop
         -- do stuff
         if Done_With_Outer then
            exit OUTER;    -- "exit" is Ada for "break"
         elsif Done_With_Inner then
            exit INNER;
         end if;
         -- do more stuff
      end loop INNER;
   end loop OUTER;
 
"exit OUTER" is best thought of as an operation that applies to the loop
named "OUTER" rather than as a branch to a particular location.
 
Loops that are the targets of goto statements have a different syntax,
one that was designed to stand out:
 
<<LABEL>>
...
goto LABEL;
 
On the other hand, Perl does something similar, but uses the same syntax
for loop names and goto labels. This does mean that seeing a named loop
raises the possibility that somewhere there's a goto that targets it,
but that's not much of a problem in practice.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
"Fred. Zwarts" <F.Zwarts@KVI.nl>: Jan 28 11:45AM +0100

Op 27.jan..2020 om 22:38 schreef Bart:
 
> as it pinpoints exactly where the code will end up next. With loads of
> nested }s floating around, many nothing to do with the loops, the end of
> the desired loop is not always easy to find.
 
Maybe, but for a "continue outerloop;" a label at the beginning would be
more appropriate, or even better, inside the loop.
David Brown <david.brown@hesbynett.no>: Jan 28 01:08PM +0100

On 28/01/2020 04:06, Robert Wessel wrote:
>> is a bad idea.
 
> Let me emphasize, I didn't intend to accuse you of that sort of closed
> mindedness, and if it came across that way, I apologize.
 
I didn't think you did accuse me of closed mindedness (other people
have, at times, and they have sometimes done so fairly. I can be rather
rigid in my opinions, or the way I express them). I was just trying to
clarify a little.
 
> going to apply it, no matter what. Unfortunately it seems to be
> prevalent with people who are in charge of writing and enforcing
> coding standards.
 
With coding standards, as with any other set of rules, there is always a
balance between having a clear and consistent rule set and having
flexibility. Sometimes the consistency outweighs the details of the
rule, other times the rule might be too strict. Usually in coding
standards it is a good idea to have different levels (you /must/ do
this, and /should/ do that) and to have procedures for breaking the
rules ("should" rules can be broken but you must comment the break in
code, breaks of "must" rules need written permission from the program
manager).
 
> That being said, I don't think we're really disagreeing on all that
> much.
 
Agreed. I think we are both against banning "goto" merely because it is
trendy to hate it, or because Dijkstra wrote a paper against it. We are
both looking for ways to write the code in a clear, logical and
maintainable fashion, and we are both expecting to use good tools and
newer language standards in order to get that without a cost in
efficiency. We just disagree a little on when "goto" might boost code
clarity, and when it is a detriment - and that is likely to be from
different experiences and different types of code.
 
 
> Different people will disagree about issues of style and clarity.
> That's inevitable. Take the endless brace-style discussions
> (please!).
 
Yes. There is one correct way, that I use, and a number of wrong ways
that some other people use :-)
 
> every day.
 
> And if you come work on one of mine, I'll trust you not to run off and
> refactor everything that doesn't meet your preferences.
 
Indeed. Consistency is very important.
 
There are times when a big refactoring is appropriate, but you need good
reason for it - changing brace style, for example, is not such a reason.
 
 
> And we can both (good naturedly) grumble about the wrong-headedness of
> the other approach at the water cooler.
 
Yes - and I view c.l.c. and c.l.c++ as virtual water coolers!

Digest for comp.programming.threads@googlegroups.com - 5 updates in 3 topics

aminer68@gmail.com: Jan 27 10:56AM -0800

Hello,
 
 
Priority Queueing Simulation
 
A round-robin queuing scheduler allows tasks to
have equal access to the processor. The addition of
priority levels allows more important tasks to be
completed first. However, this scenario could result in
lower-level tasks never gaining access to the
processor. This situation is known as starvation.
 
Here are two strategies to prevent starvation:
 
Read more here:
 
https://www.nku.edu/~mcguffeej1/Franzen_poster.pdf
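 
One common strategy against starvation is priority aging: the effective
priority of a waiting task improves the longer it waits. Below is a
minimal C++ sketch of that idea (it is not taken from the linked poster,
and the numbers are arbitrary):
 
#include <algorithm>
#include <cstdio>
#include <vector>
 
struct Task {
    int id;
    int base_priority;   // lower number = more important
    int waited = 0;      // scheduling rounds spent waiting
};
 
// Effective priority improves by one for every round spent waiting.
int effective_priority(const Task& t) { return t.base_priority - t.waited; }
 
int main()
{
    std::vector<Task> ready = { {1, 0}, {2, 5}, {3, 9} };
    for (int round = 0; round < 12; ++round) {
        auto it = std::min_element(ready.begin(), ready.end(),
            [](const Task& a, const Task& b) {
                return effective_priority(a) < effective_priority(b);
            });
        std::printf("round %d runs task %d\n", round, it->id);
        it->waited = 0;                        // the chosen task is reset
        for (Task& t : ready)
            if (t.id != it->id) ++t.waited;    // everyone else ages
    }
    // Even task 3 (lowest base priority) eventually gets the processor.
}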
 
 
Thank you,
Amine Moulay Ramdane.
Wisdom90 <d@d.d>: Jan 27 12:15PM -0500

Hello,
 
 
About Java and Delphi and Freepascal..
 
I have just read the following webpage:
 
Java is not a safe language
 
https://lemire.me/blog/2019/03/28/java-is-not-a-safe-language/
 
 
But as you have noticed the webpage says:
 
- Java does not trap overflows
 
But Delphi and Freepascal do trap overflows.
 
And the webpage says:
 
- Java lacks null safety
 
But Delphi has null safety; I have just posted about it, saying the
following:
 
Here is MyNullable library for Delphi and FreePascal that brings null
safety..
 
Java lacks null safety. When a function receives an object, this object
might be null. That is, if you see 'String s' in your code, you often
have no way of knowing whether 's' contains an actual String unless
you check at runtime. Can you guess whether programmers always check?
They do not, of course. In practice, mission-critical software does
crash without warning due to null values. We have two decades of
examples. In Swift or Kotlin, you have safe calls or optionals as part
of the language.
 
Here is MyNullable library for Delphi and FreePascal that brings null
safety, you can read the html file inside the zip to know how it works,
and you can download it from my website here:
 
https://sites.google.com/site/scalable68/null-safety-library-for-delphi-and-freepascal
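 
For comparison only, here is the same null-safety idea expressed in C++
with std::optional (an analogue, not the MyNullable library itself): the
caller cannot reach the value without acknowledging that it may be
absent.
 
#include <iostream>
#include <optional>
#include <string>
 
// The signature itself says "there may be no result".
std::optional<std::string> find_user(int id)
{
    if (id == 42) return std::string("Alice");
    return std::nullopt;
}
 
int main()
{
    if (auto user = find_user(7))
        std::cout << *user << "\n";
    else
        std::cout << "no such user\n";   // the "null" case is explicit
}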
 
 
And the webpage says:
 
- Java allows data races
 
But for Delphi and FreePascal, I have just written about how to prevent
data races, saying the following:
 
Yet more precision about the invariants of a system..
 
I have been thinking about Petri nets and have studied them further;
they are useful for parallel programming. What I have noticed is that
there are two methods to prove that there is no deadlock in a system:
structural analysis with place invariants, which you have to find
mathematically, or the reachability tree. Note that the structural
analysis of Petri nets teaches you more, because it lets you prove that
there is no deadlock in the system, and the place invariants are
calculated from the following system of the given Petri net:
 
Transpose(vector) * Incidence matrix = 0
 
So you apply Gaussian elimination or the Farkas algorithm to the
incidence matrix to find the place invariants. As you will notice, these
place-invariant calculations for Petri nets look like Markov chains in
mathematics, with their vector of probabilities and their transition
matrix of probabilities; using Markov chains you can mathematically
calculate where the vector of probabilities will "stabilize", which
gives you very important information, and you can do it by solving the
following mathematical system:
 
Unknown vector1 of probabilities * transition matrix of probabilities =
Unknown vector1 of probabilities.
 
Solving this system of equations is very important in economics and
other fields, and you can see that it is like calculating the
invariants, because the invariant in the system above is the vector of
probabilities that is obtained. This invariant, like the invariants of
the structural analysis of Petri nets, gives you very important
information about the system, for example where market shares will
stabilize, which is calculated this way in economics.
 
About reachability analysis of a Petri net..
 
As you have noticed in my Petri nets tutorial example (read below),
I am analysing the liveness of the Petri net, because there is a rule
that says:
 
If a Petri net is live, then it is deadlock-free.
 
Reachability analysis of a Petri net with Tina gives you the necessary
information about the boundedness and liveness of the Petri net. So if
it reports that the Petri net is "live", there is no deadlock in it.
 
Tina and Partial order reduction techniques..
 
With the advancement of computer technology, highly concurrent systems
are being developed. The verification of such systems is a challenging
task, as their state space grows exponentially with the number of
processes. Partial order reduction is an effective technique to address
this problem. It relies on the observation that the effect of executing
transitions concurrently is often independent of their ordering.
 
Tina is using "partial-order" reduction techniques aimed at preventing
combinatorial explosion, Read more here to notice it:
 
http://projects.laas.fr/tina/papers/qest06.pdf
 
About modelizations and detection of race conditions and deadlocks
in parallel programming..
 
I have just taken further a look at the following project in Delphi
called DelphiConcurrent by an engineer called Moualek Adlene from France:
 
https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas
 
And i have just taken a look at the following webpage of Dr Dobb's journal:
 
Detecting Deadlocks in C++ Using a Locks Monitor
 
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644
 
And I think that both of them use techniques that are not as good as
analysing deadlocks with Petri nets in parallel applications. For
example, the two methods above only address locks, mutexes or
reader-writer locks; they do not address semaphores, event objects and
other such synchronization objects. This is why I have written a
tutorial that shows my methodology for analysing and detecting deadlocks
in parallel applications with Petri nets. My methodology is more
sophisticated because it is a generalization that models the broader
range of synchronization objects with Petri nets, and I will soon add
other synchronization objects to the tutorial. Have a look at it here:
 
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
 
You have to get the powerful Tina software to run my Petri Net examples
inside my tutorial, here is the powerful Tina software:
 
http://projects.laas.fr/tina/
 
Also to detect race conditions in parallel programming you have to take
a look at the following new tutorial that uses the powerful Spin tool:
 
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
 
This is how you will get much more professional at detecting deadlocks
and race conditions in parallel programming.
 
 
And about the memory safety of Delphi and FreePascal, here is what I said:
 
I have just read the following webpage about memory safety:
 
Microsoft: 70 percent of all security bugs are memory safety issues
 
https://www.zdnet.com/article/microsoft-70-percent-of-all-security-bugs-are-memory-safety-issues/
 
 
And it says:
 
 
"Users who often read vulnerability reports come across terms over and
over again. Terms like buffer overflow, race condition, page fault, null
pointer, stack exhaustion, heap exhaustion/corruption, use after free,
or double free --all describe memory safety vulnerabilities."
 
So, as you will notice below, the following memory safety problems have
been solved in Delphi:
 
And I have just read the following webpage about "Fearless Security:
Memory safety":
 
https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/
 
Here are the memory safety problems:
 
1- Misusing Free (use-after-free, double free)
 
I have solved this in Delphi and Freepascal by inventing a "Scalable"
reference counting with efficient support for weak references. Read
below about it.
 
 
2- Uninitialized variables
 
This can be detected by the compilers of Delphi and Freepascal.
 
 
3- Dereferencing Null pointers
 
I have solved this in Delphi and Freepascal by inventing a "Scalable"
reference counting with efficient support for weak references. Read
below about it.
 
4- Buffer overflow and underflow
 
This has been solved in Delphi by using madExcept, read here about it:
 
http://help.madshi.net/DebugMm.htm
 
You can buy it from here:
 
http://www.madshi.net/
 
 
There remains also the stack exhaustion memory safety problem,
and here is how to detect it in Delphi:
 
Call the function "DoStackOverflow" below once from your code and you'll
get the EStackOverflow error raised by Delphi with the message "stack
overflow", and you can print the line of the source code where
EStackOverflow is raised with JCLDebug and such:
 
----
 
function DoStackOverflow : integer;
begin
  result := 1 + DoStackOverflow;
end;
 
---
 
 
 
About my scalable algorithms inventions..
 
 
I am a white Arab, and I am a gentleman type of person; I think you
also know me by the poetry that I wrote in front of you and posted here.
But I am also a more serious computer developer, and I am also an
inventor who has invented many scalable algorithms; read about them in
my writing below:
 
 
Here is my latest scalable algorithm invention; read what I have just
replied in comp.programming.threads:
 
About my LRU scalable algorithm..
 
On 10/16/2019 7:48 AM, Bonita Montero on comp.programming.threads wrote:
> in locked mode in very rare cases. And as I said inserting and
> flushing is conventional locked access.
> So the quest is for you: Can you guess what I did?
 
 
And here is what i have just responded:
 
 
I think I am also smart, so I have quickly found a solution that is
scalable and that is not your solution; it needs my scalable hashtable
and the fully scalable FIFO queue that I have invented. And I think I
will not patent it. But my solution is not lock-free; it uses locks in a
lock-striping manner, and it is scalable.
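 
For readers unfamiliar with the term, lock striping in general means
replacing one lock over a whole table with an array of locks, each
guarding a subset of the buckets, so unrelated keys no longer contend.
A minimal C++ sketch of the general idea (this is not the poster's
actual algorithm):
 
#include <array>
#include <iostream>
#include <mutex>
#include <string>
#include <unordered_map>
 
// One mutex per stripe; a key only contends with keys in the same stripe.
class StripedMap {
    static constexpr std::size_t kStripes = 16;
    std::array<std::mutex, kStripes> locks_;
    std::array<std::unordered_map<int, std::string>, kStripes> maps_;
 
    std::size_t stripe(int key) const { return std::hash<int>{}(key) % kStripes; }
 
public:
    void put(int key, std::string value) {
        const std::size_t s = stripe(key);
        std::lock_guard<std::mutex> g(locks_[s]);
        maps_[s][key] = std::move(value);
    }
 
    std::string get(int key) {
        const std::size_t s = stripe(key);
        std::lock_guard<std::mutex> g(locks_[s]);
        auto it = maps_[s].find(key);
        return it == maps_[s].end() ? std::string() : it->second;
    }
};
 
int main()
{
    StripedMap m;
    m.put(1, "one");
    std::cout << m.get(1) << "\n";
}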
 
 
And read about my other scalable algorithms inventions on my writing below:
 
 
About the buffer overflow problem..
 
I wrote yesterday about buffer overflow in Delphi and Freepascal..
 
I think there is a "higher" abstraction in Delphi and FreePascal that
does the job of avoiding buffer overflow very well, and it is the
TMemoryStream class, since it also behaves like a pointer and supports
ReallocMem() and FreeMem() on the pointer, but with a higher-level
abstraction. Look at my following example in Delphi and FreePascal; you
will notice that, contrary to raw pointers, the memory stream grows as
needed with WriteBuffer() without reserving the memory in advance, and
this is why it avoids the buffer overflow problem. Read the following
example to see how I am using it with a PAnsiChar type:
 
========================================
 
 
Program test;
 
uses system.classes, system.sysutils;
 
var
  P: PAnsiChar;
  mem: TMemoryStream;
 
begin
 
  P := 'Amine';
 
  mem := TMemoryStream.Create;
  mem.Position := 0;
  { WriteBuffer grows the stream as needed; 6 bytes = 'Amine' plus #0 }
  mem.WriteBuffer(Pointer(P)^, 6);
  mem.Position := 0;
 
  writeln(PAnsiChar(mem.Memory));
  mem.Free;
 
end.
 
 
===================================
 
 
And since Delphi and FreePascal also detect buffer overflows on dynamic
arrays, I think that Delphi and FreePascal are powerful tools.
 
 
Read my previous thoughts below to understand more:
 
 
And I have just read the following webpage about "Fearless Security:
Memory safety":
 
https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/
 
Here are the memory safety problems:
 
1- Misusing Free (use-after-free, double free)
 
I have solved this in Delphi and Freepascal by inventing a "Scalable"
reference counting with efficient support for weak references. Read
below about it.
 
 
2- Uninitialized variables
 
This can be detected by the compilers of Delphi and Freepascal.
 
 
3- Dereferencing Null pointers
 
I have solved this in Delphi and Freepascal by inventing a "Scalable"
reference counting with efficient support for weak references. Read
below about it.
 
4- Buffer overflow and underflow
 
This has been solved in Delphi by using madExcept, read here about it:
 
http://help.madshi.net/DebugMm.htm
 
You can buy it from here:
 
http://www.madshi.net/
 
 
And about race conditions and deadlocks problems and more, read my
following thoughts to understand:
 
 
I will reformulate more clearly what I said about race condition
detection in Rust, so read it carefully:
 
You can think of the borrow checker of Rust as a validator for a locking
system: immutable references are shared read locks and mutable
references are exclusive write locks. Under this mental model, accessing
data via two independent write locks is not a safe thing to do, and
modifying data via a write lock while there are readers alive is not
safe either.
 
So, as you notice, the "mutable" references in Rust follow the
read-write lock pattern, and this is limiting, because it does not allow
the more fine-grained parallelism that lets us run writes in "parallel"
and gain more performance from parallelizing the writes.
 
 
Read more about Rust and Delphi and my inventions..
 
I think the spirit of Rust is like the spirit of Ada: both are designed
for very high standards of safety. But I don't think we have to fear the
race conditions that Rust solves, because race conditions are not so
difficult to avoid when you are a decent, knowledgeable parallel
programmer. Now we have to talk about the rest of the safety guarantees
of Rust. There remains the problem of deadlocks, and I think that Rust
does not solve this problem, but I have provided you with an enhanced
DelphiConcurrent library for Delphi and FreePascal that detects
deadlocks. And there are also the memory safety guarantees of Rust; here
they are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
But notice that I have solved the number 1 and number 2 by inventing my
scalable reference counting with efficient support for weak references
for Delphi and Freepascal, read below to notice it, and for number 3
read my following thoughts to understand:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
 
I understand this video, but I have to explain my work:
 
I am not like the techlead in the video above, because I am also an
"inventor" who has invented many scalable algorithms and their
implementations; I am also inventing effective abstractions. I will give
you an example:
 
Read the following from the senior research scientist Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you can see, he is trying to invent a new lock that is preemption
tolerant, but his lock lacks some important characteristics. This is why
I have invented a new fast mutex that is adaptive and much better; I
think mine is the "best", and I think you will not find it anywhere
else. My new fast mutex has the following characteristics:
 
1- Starvation-free
2- Good fairness
3- It keeps efficiently and very low the cache coherence traffic
4- Very good fast path performance (it has the same performance as the
scalable MCS lock when there is contention.)
5- And it has a decent preemption tolerance.
 
 
This is how I am an "inventor". I have also invented other scalable
algorithms, such as scalable reference counting with efficient support
for weak references, a fully scalable threadpool, a fully scalable FIFO
queue, and other scalable algorithms and their implementations, and I
think I will sell some of them to Microsoft, Google, Embarcadero or
similar software companies.
 
 
Read my following writing to know me more:
 
More about computing and parallel computing..
 
The important guaranties of Memory Safety in Rust are:
aminer68@gmail.com: Jan 27 09:16AM -0800

Hello,
 
 
About Java and Delphi and Freepascal..
 
I have just read the following webpage:
 
Java is not a safe language
 
https://lemire.me/blog/2019/03/28/java-is-not-a-safe-language/
 
 
But as you have noticed the webpage says:
 
- Java does not trap overflows
 
But Delphi and Freepascal do trap overflows.
 
And the webpage says:
 
- Java lacks null safety
 
But Delphi has null safety; I have just posted about it, saying the following:
 
Here is MyNullable library for Delphi and FreePascal that brings null safety..
 
Java lacks null safety. When a function receives an object, this object might be null. That is, if you see 'String s' in your code, you often have no way of knowing whether 's' contains an actual String unless you check at runtime. Can you guess whether programmers always check? They do not, of course. In practice, mission-critical software does crash without warning due to null values. We have two decades of examples. In Swift or Kotlin, you have safe calls or optionals as part of the language.
 
Here is MyNullable library for Delphi and FreePascal that brings null safety, you can read the html file inside the zip to know how it works, and you can download it from my website here:
 
https://sites.google.com/site/scalable68/null-safety-library-for-delphi-and-freepascal
 
 
And the webpage says:
 
- Java allows data races
 
But for Delphi and FreePascal, I have just written about how to prevent data races, saying the following:
 
Yet more precision about the invariants of a system..
 
I have been thinking about Petri nets and have studied them further;
they are useful for parallel programming. What I have noticed is that
there are two methods to prove that there is no deadlock in a system:
structural analysis with place invariants, which you have to find
mathematically, or the reachability tree. Note that the structural
analysis of Petri nets teaches you more, because it lets you prove that
there is no deadlock in the system, and the place invariants are
calculated from the following system of the given Petri net:
 
Transpose(vector) * Incidence matrix = 0
 
So you apply Gaussian elimination or the Farkas algorithm to
the incidence matrix to find the place invariants, and as you will
notice, those place-invariant calculations for Petri nets look
like Markov chains in mathematics, with their vector of probabilities
and their transition matrix of probabilities; using
Markov chains, you can mathematically calculate where the vector of probabilities
will "stabilize", and that gives you very important information, and
you can do it by solving the following mathematical system:
 
Unknown vector1 of probabilities * transition matrix of probabilities = Unknown vector1 of probabilities.
 
Solving this system of equations is very important in economics and
other fields, and you can notice that it is like calculating the
invariants, because the invariant in the system above is the
vector of probabilities that is obtained, and this invariant,
like the invariants of the structural analysis of Petri nets,
gives you very important information about the system, such as where
market shares will stabilize, which is calculated this way in economics.
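
As a small worked example of that system (the numbers are illustrative only), in LaTeX notation:

---

% A two-state Markov chain with transition matrix P; the stationary
% (invariant) row vector \pi solves \pi P = \pi with \pi_1 + \pi_2 = 1.
P = \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix},
\qquad \pi P = \pi .

% The first component gives \pi_1 = 0.9\,\pi_1 + 0.5\,\pi_2,
% hence 0.1\,\pi_1 = 0.5\,\pi_2, i.e. \pi_1 = 5\,\pi_2.
% Together with \pi_1 + \pi_2 = 1 the vector stabilizes at:
\pi = \bigl(\tfrac{5}{6},\ \tfrac{1}{6}\bigr).

---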
 
About reachability analysis of a Petri net..
 
As you have noticed in my Petri nets tutorial example (read below),
I am analysing the liveness of the Petri net, because there is a rule
that says:
 
If a Petri net is live, that means that it is deadlock-free.
 
Reachability analysis of a Petri net with Tina
gives you the necessary information about the boundedness and liveness
of the Petri net. So if it reports that the Petri net is "live",
there is no deadlock in it.
 
Tina and Partial order reduction techniques..
 
With the advancement of computer technology, highly concurrent systems
are being developed. The verification of such systems is a challenging
task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering.
 
Tina uses "partial-order" reduction techniques aimed at preventing
combinatorial explosion; read more here to see it:
 
http://projects.laas.fr/tina/papers/qest06.pdf
 
About modeling and detection of race conditions and deadlocks
in parallel programming..
 
I have just taken a further look at the following project in Delphi,
called DelphiConcurrent, by an engineer called Moualek Adlene from France:
 
https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas
 
And I have just taken a look at the following webpage of Dr. Dobb's Journal:
 
Detecting Deadlocks in C++ Using a Locks Monitor
 
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644
 
And I think that both of them are using techniques that are not as good
as analysing deadlocks with Petri Nets in parallel applications.
For example, the above two methods only address locks, mutexes,
or reader-writer locks, but they do not address semaphores,
event objects, or other such synchronization objects, so they
are not good. This is why I have written a tutorial that shows my
methodology for analysing and detecting deadlocks in parallel applications with Petri Nets. My methodology is more sophisticated because it is a generalization and it models the broader range of synchronization objects with Petri Nets, and I will soon add other synchronization objects to the tutorial. You have to look at it; here it is:
 
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
 
You have to get the powerful Tina software to run the Petri Net examples
inside my tutorial; here it is:
 
http://projects.laas.fr/tina/
 
Also to detect race conditions in parallel programming you have to take
a look at the following new tutorial that uses the powerful Spin tool:
 
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
 
This is how you will get much more professional at detecting deadlocks
and race conditions in parallel programming.
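
To make concrete the kind of bug this analysis targets, here is a minimal, self-contained Delphi/Free Pascal sketch (an illustration only, not taken from the tutorial) of two threads that can deadlock because they take the same two locks in opposite order:

---

program DeadlockDemo;

{$IFDEF FPC}{$mode delphi}{$ENDIF}
// Note: with Free Pascal on Unix, cthreads must be the first unit in the uses clause.

uses
  Classes, SysUtils, SyncObjs;

type
  // A worker that acquires FFirst and then FSecond.
  TWorker = class(TThread)
  private
    FFirst, FSecond: TCriticalSection;
  protected
    procedure Execute; override;
  public
    constructor Create(AFirst, ASecond: TCriticalSection);
  end;

constructor TWorker.Create(AFirst, ASecond: TCriticalSection);
begin
  FFirst := AFirst;
  FSecond := ASecond;
  inherited Create(False);
end;

procedure TWorker.Execute;
begin
  FFirst.Acquire;
  Sleep(100);        // widen the window so the cyclic wait is easy to hit
  FSecond.Acquire;   // each worker now waits for the lock the other one holds
  FSecond.Release;
  FFirst.Release;
end;

var
  LockA, LockB: TCriticalSection;
  W1, W2: TWorker;
begin
  LockA := TCriticalSection.Create;
  LockB := TCriticalSection.Create;
  W1 := TWorker.Create(LockA, LockB);  // takes A then B
  W2 := TWorker.Create(LockB, LockA);  // takes B then A -> possible deadlock
  W1.WaitFor;                          // the program usually hangs right here
  W2.WaitFor;
  W1.Free;
  W2.Free;
  LockA.Free;
  LockB.Free;
end.

---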
 
 
And about the memory safety of Delphi and Freepascal, here is what I said:
 
I have just read the following webpage about memory safety:
 
Microsoft: 70 percent of all security bugs are memory safety issues
 
https://www.zdnet.com/article/microsoft-70-percent-of-all-security-bugs-are-memory-safety-issues/
 
 
And it says:
 
 
"Users who often read vulnerability reports come across terms over and over again. Terms like buffer overflow, race condition, page fault, null pointer, stack exhaustion, heap exhaustion/corruption, use after free, or double free --all describe memory safety vulnerabilities."
 
So as you will notice below, the following memory safety problems have been solved in Delphi:
 
And I have just read the following webpage about "Fearless Security: Memory safety":
 
https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/
 
Here are the memory safety problems:
 
1- Misusing Free (use-after-free, double free)
 
I have solved this in Delphi and Freepascal by inventing a "scalable" reference counting with efficient support for weak references. Read below about it (and see the reference-counting sketch after this list).
 
 
2- Uninitialized variables
 
This can be detected by the compilers of Delphi and Freepascal.
 
 
3- Dereferencing Null pointers
 
I have solved this in Delphi and Freepascal by inventing a "Scalable" reference counting with efficient support for weak references. Read below about it.
 
4- Buffer overflow and underflow
 
This has been solved in Delphi by using madExcept; read about it here:
 
http://help.madshi.net/DebugMm.htm
 
You can buy it from here:
 
http://www.madshi.net/
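
The scalable reference-counting implementation itself is not shown here; as background for point 1 above, the following is only the standard interface-based reference counting that Delphi and Free Pascal already provide, which is what prevents double frees and use-after-free as long as every access goes through counted references:

---

program RefCountSketch;

{$IFDEF FPC}{$mode delphi}{$ENDIF}

uses
  SysUtils;

type
  IGreeter = interface
    procedure Hello;
  end;

  TGreeter = class(TInterfacedObject, IGreeter)
    procedure Hello;
    destructor Destroy; override;
  end;

procedure TGreeter.Hello;
begin
  Writeln('hello');
end;

destructor TGreeter.Destroy;
begin
  Writeln('freed exactly once, when the last reference goes away');
  inherited;
end;

var
  A, B: IGreeter;
begin
  A := TGreeter.Create;  // reference count = 1
  B := A;                // reference count = 2
  A := nil;              // reference count = 1: the object is still alive
  B.Hello;               // safe: no use-after-free
  B := nil;              // reference count = 0: the object is destroyed here
end.

---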
 
 
There also remains the stack exhaustion memory safety problem,
and here is how to detect it in Delphi:
 
Call the function "DoStackOverflow" below once from your code and you will get the EStackOverflow error raised by Delphi with the message "stack overflow", and you can print the line of the source code where EStackOverflow is raised with JCLDebug and the like:
 
----

function DoStackOverflow: Integer;
begin
  Result := 1 + DoStackOverflow;
end;

---
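
For completeness, here is one way to call it and observe the exception (a sketch only; whether a stack overflow is catchable like this depends on the platform and compiler, but on Windows with Delphi it typically is):

---

program StackOverflowDemo;

{$IFDEF FPC}{$mode delphi}{$ENDIF}

uses
  SysUtils;

function DoStackOverflow: Integer;
begin
  Result := 1 + DoStackOverflow;  // unbounded recursion exhausts the stack
end;

begin
  try
    DoStackOverflow;
  except
    on E: EStackOverflow do
      Writeln('Trapped: ', E.Message);
  end;
end.

---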
 
 
 
About my scalable algorithms inventions..
 
 
I am a white Arab, and I am a gentleman type of person,
and I think that you also know me by the poetry that I wrote
in front of you and posted here, but I am
also a more serious computer developer, and I am also
an inventor who has invented many scalable algorithms; read about
them in my writing below:
 
 
Here is my latest scalable algorithm invention; read
what I have just responded in comp.programming.threads:
 
About my LRU scalable algorithm..
 
On 10/16/2019 7:48 AM, Bonita Montero on comp.programming.threads wrote:
> in locked mode in very rare cases. And as I said inserting and
> flushing is conventional locked access.
> So the quest is for you: Can you guess what I did?
 
 
And here is what I have just responded:
 
 
I think I am also smart, so I have just quickly found a solution that is scalable and that is not your solution; it needs my scalable hashtable and my fully scalable FIFO queue that I have invented. And I think I will not patent it. But my solution is not lock-free: it uses locks in a lock-striping manner, and it is scalable.
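
The LRU solution itself is not shown here, but as a generic illustration of the lock-striping idea mentioned above (the names and the stripe count are illustrative), here is a counter table whose keys are spread over several independent locks, so that threads touching different stripes do not contend:

---

program LockStripingSketch;

{$IFDEF FPC}{$mode delphi}{$ENDIF}

uses
  SysUtils, SyncObjs;

const
  NStripes = 16;  // number of independent locks

var
  Locks: array[0..NStripes - 1] of TCriticalSection;
  Counts: array[0..NStripes - 1] of Int64;

// A deliberately simple string hash, used only to pick a stripe.
function StripeOf(const Key: string): Integer;
var
  h: Cardinal;
  i: Integer;
begin
  h := 0;
  for i := 1 to Length(Key) do
    h := h + Cardinal(Ord(Key[i]));
  Result := h mod NStripes;
end;

procedure IncrementKey(const Key: string);
var
  s: Integer;
begin
  s := StripeOf(Key);
  Locks[s].Acquire;   // only this stripe is locked, not the whole table
  try
    Inc(Counts[s]);
  finally
    Locks[s].Release;
  end;
end;

var
  i: Integer;
begin
  for i := 0 to NStripes - 1 do
    Locks[i] := TCriticalSection.Create;
  IncrementKey('alpha');
  IncrementKey('beta');
  for i := 0 to NStripes - 1 do
    Locks[i].Free;
end.

---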
 
 
And read about my other scalable algorithm inventions in my writing below:
 
 
About the buffer overflow problem..
 
I wrote yesterday about buffer overflow in Delphi and Freepascal..
 
I think there is a "higher" abstraction in Delphi and Freepascal
that does the job of avoiding buffer overflow very well, and it is
the TMemoryStream class, since it also behaves like a pointer
and supports ReallocMem() and FreeMem() on the pointer but
with a higher-level abstraction. Look for example at my
following example in Delphi and Freepascal: you will notice
that, contrary to raw pointers, the memory stream grows as needed with WriteBuffer(), without the need to reserve the memory beforehand, and this is why it avoids the buffer overflow problem. Read the following example to notice how I am using it with a PAnsiChar type:
 
========================================
 
 
Program test;

uses system.classes, system.sysutils;

var
  P: PAnsiChar;
  mem: TMemoryStream;

Begin

  P := 'Amine';

  mem := TMemoryStream.Create;
  mem.Position := 0;

  // WriteBuffer grows the stream as needed: 6 bytes = 'Amine' plus the trailing #0
  mem.WriteBuffer(Pointer(P)^, 6);

  mem.Position := 0;
  Writeln(PAnsiChar(mem.Memory));

  mem.Free;

end.
 
 
===================================
 
 
So, since Delphi and Freepascal can also detect buffer overflows on dynamic arrays (with range checking enabled), I think that Delphi and Freepascal are powerful
tools.
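
A small note on the dynamic-array point (an illustrative example): the bounds check on dynamic arrays is the range-checking feature, enabled with the $R switch, and an out-of-range index then raises ERangeError instead of silently corrupting memory:

---

program RangeCheckDemo;

{$IFDEF FPC}{$mode delphi}{$ENDIF}
{$R+}  // enable range checking

uses
  SysUtils;

var
  a: array of Integer;
begin
  SetLength(a, 3);   // valid indexes are 0..2
  try
    a[5] := 42;      // out of range; with {$R+} this raises ERangeError
  except
    on E: ERangeError do
      Writeln('Trapped: ', E.Message);
  end;
end.

---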
 
 
Read my previous thoughts below to understand more:
 
 
And I have just read the following webpage about "Fearless Security: Memory safety":
 
https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/
 
Here are the memory safety problems:
 
1- Misusing Free (use-after-free, double free)
 
I have solved this in Delphi and Freepascal by inventing a "Scalable" reference counting with efficient support for weak references. Read below about it.
 
 
2- Uninitialized variables
 
This can be detected by the compilers of Delphi and Freepascal.
 
 
3- Dereferencing Null pointers
 
I have solved this in Delphi and Freepascal by inventing a "Scalable" reference counting with efficient support for weak references. Read below about it.
 
4- Buffer overflow and underflow
 
This has been solved in Delphi by using madExcept; read about it here:
 
http://help.madshi.net/DebugMm.htm
 
You can buy it from here:
 
http://www.madshi.net/
 
 
And about race conditions and deadlocks problems and more, read my following thoughts to understand:
 
 
I will reformulate more clearly what I said about race condition detection in Rust, so read it carefully:
 
You can think of the borrow checker of Rust as a validator for a locking system: immutable references are shared read locks and mutable references are exclusive write locks. Under this mental model, accessing data via two independent write locks is not a safe thing to do, and modifying data via a write lock while there are readers alive is not safe either.
 
So as you are noticing, the "mutable" references in Rust follow the read-write lock pattern, and this is not good, because it does not allow the more fine-grained parallelism that permits us to run the writes in "parallel" and gain more performance from parallelizing the writes.
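
To make the read-write-lock analogy concrete on the Delphi/Free Pascal side (an illustration of the pattern only, not Rust code), TMultiReadExclusiveWriteSynchronizer from SysUtils (available in Delphi and, as far as I know, in Free Pascal) gives exactly the "many readers or one exclusive writer" discipline described above:

---

program RwLockSketch;

{$IFDEF FPC}{$mode delphi}{$ENDIF}

uses
  SysUtils;

var
  Lock: TMultiReadExclusiveWriteSynchronizer;
  Shared: Integer;

procedure ReadShared;
begin
  Lock.BeginRead;    // many readers may hold this at the same time
  try
    Writeln('read: ', Shared);
  finally
    Lock.EndRead;
  end;
end;

procedure WriteShared(AValue: Integer);
begin
  Lock.BeginWrite;   // exclusive: no readers and no other writer
  try
    Shared := AValue;
  finally
    Lock.EndWrite;
  end;
end;

begin
  Lock := TMultiReadExclusiveWriteSynchronizer.Create;
  try
    WriteShared(42);
    ReadShared;
  finally
    Lock.Free;
  end;
end.

---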
 
 
Read more about Rust and Delphi and my inventions..
 
I think the spirit of Rust is like the spirit of Ada: they are especially designed for very high standards of safety, like those of Ada. "But" I don't think we have to fear the race conditions that Rust solves, because I think that race conditions are not so difficult to avoid when you are a decent, knowledgeable programmer in parallel programming, so you have to understand what I mean. Now we have to talk about the rest of the safety guarantees of Rust: there remains the problem of deadlocks, and I think that Rust is not solving this problem, but I have provided you with an enhanced DelphiConcurrent library for Delphi and Freepascal that detects deadlocks. And there are also the memory safety guarantees of Rust; here they are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
But notice that I have solved number 1 and number 2 by inventing my
scalable reference counting with efficient support for weak references
for Delphi and Freepascal; read below to notice it, and for number 3 read my following thoughts to understand:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
 
I understand this video, but I have to explain my work:
 
I am not like this techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations; I am also inventing effective abstractions. I will give you an example:
 
Read the following from the senior research scientist called Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you can see, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much, much better, and I think mine is the "best"; I think you will not find it anywhere. My new Fast Mutex has the following characteristics:
 
1- Starvation-free
2- Good fairness
3- It efficiently keeps the cache-coherence traffic very low
4- Very good fast-path performance (it has the same performance as the
scalable MCS lock when there is contention.)
5- And it has decent preemption tolerance.
 
 
This is how I am an "inventor", and I have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, and I have invented a fully scalable threadpool, and I have also invented a fully scalable FIFO queue, and I have also invented other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or to
Google or Embarcadero or such software companies.
 
 
Read my following writing to know me more:
 
More about computing and parallel computing..
 
The important guarantees of Memory Safety in Rust are:
 
1- No Null Pointer Dereferences
2- No Dangling Pointers
3- No Buffer Overruns
 
I think I have solved Null Pointer Dereferences and also solved
Bonita Montero <Bonita.Montero@gmail.com>: Jan 27 06:49PM +0100

> About my LRU scalable algorithm..
 
Your LRU-algorithm ? Where is it ?
 
>> So the quest is for you: Can you guess what I did?
 
> And here is what i have just responded:
> I think i am also smart, ...
 
Then show me your LRU-code.
aminer68@gmail.com: Jan 27 05:49AM -0800

Hello,
 
 
Yet more precision about the invariants of a system..
 
I was just thinking about Petri nets, and I have studied them further;
they are useful for parallel programming. What I have noticed by
studying them is that there are two methods to prove that there is no
deadlock in the system: structural analysis with place invariants, which
you have to find mathematically, or the reachability tree. But note that the structural analysis of Petri nets teaches you more, because it permits you to prove that there is no deadlock in the system, and the place invariants are calculated mathematically from the following
system of the given Petri net:
 
Transpose(vector) * Incidence matrix = 0
 
So you apply Gaussian elimination or the Farkas algorithm to
the incidence matrix to find the place invariants, and as you will
notice, those place-invariant calculations for Petri nets look
like Markov chains in mathematics, with their vector of probabilities
and their transition matrix of probabilities; using
Markov chains, you can mathematically calculate where the vector of probabilities
will "stabilize", and that gives you very important information, and
you can do it by solving the following mathematical system:
 
Unknown vector1 of probabilities * transition matrix of probabilities = Unknown vector1 of probabilities.
 
Solving this system of equations is very important in economics and
other fields, and you can notice that it is like calculating the
invariants, because the invariant in the system above is the
vector of probabilities that is obtained, and this invariant,
like the invariants of the structural analysis of Petri nets,
gives you very important information about the system, such as where
market shares will stabilize, which is calculated this way in economics.
 
About reachability analysis of a Petri net..
 
As you have noticed in my Petri nets tutorial example (read below),
I am analysing the liveness of the Petri net, because there is a rule
that says:
 
If a Petri net is live, that means that it is deadlock-free.
 
Reachability analysis of a Petri net with Tina
gives you the necessary information about the boundedness and liveness
of the Petri net. So if it reports that the Petri net is "live",
there is no deadlock in it.
 
Tina and Partial order reduction techniques..
 
With the advancement of computer technology, highly concurrent systems
are being developed. The verification of such systems is a challenging
task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering.
 
Tina uses "partial-order" reduction techniques aimed at preventing
combinatorial explosion; read more here to see it:
 
http://projects.laas.fr/tina/papers/qest06.pdf
 
About modeling and detection of race conditions and deadlocks
in parallel programming..
 
I have just taken a further look at the following project in Delphi,
called DelphiConcurrent, by an engineer called Moualek Adlene from France:
 
https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas
 
And I have just taken a look at the following webpage of Dr. Dobb's Journal:
 
Detecting Deadlocks in C++ Using a Locks Monitor
 
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644
 
And I think that both of them are using techniques that are not as good
as analysing deadlocks with Petri Nets in parallel applications.
For example, the above two methods only address locks, mutexes,
or reader-writer locks, but they do not address semaphores,
event objects, or other such synchronization objects, so they
are not good. This is why I have written a tutorial that shows my
methodology for analysing and detecting deadlocks in parallel applications with Petri Nets. My methodology is more sophisticated because it is a generalization and it models the broader range of synchronization objects with Petri Nets, and I will soon add other synchronization objects to the tutorial. You have to look at it; here it is:
 
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
 
You have to get the powerful Tina software to run the Petri Net examples
inside my tutorial; here it is:
 
http://projects.laas.fr/tina/
 
Also to detect race conditions in parallel programming you have to take
a look at the following new tutorial that uses the powerful Spin tool:
 
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
 
This is how you will get much more professional at detecting deadlocks
and race conditions in parallel programming.
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.