Monday, April 30, 2018

Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

Tim Rentsch <txr@alumni.caltech.edu>: Apr 29 10:16PM -0700


> The failure you see of std::numeric_limits<double>::max()
> to compare equal to itself [...] (probably) does not violate
> the letter of the c++ standard.
 
What leads you to think the C++ standard allows this? Are
there any specific citations you can offer that support
this view? AFAICT the C and C++ standards admit no leeway,
and the comparison must give a result of equal.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 29 10:41PM -0700

> number does not have an exact representation in binary. It will
> probably yield different results when loaded into either a 64-bit or a
> 80-bit register.
 
None of those things matter. The Standard requires a particular
value be returned, however the implementation chooses to do it.
 
> Add some minor optimizer bugs and one can easily
> imagine that there might be problems when comparing this number with
> itself, even if it should work by the letter of the standard.
 
If you don't trust your compiler, get a different compiler.
 
If you think it's important to run sanity checks to be sure the
compiler doesn't have bugs, by all means do so.
 
But don't give in to superstitious programming practices. Insist
on solid understanding and a rational decision process, not murky
justifications based on uncertainty and fear. Anyone promoting
voodoo programming principles should be encouraged to change
occupations from developer to witchdoctor.
Juha Nieminen <nospam@thanks.invalid>: Apr 30 05:57AM

>> 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00000000000000000000
>> 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00000000000000000000
 
> The two numbers are the same to the last digit.
 
If you want to print two floating point numbers in order to see if they
are bit-by-bit identical, you shouldn't print them in decimal. Conversion
to decimal may cause rounding errors (because floating point values are
internally represented in terms of powers of 2, while decimal is in
terms of powers of 10).
 
Either print the raw byte values of the floating point variable (eg.
in binary format), or use std::printf with the format specifier "%a"
which prints it in hexadecimal. (This is a bit-by-bit accurate
representation because converting from base-2 to base-16 can be
done losslessly, without need for rounding.)
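 
For instance, a minimal sketch of both approaches (this assumes C++11 and an
8-byte IEEE-754 double; the helper name dump_bits is made up for illustration):
 
#include <cstdint>
#include <cstdio>
#include <cstring>

// Print the raw bit pattern of a double (hypothetical helper, for illustration).
void dump_bits(double d)
{
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);   // well-defined way to inspect the bytes
    std::printf("bits: %016llx\n", static_cast<unsigned long long>(bits));
}

int main()
{
    double d = 1.0 / 3.0;
    dump_bits(d);              // raw byte values
    std::printf("%a\n", d);    // hexadecimal floating point, lossless
}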
 
There might have been an equivalent to "%a" for std::ostream, but
I don't remember now if there was.
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 30 10:05AM +0200

On 04/27/18 08:44, Andrea Venturoli wrote:
> Hello.
> ...
 
First off, thanks to anyone who got interested in the matter.
I wrote to the clang-dev mailing list and received a precise answer.
 
 
 
 
I'll try to summarize everything here:
 
_ my first assumption, that comparing std::numeric_limits<double>::max()
to itself was failing, was wrong; the problem was something else;
 
_ BTW, this comparison must work (or it would be a bug in the
compiler/system/etc...);
 
_ my real problem was that:
a) I had enabled FP exceptions (in particular overflow);
b) with optimizations on, the compiler would speculatively execute a
branch that would not run under proper program flow and such a branch
generated an FP exception.
 
_ My code was deemed problematic because, in order to enable FP
exceptions (or use anything from <cfenv>), the compiler should be
informed (by using #pragma STDC FENV_ACCESS on); see the sketch below.
Failing to do this lets the optimizer make incorrect assumptions.
 
_ BTW, I found some sources which say the above actually is in the C++
standard, some say it's in the C standard (possibly inherited by C++ or not),
and some say it's an extension a compiler might support. I don't have access
to the C++ standard.
 
_ In any case, Clang does not support that #pragma, so right now there is
no way to get FP exceptions to play nicely with optimizations.
There's work going on, but no estimate on a release date.
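 
Here is a minimal sketch of the problematic pattern (assumptions:
feenableexcept() is a glibc/BSD extension rather than ISO C/C++, and the
speculated multiplication stands in for the branch described above):
 
#include <fenv.h>    // feenableexcept() is a glibc/BSD extension (glibc needs _GNU_SOURCE)
#include <cstdio>
#include <limits>

#pragma STDC FENV_ACCESS ON    // the C-standard way to announce FP-environment access

int main()
{
    feenableexcept(FE_OVERFLOW);    // trap on FP overflow (not ISO C or C++)

    double a = std::numeric_limits<double>::max();
    // Without FENV_ACCESS an optimizer may speculatively evaluate a * 2.0 even
    // though that arm is never taken, raising a spurious overflow trap.
    double b = (a == std::numeric_limits<double>::max()) ? a : a * 2.0;
    std::printf("%g\n", b);
}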
 
 
 
bye
av.
Paavo Helde <myfirstname@osa.pri.ee>: Apr 30 11:10AM +0300

On 30.04.2018 8:41, Tim Rentsch wrote:
>> imagine that there might be problems when comparing this number with
>> itself, even if it should work by the letter of the standard.
 
> If you don't trust your compiler, get a different compiler.
 
That's not just about my compiler. I need the code to be compiled by
different compilers, and we do not have the time or resources to test them
all, especially those which have not yet been written.
 
> If you think it's important to run sanity checks be sure the
> compiler doesn't have bugs, by all means do so.
 
Thanks, our software has a huge suite of automatic unit and integration
tests. With their help we recently located and eliminated a randomly
flipping bit in the physical memory of the testing server which the
memory diagnostic tools failed to diagnose.
 
> justifications based on uncertainty and fear. Anyone promoting
> voodoo programming principles should be encouraged to change
> occupations from developer to witchdoctor.
 
What one man calls a superstitious hunch, another man calls experience. I
would not have written a direct comparison with
std::numeric_limits<double>::max() because I have had some experience
with compiler/optimizer bugs and know where the murky corners are. As it
came out else-thread, my suspicions were justified: the problem indeed
appears to be a bug in the compiler, triggered indeed by the presence of
std::numeric_limits<double>::max() in the code (albeit the bug was a
different and more interesting one from what I had imagined).
 
I get paid for writing software that works as reliably as possible in the
real world. This has a lot to do with anticipating and avoiding or
working around any bugs or problems in the
standards/OS-es/compilers/toolchains/third-party libraries, etc.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 30 04:42AM -0700


> I found some sources which say the above actually is C++ standard,
> some say it's C standard (possibly inherited by C++ or not), some
> say it's an extension a compiler might support.
 
Support is required in ISO C, implementation-defined in C++.
 
> I don't have access to C++ standard.
 
Get a free draft here:
 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4659.pdf
"James R. Kuyper" <jameskuyper@verizon.net>: Apr 30 08:58AM -0400

On 04/30/2018 01:57 AM, Juha Nieminen wrote:
...
> done losslessly, without need for rounding.)
 
> There might have been an equivalent to "%a" for std::ostream, but
> I don't remember now if there was.
 
Starting with C++2011, if str.flags() has both ios_base::fixed and
ios_base::scientific set at the same time, that's equivalent to %a or
%A, depending upon whether ios_base::uppercase is also set.
(24.4.2.2.2p5 - Table 76).
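 
For example, a minimal sketch (assuming C++11, where the std::hexfloat
manipulator sets exactly those two flags):
 
#include <iostream>
#include <limits>

int main()
{
    double d = std::numeric_limits<double>::max();

    std::cout << std::hexfloat << d << '\n';    // e.g. 0x1.fffffffffffffp+1023

    // Equivalent, spelling out the two flags explicitly:
    std::cout.setf(std::ios_base::fixed | std::ios_base::scientific,
                   std::ios_base::floatfield);
    std::cout << d << '\n';
}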
"James R. Kuyper" <jameskuyper@verizon.net>: Apr 30 09:29AM -0400

On 04/29/2018 04:12 PM, Paavo Helde wrote:
...
 
> From here you can clearly see there might be problems. This constant is
> specified in decimal and I believe there is a fair chance this number
> does not have an exact representation in binary.
 
True. But keep in mind that this definition is intended to be used only
by a particular implementation of C++ - one that provides the <limits>
and <cfloat> headers that you're looking at. The implementor has a
responsibility for making sure that this particular floating point
constant will be converted BY THAT IMPLEMENTATION to the particular
floating point value which represents the maximum possible double value.
It can be proven, following the rules governing the interpretation of
floating point constants, that there do exist constants for which that
would be true. I would expect the implementor to choose the shortest
such constant.
 
This is complicated by the fact that some implementations of C++
(including, for some reason, some of the most popular ones on Windows)
consist of a compiler created by one vendor, combined with a C++
standard library created by a different vendor. However, if the compiler
and the library don't work together to interpret DBL_MAX correctly, then
that combination of compiler and library does not constitute a fully
conforming implementation of C++. Neither vendor should endorse using
their products together unless they've made sure that they do, together,
qualify as fully conforming (at least, when the appropriate options are
chosen). You shouldn't use them together unless at least one of the two
vendors has endorsed using them together.
 
If you don't have good reason to believe that an implementor has
bothered checking whether their implementation actually conforms (at
least, when you've chosen the options that are supposed to make it
conform), then whether or not DBL_MAX is exactly the maximum finite
value for a double is going to be the least of your problems.
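 
As a quick sanity check of that agreement, a minimal sketch (assuming C++11,
where numeric_limits<double>::max() is constexpr):
 
#include <cfloat>
#include <cstdio>
#include <limits>

int main()
{
    // On a conforming implementation the macro from <cfloat> and the value
    // from <limits> must denote exactly the same number.
    static_assert(DBL_MAX == std::numeric_limits<double>::max(),
                  "compiler and library disagree about DBL_MAX");
    std::printf("%a\n%a\n", DBL_MAX, std::numeric_limits<double>::max());
}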
"James R. Kuyper" <jameskuyper@verizon.net>: Apr 30 10:02AM -0400

On 04/30/2018 04:05 AM, Andrea Venturoli wrote:
...
> let the optimizer take wrong assumptions.
 
> _ BTW, I found some sources which say the above actually is C++
> standard, some say it's C standard (possibly inherited by C++ or not),
 
The STDC FENV_ACCESS pragma is defined by the C standard. The entire C
standard library was incorporated by reference into the C++ standard,
with precisely specified modifications, but for the rest of the C
language, it's incorporated into the C++ standard only if the C++
standard explicitly says so. What it says about this pragma is that
support for it is implementation-defined.
 
> some say it's an extension a compiler might support. ...
 
"Implementation-defined" is a marginally stronger requirement than
"supportable extension": an implementation's documentation is required
to specify whether or not it's supported, and the standard specifies
what it means if supported.
 
> ... I don't have access
> to C++ standard.
 
The final draft of C++2017 is almost identical to the final approved
standard, and is a LOT cheaper:
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4659.pdf>
Manfred <noname@invalid.add>: Apr 30 04:17PM +0200

On 4/28/2018 2:35 PM, Andrea Venturoli wrote:
> On 04/28/18 11:42, Tim Rentsch wrote:
[...]
> }
 
> Prints '1' and value of A on -O0.
> Prints '1' and then fails on -O1.
[...]
>> exception before the conditional move can take effect.
 
> That's my hypotesis too.
 
>>> Is this just my opinion or a bug worth reporting?
 
This is definitely a bug worth reporting:
n4659 sec. 8.16 (Conditional operator) p.1:
 
"... The first expression is contextually converted to bool (Clause 7).
It is evaluated and if it is true, the result of the conditional
expression is the value of the second expression,
otherwise that of the third expression. Only one of the second and third
expressions is evaluated. ..."
 
The relevant part is that "Only one of the second and third expressions
is evaluated.", so A*2 should /not/ be evaluated.
clang is definitely wrong here.
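 
A tiny illustration of that guarantee (a sketch; the helper name traced is
made up):
 
#include <cstdio>

int traced(const char* which, int v)
{
    std::printf("evaluating %s\n", which);    // side effect reveals which arm ran
    return v;
}

int main()
{
    int a = 1;
    int r = a == 1 ? traced("second", 10) : traced("third", 20);
    std::printf("r = %d\n", r);    // only "evaluating second" is printed
}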
 
Manfred <noname@invalid.add>: Apr 30 10:32PM +0200

On 4/30/2018 7:41 AM, Tim Rentsch wrote:
> justifications based on uncertainty and fear. Anyone promoting
> voodoo programming principles should be encouraged to change
> occupations from developer to witchdoctor.
 
The fact that comparing floating point values with == is inherently
brittle is a *fact*, and there is nothing superstitious about it.
 
Floating point programming has its own peculiarities; knowing that ==
comparisons are to be avoided is one of them.
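 
One common alternative is a tolerance-based comparison; a minimal sketch (the
helper name nearly_equal and the particular tolerance policy are illustrative
assumptions, not a universal rule):
 
#include <algorithm>
#include <cmath>
#include <limits>

// Compare two doubles with a tolerance scaled to their magnitude.
bool nearly_equal(double a, double b,
                  double rel_eps = 8 * std::numeric_limits<double>::epsilon())
{
    double diff  = std::fabs(a - b);
    double scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= rel_eps * scale;
}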
Juha Nieminen <nospam@thanks.invalid>: Apr 30 05:43AM

> When examined in isolation, yes. However, this pattern does often
> repeat and it becomes a tedious pain in the neck.
 
Code readability requires following coding conventions. Writing namespace
prefixes can be a good coding convention.
Paavo Helde <myfirstname@osa.pri.ee>: Apr 30 09:03AM +0300

On 29.04.2018 20:45, Jouko Koski wrote:
>> this alternative should work?
 
> I thought I was asking the question. :-) Well, the ultimate goal is to
> improve readability
 
One thing you forget here is that readability depends on the reader.
What is easily recognized by one reader might not be recognized by another.
 
If I am familiar with the codebase and know there is no custom string
class there, then seeing a type name string automatically means this
must be a std::string. If I am not familiar with the codebase, then it
is not so clear at all.
 
When I see an identifier like socket in some foreign or long-forgotten
code that I am just trying to debug, it is not readable at all. What does
it mean, how do I construct one? Man socket says it is a function.
 
I will need to figure out somehow that it is a
boost::asio::ip::tcp::socket (or then something else). If I read this
code once in 5 years, I would like to have it all spelled out. If I read
and write this code each day, then I want it to be shortened to just
socket. The readability goals are different.
"Jouko Koski" <joukokoskispam101@netti.fi>: Apr 30 12:02PM +0300

"Vir Campestris" wrote:
> Two things:
 
> Why would I want to limit the line to 80 chars? My screen is bigger than
> that, and has been for 20 years at least.
 
80 may not be the exact value, but longer lines tend to deteriorate
readability, and that is fairly universal. Then, team members may be
reading code on paper or on their smartphones, and there are still old
vga projectors out there. Some people may be visually impaired or just
prefer to use the ample horizontal space for placing the working
windows side by side.
 
> std::list<
> IpAddress
 
> >;
 
Line folding tends to break the visual flow. Folding convention and
indentation do matter. Sometimes it may be insightful to replace all
printable characters in a source file (or in any text, for that matter!)
with X characters, and assess if the pattern still communicates
structure, function, and intent.
 
> Apparently the Linux kernel standard is to stop excessive nesting. so
> why doesn't it limit nesting instead?
 
Well, it does, doesn't it?
 
--
Jouko
Ian Collins <ian-news@hotmail.com>: Apr 30 09:55PM +1200

On 04/30/2018 08:53 AM, Vir Campestris wrote:
 
> Two things:
 
> Why would I want to limit the line to 80 chars? My screen is bigger than
> that, and has been for 20 years at least.
 
Some of us have very wide screens, but like to look at multiple files
side by side. Screen size and line length are unrelated.
 
--
Ian.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 30 05:17AM -0700

>> That resembles noise.
 
> Two things:
 
> Why would I want to limit the line to 80 chars?
 
There are good human factors reasons to limit lines to somewhere
in the neighborhood of 75 characters. It's no accident that
paper is taller than it is wide.
 
> My screen is bigger than that, and has been for 20 years at least.
 
Not really a good argument. Just because something can be done
doesn't mean it's a good idea to do it. Also there are other
output media to consider: even if you are happy to view code
only on your wide screen, other people want to look at code
using different media.
 
> std::list<
> IpAddress
 
> >;
 
If it were important to write this declaration on more than one
line, I would be inclined to take more advantage of horizontal
space, reducing the amount of vertical space used:
 
using ResolverMap =
    std::map< std::string, std::list< IpAddress > >
;
 
or perhaps
 
using ResolverMap =
    std::map<
        std::string,
        std::list< IpAddress >
    >
;
"James R. Kuyper" <jameskuyper@verizon.net>: Apr 30 10:15AM -0400

On 04/29/2018 04:53 PM, Vir Campestris wrote:
...
> Why would I want to limit the line to 80 chars? My screen is bigger than
> that, and has been for 20 years at least.
 
Because scanning a long line is hard work for your eyes. There's a
reason why newspaper columns are so narrow: they're about the maximum
width that your eyes can see at one time without scanning left-to-right.
You can read a newspaper column scanning only in the vertical direction.
Books and magazines typically are wider, because it's not necessary to
completely avoid horizontal scanning. However, anything larger than
about 80 characters puts excess strain on your eye muscles - which is
why 80 characters was a popular choice for both printers and display
screens.
scott@slp53.sl.home (Scott Lurndal): Apr 30 03:09PM

>about 80 characters puts excess strain on your eye muscles - which is
>why 80 characters was a popular choice for both printers and display
>screens.
 
And here I always thought it was to match 80-column cards. And note
that printers were generally 132 columns before consumer (cheap)
printers became widely available.
"James R. Kuyper" <jameskuyper@verizon.net>: Apr 30 11:31AM -0400

On 04/30/2018 11:09 AM, Scott Lurndal wrote:
 
> And here I always though it was to match 80-column cards. And note
> that printers were generally 132 columns before consumer (cheap)
> printers became widely available.
 
I'm giving part of the reason why (among other things) 80-column cards
became more popular than other sizes, and why 80 column printers became
more popular than 132 column ones.
scott@slp53.sl.home (Scott Lurndal): Apr 30 03:40PM

>> printers became widely available.
 
>I'm giving part of the reason why (among other things) 80-column cards
>became more popular than other sizes
 
Hm. Given that interpreted cards (i.e. cards with column content printed on them)
were generally quite rare, particularly in the first half of the 20th
century, I'm not sure that eyestrain had anything to do with the choice
of 80-columns for cards.
 
 
>, and why 80 column printers became
>more popular than 132 column ones.
 
Again, I can't see (pun unintended) that. I can see that when
using 8.5x11 paper in a printer, portrait mode printing at 10cpi
(a readable size for most) yields 80 columns - so I think the
relationship is more about using 8.5 inch wide paper than 80 columns per-se.
"James R. Kuyper" <jameskuyper@verizon.net>: Apr 30 11:40AM -0400

On 04/30/2018 11:16 AM, Stefan Ram wrote:
> also have a higher resolution, so sometimes more characters
> fit into the same width when the width is measured by the
> arc width it has for the eye (i.e., measured in degrees).
 
Your eyes have much higher resolution in the fovea than in peripheral
areas, and that resolution is needed when reading densely packed text.
This isn't immediately obvious, because whenever you try to pay
attention to a part of your field of view that is not currently
projected onto your fovea, your eyes automatically move to project it
onto that area, creating the illusion that we can see at high resolution
across our entire field of view.
 
> that a slideshow with pictures or a movie should not use the
> full width of the screen but be restricted to an area of
> smaller width.
 
As a general rule, film makers keep things that require high resolution
viewing near the center of the field of view; the edges tend to be
occupied by images for which the lower resolution of our peripheral
vision is sufficient, either because they don't have much fine detail,
or because the fine detail isn't important.
ram@zedat.fu-berlin.de (Stefan Ram): Apr 30 02:58PM

>Because scanning a long line is hard work for your eyes. There's a
>reason why newspaper columns are so narrow: they're about the maximum
>width that your eyes can see at one time without scanning left-to-right.
 
I also prefer long lines sometimes.
 
Today, we prefer longer identifiers, so lines become long
by the longer identifiers without becoming more complex.
 
For example, from my "real" code,
 
int result = de.dclj.ram.algorithm.gregorian.YearMonthDay.difference( yearParameter, monthParameter, dayParameter ).plus( 6 ).mod( 7 );
 
(140 characters). You might attack this line for several
stylistic reasons, but I believe splitting it into several
lines will not help much.
 
If narrow newspaper columns help reading that much, then why
are book pages usually /not/ split into several narrow columns?
 
Another example, would be a small Java block to interchange
the value of two variables:
 
{ final var tmp = a; a = b; b = tmp; }
 
. This helps to see the block as a "unit". Also "a; a" and
"b; b" - as with domino pieces - help to check the right
order of statements. It can be identified by the brain as a
"chunk" to interchange two values.
 
(In Java, one cannot write a function nor a macro to
swap two variables.)
 
However, Usenet is deemed by me to be a special place for
which I usually format all my lines to contain 72 characters
at most. So, for Usenet, I'd write:
 
int result =
de.dclj.ram.algorithm.gregorian.YearMonthDay.difference
( yearParameter, monthParameter, dayParameter ).plus( 6 ).mod( 7 );
 
. PS: In my previous post I added a wrong reference to the
"References:" header line, so that it appeared as if I had
responded to a post of yours while in the body I responded
to Jouko only. Sorry for that!
ram@zedat.fu-berlin.de (Stefan Ram): Apr 30 03:04PM

>If narrow newspaper columns help reading that much, then why
>are book pages usually /not/ split into several narrow columns?
 
(PS: I now have seen that this already was addressed by James.)
ram@zedat.fu-berlin.de (Stefan Ram): Apr 30 03:16PM

>about 80 characters puts excess strain on your eye muscles - which is
>why 80 characters was a popular choice for both printers and display
>screens.
 
However long a line is, it always will fit into the screen
(when editing). Today's screens are somewhat wider, but they
also have a higher resolution, so sometimes more characters
fit into the same width when the width is measured by the
arc width it has for the eye (i.e., measured in degrees).
 
And if there were such a strain, this also would imply
that a slideshow with pictures or a movie should not use the
full width of the screen but be restricted to an area of
smaller width.
 
Some people intentionally roll their eyes from side to side
and call this "ophthalmic gymnastics". It also sometimes
is reported to help to find a memory of something in the
brain.
 
A real problem with justified paragraphs (text flush on both margins) is
that it can be hard to find the beginning of the next line
when lines are too wide, because all lines look the same.
This, however, does not apply to source code.
Juha Nieminen <nospam@thanks.invalid>: Apr 30 05:58AM

> I correct a typo, please read again:
 
Could you please stop spamming?
 
If the topic is C++, then it's fine, but just stop spamming the same
thing over and over.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.

Digest for comp.programming.threads@googlegroups.com - 6 updates in 6 topics

Sky89 <Sky89@sky68.com>: Apr 29 07:47PM -0400

Hello,
 
Read this:
 
 
I correct a typo, please read again..
 
Israel loves arab countries and arab countries love Israel
 
That's the truth today, please read this to notice it:
 
BUSINESS TIES TO ARAB WORLD SKYROCKETING, SAYS VENTURE CAPITALIST MARGALIT
 
As Israel marked Independence Day, the country was benefiting from
ever-growing business ties with the Arab world, according to one Israeli
executive who has helped paved the way for the budding rapprochement.
 
In the Middle East, Israeli former Labor MK and venture capitalist Erel
Margalit publicly named Jordan, Egypt, Morocco, Dubai, Abu Dhabi as
countries that seek to incorporate the Israeli homegrown tools. The
executive has also met with leaders from Oman and Tunisia.
 
With Saudi Arabia now developing a $500 billion smart city mere
kilometers from the southern city of Eilat, Israeli companies are in a
prime place to bid for contracts and services.
 
A number of Israeli companies are talking to the Saudi sovereign wealth
fund – the Public Investment Fund of Saudi Arabia – about developing the
proposed 26,500-sq.km. "smart city" zone, Margalit previously told The
Jerusalem Post.
 
 
Read more here:
 
https://www.jpost.com/Jpost-Tech/Business-ties-to-Arab-world-skyrocketing-says-venture-capitalist-Margalit-552460
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 29 07:39PM -0400

Hello....
 
 
Israel loves arab countries and arab countries love Israel
 
That's the truth today, please read this to notice it:
 
BUSINESS TIES TO ARAB WORLD SKYROCKETING, SAYS VENTURE CAPITALIST MARGALIT
 
As Israel marked Independence Day, the country was benefiting from
ever-growing business ties with the Arab world, according to one Israeli
executive who has helped pave the way for the budding rapprochement.
 
In the Middle East, Israeli former Labor MK and venture capitalist Erel
Margalit publicly named Jordan, Egypt, Morocco, Dubai, Abu Dhabi as
countries that seek to incorporate the Israeli homegrown tools. The
executive has also met with leaders from Oman and Tunisia.
 
With Saudi Arabia now developing a $500 billion smart city mere
kilometers from the southern city of Eilat, Israeli companies are in a
prime place to bid for contracts and services.
 
A number of Israeli companies are talking to the Saudi sovereign wealth
fund – the Public Investment Fund of Saudi Arabia – about developing the
proposed 26,500-sq.km. "smart city" zone, Margalit previously told The
Jerusalem Post.
 
 
Read more here:
 
https://www.jpost.com/Jpost-Tech/Business-ties-to-Arab-world-skyrocketing-says-venture-capitalist-Margalit-552460
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 29 07:02PM -0400

Hello....
 
 
Islamic Iran also loves the USA people; this is why they will buy
80 aircraft from Boeing to love and "encourage" the USA people. Read this
to notice it:
 
Iran Air has agreed to buy 80 aircraft from Boeing and 100 from Airbus
in addition to 20 from ATR. Iran's Aseman Airlines has also signed a
deal to purchase 30 Boeing 737 MAX jets.
 
Read more here:
 
http://www.irna.ir/en/News/82891060
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 29 06:19PM -0400

Hello..
 
Read this:
 
I am white and i look very very much like the persians both in my mind
and in my face and my body, even my mother told me that i look like the
persians. Here is more about my brothers the persians:
 
The Israeli MILITARY INTELLIGENCE chief Maj.- Gen. Herzl Halevi told the
daily Haaretz that he was pessimistic. "Today, we have the advantage.
Iran is closing in on it. Since the 1979 Iranian revolution, the number
of universities and university students in Iran has increased 20-fold,
compared with three and a half times for Israel." Enrollment in science,
technology, engineering and math in Iran is skyrocketing, he said. In
other words – in this technology war, Israel is losing.
 
Read more here:
 
https://www.jpost.com/Jerusalem-Report/Will-Iran-win-the-technology-war-435827
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 29 05:04PM -0400

Hello....
 
 
I am white and i look very very much like the persians both in my mind
and in my face and my body, even my mother told me that i look like the
persians.
 
Here is my brothers the "race" of persians:
 
Ancient History Persian Empire Documentary
 
https://www.youtube.com/watch?v=FKFMo3eB82o
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 29 03:32PM -0400

Hello..
 
Read this:
 
Here is fascinating mathematical work of arabs that demonstrates
undoubted genius: new research shows the Babylonians, not the Greeks,
were the first to study trigonometry -- the study of triangles -- and
reveals an ancient mathematical sophistication that had been hidden
until now.
 
Mathematical mystery of ancient Babylonian clay tablet solved
 
Read more here:
 
https://www.sciencedaily.com/releases/2017/08/170824141250.htm
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.

Sunday, April 29, 2018

Digest for comp.lang.c++@googlegroups.com - 10 updates in 2 topics

Jorgen Grahn <grahn+nntp@snipabacken.se>: Apr 29 06:23AM

On Sat, 2018-04-28, Jouko Koski wrote:
 
> Having to repeat the namespace name everywhere does not improve readability.
> It adds noise, induces boilerplate typing and it looks ugly, albeit it does
> make the identifiers more explicit.
 
Then I'm with Juha (I think it was): I think accepting the namespaces
is the best overall solution. So I refer to things as "std::foo" and
to "bar::foo" in my code (if that code isn't also in bar).
 
It no longer looks ugly to me, and it typically doesn't mean a lot of
repetition (especially nowadays with C++11 auto).
 
I've done some work with boost::asio, and there the long namespacing
/was/ annoying, because you had to mention it over and over again.
 
> };
 
> so that string would be std::string in this interface without leaking the
> same using or alias declaration to everywhere else, too. Any suggestions?
 
If we focus on the interface: to me, if that had appeared in a header
file, I'd worry about what 'string' meant, since it didn't say
'std::string'. It doesn't feel like an improvement to
 
struct thing {
    void func(std::string s);
};
 
I can appreciate an abbreviation like this in an interface:
 
using ResolverMap = std::map<std::string, std::list<IpAddress>>;
 
but then I'm getting something more than removal of the std prefix.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
woodbrian77@gmail.com: Apr 29 08:00AM -0700

On Sunday, April 29, 2018 at 1:23:46 AM UTC-5, Jorgen Grahn wrote:
> };
 
> I can appreciate an abbreviation like this in an interface:
 
> using ResolverMap = std::map<std::string, std::list<IpAddress>>;
 
I would write it like this:
 
using ResolverMap = ::std::map<::std::string,::std::list<IpAddress>>;
 
.
 
 
> but then I'm getting something more than removal of the std prefix.
 
You get both precise and flexible names this way.
 
 
Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net
"Jouko Koski" <joukokoskispam101@netti.fi>: Apr 29 08:37PM +0300

"Alf P. Steinbach" wrote:
> using string = std::string;
> void func( string const& );
> };
 
Yes, and a typedef would do as well. Now, can this be scaled so that
other identifiers from std::, or even the whole of std::, would be available
inside the struct?
 
--
Jouko
"Jouko Koski" <joukokoskispam101@netti.fi>: Apr 29 08:45PM +0300

"James Kuyper" wrote:
>> same using or alias declaration to everywhere else, too. Any suggestions?
 
> Can you suggest an alternative, and define the details of how you think
> this alternative should work?
 
I thought I was asking the question. :-) Well, the ultimate goal is to
improve readability in a bit broader context than in an interface
consisting of one class with one member function. But these would be my
shots with this minimalistic example so far:
 
using namespace std; // considered bad
 
struct thing {
    void func(string s);
};
 
This is the non-solution, because it leaks std to anybody using the thing.
 
struct thing {
    typedef std::string string; // or using string = std::string;
    void func(string s);
};
 
This is a solution. However, it does not scale, because a typedef is
necessary for each "beautified" type in the class scope. The solution
carries a lot of boilerplate: We have written "string" three times!
 
struct thing {
    using namespace std; // not valid C++
    void func(string s);
};
 
If this were allowed it would probably be the best solution. But there
might be some implications...
 
namespace some_unique_boilerplate_namespace_name {
    using namespace std;
    struct thing {
        void func(string s);
    };
}
using some_unique_boilerplate_namespace_name::thing;
 
This is the best I have come up with. But shall we really conclude
that the general guideline is: Place all your code in a namespace
even when you are not supposed to use a namespace; and place all your
code in a nested namespace when you are using a namespace?
 
--
Jouko
"Jouko Koski" <joukokoskispam101@netti.fi>: Apr 29 09:13PM +0300

"Jorgen Grahn" wrote:
 
> Then I'm with Juha (I think it was): I think accepting the namespaces
> is the best overall solution. So I refer to things as "std::foo" and
> to "bar::foo" in my code (if that code isn't also in bar).
 
Making a virtue of necessity! Yes, it is more explicit, but it cripples
readability.
 
> It no longer looks ugly to me, and it typically doesn't mean a lot of
> repetition (especially nowadays with C++11 auto).
 
Well, it is ugly and there is still a lot of repetition.
 
 
> If we focus on the interface: to me, if that had appeared in a header
> file, I'd worry about what 'string' meant, since it didn't say
> 'std::string'.
 
I trust you not being that inept in real life! People using some other
languages that have packages or modules mechanism seem to do quite ok
with this issue.
 
If there were "string", "buffer" or "socket" in some declaration and
there is "std" or "boost::asio" in the same cognitive scope, one should
be able to assume that string, buffer or socket cannot be just anything.
They are supposed to be coming from std or boost::asio without the need
of repeating the full namespace path on every single occurrence.
 
When it comes to this particular toy example, string is probably the
most used type in the standard library. I would expect that if there
were any other kind of string in the same context simultaneously, it
is that string that should be addressed as the other::string, without
having to drag the std:: prefix everywhere else. Having "using
namespace std;" in the global scope might not be that bad an idea after
all, but that is another story.
 
> I can appreciate an abbreviation like this in an interface:
 
> using ResolverMap = std::map<std::string, std::list<IpAddress>>;
 
> but then I'm getting something more than removal of the std prefix.
 
Yes. When it comes to code readability, this declaration is over 60
characters long. It is a bit challenging to try to limit the max line
length to 80 when this kind of stuff tends to be the norm. About 10 %
of it is colons and the "std::" is repeated three times. That resembles
noise.
 
--
Jouko
"Jouko Koski" <joukokoskispam101@netti.fi>: Apr 29 09:23PM +0300

wrote:
 
> I would write it like this:
 
> using ResolverMap = ::std::map<::std::string,::std::list<IpAddress>>;
 
Now, this is plain unbelievable madness! This may solve some problem,
but that problem has very little to do with programming. A code review
with professional programmers, or other professional help, might help
one realize that this kind of convention results in just a
pile of personal read-only code.
 
--
Jouko
Ian Collins <ian-news@hotmail.com>: Apr 30 07:17AM +1200

On 04/30/2018 06:23 AM, Jouko Koski wrote:
> with professional programmers or other professional help might give
> guidance realizing that this kind of a convention results in just a
> pile of personal read-only code.
 
All of the superfluous colons do make it hard to read; take them away and
it isn't as bad....
 
--
Ian.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Apr 29 07:58PM

On Sun, 2018-04-29, Jouko Koski wrote:
 
>> It no longer looks ugly to me, and it typically doesn't mean a lot of
>> repetition (especially nowadays with C++11 auto).
 
> Well, it is ugly and there is still a lot of repetition.
 
I note that I wrote "looks ugly to me", and that you still pretend
your preferences are universal.
 
>> file, I'd worry about what 'string' meant, since it didn't say
>> 'std::string'.
 
> I trust you not being that inept in real life!
 
Ad hominem this soon? I have better things to do.
 
*plonk*
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Vir Campestris <vir.campestris@invalid.invalid>: Apr 29 09:53PM +0100

On 29/04/2018 19:13, Jouko Koski wrote:
> length to 80 when this kind of stuff tends to be the norm. About 10 %
> of it is colons and the "std::" is repeated three times. That resembles
> noise.
 
Two things:
 
Why would I want to limit the line to 80 chars? My screen is bigger than
that, and has been for 20 years at least.
 
And
 
using ResolverMap
    = std::map<
        std::string,
        std::list<
            IpAddress
        >
    >;
 
Andy.
--
Apparently the Linux kernel standard is to stop excessive nesting. So
why doesn't it limit nesting instead?
Paavo Helde <myfirstname@osa.pri.ee>: Apr 29 11:12PM +0300

On 27.04.2018 13:22, Andrea Venturoli wrote:
 
>> Nevertheless, comparing with std::numeric_limits<double>::max() seems
>> pretty fragile
 
> Why then?
 
Comparing floating-point numbers for exact equality is always fragile;
there is always a chance that somebody adds some computation like divide
by 10, multiply by 10, ruining the results. A deeply "floating-point"
value like std::numeric_limits<double>::max() is doubly suspect just
because it is far away from the normal and well-tested range of values.
 
I just checked how it is defined in MSVC. It appears the value is
defined by the macro
 
#define DBL_MAX 1.7976931348623158e+308
 
From here you can clearly see there might be problems. This constant is
specified in decimal and I believe there is a fair chance this number
does not have an exact representation in binary. It will probably yield
different results when loaded into either a 64-bit or an 80-bit register.
Add some minor optimizer bugs and one can easily imagine that there
might be problems when comparing this number with itself, even if it
should work by the letter of the standard.
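 
As a quick illustration of that fragility, a minimal sketch (assuming
IEEE-754 doubles):
 
#include <cstdio>

int main()
{
    double a = 0.1 + 0.2;    // neither 0.1 nor 0.2 is exactly representable in binary
    double b = 0.3;
    std::printf("%a\n%a\n%d\n", a, b, a == b);    // the comparison prints 0 with IEEE-754 doubles
}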
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.

Fwd: Astonishing comments on Japan from heads of state around the world; the last one is so bold!


Subject: FW: Fwd: FW: Fwd: Astonishing comments on Japan from heads of state around the world; the last one is so bold!

 

 

From: RICHARD     Sent: Friday, April 27, 2018 1:16 PM
Subject:  
Astonishing comments on Japan from heads of state around the world; the last one is so bold!

Digest for comp.programming.threads@googlegroups.com - 11 updates in 10 topics

Sky89 <Sky89@sky68.com>: Apr 29 12:25AM -0400

Hello..
 
 
Optical processor startup claims to outperform GPUs for AI workloads
 
Fathom Computing, a startup founded by brothers William and Michael
Andregg, is aiming to design an optical computer for neural networks. As
performance increases in traditional transistor-based CPUs have
relatively stagnated over the last few years, alternative computing
paradigms such as quantum computers have been gaining traction. While
optical computers are not a new concept—Coherent Optical Computers was
published in 1972—they have been relegated to university research
laboratories for decades.
 
The design of the Fathom prototype performs mathematical operations by
encoding numbers into light, according to this profile in Wired. The
light is then passed through a series of lenses and other optical
components. The measured result of this process is the calculated result
of the operation.
 
The Fathom prototype is not a general-purpose processor, it is designed
to compute specific types of linear algebra operations. Specifically,
Fathom is targeting the long short-term memory type of recurrent neural
networks, as well as the non-recurrent feedforward neural network. This
mirrors trends in quantum computers, as systems produced by D-Wave are
similarly targeted toward quantum annealing, rather than general processing.
 
In a recent blog post, Fathom indicated that they are still two years
away from launching, but that they expect their platform to
"significantly outperform state-of-the-art GPUs." The first systems will
be available as a cloud service for researchers working in artificial
intelligence.
 
Read more here:
 
https://www.techrepublic.com/article/optical-processor-startup-claims-to-outperform-gpus-for-ai-workloads/
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 11:12PM -0400

Hello..
 
 
I correct a typo, please read again:
 
I have thought more about the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
 
I think it is not a "bright" idea: NUMA Locks that use Lock
Cohorting do not optimize the inside of the critical sections protected
by them, because the inside of those
critical sections may transfer/bring data from different NUMA nodes, so
this will cancel the gains that we got from NUMA Locks that use
Lock Cohorting. So i don't think i will implement Lock Cohorting.
 
So then i have invented my scalable AMLock and scalable MLock, please
read for example about my scalable MLock here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
You can download my scalable MLock for C++ by downloading
my C++ synchronization objects library that contains some of my
"inventions" here:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
The Delphi and FreePascal version of my scalable MLock is here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 11:10PM -0400

Hello...
 
I have thought more about the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
 
I think it is not a "bright" idea: NUMA Locks that use Lock
Cohorting do not optimize the inside of the critical sections protected
by them, because the inside of those
critical sections may transfer/bring data from different NUMA nodes, so
this will cancel the gains that we got from NUMA Locks that use
Lock Cohorting. So i don't think i will implement Lock Cohorting.
 
So then i have invented my scalable AMLock and scalable MLock, please
read for example about my scalable MLock here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
You can download my scalable MLock for C++ by downloading
my C++ synchronization objects library that contains some of my
"inventions" here:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
The Delphi and FreePascal version of my scalable MLock is here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 07:11PM -0400

Hello..
 
 
I think i have just made a mistake in my previous post:
 
I was just reading about the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
I think that this Lock cohorting optimizes more the cache etc. so i
think it is great, so i will implement it in C++ and Delphi and FreePascal.
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 06:57PM -0400

Hello,
 
 
I was just reading about the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
And i have noticed that they are testing on other NUMA systems than
Intel NUMA systems; i think that Intel NUMA systems are much more
optimized and the cost of data transfer between NUMA nodes on Intel
NUMA systems is "only" 1.6X the local NUMA node cost, so don't
bother about Lock Cohorting on Intel NUMA systems.
 
This is why i have invented my scalable AMLock and scalable MLock,
please read for example about my scalable MLock here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
You can download my scalable MLock for C++ by downloading
my C++ synchronization objects library that contains some of my
"inventions" here:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
The Delphi and FreePascal version of my scalable MLock is here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 05:08PM -0400

Hello,
 
 
My Scalable reference counting with efficient support for weak
references version 1.11 is here..
 
There was no bug in version 1.1; the only change in this new version 1.11
is that i have switched the variables "head" and "tail" in my scalable
reference counting algorithm.
 
You can download my Scalable reference counting with efficient support
for weak references version 1.11 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 04:36PM -0400

Hello..
 
About my new Scalable reference counting with efficient support for weak
references version 1.1:
 
Weak references support is done by hooking the TObject.FreeInstance
method so every object destruction is noticed and if a weak reference
for that object exists it gets removed from the internal dictionary
where all weak references are stored. While it works I am aware that
this is hacky approach and it might not work if someone overrides the
FreeInstance method and does not call inherited.
 
You can download and read about my new scalable reference counting with
efficient support for weak references version 1.1 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 04:07PM -0400

Hello..
 
 
I want to share with you this beautiful song:
 
Laid Back - Sunshine Reggae
 
https://www.youtube.com/watch?v=bNowU63PF5E
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 03:34PM -0400

Hello..
 
 
I correct a typo, please read again..
 
My new Scalable reference counting with efficient support for weak
references version 1.1 is here..
 
I have enhanced my scalable algorithm and now it is much more powerful; now
my scalable algorithm implementation works also as a "scalable" counter
that supports both "increment" and "decrement" using two scalable
counting networks, please take a look at my new scalable algorithm
implementation inside the source code..
 
You can download my new scalable reference counting with efficient
support for weak references version 1.1 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
Description:
 
This is my scalable reference counting with efficient support for weak
references, and since problems that cannot be solved without weak
references are rare, so this library does scale very well, this scalable
reference counting is implemented using scalable counting networks that
eliminate completely false sharing , so it is fully scalable on
multicore processors and manycore processors and this scalable algorithm
is optimized, and this library does work on both Windows and Linux
(x86), and it is easy to port to Mac OS X.
 
I have modified my scalable algorithm, now as you will notice i am not
using decrement with support for antitokens in the balancers of the
scalable counting networks, i am only using an "increment", please look
at my new scalable algorithm inside the zip file, i think it is working
correctly. Also notice that the returned value of _Release() method will
be valid if it is equal to 0.
 
I have optimized it more, now i am using only tokens and no antitokens
in the balancers of the scalable counting networks, so i am only
supporting increment, not decrement, so you have to be smart to invent
it correctly, this is what i have done, so look at the
AMInterfacedObject.pas file inside my zip file, you will notice that it
uses counting_network_next_value() function,
counting_network_next_value() increments the scalable counter network by
1, the _AddRef() method is simple, it increment by 1 to increment the
reference to the object, but look inside the _Release() method it calls
counting_network_next_value() three times, and my invention is calling
counting_network_next_value(cn1) first inside the _Release() method to
be able to make my scalable algorithm works, so just debug it more and
you will notice that my scalable algorithm is smart and it is working
correctly, i have debugged it and i think it is working correctly.
 
I have to prove my scalable reference counting algorithm, like with
mathematical proof, so i will use logic to prove like in PhD papers:
 
You will find the code of my scalable reference counting inside
AMInterfacedObject.pas inside the zip file here:
 
If you look inside the code there is two methods, _AddRef() and
_Release() methods, i am using two scalable counting networks,
think about them like counters, so in the _AddRef() method i am
executing the following:
 
v1 := counting_network_next_value(cn1);
 
cn1 is the scalable counting network, and counting_network_next_value()
is a function that increment the scalable counting network by 1.
 
In the _Release() method i am executing the following:
 
v2 := counting_network_next_value(cn1);
v1 := counting_network_next_value(cn2);
v1 := counting_network_next_value(cn2);
 
So my scalable algorithm is "smart", because the logical proof is
that i am calling counting_network_next_value(cn1) first in the
above, so this allows my scalable algorithm to work correctly,
because we are advancing cn1 by 1 to obtain the value of cn1,
so the other threads are advancing also cn1 by one inside
_Release() , it is the last thread that is advancing cn1 by 1 that will
make the reference counter equal to 0 , and _AddRef() method is the same
and it is easy to reason about, so this scalable algorithm is working.
Please look more carefully at my algorithm and you will notice that it
is working as i have just logically proved it.
 
Please read also the following to understand better:
 
Here is the parameters of the constructor:
 
First parameter is: The width of the scalable counting networks that
permits my scalable reference counting algorithm to be scalable, this
parameter must be 1 to 31, it is now at 4 , this is the power, so it is
equal to 2 power 4 , that means 2^4 = 16, and you have to pass this
counting networks width to the n of the following formula:
 
(n*log(n)*(1+log(n)))/4
 
The log of the formula is in base 2
 
This formula gives the number of gates of the scalable counting
networks, and if we replace n by 16, this will equal 80 gates, that
means you can scale the scalable counting networks to 80 cores, and
beyond 80 cores you will start to have contention.
 
Second parameter is: a boolean that tells if reference counting is used
or not, it is by default to true, that means that reference counting is
used.
 
About the weak references support: the Weak<T> type supports assignment
from and to T and makes it usable as if you had a variable of T. It has
the IsAlive property to check if the reference is still valid and not a
dangling pointer. The Target property can be used if you want access to
members of the reference.
 
Note: the use of the IsAlive property on our weak reference, this tells
us whether the referenced object is still available, and provides a safe
way to get a concrete reference to the parent.
 
I have ported efficient weak references support to Linux by implementing
efficient code hooking, look at my DSharp.Core.Detour.pas file for Linux
that i have written to see how i have implemented it in the Linux
library. Please look at the example.dpr and test.pas demos to see how
weak references work etc.
 
Call _AddRef() and _Release() methods to manually increment or decrement
the number of references to the object.
 
- Platform: Windows and Linux(x86)
 
Language: FPC Pascal v3.1.x+ / Delphi 2007+:
 
http://www.freepascal.org/
 
Required FPC switches: -O3 -Sd
 
-Sd for delphi mode....
 
Required Delphi switches: -$H+ -DDelphi
 
For Delphi XE versions and Delphi Tokyo use the -DXE switch
 
The defines options inside defines.inc are:
 
{$DEFINE CPU32} for 32 bit systems
 
{$DEFINE CPU64} for 64 bit systems
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 02:46PM -0400

Hello..
 
Read this:
 
 
My new Scalable reference counting with efficient support for weak
references version 1.1 is here..
 
I have enhanced my scalable algorithm and now it is much more powerful; now
my scalable algorithm implementation works also as a "scalable" counter
that supports both "increment" and "decrement" using two scalable
counting networks, please take a look at my new scalable algorithm
implementation inside the source code..
 
You can download my new scalable reference counting with efficient
support for weak references version 1.1 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
Description:
 
This is my scalable reference counting with efficient support for weak
references, and since problems that cannot be solved without weak
references are rare, so this library does scale very well, this scalable
reference counting is implemented using scalable counting networks that
eliminate completely false sharing , so it is fully scalable on
multicore processors and manycore processors and this scalable algorithm
is optimized, and this library does work on both Windows and Linux
(x86), and it is easy to port to Mac OS X.
 
I have modified a little bit my scalable algorithm, now as you will
notice i am not using decrement with support for antitokens in the
balancers of the scalable counting networks, i am only using an
"increment", please look at my new scalable algorithm inside the zip
file, i think it is working correctly. Also notice that the returned
value of _Release() method will be valid if it is equal to 0.
 
I have optimized it more, now i am using only tokens and no antitokens
in the balancers of the scalable counting networks, so i am only
supporting increment, not decrement, so you have to be smart to invent
it correctly, this is what i have done, so look at the
AMInterfacedObject.pas file inside my zip file, you will notice that it
uses counting_network_next_value() function,
counting_network_next_value() increments the scalable counter network by
1, the _AddRef() method is simple, it increment by 1 to increment the
reference to the object, but look inside the _Release() method it calls
counting_network_next_value() three times, and my invention is calling
counting_network_next_value(cn1) first inside the _Release() method to
be able to make my scalable algorithm works, so just debug it more and
you will notice that my scalable algorithm is smart and it is working
correctly, i have debugged it and i think it is working correctly.
 
I have to prove my scalable reference counting algorithm, like with
mathematical proof, so i will use logic to prove like in PhD papers:
 
You will find the code of my scalable reference counting inside
AMInterfacedObject.pas inside the zip file here:
 
If you look inside the code there is two methods, _AddRef() and
_Release() methods, i am using two scalable counting networks,
think about them like counters, so in the _AddRef() method i am
executing the following:
 
v1 := counting_network_next_value(cn1);
 
cn1 is the scalable counting network, and counting_network_next_value()
is a function that increment the scalable counting network by 1.
 
In the _Release() method i am executing the following:
 
v2 := counting_network_next_value(cn1);
v1 := counting_network_next_value(cn2);
v1 := counting_network_next_value(cn2);
 
So my scalable algorithm is "smart", because the logical proof is
that i am calling counting_network_next_value(cn1) first in the
above, so this allows my scalable algorithm to work correctly,
because we are advancing cn1 by 1 to obtain the value of cn1,
so the other threads are advancing also cn1 by one inside
_Release() , it is the last thread that is advancing cn1 by 1 that will
make the reference counter equal to 0 , and _AddRef() method is the same
and it is easy to reason about, so this scalable algorithm is working.
Please look more carefully at my algorithm and you will notice that it
is working as i have just logically proved it.
 
Please read also the following to understand better:
 
Here is the parameters of the constructor:
 
First parameter is: The width of the scalable counting networks that
permits my scalable reference counting algorithm to be scalable, this
parameter must be 1 to 31, it is now at 4 , this is the power, so it is
equal to 2 power 4 , that means 2^4 = 16, and you have to pass this
counting networks width to the n of the following formula:
 
(n*log(n)*(1+log(n)))/4
 
The log in the formula is base 2.
 
This formula gives the number of gates of the scalable counting
networks. If we replace n by 16, it gives 80 gates, which means you can
scale the scalable counting networks up to 80 cores; beyond 80 cores you
will start to have contention.
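
For example, the number of gates can be computed directly from the
width parameter; here is a small sketch (the function name is made up,
only the formula comes from the text above):

// Number of gates of a counting network of width n = 2^Power, using
// the formula (n*log2(n)*(1+log2(n)))/4 given above.
function CountingNetworkGates(Power: Integer): Integer;
var
  n: Integer;
begin
  n := 1 shl Power;                           // e.g. Power = 4 gives n = 16
  Result := (n * Power * (1 + Power)) div 4;  // log2(n) = Power
end;

// CountingNetworkGates(4) = (16 * 4 * 5) div 4 = 80 gates, which is
// where the 80-core figure above comes from.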
 
The second parameter is a boolean that tells whether reference counting
is used or not; it defaults to true, which means that reference counting
is used.
 
About the weak references support: the Weak<T> type supports assignment
from and to T and makes it usable as if you had a variable of T. It has
the IsAlive property to check if the reference is still valid and not a
dangling pointer. The Target property can be used if you want access to
members of the reference.
 
Note the use of the IsAlive property on our weak reference: it tells us
whether the referenced object is still available, and it provides a safe
way to get a concrete reference to the parent.
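
Here is a short usage sketch of the weak reference type as described
above (the exact declarations are guesses based on the description;
only Weak<T>, IsAlive and Target are named above, and TMyObject and
DoSomething are hypothetical):

procedure WeakReferenceDemo;
var
  Obj: TMyObject;                 // hypothetical class managed by the library
  WeakRef: Weak<TMyObject>;
begin
  Obj := TMyObject.Create;
  WeakRef := Obj;                 // assignment from T, as described above
  // ...
  if WeakRef.IsAlive then         // the referenced object still exists
    WeakRef.Target.DoSomething    // Target gives access to its members
  else
    Writeln('The object is gone; the weak reference is not dangling.');
end;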
 
I have ported the efficient weak references support to Linux by
implementing efficient code hooking; look at the DSharp.Core.Detour.pas
file for Linux that I have written to see how I implemented it in the
Linux library. Please look at the example.dpr and test.pas demos to see
how weak references work.
 
Call _AddRef() and _Release() methods to manually increment or decrement
the number of references to the object.
 
- Platform: Windows and Linux(x86)
 
Language: FPC Pascal v3.1.x+ / Delphi 2007+:
 
http://www.freepascal.org/
 
Required FPC switches: -O3 -Sd
 
-Sd is for Delphi mode.
 
Required Delphi switches: -$H+ -DDelphi
 
For Delphi XE versions and Delphi Tokyo use the -DXE switch
 
The defines options inside defines.inc are:
 
{$DEFINE CPU32} for 32 bit systems
 
{$DEFINE CPU64} for 64 bit systems
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 03:26PM -0400

Hello..
 
 
I have corrected a typo, please read again.
 
My new Scalable reference counting with efficient support for weak
references version 1.1 is here..
 
I have enhanced my scalable algorithm and now it is much more powerful:
my scalable algorithm implementation now also works as a "scalable"
counter that supports both "increment" and "decrement" using two
scalable counting networks. Please take a look at my new scalable
algorithm implementation inside the source code.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.1 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.