Saturday, April 28, 2018

Digest for comp.lang.c++@googlegroups.com - 23 updates in 12 topics

Sky89 <Sky89@sky68.com>: Apr 28 11:11PM -0400

Hello,
 
 
I correct a typo, please read again:
 
I have thought more about the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
 
I think it is not a "bright" idea: NUMA locks that use Lock Cohorting
do not optimize the inside of the critical sections they protect, and
the code inside those critical sections may transfer/bring data from
different NUMA nodes, which can cancel the gains obtained from Lock
Cohorting. So I don't think I will implement Lock Cohorting.
 
So instead I have invented my scalable AMLock and scalable MLock;
please read for example about my scalable MLock here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
You can download my scalable MLock for C++ by downloading
my C++ synchronization objects library that contains some of my
"inventions" here:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
The Delphi and FreePascal version of my scalable MLock is here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 11:08PM -0400

Hello..
 
I have thought more about the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
 
I think it is not a "bright" idea: NUMA locks that use Lock Cohorting
do not optimize the inside of the critical sections they protect, and
the code inside those critical sections may transfer/bring data from
different NUMA nodes, which can cancel the gains obtained from Lock
Cohorting. So I don't think I will implement Lock Cohorting.
 
So instead I have invented my scalable AMLock and scalable MLock;
please read for example about my scalable MLock here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
You can download my scalable MLock for C++ by downloading
my C++ synchronization objects library that contains some of my
"inventions" here:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
The Delphi and FreePascal version of my scalable MLock is here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 28 09:52AM +0200

On 04/27/18 16:26, Tim Rentsch wrote:
> The function std::numeric_limits<double>::max() is of type
> double, and must return a particular (finite) value. Any
> particular value must compare equal to itself.
 
Thanks.
 
 
 
 
 
> By "reducing" the code what you've actually done is change it so
> the problem is no longer there. Always post code that does in
> fact exhibit the behavior you want to ask about.
 
You are absolutely right.
I was only asking for confirmation before I wasted hours "reducing" on a
false assumption.
In fact it looks like this was not the actual problem.
In any case, here we go now...
 
 
 
 
 
I'm working on FreeBSD 11.1/amd64 and tried clang 4.0.0, 5.0.1 and
6.0.0: all show the problem I'll describe when compiling with -O1.
 
Some warnings:
_ of course this snippet doesn't make sense, it's just a Proof Of Concept;
_ I know "feenableexcept" is FreeBSD specific and I don't know what a
Linux equivalent would be;
_ at -O3 probably the compiler realizes f() will always return max()
and optimizes everything away, so a more complicated example is
required; this can surely be done, as the original software fails with
this option.
 
 
 
The snippet:
> }
> std::cout<<B<<std::endl;
> }
 
This code as is will work fine, printing:
1.79769e+308
Good
1.79769e+308
 
Commenting the lines noted by NOTE 1 & 2 will still work fine (of course
you'll see no "Good" in the output).
 
Removing only NOTE 1 (but leaving NOTE 2) will of course still produce a
correct output.
 
Leaving NOTE 1, but removing NOTE 2 will output the first value, but
then generate an FP exception at "B=A*2".
 
 
 
Now some thoughts:
 
_ IMO adding or removing std::cout should not change a program behaviour
(apart obviously from the output);
 
_ in any case the instruction "B=A*2" should never be executed;
 
_ perhaps, when there's no messing with std::cout, the compiler labels
this branch as worth executing speculatively? In that case I think any
collateral effect should be thrown away, so the FP exception should be
suppressed.
 
Is this just my opinion or a bug worth reporting?
Maybe it's feenableexcept that makes the whole system deviate from the
standard?
 
bye & Thanks
av.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 28 02:42AM -0700

>> the problem is no longer there. Always post code that does in
>> fact exhibit the behavior you want to ask about.
 
> You are absolutely right. [...]
 
You're darn tootin'. :)
 
> 6.0.0: all show the problem I'll describe when compiling with -O1.
 
> _ I know "feenableexcept" is FreeBSD specific and I don't know what a
> Linux equivalent would be;
 
Maybe feenableexcept() is POSIX? Anyway it worked fine on my
Linux system.
 
> }
> std::cout<<B<<std::endl;
> }
 
I recommend just indenting, or alternatively using some lead
character other than > to mark the code. If > is used it looks
like an extra level of quoting (ie, to a previous message) in a
news posting.
 
> Leaving NOTE 1, but removing NOTE 2 will output the first value, but
> then generate an FP exception at "B=A*2".
 
Here is my stripped down version (just the main() function):
 
int
main( int /*argc*/ , char** /*argv*/ ){
    double A = f( .0001002773902563, 1. );
    bool equals = A == std::numeric_limits<double>::max();

    std::cout << equals << std::endl;

    feenableexcept( FE_OVERFLOW );
    std::cout << (equals ? A : A*2) << std::endl;

    return 0;
}
 
Prints '1' and value of A on -O0.
Prints '1' and then fails on -O1.
 
 
> labels this branch as worth executing speculatively?
> In that case I think any collateral effect should be thrown away,
> so the FP exception should be suppressed.
 
First, the problem is not the comparison. The '1' being printed
shows the two values compare equal.
 
Most likely what has happened is that clang decided to calculate
both 'A' and 'A*2', and then use a conditional move (which in
your program would assign to 'B') to choose the appropriate
value. Unfortunately, calculating 'A*2' causes a floating
exception before the conditional move can take effect.
 
That is consistent with the change when removing the call to
cout. When the cout calls are there, a branch instruction
needs to be done anyway, so a conditional move isn't used.
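 
Written back as source, the transformation would look something like
this (a sketch of the idea, not clang's actual output):
 
#include <iostream>
#include <limits>

int main() {
    double A = std::numeric_limits<double>::max();
    bool equals = true;

    // What the source says: A*2. is never evaluated when 'equals' is true.
    double B1;
    if (equals) B1 = A; else B1 = A*2.;

    // What a conditional-move lowering does: evaluate both arms, then
    // select one. The unconditional A*2. overflows to +inf, which traps
    // once FE_OVERFLOW has been enabled with feenableexcept().
    double both = A*2.;
    double B2 = equals ? A : both;

    std::cout << B1 << ' ' << B2 << std::endl;
}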
 
> Is this just my opinion or a bug worth reporting?
> Maybe it's feenableexcept that makes the whole system deviate from
> the standard?
 
I think using feenableexcept() already puts you outside the realm
of the C and C++ standards. So I'm not sure where the finger
should be pointed.
 
On the other hand, if I were on the clang team, I would like to
know that this behavior occurs, because the result is awful
whether or not it is technically considered a bug. And the bad
behavior does not occur with g++.
Christiano <christiano@engineer.com>: Apr 28 04:14AM -0700

On Saturday, April 28, 2018 at 4:52:37 AM UTC-3, Andrea Venturoli wrote:
> standard?
 
> bye & Thanks
> av.
 
Don't use fenv.h, use cfenv instead.
 
The minimal code that generates the bug is:
 
// ------------ a.cpp ------------------
#include <iostream>
#include <numeric>
#include <limits>
#include <cfenv>

double f(double A, double B) {
    if (A < B) return std::numeric_limits<double>::max();
    return A - B;
}

int main(int, char**) {
    feenableexcept(FE_OVERFLOW); //**** NOTE 1
    double A = f(.0001002773902563, 1.);
    std::cout << A << std::endl;
    double B;
    if (A == std::numeric_limits<double>::max()) {
        // std::cout << "Good" << std::endl; //**** NOTE 2
        B = A;
    } else {
        // std::cout << "Bad" << std::endl; //**** NOTE 2
        B = A * 2.;
    }
    std::cout << B << std::endl;
}
// ---------------------------------------
 
The command:
$ clang++ -O1 a.cpp
$ ./a.out
1.79769e+308
Floating point exception (core dumped)
 
This seems like a bug in clang's optimizer. Please go to:
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
and report the problem using the minimal code above and the compilation
command.
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 28 02:35PM +0200

On 04/28/18 11:42, Tim Rentsch wrote:
 
> Maybe feenableexcept() is POSIX? Anyway it worked fine on my
> linux system.
 
Didn't know.
Thanks for trying this on Linux.
 
 
 
 
 
> character other than > to mark the code. If > is used it looks
> like an extra level of quoting (ie, to a previous message) in a
> news posting.
 
Sorry for that. I had been told that's the only way to avoid line breaks
in many clients.
At least it seems to be so in Thunderbird.
 
 
 
 
 
> First, the problem is not the comparison. The '1' being printed
> shows the two values compare equal.
 
Yes, as I said, I originally thought this was the problem, but it isn't.
Strangely enough, however, just using 1000 instead of max(), with no
other changes to the program, made the problem go away!!!
 
 
 
 
 
> your program would assign to 'B') to choose the appropriate
> value. Unfortunately, calculating 'A*2' causes a floating
> exception before the conditional move can take effect.
 
That's my hypothesis too.
 
 
 
 
> I think using feenableexcept() already puts you outside the realm
> of the C and C++ standards. So I'm not sure where the finger
> should be pointed.
 
I guess at least the compiler should know that exceptions were enabled.
How? I don't know.
 
 
 
> know that this behavior occurs, because the result is awful
> whether or not it is technically considered a bug. And the bad
> behavior does not occur with g++.
 
I'll do this.
 
bye & Thanks
av.
"K. Frank" <kfrank29.c@gmail.com>: Apr 28 07:33AM -0700

Hi Andrea!
 
On Friday, April 27, 2018 at 2:44:47 AM UTC-4, Andrea Venturoli wrote:
> std::numeric_limits<double>::max() should yield a deterministic result.
> Am I correct?
> ...
 
Short answer:
 
The failure you see of std::numeric_limits<double>::max()
to compare equal to itself violates any reasonable de facto
standard, but (probably) does not violate the letter of the
c++ standard.
 
Even judged by the letter of the standard rather than the
de facto standard, it represents a significant quality-of-
implementation bug.
 
Some further explanation:
 
The c++ standard gives quite concrete guarantees about how
unsigned integral arithmetic works, and somewhat weaker
(rooted in giving flexibility to how sign bits are implemented)
guarantees for signed integral arithmetic.
 
Some time back -- probably the c++ 11 or 14 draft standard -- I
read carefully the standard's floating point verbiage. The c++
standard is SURPRISINGLY vacuous about floating-point arithmetic.
Basically it says that floating-point numbers represent thingys
that kind-of, sort-of behave like floating-point numbers, you can
use arithmetic operators on them, and that long double has at
least as much precision as double, which has at least as much
precision as float. (Hmm ...)
 
(Things may have been tightened up in the most recent standard,
but I doubt it. Also, an implementation has the option to return
true for std::numeric_limits<double>::is_iec559, in which case
it promises to meet (some level of) the ieee 754 floating-point standard.)
 
In short, the c++ standard guarantees for floating-point arithmetic
are so vacuous as to be useless, so we are forced to rely on de
facto standards or quality-of-implementation criteria to get any
(floating-point) work done.
 
While we're on the subject, let me address some of the floating-point
FUD that infects many nooks and crannies of the internet (this news
group included).
 
Floating-point arithmetic definitely has its subtleties, but, no,
floating-point numbers are not mere shape-shifting specters of some
sort of actual numbers. They are altogether well defined (although
different implementations legitimately define them differently).
 
If your implementation permits
 
float x = 3.3;
x != x + 0.0;
 
your implementation has a bug and violates the (de facto) standard.
 
No, floating-point numbers are not permitted to grow some sort of
floating-point-FUD fuzz when they move along the data bus. If you
store a floating-point number in memory and read it back ten times,
your implementation has a bug if it gives you back ten different
values. (Just sayin'.)
 
Floating-point arithmetic is commutative:
 
x + y == y + x;
 
It cannot be (and is not) associative for all values:
 
(x + y) + z == x + (y + z);
 
need not (and does not) hold for all values.
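 
Here is a concrete case (a sketch; any ieee 754 double behaves this
way, because consecutive doubles near 1e16 are 2 apart):
 
#include <iostream>

int main() {
    double x = 1e16, y = -1e16, z = 1.0;
    std::cout << ((x + y) + z) << std::endl;  // prints 1: x + y is exactly 0
    std::cout << (x + (y + z)) << std::endl;  // prints 0: y + z rounds to -1e16
}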
 
Floating-point arithmetic* should be exact to the precision of
the representation. Being off by the least-significant bit I
would call a quality-of-implementation imperfection, and being
off by two bits I would call a bug.
 
*) By this I mean a single floating-point operation. Round-off
error does -- unavoidably -- accumulate when performing a series
of operations, and managing this reality is an important part of
numerical analysis. Also, implementations, legitimately, give
themselves a couple of bits slop for mathematical functions, i.e.,
sqrt, sin, etc.
 
If someone tells you that floating-point arithmetic isn't precise
or that floating-point numbers start to get moldy when they sit
in memory for too long, they're spewing FUD.
 
If someone tells you that you can't meaningfully test for equality
between floating-point numbers, they're spewing FUD.
 
To be fair, floating-point arithmetic is not the arithmetic of real
numbers, and it has a number of subtleties. Everyone should check
out Goldberg's "What Every Computer Scientist Should Know About
Floating-Point Arithmetic," which can be found many places on the
internet, for example:
 
www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf
 
> Before I start trying different compilers (or versions), is my original
> assumption true or wrong?
 
You're right. Your compiler has a bug and violates the (de facto)
standard. (You might want to be nice, and file a bug report.)
 
> bye & Thanks
> av.
 
 
Happy Floating-Point Hacking!
 
K. Frank
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 28 04:45PM +0200

On 04/28/18 14:35, Andrea Venturoli wrote:
 
> Yes, as I said, I originally thought this was the problem, but it isn't.
> Strangely enough, however, just using 1000 instead of max(), with no
> other changes to the program, made the problem go away!!!
 
Sorry, forget this: wrong wording.
Of course using 1000 will make 1000*2 a legal statement.
 
What I meant to say: at first I came up with a hypothesis, which later
proved not to be the correct one.
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 28 04:52PM +0200

On 04/28/18 16:33, K. Frank wrote:
> to compare equal to itself violates any reasonable de facto
> standard, but (probably) does not violate the letter of the
> c++ standard.
 
Thanks a lot for your post.
 
However, as I said, my first hypothesis (wrong comparison) was not the
correct one.
I apologize for this... sometimes, when dealing with optimizers, things
are not so linear.
 
The problem lies elsewhere, as described in my other posts in this thread.
I think it's still an interesting one, possibly a compiler bug; so, if
you are curious, read them :)
 
bye
av.
James Kuyper <jameskuyper@alumni.caltech.edu>: Apr 28 04:26PM -0400

On 04/28/2018 10:33 AM, K. Frank wrote:
...
> Some time back -- probably the c++ 11 or 14 draft standard -- I
> read carefully the standard's floating point verbiage. The c++
> standard is SURPRISINGLY vacuous about floating-point arithmetic.
 
"... this International Standard imposes no restrictions on the accuracy
of floating-point operations, ..." (5.20p6). While that is a "Note", and
therefore non-normative, it does seem to correctly describe the
normative text of the standard: I couldn't find any such restrictions.
> (Things may have been tightened up in the most recent standard,
> but I doubt it. ...
 
The above quotation is from n4567.pdf, the closest free thing to C++17.
 
> ... Also, an implementation has the option to return
> true for std::numeric_limits<double>::is_iec559, in which case
> it promises to meet (some level of) the ieee 754 floating-point standard.)
 
"static constexpr bool is_iec559;
True if and only if the type adheres to IEC 559 standard." (18.3.2.4p56).
 
I don't see the term "adheres" as providing a whole lot of wiggle room -
an implementation either does or does not adhere to the standard. Is
there an IEC document somewhere which defines "adhere" in a way that
provides wiggle room?
The IEC 559 requirements on the precision of results are pretty nearly
as tight as it is practically possible to make them. I'm willing to rely
on that when is_iec559 is true for the relevant type(s).
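 
A trivial check (prints 1 on typical x86-64 implementations):
 
#include <iostream>
#include <limits>

int main() {
    // True iff the implementation claims IEC 559 (IEEE 754) semantics for
    // double; the same member exists for float and long double.
    std::cout << std::numeric_limits<double>::is_iec559 << std::endl;
}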
 
...
> If someone tells you that floating-point arithmetic isn't precise
> or that floating-point numbers start to get moldy when they sit
> in memory for too long, they're spewing FUD.
 
If, on the other hand, they tell you that floating-point arithmetic
isn't infinitely precise, they're telling you the exact truth.
 
> If someone tells you that you can't meaningfully test for equality
> between floating-point numbers, they're spewing FUD.
 
However, if someone tells you that two different calculations which, if
carried out with infinite precision, mathematically should produce
exactly identical results, might not produce exactly identical results
when performed using floating point arithmetic, they're telling you the
truth.
 
ram@zedat.fu-berlin.de (Stefan Ram): Apr 28 07:53PM

On 04/28/2018 07:11 AM, Jouko Koski wrote (abbreviated by me [S. R.]):
>struct thing {
> void func(string s);
>so that string would be std::string in this interface
 
I am not sure what "in this interface" is - not even what an
"interface" is (they have "interfaces" in Java). But what about
the obvious
 
struct thing
{ using string = ::std::string;
  void func( string s ); };
 
?
"Jouko Koski" <joukokoskispam101@netti.fi>: Apr 28 02:11PM +0300

"Jorgen Grahn" wrote:
 
> Yes. It's unclear to me if the OP has problems specifically with
> boost::asio (understandable!) or also with more normal examples of
> namespaces.
 
Having to repeat the namespace name everywhere does not improve
readability. It adds noise, induces boilerplate typing, and looks ugly,
although it does make the identifiers more explicit.
 
For instance, it would be nice to have a sane way of doing
 
struct thing {
void func(string s);
};
 
so that string would be std::string in this interface without leaking the
same using or alias declaration to everywhere else, too. Any suggestions?
 
--
Jouko
"Jouko Koski" <joukokoskispam101@netti.fi>: Apr 28 02:13PM +0300

"Juha Nieminen" wrote:
 
> If you had written, for instance, "buffer(output.buf + output.len, n)",
> I wouldn't even know that it's a Boost function. It could be anything.
> It might not even become apparent with the whole source file.
 
When examined in isolation, yes. However, this pattern does often
repeat, and it becomes a tedious pain in the neck.
 
I tend to think that code is usually read within some cognitive
scope, where there should be some decent way of narrowing what a
name can refer to, other than always repeating the full name path.
 
--
Jouko
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Apr 28 04:53PM +0200

On 28.04.2018 13:11, Jouko Koski wrote:
> };
 
> so that string would be std::string in this interface without leaking the
> same using or alias declaration to everywhere else, too. Any suggestions?
 
struct thing
{
    using string = std::string;
    void func( string const& );
};
 
?
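 
For completeness, usage of the nested alias looks like this (the empty
function body is just to make the sketch self-contained):
 
#include <string>

struct thing {
    using string = std::string;
    void func( string const& ) {}
};

int main() {
    thing t;
    thing::string s = "hello";  // the alias is scoped to the class;
    t.func( s );                // nothing leaks into the enclosing scope
}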
 
Cheers!,
 
- Alf
James Kuyper <jameskuyper@alumni.caltech.edu>: Apr 28 03:23PM -0400

On 04/28/2018 07:11 AM, Jouko Koski wrote:
...
> };
 
> so that string would be std::string in this interface without leaking the
> same using or alias declaration to everywhere else, too. Any suggestions?
 
Can you suggest an alternative, and define the details of how you think
this alternative should work?
Sky89 <Sky89@sky68.com>: Apr 28 07:10PM -0400

Hello,
 
 
I think I have just made a mistake in my previous post:
 
I was just reading the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
I think that Lock Cohorting optimizes cache usage better, so I think it
is great, and I will implement it in C++, Delphi and FreePascal.
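 
Here is a minimal C++ sketch of the cohort-lock idea as I read it from
the paper (an illustration under my own assumptions, not the authors'
code): one global lock shared by all NUMA nodes, one local lock per
node, and bounded in-node handoff of the global lock. Test-and-set
spinlocks are used because the global lock must be releasable by a
different thread than the one that acquired it, which std::mutex does
not allow; real implementations also pad each per-node structure to a
cache line and allocate it on its own NUMA node, which is omitted here.
 
#include <atomic>
#include <cstddef>
#include <vector>

class CohortLock {
public:
    explicit CohortLock(std::size_t nodes) : locals_(nodes) {}

    void lock(std::size_t node) {
        Local& l = locals_[node];
        l.waiters.fetch_add(1, std::memory_order_relaxed);
        spin_lock(l.flag);                  // compete only within our node
        l.waiters.fetch_sub(1, std::memory_order_relaxed);
        if (!l.global_held)                 // cohort doesn't hold the global lock
            spin_lock(global_);
        l.global_held = true;
    }

    void unlock(std::size_t node) {
        Local& l = locals_[node];
        if (l.waiters.load(std::memory_order_relaxed) > 0 &&
            ++l.handoffs < kMaxHandoffs) {
            // Hand the global lock to the next waiter on this node:
            // release only the local lock, leaving global_held set.
        } else {
            l.handoffs = 0;                 // give other nodes their turn
            l.global_held = false;
            spin_unlock(global_);
        }
        spin_unlock(l.flag);
    }

private:
    static void spin_lock(std::atomic<bool>& f) {
        while (f.exchange(true, std::memory_order_acquire)) { /* spin */ }
    }
    static void spin_unlock(std::atomic<bool>& f) {
        f.store(false, std::memory_order_release);
    }

    struct Local {
        std::atomic<bool> flag{false};      // per-node lock
        std::atomic<int> waiters{0};        // threads of this node waiting
        bool global_held = false;           // touched only while flag is held
        int handoffs = 0;                   // consecutive in-node handoffs
    };

    static constexpr int kMaxHandoffs = 64;
    std::atomic<bool> global_{false};       // the top-level, inter-node lock
    std::vector<Local> locals_;
};
 
A thread simply calls lock(n) and unlock(n) with the NUMA node n it is
running on.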
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 06:58PM -0400

Hello..
 
 
I was just reading the following paper about Lock Cohorting: A
General Technique for Designing NUMA Locks:
 
http://groups.csail.mit.edu/mag/a13-dice.pdf
 
And I have noticed that they are testing on NUMA systems other than
Intel NUMA systems; I think that Intel NUMA systems are much more
optimized, and the cost of data transfer between NUMA nodes on Intel
NUMA systems is "only" about 1.6X the local NUMA node cost, so don't
bother with Lock Cohorting on Intel NUMA systems.
 
This is why I have invented my scalable AMLock and scalable MLock;
please read for example about my scalable MLock here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
You can download my scalable MLock for C++ by downloading
my C++ synchronization objects library that contains some of my
"inventions" here:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
The Delphi and FreePascal version of my scalable MLock is here:
 
https://sites.google.com/site/aminer68/scalable-mlock
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 05:09PM -0400

Hello..
 
 
My Scalable reference counting with efficient support for weak
references version 1.11 is here..
 
There was no bug in version 1.1; in this new version 1.11 I have just
swapped the variables "head" and "tail" in my scalable reference
counting algorithm.
 
You can download my Scalable reference counting with efficient support
for weak references version 1.11 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
You can now port it to C++ if you want..
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 04:38PM -0400

Hello...
 
About my new Scalable reference counting with efficient support for weak
references version 1.1:
 
Weak references support is done by hooking the TObject.FreeInstance
method, so every object destruction is noticed, and if a weak reference
for that object exists it gets removed from the internal dictionary
where all weak references are stored. While it works, I am aware that
this is a hacky approach, and it might not work if someone overrides the
FreeInstance method and does not call inherited.
 
You can download and read about my new scalable reference counting with
efficient support for weak references version 1.1 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
And you can port it to C++..
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 02:59PM -0400

Hello....
 
Read this:
 
 
My new Scalable reference counting with efficient support for weak
references version 1.1 is here..
 
I have enhanced my scalable algorithm and now it is much more powerful:
my scalable algorithm implementation now also works as a "scalable"
counter that supports both "increment" and "decrement" using two
scalable counting networks. Please take a look at my new scalable
algorithm implementation inside the source code; you can port it to C++.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.1 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
Description:
 
This is my scalable reference counting with efficient support for weak
references. Since problems that cannot be solved without weak references
are rare, this library scales very well. The scalable reference counting
is implemented using scalable counting networks that completely
eliminate false sharing, so it is fully scalable on multicore and
manycore processors, and the scalable algorithm is optimized. The
library works on both Windows and Linux (x86), and it is easy to port to
Mac OS X.
 
I have modified my scalable algorithm: as you will notice, I am not
using decrement with support for antitokens in the balancers of the
scalable counting networks, I am only using an "increment". Please look
at my new scalable algorithm inside the zip file; I think it is working
correctly. Also notice that the returned value of the _Release() method
is only valid if it is equal to 0.
 
I have optimized it more: now I am using only tokens and no antitokens
in the balancers of the scalable counting networks, so I am only
supporting increment, not decrement, and you have to be smart to make
that work correctly, which is what I have done. Look at the
AMInterfacedObject.pas file inside my zip file: you will notice that it
uses the counting_network_next_value() function, which increments the
scalable counting network by 1. The _AddRef() method is simple: it
increments by 1 to add a reference to the object. But look inside the
_Release() method: it calls counting_network_next_value() three times,
and my invention is calling counting_network_next_value(cn1) first
inside the _Release() method to make my scalable algorithm work. So just
debug it more and you will notice that my scalable algorithm is smart
and working correctly; I have debugged it and I think it is working
correctly.
 
I have to prove my scalable reference counting algorithm, as with a
mathematical proof, so I will use logic to prove it, as in PhD papers:
 
You will find the code of my scalable reference counting inside
AMInterfacedObject.pas inside the zip file. If you look inside the code
there are two methods, _AddRef() and _Release(); I am using two scalable
counting networks, which you can think of as counters. In the _AddRef()
method I am executing the following:
 
v1 := counting_network_next_value(cn1);
 
cn1 is the scalable counting network, and counting_network_next_value()
is a function that increments the scalable counting network by 1.
 
In the _Release() method i am executing the following:
 
v2 := counting_network_next_value(cn1);
v1 := counting_network_next_value(cn2);
v1 := counting_network_next_value(cn2);
 
So my scalable algorithm is "smart", because the logical proof is that
I am calling counting_network_next_value(cn1) first in the above, and
this allows my scalable algorithm to work correctly: we are advancing
cn1 by 1 to obtain the value of cn1, and the other threads are also
advancing cn1 by one inside _Release(); it is the last thread to advance
cn1 by 1 that makes the reference counter equal to 0. The _AddRef()
method is the same and is easy to reason about, so this scalable
algorithm is working. Please look more carefully at my algorithm and you
will notice that it is working as I have just logically argued.
 
Please read also the following to understand better:
 
Here are the parameters of the constructor:
 
First parameter: the width of the scalable counting networks that
permits my scalable reference counting algorithm to be scalable. This
parameter must be 1 to 31; it is now at 4, which is the power, so the
width is equal to 2^4 = 16, and you have to pass this counting network
width as the n of the following formula:
 
(n*log(n)*(1+log(n)))/4
 
The log in the formula is base 2.
 
This formula gives the number of gates in the scalable counting
networks; if we replace n by 16, it equals 80 gates, which means the
scalable counting networks scale to 80 cores, and beyond 80 cores you
will start to have contention.
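 
As a quick check of that formula (a sketch; w here is the width power
described above, so n = 2^w):
 
#include <cstdio>

int main() {
    for (int w = 1; w <= 8; ++w) {
        long n = 1L << w;                 // network width, n = 2^w
        long gates = n * w * (1 + w) / 4; // log2(n) == w in the formula
        std::printf("w=%d n=%ld gates=%ld\n", w, n, gates);
        // w=4 gives n=16 and 80 gates, matching the description above.
    }
}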
 
Second parameter: a boolean that tells whether reference counting is
used; it defaults to true, which means that reference counting is used.
 
About the weak references support: the Weak<T> type supports assignment
from and to T and makes it usable as if you had a variable of T. It has
the IsAlive property to check if the reference is still valid and not a
dangling pointer. The Target property can be used if you want access to
members of the reference.
 
Note the use of the IsAlive property on our weak reference: it tells us
whether the referenced object is still available, and provides a safe
way to get a concrete reference to the parent.
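 
A rough C++ analogue of that IsAlive/Target usage (an assumption for
illustration only: std::shared_ptr/std::weak_ptr stand in for my
library's reference counting; the hooking machinery is not modeled):
 
#include <iostream>
#include <memory>

struct Parent { int id = 42; };

int main() {
    std::weak_ptr<Parent> weak;
    {
        auto parent = std::make_shared<Parent>();
        weak = parent;
        if (auto p = weak.lock())        // "IsAlive" check and "Target" access
            std::cout << "alive: " << p->id << std::endl;
    }                                    // parent destroyed here
    std::cout << "still alive? " << !weak.expired() << std::endl;  // prints 0
}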
 
I have ported efficient weak references support to Linux by implementing
efficient code hooking; look at the DSharp.Core.Detour.pas file for
Linux that I have written to see how I have implemented it in the Linux
library. Please look at the example.dpr and test.pas demos to see how
weak references work, etc.
 
Call _AddRef() and _Release() methods to manually increment or decrement
the number of references to the object.
 
- Platform: Windows and Linux(x86)
 
Language: FPC Pascal v3.1.x+ / Delphi 2007+:
 
http://www.freepascal.org/
 
Required FPC switches: -O3 -Sd
 
-Sd for Delphi mode.
 
Required Delphi switches: -$H+ -DDelphi
 
For Delphi XE versions and Delphi Tokyo use the -DXE switch
 
The defines options inside defines.inc are:
 
{$DEFINE CPU32} for 32 bit systems
 
{$DEFINE CPU64} for 64 bit systems
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Apr 28 03:26PM -0400

Hello,
 
 
I correct a typo, please read again..
 
My new Scalable reference counting with efficient support for weak
references version 1.1 is here..
 
I have enhanced my scalable algorithm and now it is much more powerful:
my scalable algorithm implementation now also works as a "scalable"
counter that supports both "increment" and "decrement" using two
scalable counting networks. Please take a look at my new scalable
algorithm implementation inside the source code.
 
You can download my new scalable reference counting with efficient
support for weak references version 1.1 from:
 
https://sites.google.com/site/aminer68/scalable-reference-counting-with-efficient-support-for-weak-references
 
Description:
 
This is my scalable reference counting with efficient support for weak
references. Since problems that cannot be solved without weak references
are rare, this library scales very well. The scalable reference counting
is implemented using scalable counting networks that completely
eliminate false sharing, so it is fully scalable on multicore and
manycore processors, and the scalable algorithm is optimized. The
library works on both Windows and Linux (x86), and it is easy to port to
Mac OS X.
 
I have modified my scalable algorithm a little bit: as you will notice,
I am not using decrement with support for antitokens in the balancers of
the scalable counting networks, I am only using an "increment". Please
look at my new scalable algorithm inside the zip file; I think it is
working correctly. Also notice that the returned value of the _Release()
method is only valid if it is equal to 0.
 
I have optimized it more: now I am using only tokens and no antitokens
in the balancers of the scalable counting networks, so I am only
supporting increment, not decrement, and you have to be smart to make
that work correctly, which is what I have done. Look at the
AMInterfacedObject.pas file inside my zip file: you will notice that it
uses the counting_network_next_value() function, which increments the
scalable counting network by 1. The _AddRef() method is simple: it
increments by 1 to add a reference to the object. But look inside the
_Release() method: it calls counting_network_next_value() three times,
and my invention is calling counting_network_next_value(cn1) first
inside the _Release() method to make my scalable algorithm work. So just
debug it more and you will notice that my scalable algorithm is smart
and working correctly; I have debugged it and I think it is working
correctly.
 
I have to prove my scalable reference counting algorithm, as with a
mathematical proof, so I will use logic to prove it, as in PhD papers:
 
You will find the code of my scalable reference counting inside
AMInterfacedObject.pas inside the zip file. If you look inside the code
there are two methods, _AddRef() and _Release(); I am using two scalable
counting networks, which you can think of as counters. In the _AddRef()
method I am executing the following:
 
v1 := counting_network_next_value(cn1);
 
cn1 is the scalable counting network, and counting_network_next_value()
is a function that increments the scalable counting network by 1.
 
In the _Release() method i am executing the following:
 
v2 := counting_network_next_value(cn1);
v1 := counting_network_next_value(cn2);
v1 := counting_network_next_value(cn2);
 
So my scalable algorithm is "smart", because the logical proof is that
I am calling counting_network_next_value(cn1) first in the above, and
this allows my scalable algorithm to work correctly: we are advancing
cn1 by 1 to obtain the value of cn1, and the other threads are also
advancing cn1 by one inside _Release(); it is the last thread to advance
cn1 by 1 that makes the reference counter equal to 0. The _AddRef()
method is the same and is easy to reason about, so this scalable
algorithm is working. Please look more carefully at my algorithm and you
will notice that it is working as I have just logically argued.
 
Please read also the following to understand better:
 
Here are the parameters of the constructor:
 
First parameter: the width of the scalable counting networks that
permits my scalable reference counting algorithm to be scalable. This
parameter must be 1 to 31; it is now at 4, which is the power, so the
width is equal to 2^4 = 16, and you have to pass this counting network
width as the n of the following formula:
 
(n*log(n)*(1+log(n)))/4
 
The log in the formula is base 2.
 
This formula gives the number of gates in the scalable counting
networks; if we replace n by 16, it equals 80 gates, which means the
scalable counting networks scale to 80 cores, and beyond 80 cores you
will start to have contention.
 
Second parameter: a boolean that tells whether reference counting is
used; it defaults to true, which means that reference counting is used.
 
About the weak references support: the Weak<T> type supports assignment
from and to T and makes it usable as if you had a variable of T. It has
the IsAlive property to check if the reference is still valid and not a
dangling pointer. The Target property can be used if you want access to
members of the reference.
 
Note the use of the IsAlive property on our weak reference: it tells us
whether the referenced object is still available, and provides a safe
way to get a concrete reference to the parent.
 
I have ported efficient weak references support to Linux by implementing
efficient code hooking; look at the DSharp.Core.Detour.pas file for
Linux that I have written to see how I have implemented it in the Linux
library. Please look at the example.dpr and test.pas demos to see how
weak references work, etc.
 
Call _AddRef() and _Release() methods to manually increment or decrement
the number of references to the object.
 
- Platform: Windows and Linux(x86)
 
Language: FPC Pascal v3.1.x+ / Delphi 2007+:
 
http://www.freepascal.org/
 
Required FPC switches: -O3 -Sd
 
-Sd for Delphi mode.
 
Required Delphi switches: -$H+ -DDelphi
 
For Delphi XE versions and Delphi Tokyo use the -DXE switch
 
The defines options inside defines.inc are:
 
{$DEFINE CPU32} for 32 bit systems
 
{$DEFINE CPU64} for 64 bit systems
 
 
 
Thank you,
Amine Moulay Ramdane.
Pvr Pasupuleti <pvr.ram34@gmail.com>: Apr 28 05:43AM -0700

https://unacademy.com/lesson/constructor/FP6M4RUR
Jorgen Grahn <grahn+nntp@snipabacken.se>: Apr 28 06:17AM

On Wed, 2018-04-25, Alf P. Steinbach wrote:
>>> inventions
 
>> ROTL!
 
> Rolling on the laughter? What?
 
Rotating left, surely?
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
