Monday, February 18, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 3 topics

"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 17 08:23PM -0800

There is a sort of funny discussion going on over on reddit about my
read/write mutex. Some people think it has UB wrt an integer overflow or
underflow. I boiled it down to the following simple program:
____________________________________
#include <iostream>
#include <climits>


// mimics std::atomic::fetch_add: returns the value held *before* the add
long fetch_add(long& gcount, long addend)
{
    long lcount = gcount;
    gcount += addend;
    return lcount;
}


int main()
{
    long m_count = LONG_MAX;
    std::cout << "m_count = " << m_count << "\n";

    // simulate three concurrent readers
    fetch_add(m_count, -3);
    std::cout << "m_count = " << m_count << ", 3 readers\n";

    // now m_count = LONG_MAX - 3

    // simulate a writer.
    long count = fetch_add(m_count, -LONG_MAX);
    std::cout << "m_count = " << m_count << ", 3 readers in write mode\n";

    if (count < LONG_MAX)
    {
        long readers = LONG_MAX - count;
        std::cout << "count = " << count << "\n";
        std::cout << "readers = " << readers << "\n";
    }

    return 0;
}
____________________________________
 
 
Is there any UB in there? Some on reddit seem to think so. They are most
likely trolling me. Little shi%'s.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 17 10:14PM -0800

On 2/17/2019 8:23 PM, Chris M. Thomasson wrote:
> ____________________________________
 
> Is there any UB in there? Some on reddit seem to think so. They are most
> likely trolling me. Little shi%'s.
 
Fwiw, this is in regard to the ct_rwmutex::lock_shared function here:
 
https://pastebin.com/raw/xCBHY9qd
 
Actually, the only way ct_rwmutex::m_count can get into UB land is if
the number of concurrent reader threads exceeds LONG_MAX. Well, that is
a heck of a lot of threads. Sorry, but still venting from some reddit
trolls. I think they know better: Almost sorry for posting it there in
the first place. Not sorry for posting it here, and here:
 
https://groups.google.com/d/topic/lock-free/3sO-lnwtsi8/discussion
 
____________________________
#define RWMUTEX_COUNT_MAX LONG_MAX

struct ct_rwmutex
{
    // shared state
    std::atomic<long> m_wrstate;
    std::atomic<long> m_count;
    std::atomic<long> m_rdwake;

    ct_slow_semaphore m_rdwset;
    ct_slow_semaphore m_wrwset;
    ct_fast_mutex m_wrlock;


    ct_rwmutex() :
        m_wrstate(1),
        m_count(RWMUTEX_COUNT_MAX),
        m_rdwake(0),
        m_rdwset(0),
        m_wrwset(0) {
    }


    // READ, pretty slim...
    void lock_shared()
    {
        if (m_count.fetch_add(-1, std::memory_order_acquire) < 1)
        {
            m_rdwset.dec();
        }
    }

    void unlock_shared()
    {
        if (m_count.fetch_add(1, std::memory_order_release) < 0)
        {
            if (m_rdwake.fetch_add(-1, std::memory_order_acq_rel) == 1)
            {
                m_wrwset.inc();
            }
        }
    }


    // WRITE, more hefty
    void lock()
    {
        m_wrlock.lock();

        long count = m_count.fetch_add(-RWMUTEX_COUNT_MAX,
                                       std::memory_order_acquire);

        if (count < RWMUTEX_COUNT_MAX)
        {
            long rdwake = m_rdwake.fetch_add(RWMUTEX_COUNT_MAX - count,
                                             std::memory_order_acquire);

            if (rdwake + RWMUTEX_COUNT_MAX - count)
            {
                m_wrwset.dec();
            }
        }
    }

    // write unlock
    void unlock()
    {
        long count = m_count.fetch_add(RWMUTEX_COUNT_MAX,
                                       std::memory_order_release);

        if (count < 0)
        {
            m_rdwset.add(-count);
        }

        m_wrlock.unlock();
    }
};
____________________________
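
To see why m_count stays in range, here is a minimal sketch of the
counter arithmetic (assuming any reader count R with R < LONG_MAX;
plain longs stand in for the atomics):

____________________________
#include <cassert>
#include <climits>

int main()
{
    const long R = 3; // any reader count with R < LONG_MAX

    long m_count = LONG_MAX;         // initial state
    m_count -= R;                    // R readers: fetch_add(-1) each
    assert(m_count == LONG_MAX - R); // no overflow possible here

    m_count -= LONG_MAX;             // writer: fetch_add(-LONG_MAX)
    assert(m_count == -R);           // still >= -LONG_MAX, no underflow

    m_count += LONG_MAX;             // writer unlock
    m_count += R;                    // readers unlock
    assert(m_count == LONG_MAX);     // back to the initial state
    return 0;
}
____________________________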
 
 
 
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 17 10:17PM -0800

On 2/17/2019 10:14 PM, Chris M. Thomasson wrote:
>> There is a sort of funny discussion going on over on reddit about my
>> read/write mutex. Some people think it has UB wrt an integer overflow
>> or underflow. I boiled it down to the following simple program:
[...]
> trolls. I think they know better: Almost sorry for posting it there in
> the first place. Not sorry for posting it here, and here:
 
> https://groups.google.com/d/topic/lock-free/3sO-lnwtsi8/discussion
[...]
 
ARGH! Wrong link, but an interesting discussion nonetheless; it happens
to be related to read/write access patterns. Strange mistake.
 
Anyway, here is the correct link:
 
https://groups.google.com/d/topic/lock-free/zzZX4fvtG04/discussion
 
Sorry.
David Brown <david.brown@hesbynett.no>: Feb 18 08:52AM +0100

On 18/02/2019 05:23, Chris M. Thomasson wrote:
 
>     // now m_count = LONG_MAX - 3
 
>     // simulate a writer.
>     long count = fetch_add(m_count, -LONG_MAX);
 
So count is LONG_MAX - 3, while m_count is now -3.
 
 
>     if (count < LONG_MAX)
>     {
>         long readers = LONG_MAX - count;
 
Readers is 3.
 
I don't see any overflows here.
 
If someone thought that fetch_add returned the result of the addition,
then they'd think count is -3, and thus readers would try to be
LONG_MAX + 3. Could that be the mistake they are making?
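
A few lines confirm the semantics - std::atomic's fetch_add returns the
value held before the addition:

#include <atomic>
#include <climits>
#include <iostream>

int main()
{
    std::atomic<long> m_count{LONG_MAX};
    long prior = m_count.fetch_add(-3);           // returns the old value
    std::cout << "prior   = " << prior << "\n";   // LONG_MAX
    std::cout << "m_count = " << m_count << "\n"; // LONG_MAX - 3
    return 0;
}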
 
 
(I haven't looked at your real code, just this post.)
 
Juha Nieminen <nospam@thanks.invalid>: Feb 18 07:58AM

> There is a sort of funny discussion going on over on reddit about my
> read/write mutex. Some people think it has UB wrt an integer overflow or
> underflow.
 
I doubt there exists a single piece of hardware capable of compiling and
running a program in use today that does not use 2's complement arithmetic
and where overflow and underflow aren't handled simply by doing all arithmetic
modulo the size of the integer types.
 
I don't remember if it's undefined behavior according to the standard,
but who cares? It's going to work as arithmetic modulo the maximum size
anyway, for all practical purposes. Nobody is going to use the library
in an architecture where that's not the case (even if such an architecture
even exists).
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
David Brown <david.brown@hesbynett.no>: Feb 18 09:45AM +0100

On 18/02/2019 08:58, Juha Nieminen wrote:
> running a program in use today that does not use 2's complement arithmetic
> and where overflow and underflow aren't handled simply by doing all arithmetic
> modulo the size of the integer types.
 
Well, you'd be wrong. /Very/ wrong.
 
There are no modern systems that don't use 2's complement arithmetic -
that half is correct. But modern compilers can, and do, assume that
your signed arithmetic never overflows.
 
> anyway, for all practical purposes. Nobody is going to use the library
> in an architecture where that's not the case (even if such an architecture
> even exists).
 
That is extraordinarily bad advice. Do not let your signed integers
overflow - it is /always/ wrong. (Regardless of the behaviour of the
compiler, language, and hardware, overflow - signed or unsigned - is
almost always a bug in the software with meaningless results.)
 
Compilers know that signed integer arithmetic never overflows. They
optimise using that knowledge. If you have written code that relies on
the behaviour when overflow /does/ occur, you are lying to your compiler
- and bad things will happen. Compilers don't cry - programmers do.
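
A classic illustration (a sketch - gcc and clang at -O2 will typically
fold this comparison to a constant):

// With signed int, the compiler may assume x + 1 never overflows,
// so this whole function can be reduced to "return true".
bool always_true(int x)
{
    return x + 1 > x; // UB if x == INT_MAX - assumed never to happen
}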
 
If Chris' code has signed integer overflow, then it is undefined
behaviour and he needs to find it and fix it - not ignore it and hope no
one notices. (In the code he posted, I could not see any problems.)
 
gcc's "-fsanitize=undefined" is your friend here, as is "-Wstrict-overflow".
Paavo Helde <myfirstname@osa.pri.ee>: Feb 18 12:11PM +0200

On 18.02.2019 9:58, Juha Nieminen wrote:
> anyway, for all practical purposes. Nobody is going to use the library
> in an architecture where that's not the case (even if such an architecture
> even exists).
 
Most probably you have such an architecture/compiler right under your
fingertips:
 
> cat ->test1.cpp
#include <limits>
int main() {
    int i = std::numeric_limits<int>::max();
    i = i + 1;
}
> g++ -ftrapv test1.cpp
> ./a.out
Aborted (core dumped)
"Öö Tiib" <ootiib@hot.ee>: Feb 18 02:46AM -0800

On Monday, 18 February 2019 10:45:18 UTC+2, David Brown wrote:
 
> There are no modern systems that don't use 2's complement arithmetic -
> that half is correct. But modern compilers can, and do, assume that
> your signed arithmetic never overflows.
 
Since compiler writers are people with an extremely benchmark-oriented
head shape, they sometimes make that assumption even when you have
explicitly used implementation-specific compiler flags to define the
behavior of signed arithmetic overflow. They of course fix such issues
when reported, but some keep slipping in, with both clang and gcc.

Without -ftrapv or -fwrapv all bets are off, and one can get the full
set of nasal demons on signed integer overflow with those compilers.
Bart <bc@freeuk.com>: Feb 18 12:52PM

On 18/02/2019 10:11, Paavo Helde wrote:
> > g++ -ftrapv test1.cpp
> > ./a.out
> Aborted (core dumped)
 
 
But you gave it the -ftrapv option, which without looking it up I assume
means trap-on-overflow.

So you told it to abort in this situation.
 
This can be done whatever the language, compiler, or hardware.
 
Presumably you can also invent an option, if it is not already the
default behaviour, to ignore such an overflow.
David Brown <david.brown@hesbynett.no>: Feb 18 02:22PM +0100

On 18/02/2019 11:46, Öö Tiib wrote:
> used implementation-specific compiler flags to define behavior on case
> of signed arithmetic overflow. They of course do fix such issues when
> reported but some keep slipping in both with clang and gcc.
 
Nonsense. That is popular propaganda spread by people who don't
understand how C (or C++) works, or don't want it to work the way it
does. People writing compilers do so for people who use compilers.
Accusations that compiler writers are only interested in maximum
benchmark speeds are absurd. gcc, for example, tests compilation on the
entire Debian repository - vast quantities of code, much of which is
ancient.
 
What compiler writers do not do, however, is limit their handling of
well-written code because some people write poor code. Instead, they
provide flags to support those that have code that relies on particular
handling of undefined behaviours, or code that "worked fine on my old
compiler" - flags like "-fwrapv" and "-fno-strict-aliasing". Then
people who know how integer arithmetic works in C and C++ can get
faster code, and people who can't get it right (or who have to use
broken code from others) have an escape route.
 
 
> Without -ftrapv or -fwrapv all bets are off and one can get full set of
> nasal demons on signed integer overflow on those compilers.
 
Of course all bets are off if you have signed integer overflow - this
should be well known to anyone who has learned C or C++ programming. It is
not the compiler that launches nasal daemons - it is the programmer,
when they write code that does not make sense in the language.
 
"-ftrapv", by the way, has been considered a poor and unreliable
solution for a good many years - "-fsanitize=signed-integer-overflow" is
what you want.
 
And "-fwrapv" turns invalid code with undefined behaviour into valid
code with defined but wrong behaviour, while slowing down code that was
correct all along. (Unless you think that it makes sense to add an
apple to a pile of 2147483647 apples and get -2147483648 apples - in
which case it is exactly what you want.)
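
For illustration, under "-fwrapv" that apple-pile arithmetic is exactly
what you get (following the example style upthread):

> cat ->wrap.cpp
#include <climits>
#include <iostream>
int main() {
    int i = INT_MAX;
    i = i + 1; // defined to wrap under -fwrapv
    std::cout << i << "\n";
}
> g++ -fwrapv wrap.cpp
> ./a.out
-2147483648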
 
Write decent code, that does what you actually want, and signed integer
overflow behaviour doesn't matter. And if you need help spotting
problems (as we all do), use the tools the compiler gives you rather
than blaming the compiler for your mistakes.
Paavo Helde <myfirstname@osa.pri.ee>: Feb 18 03:30PM +0200

On 18.02.2019 14:52, Bart wrote:
 
> This can be done whatever the language, compiler, or hardware.
 
> Presumably you can also invent an option, if it is not already the
> default behaviour, to ignore such an overflow.
 
The point is that this option is fully standards-conformant (a core dump
upon UB is one of the best manifestations of UB), and as a library writer
one would have a hard time explaining to his library users that they
cannot compile his library with a standards-conforming compiler, or have
to tune down its bug detection capabilities.
 
Note that this was in response to the claim "Nobody is going to use the
library in an architecture where [signed int wrapover] is not the case".
 
Also, signed int overflow is not such an innocent beast; I have found some
real bugs with -ftrapv and would not like to be told that I cannot use it.
David Brown <david.brown@hesbynett.no>: Feb 18 02:44PM +0100

On 18/02/2019 14:30, Paavo Helde wrote:
> one would have a hard time explaining to his library users that they
> cannot compile his library with a standards-conforming compiler, or have
> to tune down its bug detection capabilities.
 
Yes.
 
"-fsanitize=signed-integer-overflow" will also cause similar outputs.
 
 
> library in an architecture where [signed int wrapover] is not the case".
 
> Also, signed int overflow is not such an innocent beast; I have found some
> real bugs with -ftrapv and would not like to be told that I cannot use it.
 
My understanding (and experience) is that "-ftrapv" is considered of
questionable worth, and I would not be surprised to see it removed from
gcc in the future. Use the sanitize option with modern gcc (or clang).
Ben Bacarisse <ben.usenet@bsb.me.uk>: Feb 18 02:34PM

"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>
writes:
 
> There is a sort of funny discussion going on over on reddit about my
> read/write mutex. Some people think it has UB wrt an integer overflow
> or underflow. I boiled it down to the following simple program:
 
You could boil it down even more, or at the very least say where the
people on Reddit think there is overflow.
 
I went to look and all I could find is a perfectly correct remark that
some other code (quite different to this boiled down code) might
overflow.
 
The main() you posted has no overflow, but any signed integer arithmetic
might overflow, depending on the operands.
 
<snip>
> Is there any UB in there? Some on reddit seem to think so. They are
> most likely trolling me. Little shi%'s.
 
Tell us where! I could not find your posted code on Reddit.
 
--
Ben.
Paavo Helde <myfirstname@osa.pri.ee>: Feb 18 05:15PM +0200

On 18.02.2019 15:44, David Brown wrote:
 
> My understanding (and experience) is that "-ftrapv" is considered of
> questionable worth, and I would not be surprised to see it removed from
> gcc in the future. Use the sanitize option with modern gcc (or clang).
 
Thanks, will look into it!
David Brown <david.brown@hesbynett.no>: Feb 18 05:03PM +0100

On 18/02/2019 16:15, Paavo Helde wrote:
>> questionable worth, and I would not be surprised to see it removed from
>> gcc in the future.  Use the sanitize option with modern gcc (or clang).
 
> Thanks, will look into it!
 
There are many sanitize options. Some of them can be quite detrimental
to performance, but run-time checking is useful during debugging and
bug-hunting.
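
For example, a typical debug build (a sketch; the address and undefined
sanitizers can be combined in one build, while the thread sanitizer must
be used on its own):

> g++ -g -fsanitize=address,undefined test1.cpp
> ./a.out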
"Öö Tiib" <ootiib@hot.ee>: Feb 18 08:50AM -0800

On Monday, 18 February 2019 15:22:21 UTC+2, David Brown wrote:
 
> Nonsense. That is popular propaganda spread by people who don't
> understand how C (or C++) works, or don't want it to work the way it
> does. People writing compilers do so for people who use compilers.
 
Nonsense. People writing compilers focus on what their employers
tell them to focus on. That seems to be mostly about generating shorter
and faster code quicker than competitors. The whole "Meltdown and
Spectre" problem is clearly because of such benchmark-orientation.
Both hardware makers and compiler writers kept optimizing until it
was "overly optimal".
 
> benchmark speeds are absurd. gcc, for example, tests compilation on the
> entire Debian repository - vast quantities of code, much of which is
> ancient.
 
Where I wrote "only"? Sure, they have to care of other things like to
keep backwards compatibility. Also if code of popular benchmark
contains UB then they have to avoid "optimizing" it, to keep the result
"correct".

 
> "-ftrapv", by the way, has been considered a poor and unreliable
> solution for a good many years - "-fsanitize=signed-integer-overflow" is
> what you want.
 
Why did it become "poor" and "unreliable"? What was the reasoning?
Is it OK to release software with "poor" and "unreliable" features?
When an incorrect answer is worse than no answer, normal people
want defects to crash in release builds too.
 
> correct all along. (Unless you think that it makes sense to add an
> apple to a pile of 2147483647 apples and get -2147483648 apples - in
> which case it is exactly what you want.)
 
All I want is that they stop "optimizing" it into nasal demons when I
did specify -ftrapv and the signed arithmetic that was used did
overflow.
 
> overflow behaviour doesn't matter. And if you need help spotting
> problems (as we all do), use the tools the compiler gives you rather
> than blaming the compiler for your mistakes.
 
I am certainly not omnipotent and so write and repair several bugs daily.
When I repair a defect, there is a decent likelihood that I introduce
another one. I know no one who is different. The only people who make
no errors are the ones who do nothing. However, I avoid releasing
anything with features that I would have to call "poor" and
"unreliable".
Bonita Montero <Bonita.Montero@gmail.com>: Feb 18 05:57PM +0100

> > g++ -ftrapv test1.cpp
> > ./a.out
> Aborted (core dumped)
 
Is it possible to mark certain variables to be excluded from this
behaviour?
And is it possible to direct the compiler to assume wrap-around
for individual variables, and to exclude the corresponding optimizations
in optimized code?
Paavo Helde <myfirstname@osa.pri.ee>: Feb 18 07:12PM +0200

On 18.02.2019 18:57, Bonita Montero wrote:
> And is it possible to direct the compiler to assume wrap-around
> for individual variables, and to exclude the corresponding optimizations
> in optimized code?
 
Sure, there is a special keyword for that: unsigned. It is not much use
for anything other than defining wrap-over integer types.
 
> cat test1.cpp
#include <limits>
#include <iostream>
int main() {
    int i = std::numeric_limits<int>::max();
    i = static_cast<unsigned int>(i) + 1;
    std::cout << i << "\n";
}
> g++ -ftrapv test1.cpp
> ./a.out
-2147483648
Bonita Montero <Bonita.Montero@gmail.com>: Feb 18 07:11PM +0100

>> for individual variables, and to exclude the corresponding optimizations
>> in optimized code?
 
> Sure, there is a special keyword for that: unsigned. ...
 
No, that's not what I imagined. I imagined something that
marks a variable to be excluded from the default behaviour set
by the compiler flag.
David Brown <david.brown@hesbynett.no>: Feb 18 07:56PM +0100

On 18/02/2019 17:50, Öö Tiib wrote:
> tell them to focus on. That seems to be mostly about generating shorter
> and faster code quicker than competitors. The whole "Meltdown and
> Spectre" problem is clearly because of such benchmark-orientation.
 
I am sorry, but you appear to be confused. The "Meltdown and Spectre"
stuff is a hardware problem, not a software problem. Compiler writers
have tried to find ways to work around the hardware flaw.
 
As for "what employers say", then yes, in /some/ cases that is what
compiler writers focus on. But you'll find that for a number of
important compilers - including the ones most targeted by such "they
only work on benchmark" claims - the people writing them are widely
dispersed with totally different kinds of employers. In particular, the
gcc developers fall into several categories:
 
1. Those working for chip manufacturers - Intel, AMD, ARM, etc. These
don't care if you use gcc or anything else, as long as you buy their
chips, so their concern is that you (the programmer) get the best from
the compiler and their chip. Benchmarks for the compiler don't matter -
support for the chip matters.
 
2. Those working for software companies like Red Hat, IBM, etc., that
provide tools and services to developers. They want programmers to be
happy with the tools - they don't care if you use a different compiler
instead.
 
3. Those working for big users, like Google and Facebook. They don't
care about benchmarks - they care about performance on their own software.
 
4. The independent and volunteer developers. They care about the
quality of their code, and making something worthwhile - they don't care
about benchmark performances.
 
I'm sure there are other categories that you can think of. I can't see
any significant number being benchmark oriented. People don't choose
compilers because of their benchmarks - they choose for features, target
support, static checking, language support, compatibility with existing
source code, etc. They expect a gradual progress towards faster code
with newer versions, but not more than that. And those that pick a
compiler for its speed, do so based on the speed for their own source
code, not for some benchmark.
 
 
Like all conspiracy theories, the best way to test it is to follow the
money. Who would profit from making compilers focused on benchmark
performance as the main goal, with a disregard for support for existing
C or C++ sources?
 
> Both hardware makers and compiler writers kept optimizing until it
> was "overly optimal".
 
That again is simply incorrect.
 
Developers - hardware or software - can make mistakes, and release a
design which later turns out to have unforeseen flaws. With software,
you can often find these early and fix them, but sometimes the flaws are
discovered quite late. Hardware flaws are harder to fix - but very easy
for amateurs to condemn once they are found.
 
> keeping backwards compatibility. Also, if the code of a popular benchmark
> contains UB then they have to avoid "optimizing" it, to keep the result
> "correct".
 
They sometimes have to make such adjustments, yes. Often that is
because they realise that not only do the benchmark writers make such
mistakes, but others do too - and that it can be helpful to treat such
code in the manner the programmer appeared to expect. But for most
undefined behaviour, it is hard or impossible to guess what the
programmer expected - that is the nature of undefined behaviour.
 
>> what you want.
 
> Why did it become "poor" and "unreliable"? What was the reasoning?
> Is it OK to release software with "poor" and "unreliable" features?
 
Many bugs have been found in the "-ftrapv" implementation - and in
particular, it does not trap in all cases. Personally, I think the flag
should be dropped in favour of the sanitizer, which is a more modern and
flexible alternative and which is actively maintained.
 
> When an incorrect answer is worse than no answer, normal people
> want defects to crash in release builds too.
 
"-ftrapv" could has always been slower than non-trapping code. People
usually aim to right correct code, and have that correct code run as
fast as reasonably possible. If you want software that is full of
run-time checks, you don't program in C or C++.
 
In C and C++, you can always manually add any checks you want. With
C++, you can make your own types that do checking in the manner that
suits your needs.
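
For instance, a minimal sketch of such a self-checking type (the name
and interface are invented for illustration; it traps on overflow
regardless of compiler flags, at the cost of a check per operation):

#include <limits>
#include <stdexcept>

struct checked_int
{
    int value;

    checked_int(int v = 0) : value(v) {}

    checked_int operator+(checked_int rhs) const
    {
        // pre-check so the signed addition itself can never overflow
        if (rhs.value > 0 && value > std::numeric_limits<int>::max() - rhs.value)
            throw std::overflow_error("checked_int: overflow");
        if (rhs.value < 0 && value < std::numeric_limits<int>::min() - rhs.value)
            throw std::overflow_error("checked_int: underflow");
        return checked_int(value + rhs.value);
    }
};

With that, checked_int(INT_MAX) + 1 throws std::overflow_error instead
of invoking undefined behaviour.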
 
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Feb 18 10:15AM -0500

https://www.fudzilla.com/news/pc-hardware/48123-more-than-70-percent-of-microsoft-patches-are-for-memory-problems
 
"More than 70 percent of Microsoft patches are for memory safety bugs.
 
"Speaking to the assembled throngs at an Israel Security conference,
a Microsoft engineer Matt Miller said that memory safety bugs happen
when software, accidentally or intentionally, accesses system memory
in a way that exceeds its allocated size and memory addresses.
 
"He said that over the the last 12 years, around 70 percent of all
Microsoft patches were fixes for memory safety bugs.
 
"The reason for this high percentage is because Windows has been writ-
ten mostly in C and C++, two 'memory-unsafe' programming languages...
 
"One slip-up in the developers' memory management code can lead to a
slew of memory safety errors that attackers can exploit with dangerous
and intrusive consequences --such as remote code execution or elevation
of privilege flaws..."
 
--
Rick C. Hodgin
"Mr. Man-wai Chang" <toylet.toylet@gmail.com>: Feb 19 12:28AM +0800

On 2/18/2019 11:15 PM, Rick C. Hodgin wrote:
 
> "He said that over the the last 12 years, around 70 percent of all
> Microsoft patches were fixes for memory safety bugs.
 
It's not the fault of C or of C compilers. It's the programmers
deliberately not auditing their memory use.
 
--
@~@ Remain silent! Drink, Blink, Stretch! Live long and prosper!!
/ v \ Simplicity is Beauty!
/( _ )\ May the Force and farces be with you!
^ ^ (x86_64 Ubuntu 9.10) Linux 2.6.39.3
Ben Bacarisse <ben.usenet@bsb.me.uk>: Feb 18 06:34PM

> in a way that exceeds its allocated size and memory addresses.
 
> "He said that over the the last 12 years, around 70 percent of all
> Microsoft patches were fixes for memory safety bugs.
 
I wonder if any explanation was offered as to why so many of these bugs
were found by people who probably had no access to the source code
rather than by people doing code reviews and testing.
 
If these are, in fact, the result of internal review, then the timing
just needs to be altered.
 
If these are being found internally before release, then it's bug fixing
and not really headline news.
 
Not that that lets C off the hook, but it does look like an exercise in
looking everywhere but on your own doorstep. (That's from the limited
quotes -- there's no link to what was actually said.)
 
--
Ben.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 17 10:21PM -0800

On 2/16/2019 11:50 PM, Melzzzzz wrote:
 
> For libstdc++ place to start is: https://gcc.gnu.org/onlinedocs/libstdc++/manual/appendix_contributing.html
> for libc++ : https://llvm.org/docs/Phabricator.html
 
Before that, I need to create another type of benchmark. One that
measures reads/writes per-second, per-thread wrt readers iterating
through a shared data-structure. Writers would push and pop from it. The
algorithm that allows for the most reads (primarily), or writes per
second per thread wins. Looking forward to seeing your results, if you
decide to run the new code. Should be up sometime in a day or two. :^)
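
Here is a minimal sketch of that kind of harness, with std::shared_mutex
standing in for the lock under test (thread counts and run time are
arbitrary; ct_rwmutex exposes the same lock/unlock/lock_shared/
unlock_shared interface, so it should slot straight in):

#include <atomic>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

int main()
{
    std::shared_mutex rwlock;               // the mutex under test
    std::vector<int> shared_data(1024, 1);  // the shared structure
    std::atomic<bool> stop{false};

    auto reader = [&](long& ops) {
        while (!stop.load(std::memory_order_relaxed)) {
            std::shared_lock<std::shared_mutex> guard(rwlock);
            long sum = 0;
            for (int v : shared_data) sum += v; // iterate the structure
            if (sum < 0) std::abort();          // keep the read observable
            ++ops;
        }
    };

    auto writer = [&](long& ops) {
        while (!stop.load(std::memory_order_relaxed)) {
            std::unique_lock<std::shared_mutex> guard(rwlock);
            shared_data.push_back(1);           // writers push...
            shared_data.pop_back();             // ...and pop
            ++ops;
        }
    };

    const unsigned R = 4, W = 2, SECS = 10;
    std::vector<long> counts(R + W, 0);
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < R; ++i) threads.emplace_back(reader, std::ref(counts[i]));
    for (unsigned i = 0; i < W; ++i) threads.emplace_back(writer, std::ref(counts[R + i]));

    std::this_thread::sleep_for(std::chrono::seconds(SECS));
    stop.store(true);
    for (auto& t : threads) t.join();

    for (unsigned i = 0; i < R + W; ++i)
        std::cout << (i < R ? "reader " : "writer ") << i << ": "
                  << counts[i] / SECS << " ops/sec\n";
    return 0;
}

The per-thread rates (reads primarily, then writes) are the figure of
merit, per the description above.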
mvorbrodt@gmail.com: Feb 17 11:34PM -0800

On Thursday, February 14, 2019 at 2:03:43 AM UTC-5, Chris M. Thomasson wrote:
> __________________________________
 
> I will explain the algorithm in further detail when I get some more
> time. Probably tomorrow.
 
 
 
 
Tested your latest code on a 2018 i5 MacBook Pro and 3 different compilers:
 
*************************************
* GCC -Ofast -march=native -lstdc++ *
*************************************
 
Testing: Chris M. Thomasson's Experimental Read/Write Mutex
 
msec = 46171
shared.m_state = 160000000
 
 
Fin!
 
******************************************
* Apple CLANG -Ofast -march=native -lc++ *
******************************************
 
Testing: Chris M. Thomasson's Experimental Read/Write Mutex
 
msec = 40027
shared.m_state = 160000000
 
 
Fin!
 
*****************************************
* LLVM CLANG -Ofast -march=native -lc++ *
*****************************************
 
Testing: Chris M. Thomasson's Experimental Read/Write Mutex
 
msec = 37518
shared.m_state = 160000000
 
 
Fin!
 
 
 
VS SHARED MUTEX:
 
Ran for 15 minutes, then I stopped it. It's not even close.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
