Sunday, November 25, 2018

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: Nov 24 04:30PM -0800

On 11/23/2018 9:25 PM, James Kuyper wrote:
> protection that they provide to those variables? I've reviewed every
> line of the standard containing the word "mutex" without seeing any hint
> of an answer to either of those questions - what did I miss?
 
They basically give lock acquire semantics and unlock release semantics,
just like the standard memory barrier functions in C++11 and C11.
 
Take a look at:
 
https://en.cppreference.com/w/cpp/atomic/memory_order
 
 
>> std::mutex basically implies acquire semantics. An unlock implies
>> release. A conforming C11 or C++11 compiler shall honor the semantics of
>> a mutex. Period. Anything else, is non-conforming. End of story.
 
Fwiw, the following program has no undefined behavior wrt threading:
 
Take a good look at ct_shared... Its members ct_shared::data_[0...2] do
not need any special atomics or membars whatsoever, because every access
to them is protected by a std::mutex. It is nice that this is all 100%
standard now:
_________________________________
#include <iostream>
#include <functional>
#include <mutex>
#include <thread>
#include <cassert>
 
 
#define THREADS 7
#define N 123456
 
 
// Shared Data
struct ct_shared
{
    std::mutex mtx;
    unsigned long data_0;
    unsigned long data_1;
    unsigned long data_2;
};


// A thread...
void ct_thread(ct_shared& shared)
{
    for (unsigned long i = 0; i < N; ++i)
    {
        shared.mtx.lock();
        // we are locked!
        shared.data_0 += i;
        shared.data_1 += i;
        shared.data_2 += i;
        shared.mtx.unlock();

        std::this_thread::yield();

        shared.mtx.lock();
        // we are locked!
        shared.data_0 -= i;
        shared.data_1 -= i;
        std::this_thread::yield(); // for fun...
        shared.data_2 -= i;
        shared.mtx.unlock();
    }
}


int main(void)
{
    ct_shared shared;

    // init
    shared.data_0 = 1;
    shared.data_1 = 2;
    shared.data_2 = 3;

    // launch...
    {
        std::thread threads[THREADS];

        // create
        for (unsigned long i = 0; i < THREADS; ++i)
        {
            threads[i] = std::thread(ct_thread, std::ref(shared));
        }

        std::cout << "processing...\n\n";
        std::cout.flush();

        // join...
        for (unsigned long i = 0; i < THREADS; ++i)
        {
            threads[i].join();
        }
    }

    std::cout << "shared.data_0 = " << shared.data_0 << "\n";
    std::cout << "shared.data_1 = " << shared.data_1 << "\n";
    std::cout << "shared.data_2 = " << shared.data_2 << "\n";

    assert(shared.data_0 == 1);
    assert(shared.data_1 == 2);
    assert(shared.data_2 == 3);

    return 0;
}
_________________________________
 
 
No undefined behavior. Why do you think that ct_shared::data_[0...2]
should be specially decorated?
"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: Nov 24 04:38PM -0800

On 11/24/2018 4:30 PM, Chris M. Thomasson wrote:
>> of an answer to either of those questions - what did I miss?
 
> They basically give them acquire, for lock, and release semantics for
> unlock, just like the standard memory barrier functions in C++11 and C11.
[...]
 
Fwiw, the C11 and C++11 standard wrt atomics and membars allows one to
build custom mutex logic. I love that this is in the language now. Fwiw,
Alexander Terekhov built a nice one over on comp.programming.threads
many years ago, before all of this was standard. Over two decades? I
will try to find the original post.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Nov 25 01:05AM

On Sat, 24 Nov 2018 02:13:15 +0100
> would some sort of official "C++11 memory model" paper.
 
> Otherwise "it worked when I tried it" is not worth the pixels it is
> written on.
 
Mutexes do, as you say, provide mutual exclusion to any piece of code
which is only accessible by locking the mutex in question. However,
they do more than that. They also synchronize the values held in
memory locations: in particular, locking a mutex is an acquire operation
and unlocking it is a release operation. It is guaranteed that the
values of any variables in the program as they existed no earlier than
at the time of the unlocking of a mutex by one thread will be visible
to any other thread which subsequently locks the same mutex. An
operation with acquire semantics is one which does not permit
subsequent memory operations to be advanced before it, and an operation
with release semantics is one which does not permit preceding memory
operations to be delayed past it, as regards the two threads
synchronizing on the same synchronization object.
 
Non-normatively, for mutexes this is offered by §1.10/5 of C++11:
 
"Note: For example, a call that acquires a mutex will perform
an acquire operation on the locations comprising the mutex.
Correspondingly, a call that releases the same mutex will perform a
release operation on those same locations. Informally, performing a
release operation on A forces prior side effects on other memory
locations to become visible to other threads that later perform a
consume or an acquire operation on A."
 
The normative (and more hard-to-read) requirement for mutexes is in
§30.4.1.2/11 and §30.4.1.2/25 ("synchronizes with") read with §1.10/11
and §1.10/12 ("happens before") and §1.10/13 ("visible side effect")
of C++11.
 
Posix mutexes have the same effect although this is much more
incoherently expressed. Section 4.12 of the SUS says (without further
explanation) in referring to mutex locking and unlocking operations
(amongst other similar operations) that "The following functions
synchronize memory with respect to other threads". In practice posix
mutexes behave identically to C/C++ mutexes.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Nov 25 03:57AM +0100

On 24.11.2018 21:51, Tim Rentsch wrote:
> the type required by the argument of the constructor". The type
> required by the argument of the constructor is (in this case)
> 'long'.
 
In the last sentence above you are yourself using the word "argument"
about a formal argument.
 
The actual argument is `0`. There is nothing "required by" the `0`.
 
The formal argument is `const long&`. There is a type required by the
`const long&`. In the given context, that type is `long`.
 
 
 
> The quoted paragraph is referring to the second of these, which
> is the type of an intermediate argument to the constructor, not
> the type of the parameter in the constructor's definition.
 
The last sentence I quoted from the C++17 standard, specified the result
type of the initial conversion sequence:
 
"If the user-defined conversion is specified by a constructor
(15.3.1), the initial standard conversion sequence converts the source
type to the type required by the argument of the constructor."
 
You get into infinite recursion, a totally meaningless statement worthy
of membership in the International Tautology Club, if you define the
result type of the initial conversion sequence as (the result type of
the initial conversion sequence). Yet that is what you strongly suggest
here by saying that the specification of the result type of the initial
conversion sequence refers to the second of the three possibilities you
list, where the second is "the standard conversion sequence result
type", which is evidently intended to mean the result type of the
initial conversion sequence.
 
In an ISO standard such infinite recursion would be a defect.
 
 
 
Cheers & hth.,
 
- Alf
David Brown <david.brown@hesbynett.no>: Nov 25 04:21PM +0100

On 24/11/2018 02:17, Chris M. Thomasson wrote:
> std::mutex basically implies acquire semantics. An unlock implies
> release. A conforming C11 or C++11 compiler shall honor the semantics of
> a mutex. Period. Anything else, is non-conforming. End of story.
 
Sorry, Chris, but your "proof by repeated assertion" is not good enough.
Your "Period. End of story." is just like sticking your fingers in
your ears and saying "La, la, la, I'm not listening". It shows that you
are unwilling to think about the situation or read the relevant standards.
 
I've read plenty of your posts here over the years, and you are
experienced with multi-threading and multi-processing. It surprises me
greatly to hear your attitude here. You know full well that in the world of
multi-threading, "it works when I tried it" is /not/ good enough. Code
can work as desired in millions of tests, and then fail at a critical
juncture in practice. You have to /know/ the code is correct. You have
to /know/ the standards guarantee particular behaviour regarding
ordering and synchronisation - you can't just guess because it looked
okay on a couple of tests, and it would be convenient to you if it worked.
David Brown <david.brown@hesbynett.no>: Nov 25 04:30PM +0100

On 24/11/2018 06:42, Alf P. Steinbach wrote:
>> line of the standard containing the word "mutex" without seeing any hint
>> of an answer to either of those questions - what did I miss?
 
> Mutexes are used to provide exclusive access to variables.
 
Mutexes are used to provide exclusive access to a lock. That is all.
 
> It's up to the programmer to establish the guard relationship.
 
Exactly.
 
And that means using appropriate synchronisation operations and atomic
operations.
 
Melzzzzz <Melzzzzz@zzzzz.com>: Nov 25 03:44PM


> Exactly.
 
> And that means using appropriate synchronisation operations and atomic
> operations.
 
So mutexes are not appropriate synchronisation?
 
--
press any key to continue or any other to quit...
David Brown <david.brown@hesbynett.no>: Nov 25 05:01PM +0100

On 25/11/2018 16:44, Melzzzzz wrote:
 
>> And that means using appropriate synchronisation operations and atomic
>> operations.
 
> So mutexes are not appropriate synchronisation?
 
Mutexes are certainly appropriate synchronisation. What is wrong is to
assume that just because you have locked a mutex, then all normal
(non-atomic, non-volatile) accesses that are inside the locked section
in the source code, are executed entirely within the locked section in
practice.
 
And it is also wrong to think that just because you sometimes access a
variable within a locked section, the variable is not accessed when you
don't have a lock.
 
Remember the original code under discussion here:
 
int res = trylock(&mutex);
if (res == 0)
    ++acquires_count;
 
There is /nothing/ in that to suggest that "acquires_count" will not be
accessed if the lock is not acquired.
 
Use atomic accesses, or volatile accesses, as appropriate.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 25 05:17PM +0100

> You can't get more efficient then longjmp().
 
LOL!
Melzzzzz <Melzzzzz@zzzzz.com>: Nov 25 04:28PM


> There is /nothing/ in that to suggest that "acquires_count" will not be
> accessed if the lock is not acquired.
 
> Use atomic accesses, or volatile accesses, as appropriate.
 
Hm, I always thought that volatile is useless for multithreading...
If you use atomics you don't need mutexes.
So what's the purpose of mutexes then?
 
 
--
press any key to continue or any other to quit...
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Nov 25 05:29PM

On Sun, 25 Nov 2018 16:28:28 GMT
 
> Hm, I always thought that volatile is useless for multithreading...
> If you use atomics you don't need mutexes.
> So what's the purpose of mutexes then?
 
This thread is getting horribly confused.
 
Volatile is useless for multithreading. In C and C++ (as opposed to
Java and C#), volatile does not synchronize. If you need to
synchronize, use a mutex, an atomic variable or a fence.
 
Chris Thomasson can speak for himself, but it seems clear to me that in
the example under discussion and his subsequent example (his posting of
24 Nov 2018 16:30:43 -0800), he was taking it as a given that every read
or write access to acquire_count and to his data_0, data_1 and data_2
variables was (as written by the programmer) within a locking of the
same mutex. That is the only reasonable explanation of his postings,
and is how I read them. It also seems clear enough to me that that was
the case also for the gcc bug posting to which we were referred (the
reference to dropped increments was not to do with the fact that
acquires_count is not an atomic variable, but to the fact that the
compiler was reordering access to it outside the mutex for optimization
purposes in a way forbidden by the C++ standard, although arguably not
by posix).
 
In such a case, using an atomic is pointless. It just results in a
doubling up of fences or whatever other synchronization primitives the
implementation uses.
 
Sure, if not all accesses to a protected variable are within a mutex,
then it needs to be atomic. But if that is the case there is probably
something wrong with the design. You should not design code which
requires such a doubling up of synchronization approaches, and I cannot
immediately visualize a case where that would be sensible.
David Brown <david.brown@hesbynett.no>: Nov 25 09:34PM +0100

On 25/11/2018 01:30, Chris M. Thomasson wrote:
> unlock, just like the standard memory barrier functions in C++11 and C11.
 
> Take a look at:
 
> https://en.cppreference.com/w/cpp/atomic/memory_order
 
Almost everything here describes synchronisation and ordering amongst
/atomic/ accesses. Fences do affect non-atomic accesses, but you need
to be much more careful about how the non-atomic variables are handled.
In particular, there is nothing about fences (or mutexes) that stops
other accesses to the non-atomic variables. And in the case of the
original example here, the code after the "trylock" call runs whether
the lock is taken or not - and if it is not taken, there is no fence.
 
>             shared.data_1 += i;
>             shared.data_2 += i;
>         shared.mtx.unlock();
 
Note that this is /totally/ different from the original example. Here,
you have a lock - in the earlier example, you might or might not have
the lock. The problem situation earlier came from the possibility of
having a write operation even when the lock was not taken.
 
 
> _________________________________
 
> No undefined behavior. Why do you think that ct_shared::data_[0...2]
> should be specially decorated?
 
The ordering enforced by the fences in the mutex acquire and release
operations affects the /accesses/ to the variables - it does not lock or
protect the variables themselves. This is important to understand,
especially when considering non-atomic non-volatile variables that can
have other accesses generated by the compiler optimisations.
 
Certainly the C11 and C++11 standards have made this sort of thing
clearer and easier. But do not make the mistake of thinking it has
suddenly become easy - you still need to think long and hard about
things, especially if you are using data that is not atomic.
Vir Campestris <vir.campestris@invalid.invalid>: Nov 25 09:08PM

On 25/11/2018 16:01, David Brown wrote:
> Use atomic accesses, or volatile accesses, as appropriate.
 
Volatile is no longer appropriate except when accessing hardware registers.
 
It Just Doesn't Work on all architectures when you've got
multiprocessors and cacheing.
 
Andy
Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 25 04:21PM -0500

Chris Vine wrote:
> something wrong with the design. You should not design code which
> requires such a doubling up of synchronization approaches, and I cannot
> immediately visualize a case where that would be sensible.
 
I just found a kind-of-resolution of this dispute by Ian Lance Taylor,
with a good explanation:
 
https://www.airs.com/blog/archives/79
 
Essentially, Ian believes that "For this code to be correct in standard
C, the variable needs to be marked as volatile, or it needs to use an
explicit memory barrier (which requires compiler specific magic–in the
case of gcc, a volatile asm with an explicit memory clobber)."
 
Ian did eventually change the compiler, mainly, it seems, to please
[Linux] kernel people. However, what's interesting is that even Linus,
who essentially blasted the GCC team in his usual arrogant manner (see
https://lkml.org/lkml/2007/10/25/186), does not seem to believe the
optimization is illegal as far as the specs are concerned; instead he
lambasts the gcc team for being "far enough removed from "real life"
that they have a tendency to talk in terms of "this is what the spec
says" rather than "this is a problem". Essentially his position (as
quite often before) is: "this should be fixed because I don't like it"
-- which seems to be slightly different from "this should be fixed
because it is against the specs (in this case, ISO C and POSIX)".
 
Whether we like it or not, POSIX says nothing about memory barriers or
fences that mutex locking should execute or any "variables protected by
a mutex".
 
Even the (C++11) standard is not definitive: in 1.10-5 it says "a call
that acquires a mutex will perform an acquire operation on the locations
comprising the mutex"; but it never defines exactly which locations a
mutex "comprises".
 
Then, 1.10-8 says:
 
The least requirements on a conforming implementation are:
— Access to volatile objects are evaluated strictly according to the
rules of the abstract machine.
...
These collectively are referred to as the observable behavior of the
program.
 
-- which seems to assume that access to non-volatile objects may be
reordered.
 
But then,
 
1.10-5 ...A synchronization operation without an associated memory
location is a fence and can be either
an acquire fence, a release fence, or both an acquire and release fence. ...
 
-- that is, mutex locking is a fence in C++11; however, the fence is
later defined in terms of hardware memory ordering (which is different
from a compiler fence, which essentially prevents the compiler from
reading a variable too early or writing it too late).
 
But, eventually, it seems (again, inferred from the definition of
atomic_signal_fence as a subset of atomic_thread_fence, rather than
stated explicitly) that any fence should always inhibit memory access
reordering by the compiler.
 
Therefore it seems that, with regard to a C++ mutex, the compiler fences
are always to be executed by mutex locking (for all variables -- which
is clearly a big impediment to optimization), and hence the
corresponding optimization would be illegal for a C++11 standard mutex.
 
But, to repeat myself, POSIX has no such language, so no assumption
should be made about compiler memory access reordering across POSIX
mutexes; instead, the corresponding gcc intrinsics or gcc asm primitives
should be used.
scott@slp53.sl.home (Scott Lurndal): Nov 25 09:47PM

>>> of an answer to either of those questions - what did I miss?
 
>> Mutexes are used to provide exclusive access to variables.
 
>Mutexes are used to provide exclusive access to a lock. That is all.
 
More accurately, they provide exclusive access to one or more code
sequences.
David Brown <david.brown@hesbynett.no>: Nov 25 10:55PM +0100

On 25/11/2018 18:29, Chris Vine wrote:
>>> accessed if the lock is not acquired.
 
>>> Use atomic accesses, or volatile accesses, as appropriate.
 
>> Hm, I always thought that volatile is useless for multithreading...
 
Volatile accesses force an ordering compared to other volatiles, but
that order is not necessarily visible to other threads. They can still
be useful for two purposes in a multi-threaded environment.
 
One is if you have a single cpu - no SMP or multi-core hardware. In such
systems, different threads /will/ see the same order of volatile
accesses (even if memory and other bus masters perhaps do not see the
same order without additional fences, barriers, or synchronisation
instructions), and volatile accesses can be significantly cheaper than
atomic accesses.
 
The other is that volatile accesses can be used to ensure order compared
to other actions, from the viewpoint of the current thread. They can
also control optimisation, avoiding the kind of extra read and write
operation that has been a concern in this thread. And you can easily
force an access to a normal variable to be atomic using a pointer cast
(this is not guaranteed by the standards at the moment, but all
compilers allow it and I believe it is likely to be codified in the
upcoming C standards). As far as I know, you cannot use "*((_Atomic int
*) &x)" to force an atomic access to x, in the way you can force a
volatile access with "*((volatile int *) &x)". Even if it were possible,
volatile accesses can be cheaper than atomic accesses.
 
But as is noted below, volatile accesses do not synchronise with
multi-threading primitives or atomics (unless these are also volatile),
and their order is not guaranteed to match when viewed from different
threads.
 
 
> Volatile is useless for multithreading. In C and C++ (as opposed to
> Java and C#), volatile does not synchronize. If you need to
> synchronize, use a mutex, an atomic variable or a fence.
 
Agreed. (Implementation-specific methods are also possible, but clearly
they will not be portable.)
 
 
> In such a case, using an atomic is pointless. It just results in a
> doubling up of fences or whatever other synchronization primitives the
> implementation uses.
 
As I see it, the fences from the mutex lock (and presumably later
unlock) protect the accesses to data_0, data_1 and data_2 in his later
example. But the earlier example from the google group link was
different - the increment code was not necessarily in the context of
having the lock. The fence from the lock would prevent the accesses to
acquires_count from moving before the trylock call, but it would not
prevent reordering after that call. So in this case, additional effort
/is/ needed to ensure there are no unexpected effects.
This could be from other fences, atomic accesses, or volatile accesses.
The following would, I believe, all work:
 
int acquire_counts;

int trylock1(void) {
    int res = mtx_trylock(&mutex);
    if (res == thrd_success) {
        atomic_thread_fence(memory_order_acquire);
        ++acquire_counts;
    }
    return res;
}

or


int acquire_counts;

int trylock2(void) {
    int res = mtx_trylock(&mutex);
    if (res == thrd_success) {
        (*(volatile int*)(&acquire_counts))++;
    }
    return res;
}


or


_Atomic int acquire_counts;

int trylock3(void) {
    int res = mtx_trylock(&mutex);
    if (res == thrd_success) {
        ++acquire_counts;
    }
    return res;
}
 
 
> Sure, if not all accesses to a protected variable are within a mutex,
> then it needs to be atomic.
 
You need to be careful about mixing accesses that are protected with a
mutex with accesses that are not protected with the same mutex, even if
these are atomic. Some accesses won't be guaranteed to be visible, or
in the same order, unless the reading thread also takes the same mutex.
Without that, other threads will read the atomic data with either the
old value, or the new value - but not necessarily with the same ordering
amongst other data.
 
> But if that is the case there is probably
> something wrong with the design.
 
Agreed.
 
> You should not design code which
> requires such a doubling up of synchronization approaches, and I cannot
> immediately visualize a case where that would be sensible.
 
I'd say trylock1 above is the best choice for this example, and avoids
unnecessary "doubling up". But too much protection is better than too
little protection - it is better to be a bit inefficient, than to have
the risk of race conditions.
"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: Nov 25 02:05PM -0800

On 11/24/2018 5:05 PM, Chris Vine wrote:
> §30.4.1.2/11 and §30.4.1.2/25 ("synchronizes with") read with §1.10/11
> and §1.10/12 ("happens before") and §1.10/13 ("visible side effect")
> of C++11.
 
Correct. Afaict, David Brown is mistaken wrt his view on C11 and C++11
mutexes, and how they work. He does not seem to understand how an
acquire-release relationship guarantees visibility, and that it is now
part of the language itself.
 
 
> (amongst other similar operations) that "The following functions
> synchronize memory with respect to other threads". In practice posix
> mutexes behave identically to C/C++ mutexes.
 
Agreed. POSIX casts some rules on a system, and the compiler is part of
the system. Always had a good laugh at that quote:
 
"The following functions synchronize memory with respect to other threads."
 
Define synchronize? At least C++11/C11 defines it as an acquire-release
relationship. ;^)
Jorgen Grahn <grahn+nntp@snipabacken.se>: Nov 25 10:14PM

On Sun, 2018-11-25, Pavel wrote:
...
> Whether we like it or not, POSIX says nothing about memory barriers or
> fences that mutex locking should execute or any "variables protected by
> a mutex".
 
I haven't followed this thread (and I'm snipping heavily) but as I
remember the attitude here and in comp.lang.c, before the languages
got threading support, it went roughly like this:
 
Any compiler/library combination which claims to support pthreads
makes sure to put a suitable fence/barrier/whatever-it's-called
at a mutex lock, because anything else would be suicide.
 
...
> assumption should be made about compiler memory access reordering across
> POSIX mutexes; instead, correspondent gcc or gcc asm primitives should
> be used.
 
Just to clarify, are you saying most software which uses POSIX
multithreading is broken, since it (in my experience) rarely inserts
its own fences?
 
(The answer "yes" wouldn't bother me. I'm fine with this being a
theoretical problem, triggered by a "suicidal" compiler/library
combination. Partly because multithreaded software tends to be
subtly broken anyway.)
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
peter koch <peter.koch.larsen@gmail.com>: Nov 25 02:03PM -0800

onsdag den 21. november 2018 kl. 15.06.08 UTC+1 skrev Jan Riewenherm:
 
> Code Snippped of the locking helper object.
> class LockingObject
> {
...
> }
 
> How to prevent calling the constructor where the created object is not used at all? c++17 [[nodiscard]] does not work for Constructors.
 
> Any advice appreciated.
 
You can use nodiscard on the class:
 
class [[nodiscard]] LockingObject
Ben Bacarisse <ben.usenet@bsb.me.uk>: Nov 25 12:22AM

> the same relative precedence in essentially all programming
> languages (the only exceptions I'm aware of are APL and Smalltalk,
> which have just one precedence level for all binary operators).
 
Curiously, they are reversed in Icon, a fact I post simply for the
novelty value.
 
<snip>
--
Ben.
Vir Campestris <vir.campestris@invalid.invalid>: Nov 25 09:57PM

On 24/11/2018 19:43, Tim Rentsch wrote:
> Anyone who is capable
> enough to be a competent programmer can easily learn those rules
> in about 15 minutes.
 
I'm afraid I'm going to disagree with you.
 
I don't have the kind of mind that can do rote learning. And it is that
- there's no logic that says even that multiply takes precedence over
add. It's just tradition.
 
I think I am a fairly competent programmer. I'm still working on code
for embedded devices which ship in the millions, 40 years after I first
hacked some Fortran together.
 
And I feel that writing (a & b) ^ c is clearer than leaving the brackets
out. It's obvious what is meant, and I don't have to think about it.
 
Andy
Horizon68 <horizon@horizon.com>: Nov 25 12:52PM -0800

Hello,
 
 
My Parallel C++ Conjugate Gradient Linear System Solver Library that
scales very well was updated to version 1.74
 
Here is what i have enhanced:
 
The Solve() method is now thread-safe, so you can call it from
multiple threads; everything else is thread-safe except for the
constructor: you have to call the constructor one time from a process
and then use the object from multiple threads.
 
I think that my library is much more stable and fast and it works
on both Windows and Linux.
 
You can read about it and download it from my website here:
 
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
Thank you,
Amine Moulay Ramdane.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Nov 25 03:25AM +0100

On 24.11.2018 15:21, Sam wrote:
> using occurs_in=std::conditional_t<
>     count_occurences<T, first_type, Types...>::how_many <= 0,
>         std::false_type, std::type_type>;
 
std::type_type?
 
 
>     std::cout << occurs_in<int, double>::value << std::endl;
>     return 0;
> }
 
Cheers!,
 
- Alf
Kalle Olavi Niemitalo <kon@iki.fi>: Nov 25 11:02AM +0200


> The odd "<= 0" comparison, instead of "> 0" was needed because
> ">" gets parsed …differently in the context of a template
> parameter.
 
Or you could put parentheses around the expression, AFAIK.
Elephant Man <conanospamic@gmail.com>: Nov 25 08:39AM

Cancel message issued by a JNTP moderator via Nemo.