Monday, August 10, 2015

Digest for comp.lang.c++@googlegroups.com - 25 updates in 12 topics

Doug Mika <dougmmika@gmail.com>: Aug 10 02:19PM -0700

It says that std::atomic<bool> is not copy-assignable; however, you can assign a bool value to it:
 
atomic<bool> b(true);
b=false; //assignment operation which relies on the overloaded copy assignment operator.
 
What is copy assignable and which operator does it rely on?
Bo Persson <bop@gmb.dk>: Aug 11 12:26AM +0200

On 2015-08-10 23:19, Doug Mika wrote:
 
> atomic<bool> b(true);
> b=false; //assignment operation which relies on the overloaded copy assignment operator.
 
> What is copy assignable and which operator does it rely on?
 
You are assigning a value, not a copy of an atomic variable.
Copy assignment means assigning a value of the same type. Atomic types have
these operators deleted, to make them non-copyable:
 
atomic(const atomic&) = delete;
atomic& operator=(const atomic&) = delete;
atomic& operator=(const atomic&) volatile = delete;
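 
The assignment b = false goes through the separate, non-deleted overload
that takes a plain value instead; roughly, as declared in <atomic>:

T operator=( T desired ) noexcept;           // equivalent to store( desired )
T operator=( T desired ) volatile noexcept;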
 
 
Bo Persson
joecook@gmail.com: Aug 10 02:33PM -0700

I have a third party library that returns a pointer that the caller is then responsible for:
 
float* myData = thirdParty.getData();
delete [] myData;
 
I liked using boost instead to wrap this into a smart pointer (e.g. scoped_array).
 
With C++11, I had thought to use unique_ptr, but the deleter on unique_ptr is calling the standard delete, not array delete, even though the "new" underneath is allocating an array explicitly.
 
Using the specialization for unique_ptr<> for arrays is clunky and inconvenient. When and how is the compiler supposed to handle this for me and deduce the correct unique_ptr<> type?
 
Is there a cleaner way?
 
If I have a function, such as :
 
float* goo()
{
float* temp = new float[100];
return temp;
}
 
int main()
{
std::unique_ptr<float> (goo()); // Calls wrong deleter
}
 
Is the return type of "new float[100]" somehow different as per the standard from "new float"? Is that how the compiler can choose the correct unique_ptr specialization?
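 
For reference, the array specialization mentioned above is spelled like this
(a minimal sketch; the array type still has to be written out by hand,
nothing is deduced):

#include <memory>

float* goo() { return new float[100]; }

int main()
{
    std::unique_ptr<float[]> p(goo()); // default_delete<float[]> calls delete[]
}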
 
Thx
--Joe
"K. Frank" <kfrank29.c@gmail.com>: Aug 10 06:21AM -0700

Hello All!
 
Thanks for everyone's thoughtful comments.
 
On Sunday, August 9, 2015 at 6:38:17 PM UTC-4, Jorgen Grahn wrote:
> I noted this just like you did, but eventually concluded that "ok, so
> I can't use the new syntax in that special case. The simplification
> comes at a price."
 
Yes, I'm basically coming to the same conclusion. But
it seems that the "special case" where I can't use the
new syntax (without paying the extra price) is for me
more often the case than not.
 
It's certainly trivial to do something special before
or after you iterate over the container. A boolean
flag, a la Victor, works for the first iteration and
seems conceptually lighter-weight than a counter (but
really isn't much different than introducing an integral
counter). But to do something special for the last
iteration does seem to require a counter. I suppose
one could somehow test for the last element's
location in the collection, but this seems unwieldy,
potentially expensive, and not generally reliable.
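 
Concretely, the first-iteration flag idiom is just something like this
sketch (with illustrative names):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> collection{ 1, 2, 3 };
    bool first = true;
    for (auto const & item : collection)
    {
        if (!first) std::cout << ", ";   // everything after the first element
        std::cout << item;
        first = false;
    }
    std::cout << '\n';
}

but I don't see an equally cheap trick for the last iteration.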
 
(I don't think Louis's suggestion of testing for a
sentinel value at the end of the collection really
works. If I want to do something special for the
last substantive item in the collection -- and the
sentinel value is just an add-on placeholder -- I
won't find out that I've hit the sentinel value until
after I've iterated over -- and processed -- the last
"real" item -- the next-to-last nominal item -- in the
collection.)
 
I like Alf's suggestion of a syntax that gives you an
index value ("for free," syntactically) as well as the
collection item. I wouldn't add it as a home-brew facility
in my own code, but if it became common practice, I would
probably use it. It might be a little tough justifying adding
it to the language proper -- the old-school "for (i = ...)"
syntax does get the job done.
 
> /Jorgen
 
 
Thanks all for some good ideas.
 
 
K. Frank
Jorgen Grahn <grahn+nntp@snipabacken.se>: Aug 10 01:51PM

On Mon, 2015-08-10, K. Frank wrote:
> it seems that the "special case" where I can't use the
> new syntax (without paying the extra price) is for me
> more often the case than not.
 
That's how it felt for me too (when I went through my hobby projects
and applied C++11 features like this one, as an exercise) but I think
in my case it was an illusion.
 
I /hope/ it is an illusion, because otherwise ranged-for is flawed.
It should be something that's frequently useful ...
 
And why wouldn't it be, given how useful it is in other languages like
shell script, Perl and Python?
 
> [...] the old-school "for (i = ...)"
> syntax does get the job done.
 
Yes, and with 'auto', std::begin() and so on, a lot of the ugliness
goes away.
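 
Something like this, I mean (a sketch, with the separator-between-elements
case as the running example):

#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    std::vector<int> c{ 1, 2, 3 };
    for (auto it = std::begin(c); it != std::end(c); ++it)
    {
        if (it != std::begin(c)) std::cout << ", ";
        std::cout << *it;
    }
    std::cout << '\n';
}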
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
scott@slp53.sl.home (Scott Lurndal): Aug 10 08:21PM


> . However, having to write
 
>[]( int const i, bool const isfirst, bool const islast )
 
> in the client of »foreach« is rather cumbersome!
 
Man, that is unnecessarily complicated for such a simple
problem. Just use the iterators directly and dump the
unreadable auto/lambda parts.
woodbrian77@gmail.com: Aug 10 12:35PM -0700

I guess the C++ Middleware Writer is the elephant in the
room. It's met with silence and denial by some, but we
continue working on the software and hardware.
 
I'm willing to donate 16 hours/week for six months to a
project that uses the C++ Middleware Writer.
 
Also I'll pay $1,400 and give a $1,200 investment in the
company to someone who helps us find someone interested in this.
I'll pay the $1,400 after working for four months on the project.
Ebenezer Enterprises works to reward investments to 3 times the
original amount. So the investment would result in between
$0 and $3,600, depending on how things go for the company.
 
Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net
ram@zedat.fu-berlin.de (Stefan Ram): Aug 10 02:53PM

>it seems that the "special case" where I can't use the
>new syntax (without paying the extra price) is for me
>more often the case than not.
 
It reminds me of copying a container to »::std::cout«:
 
::std::copy( ::std::begin( a ), ::std::end( a ),
::std::ostream_iterator< ::std::string >( ::std::cout, "\n" ))
 
. The above will /terminate/ each entry with »"\n"«.
But what if you want to /separate/ them with »","«?
 
You can write your own looper using lambdas:
 
myforeach
( container,
execute this lambda(x) for each entry,
execute this optional lambda(x) for the first entry,
execute this optional lambda(x) for the last entry )
 
Giving nullptr for an optional lambda is the same as not
giving a value for it. If the second or third lambda is
not given, then the first lambda is used for the first or
last entry, too, respectively.
 
E.g.,
 
myforeach
( ::std::vector{ 5, 6, 7 },
[]( int const x ){ ::std::cout << ", " << x; },
[]( int const x ){ ::std::cout << x; })
 
is intended to print »5, 6, 7«.
 
Disclaimer: I have not implemented this yet, so I am not
sure that it will work as intended.
ram@zedat.fu-berlin.de (Stefan Ram): Aug 10 03:00PM

>( ::std::vector{ 5, 6, 7 },
> []( int const x ){ ::std::cout << ", " << x; },
> []( int const x ){ ::std::cout << x; })
 
Or, another possible interface:
 
myforeach
( ::std::vector{ 5, 6, 7 },
[]( int const x, int const i, int const left )
{ ::std::cout <<( i > 0 ? ", " : "" )<< x; });
 
Where »i« is a loop counter, and »left« gives the
number of iterations still left to be done.
ram@zedat.fu-berlin.de (Stefan Ram): Aug 10 03:37PM

> { ::std::cout <<( i > 0 ? ", " : "" )<< x; });
>Where »i« is a loop counter, and »left« gives the
>number of iterations still left to be done.
 
The problem with int counters is that they can
overflow or underflow. So a single variable with
only a few values might be safer:
 
1 first iteration
2 last iteration
3 first and last iteration
0 neither first nor last iteration
 
One also might consider having an optional lambda
that is to be called for the special case of an
empty container.
ram@zedat.fu-berlin.de (Stefan Ram): Aug 10 06:59PM

>One also might consider having an optional lambda
>that is to be called for the special case of an
>empty container.
 
Here is a first implementation:
 
#include <iostream>
#include <ostream>
#include <functional> // ::std::function
#include <iterator>   // ::std::cbegin, ::std::cend
#include <vector>

template< typename C >
bool foreach
( C const container,
  ::std::function< bool( typename C::value_type, bool, bool )> body )
{ auto const b = ::std::cbegin( container );
  auto const e = ::std::cend( container );
  bool first = true;
  bool last = false;
  auto next = b;
  for( auto i = b; i != e; i = next )
  { if( next != e )++next;
    if( next == e )last = true;
    if( !body( *i, first, last ))return false;
    first = false; }
  return true; }

int main()
{ foreach
  ( ::std::vector< int >{ 4, 5, 6, 7 },
    []( int const i, bool const isfirst, bool const islast )
    { ::std::cout <<( isfirst ? "" : ", " )<< i <<( islast ? ".\n" : "" );
      return true; } ); }
 
. This prints:
 
4, 5, 6, 7.
 
. However, having to write
 
[]( int const i, bool const isfirst, bool const islast )
 
in the client of »foreach« is rather cumbersome!
Paul <pepstein5@gmail.com>: Aug 09 11:59PM -0700

Suppose I have
 
bool f(int lhs, int rhs)
{
///
}
 
std::vector<int> vec;
 
// vec is further set here.
 
What is the simplest way to set a priority_queue with the elements in vec based on the order defined by f?
 
Thank you,
 
Paul
"Öö Tiib" <ootiib@hot.ee>: Aug 10 02:28AM -0700

On Monday, 10 August 2015 09:59:40 UTC+3, Paul wrote:
 
> std::vector<int> vec;
 
> // vec is further set here.
 
> What is the simplest way to set a priority_queue with the elements in vec based on the order defined by f?
 
Simplest to me feels to try to type the sentence above in C++:

std::priority_queue<int, std::vector<int>, decltype(&f)> pq( &f, vec );
 
Was it what you asked?
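 
A complete sketch, for reference (the comparator body is illustrative;
any strict weak ordering will do):

#include <iostream>
#include <queue>
#include <vector>

bool f(int lhs, int rhs) { return lhs > rhs; }   // illustrative: yields a min-heap

int main()
{
    std::vector<int> vec{ 3, 1, 4, 1, 5 };
    std::priority_queue<int, std::vector<int>, decltype(&f)> pq( &f, vec );
    while (!pq.empty()) { std::cout << pq.top() << ' '; pq.pop(); }
    std::cout << '\n';                           // prints: 1 1 3 4 5
}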
Paul <pepstein5@gmail.com>: Aug 10 04:50AM -0700

On Monday, August 10, 2015 at 10:28:32 AM UTC+1, Öö Tiib wrote:
 
> Simplest to me feels to try to type the sentence above in C++:
 
> std::priority_queue<int, std::vector<int>, decltype(&f)> pq( &f, vec );
 
> Was it what you asked?
 
Yes, that's what I asked. Thanks.
 
Paul
Doug Mika <dougmmika@gmail.com>: Aug 09 07:10PM -0700

On Sunday, August 9, 2015 at 6:14:40 PM UTC-5, Chris Vine wrote:
> point to make is that this is normally only an issue for library
> writers. Your code is redundant by virtue of std::async.
 
> Chris
 
It's Listing 4.14 in C++ Concurrency in Action: Practical Multithreading. It's not an easy read, but some examples seem useful. Any suggestion for a good intro-to-intermediate-level book on C++ multithreading? Anyone? (of course, in C++11)
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 10 11:10AM +0100

On Sun, 9 Aug 2015 19:10:43 -0700 (PDT)
Doug Mika <dougmmika@gmail.com> wrote:
[snip]
> multithreading. It's not an easy read, but some examples seem
> useful. Any suggestion on a good intro to intermediate level book
> for C++ multithreading? Anyone? (of course in C++11)
 
Although I have never read it, I have heard quite good things about that
book so it is a bit worrying that it should come up with a code listing
which not only does not compile but which shows such a lack of
understanding of rvalue references, move semantics and template type
deduction. Anyway, you now have a corrected version from me. I
suggest you submit an erratum to the publisher because as I say I have
heard that the book is reasonably good on threading matters.
 
To be honest though, spending your time looking at a book about
multi-threading is probably not wise for someone who has yet to acquire
the basics. If you want to operate at this low level, you need to
understand rvalue references and template type deduction to understand
the code, and not just the particulars about std::packaged_task and
std::future. The basic point is that with this function signature for
perfect forwarding:
 
template <class T> void func(T&& t) {...}
 
T deduces as a reference type if passed an lvalue and a value type if
passed an rvalue. That's how forwarding works. But it does mean that
you cannot use T in the same way as you could in the C++98 days if the
signature were, say
 
template <class T> void func(T& t) {...}
 
which works completely differently - here func can only take an lvalue
and T deduces as the underlying value type, either const or non-const.
 
template <class T> void func(T t) {...}
 
does similarly, except that it discards const qualifiers and will
accept rvalues as well as lvalues (and a signature of func(const T& t)
would do the same but would differ in its treatment of volatile).
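 
A short sketch of the three cases (the function names are made up for
illustration):

template <class T> void fwd(T&&) { }    // forwarding reference
template <class T> void ref(T&)  { }    // lvalue reference only
template <class T> void val(T)   { }    // by value

int main()
{
    int i = 42;
    const int ci = 42;

    fwd(i);      // lvalue:  T deduces as int&, and T&& collapses to int&
    fwd(42);     // rvalue:  T deduces as int,  and T&& is int&&

    ref(i);      // T deduces as int
    ref(ci);     // T deduces as const int
    // ref(42);  // error: an rvalue cannot bind to a non-const lvalue reference

    val(i);      // T deduces as int
    val(ci);     // T deduces as int (top-level const is discarded)
    val(42);     // T deduces as int
}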
 
The literature on this is not that good in my opinion. Sections 23.5.2
and 23.5.2.1 of Stroustrup's The C++ Programming Language (4th ed) are
unnecessarily light in content in my view. The following link is a
great deal more complete, but consequently more difficult for a
beginner to understand:
http://en.cppreference.com/w/cpp/language/template_argument_deduction
 
Part of the problem is probably that in C++ the use of && (rvalue
reference) has been overloaded to cover perfect forwarding by a sleight
of hand involving reference collapsing. It might have been thought
cute at the time but it is highly confusing for beginners (and also for
others, it would appear).
 
I also repeat the point I made before. Most C++ users don't actually
need to know about the details of template type deduction. Mostly it
just works as you would expect. But be on your guard whenever you see a
forwarding reference, where sometimes it does not.
 
Chris
Barry Schwarz <schwarzb@dqel.com>: Aug 10 02:14AM -0700


>Omniorb uses gnu make scripts to build (with Cygwin), but I have no idea how to force VC++ to include one of those directories.
 
>please help me, regards
>Szyk Cech
 
Please adjust your newsreader to limit your line length.
 
Since this is obviously not a standard header, you might have better
luck in a newsgroup that deals with Visual C++ or an appropriate
Microsoft forum.
 
--
Remove del for email
Juha Nieminen <nospam@thanks.invalid>: Aug 10 08:25AM

> for (int i = 0; i < 4000000; ++i){
> char s[200];
> ::strncpy(s, "We few; we happy few; we band of brothers.", sizeof(s));
 
You are copying data from a string literal to a fixed-size array on the stack.
Of course it's going to be faster than allocating a string dynamically.
(And of course it should be done like that, if it suffices and if it indeed
requires the speed.)
 
Btw, are you sure the compiler isn't optimizing the loop away? After all,
it can see that 's' isn't doing anything. It can also see that the body
of the loop isn't using the loop variable nor has any side effects.
I wouldn't be surprised if the compiler optimized the whole thing away.
 
For a more reliable (and fairer) comparison, change it so that the string
being built is of a length unknown at compile time, and the result is
used for something...
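 
For instance, something along these lines (a sketch; the names and the
iteration count are illustrative):

#include <cstring>
#include <iostream>
#include <string>

int main(int argc, char**)
{
    const char* text = "We few; we happy few; we band of brothers.";
    std::size_t total = 0;
    for (int i = 0; i < 4000000; ++i)
    {
        // The length now depends on argc, so it is not a compile-time constant.
        std::string s(text, std::strlen(text) - (argc % 2));
        total += s.size();   // use the result so the loop cannot be elided
    }
    std::cout << total << '\n';
}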
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Aug 03 11:09PM +0200

On 03-Aug-15 10:51 PM, Doug Mika wrote:
 
> So the question that comes to me now is how did you know that the wait function
> takes a non-const reference to a lock?
 
Look at the documentation, e.g. <url:
http://en.cppreference.com/w/cpp/thread/condition_variable/wait>.
 
The formal argument type is reference to non-const, and only that.
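 
For instance (a minimal sketch; the always-true predicate is only there so
that the call returns immediately):

#include <condition_variable>
#include <mutex>

std::condition_variable cv;
std::mutex m;

int main()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, []{ return true; });            // OK: lock is an lvalue
    // cv.wait(std::unique_lock<std::mutex>(m));  // error: a temporary (rvalue)
                                                  // cannot bind to the non-const
                                                  // reference parameter
}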
 
 
> And how do you know that these don't allow rvalues?
 
The Holy Standard.
 
But any good introductory book should tell you that.
 
 
> I often use www.cplusplus.com as reference for all these little tidbits on
> various functions, and nowhere did I find what you wrote.
 
At the top of the page about that function, <url:
http://www.cplusplus.com/reference/condition_variable/condition_variable/wait/>.
 
 
> How can I deduce when I can use implicit variables as function parameters,
> and when I cannot?
 
Just check the documentation, or the source.
 
Besides, in general (not sure about Visual C++, but in general) the
compiler will tell you. ;-)
 
 
Cheers & hth.,
 
- Alf
 
--
Using Thunderbird as Usenet client, Eternal September as NNTP server.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Aug 03 11:16PM +0200

On 03-Aug-15 11:10 PM, Öö Tiib wrote:
> both list:
 
> explicit unique_lock( mutex_type& m );
 
> That 'm' is clearly non-const lvalue reference.
 
Uhm, that's not the `wait` function.
 
Cheers,
 
- Alf
 
--
Using Thunderbird as Usenet client, Eternal September as NNTP server.
Bo Persson <bop@gmb.dk>: Aug 04 01:00AM +0200

On 2015-08-03 22:51, Doug Mika wrote:
>> non-const reference to a lock. That will not bind to a temporary.
 
>> Bo Persson
 
> So the question that comes to me now is how did you know that the wait function takes a non-const reference to a lock?
 
I guessed that from your problem, and from knowing that a
condition_variable must do something to the mutex. Like unlock it while
waiting.
 
Then I checked the reference. :-)
 
 
> And how do you know that these don't allow rvalues?
 
I'm an old guy, and remember reading that Bjarne initially allowed
references to temporaries but found that it caused too many bugs. Updates
to temporaries were "mysteriously" lost.
 
For example:
 
void f(long& x)
{ ++x; }
 
int y = 0;
f(y); // not allowed
 
If this was allowed, the int would be converted to a temporary long, and
the long would be incremented, not the int.
 
By limiting temporaries to const references, they cannot be changed (as
they are const) and this kind of bug goes away.
 
 
 
> I often use www.cplusplus.com as reference for all these little tidbits on various functions, and nowhere did I find what you wrote.
 
Unfortunately, the C++ standard has to be read holistically. It often
says "unless otherwise specified", and then you have to read everything
else to see if it IS specified differently 200 pages further on.
 
> How can I deduce when I can use implicit variables as function parameters, and when I cannot?
 
If they are passed by value, or by const reference, or by rvalue
reference... Just not by non-const lvalue reference. :-)
 
 
Bo Persson
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 05 01:24AM +0100

On Tue, 4 Aug 2015 21:38:12 +0100
> of thread A, become visible side-effects in thread B, that is, once
> the atomic load is completed, thread B is guaranteed to see
> everything thread A wrote to memory."
 
Technically I don't think that applies to mutexes because although a
mutex represents a memory location it does not of itself offer atomic
stores and loads. Non-normatively, for mutexes the result you mention
is offered by §1.10/5 of C++11:
 
"Note: For example, a call that acquires a mutex will perform
an acquire operation on the locations comprising the mutex.
Correspondingly, a call that releases the same mutex will perform a
release operation on those same locations. Informally, performing a
release operation on A forces prior side effects on other memory
locations to become visible to other threads that later perform a
consume or an acquire operation on A."
 
The normative (and more hard-to-read) requirement for mutexes is in
§30.4.1.2/11 and §30.4.1.2/25 ("synchronizes with") read with §1.10/11
and §1.10/12 ("happens before") and §1.10/13 ("visible side effect").
 
So far as concerns acquire and release operations on atomic variables,
these also provide synchronization, in that informally an operation with
acquire semantics is one which does not permit subsequent memory
operations to be advanced before it, and an operation with release
semantics is one which does not permit preceding memory operations to
be delayed past it, as regards the two threads synchronizing.
This synchronization with respect to those two threads extends beyond
just the memory location represented by the particular atomic variable.
It applies generally to all operations on memory locations shared by the
two threads performing the acquire/release on the particular atomic
variable (but does not provide full sequential consistency with respect
to atomic operations performed by other threads).
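 
A minimal sketch of that informal description (the names are made up for
illustration):

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> ready{ false };
int payload = 0;                 // ordinary, non-atomic data

void producer()
{
    payload = 42;                                  // ordinary store
    ready.store(true, std::memory_order_release);  // release: the store above
                                                   // cannot be delayed past this
}

void consumer()
{
    while (!ready.load(std::memory_order_acquire)) // acquire: the read below
        ;                                          // cannot be advanced before this
    assert(payload == 42);       // guaranteed to see the producer's write
}

int main()
{
    std::thread a(producer), b(consumer);
    a.join(); b.join();
}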
 
A consume operation on the other hand only synchronizes on the
particular atomic variable and its dependencies. In practice, no one
bothers about consume operations except in the most obscure
circumstances. Likewise the need for full sequential consistency with
other threads for atomic variables is also relatively uncommon,
notwithstanding that it is the default for atomics.
 
Chris
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 04 12:56AM +0100

On Tue, 4 Aug 2015 00:43:40 +0100
> applies to anything. That is the point of a mutex. And because a mutex
> unlock (unless it is a custom mutex) has release semantics, it doesn't
> only apply to dependent loads.
 
I should qualify that by saying that you have quoted from documentation
on std::memory_order. In relation to atomic variables, the
documentation refers to dependent loads relating to a memory operation
on the same atomic variable. However, your post was concerned with
mutexes, about which I responded.
 
Chris
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 04 12:43AM +0100

On Mon, 3 Aug 2015 21:39:21 +0100
> reads. As far as I can tell they are relying on the mutex doing a
> memory order consume on acquisition, and an release on release. I'd
> prefer them to say so explicitly.
 
It's a matter of language definition. std::mutex::lock() performs an
acquire operation (not just a consume). std::mutex::unlock() performs
a release operation. No load after an acquire can migrate before
the corresponding prior release on the same mutex object. No store
prior to the release can migrate after the corresponding subsequent
acquire on that mutex. What's the point of documenting that in the
code, unless this is a custom mutex object with unusual properties (in
which case documentation would be essential)?
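 
Informally, the guarantee amounts to this (a sketch with made-up names):

#include <mutex>
#include <thread>

std::mutex m;
int shared_value = 0;            // ordinary data protected by m

void writer()
{
    std::lock_guard<std::mutex> lock(m);  // lock(): acquire operation
    shared_value = 1;                     // cannot migrate past the unlock
}                                         // unlock(): release operation

void reader()
{
    std::lock_guard<std::mutex> lock(m);  // a later acquire on the same mutex
    int v = shared_value;                 // sees the writer's store if the
    (void) v;                             // writer ran first
}

int main()
{
    std::thread a(writer), b(reader);
    a.join(); b.join();
}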
 
As I understand it boost::mutex does the same as standard mutexes, but
in any event the header to your post indicates you are interested in the
standard memory model rather than what boost's mutexes happen to do.
Unsurprisingly, POSIX mutexes happen to have the same semantics as
standard mutexes so it would be astonishing if boost::mutex was
different.
 
> http://en.cppreference.com/w/cpp/atomic/memory_order
 
> Does it mean any variable at all? Or is it just volatiles? Or is it
> just other atomics?
 
So far as concerns the standard memory model and standard mutexes, it
applies to anything. That is the point of a mutex. And because a mutex
unlock (unless it is a custom mutex) has release semantics, it doesn't
only apply to dependent loads.
 
Chris
Ian Collins <ian-news@hotmail.com>: Aug 05 09:12AM +1200

Victor Bazarov wrote:
 
> But that doesn't actually give the number of CPUs, does it? And the
> implementation is allowed to return 0 (which isn't helpful to those who
> need to know the number of CPUs)...
 
While true, the number returned (if it isn't zero) is probably more use
than the number of CPUs, which is becoming more of a nebulous concept
these days!
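 
Presumably the call in question is std::thread::hardware_concurrency, which
as a minimal sketch is used like this:

#include <iostream>
#include <thread>

int main()
{
    // May legitimately return 0 when the hint cannot be computed.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << n << " concurrent hardware threads suggested\n";
}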
 
--
Ian Collins
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
