Tuesday, October 20, 2015

Digest for comp.lang.c++@googlegroups.com - 22 updates in 8 topics

ram@zedat.fu-berlin.de (Stefan Ram): Oct 20 08:24PM

>both versions of foo() and still we are able to call foo(T&)
>with any kind of argument! This is good at least for
>prototyping.
 
One can use a »const &« for that:
 
void f( int const & i ){}
int main(){ int i = 2; f( i ); f( 2 ); }
 
.

Ramine <ramine@1.1>: Oct 20 07:22PM -0700

Hello...
 
 
I have just read about this Asymmetric Reader-Writer lock...
 
Please look at it here:
 
https://github.com/chaelim/RWLock/blob/master/Src/RWLock.cpp
 
 
You will notice that he is doing a stupid thing, look at
the header here:
 
https://github.com/chaelim/RWLock/blob/master/Src/RWLock.h
 
 
He is allocating an array of uint8_t, that is, of 8-bit elements, like
this:
 
uint8_t m_readers[MAX_RWLOCK_READER_COUNT];
 
 
And after that in his algorithm he is doing in the CRWLock::EnterRead()
this:
 
m_readers[t_curThreadIndex] = true;
 
 
and he is doing in CRWLock::LeaveRead() this:
 
m_readers[t_curThreadIndex] = false;
 
 
But this is stupid, because his array must be aligned on 64 bytes
and every element in his array must be the size of a cache line to
avoid false sharing.
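A sketch of the padding he is describing (assuming a 64-byte cache line; the constant and names are mine, not from the linked code):

```cpp
#include <cstddef>
#include <cstdint>

// Assumed typical x86 cache-line size; not taken from the linked code.
constexpr std::size_t CACHELINE = 64;

// alignas pads and aligns the struct to CACHELINE bytes, so adjacent
// array elements land on distinct cache lines and concurrent readers
// on different cores never invalidate each other's line.
struct alignas(CACHELINE) PaddedFlag {
    uint8_t flag = 0;
};

static_assert(sizeof(PaddedFlag) == CACHELINE, "one flag per cache line");

// One slot per reader thread, each on its own cache line.
PaddedFlag g_readers[1000];
```

With plain `uint8_t m_readers[...]`, up to 64 readers share one cache line; with the padded struct, each reader writes only its own line.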
 
I have taken care of that false-sharing problem in my new algorithm of
a scalable reader-writer mutex, which is sequentially consistent. And
like Seqlock or RCU, my new scalable distributed reader-writer mutex
doesn't use any atomic operations and/or StoreLoad-style memory barriers
on the reader side, so it's very fast and scalable.. but you have to use
the define's option TLW_RWLockX or the define's option TRWLockX inside
the defines1.inc file for that.
 
 
So be happy with my new algorithm that you can download from here:
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
 
 
Thank you,
Amine Moulay Ramdane.
wtholliday@gmail.com: Oct 20 10:52AM -0700

I'm concerned that I'm over-using shared_ptr. Here's a simplified version of my object model, which I think is sufficient for the purpose of this discussion.


// A simplified object model.

// A Patch is a collection of Nodes
class Patch {

public:

void AddNode(const shared_ptr<Node>&);
void RemoveNode(const shared_ptr<Node>&);

private:

vector< shared_ptr<Node> > _nodes;

};

// A node knows which patch it belongs to, and has
// connections to other nodes.
class Node {

public:

Node(const shared_ptr<Patch>& patch) : _patch(patch) { }
void Connect(const shared_ptr<Node>&);

private:

weak_ptr<Patch> _patch;
vector< weak_ptr<Node> > _connections;

};

This design is nice because it always avoids dangling references. However, the client gets to control the lifetime of a `Node` which can be problematic in other parts of my code (specifically, undo).
 
If I understand the C++ cognoscenti correctly, they would recommend avoiding `shared_ptr`, preferring to use `unique_ptr` since `Patch` effectively *owns* the `Nodes`:

// A Patch is a collection of Nodes
class Patch {

public:

// Patch takes ownership of the node
void AddNode(unique_ptr<Node>);
void RemoveNode(Node*);

private:

vector< unique_ptr<Node> > _nodes;

};

// A node knows which patch it belongs to, and has
// connections to other nodes.
class Node {

public:

Node(const shared_ptr<Patch>& patch) : _patch(patch) { }
void Connect(Node*);

private:

weak_ptr<Patch> _patch;
vector< Node* > _connections;

};

This design is nice because the lifetime of a Node is easy to understand: it's deallocated when `RemoveNode` is called. However, the client could be left with a dangling pointer. Granted, this would be bad behavior for the client in the first place, but I'd like such errors to be easier to catch. Also, catching errors in `Node::_connections` is harder.
 
**Now, before I go and write some sort of fancy handle or intrusive weak pointer, is there an easier way to avoid the client controlling the lifetime while still being safe?**
 
Thanks!!
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Oct 21 12:24AM +0200


> This design is nice because it always avoids dangling references. However,
> the client gets to control the lifetime of a `Node` which can be problematic
> in other parts of my code (specifically, undo).
 
You should ask whether a Node needs to belong to a Patch in order to be
useful. If so then this design allows too much.
 
 
> If I understand the C++ cognoscenti correctly, they would recommend avoiding
> `shared_ptr`, preferring to use `unique_ptr` since `Patch` effectively
> *owns* the `Nodes`:
 
Yes, that sounds more like it.
 
Or, even better, just raw pointers, except in the formal argument to
AddNode.
 
 
 
> };
 
> This design is nice because the lifetime of a Node is easy to understand: it's
> deallocated when `RemoveNode` is called.
 
The weak_ptr says that an owning Patch may not exist.
 
Is that really the case?
 
 
 
> However, the client could be left with a dangling pointer.
 
How so? Aren't all connected nodes in the same patch?
 
If they're not, then instead of `vector<Node*>` you may use a
`vector<shared_ptr<Node>>`, and use the /aliasing constructor/ of
shared_ptr to create a shared_ptr with ownership of a patch and
referencing a node owned by that patch.
 
For that approach you'd need to add a patch argument to the Connect method.
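A minimal sketch of that aliasing-constructor approach (the names and the `add`/`nodeRef` helpers are hypothetical, not the poster's API):

```cpp
#include <memory>
#include <vector>

struct Node { int id = 0; };

// The patch owns its nodes outright.
struct Patch {
    std::vector<std::unique_ptr<Node>> nodes;

    Node* add(int id) {
        nodes.push_back(std::make_unique<Node>());
        nodes.back()->id = id;
        return nodes.back().get();
    }
};

// Aliasing constructor: the returned shared_ptr shares ownership of the
// Patch's control block but points at one Node inside it, so the node
// reference keeps the whole patch (and hence the node) alive.
inline std::shared_ptr<Node> nodeRef(const std::shared_ptr<Patch>& patch,
                                     Node* node) {
    return std::shared_ptr<Node>(patch, node);
}
```

Even after the caller drops its own `shared_ptr<Patch>`, any outstanding `nodeRef` result keeps the patch alive, so the node can't dangle.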
 
 
 
> **Now, before I go and write some sort of fancy handle or intrusive weak
> pointer, is there an easier way to avoid the client controlling the lifetime
> while still being safe?**
 
Dunno, but I think there's a sort of smell about passing unique_ptr to
Node to a patch which takes ownership. Why not let a patch provide the
node storage. Then it can just have a std::list of nodes.
 
That is, using collection instead of smart pointer.
 
 
Cheers & hth.,
 
- Alf
wtholliday@gmail.com: Oct 20 04:04PM -0700

On Tuesday, October 20, 2015 at 3:25:16 PM UTC-7, Alf P. Steinbach wrote:
> > deallocated when `RemoveNode` is called.
 
> The weak_ptr says that an owning Path may not exist.
 
> Is that really the case?
 
No. Good point. That was to break the cycle between patches and nodes in the first implementation.
 
 
> > However, the client could be left with a dangling pointer.
 
> How so? Aren't all connected nodes in the same patch?
 
something like:
 
auto node = make_unique<MyAwesomeNodeSubclass>();
auto nodePtr = node.get();
patch->AddNode(std::move(node));
patch->RemoveNode(nodePtr);
 
// nodePtr now dangles
 
Maybe that interface is pretty lame anyway. Feels dirty to have
to grab the raw pointer out of the unique_ptr.
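One common way to avoid fishing the raw pointer out of the unique_ptr (a sketch with hypothetical names, not the poster's actual interface): have AddNode return the non-owning observer pointer itself.

```cpp
#include <memory>
#include <vector>

struct Node { virtual ~Node() = default; };

struct Patch {
    // Takes ownership and hands back a non-owning pointer, so the
    // caller never has to call get() on the unique_ptr itself.
    Node* AddNode(std::unique_ptr<Node> node) {
        nodes_.push_back(std::move(node));
        return nodes_.back().get();
    }

private:
    std::vector<std::unique_ptr<Node>> nodes_;
};
```

Usage then reads `Node* n = patch.AddNode(std::make_unique<MyAwesomeNodeSubclass>());`, which makes the ownership transfer and the observer pointer explicit in one line.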
 
They're all in the same patch. Adding a node to multiple patches
should trigger an error.
 
> Node to a patch which takes ownership. Why not let a patch provide the
> node storage. Then it can just have a std::list of nodes.
 
> That is, using collection instead of smart pointer.
 
Because Node is actually an abstract base class.
 
 
> Cheers & hth.,
 
> - Alf
 
Thanks for your help Alf!
Ramine <ramine@1.1>: Oct 20 06:38PM -0700

Hello..
 
 
We have to be careful with sequential consistency...
 
I have just updated my new scalable distributed reader-writer mutex
algorithm to version 1.31 and I have made it sequentially consistent..
and now I think that all is correct...
 
Also, like Seqlock or RCU , version2 of my new scalable distributed
reader-writer mutex doesn't use any atomic operations and/or StoreLoad
style memory barriers on the reader side, so it's very fast and
scalable..but you have to use the define's option TLW_RWLockX or the
define's option TRWLockX inside the defines1.inc file for that.
 
 
You can download my new updated Scalable Distributed Reader-Writer mutex
1.31 from:
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
 
 
Thank you,
Amine Moulay Ramdane.
legalize+jeeves@mail.xmission.com (Richard): Oct 20 05:00PM

[Please do not mail me a copy of your followup]
 
Lynn McGuire <lmc@winsim.com> spake the secret code
 
>I would like to calculate the size of a very complex object at runtime.
 
Obviously calculating the size of an object is not useful in and of
itself.
 
What is it you are really trying to accomplish?
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Jorgen Grahn <grahn+nntp@snipabacken.se>: Oct 20 05:07PM

On Tue, 2015-10-20, Richard wrote:
 
> Obviously calculating the size of an object is not useful in and of
> itself.
 
> What is it you are really trying to accomplish?
 
You're right, but I think that was covered later in the thread:
finding out how much disk space the serialised representation of the
object would require.
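One way to get that number without writing anything to disk, sketched here with a hypothetical sink interface: run the existing serialization code against a sink that only counts bytes.

```cpp
#include <cstddef>

// Hypothetical sketch: a "counting sink" exposing the same write()
// interface the real serializer targets, but which only tallies bytes.
// Serializing into it yields the exact on-disk size for free.
struct CountingSink {
    std::size_t bytes = 0;
    void write(const void* /*data*/, std::size_t len) { bytes += len; }
};
```

If the serializer is templated (or virtual) on its sink, the same code path that writes the file can compute the size, so the two can never disagree.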
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Lynn McGuire <lmc@winsim.com>: Oct 20 12:29PM -0500

On 10/20/2015 12:07 PM, Jorgen Grahn wrote:
> finding out how much disk space the serialised representation of the
> object would require.
 
> /Jorgen
 
Yup.
 
Thanks,
Lynn
Lynn McGuire <lmc@winsim.com>: Oct 20 03:46PM -0500

On 10/20/2015 12:29 PM, Lynn McGuire wrote:
 
> Yup.
 
> Thanks,
> Lynn
 
I am writing the multiple large object serialization code right now. I will soon know the effect of moving from one large object
to multiple large objects on our binary file size. My concern is that we may need to compress the entire binary file to keep it a
reasonable size.
 
Lynn
Ian Collins <ian-news@hotmail.com>: Oct 21 09:51AM +1300

Lynn McGuire wrote:
> multiple large objects on our binary file size. My concern is that
> we may need to compress the entire binary file to keep it a
> reasonable size.
 
Define reasonable!
 
Depending on where you deploy it, you can just pipe the output through
gzip, or (as I do) use a compressed filesystem.
 
--
Ian Collins
Paavo Helde <myfirstname@osa.pri.ee>: Oct 20 04:26PM -0500

> multiple large objects on our binary file size. My concern is that we
> may need to compress the entire binary file to keep it a reasonable
> size.
 
Yes, what is reasonable? I understand currently storing a 100 kB file is
pretty reasonable and storing a 100 GB file is probably not. However,
compression will reduce the data size only ca 2-10 times, depending on
content, so in this sense there is not much difference. Next year the disks
etc get 10 times bigger and the borders of "reasonable" will change.
 
Anyway, if you want to add transparent compression, look up gzopen().
 
Cheers
Paavo
Lynn McGuire <lmc@winsim.com>: Oct 19 06:45PM -0500

"Bjarne Stroustrup on the 30th anniversary of Cfront (the first C++ compiler)"
http://cpp-lang.io/30-years-of-cpp-bjarne-stroustrup/
 
What a long strange trip it has been!
 
Lynn
Jorgen Grahn <grahn+nntp@snipabacken.se>: Oct 20 05:05PM

On Mon, 2015-10-19, Lynn McGuire wrote:
> "Bjarne Stroustrup on the 30th anniversary of Cfront (the first C++ compiler)"
> http://cpp-lang.io/30-years-of-cpp-bjarne-stroustrup/
 
It was an interesting read, but my favorite quote is:
 
"I don't remember. In fact, I don't remember all that much
from the 1980s."
 
Also the part about how "people were over-excited about the use of
class hierarchies". I used to think Stroustrup was one of them ...
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Christopher Pisz <nospam@notanaddress.com>: Oct 20 03:57PM -0500

On 10/20/2015 12:05 PM, Jorgen Grahn wrote:
 
> It was an interesting read, but my favorite quote is:
 
> "I don't remember. In fact, I don't remember all that much
> from the 1980s."
 
Mad Dog 20/20, Hairbands, Tight Rolled Pants, and Neon Green or Orange
 
 
 
 
--
I have chosen to troll filter/ignore all subthreads containing the
words: "Rick C. Hodgins", "Flibble", and "Islam"
So, I won't be able to see or respond to any such messages
---
Ramine <ramine@1.1>: Oct 20 03:48PM -0700

Hello..
 
 
Note: if you want to port the source code of my algorithm to C++,
please do.
 
 
Scalable Distributed Reader-Writer Mutex version 1.3 is here...
 
Like Seqlock or RCU , now version2 of my new scalable distributed
reader-writer mutex doesn't use any atomic operations and/or StoreLoad
style memory barriers on the reader side, so it's very fast and
scalable..but you have to use the define's option TLW_RWLockX or the
define's option TRWLockX inside the defines1.inc file for that.
 
 
Author: Amine Moulay Ramdane, based on Dmitry Vyukov's C++ Reader-Writer Mutex
 
 
Description:
 
A scalable Distributed Reader-Writer Mutex based on Dmitry Vyukov's
C++ Reader-Writer Mutex. This scalable Distributed Reader-Writer Mutex
works across processes and threads.
 
There are two options to choose from: you can choose to use RWLock by
uncommenting the define's option called "TRWLock" inside the file called
defines1.inc, or you can choose to use RWLockX by uncommenting the
define's option called "TRWLockX" inside the file called defines1.inc.
If you set it to RWLock it will spin-wait, but if you set it to
RWLockX it will not spin-wait; instead it will wait on the Event objects
and my SemaMonitor.
 
To compile it for Mac OS X or Linux, please uncomment the define's option
called "threads" inside the file called defines1.inc, but to use it
across processes and threads under Windows, please comment out the
define's option called "threads"...
 
And to use version1 of this scalable distributed RWLock, uncomment the
define's option version1 and comment out the define's option version2; to
use version2, which is much faster and more scalable, uncomment the
define's option version2 and comment out the define's option version1.
 
There is no limitation on version1.
 
But on version2 you are limited to 1000 threads, and you have to start
your threads once and for all and work with all your threads; don't
start a new thread each time and exit from the thread.. Version2 of my
distributed reader-writer mutex doesn't use any atomic operations and/or
StoreLoad-style memory barriers on the reader side, so it's very fast,
but you have to use the define's option TLW_RWLockX or the define's
option TRWLockX inside the defines1.inc file for that.
 
 
You can download my Scalable Distributed Reader-Writer Mutex from:
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
 
Language: FPC Pascal v2.2.0+ / Delphi 7+: http://www.freepascal.org/
 
Operating Systems: Windows, Mac OSX , Linux...
 
Required FPC switches: -O3 -Sd -dFPC -dFreePascal
 
-Sd for delphi mode....
 
Required Delphi switches: -$H+ -DDelphi
 
{$DEFINE CPU32} and {$DEFINE Windows32} for 32 bit systems
 
{$DEFINE CPU64} and {$DEFINE Windows64} for 64 bit systems
 
 
 
 
Thank you,
Amine Moulay Ramdane.
michael.podolsky.rrr@gmail.com: Oct 20 11:16AM -0700

Hi Everyone,
 
Will anything be broken in C++ if we allow an lvalue reference to bind an rvalue (a temporary)?
 
Then, the following function
 
void foo(T& t)
{ ... }
 
may be called in both ways:
 
T t;
foo(t);
foo(T()); // not valid C++, but the proposal would allow calling with a temporary
 
On the other side, if both versions of foo function are defined:
 
void foo(T& t)
{}
void foo(T&& t)
{}
 
then different versions of it will be called for lvalue and rvalue arguments, the same way as in current C++.
 
The advantage of such a modification: we do not need to write both versions of foo() and still we are able to call foo(T&) with any kind of argument! This is good at least for prototyping.
 
I do not see currently any problem with this idea, but I may miss something here. Any thoughts?
 
Thanks, Michael
Paavo Helde <myfirstname@osa.pri.ee>: Oct 20 01:46PM -0500

michael.podolsky.rrr@gmail.com wrote in
 
> Hi Everyone,
 
> Will anything be broken in C++ if we allow lvalue reference to bind a
> rvalue (a temporary)?
 
This has been tried and considered to have failed ca 20 years ago. The
canonical example:
 
void increment(int& x) {
    ++x;
}
 
int main() {
    long y = 0;
    increment(y); // under the proposed rule this binds x to a temporary
                  // int converted from y; the temporary is incremented
                  // and y silently stays 0
    return y;
}
 
FYI: MSVC used to compile such code for many years, but by now even they
have adhered to the standard.
 
Cheers
Paavo
michael.podolsky.rrr@gmail.com: Oct 20 11:55AM -0700

On Tuesday, October 20, 2015 at 2:46:38 PM UTC-4, Paavo Helde wrote:
> increment(y);
> return y;
> }
 
Yep. Thank you for pointing that out!
 
Regards,
Michael
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
