Thursday, February 4, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

JiiPee <no@notvalid.com>: Feb 04 04:07PM

On 04/02/2016 12:43, Alf P. Steinbach wrote:
> On 2/4/2016 1:32 PM, JiiPee wrote:
 
> IMHO it all depends on whether the comments really add something that
> is useful and can't be easily expressed in the code itself.
 
you mean not like this:
 
// here we are looping through all the humans in the vector and printing
// their information!
for (const auto& a : humans)
    a.print();
 
 
hehe
 
 
Cholo Lennon <chololennon@hotmail.com>: Feb 04 01:27PM -0300

On 02/04/2016 11:03 AM, Jerry Stuckle wrote:
> defines input and output conditions to a function, and other information
> not part of the code.
 
> Completely different things.
 
+1 I fully agree
 
 
--
Cholo Lennon
Bs.As.
ARG
Jorgen Grahn <grahn+nntp@snipabacken.se>: Feb 04 05:49PM

On Thu, 2016-02-04, Alf P. Steinbach wrote:
>> them.
 
>> The best form of documentation is the code itself!
 
> I agree with all that.
 
Come to think of it, I do too. I disagree with the subject
"commenting code considered harmful", but the text you quote is fine
by me.
 
> non-commented class she could use as starting point, and she was well
> satisfied with that. However, our project coding guidelines required
> comments on everything
 
:-/ That's not documentation -- you weren't free to document things in
an optimal way. You were in effect being paid to make the
code worse and harder to maintain.
 
Part of the trick with good (or decent) documentation is to know what
to leave out, or let speak for itself. If that tool is not available,
the results will not be very good no matter how hard you try.
 
...
> Looks, not content.
 
Yes.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Andrea Venturoli <ml.diespammer@netfence.it>: Feb 04 07:38PM +0100

On 02/04/16 15:03, Jerry Stuckle wrote:
 
> defines input and output conditions to a function, and other information
> not part of the code.
 
> Completely different things.
 
Thanks.
You spoke what I think, saying it far better than I could.
 
 
 
 
I've been struggling with colleagues who write code like the following
(I simplified, obviously):
 
> BOOST_CHECK_THROW(g(0))
> ...
> BOOST_CHECK_THROW(z(3,4))
 
Think 20-40 lines of those comments, followed by 20-40 lines of code,
which will soon get out of sync.
 
 
 
Of course, writing *why* something should fail would probably avoid
wading through hundreds of pages of technical specs, but that would be
too good, apparently.
Daniel <danielaparker@gmail.com>: Feb 04 09:06AM -0800

How do you organize your put-downs with regard to the number of posts you make? Let's say you have 20-30 different put-downs and each takes one to two lines of text, e.g. "you apparently haven't worked since the PC was invented", "leave you there in your own little out-of-date pigsty". Would you separate them into different posts or put them in the same post? Is it a good idea to have a lot of smaller put-downs in a post, or combine them?
 
Thanks,
Daniel
"Öö Tiib" <ootiib@hot.ee>: Feb 04 09:34AM -0800

On Thursday, 4 February 2016 19:06:37 UTC+2, Daniel wrote:
> How do you organize your put-downs with regards to the number of posts you make? Lets say you have 20-30 different put-downs and each takes one to two lines of text, e.g. "you apparently haven't worked since the PC was invented", "leave you there in your own little out-of-date pigsty". Would you separate them to different posts or put them in the same post? Is it a good idea to have a lot of smaller put-downs in a post, or combine them?
 
It is not important. The main idea is to socialize. :D
legalize+jeeves@mail.xmission.com (Richard): Feb 04 05:32PM

[Please do not mail me a copy of your followup]
 
Jerry Stuckle <jstucklex@attglobal.net> spake the secret code
>> http://webEbenezer.net
 
>Lots of BS. For instance, the Washington, DC group hasn't met in over
>two years, AFAIK.
 
You pick one example and generalize to the whole list.
 
I can do that too.
 
The Utah group has met every month for its entire existence.
 
Therefore, the list is flawless.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
David Brown <david.brown@hesbynett.no>: Feb 04 04:21PM +0100

On 04/02/16 15:17, Jerry Stuckle wrote:
 
> Not at all. Look back. I'm not the one claiming to take snapshots
> every 15 minutes. And 100 MB is not at all big for a repo - I've seen
> ones in the TB range on large projects.
 
You pulled the 100 MB figure out of the air (or somewhere worse). And
since you /still/ don't understand these things, it is the /changes/
that need to be copied (if anything is being copied at all - snapshots
are generally on the same system).
 
 
> And when you only back up the changes, you have to restore starting with
> the last full backup and every incremental backup since then. One of
> them gets lost? You're SOL after that.
 
Maybe you are thinking of tapes stored in a vault, or perhaps punched cards.
 
Here in the real world, in /this/ century, these things are done using
disks.
 
Let me explain in very small steps.
 
Server 1 has data A on it.
Server 2 is an independent copy, perhaps off site. It also has data A
on it.
 
Server 1 takes a snapshot. It now has a read-only copy of A, and a
working set A.
 
You write some new data B1 to server 1. Now it has a read-only set A,
and a working set (A + B1).
 
Server 1 takes a new snapshot. It now has read-only copies of A and
(A + B1), and a working set (A + B1).
 
You write some more new data, changing B1 to B2. It takes a new
snapshot. Server 1 now has read-only copies A, (A + B1), (A + B2), as
well as the working set (A + B2).
 
You do a backup to server 2. This involves server 2 first taking a
snapshot, then you transfer the difference between server 1's last
snapshot and the last snapshot on server 2 - that is, B2. Now server 2
has read-only snapshots A and (A + B2).
 
At every point, you have multiple old copies of the data directly
available. How often you take your snapshots, how often you back them
up to other servers, how many duplicate copies you have, how often (if
at all) you prune old snapshots - those are all details that must fit
your budget and your requirements.
 
But at no point are you having to search through old heaps of tapes to
find the files you want.
 
>> moment - it will take time to sink in.
 
> Oh, I do. That's how it works on ONE system. It's not how every system
> works.
 
It is pretty much how all modern snapshotting is done. You might use
different filesystems, or different technologies (such as lvm, which
works at a layer below the filesystem). You might include clustered
filesystems, or use raid with some drives using iSCSI for off-site
replication. But that is how filesystem snapshotting works.
 
>> copy.
 
> So what? And we're talking about repos, not file systems. Well managed
> projects don't depend on file systems. They use repos.
 
The repositories sit on filesystems, and run on servers (usually virtual
servers). The only time your repositories get corrupted (assuming the
IT folk setting it up are not morons) is if there is a serious failure
on the server software, the filesystem or the hardware - and then your
snapshots and/or filesystem backups are used to restore operation.
 
You don't need to snapshot your repositories - with a version control
system, /every/ commit is a snapshot.
 
If you like, you can do a dump of the repository data and back up those
files. It is usually a somewhat time-consuming process, but worth doing
to provide extra recovery options (for disaster recovery, not the normal
development process). This can be particularly useful if your budget
does not extend to replicated servers and you are happy with the idea of
"The server died. We'll get a new one running by tomorrow, and
re-install the version control software on it." Of course, if you use a
distributed version control system (which is great for some types of
development team, less good for other types) then none of this is an issue.
 
> do nightly backups of everything, and store the tapes off site. But
> then they may be backing up a couple of petabytes every night. Not at
> all unusual.
 
No one (this side of Y2K) uses mainframes for their version control
systems, and no one stores petabytes of data in their software
development servers. Sure, mainframes are in use - and sure, tape
backup systems are in use. But not for software development.
 
 
> But people who have only worked on PC's think they know how the whole
> world works. You don't.
 
And you apparently haven't worked since the PC was invented, and yet you
still think you know how the world works.
 
 
> That's one file. What about all of the other files associated with that
> change? And how do you know which files are associated with that
> change? Good project management uses repos, not file systems.
 
When I need an old file from the repository, I find it using the version
control system's clients. There are all sorts of ways, depending on the
particular version control system in use. But key to all version
control systems is the ability to easily see old files and directories,
see what was changed, by whom, and when, compare revisions, and collect
copies of old versions. If your version control system can't handle
that, you don't have a "repository", you have a "shared folder".
 
 
> And where are all of the other 2,000 files you need to complete your
> project? Are they also in that snapshot? Or do you need to spend the
> next three weeks trying to find them?
 
They are all there, in the snapshots or the repository as appropriate.
 
 
> never been on a properly managed project.
 
> Tell me - when the system crashes, and you have 100 programmers, each
> costing you $350/hr., how long will your CEO keep you around?
 
If I were running the IT for 100 programmers, the system would not crash
- because the distributed version control running on independent servers
would have a redundancy level making it far more likely for the building
to be destroyed by fire or flood than for multiple disk, server, or
power failures to stop /all/ the servers. And if that happened, the
off-site redundant copies would still be running.
 
I don't have quite such a system, because we are a much smaller group,
and we have a system appropriate for our needs. But I know how to
design such a high-reliability system.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 10:30AM -0500

On 2/4/2016 9:27 AM, David Brown wrote:
> (filesystem, database, repository, whatever), usually read-only, and
> usually implemented as some sort of near-instantaneous virtual copy
> rather than a literal copy.
 
Now which is it - a complete copy, or a copy of the changes? In one
case you copy the entire file system. In the other you copy the changes.
 
Which is it?
 
Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 10:42AM -0500

On 2/4/2016 10:21 AM, David Brown wrote:
> since you /still/ don't understand these things, it is the /changes/
> that need to be copied (if anything is being copied at all - snapshots
> are generally on the same system).
 
It is not all that big for a repo. I've seen ones in the TB range on
large projects.
 
And which is it - the file system or the changes? You can't have it
both ways.
 
Oink, oink.
 
> budget and your requirements.
 
> But at no point are you having to search through old heaps of tapes to
> find the files you want.
 
Real project management uses repos, not file systems, to store the code.
But you obviously don't.
 
> works at a layer below the filesystem). You might include clustered
> filesystems, or use raid with some drives using iSCSI for off-site
> replication. But that is how filesystem snapshotting works.
 
Properly managed projects use repos, not file systems, to store the code.
 
> IT folk setting it up are not morons) is if there is a serious failure
> on the server software, the filesystem or the hardware - and then your
> snapshots and/or filesystem backups are used to restore operation.
 
That's right. And WHEN (not IF) there is a serious failure, your repo
gets corrupted. Your snapshot may or may not save you.
 
Oink, oink.
 
> You don't need to snapshot your repositories - with a version control
> system, /every/ commit is a snapshot.
 
Right. Oink, oink.
 
> re-install the version control software on it." Of course, if you use a
> distributed version control system (which is great for some types of
> development team, less good for other types) then none of this is an issue.
 
Ah, that's one way of doing it. Also backing up the repository itself,
which is generally how project admins do it (separate from the IT
folks). That way recovery is quite fast, no matter what the failure.
 
> systems, and no one stores petabytes of data in their software
> development servers. Sure, mainframes are in use - and sure, tape
> backup systems are in use. But not for software development.
 
Gee, there are thousands of companies on this side who use
mainframes. I'll bet there are on your side, also. And like here, I'll
bet they have mainframe development systems, also. How else do they
develop mainframe applications? On TRS-80's? You probably think so.
 
You really have no idea how the world outside of your little corner
works, do you?
 
>> world works. You don't.
 
> And you apparently haven't worked since the PC was invented, and yet you
> still think you know how the world works.
 
And you think the PC is the only thing in the world. Here's a clue - it
isn't. Oink, oink.
 
> see what was changed, by whom, and when, compare revisions, and collect
> copies of old versions. If your version control system can't handle
> that, you don't have a "repository", you have a "shared folder".
 
Which is what you have with your ZFS. And what happens when your
repository gets corrupted?
 
Oink, oink.
 
>> project? Are they also in that snapshot? Or do you need to spend the
>> next three weeks trying to find them?
 
> They are all there, in the snapshots or the repository as appropriate.
 
Until your repository gets corrupted. Oink, oink.
 
> to be destroyed by fire or flood than for multiple disk, server, or
> power failures to stop /all/ the servers. And if that happened, the
> off-site redundant copies would still be running.
 
Systems are never 100%. They do crash - even with redundant systems.
You're living in a dream world.
 
Those in IT who ARE in charge of systems like that understand the
weaknesses of the system - and KNOW systems can and will crash. And
they have processes in place to recover.
 
> I don't have quite such a system, because we are a much smaller group,
> and we have a system appropriate for our needs. But I know how to
> design such a high-reliability system.
 
It's obvious you don't have such a system, because you have no idea how
to properly implement it. You only think you do.
 
Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Feb 04 04:59PM +0100

On 04/02/16 15:23, Jerry Stuckle wrote:
> is again a theoretical maximum - not one you will get.
 
> But then my original background was EE, and my first five years with IBM
> was in hardware. So unlike you, I understand the hardware.
 
I've got news for you - I also do electronics design, and I also know
how the hardware works. And since I have a background in mathematics, I
understand that even though a 10 Gb link is not going to hit quite 10 Gb
throughput, it is still plenty for handling 100 MB/s.
 
>> to a number of different clients can probably be built for about
>> $4000-$5000.
 
> Go ahead - let's see you do it.
 
I can get a solid Dell rack server for under $2K, with 2 x 10 GbE. We
did not mention disk capacity, but 4 SAS-3 2TB drives is about $700 for
a total of 4 TB with raid. For the same price, I could get 4 TB SATA
hybrid disks, which may be a better choice. Maybe I'd splash out on
$400 for a PCI SSD for the system, log files, raid journal, etc.
Another $250 for a UPS, and $250 for some more memory. That's a total
of $3600 - based on Norwegian prices (which are usually a lot higher
than American prices), all using parts I can order today from my usual
supplier.
 
In practice, I might want to pay a bit more for more expandability - at
$2500 I get a server with a faster processor and 8 2.5" bays. 6 SAS-3
1TB drives now come to $1400 giving a lot more headroom for the disk
throughput. With an extra redundant power supply for the server at
$100, that would come to $4900.
 
Setting up the disk system would depend on the type of files and access
that is needed. For maximal sustained read of single files with less
regard for write speed, I'd use Linux md raid10 "offset" mode so that
I'd get full width parallel reads (raid0 speed), at the cost of slower
writes. For more general use, I'd probably set up the disks in raid1
pairs (using the hardware raid at that level), then stripe a raid0
across them (for faster access to large files) or put a btrfs directly
on the three raid1 pairs for greater flexibility. If it were a mail server,
I'd use xfs on a linear concatenation of the raid1 pairs.
David Brown <david.brown@hesbynett.no>: Feb 04 05:07PM +0100

On 04/02/16 16:30, Jerry Stuckle wrote:
 
> Now which is it - a complete copy, or a copy of the changes? In one
> case you copy the entire file system. In the other you copy the changes.
 
> Which is it?
 
The snapshot is logically a complete copy, but it only takes the space
of changed files, and only changes need to be replicated to another
machine. It's quite simple.
 
So again, let's have some links supporting your view.
David Brown <david.brown@hesbynett.no>: Feb 04 05:11PM +0100

On 04/02/16 16:42, Jerry Stuckle wrote:
> mainframes. I'll bet there are on your side, also. And like here, I'll
> bet they have mainframe development systems, also. How else do they
> develop mainframe applications? On TRS-80's? You probably think so.
 
Since you have finally come clean and admitted you are still living on
the other side of the year 2000, I think I'll just leave you there in
your own little out-of-date pigsty. (Not that I think many of your
ideas were appropriate 16 years ago either.)
 
If anyone else has questions about snapshots, I'll be happy to comment.
But I believe I will leave Jerry to himself for a while.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 12:04PM -0500

On 2/4/2016 11:07 AM, David Brown wrote:
> of changed files, and only changes need to be replicated to another
> machine. It's quite simple.
 
> So again, let's have some links supporting your view.
 
So you don't have the old copies then, either. I thought you were using
a repository.
 
But I'm tired of teaching the pig to sing. Let me know when you learn
how things work in the real world - instead of your little corner. I
won't be holding my breath.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 12:04PM -0500

On 2/4/2016 11:11 AM, David Brown wrote:
> ideas were appropriate 16 years ago either.)
 
> If anyone else has questions about snapshots, I'll be happy to comment.
> But I believe I will leave Jerry to himself for a while.
 
Oink, oink!
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 12:04PM -0500

On 2/4/2016 10:59 AM, David Brown wrote:
> across them (for faster access to large files) or put a btrfs directly
> on the three disks for greater flexibility. If it were a mail server,
> I'd use xfs on a linear concatenation of the raid1 pairs.
 
Oink, oink!
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Öö Tiib" <ootiib@hot.ee>: Feb 04 09:23AM -0800

On Thursday, 4 February 2016 19:04:35 UTC+2, Jerry Stuckle wrote:
 
> Oink, oink!
 
Jerry, dating that Circe is clearly not good for your health.
Geoff <geoff@invalid.invalid>: Feb 04 09:29AM -0800

On Thu, 4 Feb 2016 12:04:52 -0500, Jerry Stuckle
 
>Oink, oink!
 
This, from someone who claims to have 30 years industrial experience
and to be a consultant and project manager, demonstrates an astounding
level of immaturity.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 12:45PM

On 04/02/2016 12:31, Öö Tiib wrote:
 
>> /Flibble
 
> May be have 'radio_button* radio_button::next_sibling()' instead of what seem
> to be 'i_widget* radio_button::link_after()' and 'bool radio_button::is_sibling(i_widget*)'.
 
link_after and is_sibling are members of the widget base class, but yes,
"next_radio_button" would be better.
 
/Flibble
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 12:56PM

On 04/02/2016 12:45, Mr Flibble wrote:
>> radio_button::is_sibling(i_widget*)'.
 
> link_after and is_sibling are members of widget base class but yes
> "next_radio_button" would be better.
 
const radio_button* radio_button::next_radio_button() const
{
    const i_widget* candidate = &link_after();
    while (candidate != this)
    {
        if (is_sibling(*candidate))
        {
            // Teh ghastly dynamic_cast! A simpler CLEAN solution which
            // doesn't leak details everywhere doesn't immediately spring to mind.
            const radio_button* candidateRadioButton =
                dynamic_cast<const radio_button*>(candidate);
            if (candidateRadioButton != 0)
                return candidateRadioButton;
        }
        candidate = &candidate->link_after();
    }
    return this;
}
 
radio_button* radio_button::next_radio_button()
{
    return const_cast<radio_button*>(
        const_cast<const radio_button*>(this)->next_radio_button());
}
 
void radio_button::set_on_state(bool aOnState)
{
    if (iOnState != aOnState)
    {
        if (aOnState)
            for (radio_button* nextRadioButton = next_radio_button();
                 nextRadioButton != this;
                 nextRadioButton = nextRadioButton->next_radio_button())
                nextRadioButton->set_on_state(false);
        iOnState = aOnState;
        update();
        if (is_on())
            on.trigger();
        else if (is_off())
            off.trigger();
    }
}
 
/Flibble
"Öö Tiib" <ootiib@hot.ee>: Feb 04 06:56AM -0800

On Thursday, 4 February 2016 14:56:21 UTC+2, Mr Flibble wrote:
> }
> }
 
> /Flibble
 
Hmm. Sometimes the visitor pattern is used to search or manipulate
elements of a run-time hierarchy of objects of several types. It replaces
'dynamic_cast' with 2 virtual calls ('accept' of the object calls 'visit'
of the visitor), so it is not more efficient. Also, some people hate the
pattern. However, there is no need for 'typeid' or 'dynamic_cast' with it.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 03:13PM

On 04/02/2016 14:56, Öö Tiib wrote:
> 'dynamic_cast' with 2 virtual calls ('accept' of object calls 'visit' of visitor)
> so it is not more efficient. Also some people hate the pattern. However
> there won't be need for 'typeid' or 'dynamic_cast' with it.
 
Visitor pattern wouldn't be appropriate in this instance as it would
involve creating a closed (class) leaky abstraction (the visitor interface).
 
/Flibble
"Öö Tiib" <ootiib@hot.ee>: Feb 04 09:17AM -0800

On Thursday, 4 February 2016 17:13:19 UTC+2, Mr Flibble wrote:
> > there won't be need for 'typeid' or 'dynamic_cast' with it.
 
> Visitor pattern wouldn't be appropriate in this instance as it would
> involve creating a closed (class) leaky abstraction (the visitor interface).
 
Why leaky? You can make the off-turning visitor as local as you want.
Usage is only slightly more code than typing 'dynamic_cast' and
checking that the result is not 'nullptr'. Of course, if you don't want
to visit your widgets for other purposes as well, then it is a waste.
 
// that is the only copy-paste bloat that visitor pattern adds to
// visitable stuff
void radio_button::accept(widget_visitor& v)
{
    v.visit(*this);
}
 
void radio_button::set_on_state(bool aOnState)
{
    if (iOnState != aOnState)
    {
        if (aOnState)
        {
            struct off_others
                : public widget_visitor // its 'visit' overloads do nothing
            {
                virtual void visit(radio_button& aThat) override { aThat.set_on_state(false); }
            } offOthers();
 
            for (i_widget* p = &link_after(); p != this; p = &p->link_after())
                if (is_sibling(*p))
                    p->accept(offOthers);
        }
        iOnState = aOnState;
        update();
        if (is_on())
            on.trigger();
        else if (is_off())
            off.trigger();
    }
}
"Öö Tiib" <ootiib@hot.ee>: Feb 04 09:27AM -0800

On Thursday, 4 February 2016 19:17:55 UTC+2, Öö Tiib wrote:
> {
> virtual void visit(radio_button& aThat) override { aThat.set_on_state(false); }
> } offOthers();
 
Typo ... 'offOthers;' without parentheses here otherwise it is vexing parse.
JiiPee <no@notvalid.com>: Feb 04 04:57PM

I was reading Sutters book where he adviced how to create a perfect
class. In one place he said that here:
 
class Human
{
public:
string getName() const { return m_name; }
private:
string m_name;
};
 
it should be:
const string getName() const { return m_name; }
 
instead so that the user cannot by accident try to set the temporary
variable like:
 
Human a;
a.getName() = "Peter";
 
So preventing this kind of "error" use. I kind of agree, but do you
people also think that all getters should be done like this (if they
return an temporary object)?
 
I do not see class makers doing this consistently.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
