Thursday, February 4, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 3 topics

"Öö Tiib" <ootiib@hot.ee>: Feb 04 02:40AM -0800

On Thursday, 4 February 2016 11:23:29 UTC+2, David Brown wrote:
> tested restoring virtual machines from the snapshots. The time to learn
> about how to restore a VM (such as our version control server) is when
> creating a backup procedure, not when you /have/ to restore it.
 
Yes, but Jerry already explained that in a Fortune 500 company you should
have no permissions, since that is the process to avoid chaos. Also the
network must be limited to 1 MB/s to achieve superior security.
 
Visual SourceSafe is the tool of choice, not git, since whom do you trust:
foul-mouthed Linus Torvalds or billionaire Bill Gates? Unfortunately some
internal timeouts of Visual SourceSafe trigger when too many (like 5) people
access it simultaneously on a 1 MB/s network, and that corrupts the repo
daily. That can be partly mitigated by having ~10 different special-purpose
repos of the same project (like analysis, proofs of concept, design,
development, unit tests, integration tests, manual testing, release etc.),
but corruptions then sometimes happen when moving bigger files between repos.

When the repo is corrupted, then by process only the IT department's BOFH
has permissions. The BOFH has (by their process) to spend a whole day
rewinding tapes when Visual SourceSafe goes corrupt. That makes work very
inefficient. Therefore Jerry has ordered each of his team members to burn
a DVD with a backup of the repo before check-in, as part of the development
process. That has saved their bacon more than once.

So it is only our own lack of experience and ignorance that makes us
consider the process bizarrely Kafkaesque. Most real software (for example
Visual SourceSafe itself) is made that way. In Fortune 500 companies the
failing teams are decimated by contract. We rarely hear news of those
people, but Jerry is a rare non-failing example. However, I suspect that
he is now violating some NDA by spreading business secrets here.
David Brown <david.brown@hesbynett.no>: Feb 04 12:23PM +0100

On 04/02/16 02:00, Jerry Stuckle wrote:
> had 13 years as an employee. And I hate to tell you - but the
> interviewer was being kind. They tell something like that to every one
> they turn down. They aren't going to tell you you aren't qualified.
 
Oh, that must be it, of course. Thank you for making it so clear to me.
Now that I know I am unqualified and incompetent in everything I do, I
will resign my job and dig ditches for a living. Since everyone else in
c.l.c++, other than Jerry, is equally incompetent, I suggest we all gang
together as ditch diggers.
 
Paavo Helde <myfirstname@osa.pri.ee>: Feb 04 02:50PM +0200

On 4.02.2016 3:06, Jerry Stuckle wrote:
 
 
> However, if you wait for IT to restore your repos, you might be down for
> days. If you have your own backups of the repos, you can rent a server
> and be back up in a few hours.
 
Guess what? You are absolutely right, the backup processes can stumble
because they are not in daily use.
 
However, this is not an argument to make more backups. This is an
argument for switching over to a distributed SCM.
 
If we used a distributed SCM, then we would just declare another clone
of the repo as "production", and redirect the production builds to it.
We would be up and running in minutes, not in days or hours.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:06AM -0500

On 2/3/2016 8:46 PM, Ian Collins wrote:
> replication sends.
 
> You are making an arse of yourself discussing technology you don't
> understand.
 
Sigh, once again I find myself trapped into trying to teach the pig to
sing. You are talking about one specific use of snapshots. And you
think everything is like that.
 
Oink, oink.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:17AM -0500

On 2/4/2016 3:56 AM, David Brown wrote:
 
>> Backing up 100MB every 15 minutes comes out to about 9.6 GB per day.
 
> Yes, copying 100 MB every 15 minutes means 9.6 GB per 24 hours. So
> what? You picked a totally arbitrary number and extrapolated wildly.
 
Not at all. Look back. I'm not the one claiming to take snapshots
every 15 minutes. And 100 MB is not at all big for a repo - I've seen
ones in the TB range on large projects.
 
> minutes. Per developer, the median will be a few KB at most. If this
> is the machine that stores everything, including generated object files,
> debug files, etc., then changes will be bigger.
 
And when you only back up the changes, you have to restore starting with
the last full backup and every incremental backup since then. One of
them gets lost? You're SOL after that.
 
> and the features are similar. A snapshot is done in a fraction of a
> second, regardless of the size of the change. Think about that for a
> moment - it will take time to sink in.
 
Oh, I do. That's how it works on ONE system. It's not how every system
works.
 
> only the last change gets sent. Filesystems like ZFS and btrfs have
> tools to let you send over only the data that has changed since the last
> copy.
 
So what? And we're talking about repos, not file systems. Well managed
projects don't depend on file systems. They use repos.
 
> whether all your "Fortune 500" customers are also there. But it is a
> good long time since people have used tape systems with monthly full
> backups and daily incremental or differential backups.
 
Actually, tape systems are still heavily used with mainframes. And they
do nightly backups of everything, and store the tapes off site. But
then they may be backing up a couple of petabytes every night. Not at
all unusual.
 
But people who have only worked on PC's think they know how the whole
world works. You don't.
 
> lost everything, I can find it at the same place (roughly) on the backup
> server - it's a quick ssh away. If that has crashed too, I can find it
> at the same place on the off-site backup server - also a quick ssh away.
 
That's one file. What about all of the other files associated with that
change? And how do you know which files are associated with that
change? Good project management uses repos, not file systems.
 
 
> That's what you get with snapshotting, and proper backups.
 
> (And of course, development files are all in the version control
> repositories, and so old versions are available from there.)
 
And where are all of the other 2,000 files you need to complete your
project? Are they also in that snapshot? Or do you need to spend the
next three weeks trying to find them?
 
Like with Ian, I'm trying to teach a pig to sing. You've obviously
never been on a properly managed project.
 
Tell me - when the system crashes, and you have 100 programmers, each
costing you $350/hr., how long will your CEO keep you around?
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:19AM -0500

On 2/3/2016 8:48 PM, Ian Collins wrote:
>>> times that.
 
>> I understand that. And what does your disk do?
 
> So why did you say "you won't get 100MB/s throughput on a lan"?
 
You won't get it through a WAN, either, with your disk. Oink, oink.
 
 
>> Maximum sustained transfer rate: 175MB/s. Actual rate will be less than
>> 1/2 that in production.
 
> Google "stripe".
 
I did, and I'm familiar with the disk. Maximum sustained transfer rate
is 175 MB/sec. Like everything, that's a theoretical max, not real
world. But you are too stoopid to understand a simple fact. Oink, oink.
 
 
>>> Old hat, we restore from snapshots.
 
>> Snapshots only contain the changes from the last backup or snapshot.
 
> Nope.
 
You just contradicted yourself again, Ian. Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:23AM -0500

On 2/4/2016 4:15 AM, David Brown wrote:
 
> And remember, Jerry is used to IBM 350 disks - the idea that a bog
> standard cheapo hard disk can sustain well over 100 MB/s sequential
> transfer is science fiction to him.
 
Sure I have. And I also know that 10 Gb is theoretical maximum, not a
speed you will ever get. And I know "maximum sustained transfer speed"
is again a theoretical maximum - not one you will get.
 
But then my original background was EE, and my first five years with IBM
were in hardware. So unlike you, I understand the hardware.
 
 
>> Google "stripe".
 
> Also google for raid in general, disk caching, and SSD's (sit down
> before you read that one, Jerry).
 
Oh, I know all about SSDs. I also know the limitations of them -
obviously unlike you.
 
> A machine capable of sustained random-access transfers of 500 MB/s total
> to a number of different clients can probably be built for about
> $4000-$5000.
 
Go ahead - let's see you do it.
 
 
 
> <https://en.wikipedia.org/wiki/Snapshot_%28computer_storage%29>
 
>> Nope.
 
Once again you are contradicting yourself, David.
 
Oink, oink.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:24AM -0500

On 2/3/2016 8:56 PM, Ian Collins wrote:
>> about.
 
> It's becoming more and more obvious that I am wasting my time trying to
> educate you.
 
Real projects don't depend on disk file systems. They use repos. But
you've never worked on a real project - one that has been properly
managed. And a file system is NOT a repository.
 
But it's even more obvious I'm trying to teach the pig to sing. Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:26AM -0500

On 2/4/2016 4:23 AM, David Brown wrote:
> tested restoring virtual machines from the snapshots. The time to learn
> about how to restore a VM (such as our version control server) is when
> creating a backup procedure, not when you /have/ to restore it.
 
Yeah, right.
 
Now tell me - you just lost your entire repository, with over 20,000
files. You have 100 programmers, each costing you $350/hr. waiting for
the files.
 
How long do you have before you are fired?
 
Oink, oink!
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Feb 04 03:27PM +0100

On 04/02/16 15:06, Jerry Stuckle wrote:
 
> Sigh, once again I find myself trapped into trying to teach the pig to
> sing. You are talking about one specific use of snapshots. And you
> think everything is like that.
 
Whereas you are talking about a different specific use of the word
"snapshot", which is so specific that it is unique to you, and you think
everything is like that.
 
Feel free to provide a link or two justifying your idea of "snapshot",
showing clearly that a "snapshot" is a copy of the changes between two
backups or snapshots (good luck on that recursive definition of yours),
or that it is another term for "incremental backup". Bonus points will
be given if you can show this is the common usage of "snapshot" in the
world of computer storage while the definition used by myself and Ian is
an unusual and specific case.
 
To get you started, the ever-helpful wikipedia link is:
 
<https://en.wikipedia.org/wiki/Snapshot_%28computer_storage%29>
 
Unfortunately for you, this fits the definition used by Ian and myself
(and everyone else) of a snapshot being a /complete/ copy of the system
(filesystem, database, repository, whatever), usually read-only, and
usually implemented as some sort of near-instantaneous virtual copy
rather than a literal copy.
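
The "virtual copy" idea is easy to sketch (a toy C++ illustration only,
not how ZFS or btrfs actually implement it): taking a snapshot copies
only handles to the data blocks, and a block is physically duplicated
only when something later writes to it.

#include <memory>
#include <string>
#include <vector>

// Toy copy-on-write volume: snapshot() copies block handles, not data,
// so it is near-instant regardless of volume size; write() duplicates
// just the one block it touches, and only if a snapshot still shares it.
struct volume
{
    std::vector<std::shared_ptr<std::string>> blocks;

    volume snapshot() const { return *this; } // complete virtual copy

    void write(std::size_t i, std::string data)
    {
        if (blocks[i].use_count() > 1) // block still shared with a snapshot?
            blocks[i] = std::make_shared<std::string>(*blocks[i]);
        *blocks[i] = std::move(data);  // snapshots keep seeing the old block
    }
};

Note that the snapshot is a complete copy of the whole volume, exactly as
described above, yet taking it costs almost nothing.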
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:30AM -0500

On 2/4/2016 5:40 AM, Öö Tiib wrote:
 
> Yes, but Jerry already explained that in a Fortune 500 company you should
> have no permissions, since that is the process to avoid chaos. Also the
> network must be limited to 1 MB/s to achieve superior security.
 
I never said that. I said there are limited permissions to write to
each repo. And I never said anything about 1MB/s.
 
Now you're just trolling.
 
> Therefore Jerry has ordered each of his team members to burn a DVD
> with a backup of the repo before check-in, as part of the development
> process. That has saved their bacon more than once.
 
Once again you're trolling. I never said any of that.
 
> those people, but Jerry is a rare non-failing example. However, I suspect
> that he is now violating some NDA by spreading business secrets
> here.
 
That part is true. High quality software (I never mentioned VSS or any
other tool because you wouldn't be paying for them, anyway) is what is
used.
 
But the rest of your tripe is just trolling, also. You seem to be good
at that.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:32AM -0500

On 2/3/2016 8:59 PM, Ian Collins wrote:
>> commercial and/or internally developed software.
 
> They own Rational, so that isn't surprising.
 
> Who is one of the biggest contributors to the Linux kernel? IBM.
 
They BOUGHT Rational - the company. For very good reasons.
 
And why bring up their contributions to Linux? I guess you just ran out
of ways to counter my facts, so you have to try to change the subject.
 
Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:33AM -0500

On 2/4/2016 6:23 AM, David Brown wrote:
> will resign my job and dig ditches for a living. Since everyone else in
> c.l.c++, other than Jerry, is equally incompetent, I suggest we all gang
> together as ditch diggers.
 
I would guess so, at least as far as IBM is concerned. The truth hurts,
doesn't it?
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:38AM -0500

On 2/4/2016 7:50 AM, Paavo Helde wrote:
 
> If we used a distributed SCM, then we would just declare another clone
> of the repo as "production", and redirect the production builds to it.
> We would be up and running in minutes, not in days or hours.
 
Maybe, maybe not. Distributed SCMs are not the total answer, either.
They have their own problems - the first being security. The more
systems you have involved, especially in remote locations, the less
secure you are. And no matter how many you have, you can have major
problems. For instance, what happens when your fiber optic line gets cut
and can't be fixed for several days? Few companies can afford multiple
physical communications links from multiple providers, each link taking
a different physical path once it leaves the building.
 
There is no failsafe system. Even full backups, shipped off-site as
soon as they are taken (physically or via network) can have problems,
but they are still the surest form of restoring a failed system. That's
why every big company does it that way. And every properly managed
project does, also.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 10:35AM

On 04/02/2016 07:11, Öö Tiib wrote:
> break some unit test naively and waste her time. If there are also no
> unit tests that demonstrate the reason why, then that typically results
> in regression.
 
Nonsense. Why is not important; what is. If you were implementing
std::copy would you comment why? Of course not; the what is what is
important, and the code itself tells you what.
 
/Flibble
"Öö Tiib" <ootiib@hot.ee>: Feb 04 02:55AM -0800

On Thursday, 4 February 2016 12:36:04 UTC+2, Mr Flibble wrote:
 
> Nonsense. Why is not important; what is. If you were implementing
> std::copy would you comment why? Of course not; the what is what is
> important, and the code itself tells you what.
 
Such a comment is indeed needed when someone implements and uses
their own copy. A maintainer may otherwise decide that it is pointless
code, erase it, and replace usages with 'std::copy'. However, if there is
a comment that says
// because the destination remains intact in case of an exception mid-copy, unlike with 'std::copy'
then it is clear why it is used.
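
For example (a minimal sketch with a made-up name, purely to make the
point about the comment), a copy with that guarantee can build the
result off to the side and commit with a non-throwing swap:

#include <vector>

// my_copy: unlike std::copy, leaves 'dest' untouched if copying any
// element throws, because the result is built in a temporary and
// committed with swap(), which does not throw.
template <class InputIt, class T>
void my_copy(InputIt first, InputIt last, std::vector<T>& dest)
{
    std::vector<T> tmp(first, last); // element copies may throw here
    dest.swap(tmp);                  // commit point; nothing throws past this
}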
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 04 12:17PM +0100

On 2/3/2016 6:35 PM, Mr Flibble wrote:
> have disastrous consequences if incorrect assumptions are made based on
> them.
 
> The best form of documentation is the code itself!
 
I agree with all that.
 
Of course there are exceptions.
 
But in favor of your view, I once had to help a colleague with a little
Java class dealing with timestamps. I first sent her a simple
non-commented class she could use as a starting point, and she was well
satisfied with that. However, our project coding guidelines required
comments on everything, to serve as automatically generated
documentation, and I had a little free time, so I added what I thought
was reasonable commenting and sent that. This would be very helpful, I
thought, and the code was exactly the same. But now the clear
understanding evaporated: "I don't understand any of this!"
 
I guess what happened was not that the comments misled intellectually,
but that with comments added the code LOOKED MORE COMPLICATED.
 
In a similar vein, my late father once thought he couldn't use my
calculator, because it looked so complex, with lots of "math" keys. It
didn't matter that the keys he'd use were the same as on other calculators
he'd used. There was that uncertainty about the thing.
 
Francis Glassborow once remarked that the nice thing about the
introduction of syntax colouring was that one could now configure the
editor to show comments as white on white. ;-)
 
Which, I think, goes to show that your sentiment is not new, and is
shared by many who have suffered others' "well-commented" code.
 
Looks, not content.
 
 
 
Cheers,
 
- Alf
David Brown <david.brown@hesbynett.no>: Feb 04 12:33PM +0100

On 04/02/16 00:09, Ian Collins wrote:
 
> Especially if it was written with TDD where you have a lovely set of
> tests that tell you exactly what the code does :)
 
> Aren't you going to offer up your critique of Uncle Bob's TDD sausages?
 
Not long ago, I had the pleasure of bug-fixing code from a different
company that combined incomprehensible code, badly named variables and
functions, minimal commenting (some of which was in other languages), and
no possibility of any sort of testing. However, the authors clearly
understood the importance of testing, since the one appropriate comment
was "// Test this shit!".
JiiPee <no@notvalid.com>: Feb 04 12:05PM

On 04/02/2016 11:17, Alf P. Steinbach wrote:
>> them.
 
>> The best form of documentation is the code itself!
 
> I agree with all that.
 
Does this mean no comments at all, even outside the code? Say I write
code to handle base-3 numbers (as I need to have 3 values per slot: not
binary values like 101100, but possibly 201200). Now explaining the
theory (with a couple of examples) near that code helps me when I come
back a year later. It speeds things up. In a comment I tell what the
mathematical logic behind it is, plus a couple of short examples. Then
it's easy to understand the code after that.
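
For instance (a hypothetical illustration, not the actual code), the
comment carries the why and an example, and the code carries the what:

#include <vector>

// WHY base 3: each slot holds one of THREE states, so a slot sequence
// is packed as base-3 digits, slot 0 being the least significant.
// Example: {2,0,1} encodes to 2*3^0 + 0*3^1 + 1*3^2 = 11.
unsigned pack_base3(const std::vector<int>& slots) // each value in 0..2
{
    unsigned code = 0;
    for (auto it = slots.rbegin(); it != slots.rend(); ++it)
        code = code * 3 + *it; // shift left one base-3 digit, add the next
    return code;
}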
 
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 12:08PM

On 04/02/2016 12:05, JiiPee wrote:
> helps me when I come back a year later. It speeds things up.
> In a comment I tell what the mathematical logic behind it is, plus a
> couple of short examples. Then it's easy to understand the code after that.
 
I guess we can summarize both those points as: never document HOW you are
doing something, as the code itself does that.
 
/Flibble
JiiPee <no@notvalid.com>: Feb 04 12:32PM

On 04/02/2016 12:08, Mr Flibble wrote:
>> of short examples. Then it's easy to understand the code after that.
 
> I guess we can summarize both those points as: never document HOW you
> are doing something, as the code itself does that.
 
If I also explain in the code why I use that base-3 system, then it
helps in understanding the code around it. The first question when seeing
base-3 calculations there is: "why are we doing it like this? why use
base-3 numbers here?". I did have that question when I came back to the
code months later... and the comments above it helped me understand the
motive behind it.
 
The code does not answer questions like "why are we doing it like this?
what is the motive for doing this? why not do it another way? why is this
the best way to do this?"
 
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 04 01:43PM +0100

On 2/4/2016 1:32 PM, JiiPee wrote:
> helps in understanding the code around it. The first question when seeing
> base-3 calculations there is: "why are we doing it like this? why use
> base-3 numbers here?".
 
That's because the NIM game with 3 heaps has a simple solution in base 3.
 
 
 
> The code does not answer questions like "why are we doing it like this?
> what is the motive for doing this? why not do it another way? why is this
> the best way to do this?"
 
Could be useful.
 
IMHO it all depends on whether the comments really add something that is
useful and can't be easily expressed in the code itself.
 
 
Cheers!,
 
- Alf
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:03AM -0500

On 2/3/2016 6:19 PM, Mr Flibble wrote:
 
> Perhaps your problem is that you are confusing TDD with unit testing?
> Unit tests are great, TDD isn't.
 
> /Flibble
 
Well written code indicates WHAT it does.
 
Well written comments indicate WHY the code does what it does. They also
define the input and output conditions of a function, and other
information not part of the code.
 
Completely different things.
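
For example (an invented function, just to show the distinction):

#include <algorithm>
#include <vector>

// WHY binary search: the table is kept sorted by the loader, and lookups
// vastly outnumber insertions.
// Input conditions:  v is sorted ascending; key may or may not be present.
// Output conditions: returns the index of key in v, or -1 if absent;
//                    v is unchanged.
int find_index(const std::vector<int>& v, int key)
{
    auto it = std::lower_bound(v.begin(), v.end(), key);
    if (it != v.end() && *it == key)
        return static_cast<int>(it - v.begin());
    return -1;
}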
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 04 12:00PM

Check out my codes!
 
void radio_button::set_on_state(bool aOnState)
{
    if (iOnState != aOnState)
    {
        if (aOnState)
        {
            i_widget* candidate = &link_after();
            while (candidate != this)
            {
                if (is_sibling(*candidate))
                {
                    // Teh ghastly dynamic_cast! A simpler CLEAN solution which
                    // doesn't leak details everywhere doesn't immediately spring to mind.
                    radio_button* candidateRadioButton = dynamic_cast<radio_button*>(candidate);
                    if (candidateRadioButton != 0)
                        candidateRadioButton->set_on_state(false);
                }
                candidate = &candidate->link_after();
            }
        }
        iOnState = aOnState;
        update();
        if (is_on())
            on.trigger();
        else if (is_off())
            off.trigger();
    }
}
 
/Flibble
"Öö Tiib" <ootiib@hot.ee>: Feb 04 04:31AM -0800

On Thursday, 4 February 2016 14:00:51 UTC+2, Mr Flibble wrote:
> }
> }
 
> /Flibble
 
Maybe have a 'radio_button* radio_button::next_sibling()' instead of what
seem to be 'i_widget* radio_button::link_after()' and
'bool radio_button::is_sibling(i_widget*)'.
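
A rough sketch of that idea, guessing at the surrounding class since
only the one member function was posted:

// Guesswork: assumes radio_button derives from i_widget and that
// link_after()/is_sibling() behave as in the posted code. Returns the
// next radio button sibling after 'from', or nullptr once the walk
// comes back around to *this.
radio_button* radio_button::next_sibling_radio_button(i_widget& from)
{
    for (i_widget* candidate = &from.link_after(); candidate != this;
         candidate = &candidate->link_after())
    {
        if (is_sibling(*candidate))
            if (radio_button* rb = dynamic_cast<radio_button*>(candidate))
                return rb; // the cast now lives in exactly one place
    }
    return nullptr;
}

// set_on_state() could then switch the other buttons off with:
//     for (auto* rb = next_sibling_radio_button(*this); rb != 0;
//          rb = next_sibling_radio_button(*rb))
//         rb->set_on_state(false);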