Tuesday, February 9, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 3 topics

Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 10:42AM -0500

On 2/4/2016 10:21 AM, David Brown wrote:
> since you /still/ don't understand these things, it is the /changes/
> that need to be copied (if anything is being copied at all - snapshots
> are generally on the same system).
 
It is not all that big for a repo. I've seen ones in the TB range on
large projects.
 
And which is it - the file system or the changes? You can't have it
both ways.
 
Oink, oink.
 
> budget and your requirements.
 
> But at no point are you having to search though old heaps of tapes to
> find the files you want.
 
Real project management uses repos, not file systems, to store the code.
But you obviously don't.
 
> works at a layer below the filesystem). You might include clustered
> filesystems, or use raid with some drives using iSCSI for off-site
> replication. But that is how filesystem snapshotting works.
 
Properly managed projects use repos, not file systems, to store the code.
 
> IT folk setting it up are not morons) is if there is a serious failure
> on the server software, the filesystem or the hardware - and then your
> snapshots and/or filesystem backups are used to restore operation.
 
That's right. And WHEN (not IF) there is a serious failure, your repo
gets corrupted. Your snapshot may or may not save you.
 
Oink, oink.
 
> You don't need to snapshot your repositories - with a version control
> system, /every/ commit is a snapshot.
 
Right. Oink, oink.
 
> re-install the version control software on it." Of course, if you use a
> distributed version control system (which is great for some types of
> development team, less good for other types) then none of this is an issue.
 
Ah, that's one way of doing it. Also backing up the repository itself,
which is generally how project admins do it (separate from the IT
folks). That way recovery is quite fast, no matter what the failure.
 
> systems, and no one stores petabytes of data in their software
> development servers. Sure, mainframes are in use - and sure, tape
> backup systems are in use. But not for software development.
 
Gee, there are thousands of companies on this side who use
mainframes. I'll bet there are on your side, also. And like here, I'll
bet they have mainframe development systems, also. How else do they
develop mainframe applications? On TRS-80's? You probably think so.
 
You really have no idea how the world outside of your little corner
works, do you?
 
>> world works. You don't.
 
> And you apparently haven't worked since the PC was invented, and yet you
> still think you know how the world works.
 
And you think the PC is the only thing in the world. Here's a clue - it
isn't. Oink, oink.
 
> see what was changed, by whom, and when, compare revisions, and collect
> copies of old versions. If your version control system can't handle
> that, you don't have a "repository", you have a "shared folder".
 
Which is what you have with your ZFS. And what happens when your
repository gets corrupted?
 
Oink, oink.
 
>> project? Are they also in that snapshot? Or do you need to spend the
>> next three weeks trying to find them?
 
> They are all there, in the snapshots or the repository as appropriate.
 
Until your repository gets corrupted. Oink, oink.
 
> to be destroyed by fire or flood than for multiple disk, server, or
> power failures to stop /all/ the servers. And if that happened, the
> off-site redundant copies would still be running.
 
Systems are never 100%. They do crash - even with redundant systems.
You're living in a dream world.
 
Those in IT who ARE in charge of systems like that understand the
weaknesses of the system - and KNOW systems can and will crash. And
they have processes in place to recover.
 
> I don't have quite such a system, because we are a much smaller group,
> and we have a system appropriate for our needs. But I know how to
> design such a high-reliability system.
 
It's obvious you don't have such a system, because you have no idea how
to properly implement it. You only think you do.
 
Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Feb 04 04:59PM +0100

On 04/02/16 15:23, Jerry Stuckle wrote:
> is again a theoretical maximum - not one you will get.
 
> But then my original background was EE, and my first five years with IBM
> was in hardware. So unlike you, I understand the hardware.
 
I've got news for you - I also do electronics design, and I also know
how the hardware works. And since I have a background in mathematics, I
understand that even though a 10 Gb link is not going to hit quite 10 Gb
throughput, it is still plenty for handling 100 MB/s.
 
>> to a number of different clients can probably be built for about
>> $4000-$5000.
 
> Go ahead - let's see you do it.
 
I can get a solid Dell rack server for under $2K, with 2 x 10 Gb Ethernet. We
did not mention disk capacity, but 4 SAS-3 2TB drives is about $700 for
a total of 4 TB with raid. For the same price, I could get 4 TB SATA
hybrid disks, which may be a better choice. Maybe I'd splash out on
$400 for a PCI SSD for the system, log files, raid journal, etc.
Another $250 for a UPS, and $250 for some more memory. That's a total
of $3600 - based on Norwegian prices (which are usually a lot higher
than American prices), all using parts I can order today from my usual
supplier.
 
In practice, I might want to pay a bit more for more expandability - at
$2500 I get a server with a faster processor and 8 2.5" bays. 6 SAS-3
1TB drives now come to $1400 giving a lot more headroom for the disk
throughput. With an extra redundant power supply for the server at
$100, that would come to $4900.
 
Setting up the disk system would depend on the type of files and access
that is needed. For maximal sustained read of single files with less
regard for write speed, I'd use Linux md raid10 "offset" mode so that
I'd get full width parallel reads (raid0 speed), at the cost of slower
writes. For more general use, I'd probably set up the disks in raid1
pairs (using the hardware raid at that level), then stripe a raid0
across them (for faster access to large files) or put a btrfs directly
on the three raid1 pairs for greater flexibility. If it were a mail server,
I'd use xfs on a linear concatenation of the raid1 pairs.
David Brown <david.brown@hesbynett.no>: Feb 04 05:11PM +0100

On 04/02/16 16:42, Jerry Stuckle wrote:
> mainframes. I'll bet there are on your side, also. And like here, I'll
> bet they have mainframe development systems, also. How else do they
> develop mainframe applications? On TRS-80's? You probably think so.
 
Since you have finally come clean and admitted you are still living on
the other side of the year 2000, I think I'll just leave you there in
your own little out-of-date pigsty. (Not that I think many of your
ideas were appropriate 16 years ago either.)
 
If anyone else has questions about snapshots, I'll be happy to comment.
But I believe I will leave Jerry to himself for a while.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 12:04PM -0500

On 2/4/2016 10:59 AM, David Brown wrote:
> across them (for faster access to large files) or put a btrfs directly
> on the three disks for greater flexibility. If it were a mail server,
> I'd use xfs on a linear concatenation of the raid1 pairs.
 
Oink, oink!
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 12:04PM -0500

On 2/4/2016 11:07 AM, David Brown wrote:
> of changed files, and only changes need to be replicated to another
> machine. It's quite simple.
 
> So again, let's have some links supporting your view.
 
So you don't have the old copies then, either. I thought you were using
a repository.
 
But I'm tired of teaching the pig to sing. Let me know when you learn
how things work in the real world - instead of your little corner. I
won't be holding my breath.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 12:04PM -0500

On 2/4/2016 11:11 AM, David Brown wrote:
> ideas were appropriate 16 years ago either.)
 
> If anyone else has questions about snapshots, I'll be happy to comment.
> But I believe I will leave Jerry to himself for a while.
 
Oink, oink!
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Ian Collins <ian-news@hotmail.com>: Feb 05 08:06AM +1300

Jerry Stuckle wrote:
 
> Real project management uses repos, not file systems, to store the code.
 
So where do these repos live?
 
--
Ian Collins
Ian Collins <ian-news@hotmail.com>: Feb 05 08:18AM +1300

Jerry Stuckle wrote:
 
> Real projects don't depend on disk file systems. They use repos. But
> you've never worked on a real project - one that has been properly
> managed. And a file system is NOT a repository.
 
Where do your mythical "repos" live?
 
If you can't answer that simple question, game over.
 
Do you even know what a repo is?
 
If you can't answer that simple question either, game over.
--
Ian Collins
Ian Collins <ian-news@hotmail.com>: Feb 05 08:16AM +1300

Jerry Stuckle wrote:
 
>>> I understand that. And what does your disk do?
 
>> So why did you say "you won't get 100MB/s throughput on a lan"?
 
> You won't get it through a WAN, either, with your disk. Oink, oink.
 
Ah, I see you are unable to put up anything close to a coherent argument
and have reverted to type. I was expecting more, but life is full of
disappointments.
 
I don't have "a disk"; I have a storage pool. I'm happy to explain how
they work (I build them for a living) and to refer you to the
appropriate forum if you wish to learn something. I doubt that you will
take me up on either offer.
 
--
Ian Collins
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 08:18PM -0500

On 2/4/2016 7:18 PM, Robert Wessel wrote:
 
> VSS was replaced with TFS a decade ago. It's long dead, and it had its
> problems (although the last release did seem pretty solid).
> TFS even interfaces to Git these days.
 
Shhh. Don't burst his bubble!
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Robert Wessel <robertwessel2@yahoo.com>: Feb 04 06:18PM -0600

On Thu, 4 Feb 2016 02:40:49 -0800 (PST), Öö Tiib <ootiib@hot.ee>
wrote:
 
>those people but Jerry is rare non-failing example. However I suspect
>that he is now violating some NDA by spreading the business secrets
>here.
 
 
VSS was replaced with TFS a decade ago. It's long dead, and it had its
problems (although the last release did seem pretty solid).
TFS even interfaces to Git these days.
Paavo Helde <myfirstname@osa.pri.ee>: Feb 05 01:20PM +0200

On 4.02.2016 16:38, Jerry Stuckle wrote:
 
> They have their own problems - the first being security. The more
> systems you have involved, especially in remote locations, the less
> secure you are.
 
What do you mean by security? Are you afraid that a laptop gets lost or
hacked and your valuable software is stolen? Do not worry, nobody wants
to do anything with your software which only runs on mainframes and
which has been implemented in those few spare moments your 30 developers
were able to find between endless meetings and fighting other corporate
obstacles.
 
Only other similar companies might be interested, but they are too busy
restoring their repos from tapes.
 
> and can't be fixed for several days? Few companies can afford multiple
> physical communications links from multiple providers, each link taking
> a different physical path once it leaves the building.
 
And how do you propose restoring anything from your backups over the
broken link? With a distributed SCM all developers can just carry on as
usual; merging the repos is simply delayed until the network comes back.
 
 
> There is no failsafe system. Even full backups, shipped off-site as
> soon as they are taken (physically or via network) can have problems,
> but they are still the surest form of restoring a failed system.
 
Nope, 30 clones of a distributed SCM repo all over the world is by far
more fail-safe than an off-site backup in a bank vault.
 
Let's say the chance of losing a repo on a remote laptop is once per
year. The probability of that happening in a given week is then 1/50 =
0.02. I choose a week because within a week the developer should get it
working again, cloning the repo from another developer. Now, the
probability of losing all 30 repos in a given week, assuming that the
events are independent, is 0.02^30 = 1e-51.
 
Let's say a bank vault can stay intact for a billion years (a clear
exaggeration). The probability of losing it in a given week is then
1/(10^9*50) = 2e-11. You see that it is 1e40 times more probable to lose
the bank vault.
 
Of course the assumption that all laptop losses are independent is not
correct. However, already 7 independent locations (like 6 people working
from home on any given day) beat the billion-year bank vault.
 
 
> That's
> why every big company does it that way. And every properly managed
> project does, also.
 
The big companies do it because this was the best practice 30 years ago,
plus they don't trust their workers and attempt to control everything
centrally.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 05 10:28AM -0500

On 2/5/2016 6:20 AM, Paavo Helde wrote:
> obstacles.
 
> Only other similar companies might be interested, but they are too busy
> restoring their repos from tapes.
 
The more systems you have the code stored on, the more people who have
access to that code. And the vast majority of leaks start with someone
having access.
 
And I said nothing about my software running only on mainframes. I just
challenged the comment that "no one develops on mainframes any more".
It shows just how out of touch with the real world you and many others
in this newsgroup are. You think the whole world is groups of two or
three people working on PCs to develop your little crap no one else wants.
 
It's not.
 
And those 30 developers created more profit in a year than every person
in this thread will create in their lifetimes. The projected value of
the software is in the hundreds of millions of dollars.
 
 
> And how do you propose restoring anything from your backups over the
> broken link? With a distributed SCM all developers can just carry on as
> usual, just merging the repos is delayed until the network comes back.
 
That's my question to you. How do they carry on when they can't access
the repo?
 
>> but they are still the surest form of restoring a failed system.
 
> Nope, 30 clones of a distributed SCM repo all over the world is by far
> more fail-safe than an off-site backup in a bank vault.
 
And hugely less secure. Why do you think the tapes are stored in a bank
vault?
 
> again, cloning the repo from another developer. Now, the probability to
> lose all 30 repos in a given week, assuming that the events are
> independent, is 0.02^30 = 1e-51.
 
You don't need to lose all of the repos. Just one is enough. And if
the chance of losing a laptop is once per year, the chance of losing one of
30 laptops rapidly approaches unity.
 
> exaggeration). The probability to lose it in a given week is then
> 1/(10^9*50) = 2e-11. You see that it is 1e40 times more probable to lose
> the bank vault.
 
And the chances of you being killed by a meteor are greater than that of
winning the Irish Sweepstakes.
 
No, figures don't lie. But liars can figure.
 
 
> The big companies do it because this was the best practice 30 years ago,
> plus they don't trust their workers and attempt to control everything
> centrally.
 
The big companies do it that way because it is STILL the best practice.
They know that - better than some programmer with no experience in real
managed projects.
 
But once again I've tried to teach a pig to sing.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Ian Collins <ian-news@hotmail.com>: Feb 04 02:48PM +1300

Jerry Stuckle wrote:
 
>> WAN != LAN. 10GBASE-T 10 Gbit/s Ethernet over cat-6A will give you many
>> times that.
 
> I understand that. And what does your disk do?
 
So why did you say "you won't get 100MB/s throughput on a lan"?
 
 
>> Yep. Stripes of mirrors of Seagate Constellation ES.3 drives.
 
> Maximum sustained transfer rate: 175MB/s. Actual rate will be less than
> 1/2 that in production.
 
Google "stripe".
 
 
>>> Google "restore from incremental backup"
 
>> Old hat, we restore from snapshots.
 
> Snapshots only contain the changes from the last backup or snapshot.
 
Nope.
 
--
Ian Collins
Ian Collins <ian-news@hotmail.com>: Feb 04 02:59PM +1300

Jerry Stuckle wrote:
 
> Big companies, including IBM, don't use git internally - at least none
> of the projects I know of do, for a number of reasons. They use
> commercial and/or internally developed software.
 
They own Rational, so that isn't surprising.
 
Who is one of the biggest contributors to the Linux kernel? IBM.
 
--
Ian Collins
Ian Collins <ian-news@hotmail.com>: Feb 04 02:02PM +1300

Jerry Stuckle wrote:
 
> You haven't? I have. In fact, I've been using incremental backups
> since the mid 1980's with DB2. A database, but the concept is identical.
 
You are still living there, aren't you?
 
> idea what you're talking about. To restore from snapshots, you need to
> go back to the last full backup, then restore each snapshot - *in
> order*.
 
Nope. On modern filesystems, a snapshot is a full representation of the
filesystem at the time it was taken. To restore, you just copy the
files from the snapshot.
 
--
Ian Collins
Ian Collins <ian-news@hotmail.com>: Feb 04 02:12PM +1300

Jerry Stuckle wrote:
> On 2/3/2016 6:07 PM, David Brown wrote:
 
> Once again you show your extreme ignorance. git is far from the best,
> and is never used in bigger development groups.
 
Er, the Linux kernel? The Illumos kernel? The... and so on.
 
Seeing as you like them so much, read:
 
http://www.ibm.com/developerworks/library/wa-git/
 
--
Ian Collins
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 02:49PM -0500

On 2/4/2016 1:38 PM, Andrea Venturoli wrote:
>> BOOST_CHECK_THROW(g(0))
>> ...
>> BOOST_CHECK_THROW(z(3,4))
 
That's even worse than the one I saw (from a "senior" programmer who
didn't like to write documentation and had to be forced to do so):
 
i++; /* increment i */
 
 
> Of course, writing *why* something should fail would probably avoid
> wading through hundred of pages of technical specs, but that would be
> too good apparently.
 
BTDTGTTS!
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Vir Campestris <vir.campestris@invalid.invalid>: Feb 04 09:41PM

On 04/02/2016 19:16, red floyd wrote:
> One time, I wrote a comment that was about five times the length of the
> actual code.
 
The other day I put a comment against this:
 
char* workbuffer = malloc(strlen(input));
 
The code is correct. The first operation is guaranteed to remove at
least one char from the input, and put the rest in workbuffer. But it
looks wrong!
 
Andy
--
p.s. Didn't anyone like my German sausages?
scott@slp53.sl.home (Scott Lurndal): Feb 05 02:28PM


>Andy
>--
>p.s. Didn't anyone like my German sausages?
 
One of the wurst jokes ever.
Vir Campestris <vir.campestris@invalid.invalid>: Feb 05 10:46PM

On 05/02/2016 07:21, Alf P. Steinbach wrote:
> Assuming it's C code there should be no cast, and of course malloc is OK
> in C.
 
As usual Alf gets it right ;)
 
It's in a Linux kernel driver.
 
Andy
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 05 03:32PM

Who is this Jerry Stuckle guy? He is giving me a headache.
 
/Flibble
Jerry Stuckle <jstucklex@attglobal.net>: Feb 05 11:35AM -0500

On 2/5/2016 10:32 AM, Mr Flibble wrote:
> Who is this Jerry Stuckle guy? He is giving me a headache.
 
> /Flibble
 
Someone who's been programming longer than you've been alive, and has
consulted on three continents and for many Fortune 500 companies.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 05 06:22PM

On 05/02/2016 16:35, Jerry Stuckle wrote:
 
>> /Flibble
 
> Someone who's been programming longer than you've been alive, and has
> consulted on three continents and for many Fortune 500 companies.
 
But you have no idea how old I am or how long I have been programming so
how can you make such an assertion?
 
/Flibble
Jerry Stuckle <jstucklex@attglobal.net>: Feb 05 03:52PM -0500

On 2/5/2016 2:37 PM, Mr Flibble wrote:
 
>> Wrong again - and even more proof that I was right in my educated guess.
 
> Get a proper hobby mate.
 
> /Flibble
 
You just admitted I'm right. Thanks for the confirmation.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
