Tuesday, February 9, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 1 topic

Jerry Stuckle <jstucklex@attglobal.net>: Feb 03 08:41PM -0500

On 2/3/2016 8:02 PM, Ian Collins wrote:
 
> Nope. On modern filesystems, a snapshot is a full representation of the
> filesystem at the time it was taken. To restore, you just copy the
> files from the snapshot.
 
That is known as a backup, not a snapshot. Snapshots are incremental
backups.
 
And if it were the full file system - backing up a 100 MB repo every 15
minutes would be 9.6 GB per day. But you claim it isn't anywhere near
that. Which is it?
 
It's becoming more and more obvious you have no idea what you're talking
about.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 03 08:14PM -0500

On 2/3/2016 5:03 PM, Geoff wrote:
 
> What tools are you using in your work where you can even conceive of a
> scenario where a development team with a version control system could
> have a singular copy of a file?
 
You are assuming that the RCS is recoverable. That is not always true.
 
You fail to understand that *any* data can be corrupted - even a repo.
The fact you've never had it happen means only that you've been lucky.
It's not a matter of *if* something happens - it's a matter of *when* it
happens.
 
It's why large companies spend millions of dollars every year (some
every week) maintaining backups.
 
Using multiple systems limits the effect of hardware failures. But
that's all it limits. Software failures and deliberate sabotage can
destroy any system. If you think it won't happen, look at the facts -
the vast majority of successful hacking attacks start with someone who
has legitimate access to a system. It can be a disgruntled employee -
or someone who was sloppy with a password.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 03 08:45PM -0500

On 2/3/2016 8:12 PM, Ian Collins wrote:
 
> Er the Linux kernel? The Illumos kernel? The... and so on.
 
> Seeing as you like them so much, read:
 
> http://www.ibm.com/developerworks/library/wa-git/
 
Big companies, including IBM, don't use git internally - at least none
of the projects I know of do, for a number of reasons. They use
commercial and/or internally developed software.
 
But it's free, for cheapskates who won't pay for a more robust
commercial system. IBM would be stupid to ignore that fact.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Ian Collins <ian-news@hotmail.com>: Feb 04 02:46PM +1300

Jerry Stuckle wrote:
>> snapshot represents the state of the filesystem when it was taken.
 
> Then it isn't a snapshot. It is a backup. Snapshots only store changes
> since the last backup or snapshot.
 
You really can't learn, can you?
 
From the ZFS man page:
 
Snapshots
A snapshot is a read-only copy of a file system or volume. Snapshots
can be created extremely quickly, and initially consume no additional
space within the pool. As data within the active dataset changes, the
snapshot consumes more disk space by continuing to reference the old
data, and so prevents the space from being freed.
 
> So you really don't know what a snapshot is, do you?
 
I know they weren't around in the 80s, but if you take the trouble to
read up on modern filesystems you might gain some understanding.
 
>> the individual replications.
 
> Maybe, maybe not. But if it's a snapshot, it has *only* the changes
> since the last backup or snapshot. Nothing more.
 
The difference between any two snapshots is what an incremental
replication sends.
 
You are making an arse of yourself discussing technology you don't
understand.
 
--
Ian Collins
Jerry Stuckle <jstucklex@attglobal.net>: Feb 03 08:40PM -0500

On 2/3/2016 7:59 PM, Ian Collins wrote:
 
>> Yep. And I stand by my statement.
 
> WAN != LAN. 10GBASE-T 10 Gbit/s Ethernet over cat-6A will give you many
> times that.
 
I understand that. And what does your disk do?
 
 
>> For hundreds of megabytes? I doubt it. What disks are you using that
>> are that fast?
 
> Yep. Stripes of mirrors of Seagate Constellation ES.3 drives.
 
Maximum sustained transfer rate: 175MB/s. Actual rate will be less than
1/2 that in production.
 
 
>>> Google "incremental".
 
>> Google "restore from incremental backup"
 
> Old hat, we restore from snapshots.
 
Snapshots only contain the changes from the last backup or snapshot.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Ian Collins <ian-news@hotmail.com>: Feb 04 02:56PM +1300

Jerry Stuckle wrote:
>> files from the snapshot.
 
> That is known as a backup, not a snapshot. Snapshots are incremental
> backups.
 
You have no clue about modern filesystems, do you?
 
"zfs snapshot <filesystem>@<snapshot name>" creates a snapshot.
 
"zfs send -i <filesystem>@<snapshot one> <filesystem>@<snapshot two>"
creates an incremental stream comprising the differences between the two
snapshots.
 
A snapshot is not a backup, it is a snapshot (hence the name) in time of
the filesystem.
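The distinction can be modelled in a few lines of Python (a toy sketch, not real ZFS: a dict stands in for a dataset, and the names are illustrative only):

```python
# Toy model of ZFS-style snapshots (illustration only, not real ZFS).
# A "filesystem" is a dict mapping file names to contents.

def take_snapshot(fs):
    """A snapshot is a complete, read-only, point-in-time copy."""
    return dict(fs)  # real ZFS shares blocks instead of copying them

def incremental_stream(snap_old, snap_new):
    """Like 'zfs send -i': only the differences between two snapshots."""
    return {name: data for name, data in snap_new.items()
            if snap_old.get(name) != data}

fs = {"main.cpp": "v1", "util.cpp": "v1"}
snap1 = take_snapshot(fs)

fs["main.cpp"] = "v2"          # change one file in the active dataset
snap2 = take_snapshot(fs)

stream = incremental_stream(snap1, snap2)
print(stream)                  # only the changed file is in the stream
```

Each snapshot is a complete view of the filesystem; the incremental stream is only what changed between two of them.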
 
> And if it were the full file system - backing up a 100 MB repo every 15
> minutes would be 9.6 GB per day. But you claim it isn't anywhere near
> that. Which is it?
 
That depends on how much churn there is in the repository.
 
> It's becoming more and more obvious you have no idea what you're talking
> about.
 
It's becoming more and more obvious that I am wasting my time trying to
educate you.
 
--
Ian Collins
David Brown <david.brown@hesbynett.no>: Feb 03 10:49PM +0100

On 03/02/16 22:10, Jerry Stuckle wrote:
 
> Sure I know what the word "replicate" means. And unless you're running
> fiber, you won't get 100MB/s throughput on a lan - especially one shared
> with other users. Your disk won't even provide that speed.
 
Jerry, have you ever heard the phrase "when you are in a hole, stop
digging" ? You are truly making a fool of yourself.
 
 
> Not to mention needing several terabytes per day for all of the
> replications.
 
You haven't the faintest clue what you are talking about, do you? What
kind of amateur IT person would transfer their data off-site by
re-copying all the data? You copy the /changes/. Unless you are
producing a movie or running a particle accelerator, you don't make
terabytes of /new/ data every day.
 
David Brown <david.brown@hesbynett.no>: Feb 04 09:56AM +0100

On 04/02/16 01:42, Jerry Stuckle wrote:
>> producing a movie or running a particle accelerator, you don't make
>> terabytes of /new/ data every day.
 
> Backing up 100MB every 15 minutes comes out to about 9.6 GB per day.
 
Yes, copying 100 MB every 15 minutes means about 9.6 GB per 24 hours.
So what? You picked a totally arbitrary number and extrapolated wildly.
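The back-of-envelope arithmetic is easy to check:

```python
# Sanity check of the figure: 100 MB copied every 15 minutes,
# around the clock.
mb_per_copy = 100
copies_per_day = (24 * 60) // 15   # 96 copies per day

total_mb = mb_per_copy * copies_per_day
print(total_mb, "MB/day")          # 9600 MB/day, i.e. about 9.6 GB
```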
 
First, productive though he surely is, I don't expect Ian to produce 100
MB of change every 15 minutes. Even his whole team (however big that
might be, I've no idea) will not produce 100 MB of code changes every 15
minutes. Per developer, the median will be a few KB at most. If this
is the machine that stores everything, including generated object files,
debug files, etc., then changes will be bigger.
 
But as I said, I don't believe you know how snapshotting works. My
guess is that Ian is running this on a ZFS filesystem, since he is a
Solaris fan. I haven't used ZFS, but I use btrfs on my Linux systems,
and the features are similar. A snapshot is done in a fraction of a
second, regardless of the size of the change. Think about that for a
moment - it will take time to sink in.
 
When passing the data out to a different machine (on-site, off-site,
whatever), then the data must pass along a wire. But the data that
needs to be sent is not the sum of the changes since the last backup -
most of the big changes will be multiple changes to the same file, and
only the last change gets sent. Filesystems like ZFS and btrfs have
tools to let you send over only the data that has changed since the last
copy.
 
> Copying the changes means you have to restore starting with the last
> full backup. That may have been weeks ago.
 
I don't know whether it is just /you/ that is stuck in the 1990's, or
whether all your "Fortune 500" customers are also there. But it is a
good long time since people have used tape systems with monthly full
backups and daily incremental or differential backups.
 
Now, I don't know the details of the backup system at Ian's workplace.
But this is how things work at mine:
 
If I have a file /home/david/test.txt, and I want to get the version
from 01.06.2015, I can find it on my machine at
/snaps/01.06.2015/home/david/test.txt. If my machine has crashed and
lost everything, I can find it at the same place (roughly) on the backup
server - it's a quick ssh away. If that has crashed too, I can find it
at the same place on the off-site backup server - also a quick ssh away.
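That lookup is nothing more than a path prefix. A hypothetical helper (the /snaps layout and names are assumed from the description above, not any standard tool):

```python
# Hypothetical helper matching the snapshot layout described above:
# /snaps/<date>/<original path>.  Names are illustrative only.

def snapshot_path(path, date):
    """Map a live file path to its location in a dated snapshot."""
    return "/snaps/" + date + path

print(snapshot_path("/home/david/test.txt", "01.06.2015"))
# -> /snaps/01.06.2015/home/david/test.txt
```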
 
On the server side, if a VM dies horribly, I can use a snapshot of the
whole server for a restart. The snapshots are all there, ready to go.
 
That's what you get with snapshotting, and proper backups.
 
(And of course, development files are all in the version control
repositories, and so old versions are available from there.)
Jerry Stuckle <jstucklex@attglobal.net>: Feb 03 08:36PM -0500

On 2/3/2016 7:49 PM, Ian Collins wrote:
>> full backup. That may have been weeks ago.
 
> Bong! Wrong again. I restore from the appropriate snapshot. Each
> snapshot represents the state of the filesystem when it was taken.
 
Then it isn't a snapshot. It is a backup. Snapshots only store changes
since the last backup or snapshot.
 
So you really don't know what a snapshot is, do you?
 
> get many megabytes of data every 15 minutes, but nearly all of that will
> be changes to the same files, so the aggregate isn't much bigger than
> the individual replications.
 
Maybe, maybe not. But if it's a snapshot, it has *only* the changes
since the last backup or snapshot. Nothing more.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Feb 04 10:23AM +0100

On 04/02/16 01:46, Jerry Stuckle wrote:
> order*. One snapshot missing or corrupted? EVERYTHING after that is
> lost. And if you try to restore later snapshots, you will have a
> corrupted repo - one where you have no idea what is good and what is bad.
 
Those are not snapshots - that's 1980s-style incremental backups.
It's a different world. Until you learn that, you are going to remain
confused in this discussion.
 
And just because I have never had to recover a repository from my
backups, does not mean I don't know how to do it - I do, and have
tested restoring virtual machines from the snapshots. The time to learn
about how to restore a VM (such as our version control server) is when
creating a backup procedure, not when you /have/ to restore it.
David Brown <david.brown@hesbynett.no>: Feb 04 12:23PM +0100

On 04/02/16 02:00, Jerry Stuckle wrote:
> had 13 years as an employee. And I hate to tell you - but the
> interviewer was being kind. They tell something like that to every one
> they turn down. They aren't going to tell you you aren't qualified.
 
Oh, that must be it, of course. Thank you for making it so clear to me.
Now that I know I am unqualified and incompetent in everything I do, I
will resign my job and dig ditches for a living. Since everyone else in
c.l.c++, other than Jerry, is equally incompetent, I suggest we all gang
together as ditch diggers.
 
Paavo Helde <myfirstname@osa.pri.ee>: Feb 04 02:50PM +0200

On 4.02.2016 3:06, Jerry Stuckle wrote:
 
 
> However, if you wait for IT to restore your repos, you might be down for
> days. If you have your own backups of the repos, you can rent a server
> and be back up in a few hours.
 
Guess what? You are absolutely right, the backup processes can
stumble because they are not in daily use.
 
However, this is not an argument to make more backups. This is an
argument for switching over to a distributed SCM.
 
If we used a distributed SCM, then we would just declare another clone
of the repo as "production" and redirect the production builds to it.
We would be up and running in minutes, not in days or hours.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:38AM -0500

On 2/4/2016 7:50 AM, Paavo Helde wrote:
 
> If we used a distributed SCM, then we would just declare another clone
> of the repo as "production" and redirect the production builds to it.
> We would be up and running in minutes, not in days or hours.
 
Maybe, maybe not. Distributed SCMs are not the total answer, either.
They have their own problems - the first being security. The more
systems you have involved, especially in remote locations, the less
secure you are. And no matter how many you have, you can have major
problems. For instance, what happens when your fiber optic line gets cut
and can't be fixed for several days? Few companies can afford multiple
physical communications links from multiple providers, each link taking
a different physical path once it leaves the building.
 
There is no failsafe system. Even full backups, shipped off-site as
soon as they are taken (physically or via network) can have problems,
but they are still the surest form of restoring a failed system. That's
why every big company does it that way. And every properly managed
project does, also.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:17AM -0500

On 2/4/2016 3:56 AM, David Brown wrote:
 
>> Backing up 100MB every 15 minutes comes out to about 9.6 GB per day.
 
> Yes, copying 100 MB every 15 minutes means about 9.6 GB per 24 hours.
> So what? You picked a totally arbitrary number and extrapolated wildly.
 
Not at all. Look back. I'm not the one claiming to take snapshots
every 15 minutes. And 100 MB is not at all big for a repo - I've seen
ones in the TB range on large projects.
 
> minutes. Per developer, the median will be a few KB at most. If this
> is the machine that stores everything, including generated object files,
> debug files, etc., then changes will be bigger.
 
And when you only back up the changes, you have to restore starting with
the last full backup and every incremental backup since then. One of
them gets lost? You're SOL after that.
 
> and the features are similar. A snapshot is done in a fraction of a
> second, regardless of the size of the change. Think about that for a
> moment - it will take time to sink in.
 
Oh, I do. That's how it works on ONE system. It's not how every system
works.
 
> only the last change gets sent. Filesystems like ZFS and btrfs have
> tools to let you send over only the data that has changed since the last
> copy.
 
So what? And we're talking about repos, not file systems. Well managed
projects don't depend on file systems. They use repos.
 
> whether all your "Fortune 500" customers are also there. But it is a
> good long time since people have used tape systems with monthly full
> backups and daily incremental or differential backups.
 
Actually, tape systems are still heavily used with mainframes. And they
do nightly backups of everything, and store the tapes off site. But
then they may be backing up a couple of petabytes every night. Not at
all unusual.
 
But people who have only worked on PC's think they know how the whole
world works. You don't.
 
> lost everything, I can find it at the same place (roughly) on the backup
> server - it's a quick ssh away. If that has crashed too, I can find it
> at the same place on the off-site backup server - also a quick ssh away.
 
That's one file. What about all of the other files associated with that
change? And how do you know which files are associated with that
change? Good project management uses repos, not file systems.
 
 
> That's what you get with snapshotting, and proper backups.
 
> (And of course, development files are all in the version control
> repositories, and so old versions are available from there.)
 
And where are all of the other 2,000 files you need to complete your
project? Are they also in that snapshot? Or do you need to spend the
next three weeks trying to find them?
 
Like with Ian, I'm trying to teach a pig to sing. You've obviously
never been on a properly managed project.
 
Tell me - when the system crashes, and you have 100 programmers, each
costing you $350/hr., how long will your CEO keep you around?
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:19AM -0500

On 2/3/2016 8:48 PM, Ian Collins wrote:
>>> times that.
 
>> I understand that. And what does your disk do?
 
> So why did you say "you won't get 100MB/s throughput on a lan"?
 
You won't get it through a WAN, either, with your disk. Oink, oink.
 
 
>> Maximum sustained transfer rate: 175MB/s. Actual rate will be less than
>> 1/2 that in production.
 
> Google "stripe".
 
I did, and I'm familiar with the disk. Maximum sustained transfer rate
is 175 MB/sec. Like everything, that's a theoretical max, not real
world. But you are too stoopid to understand a simple fact. Oink, oink.
 
 
>>> Old hat, we restore from snapshots.
 
>> Snapshots only contain the changes from the last backup or snapshot.
 
> Nope.
 
You just contradicted yourself again, Ian. Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:06AM -0500

On 2/3/2016 8:46 PM, Ian Collins wrote:
> replication sends.
 
> You are making an arse of yourself discussing technology you don't
> understand.
 
Sigh, once again I find myself trapped into trying to teach the pig to
sing. You are talking about one specific use of snapshots. And you
think everything is like that.
 
Oink, oink.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Feb 04 04:21PM +0100

On 04/02/16 15:17, Jerry Stuckle wrote:
 
> Not at all. Look back. I'm not the one claiming to take snapshots
> every 15 minutes. And 100 MB is not at all big for a repo - I've seen
> ones in the TB range on large projects.
 
You pulled the 100 MB figure out of the air (or somewhere worse). And
since you /still/ don't understand these things, it is the /changes/
that need to be copied (if anything is being copied at all - snapshots
are generally on the same system).
 
 
> And when you only back up the changes, you have to restore starting with
> the last full backup and every incremental backup since then. One of
> them gets lost? You're SOL after that.
 
Maybe you are thinking of tapes stored in a vault, or perhaps punched cards.
 
Here in the real world, in /this/ century, these things are done using
disks.
 
Let me explain in very small steps.
 
Server 1 has data A on it.
Server 2 is an independent copy, perhaps off site. It also has data A
on it.
 
Server 1 takes a snapshot. It now has a read-only copy of A, and a
working set A.
 
You write some new data B1 to server 1. Now it has a read-only set A,
and a working set (A + B1).
 
Server 1 takes a new snapshot. It now has read-only copies of A and
(A + B1), and a working set (A + B1).
 
You write some more new data, changing B1 to B2, and server 1 takes a
new snapshot. Server 1 now has read-only copies A, (A + B1), (A + B2),
as well as the working set (A + B2).
 
You do a backup to server 2. This involves server 2 first taking a
snapshot, then you transfer the difference between server 1's last
snapshot and the last snapshot on server 2 - that is, B2. Now server 2
has read-only snapshots A and (A + B2).
 
At every point, you have multiple old copies of the data directly
available. How often you take your snapshots, how often you back them
up to other servers, how many duplicate copies you have, how often (if
at all) you prune old snapshots - those are all details that must fit
your budget and your requirements.
 
But at no point are you having to search through old heaps of tapes to
find the files you want.
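The steps above can be simulated directly (a Python sketch, illustrative only: a dict stands in for a dataset, snapshots are frozen copies):

```python
# Simulation of the replication walkthrough above.  A dict of
# file -> contents stands in for a dataset.

def snapshot(working):
    """Freeze a read-only copy of the working set."""
    return dict(working)

def send_incremental(src_snap, dst_snap):
    """Difference between source's latest snapshot and destination's."""
    return {k: v for k, v in src_snap.items() if dst_snap.get(k) != v}

A = {"repo": "A"}
server1 = dict(A)
server1_snaps = [snapshot(server1)]      # read-only copy of A
server2_snaps = [snapshot(A)]            # server 2 also starts with A

server1["repo"] = "A+B1"                 # write B1
server1_snaps.append(snapshot(server1))  # snapshot (A + B1)

server1["repo"] = "A+B2"                 # change B1 to B2
server1_snaps.append(snapshot(server1))  # snapshot (A + B2)

# Backup: transfer only the difference (B2) to server 2.
delta = send_incremental(server1_snaps[-1], server2_snaps[-1])
server2 = {**server2_snaps[-1], **delta}
server2_snaps.append(snapshot(server2))

print(delta)           # only the B2 change crosses the wire
print(server2_snaps)   # server 2 now holds snapshots A and (A + B2)
```

At the end, both servers hold complete old copies, yet only the final change was ever sent.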
 
>> moment - it will take time to sink in.
 
> Oh, I do. That's how it works on ONE system. It's not how every system
> works.
 
It is pretty much how all modern snapshotting is done. You might use
different filesystems, or different technologies (such as lvm, which
works at a layer below the filesystem). You might include clustered
filesystems, or use raid with some drives using iSCSI for off-site
replication. But that is how filesystem snapshotting works.
 
>> copy.
 
> So what? And we're talking about repos, not file systems. Well managed
> projects don't depend on file systems. They use repos.
 
The repositories sit on filesystems, and run on servers (usually virtual
servers). The only time your repositories get corrupted (assuming the
IT folk setting it up are not morons) is if there is a serious failure
on the server software, the filesystem or the hardware - and then your
snapshots and/or filesystem backups are used to restore operation.
 
You don't need to snapshot your repositories - with a version control
system, /every/ commit is a snapshot.
 
If you like, you can do a dump of the repository data and back up those
files. It is usually a somewhat time-consuming process, but worth doing
to provide extra recovery options (for disaster recovery, not the normal
development process). This can be particularly useful if your budget
does not extend to replicated servers and you are happy with the idea of
"The server died. We'll get a new one running by tomorrow, and
re-install the version control software on it." Of course, if you use a
distributed version control system (which is great for some types of
development team, less good for other types) then none of this is an issue.
 
> do nightly backups of everything, and store the tapes off site. But
> then they may be backing up a couple of petabytes every night. Not at
> all unusual.
 
No one (this side of Y2K) uses mainframes for their version control
systems, and no one stores petabytes of data in their software
development servers. Sure, mainframes are in use - and sure, tape
backup systems are in use. But not for software development.
 
 
> But people who have only worked on PC's think they know how the whole
> world works. You don't.
 
And you apparently haven't worked since the PC was invented, and yet you
still think you know how the world works.
 
 
> That's one file. What about all of the other files associated with that
> change? And how do you know which files are associated with that
> change? Good project management uses repos, not file systems.
 
When I need an old file from the repository, I find it using the version
control system's clients. There are all sorts of ways, depending on the
particular version control system in use. But key to all version
control systems is the ability to easily see old files and directories,
see what was changed, by whom, and when, compare revisions, and collect
copies of old versions. If your version control system can't handle
that, you don't have a "repository", you have a "shared folder".
 
 
> And where are all of the other 2,000 files you need to complete your
> project? Are they also in that snapshot? Or do you need to spend the
> next three weeks trying to find them?
 
They are all there, in the snapshots or the repository as appropriate.
 
 
> never been on a properly managed project.
 
> Tell me - when the system crashes, and you have 100 programmers, each
> costing you $350/hr., how long will your CEO keep you around?
 
If I were running the IT for 100 programmers, the system would not crash
- because the distributed version control running on independent servers
would have a redundancy level making it far more likely for the building
to be destroyed by fire or flood than for multiple disk, server, or
power failures to stop /all/ the servers. And if that happened, the
off-site redundant copies would still be running.
 
I don't have quite such a system, because we are a much smaller group,
and we have a system appropriate for our needs. But I know how to
design such a high-reliability system.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:24AM -0500

On 2/3/2016 8:56 PM, Ian Collins wrote:
>> about.
 
> It's becoming more and more obvious that I am wasting my time trying to
> educate you.
 
Real projects don't depend on disk file systems. They use repos. But
you've never worked on a real project - one that has been properly
managed. And a file system is NOT a repository.
 
But it's even more obvious I'm trying to teach the pig to sing. Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:23AM -0500

On 2/4/2016 4:15 AM, David Brown wrote:
 
> And remember, Jerry is used to IBM 350 disks - the idea that a bog
> standard cheapo hard disk can sustain well over 100 MB/s sequential
> transfer is science fiction to him.
 
Sure I have. And I also know that 10 Gb/s is a theoretical maximum, not
a speed you will ever get. And I know "maximum sustained transfer speed"
is again a theoretical maximum - not one you will get.
 
But then my original background was EE, and my first five years with IBM
was in hardware. So unlike you, I understand the hardware.
 
 
>> Google "stripe".
 
> Also google for raid in general, disk caching, and SSD's (sit down
> before you read that one, Jerry).
 
Oh, I know all about SSDs. I also know the limitations of them -
obviously unlike you.
 
> A machine capable of sustained random-access transfers of 500 MB/s total
> to a number of different clients can probably be built for about
> $4000-$5000.
 
Go ahead - let's see you do it.
 
 
 
> <https://en.wikipedia.org/wiki/Snapshot_%28computer_storage%29>
 
>> Nope.
 
Once again you are contradicting yourself, David.
 
Oink, oink.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Feb 04 03:27PM +0100

On 04/02/16 15:06, Jerry Stuckle wrote:
 
> Sigh, once again I find myself trapped into trying to teach the pig to
> sing. You are talking about one specific use of snapshots. And you
> think everything is like that.
 
Whereas you are talking about a different specific use of the word
"snapshot", which is so specific that it is unique to you, and you think
everything is like that.
 
Feel free to provide a link or two justifying your idea of "snapshot",
showing clearly that a "snapshot" is a copy of the changes between two
backups or snapshots (good luck on that recursive definition of yours),
or that it is another term for "incremental backup". Bonus points will
be given if you can show this is the common usage of "snapshot" in the
world of computer storage while the definition used by myself and Ian is
an unusual and specific case.
 
To get you started, the ever-helpful wikipedia link is:
 
<https://en.wikipedia.org/wiki/Snapshot_%28computer_storage%29>
 
Unfortunately for you, this fits the definition used by Ian and myself
(and everyone else) of a snapshot being a /complete/ copy of the system
(filesystem, database, repository, whatever), usually read-only, and
usually implemented as some sort of near-instantaneous virtual copy
rather than a literal copy.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:26AM -0500

On 2/4/2016 4:23 AM, David Brown wrote:
> tested restoring virtual machines from the snapshots. The time to learn
> about how to restore a VM (such as our version control server) is when
> creating a backup procedure, not when you /have/ to restore it.
 
Yea, right.
 
Now tell me - you just lost your entire repository, with over 20,000
files. You have 100 programmers, each costing you $350/hr. waiting for
the files.
 
How long do you have before you are fired?
 
Oink, oink!
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:32AM -0500

On 2/3/2016 8:59 PM, Ian Collins wrote:
>> commercial and/or internally developed software.
 
> They own Rational, so that isn't surprising.
 
> Who is one of the biggest contributors to the Linux kernel? IBM.
 
They BOUGHT Rational - the company. For very good reasons.
 
And why bring up their contributions to Linux? I guess you just ran out
of ways to counter my facts, so you have to try to change the subject.
 
Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 09:30AM -0500

On 2/4/2016 5:40 AM, Öö Tiib wrote:
 
> Yes, but Jerry already explained that in Fortune 500 company you should
> have no permissions since that is process to avoid chaos. Also the network
> must be optimized to be 1MB/s for to achieve superior security.
 
I never said that. I said there are limited permissions to write to
each repo. And I never said anything about 1MB/s.
 
Now you're just trolling.
 
> Therefore Jerry has ordered each of his team member to burn a DVD
> with backup of repo before check in as part of development process.
> That has saved their bacon more than once.
 
Once again you're trolling. I never said any of that.
 
> those people but Jerry is rare non-failing example. However I suspect
> that he is now violating some NDA by spreading the business secrets
> here.
 
That part is true. High quality software (I never mentioned VSS or any
other tool because you wouldn't be paying for them, anyway) is what is
used.
 
But the rest of your tripe is just trolling, also. You seem to be good
at that.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 04 10:30AM -0500

On 2/4/2016 9:27 AM, David Brown wrote:
> (filesystem, database, repository, whatever), usually read-only, and
> usually implemented as some sort of near-instantaneous virtual copy
> rather than a literal copy.
 
Now which is it - a complete copy, or a copy of the changes? In one
case you copy the entire file system. In the other you copy the changes.
 
Which is it?
 
Oink, oink.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Feb 04 05:07PM +0100

On 04/02/16 16:30, Jerry Stuckle wrote:
 
> Now which is it - a complete copy, or a copy of the changes? In one
> case you copy the entire file system. In the other you copy the changes.
 
> Which is it?
 
The snapshot is logically a complete copy, but it only takes the space
of changed files, and only changes need to be replicated to another
machine. It's quite simple.
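Copy-on-write in miniature (a Python sketch, illustrative only): both views are logically complete, but unchanged blocks are stored once, so a snapshot costs only what has changed.

```python
# Copy-on-write in miniature: snapshots are logically complete,
# but unchanged "blocks" are shared, so only changes cost space.

blocks = {}                      # physical store: block id -> data

def store(data):
    bid = hash(data)             # content-addressed, for the demo only
    blocks[bid] = data
    return bid

fs = {"a.txt": store("old"), "b.txt": store("big unchanged file")}
snap = dict(fs)                  # complete logical copy, no data copied

fs["a.txt"] = store("new")       # rewrite one file in the live fs

# Both views are complete; the store holds only three blocks.
print(len(blocks))               # 3: "old", "new", and the shared file
print(blocks[snap["a.txt"]])     # the snapshot still sees "old"
print(blocks[fs["a.txt"]])       # the live filesystem sees "new"
```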
 
So again, let's have some links supporting your view.