Wednesday, November 2, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

red floyd <no.spam@its.invalid>: Oct 27 01:16PM -0700

On 10/25/2016 6:00 PM, Chris M. Thomasson wrote:
 
 
>> I can code RCU in C++! Nice...
 
Has the RCU patent expired?
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 26 07:51PM +0100

On Wed, 26 Oct 2016 18:45:00 +0200
> assuming the battery is the only object in space, what is the
> potential difference between one of its connectors and an infinitely
> distant point? The answer is U/2, half of the battery voltage.
 
Interesting. I will add to the usual saying. Engineers defer only to
physicists, physicists defer only to mathematicians, and mathematicians
defer only to God.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 26 07:27PM +0100

On Wed, 26 Oct 2016 12:36:48 -0400
> stick a light bulb, voltmeter or whatever across the two exposed
> electrodes. How much current flows? The answer is zero - even though
> you have two batteries, each with a pre-existing potential difference.
 
I am glad that you appear to have understood it ... except that it also
appears below that you didn't. Stick to this one and you are doing
fine.
 
> > I guess you don't care what people think of you.
 
> The argument was that a voltage exists when there is infinite
> resistance, because I*R can be non-zero with an infinite R.
 
Your argument was (and I quote) "If current is zero, voltage, by
definition, is zero", which as a general statement is wrong, whether by
definition or otherwise. Ohm's law only operates (a) to describe the
relationship between voltage and current across a conducting medium
with a given resistance, and (b) if there is no reactance in the circuit
(capacitive or inductive) - in other words, that the system is in a
steady state.
 
> > That is what electromotive force is. That is how a battery works.
> > It is astounding that you think you can argue about it.
 
> It is a *potential* source of EMF.
 
No. It is a source of EMF measured in volts (this time really by
definition). It *may* cause a current to flow in a circuit and so do
work, and will do so if a conducting medium is connected to it.
 
The pressure in your water supply pipe does not go to zero when you
turn your tap off. The pressure is there all the time, courtesy of your
supply company.
 
 
> No, it shows that a battery has no EMF unless *BOTH* electrodes are
> connected. It is an extreme example to show the fallacy of your
> logic.
 
It shows no such thing.
David Brown <david.brown@hesbynett.no>: Oct 26 10:11PM +0200

On 26/10/16 16:28, Jerry Stuckle wrote:
> their systems using hardware raid because that is the most reliable.
 
> And you're saying you know more than the experts at IBM, Dell and
> hundreds of other big companies? ROFLMAO! What an idiot!
 
Do you really think the people selling or installing IBM and Dell
systems are experts at RAID? The sales folk read the features from
their marketing information, and sell based on what will make the most
profit for their company. The installers and service people try to put
everything together in the same way as they do for every system. The
idea is to minimise training and maintenance costs - not to optimise
systems for individual customers' needs. IBM and Dell certainly have
plenty of experts - but customers normally don't get to interact with
them, even with million dollar budgets.
 
>> usage.
 
> Sure - there are dozens of ways to emulate a raid with them. I never
> said there wasn't.
 
Those are not emulations. (I know I am flogging a dead horse here.)
 
> front and select the correct raid emulation. But that is what
> incompetent people do - and then they claim their solution is "better"
> to cover their arses.
 
Usually, you don't need to change the style of raid - but it happens.
Perhaps you started your system with two disks in RAID-1, and then
decided to expand it with another couple of disks and change to RAID-5.
But the issue here was on the flexibility to choose different
arrangements when setting up a system, according to your expected needs.
Software RAID gives far more options - but that also means it requires
more knowledge, thought, planning, and testing if you want to take
advantage of these options. Hardware raid gives you a fixed, basic
solution that is probably all right for many uses even if it is
significantly more expensive and often slower than a more optimised
software raid setup could be.
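
As an illustration of that flexibility, the RAID-1 to RAID-5 expansion
described above can be done online with Linux md raid. This is only a
command sketch - the device names and backup file path are illustrative,
it must run as root, and you should have real backups before reshaping:

```shell
# Sketch: grow a two-disk RAID-1 (/dev/md0) into a four-disk RAID-5.
# Device names are illustrative; adjust to your system.

# Convert the array level first (a 2-disk RAID-5 is degenerate but valid).
mdadm --grow /dev/md0 --level=5

# Add the two new disks, then raise the active device count.
mdadm /dev/md0 --add /dev/sdc /dev/sdd
mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-reshape.bak

# The reshape runs online; watch its progress.
cat /proc/mdstat

# Finally grow the filesystem into the new space (ext4 shown).
resize2fs /dev/md0
```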
 
>> with software raid.
 
> Yes, and small users sometimes use Linux. But a NAS isn't software
> RAID. You don't understand the difference, though.
 
You really are clueless, aren't you? "Small users sometimes use Linux".
I hope you aren't trying to say that /only/ small users use Linux!
 
And I wonder how you think NAS systems work. Some expensive devices
have the same chips as hardware raid cards, but the huge majority run
standard operating systems (Linux or BSD, mostly), possibly with
optimisations or modifications, and use the standard software raid
options from those OS's. They run Linux with md raid, or FreeBSD with
gmirror, or ZFS or (on a few types) btrfs.
 
 
> ROFLMAO! Don't break your arm patting yourself on the back! How many
> hardware RAID devices have you designed and marketed? Zero? That's
> what I thought.
 
I don't have to design them to know what the chips on hardware RAID
cards do, and how they do it. I haven't designed a cpu, but I know how
they work too - in far, far more detail than most people selling,
installing or configuring PC's. And I understand the mathematics behind
RAID-5, RAID-6, and triple-parity RAID (which has no standardised
level). Not that that knowledge helps me know what types of RAID and
disk setup might be suitable choices for a given workload, but I like to
know what is going on under the bonnet.
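
The single-parity arithmetic behind RAID-5 is just byte-wise XOR across
the data blocks of a stripe; a lost block is recovered by XOR-ing the
survivors with the parity block. A toy illustration in Python (ignoring
real stripe layout; RAID-6's second parity needs Galois-field arithmetic
instead of plain XOR):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks in one stripe.
data = [b"AAAA", b"BBBB", b"CCCC"]

# RAID-5 parity is the XOR of all data blocks in the stripe.
parity = xor_blocks(data)

# Simulate losing one disk: drop data[1] and rebuild it from
# the surviving data blocks plus the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"
```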
 
>> to have spares on hand - but for smaller setups that is an extra expense
>> that you don't want.
 
> So what do you do when the computer you're emulating the RAID on dies?
 
Put the drives into another computer and carry on, assuming that enough
of the drives haven't also died. Getting another generic computer is
easy - getting a specific hardware raid card can be hard if it is an old
or unusual device.
 
> Or when you get a virus that wipes out your file system?
 
Viruses are typically found on client machines (i.e., Windows) rather
than servers. But it does not matter to the server, the client, or the
virus whether the filesystem is mounted on disks on a hardware raid,
software raid, a NAS, a SAN, remote iSCSI disks, a filesystem with
built-in raid, or whatever. When a virus attacks the file, it writes as
a client. Assuming it has the right permissions, your raid system will
do what it is designed to do - store the damaged file on the disks.
 
And since I use btrfs on most systems now, I would then find the last
undamaged version of the file from my hourly snapshots. For older
(pre-btrfs) systems, my snapshots are nightly.
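
The snapshot recovery described here is a couple of commands with btrfs
(a sketch; the subvolume and snapshot paths, and the file name, are
illustrative):

```shell
# Hourly (e.g. from cron): take a read-only snapshot of the data subvolume.
btrfs subvolume snapshot -r /srv/data "/srv/.snapshots/data-$(date +%Y%m%d-%H%M)"

# After damage: list the snapshots and copy back the last good version.
ls /srv/.snapshots/
cp /srv/.snapshots/data-20161026-1400/report.odt /srv/data/report.odt
```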
 
What would /you/ do, with your expensive hardware raid card which you
think is for backup? Watch the hardware raid card happily make exactly
the same copies of the damaged file as the software raid system does,
and then realise that you maybe should have had backups after all?
David Brown <david.brown@hesbynett.no>: Oct 26 10:39PM +0200

On 26/10/16 17:58, Jerry Stuckle wrote:
>> it does not make sense to talk about "using RAID for backup" - except in
>> the context of "don't do that".
 
> No, YOU'VE been talking about RAID. And that is its major usage.
 
Are you really trying to claim that RAID is about backup?
 
Or are you simply using the wrong words, when what you mean is
availability? The primary function of most RAID setups is to minimise
down-time - your storage continues to work correctly (though possibly
more slowly) even if you have a disk failure. That is what the R for
/redundancy/ means.
 
It does not do /backup/, which gives you old versions of your files if
something happens to your current versions (like corruption by virus, or
user error), if there is a major disaster (fire in the server room), or
if you simply need to refer to older versions for some reason.
 
 
> I do
> agree, however, that it can be used for other things, such as faster
> data access.
 
Typical benefits of RAID can include faster throughput for streamed
reads or writes, faster access times for small reads or writes, and
bigger volumes. More specialist uses may include continuous off-site
replication of data.
 
 
> And when that is necessary, once again RAID controllers
> outperform RAID emulation because they are dedicated to data access.
 
No, they don't outperform software raid. Hardware raid controllers are
usually significantly slower. Modern cpus are simply much faster than
the devices used on hardware raid controllers, and they usually have so
many cores and so much bandwidth that it is not a bottleneck. Remember, the
reason you are reading or writing data to or from the disk is that the
data is needed in an application - the data has to pass through the cpu
anyway.
 
Now, if you have a large number of disks and have a layered setup with a
RAID (or linear concatenation) on top of a set of RAID-1 pairs, then
there is a good argument for having hardware raid controllers handle the
mirroring. That will halve the IO traffic through the cpu when writing
(it makes no difference for reading, nor a difference to cache usage or
memory costs). But it is probably cheaper to get a slightly more
powerful motherboard (say, a dual-socket board) or better cpus with
higher bandwidth.
 
If you are using SSD's, you definitely want to avoid hardware raid
controllers - the extra latency they add will swamp all those lovely
low-latency access times you paid for.
 
Some will say that hardware raid cards with big caches and a battery
unit make them faster. But those can cost more than just buying a UPS
for the server, and you can add a hundred times the memory to the server
for cache usage for the same price.
 
 
> They don't need to steal cycles from the processor to access the data,
> or wait for the processor to have time to do their work.
 
The processor has more than enough cycles to spare. And if it doesn't,
it is more cost-effective to buy more or faster cpus.
 
>> raid, which is also true.
 
> Not when fast access is important. And not for data which is required
> 24/7 without fail. Too many things happen on the link.
 
Obviously. It is a choice some companies make, for at least some of
their data. Not all data has to be accessed at high speeds, and for
many companies are so reliant on Internet access that adding one
more dependency is a minor issue.
 
 
> And "may have meant"? Based on what facts? Oh, I forgot. You don't
> have any facts.
 
Ian mentioned Oracle's cloud storage - I said how I interpreted that
reference.
 
>> confusion about raid and backup again?
 
> Good cloud storage providers use RAID for backup. They could be held
> responsible for data loss in the case of a disk crash, for instance.
 
Again, they use raid (usually software raid - either an explicit raid
layer such as Linux md raid, or through a filesystem such as ZFS) to
keep their systems running in the case of a disk crash. It is not backup.
 
They will also have backup, mostly to protect against user errors but
also to deal with more serious disasters.
 
> over distance, network speeds are slower than disk. How many companies
> do you know of have 100Gb links from their site to the storage provider,
> and can maintain those speeds, for instance.
 
I don't think you understand what I wrote. Until you learn the
difference between raid and backup, there is little point in trying to
explain about technologies such as DRBD or iSCSI.
David Brown <david.brown@hesbynett.no>: Oct 26 11:57PM +0200

On 26/10/16 18:45, Christian Gollwitzer wrote:
> electrodynamics, there is only one potential phi, and voltages are
> simply the differences of that phi at different points in space. Because
> the potential drops off at a rate of 1/r from static charges, one can
 
Is it not 1/r² drop-off from a point charge? Or would that be the
drop-off of the electric field, and the potential is the integral of
that (in which case I understand the 1/r factor)?
 
 
> between one of its connectors and an infinitely distant point? The
> answer is U/2, half of the battery voltage.
 
> Christian
 
I suppose the potential phi of an object is the energy it takes to move
one Coulomb of charge from an infinite distance to the neutral object?
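
(For reference, standard electrostatics answers both questions: the field
of a point charge falls off as 1/r², and the potential, its line integral
from infinity, as 1/r.)

```latex
\[
E(r) = \frac{q}{4\pi\varepsilon_0 r^2},
\qquad
\varphi(r) = -\int_\infty^r E(r')\,\mathrm{d}r'
           = \frac{q}{4\pi\varepsilon_0 r}
\]
```

And φ(r) is indeed the work per coulomb needed to bring a test charge in
from infinity to distance r.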
Ben Bacarisse <ben.usenet@bsb.me.uk>: Oct 26 09:20PM +0100

>> power is being generated. Some people like to quote it in joules per
>> coulomb.
 
> Yes - but it is only the ability to generate power.
 
What is the alternative name for joules per coulomb?
 
<snip>
--
Ben.
Ian Collins <ian-news@hotmail.com>: Oct 27 11:52AM +1300

On 10/27/16 10:55 AM, Jerry Stuckle wrote:
 
>> Those are not emulations. (I know I am flogging a dead horse here.)
 
> I can run QEMU on my system here and load and execute code for a Cortex
> A-9 processor. So according to you I have an ARM processor on my laptop.
 
No, QEMU *is* an emulator.
 
"QEMU is a generic and open source machine emulator and virtualizer."
 
http://wiki.qemu.org/Main_Page
 
>> gmirror, or ZFS or (on a few types) btrfs.
 
> Sure. But the code is in flash or other ROM, and not mixed in with user
> code - as I said. A huge difference.
 
Ah, so now you agree that software RAID is RAID. Good start.
 
 
> Again you have no idea. Viruses can be spread throughout the system.
> There are many instances, especially recently, of entire networks being
> infected - including servers.
 
Reference (assuming you aren't referring to Windows servers)?
 
> But the viruses cannot destroy the partitioning, directories, etc. disk
> on a RAID device. It can corrupt data, but only data allowed by file
> permissions.
 
A rootkit can destroy everything.
 
--
Ian
David Brown <david.brown@hesbynett.no>: Oct 27 01:36AM +0200

On 26/10/16 23:55, Jerry Stuckle wrote:
 
>> Those are not emulations. (I know I am flogging a dead horse here.)
 
> I can run QEMU on my system here and load and execute code for a Cortex
> A-9 processor. So according to you I have an ARM processor on my laptop.
 
Eh, no. The hint is in the name.
 
RAID is not a type of device or hardware, it is a way of organising
data. Whether you do it with dedicated hardware or pure software is
irrelevant.
 
 
> Exactly. You don't need to change it, so your argument is as full of
> crap as you are. However, I know you need to use RAID emulation because
> you can't select the appropriate RAID device in the first place.
 
I said one of the key features of software raid, compared to hardware
raid, is its flexibility and its options. I didn't say you would be
likely to change these options in an existing system - it is possible,
and occasionally useful, but not common.
 
>> I hope you aren't trying to say that /only/ small users use Linux!
 
> I never said anything of the sort. Your logic is as fallacious as the
> rest of your arguments.
 
Read what I wrote - "I /hope/ you aren't trying to say...". Your
writing was unclear - and now you have confirmed what you were /not/ saying.
 
Now, what /were/ you saying when you wrote "and small users sometimes
use Linux" ? Was it just another empty statement?
 
>> gmirror, or ZFS or (on a few types) btrfs.
 
> Sure. But the code is in flash or other ROM, and not mixed in with user
> code - as I said. A huge difference.
 
They may typically store some or all of their software in flash (not
ROM!), but it is a sizeable NAND flash with a filesystem. There is no
difference between a flash filesystem and a filesystem on a disk - it is
the same software, and works in the same way. It may be restricted to
make it difficult to run other programs on the system (though often
there is a ssh daemon for more advanced users), and certainly you would
normally not choose to run programs that are not associated with the NAS
functionality. But that is just like any other server - if you want the
server to be efficient and reliable, you use it for the functions and
programs that are needed for that server, and not for other purposes.
 
The difference is minor at most.
 
>> know what is going on under the bonnet.
 
> No, you haven't designed anything, have you, David. But you are an
> expert on EVERYTHING! ROFLMAO!
 
It should not really be a surprise that a person can understand how
something works without having designed the thing in practice. After
all, surely someone who /has/ designed a hardware RAID card had a fair
idea how it should work before starting the design? I bet even you have
a fair idea how a normal car engine works - but I also bet you have
never designed one.
 
It just happens that with RAID, I find the theory fascinating even
though I have little need for complicated RAID setups in practice. I
enjoy the mathematics involved in multi-parity raid. It is weird to do
maths for fun, I know, but there we have it.
 
 
>> or unusual device.
 
> Gee, that would be exactly the same thing you would do if your RAID
> controller died.
 
No, if your raid controller card died you would need to get a
replacement raid controller card of the same (or very similar) type.
That is not the same thing as getting another computer.
 
> And it's not so easy to just boot an OS for an old or
> unusual computer on a new one. So once again your argument is full of crap.
 
Why would you use an old or unusual computer? Why would you use an
unusual OS? One of the points about using software raid is that it
works on bog-standard computers. Unless you are talking about pre-SATA
disk interfaces, you just take the disk out of the old computer and attach
it to the new one. You either install the same version of Linux (or
BSD, or FreeBSD, or whatever) that you had on the old system, or you
install a newer version if you want. It will still work fine with the
disks.
 
>> a client. Assuming it has the right permissions, your raid system will
>> do what it is designed to do - store the damaged file on the disks.
 
> Again you have no idea. Viruses can be spread throughout the system.
 
Security and reliability are never absolute - it is all a matter of
reducing likelihoods of problems, and consequences of problems. Yes,
you /can/ get viruses or malware on your servers. But if your servers
are managed reasonably well your chances of getting a virus in them are
extremely small. With a Linux server with few services other than file
sharing (NFS, Samba), decent passwords and sensible filesystem
permissions, and a firewall protecting it from the most dangerous
outside forces, then you will not expect to have any serious risk of
malware infections within the working lifetime of the server. That does
not mean that it cannot happen - it means it is highly unlikely.
 
 
> But the viruses cannot destroy the partitioning, directories, etc. disk
> on a RAID device. It can corrupt data, but only data allowed by file
> permissions.
 
Have you ever actually /used/ any sort of raid system? I mean, have you
ever configured a raid system and installed an OS on a PC that uses raid?
 
If you had, you would realise that a raid set appears to the OS as one
big virtual disk. (This is for hardware raid, or normal software raid.
Raid built into a filesystem is a little different.) On that virtual
disk, you make your partitions, build your filesystems, and install your
OS and data. To the OS, and to any programs running on it - including a
virus - there is no logical difference between a filesystem on a
hardware raid set or a filesystem on a software raid set or a filesystem
on a plain disk. If the virus is running with permissions to do "rm -rf
/", it has the same effect with or without raid. If the virus has
permission to run "parted" or access the raw device files, it can screw
with the partitioning. If it can access the software used for
configuring the hardware raid sets, it could even screw with them (it is
highly unlikely that it would bother, of course - it's easy enough to
destroy the data with a total disregard for any underlying raid).
 
>> undamaged version of the file from my hourly snapshots. For older
>> (pre-btrfs) systems, my snapshots are nightly.
 
> You can do the same on RAID, if you wish.
 
You can do it if you have a filesystem or volume manager that supports
fast snapshots, or do it slowly without such support.
 
 
> Sure - but only the files my current ID has access to. It cannot, for
> instance, destroy the backups because this ID does not have permission
> to access those backups.
 
And that is the same no matter what sort of raid, or lack thereof, you
might have.
 
But you only have backups if you realise that your hardware raid card is
not a backup solution as you have repeatedly claimed.
Ian Collins <ian-news@hotmail.com>: Oct 27 11:34AM +1300

On 10/27/16 10:58 AM, Jerry Stuckle wrote:
 
>> RAID != backup.
 
> No one ever said it was. However, you have to try to put words into
> other peoples' mouths to try to make a point.
 
You keep conflating the terms.
 
 
>> So Oracle and its customers (who include the U.S. government) aren't
>> "people who need high reliability" or big companies?
 
> You don't know the difference there, either.
 
Difference between what?
 
 
>> The big companies who use Oracle?
 
> Yes, and the big companies who use real RAID devices for their critical
> reliability systems.
 
Oracle engineered systems all use ZFS.
 
--
Ian
Ian Collins <ian-news@hotmail.com>: Oct 27 12:48PM +1300

On 10/27/16 11:08 AM, Jerry Stuckle wrote:
 
>> Are you really trying to claim that RAID is about backup?
 
> I'm claiming that the major use of RAID around the world is for reliable
> data backup.
 
The major use of RAID around the world is for reliable data storage.
 
>> more slowly) even if you have a disk failure. That is what the R for
>> /redundancy/ means.
 
> Yes, which is a form of backup.
 
Nope. That's a common mistake naive admins make until their machine
goes pop or their RAID card shits itself.
 
>> if you simply need to refer to older versions for some reason.
 
> Two copies of a file IS backup. Even copying a file to a different
> directory is a well-known method of backing up the file.
 
Nope, it's redundancy.
 
>> replication of data.
 
> Yes, and what do you think "continuous off-site replication of data" is,
> if it's not off-site backup.
 
Redundancy.
 
>>> have any facts.
 
>> Ian mentioned Oracle's cloud storage - I said how I interpreted that
>> reference.
 
How you correctly interpreted that reference.
 
 
> Sure, they use RAID. And as I said earlier, RAID uses software. But
> these devices are dedicated to backups - no user code is running on
> them. A huge difference.
 
Indeed, they are not dedicated to backups, they are dedicated to redundancy.
 
--
Ian
Ian Collins <ian-news@hotmail.com>: Oct 27 04:25PM +1300

On 10/27/16 04:16 PM, Jerry Stuckle wrote:
 
> According to someone who just a couple of days ago admitted he knew very
> little about RAID - but now you know how "everyone in the storage
> business" calls them?
 
Where did I say I know very little about RAID? Come on, show the quote
and prove me wrong.
 
I've been in the storage business for a very long time.
 
--
Ian
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 27 10:14AM +0100

On Wed, 26 Oct 2016 23:22:46 -0400
Jerry Stuckle <jstucklex@attglobal.net> wrote:
[snip]
> Voltage is the difference in charges between two points. But if you
> have an infinite resistance, you can have no charge difference
> because there is no relationship between the points.
 
You are clueless.
Ian Collins <ian-news@hotmail.com>: Oct 27 03:57PM +1300

On 10/27/16 03:48 PM, Jerry Stuckle wrote:
>> irrelevant.
 
> Only an incompetent idiot like you would even try to claim that software
> RAID is the same as hardware RAID.
 
Which is why David didn't.
 
>> That is not the same thing as getting another computer.
 
> Or you just get another RAID device and move your disks to that device.
> No problem.
 
A Redundant Array of Independent Disks device, that's a new one.
 
>> ever configured a raid system and installed an OS on a PC that uses raid?
 
> I've been using RAID since the 80's. But I've never installed an OS on
> a PC that uses RAID. They are too insecure.
 
By the way you describe things, you stopped using RAID in the 80s as
well. There's me thinking Jerry world was stuck in the 90s.
 
--
Ian
Jerry Stuckle <jstucklex@attglobal.net>: Oct 26 11:13PM -0400

On 10/26/2016 7:58 PM, David Brown wrote:
 
>> Yes, which is a form of backup.
 
> OK, so you /are/ using the wrong words. You are making up your own
> terms, just like your "true raid" and "emulated raid" terms.
 
Nope. RAID devices such as RAID 1 keep identical copies of files on
different disks. These are backups.
 
> technical issues, it makes sense to use the standard terms. You might
> like to start by googling for "backup" and "raid" - or perhaps go
> straight to Wikipedia.
 
Yes, and you should understand backup. But then there are facts, and
there is Wikipedia. Sometimes the two mesh - but not often. And
obviously you get your information from Wikipedia.
 
> data - you can view them as independent files, change one while having
> the old backup copy remain safe, etc.
 
> Copying a file to a different directory is a simple backup, yes.
 
Wrong answer. Redundancy is a live backup. It follows all changes to
the original file. And just because you can't access it directly on a
RAID device does not mean it is not a backup.
 
> changes you make to the on-site copy are quickly (depending on network
> speed) copied to the off-site copy. If normal usage changes your
> copies, they are not backups!
 
Backups don't need to be logically independent. They need to be
physically independent, which RAID 1 creates. And it can be logically
separated by removing one disk and placing it in another device.
 
> Also note that this sort of continuous off-site replication is not a
> common usage, and it is certainly not supported by any hardware raid
> system.
 
I never said it was. YOU are the one who brought up cloud storage, not
me. But you forgot that point also, didn't you?
 
>>> anyway.
 
>> Your proof? You don't have it, because once again you are full of crap.
 
> Do your own research. It should not take long to google.
 
I have. The difference is I have used hardware RAID controllers -
unlike you. Of course, the other difference is I use my computers for
more than playing FreeCell.
 
Once again, you made a statement without proof - and you can't back it
up because you have admitted you have ZERO experience with hardware RAID.
 
But you are an expert on it! ROFLMAO!
 
> the cpu time needed for compression and decompression, the throughput is
> faster than reading straight from a hard disk. And compression and
> decompression is a good deal more cpu intensive than even raid6.
 
Of course that's all you see. But then it doesn't take a lot of CPU to
play FreeCell. But on a decently active system, software RAID emulation
will definitely slow it down.
 
>> optimized to those processors.
 
> No, they are not faster. And they have significant latencies compared
> to direct host access of SSD's.
 
ROFLMAO! Once again you admit you have never used a RAID controller.
But once again you're trying to compare apples and oranges. How about
comparing apples and apples - hard disks on a computer vs. hard disks on
a RAID device. Or SSD's on a computer vs. SSD's on a RAID device.
 
In all cases a good RAID device will outperform RAID emulation,
especially on a heavily used system.
 
>>> it is more cost-effective to buy more or faster cpus.
 
>> Of course it does, when all you use it for is to play FreeCell.
 
> Maybe you play Freecell on your servers, but most of us do not.
 
No, I guess that is beyond your level of competency.
 
>> don't have any facts - just an opinion.
 
> I never claimed to have facts here - just my interpretation. And I am
> sure Ian will let me know if I guessed incorrectly.
 
You sure talk like what you say are facts. And I wouldn't count on Ian
to verify anything - he's already admitted he knows little about RAID
devices. But suddenly he's an expert. Just the blind leading the blind.
 
> really have services that demand maximal cpu time, then you are likely
> to want the fastest raid types - raid1 and raid10, which have completely
> negligible cpu costs.
 
The difference is the RAID devices do not (and in fact cannot) run user
code. There is no way to corrupt the disk or data unrelated to the user
currently accessing the disk.
 
And the CPU time required can be critical in a system that does more
than play FreeCell (not that even that is up to your level of competency).
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Oct 26 11:16PM -0400

On 10/26/2016 2:54 PM, Ian Collins wrote:
>> ROMs).
 
> It is what everyone in the storage business calls them: EMC, your
> beloved IBM, Oracle, the list goes on.
 
According to someone who just a couple of days ago admitted he knew very
little about RAID - but now you know how "everyone in the storage
business" calls them? ROFLMAO!
 
>> are wrong. And you would NEVER do that.
 
> If he agreed with your made up definitions he would be a fool, which he
> isn't.
 
No, calling you a fool would be an insult to real fools.
 
>> It is, however, how the industry defines RAID and RAID emulation.
 
> Which industry would that be, horticulture? Even Google can't find a
> relevant definition.
 
No, I can believe *you* couldn't find a definition - relevant or not.
But so much for someone who just a couple of days ago admitted he knew
very little about RAID - but now you're an expert? ROFLMAO!
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Oct 27 01:26PM +0200

When you talk about "RAID devices", do you mean SAN ("Storage area
network") systems? i.e., a box that has a lot of disk in a RAID array,
and is connected to the server by Fibre Channel, iSCSI, AoE, etc.?
 
That might explain a little of your horrendous misconceptions. SANs
make use of hardware or software RAID (the distinction is a bit blurred
in a dedicated device). But RAID is certainly not limited to such devices.
 
Or is this what you think /true/ RAID systems look like:
 
<https://www.flickr.com/photos/sainz/3015818920/>
Ian Collins <ian-news@hotmail.com>: Oct 27 04:00PM +1300

On 10/27/16 03:50 PM, Jerry Stuckle wrote:
> On 10/26/2016 6:34 PM, Ian Collins wrote:
 
>> Oracle engineered systems all use ZFS.
 
> Oh yes? You know every Oracle system?
 
I work with Oracle storage.
 
--
Ian
Ian Collins <ian-news@hotmail.com>: Oct 27 04:07PM +1300

On 10/27/16 03:55 PM, Jerry Stuckle wrote:
 
>> The major use of RAID around the world is for reliable data storage.
 
> So that is different from your previous claim that the major use is for
> fast access?
 
I made no such claim.
 
> And reliable data storage requires data backup.
 
Indeed. The two are not the same.
 
--
Ian
Christian Gollwitzer <auriocus@gmx.de>: Oct 27 09:04AM +0200

Am 27.10.16 um 05:22 schrieb Jerry Stuckle:
> That is correct. Ohms law is one of the basic laws of physics.
 
Not really. It is an assumed material characteristic, the same as
Hooke's "Law". If you stretch a spring too much, the force will no
longer be linearly proportional to the displacement. The same with
current in a conducting medium. But admittedly, for a large practical
range, you can forget about nonlinearities and assume Ohm's Law is
correct. For many other practical applications, it isn't. A diode (or
LED) does NOT obey Ohm's "law".
 
> Voltage
> is the difference in charges between two points.
 
Nope, the difference in potential. Charge and potential are related by
Maxwell's equations, a set of partial differential equations.
 
> But if you have an
> infinite resistance, you can have no charge difference because there is
> no relationship between the points.
 
Jerry's lecture notes on physics. Nice.
 
Christian
Jerry Stuckle <jstucklex@attglobal.net>: Oct 26 11:31PM -0400

On 10/26/2016 4:20 PM, Ben Bacarisse wrote:
 
>> Yes - but it is only the ability to generate power.
 
> What is the alternative name for joules per coulomb?
 
> <snip>
 
It is the volt. But it is still only the ability to generate power
(work, actually). Coulomb is a relative measurement, not an absolute.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
David Brown <david.brown@hesbynett.no>: Oct 27 12:08PM +0200

On 27/10/16 04:43, Ian Collins wrote:
 
> I guess the real world and Jerry world will have to disagree on that one.
 
> To confuse you further: ZFS can emulate Redundant Array of Independent
> Disks by using disk files as emulated disks.
 
I don't know about ZFS, but just for laughs I have done this with Linux
md raid where the member files are stored on a tmpfs filesystem, which
normally lives in memory, except when bits of the files get swapped out
(and with swap partitions on two disks, those swapped-out bits also
effectively form a raid-0 pair).
 
Clearly you don't store your files in such arrangements, but they are
excellent for practising with the tools and familiarising yourself with
failure recovery, resizing/reshaping, etc.
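The raid-0 idea in that aside can even be illustrated without any kernel involvement. A toy sketch (the 4-byte chunk size and member count are arbitrary illustrative values, nothing to do with mdadm defaults): stripe data round-robin across member "disks" and reassemble it.

```python
# Toy RAID-0: stripe data round-robin across member "disks"
# (plain bytearrays here) and then reassemble it.
CHUNK = 4  # illustrative chunk size in bytes

def stripe(data: bytes, members: int):
    """Distribute successive chunks round-robin over the members."""
    disks = [bytearray() for _ in range(members)]
    for i in range(0, len(data), CHUNK):
        disks[(i // CHUNK) % members] += data[i:i + CHUNK]
    return disks

def unstripe(disks, length: int):
    """Interleave the members' chunks back into the original order."""
    chunks = [[bytes(d[i:i + CHUNK]) for i in range(0, len(d), CHUNK)]
              for d in disks]
    out = bytearray()
    idx = 0
    while len(out) < length:
        out += chunks[idx % len(disks)][idx // len(disks)]
        idx += 1
    return bytes(out[:length])

data = b"the quick brown fox jumps over the lazy dog"
disks = stripe(data, 2)
assert unstripe(disks, len(data)) == data
```

Real md raid adds superblocks, chunk-size tuning and failure handling on top, but the address-to-member mapping is this simple round-robin at heart.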
Robert Wessel <robertwessel2@yahoo.com>: Oct 27 08:48AM -0500

On Thu, 27 Oct 2016 01:58:31 +0200, David Brown
 
>Also note that this sort of continuous off-site replication is not a
>common usage, and it is certainly not supported by any hardware raid system.
 
 
 
Well, depending on how one stretches the definition of "hardware
RAID", most of the better storage arrays (the IBM DS8000 series, for
example) do support offsite replication, at least as an option.
 
Current DS8880s actually offer two levels of that: a short-distance
option (300km, aka "Metro Mirror") with "zero" data loss, and a
long-distance option ("Global Mirror") with a five-second* window.
Metro Mirror does add a bit to I/O write latency (since the remote
site needs to acknowledge receipt before the local DS8880 can report
the write as committed), although at shorter distances it's quite
reasonable. Global Mirror does not add latency, but does have a data
loss window.
Obviously both options are at the mercy of the link bandwidth for
overall write throughput.
 
But it's not something that exists on the typical PCIe "RAID
controller" card.
 
 
*The DS8880 may actually be 3s, but at least some DS8000s are 5s.
Robert Wessel <robertwessel2@yahoo.com>: Oct 27 08:55AM -0500

On Thu, 27 Oct 2016 08:48:31 -0500, Robert Wessel
 
>But it's not something that exists on the typical PCIe "RAID
>controller" card.
 
>*The DS8880 may actually be 3s, but at least some DS8000s are 5s.
 
 
And FWIW, a DS8880 runs a full copy of AIX. And you can load
additional software onto it, although that's obviously not
recommended, except in specific cases.
Jerry Stuckle <jstucklex@attglobal.net>: Oct 27 11:11AM -0400

On 10/27/2016 7:12 AM, David Brown wrote:
> be no need for two terms, would there? But both are RAID - in the same
> way that a "blue car" is not the same thing as a "red car", but they are
> both cars.
 
That isn't what you said earlier. You said RAID was RAID, whether
implemented in software or hardware. Now you're claiming it isn't?
 
> block device. (Some filesystems can use knowledge of the underlying
> geometry of the block device to improve performance, but that's a bit
> too technical for you.)
 
No, it doesn't matter to YOU whether you use RAID or RAID emulation.
But it does to a lot of people - especially people concerned with not
losing data.
 
 
>> And it's ability to corrupt the entire array due to a virus.
 
> When you invent an irrelevant and incorrect straw man in one post, you
> don't need to keep it up in other posts.
 
Not at all irrelevant or incorrect. It is entirely possible for your
RAID emulation to get corrupted and corrupt the entire disk.
 
> "good enough") decision in the first place. But like any competent
> person, I know that circumstances may change in the future - and that
> may mean it is worth changing the RAID layout of your disks.
 
A competent person would be able to predict the future needs with a fair
amount of accuracy. People I work with do this regularly, because
having to make changes due to a bad decision can cost the company
millions. They are not 100% right - but their success rate is very
high. It has to be or they don't maintain the positions they are in.
 
>> systems - or even moderately secure systems, so you have no idea.
 
> I know where and why ROMs are used. And they are not used to store the
> software in a NAS.
 
You know nothing. High security systems don't allow FLASH for device
code. If there needs to be a change to the software, the ROM must be
physically replaced. It is one technique which prevents changes to any
of the software.
 
But the Play Station you work on isn't a high security system, so you
wouldn't know.
 
>> on them and potentially corrupt the disks. Not that you would
>> understand the concept - it's way past your level of intelligence.
 
> Some let you run other software, some do not.
 
No, the servers I am talking about are designed so that other software
*cannot* run on them. Period. But I know you don't understand that
because it's beyond your level of intelligence.
 
> can be created on those partitions. Any "user code" or "virus" which
> can destroy the partitions on a software raid can do exactly the same
> thing on hardware raid.
 
Not at all. If no externally-sourced code can be executed on the
hardware RAID, it is impossible for any user code or virus to corrupt
the disk. However, RAID emulation is just another part of the OS - an
OS running user code and who knows what else. And I never said it was a
*common* problem. But it *is* a potential problem - one that always
concerns security experts, as it should.
 
But once again that is beyond your limited ability to comprehend.
 
>> proven. And no, as any hardware designer will tell you - you can't
>> really understand how something works until you have designed it.
 
> Remind me never to buy anything /you/ have designed.
 
I wouldn't sell it to you!
 
>> only you would consider a car engine to be similar to a RAID device.
>> Just more proof you don't understand RAID.
 
> Does the word "analogy" mean anything to you?
 
Yes, and it is not an analogy. It is a straw man argument.
 
> that is a big step forward. And you are happy to accept that I
> understand the mathematics behind RAID? But somehow you also think I
> can't understand even the basics of a RAID system?
 
Sure, you can *use* a RAID system. My wife can use a RAID system - she
does it every time she backs up to our NAS. But then she is more
intelligent than you. But no, you do not understand either the
mathematics or basics of a RAID system, as you have repeatedly shown.
 
> controller. But "to make sure everything works together" or "by the
> terms of the service contract" or "for a consistent user experience",
> everything is vendor-locked.
 
Once again you claim to be an expert on every device ever made. But
then you're an expert in everything, aren't you?
 
In reality your head is so far up your arse that you can see your
tonsils. And once again you just proved it.
 
Guess what? RAID devices use SATA disks. All SATA disks use the same
interface - it IS a standard. And basic disk partitioning is the same.
 
Now the file systems on the disks may differ - but those are generally
standard file systems, also. And there are standards for the various
RAID designs - all RAID 5 devices work similarly, for instance.
 
And because of all of this, you can often take a disk out of a brand X
RAID and install it in a brand Y RAID. Not always, but quite often -
just as you can usually take a disk out of one computer and install it
in another computer.
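The "work similarly" claim for RAID 5 comes down to XOR parity: each stripe's parity block is the XOR of its data blocks, so any single lost member can be rebuilt from the survivors. A minimal sketch (the block contents are made up for illustration):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized byte blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe of a three-data-disk RAID 5 (made-up contents).
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# Simulate losing disk 1: rebuild its block from the survivors + parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

Because XOR is associative and commutative, XORing the parity with the remaining data blocks cancels everything except the missing block. Implementations differ in stripe size and parity rotation, but that algebra is common to all of them.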
 
> /and/ you don't keep track of your installation media. For most
> distros, old versions are easily available and you can install as much
> as you want (commercial service contracts are, of course, time limited).
 
In some cases, yes. Others, no. Can you get a SUN Unix from 1991? If
you still have the tape, is it readable? A real question - I know of
one company who is still running that on some machines because the
specialized hardware they are using won't work in modern computers.
 
But also the vast majority of the world does NOT run Linux.
 
>> versions may not run on the old device.
 
> What "old device"? You are putting the old disk into a new machine, you
> are not using old devices.
 
The machine above still has a disk. Although they are going to have a
hard time replacing it when it finally fails. They had a hard enough
time finding a replacement disk on the last failure; they bought out all
the supplier had (about a dozen, IIRC).
 
But it still works, and would cost them a lot more money to have the
specialized hardware redesigned and rebuilt.
 
> You are a fine example of how a little knowledge can be a dangerous
> thing - you know a little about one corner of a subject, and think you
> are an expert on it all.
 
Wrong again. You read way too much into what other people say. And you
are wrong on all counts. No, I have never installed RAID emulation. I
have, however, installed and configured many hardware RAID devices over
the years.
 
But unlike you, I understand the vulnerabilities on every computer
running user code.
 
> No one suggests that the virus is running on the RAID card, any more
> than they suggest that the virus would run on the disk controller
> processors on the hard disks.
 
Sure, but while the virus can overwrite data on the RAID drives, it
cannot destroy the partitioning, etc. on the RAID - that is fixed in the
RAID configuration. Neither can it change data on one drive but not
another. ALL it can do is change the data.
 
Your software RAID emulation allows all of the above.
 
 
> /Flexibility/ is the key property of software raid. With the same RAID
> setup, it will usually be faster than hardware raid - but that is only
> one of its benefits.
 
And vulnerability is also a key property of software RAID. But you make
these claims that it will "usually be faster than hardware RAID". And
it may be - if you have a slow RAID adapter and only use your computer
for playing FreeCell.
 
But then you've never used hardware RAID, so once again you have no idea
what you're talking about. People who use it regularly know better.
 
>> RAID can do it automatically, with no speed reduction.
 
> No, RAID cannot do snapshots. Not hardware RAID, not software RAID - it
> is not part of what RAID is or does.
 
What do you think the second drive on a RAID 1 device is if not a
continually updated snapshot of the first drive? But I guess you're
only talking about static snapshots. There are dynamic ones, also.
 
But once again you try to change the subject because you can't counter
the facts.
 
>> obvious way is by configuring it for RAID 1.
 
>> But once again you show your ignorance by your statements.
 
> Attempting proof by repeated assertion, yet again?
 
Just repeated observations, David.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
