Friday, October 21, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Daniel <danielaparker@gmail.com>: Oct 21 03:24AM -0700

On Thursday, October 20, 2016 at 1:39:46 PM UTC-4, Mr Flibble wrote:
 
> The performance of map and multimap can often be improved significantly
> by using a pool allocator such as the one Boost provides.
 
Which boost allocator? In my experiments I've found the stateless
boost::fast_pool_allocator to be slower than std::allocator. Also with
boost::fast_pool_allocator there doesn't appear to be a way to free the underlying
memory pool until the process ends.
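 
For concreteness, the shape of the comparison I ran was roughly this (a
sketch, not the full benchmark - timings of course vary with platform,
compiler and element count):
 
#include <boost/pool/pool_alloc.hpp>
#include <chrono>
#include <cstdio>
#include <map>
 
template <typename Map>
double time_inserts(int n)
{
    auto t0 = std::chrono::steady_clock::now();
    Map m;
    for (int i = 0; i < n; ++i)
        m[i] = i;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}
 
int main()
{
    typedef std::map<int, int> StdMap;
    typedef std::map<int, int, std::less<int>,
        boost::fast_pool_allocator<std::pair<const int, int> > > PoolMap;
    std::printf("std::allocator:             %f s\n", time_inserts<StdMap>(1000000));
    std::printf("boost::fast_pool_allocator: %f s\n", time_inserts<PoolMap>(1000000));
}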
 
Daniel
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Oct 21 02:17PM +0100

On 21/10/2016 11:24, Daniel wrote:
> boost::fast_pool_allocator to be slower than std::allocator. Also with
> boost::fast_pool_allocator there doesn't appear to be a way to free the underlying
> memory pool until the process ends.
 
I've found quite the opposite. Your experiments are obviously flawed.
 
/Flibble
"Öö Tiib" <ootiib@hot.ee>: Oct 21 06:42AM -0700

On Friday, 21 October 2016 16:17:16 UTC+3, Mr Flibble wrote:
> > boost::fast_pool_allocator there doesn't appear to be a way to free the underlying
> > memory pool until the process ends.
 
> I've found quite the opposite. Your experiments are obviously flawed.
 
You have found what opposite? That std::allocator is slower, and that there
is a way to free the pool 'under boost::fast_pool_allocator'?
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Oct 21 03:54PM +0100

On 21/10/2016 14:42, Öö Tiib wrote:
 
>> I've found quite the opposite. Your experiments are obviously flawed.
 
> You have found what opposite? That std::allocator is slower, and that there
> is a way to free the pool 'under boost::fast_pool_allocator'?
 
I've found that performance improves using it.
 
/Flibble
Jerry Stuckle <jstucklex@attglobal.net>: Oct 20 11:00PM -0400

On 10/20/2016 12:38 PM, Scott Lurndal wrote:
 
>> Well, let's see. We can start with the fact you can only have one
>> voltage on a wire at any specific time.
 
> So use more wires. Duh.
 
Each chip has a fixed number of address lines and a fixed number of data
lines going in and out of it. You can't change that.
 
Duh.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Oct 20 11:21PM -0400

On 10/20/2016 6:18 PM, David Brown wrote:
> voltage point covers 4 bits. This also makes all your comments on
> timing below out by a factor of 4.
 
> But with those corrections, we can continue.
 
Yes, but there is still only one voltage on the wire at any point in
time (and space, if you want to get into wavelengths - but I was
considering a single point, as you would measure with a probe; sorry
that was too complicated for you to understand).
 
And yes, one voltage point covers 4 bits. But that is immaterial. It
just means that one voltage point has a wavelength of about 4 inches
instead of 1 inch. The number of bits per unit of length remains the
same, and propagation times remain the same.
 
So no corrections.
 
>> before the first one was received at the destination. This slows the
>> ability of the receiving system to respond to the transmitting system.
 
> This all affects latency, but not throughput for large transfers.
 
Oh, yes it does. Ethernet is still limited to 1500-byte blocks at the
hardware layer - which is what the 40Gb rate is. If you try to transfer
a larger block from your application, it will be split into chunks no
larger than that.
 
> 4 bits per symbol). The reason for this is to cover the overhead from
> Ethernet framing and spacing, so that the actual data transfer rate of
> the Ethernet frame payloads is at 10 Gb/s for standard frame sizes.
 
Yes, and this also limits speed. Twisted pair cables have different
twist ratios for each pair, to prevent crosstalk. This means signals on
each pair will arrive at slightly different times (which is the biggest
reason for 100M length limitations on Category cable). But it also
limits the speed on the cable, which is why they have to encode the data
and send at a lower rate. It has nothing to do with Ethernet framing
and spacing; rather those are the limits that can be reliably used over
a 100M cable.
 
> switches. If some part of your network cannot sustain full speed, then
> of course it will limit the transfer speed - but that is not under
> discussion here.
 
Ah, but they are. Every one of them slows down the signal a bit,
resulting in slower data transfer rates.
 
> And if you have bought reasonable quality routers, switches and media
> converters for your 10 Gb network, then they affect latency - but not
> throughput for large transfers.
 
But when you delay the arrival of the signal, you delay the handshaking
on the network. Of course, I am also assuming there is no contention at
the router or switch.
 
>> stray capacitance in and around the circuit. They add delays to the
>> signal, which are more significant as you go higher in speed.
 
> Again, you are mixing up latency and throughput.
 
No, I am not. Latency affects throughput. An extreme proof of that is
satellite links. Latency will typically run 1/4 second or so in each
direction across a satellite link. Even with high speed links to the
satellite, throughput is significantly reduced from theoretical maximums.
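 
The arithmetic is simple: sustained TCP throughput is bounded by window
size divided by round-trip time. With a classic 64 KB window (no window
scaling) and a 0.5 second round trip, that is 64 KB / 0.5 s = 128 KB/s,
no matter how fast the underlying link is.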
 
But then I suspect you're going to argue this also, as you know better
than Hughes engineers who work with the satellites on a daily basis
(hint: they have a big installation about 12 miles NNW of me).
 
>> how to violate the laws of physics, you will win a Nobel Prize.
 
> You can try again, if you like, and see if you can think of something
> that affects throughput, and that is actually relevant to the discussion.
 
Why? You'll just continue to show your ignorance by denying the facts.
And you'll continue to do so until the day you die instead of admitting
you are wrong. You've already shown that multiple times.
 
>> There is much more than just TCP/IP's framing overhead, etc.
 
> Actually, it seems to me that I understand this all a good deal better
> than you do.
 
Not that you've shown.
 
>> And BTW - ethernet is a physical layer protocol. TCP/IP is only one of
>> the link layer protocols which can be used.
 
> I am aware of that.
 
Not that you've shown.
 
> not more than Xeon's or the ThunderX cores can handle at this rate.
> (Also, I would expect that the network cards handle the TCP/IP framing
> and checking in hardware to offload the cpu.)
 
There is more than just protocol overhead. But once again you will
continue to argue rather than admit you are wrong.
 
> a factor of perhaps 500 times or more, depending on the number of DDR
> channels). It has already been explained to you how disks systems can
> give more than enough bandwidth to keep up.
 
And I'm talking real life, not some lab setup. But you just can't admit
you're wrong, can you? A trait you have repeatedly shown.
 
 
>> I have.
 
> I must admit I was pleasantly surprised to see that you tried. You
> failed, as we all knew you would, but you get points for the effort.
 
Oh, I am accurate on every point. You just aren't smart enough to
understand. Or you won't admit you're wrong - a trait you have shown
repeatedly.
 
> you like to your potential customers" ?
 
> Or do you mean the technical aspects of Ethernet that I understand and
> you do not?
 
You obviously do NOT understand the technical aspects of Ethernet. So
all I can assume is that you snow your customers just as you're trying
to do here. But it doesn't work when you have someone who knows more
about it than you do.
 
Just because I've been around networks longer than you've been alive
doesn't mean I'm old fashioned or out of touch. But it does mean I have
a lot more experience than you do.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Oct 20 11:36PM -0400

On 10/20/2016 6:33 PM, David Brown wrote:
> not write bandwidth) and give redundancy. You may generate parities for
> data integrity and recovery. You may combine disks to increase the
> total storage.
 
A very simplistic view. Depending on the RAID type, for instance, data
may be written to two disks but only read from one, the other being used
for backup. In fact, that is the most common use for RAID storage -
backup in case one disk fails.
 
Of course, you can have more disks in a RAID array, but the operation of
most are similar.
 
You *can* set up a RAID device to read from multiple disks, but that is
unusual (although not unheard of).
 
> Everything else about the different RAID types is different balances
> between costs, space, bandwidth, redundancy, hardware requirements,
> software requirements, rebuild times, etc.
 
Of course it's a balance - just like everything else is.
 
 
> And thus we have caches or buffers in order to change back and forth
> between several slow paths and fewer faster paths while maintaining
> throughput.
 
And you're still limited to how fast the cache can be filled, which is
based on the speed of the slowest disk *for that operation*. Caches are
fine if you keep requesting the same data. But if you're constantly
requesting different data, as in transferring a large file, disk
throughput will catch up to you.
 
> any variations inevitably reduce the total throughput. But for
> transfers that are big enough to make latencies a minor issue, you get
> pretty close.
 
Once again, it all depends on the cache hit rate. For random access of
the disk and/or large data transfers, the hit rate can be pretty low.
And the lower it is, the more you have to depend on disk access rate.
 
> that depends on the quality of the published data, the care taken by the
> "real" user, and the matching of the workload between the real usage and
> the testing usage.
 
There has yet to be a disk which can be reliably accessed at published
speeds - up to and including those on IBM mainframes. It's like the
speedometer on my car. It goes to 120 mph, but there's no way the car
will go that fast (or stay on the road if I did get it to that speed).
 
>> each request.
 
> As elsewhere, you misunderstand the difference between latency and
> bandwidth or throughput.
 
Not at all. Data transfer to the cache will only be as good as the
slowest disk response to the specific request. This will directly
affect throughput.
 
 
>> You don't remember what you claimed? No surprise there.
 
> Since I did not make such a claim, why would anyone be surprised that I
> can't remember it?
 
I was right. You don't remember what you claimed. I guess it's just
another way to not admit you are wrong, as you always do.
 
> - I have never suggested anything else.
 
> (Note - I have not tested such cards, so the numbers are from web
> published benchmarks.)
 
So what do you claim bus speeds are then?
 
 
>> And you failed there, also.
 
> Well, trying to teach you anything usually fails.
 
Yes, trying to get you to admit you are wrong about anything always
fails. As I've proven before, in this very forum.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Oct 20 11:38PM -0400

On 10/20/2016 1:44 PM, Gareth Owen wrote:
 
> Nah. It's just plain old denial. He's watched younger, brighter people
> come along and take his jobs, and bankrupt his company, as he's failed
> to keep up with the changes in technology.
 
Sorry, no one has taken my jobs, and no one has bankrupted my company.
It is going quite well, and probably bringing in more in a month than
you do in a year.
 
> And he's just extremely bitter to the point he refuses to acknowledge
> that they might know something he doesn't, so he's sitting at home
> shouting at a cloud, wearing his "Make America Great Again" baseball cap.
 
I'm not bitter at all.
 
But I forgot to mention earlier - I guess someone looked through
Wikipedia and found Dunning-Kruger. Nice to know we have expert
psychologists in this group, also.
 
ROFLMAO!
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Robert Wessel <robertwessel2@yahoo.com>: Oct 21 12:05AM -0500

On Fri, 21 Oct 2016 00:33:32 +0200, David Brown
>(but not write bandwidth) and give redundancy. You may generate
>parities for data integrity and recovery. You may combine disks to
>increase the total storage.
 
 
Increasing the number of disks in a RAID-5 array also increases the
write bandwidth, even for small writes. For small writes, you're
stuck (assuming no fortuitous caching) with the standard
two-reads-plus-two-writes update cycle. Consider a four-disk RAID-5
array, where the two writes are aligned so that the data and parity
blocks for the two writes (four blocks total) are all on different
disks - then those two write cycles can proceed in parallel (subject
to limitations of the RAID controller).
 
A non-fortuitous alignment of the writes will reduce the opportunities
for parallelism, but that's no different than what happens with
(potentially) parallel reads to the array.
 
So while RAID-5 writes are (as a first order approximation) four times
the work of a read, the number of those that can be performed in
parallel scales with the number of disks in the array, just as reads do
(less the constant factor of four).
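 
For anyone unfamiliar with why the small-write cycle is two reads plus
two writes: RAID-5 parity is plain XOR, so the new parity can be
computed from the old data, old parity and new data alone. A trivial
sketch:
 
#include <stdint.h>
#include <cstdio>
 
// RAID-5 small write: read old data block and old parity block,
// compute new parity, write new data block and new parity block.
uint64_t new_parity(uint64_t old_parity, uint64_t old_data, uint64_t new_data)
{
    return old_parity ^ old_data ^ new_data;
}
 
int main()
{
    uint64_t d0 = 0x1111, d1 = 0x2222, d2 = 0x3333;
    uint64_t p = d0 ^ d1 ^ d2;          // parity of the full stripe
    uint64_t d1new = 0xBEEF;
    p = new_parity(p, d1, d1new);       // small-write update of d1
    d1 = d1new;
    std::printf("parity consistent: %s\n",
                p == (d0 ^ d1 ^ d2) ? "yes" : "no");
}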
 
OTOH, for large RAID-5 writes, the size of a stripe (or multiples
thereof), those can be written with just a set of writes to all disks
in the array (with the parity block being computed directly from the
data written), without the read-modify-write process, and thus also
scales with the number of disks (although the stripe size will
typically increase with the number of disks).
 
Straight replication to improve bandwidth (with or without RAID-5),
OTOH does improve read bandwidth but not write bandwidth, since all
copies need to get all writes.
David Brown <david.brown@hesbynett.no>: Oct 21 09:46AM +0200

On 21/10/16 05:21, Jerry Stuckle wrote:
> instead of 1 inch. The number of bits per unit of length remains the
> same, and propagation times remain the same.
 
> So no corrections.
 
You mean you have accepted my corrections to your mistakes, and are
happy that the description is now reasonably accurate?
 
> hardware layer - which is what the 40Gb rate is. If you try to transfer
> a larger block from your application, it will be split into chunks no
> larger than that.
 
Jumbo frames, anyone?
 
> twist ratios for each pair, to prevent crosstalk. This means signals on
> each pair will arrive at slightly different times (which is the biggest
> reason for 100M length limitations on Category cable).
 
The limitation of 100 m length for most copper Ethernet connections has
/nothing/ to do with length matching between the pairs. Length matching
of the two wires making up the pairs is important, but the key points
are the impedance (resistance, capacitance, inductance) of the cable,
how that affects the signal strength, and how it affects different
frequencies.
 
For simplicity (most c.l.c++ members are not electronics engineers),
imagine the drivers are putting a simple square wave in at one end. This
square wave can be viewed as a sum of sine waves of different
frequencies - the harmonics of the fundamental frequency. As these
different sine waves pass along the wire, they do so at slightly
different speeds and with slightly different attenuation. So when they
arrive at the other end, the sum is no longer a clean square wave, but a
smeared out lumpy wave. With a long enough wire, the result is no
longer recognizable and you have reached the limit of the connection.
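 
For anyone who would rather see that in code than on an oscilloscope,
here is a toy illustration (the dispersion model is made up, purely for
demonstration):
 
#include <cmath>
#include <cstdio>
 
// Square wave built from its odd harmonics; "disp" delays higher
// harmonics progressively, crudely mimicking dispersion in a cable.
double wave(double t, double disp)
{
    const double pi = 3.14159265358979;
    double s = 0.0;
    for (int k = 1; k <= 15; k += 2)
        s += std::sin(k * (t - disp * k)) / k;
    return 4.0 / pi * s;
}
 
int main()
{
    for (double t = 0.0; t < 6.3; t += 0.7)
        std::printf("t=%4.1f  clean=% .3f  smeared=% .3f\n",
                    t, wave(t, 0.0), wave(t, 0.05));
}
 
With disp = 0 you get a reasonable square wave; increase it and the
edges smear out, just as they do at the far end of a long cable.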
 
Real Ethernet drivers (especially for faster speeds) are more complex,
with pre-compensation for this effect, but you still have distance
limitations in the same way. And lower category CAT cables suffer more,
which is why their distance limitation is shorter than higher category
cables.
 
Optical transmission in fibre optics does not have this effect as there is
only one frequency transmitted (WDM excluded). There is a related
problem with varying light path length through the cable, but it has
much lower effect - thus fibre connections can reach tens of km.
 
 
>> discussion here.
 
> Ah, but they are. Every one of them slows down the signal a bit,
> resulting in slower data transfer rates.
 
No.
 
 
> But when you delay the arrival of the signal, you delay the handshaking
> on the network. Of course, I am also assuming there is no contention at
> the router or switch.
 
Handshaking affects the latency (especially of connection setup), but
not the throughput.
 
 
> But then I suspect you're going to argue this also, as you know better
> than Hughs engineers who work with the satellites on a daily basis
> (hint: they have a big installation about 12 miles NNW of me).
 
I know better than you do, anyway.
 
Hint - you can get a hell of a lot of TV channels through a satellite
link. There is a massive bandwidth, despite the latency.
 
Here's another hint - a lorry load of DVDs has a huge bandwidth, despite
huge and somewhat unpredictable latency.
 
>>> the link layer protocols which can be used.
 
>> I am aware of that.
 
> Not that you've shown.
 
I did not mention them because they were not relevant. I also didn't
mention that you can buy Ethernet cable in different colours, for the
same reason.
 
>> give more than enough bandwidth to keep up.
 
> And I'm talking real life, not some lab setup. But you just can't admit
> you're wrong, can you? A trait you have repeatedly shown.
 
That's interesting - since the specific example has been Ian's (and
Scott's) /labs/. They are not using their 10 Gb links to get faster
Facebook access - they are using them for testing and developing
equipment with high-speed networks.
 
In real life, such high speed networks are usually found in datacenters
or network backbones. And of course the average throughput on the links
will be well below the peak rates - there are few applications where you
have a long-term sustained source of data at such rates.
 
 
> Oh, I am accurate on every point. You just aren't smart enough to
> understand. Or you won't admit you're wrong - a trait you have shown
> repeatedly.
 
No, you are not accurate. You have just enough knowledge to know some
of the big words, and perhaps enough to con people into thinking that
you are an expert. But when it comes to details, you are all over the
place - you make things up as you go along, and when challenged you deny
things, change the subject, add straw men, or insult people.
 
You have often boasted about how you are a "consultant", and how others,
such as myself, "will never make it as a consultant". I now understand
exactly what kind of consultant you are, and I can imagine that you are
actually very good at the job. Your primary concern as such a
"consultant" is to sell yourself - to persuade potential customers that
/you/ can provide the knowledge and experience that they are missing.
Once you get the contract and have to start the job, you will then pass
the buck - getting others to do the work, subcontracting, changing the
parameters of the situation, recommending the customer buy different
equipment, etc. You don't actually /do/ anything, except pocket the
money, because you are not able to do the engineering or development work.
 
I am glad to say I would never make it as a "consultant" like that - I
am happy to be an engineer and developer. And when I do consultancy
(which is also part of my job), it is as an experienced developer and
engineer, rather than as a con man.
 
> all I can assume is that you snow your customers just as you're trying
> to do here. But it doesn't work when you have someone who knows more
> about it than you do.
 
Actually, with my significantly better knowledge of Ethernet than yours,
I know that I am /not/ qualified to be a consultant on the details of
Ethernet. I can give customers rough information, and can certainly
make use of standard Ethernet solutions. But I know that if a customer
wanted help to make a high speed, high bandwidth network system, I would
have to read and learn more before going into details, or even pass the
job on to someone else.
 
You, with your limited understanding of the basics and invented details,
seem to think you /are/ qualified to work with modern high-speed networking.
 
> Just because I've been around networks longer than you've been alive
> doesn't mean I'm old fashioned or out of touch. But it does mean I have
> a lot more experience than you do.
 
Plugging in a few cables 40 years ago does not mean you have 40 years of
experience in Ethernet!
 
David Brown <david.brown@hesbynett.no>: Oct 21 01:18PM +0200

On 21/10/16 05:36, Jerry Stuckle wrote:
> may be written to two disks but only read from one, the other being used
> for backup. In fact, that is the most common use for RAID storage -
> backup in case one disk fails.
 
Yes, I know. Usually you allow reads from both disks in such cases,
since it allows higher bandwidth on large reads and lower latency on
random reads, but there are cases where you don't normally read from the
second copy. For example, you might have a RAID1 pair where one disk is
a fast SSD, and the other is a slow but cheap hard disk for redundancy.
Or perhaps one of the disks is remotely accessed by iSCSI or Linux DRBD,
and again you don't want to read from it normally.
 
> Of course, you can have more disks in a RAID array, but the operation of
> most are similar.
 
Exactly - the operations are similar, and the principles are the same.
 
 
> You *can* set up a RAID device to read from multiple disks, but that is
> unusual (although not unheard of).
 
No, it is very common. Obviously with striped RAID (RAID0, RAID5,
RAID6, RAID10 etc.), you /must/ read from multiple disks in order to get
the data you need. But with mirrored setups (RAID1, or layered RAID
using RAID1) you may also read from both devices simultaneously.
Usually this is doing two independent reads from the disk - accessing
two different files at the same time, reducing latency. But it can also
be reading different parts of the same file from the two disks in order
to improve total effective bandwidth. This would only be for quite
large files, in cases where the combined reads give better throughput
despite having the extra seeks needed on each disk.
 
> fine if you keep requesting the same data. But if you're constantly
> requesting different data, as in transferring a large file, disk
> throughput will catch up to you.
 
Let me try to spoonfeed this to you.
 
You have 4 disks - A, B, C and D, each able to sustain 150 MB/s
physically. Each of these disks has a 64 MB local cache. These are
connected by 600 MB/s links to a SATA port multiplier, which has a
single 600 MB/s link into the computer's IO system (IO controller, DMA,
etc. - all of which runs so much faster than 600 MB/s that we call it
unlimited speed).
 
If you try to read large streams from disk A, the data will come off the
disk in a stream at 150 MB/s, into the cache. It will come out of the
disk's cache at 600 MB/s onto the SATA link to the port multiplier, and
go from there at 600 MB/s to the computer. These 600 MB/s transfers
will come in bursts with pauses at a 25% ratio - for an average
throughput of 150 MB/s.
 
If you try to read large streams from all four disks (as you would for a
RAID0 set), then data will come off each disk at 150 MB/s into each
disk's cache. It will then be transferred in lumps from the four disk
caches to the port multiplier at 600 MB/s, with each disk transferring
for about a quarter of the time. The caches mean that the disks can
collect up to 64 MB (0.1 seconds worth of data) in a lump, to minimise
switching overhead. Data will go from the multiplier to the cpu at
full 600 MB/s, running continuously.
 
/That/ is how caches in hard disks work during reads. They are not for
keeping common data online - you do that with software caches on the
computer.
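 
In numbers, the model is simply min(n * disk_rate, link_rate):
 
#include <algorithm>
#include <cstdio>
 
// Streaming throughput of n disks at 150 MB/s each behind a single
// shared 600 MB/s link: the slower of the two is the ceiling.
int main()
{
    const double disk = 150.0, link = 600.0;
    for (int n = 1; n <= 6; ++n)
        std::printf("%d disk(s): %.0f MB/s\n", n, std::min(n * disk, link));
}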
 
 
 
> Once again, it all depends on the cache hit rate. For random access of
> the disk and/or large data transfers, the hit rate can be pretty low.
> And the lower it is, the more you have to depend on disk access rate.
 
See above for an explanation of what the cache on the hard disk
/actually/ does during reads. (It is also used during writes.)
 
 
I've snipped the rest - it's getting too tedious to try and explain
things to you. And even if I were to write that grass is green, you'd
claim it was pink and that I had previously said it was blue.
David Brown <david.brown@hesbynett.no>: Oct 21 02:00PM +0200

On 21/10/16 07:05, Robert Wessel wrote:
 
> Straight replication to improve bandwidth (with or without RAID-5),
> OTOH does improve read bandwidth but not write bandwidth, since all
> copies need to get all writes.
 
Yes, that is all correct, with more detail than I had regarding RAID5.
 
For interesting RAID setups, have you ever looked at Linux's RAID-10
support? You can use it with two (or more) disks, and (for hard disks)
it can give more than twice the streaming read performance of a single
disk - better than RAID-0. Latency for random reads is also
significantly smaller. At the same time, you have the redundancy of
mirrored RAID-1, but writes are slower.
 
I can explain how it all works if anyone is curious - but we can first
wait for Jerry to deny that it is physically possible.
cross@spitfire.i.gajendra.net (Dan Cross): Oct 21 12:51PM

In article <nubfq6$6qi$1@dont-email.me>,
>> And BTW - ethernet is a physical layer protocol. TCP/IP is only one of
>> the link layer protocols which can be used.
 
>I am aware of that.
 
Sorry, quoting David instead of Jerry since I've plonk'ed Jerry.
 
A minor quibble on this point: Jerry is only half-right (I suspect
David knows what I'm going to say but didn't mention it because,
well, at some point you have to say "Enough!").
 
Anyway, ethernet can be fairly said to straddle the physical and
link layers (layers 1 and 2 in the OSI model), but the TCP/IP suite
is certainly above both. IP is a network layer protocol (layer 3),
and TCP is approximately a transport layer protocol (layer 4).
 
Ethernet defines a link-level protocol that fits squarely into OSI's
layer two, but also has physical layers describing signaling protocols
and encoding (such as 10BASE5, which perhaps Jerry is familiar
with).
 
I strongly suspect Jerry did not understand the distinction.
 
- Dan C.
Jerry Stuckle <jstucklex@attglobal.net>: Oct 21 10:08AM -0400

On 10/21/2016 3:46 AM, David Brown wrote:
 
>> So no corrections.
 
> You mean you have accepted my corrections to your mistakes, and are
> happy that the description is now reasonably accurate?
 
Not at all. As I said, I was merely trying to simplify the situation
to make it understandable to your minuscule knowledge.
 
>> a larger block from your application, it will be split into chunks no
>> larger than that.
 
> Jumbo frames, anyone?
 
Not at the hardware layer. The Ethernet spec limits it to 1500 bytes.
 
> are the impedance (resistance, capacitance, inductance) of the cable,
> how that affects the signal strength, and how it affects different
> frequencies.
 
Actually, it has *everything* to do with the lengths between the pairs,
as was discussed extensively in the TIA standards committee. I wasn't
part of the committee, but there has been a great deal of discussion in
the trade magazines as well as webinars explaining the decisions and why
they were made.
 
While only one pair for each transmit and receive is currently being
used, they were very concerned about how it would work when using
multiple pairs for each transmit and receive - as is used in 40Gb links.
 
> limitations in the same way. And lower category CAT cables suffer more,
> which is why their distance limitation is shorter than higher category
> cables.
 
While it is true the signal degrades over distance, it does not degrade
that significantly. And as you noted, the effect can largely be
compensated for. But while you are correct that lower CAT cables (CAT is
short for category - so "category CAT" is redundant) suffer more, CAT-5,
CAT-6, CAT-7 and CAT-8 all have the same 100M limitation.
 
> only one frequency transmitted (WDM excluded). There is a related
> problem with varying light path length through the cable, but it has
> much lower effect - thus fibre connections can reach tens of km.
 
Once again, incorrect - and you show you know nothing about fiber.
 
There are two types of fiber - single-mode and multi-mode (further
subdivided, but I'll keep it simple for you here).
 
Multi-mode fiber has a larger diameter (typically 50um or 62.5um), which
allows multiple reflections of the beam inside the fiber, resulting in
multiple modes at the other end. The result is signal distortion, and
limits fiber lengths to 2km or less, depending on bit rate. It can,
however, be driven with relatively inexpensive LEDs.
 
Single-mode fiber is smaller in diameter (typically 8-10.5um), limiting
the reflections within the fiber. The result is a single-mode signal at
the output and a much cleaner signal. Resulting distances can exceed 80km.
 
 
>> Ah, but they are. Every one of them slows down the signal a bit,
>> resulting in slower data transfer rates.
 
> No.
 
You really don't understand latency, do you?
 
>> the router or switch.
 
> Handshaking affects the latency (especially of connection setup), but
> not the throughput.
 
And handshaking doesn't affect throughput? Here's a clue. The
handshake cannot occur until the entire packet is received. Anything -
including latency - which slows the signal delays the handshake.
 
 
> I know better than you do, anyway.
 
> Hint - you can get a hell of a lot of TV channels through a satellite
> link. There is a massive bandwidth, despite the latency.
 
Sure you can. But there is no handshaking on a TV channel.
 
> Here's another hint - a lorry load of DVDs has a huge bandwidth, despite
> huge and somewhat unpredictable latency.
 
Ditto above.
 
But if latency weren't important, why would disk manufacturers spend
huge amounts of money adding caches to their drives? The ONLY thing the
cache affects is latency.
 
 
> I did not mention them because they were not relevant. I also didn't
> mention that you can buy Ethernet cable in different colours, for the
> same reason.
 
What kind of cable? There is no such thing as "Ethernet cable". There
is category cable. There is coax cable. There is fiber optic cable.
Any of them can carry ethernet signals.
 
Once again you show your ignorance.
 
> or network backbones. And of course the average throughput on the links
> will be well below the peak rates - there are few applications where you
> have a long-term sustained source of data at such rates.
 
Even data centers know better than to think they can get 40Gb on a 40Gb
link for anything but a short burst. But then you've never been in a
real data center, have you? Hint: we have several within 25 miles of
me. Not only the U.S. government, which is highly restricted, but Network
Solutions, Amazon and other commercial entities have data centers here.
 
> you are an expert. But when it comes to details, you are all over the
> place - you make things up as you go along, and when challenged you deny
> things, change the subject, add straw men, or insult people.
 
Nope, I am accurate on every point. And I'm not the one who keeps
changing the subject or denying things. And calling you an idiot would
be an insult - to idiots.
 
> parameters of the situation, recommending the customer buy different
> equipment, etc. You don't actually /do/ anything, except pocket the
> money, because you are not able to do the engineering or development work.
 
That is incorrect. I am successful as a consultant because I am
successful in satisfying my clients' needs. I don't lie to them, and I
don't try to snow them. I do very little subcontracting, and cannot
change the parameters of the situation or the equipment, etc. -
because those are all fixed in the contract.
 
I am successful because I CAN do the engineering and development work.
I wouldn't have made it for over 25 years if I were what you think.
 
> am happy to be an engineer and developer. And when I do consultancy
> (which is also part of my job), it is as an experienced developer and
> engineer, rather than as a con man.
 
I'm glad you'll never make it as a consultant. We have too many people
like you already. Fortunately, they don't last more than 2-3 jobs
before word gets around. But those 2-3 jobs create a stain on the
entire consulting community.
 
> wanted help to make a high speed, high bandwidth network system, I would
> have to read and learn more before going into details, or even pass the
> job on to someone else.
 
Ah, but that's the difference between you and me. I can give customers
*detailed information* and it will be *accurate* as later tests have
always proven. I don't have to read and learn more before going into
the details because I've been there and done that. But it would take a
lot more than just reading and learning for you to give anyone help. It
takes years of experience.
 
> You, with your limited understanding of the basics and invented details,
> seem to think you /are/ qualified to work with modern high-speed networking.
 
Yup, as I've been proving for years. But then I started when 2400 bps
synchronous links were considered "high speed", back in the 70's.
 
>> a lot more experience than you do.
 
> Plugging in a few cables 40 years ago does not mean you have 40 years of
> experience in Ethernet!
 
Nope, because ethernet hasn't been around for 40 years. But it shows
experience that you can't begin to have.
 
 
I know what your real problem is. For years you've been snowing people
in this newsgroup into thinking you actually know something. But now when
someone who knows more than you comes along and challenges you, it
really bugs you. You aren't the "top dog" any more. Very typical of
someone with a severe inferiority complex.
 
Unlike you, I don't have one, and I don't need to be "top dog". I just want
people to know the truth, not the snow jobs you've been giving them.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Oct 21 10:15AM -0400

On 10/21/2016 8:51 AM, Dan Cross wrote:
> with).
 
> I strongly suspect Jerry did not understand the distinction.
 
> - Dan C.
 
Oh, I understand the distinction, all right. But not everything fits
into the OSI model, and there is not always a clean distinction between
physical and link layers. Ethernet is one such protocol.
 
I didn't get into the difference because I didn't want to confuse David.
He has shown he doesn't understand the different layers.
 
And I know you won't read this because you plonked me. But then it's
typical for trolls to post attacks but not read responses.
 
It's no loss to me. You're only making a fool of yourself.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Oct 21 10:31AM -0400

On 10/21/2016 7:18 AM, David Brown wrote:
> to improve total effective bandwidth. This would only be for quite
> large files, in cases where the combined reads give better throughput
> despite having the extra seeks needed on each disk.
 
Yes, with striped RAID you read from multiple disks. However, it's
generally the *same* disks for any specific data, even though
that same data exists on one or more other disks. And if that disk is
busy, the request will wait instead of reading from another disk.
 
> go from there at 600 MB/s to the computer. These 600 MB/s transfers
> will come in bursts with pauses at a 25% ratio - for an average
> throughput of 150 MB/s.
 
Which is exactly what I was saying. If data is not in the cache, you
will be limited by disk speed. But I guess you're too stoopid to
understand English. Maybe I need to use a smaller spoon for you.
 
> collect up to 64 MB (0.1 seconds worth of data) in a lump, to minimise
> switching overhead. Data will go from the multiplexer to the cpu at
> full 600 MB/s, running continuously.
 
No difference here than if you have no cache. Either way you can get
600MB/s from 4 150MB/s disks, because even without a cache, disks will
buffer data into blocks and transfer at bus speeds. The difference is
the blocks may be as little as 512 bytes or as large as 4K or more,
depending on the disk.
 
The I/O and memory buses do not slow down to 150MB/s to transfer data
from one disk - with or without cache, and never have.
 
> /That/ is how caches in hard disks work during reads. They are not for
> keeping common data online - you do that with software caches on the
> computer.
 
No, caches speed up access to the data. For instance, in your above
example, if data is in the cache for one drive, you will get an
immediate 600MB/s data transfer. If the data is in the cache of all 4
drives, data transfer will be 2400MB/s.
 
THAT is why disks have caches, and why the larger the better (within
reason, of course).
 
>> And the lower it is, the more you have to depend on disk access rate.
 
> See above for an explanation of what the cache on the hard disk
> /actually/ does during reads. (It is also used during writes.)
 
Yes, you've once again shown what you don't know - in this case the
advantage of disk caches.
 
 
> I've snipped the rest - it's getting too tedious to try and explain
> things to you. And even if I were to write that grass is green, you'd
> claim it was pink and that I had previously said it was blue.
 
Oh, you mean you aren't going to try to show how little you know about
token ring again? Or try again to claim modern computers don't have buses?
 
How like you - you just can't admit you're wrong about anything, can you?
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Juha Nieminen <nospam@thanks.invalid>: Oct 21 10:25AM

For std::vector to use the move constructor of its elements, said
constructor must have been declared 'noexcept'. However, does the
destructor of the element also need to be 'noexcept'?
 
I'm getting conflicting information on this. Many online discussions
claim that the destructor also needs to be 'noexcept', yet when I make
a test using a class with a non-noexcept destructor using the latest
gcc, std::vector calls the move constructor just fine.
 
(This question is quite relevant because the class I'm making does
allocate memory and needs to dispose it properly in its destructor,
and I'm not sure it's kosher to make it 'noexcept' in this case.)
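 
For reference, the kind of test I ran boils down to this (reduced; with
gcc it prints "move" on reallocation even though the destructor is
noexcept(false)):
 
#include <cstdio>
#include <vector>
 
struct Elem
{
    Elem() {}
    Elem(const Elem&) { std::puts("copy"); }
    Elem(Elem&&) noexcept { std::puts("move"); }
    ~Elem() noexcept(false) {}  // deliberately not noexcept
};
 
int main()
{
    std::vector<Elem> v;
    v.reserve(1);
    v.emplace_back();
    v.emplace_back();  // reallocates, relocating the first element
}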
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
SG <s.gesemann@gmail.com>: Oct 21 06:26AM -0700

On Friday, October 21, 2016 at 12:25:19 PM UTC+2, Juha Nieminen wrote:
> For std::vector to use the move constructor of its elements, said
> constructor must have been declared 'noexcept'. However, does the
> destructor of the element also need to be 'noexcept'?
 
It should be in general. Destructors that throw exceptions are usually
difficult to deal with correctly and are best avoided.
 
This doesn't have anything to do with moving, though. Moving objects
around instead of copying them doesn't change the number of destructor
invocations. vector<T> might require T to have a noexcept dtor, I
don't know. I wouldn't be surprised if a throwing ~T would invoke
undefined behaviour when used in vector<T>. But I'd have to check that.
 
Cheers!
SG
"Öö Tiib" <ootiib@hot.ee>: Oct 21 06:36AM -0700

On Friday, 21 October 2016 13:25:19 UTC+3, Juha Nieminen wrote:
> For std::vector to use the move constructor of its elements, said
> constructor must have been declared 'noexcept'. However, does the
> destructor of the element also need to be 'noexcept'?
 
My streetwise assumption (can't cite scripture) is that
"Yes it is required".
 
> claim that the destructor also needs to be 'noexcept', yet when I make
> a test using a class with a non-noexcept destructor using the latest
> gcc, std::vector calls the move constructor just fine.
 
How did you arrange, in your test, for the destructor to be potentially
throwing? Implicitly destructors are 'noexcept(true)' unless the class
has bases or members with 'noexcept(false)' destructors.
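 
Easy to check, e.g.:
 
#include <type_traits>
 
struct X { ~X() {} };                  // implicitly noexcept(true)
struct Y { ~Y() noexcept(false) {} };  // potentially throwing
 
static_assert(std::is_nothrow_destructible<X>::value, "X: noexcept dtor");
static_assert(!std::is_nothrow_destructible<Y>::value, "Y: may throw");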
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Oct 21 04:08PM +0200

On 21.10.2016 12:25, Juha Nieminen wrote:
 
> (This question is quite relevant because the class I'm making does
> allocate memory and needs to dispose it properly in its destructor,
> and I'm not sure it's kosher to make it 'noexcept' in this case.)
 
As long as the compilers you use support it, the main question is
whether a destructor of a moved-from object can ever throw.
 
If it can then it can really mess up buffer replacement code in
std::vector; formally that's then UB.
 
But otherwise, given that the compilers support it, the question is one
of portability - whether the /standard/ supports it. I don't know. But I would
think that since it is of practical interest to have that support, it
probably does?
 
 
Cheers!,
 
- Alf
Paavo Helde <myfirstname@osa.pri.ee>: Oct 21 05:20PM +0300

On 21.10.2016 13:25, Juha Nieminen wrote:
 
> (This question is quite relevant because the class I'm making does
> allocate memory and needs to dispose it properly in its destructor,
> and I'm not sure it's kosher to make it 'noexcept' in this case.)
 
What is important for element moving is probably that the destructor of
*moved-from* elements does not throw exceptions. However, this cannot be
codified formally, so gcc might just assume that this is the case.
 
Anyway, this is only a theoretical issue because destructors of most
classes should be noexcept anyway, to avoid calling std::terminate()
during stack unwind. Add a try ... catch(...) block and log any
unexpected exceptions to stderr if there is no other way to make the
destructor noexcept.
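 
Something along these lines (a sketch; release() stands in for whatever
cleanup might throw):
 
#include <cstdio>
#include <exception>
 
class Resource
{
public:
    ~Resource() noexcept
    {
        try {
            release();  // cleanup that may throw
        } catch (const std::exception& e) {
            std::fprintf(stderr, "~Resource: %s\n", e.what());
        } catch (...) {
            std::fprintf(stderr, "~Resource: unknown exception\n");
        }
    }
private:
    void release() { /* free handles, flush buffers, ... */ }
};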
 
Cheers,
Paavo
Juha Nieminen <nospam@thanks.invalid>: Oct 21 09:47AM

> 64-bit Windows has 32-bit "long int", while (AFAIK) all 64-bit *nix
> systems have 64-bit "long int".
 
On Windows (even with a compiler like mingw/gcc, which also has a
32-bit long even when compiling a 64-bit executable) it's actually
better to use std::size_t instead of unsigned long, as the former
will be 64-bit (when compiling to a 64-bit executable) while the
latter won't.
 
But of course checking the size of std::size_t in the preprocessor
is even more problematic.
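 
The closest thing I know of is SIZE_MAX from <stdint.h> (modulo
__STDC_LIMIT_MACROS on some older compilers):
 
#include <stdint.h>
 
#if SIZE_MAX > 0xFFFFFFFFu
// size_t is wider than 32 bits here
#else
// 32-bit size_t
#endif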
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
Juha Nieminen <nospam@thanks.invalid>: Oct 21 09:48AM

> But do you really need the size of long? Or is it more constructive to
> use int64_t or something like that where 64 bits are required.
 
It would be, except it's not compatible with C++98.
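 
So in C++98 one typically falls back on vendor extensions, along these
lines (a sketch - neither typedef is guaranteed by the standard):
 
// C++98 fallback for a 64-bit integer type
#if defined(_MSC_VER)
typedef __int64            int64;
typedef unsigned __int64   uint64;
#else
typedef long long          int64;   // gcc/clang extension in C++98
typedef unsigned long long uint64;
#endif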
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
woodbrian77@gmail.com: Oct 20 09:43PM -0700

On Thursday, October 20, 2016 at 3:49:23 PM UTC-5, Vir Campestris wrote:
 
> 2) Copy the contents of "form" into the random location.
 
> FWIW I always feel that in C++ if you're using a raw pointer you're
> probably doing it wrong. Not always, but usually.
 
I disagree with that. I use std::unique_ptr some, but raw pointers
work well in a lot of cases. I have an archive on this page:
 
http://webEbenezer.net/build_integration.html
 
that shows limited use of smart pointers and liberal use of
raw pointers.
 
 
> And read about JSON
> like Öö says!
 
I work on an alternative to JSON.
 
 
Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net
"Öö Tiib" <ootiib@hot.ee>: Oct 20 11:32PM -0700

> > probably doing it wrong. Not always, but usually.
 
> I disagree with that. I use std::unique_ptr some, but raw pointers
> work well in a lot of cases.
 
Smart pointers are not the only replacement for raw pointers in C++.
There are references, containers, indexes, iterators etc. As a result
I know of only a few cases where I have to declare something to be
of type raw pointer to something else.
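 
For example, where C code would hand out a pointer into an array,
idiomatic C++ often uses an iterator or an index (a small sketch):
 
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>
 
int main()
{
    std::vector<int> v;
    v.push_back(3); v.push_back(1); v.push_back(2);
 
    // iterator instead of a raw pointer into the buffer
    std::vector<int>::iterator it = std::min_element(v.begin(), v.end());
 
    // index instead of a pointer - survives reallocation
    std::size_t pos = it - v.begin();
    v.push_back(0);  // may reallocate and invalidate 'it'
    std::printf("min is %d at index %u\n", v[pos], unsigned(pos));
}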
 
 
> http://webEbenezer.net/build_integration.html
 
> that shows limited use of smart pointers and liberal use of
> raw pointers.
 
Please name at least 3 different cases where you need a raw pointer, and
why. Making people dig through your poorly documented code base to try to
find it out themselves is fruitless; they don't have time for that.
 
 
> > And read about JSON
> > like Öö says!
 
> I work on an alternative to JSON.
 
JSON is language neutral, so your C++ code generator cannot replace its
main use cases.