- "Why I don't spend time with Modern C++ anymore" by Henrique Bucher - 21 Updates
- Maintaining a collection of iterators to a list - 2 Updates
- Truncate std__ostringstream? - 2 Updates
Jerry Stuckle <jstucklex@attglobal.net>: Jun 02 10:20PM -0400 On 6/2/2016 4:15 AM, Ian Collins wrote: >> How many places in the world have 100Gbs networking external to a data >> center? > Under my desk once the adapters and cables arrive.... Right. You're going to have 100Gbs at your desk, and be able to use it. ROFLMAO! You are just too much, Ian! -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 02 10:24PM -0400 On 6/2/2016 7:13 AM, Ian Collins wrote: >> cache (in which case it will operate at speeds near to - but not quite >> as fast as - an SSD), but even then the cache size is limited. > Hardware RAID is so last century... Wrong again. It is still quite heavily used - especially where huge drives and reliability are required. But you wouldn't understand that. >> an 8 disk RAID can't keep up; eventually it will have to go to disk, and >> even the fastest disks max out at around 175MB/s. > .. each. Yup, and even 8 of them together are far short of your claims of 40GBs. They're even short of 2Gbs. But I guess you never were very good at multiplication, either. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 02 10:25PM -0400 On 6/2/2016 2:19 AM, Rosario19 wrote: >> are - the chip's design restricts access to one unit at a time. > why each cpu does not has one its 'bus' or line to memory indipendet > of other? There is only one address bus and one data bus on the chip. If each CPU had a separate bus, they wouldn't be able to access the same memory. You'd just have multiple computers in one box. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 02 10:26PM -0400 On 6/2/2016 8:49 AM, Scott Lurndal wrote: > have from two to 12 (or more, depending on socket count) memory > controllers driving one or two DIMM's per each. That's not even > considering FBDIMMS or other serial connections to memory. Yes, and two processors still can't access the same DIMM or DIMMs concurrently. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 02 10:27PM -0400 On 6/2/2016 5:41 AM, BartC wrote: > byte (and avoid the problems of both actually writing to it at the same > instant), effectively giving both access in that 1ns. > You just design in spare bandwidth. Sorry, Bart, it doesn't work that way. Even propagation times within the chip take longer than 0.5ns. You can go so far - but eventually you run into a brick wall called the laws of physics. And those can't be violated. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 02 10:29PM -0400 On 6/2/2016 8:46 AM, Scott Lurndal wrote: >> location. > I give up. You've missed the last two decades of hardware design > advancements. I build processors, do you? No, I haven't. I have been heavily involved in hardware for the last 25 years as a consultant (my original background was EE - I got into programming later). But you just don't understand the laws of physics and what the limits are. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Christian Gollwitzer <auriocus@gmx.de>: Jun 03 08:13AM +0200 Am 03.06.16 um 04:29 schrieb Jerry Stuckle: > years as a consultant (my original background was EE - I got into > programming later). But you just don't understand the laws of physics > and what the limits are. Translation: 25 years ago I started learning about current hardware architecture. Then I stopped and became a consultant. Thank you for this entertaining and educating thread. Christian |
Ian Collins <ian-news@hotmail.com>: Jun 03 08:27PM +1200 On 06/ 3/16 02:20 PM, Jerry Stuckle wrote: >> Under my desk once the adapters and cables arrive.... > Right. You're going to have 100Gbs at your desk, and be able to use it. > ROFLMAO! You are just too much, Ian! Ever seen the term "evaluate"? I never sell anything I don't use. -- Ian |
Ian Collins <ian-news@hotmail.com>: Jun 03 08:31PM +1200 On 06/ 3/16 02:24 PM, Jerry Stuckle wrote: >> Hardware RAID is so last century... > Wrong again. It is still quite heavily used - especially where huge > drives and reliability are required. Drive size is irrelevant and hardware RAID, being more complex than a simple HBA, decreases reliability. >>> even the fastest disks max out at around 175MB/s. >> .. each. > Yup, and even 8 of them together are far short of your claims of 40GBs. I made no such claim. -- Ian |
Ian Collins <ian-news@hotmail.com>: Jun 03 08:33PM +1200 On 06/ 3/16 02:25 PM, Jerry Stuckle wrote: > There is only one address bus and one data bus on the chip. If each CPU > had a separate bus, they wouldn't be able to access the same memory. > You'd just have multiple computers in one box. NUMA -- Ian |
BartC <bc@freeuk.com>: Jun 03 12:25PM +0100 On 03/06/2016 03:27, Jerry Stuckle wrote: >> You just design in spare bandwidth. > Sorry, Bart, it doesn't work that way. Even propagation times within > the chip takes longer than 0.5ns. My figures were made up to illustrate my point. But even going with them, you can surely create a memory system that can access 1024 bytes in 512ns? That will then average 0.5ns per byte (which is only 2GB/sec after all.) -- Bartc |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 03 07:54AM -0400 On 6/3/2016 4:27 AM, Ian Collins wrote: >> Right. You're going to have 100Gbs at your desk, and be able to use it. >> ROFLMAO! You are just too much, Ian! > Ever seen the term "evaluate"? I never sell anything I don't use. Yup, I understand "evaluate". And you're actually stoopid enough to think you can "evaluate" a 100Gbps link at your desk. You need to go back to sales. You know nothing about engineering. But then you've already admitted you're a troll. And a stupid one at that. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 03 07:57AM -0400 On 6/3/2016 4:31 AM, Ian Collins wrote: >> drives and reliability are required. > Drive size is irrelevant and hardware RAID, being more complex than a > simple HBA, decreases reliability. Once again you show your total ignorance. Let's see you put together a 500GB RAID out of 5MB disks. Also, hardware RAID is still more reliable than SSDs. But you have no engineering background, and have no idea what you're talking about. That's what you've proven time and time again. >>> .. each. >> Yup, and even 8 of them together are far short of your claims of 40GBs. > I made no such claim. And you're still an admitted troll - and a very stupid one. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 03 07:59AM -0400 On 6/3/2016 4:33 AM, Ian Collins wrote: >> had a separate bus, they wouldn't be able to access the same memory. >> You'd just have multiple computers in one box. > NUMA You need to learn some engineering before you make an even bigger fool of yourself - like what NUMA means. But with some of the things you've said, that's pretty hard. After all, you're just an admitted troll, and a pretty stupid one at that. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 03 08:00AM -0400 On 6/3/2016 7:25 AM, BartC wrote: > But even going with them, you can surely create a memory system that can > access 1024 bytes in 512ns? That will then average 0.5ns per byte (which > is only 2GB/sec after all.) You need to learn some electronics before you make stupid claims like that. Having an 8kb wide data bus has its own problems. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 03 08:05AM -0400 On 6/3/2016 2:13 AM, Christian Gollwitzer wrote: > architecture. Then I stopped and became a consultant. > Thank you for this entertaining and educating thread. > Christian Translation: to be a successful consultant, I've had to keep up with advances in multiple technologies in order to deal with the designs and work with engineers in different areas. A successful consultant has to know more details about more technologies than an employee. Otherwise the consultant quickly finds himself replaced by someone who does know. Sorry, consultants aren't employees. We don't have the luxury of coasting along. And we need to know application of the technologies, not just theory as you learn in college. And you've just shown how little you know. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
BartC <bc@freeuk.com>: Jun 03 01:46PM +0100 On 03/06/2016 13:00, Jerry Stuckle wrote: >> is only 2GB/sec after all.) > You need to learn some electronics before you make stupid claims like > that. Having an 8kb wide data bus has its own problems. Memory access is typically faster than 512ns. SRAM for example might be 32 times faster, so the width is only 256 bits, not exceptional. DRAMs, meanwhile, have all sorts of tricks to clock data out at a faster rate than their access times. But if it worries you, scale the numbers to suit. -- Bartc |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 03 09:11AM -0400 On 6/3/2016 8:46 AM, BartC wrote: > drams have all sorts of tricks to clock data out at a faster rate than > the access times. > But if worries you, scale the numbers to suit. Nothing you do can "scale numbers to fit". Even a 256 bit wide data bus is huge. Learn something about digital electronic design. Then maybe you can discuss this intelligently. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
David Brown <david.brown@hesbynett.no>: Jun 03 03:19PM +0200 On 03/06/16 14:05, Jerry Stuckle wrote: > coasting along. And we need to know application of the technologies, > not just theory as you learn in college. > And you've just shown how little you know. I think I've figured out where Scott Adams gets his ideas! <http://dilbert.com/strip/2010-02-01> <http://dilbert.com/strip/2010-03-12> <http://dilbert.com/strip/2011-01-07> <http://dilbert.com/strip/2011-11-05> |
Jerry Stuckle <jstucklex@attglobal.net>: Jun 03 09:44AM -0400 On 6/3/2016 9:19 AM, David Brown wrote: > <http://dilbert.com/strip/2010-03-12> > <http://dilbert.com/strip/2011-01-07> > <http://dilbert.com/strip/2011-11-05> Yes, definitely from you, David. An incompetent manager who knows nothing about his job or the people who work for him. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
scott@slp53.sl.home (Scott Lurndal): Jun 03 02:37PM >Nothing you do can "scale numbers to fit". Even a 256 bit wide data bus >is huge. Consider an internal bus between the non-coherent devices and the memory subsystem that is 128 bits wide. At a clock speed of 1Ghz, the bus bandwidth is 16 Gbytes/second. A 40Gbit/sec ethernet controller sources and sinks a maximum of 8Gbytes/sec at full-duplex line rate which uses 50% of the bus, which leaves plenty for a second 40Gbit/sec controller or a dozen SATA controllers running full speed. Now consider the memory subsystem. The fastest server-grade DDR4 DIMM supports 4266 MT/s (million transfers/sec). Each transfer is 8-bytes wide, so the maximum bandwidth to a single DIMM is 34 GBytes/sec (which is 2x the bandwidth of the noncoherent bus). Now, consider that a modern server will have between 4 and 12 or more memory controllers, each with a dedicated channel to a single DIMM (or sometimes a pair of DIMMS with 128-bit wide bus, e.g. ChipKill). That gives a bandwidth to memory of 136Gbytes/sec to 4 Terabytes/sec for the system as a whole. So, the memory bandwidth is sufficient to support full line rate from multiple high-performance I/O devices along with dozens of cores running at 2.5Ghz. Elementary 21st century hardware design. Next gen DDR5 may have a 512-bit-wide parallel memory interface to increase memory bandwidth substantially (Hynix HBM, JEDEC JESD235). It's entirely likely that DDR5 may also have a high-speed serial interface instead of the current 64-bit parallel interface, which provides many benefits to the hardware designer (fewer package bumps, for example). |
Paul <pepstein5@gmail.com>: Jun 03 12:01AM -0700 I have a std::list and I want to maintain a collection of iterators to the list. I am looking for ways to express this design concept. One obvious way is: std::list<char> charList; std::vector<std::list<char>::iterator> itrs; However, I want to express the fact that the vector only contains iterators to charList; With the above code, itrs could contain iterators to any list<char> and I don't want that. The problem I (think I) solved is to identify the earliest occurring non-duplicate character in a stream of characters. If all characters repeat, null is returned. My code for this comes at the end of the post. Feel free to make any comments or suggestions. However, I'm working on algorithms and am not particularly concerned with stream handling. The stream handling is very crude here, but I'm not focusing on that aspect right now. Many thanks, Paul

#include<iostream>
#include<vector>
#include<list>

char onlineFirstNonRepeater(std::istream& stream)
{
    std::vector<int> occurred(256); // Have characters occurred previously or not?
    std::list<char> characters; // In the online case, these need deleting, hence a list.
    std::vector<std::list<char>::iterator> earliestPositions(256); // Recording the iterators in the above list
    char c;
    while(stream >> c)
    {
        // Index with an unsigned value so that chars above 127 don't give
        // a negative subscript on platforms where char is signed.
        const unsigned char uc = static_cast<unsigned char>(c);
        if(!occurred[uc])
        {
            characters.push_back(c);
            earliestPositions[uc] = characters.end();
            --earliestPositions[uc]; // Obtain corresponding iterator to first occurrence of c.
        }
        else if(occurred[uc] == 1) // Delete iterator
            characters.erase(earliestPositions[uc]);
        if(occurred[uc] < 2)
            ++occurred[uc];
    }
    return characters.empty() ? 0 : characters.front(); // The ones at the front occurred earlier.
}

int main()
{
    const char c = onlineFirstNonRepeater(std::cin);
    if(c)
        std::cout << c;
    else
        std::cout << "All characters repeat";
}
|
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 03 10:02AM +0200 On 03.06.2016 09:01, Paul wrote: > suggestions. However, I'm working on algorithms and am not > particularly concerned with stream handling. The stream handling is > very crude here, but I'm not focusing on that aspect right now. Just use a `std::map` to associate a first position with each non-repeating character. Note also that as given, using a `std::vector<int>` with the `int` used as a DIY `bool`, and the index representing a `char` value, as a way to emulate a `std::set<char>`, does not scale to Unicode characters. In both cases I'd use the `std::unordered_...` versions for efficiency, and in order to not mislead the reader that sorting is essential. > else > std::cout << "All characters repeat"; > } Cheers & hth., - Alf |
MikeCopeland <mrc2323@cox.net>: Jun 02 05:45PM -0700 Is there a way to truncate 1 or more characters from a constructed std::ostringstream? I have situations where I build a stream (data_value1, comma, data_value2, comma, etc.) and when complete I want to erase the last comma before I process the constructed variable. Currently, I convert the stream to a std::string and erase the last character, but that's clumsy. Can I do this while it's an ostringstream? Please advise. TIA --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus |
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 03 03:55AM +0200 On 03.06.2016 02:45, MikeCopeland wrote: > Currently, I convert the stream to a std::string and erase the last > character, but that's clumsy. > Can I do this while it's an ostringstream? Please advise. TIA You can change the logic used to put items into the ostream, like this:

int n_written = 0;
for( auto const& item : items )
{
    if( n_written > 0 ) { stream << ", "; }
    stream << item;
    ++n_written;
}

Cheers & hth., - Alf |