- C++ 2017 -- win, lose and draw - 3 Updates
- Could this container I made have any use? - 1 Update
- Header-only C++, what is the point? In C this would be an obvious mistake - 3 Updates
- Best way to handle mathematical divide-by-zero case - 10 Updates
- runtime error free(); invalid next size (fast); - 5 Updates
Juha Nieminen <nospam@thanks.invalid>: Feb 07 07:04AM > https://www.gotquestions.org/God-is-in-control.html > The advent of the C++ Middleware Writer is further evidence > of G-d's sovereignty. In a conversation that has absolutely nothing to do with religion, you suddenly, out of the blue, throw in proselytizing. How about you stop being a retarded asshole? |
woodbrian77@gmail.com: Feb 07 11:25AM -0800 On Tuesday, February 7, 2017 at 1:04:34 AM UTC-6, Juha Nieminen wrote: > > of G-d's sovereignty. > In a conversation that has absolutely nothing to do with religion, > you suddenly, out of the blue, throw in proselytizing. I'm explaining that "luck" has nothing to do with it... many thieves around here... If I were lucky as a businessman, I wouldn't deserve the spoils of victory. Their many attempts over many years to derail my work/company have failed. So they resort to this sort of crap to make others think that those who work hard and risk everything they have don't deserve huge rewards. I'm blessed, not lucky. If people fall for their crap, they will feel justified in attempting to steal what G-d has given me. They have an impoverished, deep-pockets mentality. I hope you will realize that everyone has a right to defend themselves. Brian Ebenezer Enterprises - "Free at last, free at last. Thank G-d Almighty we are free at last." Martin Luther King, Jr. http://webEbenezer.net |
Daniel <danielaparker@gmail.com>: Feb 07 12:49PM -0800 > I hope you will realize that everyone has a right to defend > themselves. Don't worry about it, when you're willing to work for free, you can do anything that you like, you don't have to do what other people want you to do, and it doesn't matter if nobody wants to pay for it. Best regards, Daniel |
bitrex <bitrex@de.lete.earthlink.net>: Feb 07 03:38PM -0500 On 02/06/2017 03:25 AM, Öö Tiib wrote: > My impression of 'boost::any' however is negative from > every side: slow, inconvenient, fragments memory and > produces cryptic diagnostics. IIRC boost::variant classes need to be equality comparable and hashable, so I had to write some functions to implement that, and also some "visitor" classes. Maybe I was doing it wrong? |
Juha Nieminen <nospam@thanks.invalid>: Feb 07 07:13AM > I come from the C language. I have the impression that in C ++ the ideal > is "C ++ header only" libraries. Where did you get that impression? I have never heard of such an ideal. Many libraries with heavy use of templates may be header-only, mostly because of necessity. Some non-template libraries may also be header-only, but there seldom is any good reason for it (unless the library is really small), and it often only needlessly increases compilation times (sometimes to an extraordinary extent, when we are talking about humongous libraries with tens of thousands of lines of code). |
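[Editorial sketch.] One concrete reason template-heavy libraries end up header-only, as Juha notes: the compiler needs the full template definition at every point of instantiation, so a declaration in a header plus a definition in a .cc file generally does not work. A minimal illustration (the file name and the function name mymax are made up for this example):

```cpp
#include <cassert>

// Contents of a hypothetical mymax.h: the full definition, not just a
// declaration, must be visible wherever mymax<T> is instantiated --
// this is the "necessity" that forces such libraries to be header-only.
template <typename T>
T mymax(const T& a, const T& b) {
    return (b < a) ? a : b;  // requires only operator<, like std::max
}
```

A non-template function, by contrast, could live in a single .cc file and be linked in, which is Juha's point about there seldom being a good reason for header-only non-template libraries.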
woodbrian77@gmail.com: Feb 07 09:21AM -0800 On Tuesday, February 7, 2017 at 1:13:34 AM UTC-6, Juha Nieminen wrote: > because of necessity. Some non-template libraries may also be header-only, > but there seldom is any good reason for it (unless the library is really > small), The nature of the library and how it is (typically) used might be a factor also. My generated messaging and marshalling libraries are header-only. In my experience, the functions in the libraries are usually called once by an application. There's an exception to that where a message that conveys error information is called 3 times in one app. This thread got me thinking about my use of header-only libraries so I checked how much difference it would make size-wise if that function was in a .cc file. Doing so reduced the size of the application's text segment by 128 bytes. In this messaging context, it seems like most of the time I'll be able to get away with header-only libraries. But for my hand-written library code: https://github.com/Ebenezer-group/onwards I don't have that luxury. Brian Ebenezer Enterprises - "Everybody makes mistakes, A fault we all must share; But attitudes have changed with time: Too few of us now care." Art Buck http://webEbenezer.net |
woodbrian77@gmail.com: Feb 07 10:15AM -0800 > a factor also. My generated messaging and marshalling libraries > are header-only. In my experience, the functions in the libraries > are usually called once by an application. There's an exception to s/once/in one place/ > that where a message that conveys error information is called 3 times s/times/places/ > library code: > https://github.com/Ebenezer-group/onwards > I don't have that luxury. Brian Ebenezer Enterprises http://webEbenezer.net |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 06 11:49PM On 06/02/2017 22:46, Robert Wessel wrote: > machine, you may not be able to do anything useful after it happens, > since your program's state will be mangled. The alternative is then to > insert a test for zero before every divide, which might be very slow. You don't have to test before every divide: in this case one simply has a pre-condition check at the start of the function (an empty set throws an exception). /Flibble |
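[Editorial sketch.] The precondition style Flibble describes might look like the following; the function name average() and the exception message are illustrative, not from the thread:

```cpp
#include <cassert>
#include <numeric>
#include <stdexcept>
#include <vector>

// Check the precondition once at the top of the function, so the
// division below can never see a zero count.
double average(const std::vector<double>& v) {
    if (v.empty())
        throw std::invalid_argument("average(): empty set");
    return std::accumulate(v.begin(), v.end(), 0.0) /
           static_cast<double>(v.size());
}
```

Callers that can guarantee a non-empty set pay only one cheap branch; callers that cannot get a well-defined exception instead of undefined behaviour.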
Manfred <mx2927@gmail.com>: Feb 07 12:58AM +0100 On 02/06/2017 11:25 PM, JiiPee wrote: > other languages when it comes to arithmetic. It also avoids the problems > that occur on heavily pipelined architectures where events such as > divide by zero are asynchronous." Thanks for the quote. I have two comments about this: 1) Apparently Bjarne is referring here to exceptions thrown by the language itself (or the standard library), and he explains why they chose not to have the language throw a division-by-zero exception - in which case the argument of matching the behaviour of other languages (including C) is a strong one. In the average() example I would explicitly throw an exception if the set is empty; I would not execute the division by zero, or assume that the division itself throws an exception. 2) In the general case of floating point arithmetic, division by zero would have some real-valued denominator expression that can evaluate to zero or any value arbitrarily close to it. In the average() example the denominator is an integer value, and this is a relevant difference. As I wrote earlier, when there is a real-valued denominator that can evaluate to zero or any value close to it, quite often (especially when the expression models a physical process) if you happen to have the expression evaluate to 0/0 (which would yield a NaN) you can in fact refactor the expression so as to have a well-defined form over your problem domain. If you happen to get nonzero/0, you may either choose different variables, or rely on the processor or math library handling of +/-INF if this suits your problem. This is to say that with real-valued floating point arithmetic, you have significant alternatives for handling (or preventing) division by zero before you treat it as an error condition. Obviously all of this must be studied in the specifics of the problem at hand. In my experience division by zero is not the hardest part of floating point arithmetic. 
It is actually much easier than finite precision and rounding errors, which can result, e.g., in DBL_EPSILON being returned where a zero was expected. That can be much trickier. |
JiiPee <no@notvalid.com>: Feb 07 12:43AM On 06/02/2017 23:58, Manfred wrote: > In the average example() I would explicitly throw an exception if the > set is empty, I would not execute the division by zero, or anyway > assume that the division itsels throws an exception. What if the average is calculated 10 million times per second? Would you still do the check each time? But isn't this a similar issue to vector's v.at(2) versus v[2]? |
David Brown <david.brown@hesbynett.no>: Feb 07 09:22AM +0100 On 07/02/17 01:43, JiiPee wrote: > what if the average is calculated 10 million times per second? would you > still do the check each time? > But isn't this a similar issue to vector's v.at(2) versus v[2]? Do you prefer correct code, or code that is a fraction of a percent faster but wrong? It is really quite simple, and I cannot understand why you are taking so long to get the hang of it. Let me give you the /one/ rule that actually matters: * Do not divide by zero - it is a mistake in the code. * OK? Now that you have that as a rule, you can decide the best way to be sure the rule is not broken. That will depend on the circumstances - what code you have, who is going to call the code, who might make the mistake of triggering a division by zero, and what you can do about it. Options include everything from ignoring it (because if some moron calls your function with an empty set, it is /their/ fault - and you don't want to slow down the code for competent users), returning fixed values, throwing an exception, up to automatically sending out an email to the developers with a deviation report. The same applies to /all/ other undefined behaviour in the code (such as accessing an array out of bounds) and all other kinds of errors. Your target is to avoid them happening. If you think an error /might/ happen, then you find ways to detect the situation and deal with it as best you can. |
"Öö Tiib" <ootiib@hot.ee>: Feb 07 01:00AM -0800 On Tuesday, 7 February 2017 02:43:43 UTC+2, JiiPee wrote: > > assume that the division itself throws an exception. > what if the average is calculated 10 million times per second? would you > still do the check each time? Sure. It is dirt cheap compared to everything else involved. * Do we really have tens of millions of little vectors? * Why do we produce these at such a rate? * Are we averaging the same (smaller) set of vectors over and over again? * Do we average every vector's contents after every little change to it? And so on. Every case imaginable means that we have some design issue that likely allows us to improve the performance ten times *without* removing that check. ;) > But isn't this a similar issue to vector's v.at(2) versus v[2]? An out-of-bounds index may be received from some dirty and potentially malicious source (an external file, user input, code from a competitor's team). In the rest of the cases, an out-of-bounds index means a programming error, and in most of those cases we likely should not continue whatever we were doing. The need to average nonexistent values, however, may be normal - for example, readings from some sensor. Checking that we have received any readings from the sensor before we average them makes perfect sense; continuing to work without readings from (one?) sensor may also make perfect sense. |
scott@slp53.sl.home (Scott Lurndal): Feb 07 01:22PM >Stroustrup says, in "The Design and Evolution of C++" (Addison Wesley, >1994), "low-level events, such as arithmetic overflows and divide by >zero, are assumed to be handled by a dedicated lower-level mechanism In particular, hardware detection and SIGFPE generation. |
scott@slp53.sl.home (Scott Lurndal): Feb 07 01:24PM >> It also avoids the problems that occur on heavily pipelined >> architectures where events such as divide by zero are asynchronous. >I wonder what does this mean? There were processor architectures in existence where exceptions were "imprecise", i.e. they couldn't be associated with the instruction that caused the exception. Modern pipelined architectures (x86, x86_64, arm64) all require precise (synchronous) exceptions for the most part, but there are still behaviors that are asynchronous (for example, detection of memory ECC failures on stores). |
Ben Bacarisse <ben.usenet@bsb.me.uk>: Feb 07 02:06PM > On 07/02/17 01:43, JiiPee wrote: <snip> >> what if the average is calculated 10 million times per second? would you >> still do the check each time? <snip> > Let me give you the /one/ rule that actually matters: > * Do not divide by zero - it is a mistake in the code. * > OK? I don't think that's a good universal rule. It is very often a good rule, and if you are writing portable C it is an essential rule, but there are times when you can rely on behaviour outside of the C++ standard. In those cases writing simpler code and letting IEEE floating point take care of the NaNs and the Infinities might be the way to go. <snip> -- Ben. |
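[Editorial sketch.] Ben's "let IEEE floating point take care of it" alternative could look like the following. This assumes the platform actually implements IEC 60559 (true on x86/x86_64 and ARM, but not guaranteed by the C++ standard, which is exactly the portability caveat Ben raises); the function names are made up for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Divide unconditionally and classify the result afterwards. Under
// IEEE 754, 0.0/0.0 quietly yields NaN rather than trapping.
double average_ieee(const std::vector<double>& v) {
    double sum = std::accumulate(v.begin(), v.end(), 0.0);
    return sum / static_cast<double>(v.size());  // empty set -> 0.0/0.0 -> NaN
}

// Detect the problem after the fact instead of checking before.
bool have_average(const std::vector<double>& v, double& out) {
    out = average_ieee(v);
    return !std::isnan(out);
}
```

As David Brown points out later in the thread, this is no longer "dividing by zero" in the undefined-behaviour sense: it is deliberately using the defined NaN semantics of the floating point implementation.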
Manfred <noname@invalid.add>: Feb 07 03:31PM +0100 On 2/7/2017 1:43 AM, JiiPee wrote: >> assume that the division itsels throws an exception. > what if the average is calculated 10 million times per second? would you > still ldo the check eatch time? Yes, unless in the program by some means it can be guaranteed that the set can /never/ be empty. As others have already replied, ensuring a correct result is more important than saving a few (or many) clock cycles: if you want to save processing time, you have to do so in such a way that still guarantees correct results. > But isnt it this a similar issue to vectors v.at(2) against v[2]? It is a different context (out of range is not division by zero). Anyway for vector elements the designers of the language decided to provide two access methods, only one of which guarantees range checking. Obviously this implies that if you want to use the [] form (unchecked) you /need/ to ensure that the vector will /never/ be accessed out of range. IMHO the rationale for choosing between the two is not speed, it is whether you can /guarantee/ that out-of-range does not happen. If it is so, the library gives you a way to save a few machine code instructions. OTOH, if you cannot guarantee this and still want the [] syntax, Bjarne gives explicit examples on how to implement a range-checked [] operator. |
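[Editorial sketch.] The range-checked [] operator Manfred refers to (Stroustrup shows variants of this wrapper in his books) might be sketched as below; Checked_vector is a hypothetical name, not a standard or thread-provided class:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// A vector whose operator[] behaves like at(): out-of-range access
// throws std::out_of_range instead of being undefined behaviour.
template <typename T>
class Checked_vector : public std::vector<T> {
public:
    using std::vector<T>::vector;  // inherit all std::vector constructors
    T& operator[](std::size_t i) {
        if (i >= this->size())
            throw std::out_of_range("Checked_vector: index out of range");
        return this->data()[i];
    }
};
```

This keeps the familiar v[i] syntax while restoring the guarantee that at() provides, which matches Manfred's point that the choice between the two is about whether you can guarantee in-range access, not about saving cycles.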
David Brown <david.brown@hesbynett.no>: Feb 07 03:58PM +0100 On 07/02/17 15:06, Ben Bacarisse wrote: > those cases writing simpler code and letting IEEE floating point take > care of the NaNs and the Infinities might be the way to go. > <snip> There is always going to be scope for making things more complicated to suit particular cases - C and C++ are flexible languages, and compilers often have a range of options or features that go beyond the bare requirements of the standards. However, I think unless you are an expert who is aware of all the subtleties and has a solid understanding of the target platform(s), compiler(s) and compiler options, then it /is/ a good rule. And a poster who wonders if it is okay to skip a check purely on the basis of the number of times a function is to be called, is not such an expert (in this particular area). Actually, I'd go beyond that and keep my rule even for experts - dividing by zero is a mistake (unless you are dealing with projective planes and other fun maths). Relying on features such as IEEE floating point NaNs, infinities, or exceptions is one way of dealing with such mistakes. Sometimes it is more efficient to do your calculations hoping there is no problem such as division by zero, with the intention of detecting the problem afterwards, rather than checking for problems /before/ doing the calculation. That's fine - but you are not actually dividing by zero any more. You are doing speculative calculations that may be aborted by exceptions, or calculations using functions specified as returning NaN for a denominator of 0 or the quotient otherwise. In other words, you are using specified, defined behaviour - not the undefined behaviour of a division by zero. |
thebtc@hotmail.com: Feb 06 07:16PM -0800

#include <iostream>
#include <fstream>
#include <cmath>
#include "params.h"

// Define and allocate arrays
double* rho_prev = new double[npnts]; // time step t-1
double* rho = new double[npnts];      // time step t
double* rho_next = new double[npnts]; // time step t+1
double* x = new double[npnts];        // x values

// Open output file
std::ofstream fout(outfilename);

// Declare function for timesteps
void timestep();

// Function for main computation
void myLoops() {
    // some stuff

    // Take timesteps
    timestep();

    // Close file
    fout.close();
    std::cout << "Results written to " << outfilename << std::endl;

    // Deallocate arrays
    delete[] rho_prev;
    delete[] rho;
    delete[] rho_next;
    delete[] x;
}

// Function to take timesteps
void timestep() {
    // some stuff
}

So I am having a problem with this code. I included "some stuff", which really is a bunch of loops that I have already tested; there's nothing wrong with them. In fact, this entire code (which is a piece of a larger code I modularized) doesn't show any problems when compiling. It compiles just fine. The problem is, when I run the executable I get a bunch of garbage like "*** glibc detected *** ./wave1dmod: free(): invalid next size (fast): 0x0000000000733010 ***" and some other nonsense after that. I don't know how to fix this. I know it most likely has to do with the double* arrays I put at the top, but I don't know what to do with them so that this error stops appearing. Please help. |
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 07 06:58AM +0100 > //some stuff > } > So I am having a problem with this code. Well, to get more specific help you need to post a reproducible example. Some easily recognizable issues with the above: • Global variables. They allow distant parts of the code to affect each other, in ways that one can't grok. Nothing can be relied on where these are involved. In your case perhaps deallocation is performed multiple times but allocation is performed just once, maybe, but even if that's the case there can still be many hidden problems: just get rid of all globals. • Raw `new` and `delete`. It's not smart to try to reinvent the automobile, trying to get everything right, including the precision-machined engine parts and the very special alloys used there. Instead just buy, borrow or steal an automobile that works. The `std::vector` one is not so sexy but it's dependable, works all right and is essentially free. Cheers & hth., - Alf |
Louis Krupp <lkrupp@nospam.pssw.com.invalid>: Feb 07 12:59AM -0700 >double* rho = new double[npnts]; // time step t >double* rho_next = new double[npnts]; // time step t+1 >double* x = new double[npnts]; // x values Where is npnts assigned a value? If it's a global variable and you haven't assigned it a value before you do the allocation, it will be zero, you'll allocate a bunch of zero-length arrays, and the first time you try to store anything into one of them you'll trash the memory links that make dynamic allocation possible and you'll get the error you describe below: >So I am having a problem with this code. I included "some stuff", which really is a bunch of loops that I have already tested and there's nothing wrong with them. In fact, this entire code (which is a piece of a larger code I modularized) doesn't show any problems when I'm compiling. It compiles just fine. The problem is, when I am running the executable I get a bunch of garbage like "*** glibc detected *** ./wave1dmod: free(): invalid next size (fast): 0x0000000000733010 ***" and some other nonsense after that. I don't know how to fix this. I know it most likely has to do with the double* arrays I put at the top but I don't know what to do with them so that this error stops appearing. Please help. As Alf said, there are easier, safer ways to do this in C++. Louis |
Paavo Helde <myfirstname@osa.pri.ee>: Feb 07 10:39AM +0200 > double* rho_next = new double[npnts]; // time step t+1 > double* x = new double[npnts]; // x values > So I am having a problem with this code. I included "some stuff", which really is a bunch of loops that I have already tested and there's nothing wrong with them. In fact, this entire code (which is a piece of a larger code I modularized) doesn't show any problems when I'm compiling. It compiles just fine. The problem is, when I am running the executable I get a bunch of garbage like "*** glibc detected *** ./wave1dmod: free(): invalid next size (fast): 0x0000000000733010 ***" and some other nonsense after that. I don't know how to fix this. I know it most likely has to do with the double* arrays I put at the top but I don't know what to do with them so that this error stops appearing. Please help. The most probable cause for such errors is that your code contains buffer overruns (writing behind the end of the allocated arrays). This is a typical bug of C-style code like here (replacing malloc with new[] is just syntactic sugar). In C++ we have ways to write safer code. Use std::vector and e.g. push_back for populating them. You might also consider using at() instead of [], this will turn out-of-array access from UB into a well-defined exception. hth Paavo |
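[Editorial sketch.] Paavo's and Alf's advice applied to the original post's arrays might look like the following. Here npnts becomes a parameter so the sizes are known at the point of allocation; the variable names follow the post, but the function is a simplified stand-in for the real timestep loops:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// std::vector replaces the raw new[]/delete[] pairs: no globals, no
// manual deallocation to get wrong, and at() turns an overrun into a
// catchable std::out_of_range instead of silent heap corruption.
double run_timesteps(std::size_t npnts) {
    std::vector<double> rho_prev(npnts), rho(npnts), rho_next(npnts), x(npnts);
    for (std::size_t i = 0; i < npnts; ++i)
        rho.at(i) = static_cast<double>(i);  // at() bounds-checks every write
    return rho.at(npnts - 1);
}   // all four vectors free their storage here automatically
```

With the original raw arrays, an off-by-one in any of the loops writes past the allocation and corrupts malloc's bookkeeping, producing exactly the "invalid next size" abort at the later delete[]; with at(), the same bug surfaces immediately as an exception at the faulty index.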
scott@slp53.sl.home (Scott Lurndal): Feb 07 01:27PM >arbage like "*** glibc detected *** ./wave1dmod: free(): invalid next size = >(fast): 0x0000000000733010 ***" and some other nonsense after that. I don't= > know how to fix this. This error is caused because you're modifying data outside the bounds of the array (which steps on bookkeeping data kept by malloc/free). |