- maximum reliable double precision - 1 Update
- C++ may sometimes be *too* simple (to use) - 1 Update
| Ben Bacarisse <ben.usenet@bsb.me.uk>: May 12 12:07AM +0100

You've been hit by a bug in Google Groups. Apparently it sometimes drops the ++ from the group name, so you end up posting in comp.lang.c. I've copied my reply to comp.lang.c++ and I've set a followup-to header, so maybe other replies will end up where you expect them to be...

> The following program prints 2 values. Both of them are incorrect.
> 5.630500000000001 - precision digits10
> 84983.71643982800015 - precision digits10-1

I see you have been pointed to the well-known paper by Goldberg. It's excellent, so do read it, but it's way too detailed for answering most simple questions, and it's not easy to follow if you don't have the right background.

> int main()
> {
> double d = 1.1261 + 2.2522*2;

The most common pitfall for beginners is that most numbers you type cannot be represented in the machine's variables. You may know that C's "double" can represent about 15 to 17 significant decimal digits, so 1.1261, having only 5 significant digits, should be fine. It's not. Internally, the value is stored as a binary fraction. The closest representable value is 0x1.204816f0068dc in hex, which is about 1.126100000000000100897... Then 2.2522 will be off by a bit as well, and once you multiply that by two, you can see why you don't get the result you want.

The solution depends on what you are trying to do. You can squeeze a bit more precision out of long double, and some C implementations support even wider floating point types (usually in software). Others allow you to use decimal arithmetic internally, though that, too, has its limitations. Other software options include using an arbitrary precision arithmetic library or emulating the calculations you want using integers. This last option is messy but gives you a lot of control over how intermediate values are handled. -- Ben. |
| "Öö Tiib" <ootiib@hot.ee>: May 10 08:18PM -0700

On Monday, 10 May 2021 at 08:07:32 UTC+3, Juha Nieminen wrote:

> And I still maintain that only an astronomically minuscule fraction of
> projects out there, especially commercial projects, are going to go back
> to an existing project, profile it, find out the bottlenecks and fix them.

My experience is the opposite. Perhaps it depends on the problem domain. Only products that failed midway were never refactored later. For the rest, work to fine-tune and improve speed and/or resource efficiency was always planned and carried out as planned. |