- two's complement idea - 10 Updates
- neoGFX - The Ultimate C++ GUI Library and App/Game Framework .. Coming Soon! - 5 Updates
- little problem - 1 Update
- Regal eagle / American cloud - 1 Update
| David Brown <david.brown@hesbynett.no>: Nov 10 11:52AM +0100

On 09/11/2019 15:24, Tim Rentsch wrote:
> of history. It isn't. It was a deliberate choice, made by smart
> people. It isn't just a coincidence that the semantics of
> wrapping works nicely in a variety of applications.

Wrapping in hardware was not a deliberate choice of semantics - it was a deliberate choice of the simplest, fastest and most efficient hardware, with little concern for the semantics.

There are three primary choices for defined behaviour of signed integer overflow - saturation, trapping, and wrapping. None of these matches mathematical integer behaviour, of course. Saturation is a natural and useful choice in many circumstances. Trapping (including NaN implementations) is a good way of seeing that something has gone wrong, so that you either have the right answer or know there has been an error. Wrapping has no logical use or point as the semantics of an operation. It is, however, a very convenient aid to multi-precision arithmetic.

Implementations of signed integer representations other than two's complement, such as ones' complement and sign-magnitude, do not typically have wrapping semantics - they will have trapping or saturation semantics, if they have defined semantics at all.

But in hardware, two's complement representation is the simplest and most efficient method. And if you have wrapping for addition and subtraction, it becomes easily expandable - a vital feature from the days of 8-bit ALUs being used for 16-bit, 32-bit and larger arithmetic, and still useful occasionally. This is related to the reason why in many CPUs there is a "carry" flag for addition that doubles as the "not borrow" flag for subtraction - it means you implement your subtraction "x - y" as "x + ~y + 1" (the +1 supplied through the carry input) and re-use almost all the hardware elements. Saturation and trapping cost hardware - wrapping is free. It would be the standard in hardware even if it didn't have the use for multi-precision arithmetic.

Paavo is right when viewed from the high-level software perspective - wrapping is not something people looked for, either for signed or unsigned arithmetic. For very low-level software, it /is/ useful in the case where you are building up larger arithmetic operations from smaller ones, and for certain kinds of comparisons - these are things that are mostly hidden by the compiler in a high-level language like C.
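[To make the multi-precision point concrete, a minimal C++ sketch (mine, not from the post) of building a wider addition out of narrower wrapping operations, the way chained 8-bit ALUs once did. The carry out of the low half is recovered purely from the wrapping behaviour of unsigned addition:]

#include <cstdint>
#include <cstdio>

struct u64parts { std::uint32_t lo, hi; };

u64parts add64(u64parts a, u64parts b)
{
    u64parts r;
    r.lo = a.lo + b.lo;                   // may wrap modulo 2^32
    std::uint32_t carry = (r.lo < a.lo);  // it wrapped iff the sum is smaller
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main()
{
    u64parts a{0xFFFFFFFFu, 0}, b{1, 0};
    u64parts r = add64(a, b);
    std::printf("%08X%08X\n", (unsigned)r.hi, (unsigned)r.lo);  // 0000000100000000
}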
| David Brown <david.brown@hesbynett.no>: Nov 10 01:17PM +0100

On 08/11/2019 21:08, Öö Tiib wrote:
> Yes, and for me it is fine if standard requires "x + 1 - 1" to trap
> if x + 1 does overflow. Compilers will add their fast math options
> in eye-blink anyway.

I am not at all keen on the idea of the standards defining certain behaviour and then compilers having flags to disregard those definitions - that is, I think, a terrible idea. If trapping were to be added to the standards, it would be much better if the standards offered a user-selectable choice (I suppose via pragmas).

Remember, features like gcc's "-ffast-math" do not instruct the compiler to ignore parts of the /C/ standard - they ignore parts of the IEEE floating point standards. This is a very different matter. As far as I know, "gcc -ffast-math" is no more non-compliant than "gcc".

>> I do not want that situation in integer code.
> Then there won't be.

I am entirely confident that trapping on overflow will never be required by the C or C++ standards anyway, so this is hypothetical. But it is feasible that there could be optional support defined (such as via a new library section in C++), and there I would want the consistency.

>> debugging code.
> It is not trapping math there but debugging tool that caught "undefined
> behavior". It does not help me on cases when I need trapping maths.

Trapping maths can be handled in many ways - killing the program with an error message is often useful in debugging, but as you say it is not useful for handling the overflow within the program. Throwing an exception, calling a trap handler, or returning a signalling NaN are other alternatives. My point was that I wanted the semantics of -fsanitize=signed-integer-overflow for detecting the overflow - how that overflow is then handled is another matter.

> So I have to choose what hack is uglier to write trapping or
> snapping myself manually and everyone else in same situation
> like me has to do same.

I am all in favour of choice, but there are a few things on which I disagree with you. First, you seem to want trapping to be the default in the standards - that is a very costly idea to impose on everyone else, as well as being rather difficult to specify and implement. Secondly, you seem to want this for all arithmetic. I'm fine with using trapping explicitly in code where it is particularly useful - that would work naturally with exceptions in C++. But I would want the lower-level code that is not going to overflow to be free from the cost of this checking. (Compilers might be able to eliminate some of the overhead, but they won't get everything.) Thirdly, you seem to think trapping is useful semantics in general - I disagree. There are times when it could be convenient in code where overflow is a realistic but unlikely scenario, but mostly it sounds like you are happy with releasing code that is buggy as long as the consequences of the bugs are somewhat limited. I don't like that attitude, and I don't think you do either - so there is maybe still something I don't understand here.

>> bonus it is more efficient.
> It is better only when I know that it can no way overflow because the
> values make sense in thousands while the variables can count in billions.

To me, that should /always/ be the case - though I am quite happy with values that only make sense up to a thousand in a variable that can count to a thousand. I always want to know the limits of my values, and there is no need to work with bigger sizes than needed.
> Yes and I want to turn the language full of specified undefined
> behaviors and contradictions and useless features and defects into
> programs where there are zero.

You can't. It won't work. (Well, you can probably deal with contradictions in the language, though I don't know what you are referring to here.) Languages /always/ have undefined behaviours. And there is rarely a benefit in turning undefined behaviour into defined behaviour unless that behaviour is correct.

If a program tries to calculate "foo(x)", and the calculation overflows and gives the wrong answer, the program is broken. Giving a definition to the overflow - whether it is wrapping, trapping, throwing, or saturating - does not make the program correct. It is still broken. Some types of defined behaviour can make it easier to find the bug during testing - such as error messages on overflows. Some make it harder to find the bugs, such as wrapping. Some make it easier to limit the damage from the bugs, such as throwing an exception. None change the fact that the program has a bug.

And if you look at the language, there are /loads/ of undefined behaviours. Integer overflows are just a tiny one, and one that has become a good deal less common since 32-bit int became the norm. C++ will still let programmers shoot themselves in the foot in all sorts of ways - worrying about the splinters in their fingers will not fix the holes in their feet.

> are put into my programs then I want the program to give
> answers that follow from those and when contradicting figures
> are put in then I want my programs to refuse in civil manner.

That is fair enough - and that is why you should check the figures that are put into the program, not the calculations in the good code that you have written.

>> - don't try to dumb down and water out the language.
> A language that does not have simple features nor convenient ways to
> implement such is dumbed down by its makers, not by me.

No language covers everything that all programmers want - there are always compromises.

> to it" and when I write "inline" then they read that "I want to
> define it in header file". English is yes my 4th language but did I
> really write that?

Yes, you did. At least, that's what the definition of the language says. The words used in a programming language don't always correspond to their normal meanings in English. Nor do they always keep the same meaning as they move from compiler extensions through to standards. It is unfortunate, but true - and it is the programmer's job to write in a way that is clear to the reader /and/ precise in the language. (Incidentally, you write better English than many people I know who have it as their first language. And it is not as if Estonian is close to English!)

> Sometimes I want to write code where overflow is not my programming
> error but allowable contradiction in input data. There I want to throw or
> to result with INF on overflow.

That should be handled by C++ classes that have such semantics on overflow - which you use explicitly when you want that behaviour. I'd be happy to see a standard library with this behaviour.

> throw or to result with NaN when kilograms were requested
> to add to meters or to subtract from degrees of angle or diameter
> of 11-th apple from 10 is requested.

That is a completely different matter. Personally, I would not want an exception or a NaN here - I would want a compile-time error. Again, this can be handled perfectly well in C++ using strong class types.
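[A hypothetical sketch of such strong class types - the names (Quantity, Kilograms, Metres) are made up for the example, not taken from any standard library. Mixing units becomes a compile-time error rather than a runtime NaN or exception:]

#include <iostream>

template <typename Tag>
struct Quantity {
    double value;
    friend Quantity operator+(Quantity a, Quantity b) { return {a.value + b.value}; }
    friend Quantity operator-(Quantity a, Quantity b) { return {a.value - b.value}; }
};

struct KilogramTag {};
struct MetreTag {};
using Kilograms = Quantity<KilogramTag>;
using Metres    = Quantity<MetreTag>;

int main()
{
    Kilograms flour{2.0}, sugar{0.5};
    Metres    rope{10.0};

    Kilograms total = flour + sugar;    // fine: same unit
    std::cout << total.value << " kg\n";
    // Kilograms oops = flour + rope;   // compile-time error: no matching operator+
}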
And again, it would be nice to see such libraries standardised. In both these cases, I think concepts will make the definition and use of such class template libraries a good deal neater.

> or trapping failures depends on if I want to process the whole data
> regardless of inconsistencies or constraint violations in it or to
> interrupt on first such problem in data. Don't such use-cases make sense?

Yes, which is why it is far better to deal with it using explicit choices of the types used, rather than making it part of the language.

>> reasons I gave above about optimising "x + 1 - 1".
> Currently there are outright nothing. Integer math is in same state (or
> even worse) than it was in eighties.

Which is fine for many people. And since hardware assistance would give you very little benefit for what you want here, it is not surprising that it does not exist. (Hardware assistance could be a big help in a wide variety of other undefined behaviour, bugs, and low-level features in C and C++, but that is a different discussion.)

> eyeballs to bleed (sorry if you happen to be fan of those
> __builtin_add_overflow things) but if there was any better way to have
> refusing math I would not touch wrapping math at all.

I don't think anyone will tell you __builtin_add_overflow leads to pretty code! But you write these sorts of things once, and hide them away in a library so that the use of the functions is simple and clear.
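[For what "hide them away in a library" might look like, a small sketch assuming a GCC- or Clang-compatible compiler; the helper name checked_add is made up for the example. It wraps __builtin_add_overflow in a function that throws:]

#include <climits>
#include <iostream>
#include <stdexcept>

int checked_add(int a, int b)
{
    int result;
    if (__builtin_add_overflow(a, b, &result))   // returns true if the add overflowed
        throw std::overflow_error("integer overflow in checked_add");
    return result;
}

int main()
{
    std::cout << checked_add(1, 2) << '\n';      // 3
    try {
        checked_add(INT_MAX, 1);                 // overflows, so it throws
    } catch (const std::overflow_error& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}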
| "Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Nov 10 02:37PM +0100 On 10.11.2019 13:17, David Brown wrote: > to ignore parts of the /C/ standard - they ignore parts of the IEEE > floating point standards. This is a very different matter. As far as I > know, "gcc -ffast-math" is no more non-compliant than "gcc". Possibly you're talking about C only, I'm not sure of the wider thread context here. Sorry for jumping in. However, with C++ the `-ffast-math` option for g++ is in conflict with the standard because `numeric_limits` then still reports IEEE 754 compliance, while e.g. NaN comparisons are then non-compliant. I remember once engaging in a heated (!) debate with Gabriel Dos Reis about this, so it's possible to have different opinions about it, even among knowledgeable people, for certain. IMO the fast and the compliant floating point types should at least by default be differently named types, not the same type with different behavior depending on an option. So I regard it as a quite nasty design bug. - Alf |
| Paavo Helde <myfirstname@osa.pri.ee>: Nov 10 04:31PM +0200

On 9.11.2019 16:24, Tim Rentsch wrote:
> of history. It isn't. It was a deliberate choice, made by smart
> people. It isn't just a coincidence that the semantics of
> wrapping works nicely in a variety of applications.

The hardware which I am familiar with properly sets the carry and overflow flags for arithmetic operations. In the case of multiplication it also produces a result twice as wide as the operands, in order not to lose information. So at the hardware level the results are always well-defined and complete, which is very fine. The calling program can study the flags and decide what to do about them.

The problem with C is that they decided the best way to deal with this was just to ignore the overflow flag and the upper half of the multiplication result. Not sure about the PDP-11 itself, but they had to think about other hardware too, and at some point they decided that the best way to deal with the problem was just to ignore all the extra information that current or future CPUs might provide.

An imaginary conversation:

A: BTW, what should we do with arithmetic overflow? There are overflow flags and extended precision results in some hardware that we could use.
B: Because of speed and simplicity, we just ignore it. Each extra opcode would slow us down!
A: But in case of multiplication, we might lose half of the result!
B: Just ignore it.
A: But what do we write down in the specs? We can't just say we carry out a hardware operation and then ignore everything except the first result register.
B: Let's see... Isn't there a nice name for that behavior? Yep, it's called wrapping! Write this down in the specs!
A: But, but, with signed integers there are different representations, so that on some hardware it does not actually wrap!
B: Ouch, my head is hurting. Just declare it undefined then!
| David Brown <david.brown@hesbynett.no>: Nov 10 05:08PM +0100

On 10/11/2019 14:37, Alf P. Steinbach wrote:
> Possibly you're talking about C only, I'm not sure of the wider thread
> context here.
> Sorry for jumping in.

Feel free!

> However, with C++ the `-ffast-math` option for g++ is in conflict with
> the standard because `numeric_limits` then still reports IEEE 754
> compliance, while e.g. NaN comparisons are then non-compliant.

Fair enough - that's a good point.

> I remember once engaging in a heated (!) debate with Gabriel Dos Reis
> about this, so it's possible to have different opinions about it, even
> among knowledgeable people, for certain.

I could imagine a difference of opinion here - "gcc -ffast-math" has IEEE standard formats but not IEEE standard operations.

> IMO the fast and the compliant floating point types should at least by
> default be differently named types, not the same type with different
> behavior depending on an option.

The compiler does require an explicit option here for non-IEEE operations - it is (barring bugs or missing features - I don't know the details) IEEE standard by default.

There is always a balance on these things. "-ffast-math" can be very much faster than IEEE standard behaviour on many platforms, and is perfectly acceptable for most people's use of floating point. Why should the majority have slower code (and perhaps worse accuracy) just because some few scientific users need identical results consistent across platforms?

I'd actually say my preferred solution is that strict IEEE types should be different, specific types that programmers can pick explicitly, while the "-ffast-math" equivalent could be the default. "numeric_limits" could differentiate between the representation and the operations (the same should apply to signed integers, differentiating between representation and wrapping overflow behaviour), and implementations should set the values here according to what they guarantee regardless of flags.

> So I regard it as a quite nasty design bug.

"Bug" implies a mistake. This is a design choice, on which people disagree - not a bug.
| Manfred <noname@invalid.add>: Nov 10 06:34PM +0100

On 11/10/19 5:08 PM, David Brown wrote:
> should the majority have slower code (and perhaps worse accuracy) just
> because some few scientific users need identical results consistent
> across platforms?

It's not that simple (and -ffast-math does not improve accuracy), see below.

> I'd actually say my preferred solution is that strict IEEE types should
> be different, specific types that programmers can pick explicitly,
> while "-ffast-math" equivalent could be the default.

The fact is that -ffast-math can cause loss of precision, so it can introduce nasty bugs in existing code that assumes IEEE conformance. Since managing accuracy errors is among the most common causes of floating point bugs, the majority of programmers are better off without -ffast-math. If you want the extra speed, you need to know what you are doing (and that it doesn't come for free), so you'd better ask for it explicitly.

By the way, with floating point there are a number of techniques to be considered that improve efficiency before tweaking compiler flags.
| Manfred <noname@invalid.add>: Nov 10 06:37PM +0100

On 11/9/19 3:13 PM, Tim Rentsch wrote:
>> than being handled as an error by the compiler.
> I absolutely concur. Your other comments in the thread also are
> right on the money. I know. Frankly I am rather baffled by the comments
> of those on the other side of what you've been saying.

Not a surprise, though. The internet is this too.
| Manfred <noname@invalid.add>: Nov 10 07:02PM +0100

On 11/10/19 3:31 PM, Paavo Helde wrote:
> lose information. So at the hardware level the results are always
> well-defined and complete, which is very fine. The calling program can
> study the flags and decide what to do about them.

The main difference between the hardware and the language is that 1) the former can count on accessory entities that are not part of a variable (the flags register) and 2) it does not have to fit within the semantics of a type system, so that it is OK for WORD op WORD to yield DWORD.

If you go into the specifics, you may see that:

1) Flags information would need to be handled by the compiler, with the possible result of littering the generated code with branches for every integer expression.

2) Having an arithmetic operator (multiplication) yield a different type than the operands would pose significant complications to the language - if you really want this, try mimicking it with some C++ wrapper classes and operators thereof, and see where you go.

The solution, when the full range is needed, is to tell the compiler that this is what you want by casting the /operands/. In this way the compiler can deliver the result you want (while optimizing away the extra space for the operands themselves).

> think about other hardware too, and at some point they decided that the
> best way to deal with the problem was just to ignore all the extra
> information that current or future CPUs might provide.

See above. Moreover, given the impact of the choice, and its business consequences, it is hard to believe that this was just sloppiness.

> A: But, but, with signed integers there are different representations,
> so that on some hardware it does not actually wrap!
> B: Ouch, my head is hurting. Just declare it undefined then!

This is fiction.
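[The "cast the /operands/" advice, sketched out: casting one operand to a wider type makes the whole multiplication happen at the wider width, so the upper half of the product is not thrown away:]

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t a = 0xFFFFFFFFu, b = 0xFFFFFFFFu;

    std::uint32_t narrow = a * b;                 // wraps: only the low 32 bits survive
    std::uint64_t wide   = std::uint64_t{a} * b;  // full 64-bit product

    std::cout << std::hex;
    std::cout << "narrow: " << narrow << '\n';    // 1
    std::cout << "wide:   " << wide   << '\n';    // fffffffe00000001
}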
| Ian Collins <ian-news@hotmail.com>: Nov 11 07:48AM +1300

On 11/11/2019 06:34, Manfred wrote:
> that it doesn't come for free) so you'd better ask for it explicitly.
> By the way, with floating point there are a number of techniques to be
> considered that improve efficiency before tweaking compiler flags.

With ARM processors which use NEON, -ffast-math is required if you want to vectorise floating point operations. This is because the hardware isn't IEEE compliant. This was a bit of a pain for us, which resulted in lots of extra testing...

-- Ian.
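[The kind of code affected is a plain floating point reduction like the one below - a general sketch, not Ian's actual code. Strict IEEE semantics fix the order of the additions into a serial dependency chain, so the compiler cannot keep several partial sums in NEON (or SSE/AVX) lanes unless -ffast-math, or at least -fassociative-math, licenses the reassociation:]

#include <cstddef>

float sum(const float* p, std::size_t n)
{
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        s += p[i];   // each add depends on the previous one under IEEE rules
    return s;
}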
| David Brown <david.brown@hesbynett.no>: Nov 10 09:31PM +0100

On 10/11/2019 18:34, Manfred wrote:
>> explicitly, while "-ffast-math" equivalent could be the default.
> The fact is that -ffast-math can cause loss of precision, so it can
> introduce nasty bugs in existing code that assumes IEEE conformance.

-ffast-math /can/ cause loss of precision, but it can also improve precision. If you write "(x + 1e50) - 1e50", -ffast-math will let the compiler generate code for "x", while IEEE conformance requires the addition and subtraction, losing all the precision of x (assuming x is small compared to 1e50). The key point is that it doesn't keep the consistency required by IEEE.

And of course if you have code that assumes IEEE conformance and needs it for correctness, and you turn off conformance, then you risk a result that doesn't work. You can't use -ffast-math unless you know the code doesn't need IEEE conformance. So it /is/ that simple, and that obvious.

And while I say this is /my/ preferred solution, I can quite happily appreciate that other people prefer something else. (And realistically, I know it takes a huge incentive before the status quo is changed. Even if it turns out, as I think is the case, that most programs would run correctly and more efficiently with -ffast-math, that does not mean the default is likely to change.)

> Since managing accuracy errors is among the most common causes of
> floating point bugs, the majority of programmers are better off without
> -ffast-math.

I don't think most programs care unduly about the accuracy of floating point calculations. I think most calculations in floating point have low enough accuracy needs that you can use doubles and accept that the answer is approximate but good enough. And I think most programmers are totally unaware of the details of IEEE and how to use the features it gives in getting slightly more control of error margins. But I freely admit that is gut feeling, not based on any kind of surveys, statistics, etc.

> If you want the extra speed, you need to know what you are doing (and
> that it doesn't come for free) so you'd better ask for it explicitly.

It can make a very large difference, depending on the target processor and the type of code. (Many small processors have floating point hardware that supports "ordinary" floats, but does not handle infinities, NaNs, denormals, etc. -ffast-math can generate code that runs in a few hardware instructions, while without it you need library calls and hundreds of instructions. Yes, it can be /that/ big a difference - and that is personal experience, not gut feeling.)

> By the way, with floating point there are a number of techniques to be
> considered that improve efficiency before tweaking compiler flags.

Of course.
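[The "(x + 1e50) - 1e50" example from the post above as a runnable snippet: under strict IEEE evaluation the large intermediate swallows x, so the result is 0.0 rather than x, while -ffast-math may simplify the whole expression back to x:]

#include <iostream>

int main()
{
    double x = 1.0;
    double y = (x + 1e50) - 1e50;
    std::cout << y << '\n';   // prints 0 under IEEE double arithmetic
}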
| Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Nov 10 01:22AM

Houston, we have automatic code generation! \o/

https://i.imgur.com/l2gp32h.png

/Flibble
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 09 06:24PM -0800 On 11/9/2019 5:22 PM, Mr Flibble wrote: > Houston, we have automatic code generation! \o/ > https://i.imgur.com/l2gp32h.png Really? If I give you some C code that generates a PPM image, can you compile it into, say, JavaScript and generate the same image, in the browser? If so, well done. |
| "Öö Tiib" <ootiib@hot.ee>: Nov 10 01:44AM -0800 On Sunday, 10 November 2019 04:24:21 UTC+2, Chris M. Thomasson wrote: > Really? If I give you some C code that generates a PPM image, can you > compile it into, say, JavaScript and generate the same image, in the > browser? If so, well done. That tool is already available <https://emscripten.org/> However it is likely better to build into WebAssembly instead of JavaSript, |
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 10 02:49AM -0800 On 11/10/2019 1:44 AM, Öö Tiib wrote: > That tool is already available <https://emscripten.org/> > However it is likely better to build into WebAssembly instead > of JavaSript, Indeed. Iirc, it can handle OpenGL and SDL2. |
| Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Nov 10 03:14PM

On 10/11/2019 02:24, Chris M. Thomasson wrote:
> Really? If I give you some C code that generates a PPM image, can you
> compile it into, say, JavaScript and generate the same image, in the
> browser? If so, well done.

The browser will not be a target for neoGFX; however, neoGFX will include a browser widget in a future version and a rich text (HTML) widget in v1.0.

/Flibble
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 09 06:17PM -0800 On 11/9/2019 7:38 AM, Bonita Montero wrote: > Wow, finally I've got it: [...] > { > *this = (T)xa; > } [...] Have you every heard of is_lock_free: https://en.cppreference.com/w/cpp/atomic/atomic/is_lock_free I might be misunderstanding you, but either your type is atomic or not. If not, you have to figure how you are going to treat it. *this = (T)xa; is a total hack. |
| woodbrian77@gmail.com: Nov 09 04:28PM -0800 On Thursday, November 7, 2019 at 3:34:00 PM UTC-6, Jorgen Grahn wrote: > It's analogous to netinet/tcp.h, udp.h and so on, so I guess that's > the right place. And it's what the RFC uses in its examples (although > it doesn't say anything about that being normative). I wonder what steps to take to get something done. I'd like to think that it wouldn't take years. Brian |