- std::hexfloat - 25 Updates
Daniel <danielaparker@gmail.com>: May 21 04:37PM -0700

On Saturday, May 18, 2019 at 8:28:00 AM UTC-4, Bo Persson wrote:

> So ss >> y would stop at the 'p' anyway.
> The committee didn't want to break old code reading "1.0p" as one double
> and one char, like in ss >> y >> ch.

What about this proposal, to expand the set of accepted characters to
include 'p' and 'P'?

https://timsong-cpp.github.io/lwg-issues/2381

It appears to be still open.

Daniel
|
Bart <bc@freeuk.com>: May 22 01:07AM +0100

On 21/05/2019 23:13, David Brown wrote:

> In other words, bugs in the auto-translator. The flaws are in the
> design and specification, rather than the implementation, but they are
> bugs nonetheless.

You can call an unwillingness to expend a huge, disproportionate effort
in overcoming C's many shortcomings for this purpose a bug if you like.

The source language has a simple, orthogonal type model, 64-bit-based,
which is very easy to superimpose on the simple hardware model of the
64-bit target you want to use.

But now introduce C between the two, which has a more complex, unwieldy,
not-quite-orthogonal type system, which is 32-bit-based even when the
final target is 64-bit, with its million and one quirks, and which
doesn't quite match that of the target language.

> It is fine as a target language, but you need to generate correct C
> code.

Which means what? So that there are 0 errors and 0 warnings no matter
what options somebody applies? How can something be judged correct or
not when that measure depends on which options - which can be out of
your control - are applied?

> Other people who write code generators or translators that produce C
> manage it. And when their generated code has flaws, they blame their
> generators - not the language.

The fault lies very largely with the unsuitability of C for the role.
Except that there isn't really anything else that is as ubiquitous.

> assemblers. You don't understand how C works. You are unwilling to use
> many features of the language. It is not surprising that you find
> generating assembly easier than generating C code.

ASM doesn't stop me storing a 64-bit function pointer into a 64-bit
memory location.

Source language:

    import clib
    ref void fnptr
    fnptr := puts

Generates native code (could be a one-liner but never mind):

    lea D0, [`puts*]
    mov [t.fnptr], D0

No problem.
Now I get it to generate C:

    static void * t_fnptr;
    t_fnptr = (void *)(&puts);

gcc (no options): (Nothing)

gcc (with recommended bunch of options):

    t.c:66:5: warning: ISO C forbids initialization between function
    pointer and 'void *' [-Wpedantic]

g++ (no options):

    t.c:66:5: error: invalid conversion from 'void (*)()' to 'void*'
    [-fpermissive]

Three categories of message; which one was right? Because either my
source code is fine, or it isn't.

>> equipment?
> Since your languages and tools are for you alone, it is up to you to
> answer that one.

I meant: which ones am I likely to come across? What can I buy from PC
World that I can program, that will have some fancy processor inside
where function pointers are bigger than 64 bits, or where float64 has a
different byte ordering from int64?

> If you want to generate code that
> only works on platforms where you can store a function pointer in a
> void* pointer (though I can't imagine why it would be useful),

I explained why: to produce a list of pointers to disparate functions.
(As to why /that/ might be useful, I'd have to post a link to a longer
explanation.)

> you can
> tune your options to suit.

The compiler will already know where that will work and where it won't,
because it presumably knows what the target is, and can either report an
error if not, or arrange for it to work.

> Perhaps try with "-fpermissive" ?

How does that magically make it alright? I thought such an option just
suppressed the warning? (For g++, it just changed an error to a
warning.)

Is there an actual practical problem in doing such a conversion or not?
And if not, why is it bothering me with it?
|
Ian Collins <ian-news@hotmail.com>: May 22 04:33PM +1200

On 22/05/2019 12:07, Bart wrote:

>> bugs nonetheless.
> You can call an unwillingness to expend a huge, disproportionate effort
> in overcoming C's many shortcomings for this purpose a bug if you like.

That's never stopped you expending a huge, disproportionate effort in
whinging about C. That time would have easily been enough to fix your
code.

> g++ (no options):
> t.c:66:5: error: invalid conversion from 'void (*)()' to 'void*'
> [-fpermissive]

The conversion is "conditionally-supported" in C++ >= 11, which makes it
a "program construct that an implementation is not required to support".
Thus:

$ clang++ -std=c++98 -Wall -Werror -Wextra -pedantic /tmp/x.cc
/tmp/x.cc:6:14: error: cast between pointer-to-function and
pointer-to-object is an extension [-Werror,-Wpedantic]
    t_fnptr = (void*)(&puts);
              ^~~~~~~~~~~~~~
1 error generated.
$ clang++ -std=c++11 -Wall -Werror -Wextra -pedantic /tmp/x.cc
$

--
Ian.
|
Bonita Montero <Bonita.Montero@gmail.com>: May 22 07:20AM +0200

> That depends on what you mean by "syntactic sugar". static_cast
> will carry out pointer adjustment so it can be used to navigate
> an inheritance graph correctly.

"Syntactic sugar" referred to the case above.
|
Juha Nieminen <nospam@thanks.invalid>: May 22 06:40AM

> floating-point values, but AFAIK these bugs got fixed in the C runtime
> libraries about 10-20 years ago or so. Plus there are libraries which
> ensure the minimum number of decimal digits for perfect round-trip.

How would you know, using standard C/C++, how many digits you need to
output in order to ensure no loss of bits when reading the value back?

(And this assumes that the C or C++ standard library being used has been
implemented such that, given enough decimal digits, they will be rounded
in the correct direction so as to restore the original value exactly.)

It is my understanding that hexadecimal floating-point representation
*always* outputs the exact number of digits needed to represent the
value accurately.
|
Bart <bc@freeuk.com>: May 22 11:14AM +0100

On 22/05/2019 05:33, Ian Collins wrote:

> That's never stopped you expending a huge, disproportionate effort in
> whinging about C. That time would have easily been enough to fix your
> code.

I prefer to spend the time developing a superior series of alternate
languages. Highlighting the problems in C benefits the process, and
often opens people's eyes to things they didn't know. And also, doing so
in a forum is some light relief from actual work.

But if you want to see some real whinging, watch some Jonathan Blow
videos about his new 'Jai' language that is supposed to wipe the floor
with C++.

> ^~~~~~~~~~~~~~
> 1 error generated.
> $ clang++ -std=c++11 -Wall -Werror -Wextra -pedantic /tmp/x.cc

I thought this was some magic incantation to wave away all errors. But I
tried it on my test (a 2900-LoC Linux version of the C file), and the
number of error and warning lines went up from 1100 to 2100!

(Input was this file: https://github.com/sal55/qx/blob/master/jpeg.c,
generated by an older compiler as newer ones have dropped the C target.)
|
David Brown <david.brown@hesbynett.no>: May 22 01:22PM +0200

On 22/05/2019 02:07, Bart wrote:

>> bugs nonetheless.
> You can call an unwillingness to expend a huge, disproportionate effort
> in overcoming C's many shortcomings for this purpose a bug if you like.

You are happy to classify your wilful and determined ignorance of C as a
bug in yourself? Okay, I suppose. Certainly the idea that this is all a
"huge, disproportionate effort" is your own personal problem.

Undefined behaviours in C are mostly quite clear and obvious, you rarely
meet them in practice, and they are mostly straightforward to handle.
For a language generator, they are peanuts to deal with. These have been
explained to you countless times. Of course, dealing with them nicely
and efficiently involves macros and the C preprocessor. But it is
apparently far better to whine and moan about deficiencies in C than to
use the features of C to get what you need.

> not-quite-orthogonal type system, which is 32-bit-based even when
> the final target is 64-bit, with its million and one quirks, and which
> doesn't quite match that of the target language.

C is not based on any hardware model - it is more abstract. Yes, putting
it between two layers that have matching models will cause
complications, and you will have to be careful to get it right. But as
abstract models go, C's is not difficult to comprehend.

>> It is fine as a target language, but you need to generate correct C
>> code.
> Which means what? So that there are 0 errors and 0 warnings no matter
> what options somebody will apply?

No. It means that there are no errors in the code, based on whatever
restrictions you might want to place on how it is used. If you want to
generate fully portable C code (matching a particular standard), then do
so. If you want to generate code that has limitations on the compiler
or flags needed, then do so - but make sure that you document the
restrictions.
Far and away the best choice here is to use conditional compilation and
compiler detection. For example, if you want to allow casting between
different pointer types to work for punning, and you want wrapping
overflow behaviour to match your source language, then try something
like this:

    #ifdef __GNUC__
    /* Set options needed by gcc and clang for desired C variant */
    #pragma GCC optimize "-fno-strict-aliasing"
    #pragma GCC optimize "-fwrapv"
    #pragma GCC diagnostic ignored "-Wformat"
    #elif defined(_MSC_VER)
    /* Set options needed by MSVC for desired C variant */
    #elif defined(_BART_C)
    /* Bart's C compiler already supports Bart C */
    #else
    #error Untested compiler - remove this and compile at your own risk
    #endif