- Understanding invalid user input - 3 Updates
- conversions between native and unicode encodings - 2 Updates
| Jorgen Grahn <grahn+nntp@snipabacken.se>: Nov 20 10:11PM

On Fri, 2021-11-12, Juha Nieminen wrote:
>> It is so as Bjarne Stroustrup teached that with his examples from day
>> he added namespaces into C++ programming language .
> I suppose that explains a lot.

I don't know: I don't see lots of people read his books[1]. I also don't
think he ever taught 'using namespace std' ... but The C++ Programming
Language examples /do/ omit the prefix, and perhaps it doesn't explain
clearly why.

/Jorgen

[1] Except me. I like his writing and I think I'm heavily influenced by
his style, which I find blends well with Unix styles.

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
| Ben Bacarisse <ben.usenet@bsb.me.uk>: Nov 20 10:31PM

> I also don't think he ever taught 'using namespace std' ... but The
> C++ Programming Language examples /do/ omit the prefix, and perhaps it
> doesn't explain clearly why.

The two Stroustrup books I have on C++ pre-date namespaces. Maybe
"using namespace std" was a simple way to avoid re-writing lots of
examples? Even if he was scrupulous in updating his examples, I bet many
course notes and online tutorials that spanned that period in the
language's evolution just took the easy option!

--
Ben.
| Jorgen Grahn <grahn+nntp@snipabacken.se>: Nov 20 10:32PM

On Thu, 2021-11-11, Juha Nieminen wrote:
...
[avoiding std::. You may remember that I agree here, so won't comment.]
> change it, for backwards compatibility. However, regardless of who is
> responsible for those names, it just quite clearly shows the
> brevity-over-clarity psychology behind it.)

YMMV. I find the POSIX names ("len" instead of "length" and so on) help
clarity, just like the one-character names in science.

Like I think I've said before, I think it's a matter of your background,
and perhaps of how your brain is wired. I wish it was easier to rewire.
For example, I cannot learn to ignore Hungarian notation. I can see a
namespace "NLog", and even after several years of seeing these, I have
to stop and think before my brain can accept that this is really the Log
namespace -- not a namespace for some special N Log, where N stands for
"Native", or "Neutron", or "Natural number" or something.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
| James Kuyper <jameskuyper@alumni.caltech.edu>: Nov 20 03:43PM -0500

C++ has been changing a little faster than I can easily keep up with. I
only recently noticed that C++ now (since 2017?) seems to require
support for the UTF-8, UTF-16, and UTF-32 encodings, which used to be
optional (and even earlier, was non-existent).

Investigating further, I was surprised by something that seems to be
missing. The standard describes five different character encodings used
at execution time. Two of them are implementation-defined native
encodings for narrow and wide characters, stored in char and wchar_t
respectively. The other three are Unicode encodings, UTF-8, UTF-16, and
UTF-32, stored in char8_t, char16_t, and char32_t respectively. The
native encodings could both also be Unicode encodings, but the following
question is specifically about implementations where that is not the
case.

There are codecvt facets (28.3.1.1.1) for converting between the native
encodings, and between char8_t and the other Unicode encodings, but as
far as I can tell, the only way to convert between native and Unicode
encodings is the set of character conversion functions in <cuchar>
(21.5.5) incorporated from the C standard library.

Is it correct that the <cuchar> routines do in fact perform such
conversions? It's hard to be sure, because the detailed description is
only cross-referenced from the C standard, which doesn't use the term
"native encoding", and allows __STDC_UTF_16__ and __STDC_UTF_32__ to not
be predefined.

Is it correct that the <cuchar> routines are the only way to perform
such conversions? It seems odd to me that the only way to perform such
conversions uses a C-style interface.
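[For readers unfamiliar with the <cuchar> functions being discussed: a
minimal sketch of converting a native narrow (multibyte) string to
UTF-32 via std::mbrtoc32, the C-style interface James refers to. The
helper name narrow_to_utf32 is invented for illustration; the result
depends on the current locale's narrow encoding, and error handling is
reduced to bailing out.]

```cpp
#include <cuchar>   // std::mbrtoc32
#include <cwchar>   // std::mbstate_t
#include <string>

// Hypothetical helper, not a standard function: decode a narrow string
// (in the current locale's multibyte encoding) into UTF-32 code points.
std::u32string narrow_to_utf32(const std::string& in) {
    std::u32string out;
    std::mbstate_t state{};              // initial conversion state
    const char* p = in.data();
    const char* end = p + in.size();
    while (p < end) {
        char32_t c32 = 0;
        std::size_t rc = std::mbrtoc32(&c32, p, std::size_t(end - p), &state);
        if (rc == std::size_t(-1) || rc == std::size_t(-2))
            break;                       // invalid or incomplete sequence
        if (rc == 0) rc = 1;             // a null character was decoded
        out.push_back(c32);
        p += rc;                         // bytes consumed from the input
    }
    return out;
}
```

Converting back (c32rtomb), or reaching UTF-16, means chaining more
calls of the same shape; there is no single-call string-to-string
transcoding routine in this interface, which is part of what the
question is getting at.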
| Sam <sam@email-scan.com>: Nov 20 04:35PM -0500

James Kuyper writes:
> Is it correct that the <cuchar> routines are the only way to perform
> such conversions? It seems odd to me that the only way to perform such
> conversions uses a C style interface.

The C++ library's support for transcoding between Unicode and various
character sets has always sucked. This still remains the case.
| You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |