Monday, December 16, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

Melzzzzz <Melzzzzz@zzzzz.com>: Dec 15 11:28PM


> Have you ever experienced the Boost preprocessor macros? Iirc, it was
> the Chaos lib way back. Some hardcore shi%.
 
> ;^)
 
Heh, I never used boost either :P
 
 
--
press any key to continue or any other to quit...
There is nothing I enjoy as much as my DISABLED status -- Zli Zec
We are all witnesses - about 3 years of intensive propaganda is enough to drive a nation mad -- Zli Zec
There wasn't really that much violence in the Wild West, precisely because
everyone was armed. -- Mladen Gogala
"Öö Tiib" <ootiib@hot.ee>: Dec 15 03:41PM -0800

On Monday, 16 December 2019 01:28:14 UTC+2, Melzzzzz wrote:
> > the Chaos lib way back. Some hardcore shi%.
 
> > ;^)
 
> Heh, I never used boost either :P
 
For the last 15 years it has been almost inevitable to use something from
boost in a bigger project. The other way is to write precisely the same
thing yourself.
Did you have myFilesystem, myOptional and myVariant before C++17?
James Kuyper <jameskuyper@alumni.caltech.edu>: Dec 15 11:38PM -0500

On 12/15/19 9:53 AM, Daniel wrote:
...
> And yet, it still doesn't have ...
> std::int128,
 
I'm curious: how would std::int128 differ from std::int128_t, which has
been standardized? It is optional, just like all the other fixed-width
typedefs. Like those other typedefs, if the type is not supported, that
identifier and the corresponding macros, such as INT128_MAX, may not be
used for any conflicting purpose.
What more would you want of std::int128? Making it mandatory? That won't
work any better than making std::int16_t mandatory - C++'s mandate is to
be implementable almost everywhere, including platforms on which
std::int16_t (much less std::int128_t) would not be implementable.
Melzzzzz <Melzzzzz@zzzzz.com>: Dec 16 05:18AM


> For the last 15 years it has been almost inevitable to use something from
> boost in a bigger project. The other way is to write precisely the same
> thing yourself.
> Did you have myFilesystem, myOptional and myVariant before C++17?
 
Yes.
 
 
--
press any key to continue or any other to quit...
There is nothing I enjoy as much as my DISABLED status -- Zli Zec
We are all witnesses - about 3 years of intensive propaganda is enough to drive a nation mad -- Zli Zec
There wasn't really that much violence in the Wild West, precisely because
everyone was armed. -- Mladen Gogala
"Öö Tiib" <ootiib@hot.ee>: Dec 15 09:23PM -0800

On Monday, 16 December 2019 06:38:20 UTC+2, James Kuyper wrote:
> work any better than making std::int16_t mandatory - C++'s mandate is to
> be implementable almost everywhere, including platforms on which
> std::int16_t (much less std::int128_t) would not be implementable.
 
Maybe it is not about the standard but about compiler makers who provide
128-bit integers that are not conformant with std::int128_t.
Daniel <danielaparker@gmail.com>: Dec 15 09:24PM -0800

On Sunday, December 15, 2019 at 11:38:20 PM UTC-5, James Kuyper wrote:
 
> std::int128_t ... has been standardized
 
I wasn't aware of that. Can you provide a reference?
 
Thanks,
Daniel
Daniel <danielaparker@gmail.com>: Dec 15 09:36PM -0800

On Monday, December 16, 2019 at 12:23:39 AM UTC-5, Öö Tiib wrote:
 
> Maybe it is not about the standard but about compiler makers who provide
> 128-bit integers that are not conformant with std::int128_t.
 
I don't know of any compilers that have std::int128_t. gcc and clang have __int128,
but std::numeric_limits<__int128> is only defined if compiled with e.g.
-std=gnu++11 and not -std=c++11.
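
A minimal probe of that difference (this reflects gcc/libstdc++ behavior
as I understand it; with -pedantic-errors, strict mode may reject
__int128 altogether):

    #include <iostream>
    #include <limits>

    int main() {
        // Prints true under -std=gnu++11 but false under -std=c++11,
        // because strict mode leaves numeric_limits<__int128> as the
        // unspecialized primary template.
        std::cout << std::boolalpha
                  << std::numeric_limits<__int128>::is_specialized
                  << '\n';
    }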
 
Daniel
"Öö Tiib" <ootiib@hot.ee>: Dec 15 09:43PM -0800

On Monday, 16 December 2019 07:36:32 UTC+2, Daniel wrote:
 
> I don't know of any compilers that have std::int128_t. gcc and clang have __int128,
> but std::numeric_limits<__int128> is only defined if compiled with e.g.
> -std=gnu++11 and not -std=c++11.
 
Neither do I. All compilers implement 128-bit integers as
non-standard extensions.
David Brown <david.brown@hesbynett.no>: Dec 16 09:35AM +0100

On 16/12/2019 00:11, Chris M. Thomasson wrote:
 
>> I can use macros but I don't write them...
 
> Have you ever experienced the Boost preprocessor macros? Iirc, it was
> the Chaos lib way back. Some hardcore shi%.
 
Boost has always pushed the limits of what can be done with C++. And
when it seems they have pushed it to something useful, but the
user experience or the implementation is unpleasant (such as needing
"hardcore" macros), the C++ powers-that-be look at what they need to do
to the language to give users roughly the same results, but with less
unpleasantness. This is one of the driving forces behind the evolution
of C++.
David Brown <david.brown@hesbynett.no>: Dec 16 09:44AM +0100

On 16/12/2019 06:43, Öö Tiib wrote:
>> -std=gnu++11 and not -std=c++11.
 
> Neither do I. All compilers implement 128-bit integers as
> non-standard extensions.
 
There are technical - but currently unavoidable - reasons why compilers
don't provide a full 128-bit integer type. One of these is that if
64-bit gcc were to provide a 128-bit extended integer type (using the C
and C++ technical term "extended integer type"), then "intmax_t" would
have to be that type rather than "long long int". This would have a
knock-on effect on ABIs, libraries, headers, etc., which would be out
of the question. Other complications involve support for literals of
the type. (That would open more challenges with user-defined literals -
you'd need language support for operator "" X(unsigned __int128) as well.)
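
(The one fully standard hook that already works is a raw literal
operator, which receives the token as a string and so can build values
wider than unsigned long long. A sketch - the suffix _i128 is invented
here, it needs C++14 for the constexpr loop, and it does no overflow
checking:

    constexpr __int128 operator""_i128(const char* s) {
        __int128 v = 0;
        for (; *s; ++s) {
            if (*s == '\'') continue;   // skip digit separators
            v = v * 10 + (*s - '0');
        }
        return v;
    }

    constexpr __int128 big = 170141183460469231731687303715884105727_i128;

That works today as a library-only workaround, but it is exactly the
kind of thing the language would have to bless to make the type feel
first-class.)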
 
Compilers are free to implement a type that is mostly like a 128-bit
integer type. But it can't be classified as an "extended integer type".
And thus it can't be added as int128_t and uint128_t.
 
Hopefully the C and/or C++ folks will figure out a way to improve this
deadlock, but this is not something the compiler writers can fix by
themselves. The nearest they can get is collaborating on the name of
the almost-integer-type __int128.
boltar@nowhere.co.uk: Dec 16 09:55AM

On Sun, 15 Dec 2019 20:48:08 +0000
>improvements they bring to C++ are worth using and, thankfully, you will be
>forced to use them and if you don't like that you can simply fuck off
>elsewhere: I'm sure you will love Java instead.
 
Do try growing up at some point.
boltar@nowhere.co.uk: Dec 16 09:57AM

On Sun, 15 Dec 2019 15:41:18 -0800 (PST)
 
>For the last 15 years it has been almost inevitable to use something from
>boost in a bigger project. The other way is to write precisely the same
>thing yourself.
>Did you have myFilesystem, myOptional and myVariant before C++17?
 
Optional and variant are both solutions looking for a problem. Just more noise
added to the language to keep a tiny number of purists happy.
Bo Persson <bo@bo-persson.se>: Dec 16 12:13PM +0100

On 2019-12-16 at 06:24, Daniel wrote:
> On Sunday, December 15, 2019 at 11:38:20 PM UTC-5, James Kuyper wrote:
 
>> std::int128_t ... has been standardized
 
> I wasn't aware of that. Can you provide a reference?
 
The *meaning* of intN_t is standardized for all N. Still optional though
(with special rules for N = 8, 16, 32, and 64).
 
 
Bo Persson
Bart <bc@freeuk.com>: Dec 16 12:10PM

On 16/12/2019 08:44, David Brown wrote:
> knock-on effect on ABIs, libraries, headers, etc., which would be out
> of the question. Other complications involve support for literals of
> the type.
 
You mean that even when int128 types are built-in, there is no support
for 128-bit constants? And presumably not for output either, according
to my tests with gcc and g++:
 
__int128_t a;
a=170141183460469231731687303715884105727;
std::cout << a;
 
The constant overflows, and if I skip that part, I get pages of errors
when trying to print it. On gcc, I have no idea what printf format to use.
 
This is frankly astonishing with such a big and important language, and
with comprehensive compilers such as gcc and g++. Even my own 'toy'
language can do this:
 
int128 a := 170'141'183'460'469'231'731'687'303'715'884'105'727
 
println a
println a:"hs'"
println word128.maxvalue # (ie. uint128)
 
Output is:
 
170141183460469231731687303715884105727
7fff'ffff'ffff'ffff'ffff'ffff'ffff'ffff
340282366920938463463374607431768211455
 
What exactly is the problem with allowing 128-bit constants at least?
 
(My implementation is by no means complete (eg. 128/128 is missing, it
does 128/64), but it can do most of the basics and I just need to get
around to filling in the gaps.
 
It will also have arbitrary precision decimal integers /and/ floats
(another thing that was mentioned as missing from core C++), which I'm
working on this month. That will have constant and output support too:
 
a := 123.456e1000'000L
println a
 
So mere 128-bit integer-only support is trivial!)
 
> (That would open more challenges with user-defined literals -
> you'd need language support for operator "" X(unsigned __int128) as well.)
 
But this would be just a natural extension? I assume that, whatever that
means, there is already X(uint64_t). It is not a totally new feature or
concept to add.
 
(And if your example means there is no '__uint128', then that's another
incredibly easy thing to have fixed!)
James Kuyper <jameskuyper@alumni.caltech.edu>: Dec 16 07:17AM -0500

On 12/16/19 12:23 AM, Öö Tiib wrote:
>> std::int16_t (much less std::int128_t) would not be implementable.
 
> Maybe it is not about the standard but about compiler makers who provide
> 128-bit integers that are not conformant with std::int128_t.
 
There's nothing that defining std::int128 would do to solve that problem
unless the standard says something different about std::int128 than it
currently says about std::int128_t. So what would you want it to say
differently?
James Kuyper <jameskuyper@alumni.caltech.edu>: Dec 16 07:25AM -0500

On 12/16/19 3:44 AM, David Brown wrote:
...
> deadlock, but this is not something the compiler writers can fix by
> themselves. The nearest they can get is collaborating on the name of
> the almost-integer-type __int128.
 
The fundamental problem is that people didn't want to implement
[u]intmax_t as intended: the largest supported integer type, a type that
must necessarily change every time that the list of supported integer
types expands to include a larger type. Instead, for some reason, they
wanted to use it as a synonym for int64_t. Why they didn't use int64_t
as the type for such interfaces, I don't know. This was in fact
predicted before the proposal to add [u]intmax_t to the C standard was
approved: someone (I think it was Doug Gwyn) predicted that someday we
would need to add something like
really_really_max_int_type_this_time_we_really_mean_it - and then people
would get into the habit of assuming that
really_really_max_int_type_this_time_we_really_mean_it was just a
synonym for int128_t, and they'd have to do it all over again. I suspect
he's right.
James Kuyper <jameskuyper@alumni.caltech.edu>: Dec 16 07:44AM -0500

On 12/16/19 12:24 AM, Daniel wrote:
> On Sunday, December 15, 2019 at 11:38:20 PM UTC-5, James Kuyper wrote:
 
>> std::int128_t ... has been standardized
 
> I wasn't aware of that. Can you provide a reference?
 
As with most of those parts of the C++ library that correspond to the C
standard library, the C++ standard says very little itself about
<stdint.h>, <inttypes.h>, <cstdint> and <cinttypes> (C++ 27.9.2p3,4 is
the main exception, and not relevant to this discussion). Instead, it
simply incorporates the corresponding wording from the C standard by
reference (C++ 17.6.1.2p4).
 
The relevant words are in the C standard.
 
First of all:
 
"For each type described herein that the implementation provides, 254)
<stdint.h> shall declare that typedef name and define the associated
macros. Conversely, for each type described herein that the
implementation does not provide, <stdint.h> shall not declare that
typedef name nor shall it define the associated macros. An
implementation shall provide those types described as ''required'', but
need not provide any of the others (described as ''optional'')."
(C 7.20.1p4)
 
Support for [u]int_leastN_t (7.20.1.2p3) and [u]int_fastN_t (7.20.1.3p3)
is mandatory for N==8, 16, 32, and 64. Support for [u]intN_t is
mandatory for those same values, but only if an implementation provides
integer types with those sizes, no padding bits, and 2's complement
representation for the signed versions of those types (7.20.1.1p3). All
an implementation needs to do to justify not providing int64_t is to not
provide such a 64-bit type. For all other values of N, support is
optional - but per 7.20.1p4, the relevant identifiers cannot be used for
any conflicting purpose by the implementation - and that is the sense in
which I say that it has been standardized.
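
In code, the standard-blessed way to detect such an optional type is
therefore its macros. A sketch - INT128_MAX and std::int128_t are
hypothetical here, since no mainstream implementation currently
provides them:

    #include <cstdint>
    #include <iostream>

    int main() {
    #ifdef INT128_MAX
        std::int128_t x = 0;   // usable only where the implementation provides it
        std::cout << "int128_t provided\n";
    #else
        std::cout << "int128_t not provided\n";
    #endif
    }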
Bo Persson <bo@bo-persson.se>: Dec 16 02:13PM +0100

On 2019-12-16 at 13:25, James Kuyper wrote:
> types expands to include a larger type. Instead, for some reason, they
> wanted to use it as a synonym for int64_t. Why they didn't use int64_t
> as the type for such interfaces, I don't know.
 
When C was first standardized we still had some mainframes that used
36-bit ones' complement. On those intmax_t could be 72 bits and int64_t
was just not available.
 
 
> really_really_max_int_type_this_time_we_really_mean_it was just a
> synonym for int128_t, and they'd have to do it all over again. I suspect
> he's right.
 
We never learn, of course. :-)
 
 
Bo Persson
David Brown <david.brown@hesbynett.no>: Dec 16 04:21PM +0100

On 16/12/2019 13:10, Bart wrote:
>     std::cout << a;
 
> The constant overflows, and if I skip that part, I get pages of errors
> when trying to print it. On gcc, I have no idea what printf format to use.
 
There is no printf support for 128-bit integers, except for systems
where "long long" is 128-bit. (Or, hypothetically, 128-bit "long",
"int", "short" or "char".) As you know full well, gcc does not have
printf, since gcc is a compiler and printf is from the standard library.
 
> This is frankly astonishing with such a big and important language, and
> with comprehensive compilers such as gcc and g++. Even my own 'toy'
> language can do this:
 
It is only astonishing if you don't understand what it means for a
language to have a standard. It's easy to do this with a toy language
(and that is an advantage of toy languages). It would not be
particularly difficult for the gcc or clang developers to support it
either. But the ecosystem around C is huge - you can't change the
standard in a way that breaks compatibility, which is what would have to
happen to make these types full integer types.
 
I am not convinced that there is any great reason for having full
support for 128-bit types here - how often do you really need them?
There are plenty of use-cases for big integers, such as in cryptography,
but then 128-bit is not nearly big enough. And for situations where
128-bit integers are useful, how often do you need literals of that size?
 
With C++, it would not be difficult to put together a class that acts
like a 128-bit integer type for most purposes - including support for
literals and std::cout. Consider it a challenge to test your C++ knowledge.
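
As a starting point, the streaming half alone is only a few lines. A
sketch using the gcc/clang __int128 extension (no such operator exists
in any standard library, so this is purely illustrative):

    #include <iostream>
    #include <string>

    std::ostream& operator<<(std::ostream& os, __int128 v) {
        if (v == 0) return os << '0';
        bool neg = v < 0;
        // Convert through unsigned so the most negative value is safe.
        unsigned __int128 u = neg ? -(unsigned __int128)v
                                  : (unsigned __int128)v;
        std::string digits;
        for (; u != 0; u /= 10)
            digits += char('0' + int(u % 10));
        if (neg) digits += '-';
        return os << std::string(digits.rbegin(), digits.rend());
    }

    int main() {
        __int128 max = (__int128)(~(unsigned __int128)0 >> 1);
        std::cout << max << '\n';
        // prints 170141183460469231731687303715884105727
    }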
 
 
>    a := 123.456e1000'000L
>    println a
 
> So mere 128-bit integer-only support is trivial!)
 
Arbitrary precision arithmetic is, I think, a lot more useful than
128-bit integers. But it would be well outside the scope of C. It
would be more reasonable to implement it in a C++ library, but it
is questionable whether it makes sense for the standard library. After
all, there are many ways to implement arbitrary precision arithmetic
with significantly different trade-offs in terms of speed, run-time
memory efficiency, code space efficiency, and how these match with the
type of usage you want for them.
 
 
> But this would be just a natural extension? I assume that whatever that
> means, there is already X(uint64_t). There is not a totally new feature
> or concept to add.
 
<https://en.cppreference.com/w/cpp/language/user_literal>
 
David Brown <david.brown@hesbynett.no>: Dec 16 04:32PM +0100

On 16/12/2019 13:25, James Kuyper wrote:
> really_really_max_int_type_this_time_we_really_mean_it was just a
> synonym for int128_t, and they'd have to do it all over again. I suspect
> he's right.
 
I suspect that at the time, no one really thought about the possibility
of 128-bit integers. After all, 64-bit integers were new, and 64 bits
should be enough for anybody!
 
Personally, I think the whole "intmax_t" concept was a mistake. But
it's easier to see that kind of thing afterwards.
 
"intmax_t" is defined (roughly) as being the biggest integer type. I
wonder if the definition could be changed in the standards to apply only
to the biggest /standard/ integer type? That would leave the door open
for /extended/ integer types that are larger. As far as I know,
compilers that actually implement extended integer types are rare
or non-existent, so this could be done without breaking existing code or
tools.
Robert Wessel <robertwessel2@yahoo.com>: Dec 16 10:54AM -0600

On Mon, 16 Dec 2019 16:21:25 +0100, David Brown
>with significantly different trade-offs in terms of speed, run-time
>memory efficiency, code space efficiency, and how these match with the
>type of usage you want for them.
 
 
Trivially, arbitrary precision arithmetic is more useful than 128-bit
integers, in that the former can do everything the latter can do, and
more.
 
On the other hand, a proper bignum library will almost always carry a
substantial performance penalty compared to fixed 128-bit integers.
Consider that on x86-64 a typical add/subtract/compare reduces to two
instructions (ignoring data movement), and a multiply to about three
multiplies and two adds (division being the usual PITA).
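
For a feel of those costs, here is a rough sketch of the decompositions
in portable C++ (the type and helper names are invented, and a real
implementation would lean on compiler builtins instead):

    #include <cstdint>

    struct u128 { std::uint64_t lo, hi; };

    // Add: a 64-bit add plus an add-with-carry.
    u128 add128(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);   // carry out of the low word
        return r;
    }

    // One width level down: a 64x64 -> 128 product built from 32-bit
    // halves takes four multiplies. Given a hardware 64x64 -> 128
    // multiply, a truncated 128x128 product needs about three multiplies
    // and two adds, as described above.
    u128 mul64x64(std::uint64_t a, std::uint64_t b) {
        std::uint64_t al = a & 0xffffffffu, ah = a >> 32;
        std::uint64_t bl = b & 0xffffffffu, bh = b >> 32;
        std::uint64_t p0 = al * bl, p1 = al * bh;
        std::uint64_t p2 = ah * bl, p3 = ah * bh;
        std::uint64_t mid = (p0 >> 32) + (p1 & 0xffffffffu)
                                       + (p2 & 0xffffffffu);
        return { (mid << 32) | (p0 & 0xffffffffu),
                 p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32) };
    }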
 
On the third hand, 128-bit integers allow a number of things that
64-bit integers don't - a useful type in which to perform currency
calculations, usefully wide (and precise) time values, and the like.
 
Personally I think both 128-bit types and arbitrary precision
arithmetic should be part of the language (whether in the language
proper or the library).
"Öö Tiib" <ootiib@hot.ee>: Dec 16 10:21AM -0800

> >Did you have myFilesystem, myOptional and myVariant before C++17?
 
> Optional and variant are both solutions looking for a problem. Just more noise
> added to the language to keep a tiny number of purists happy.
 
Nonsense. The problems are the usual ones: performance and reliability.
A union is a rather good performance optimization in decent hands, a
variant is an easy-to-use, reliable union, and an optional is a variant
between a value and nothing. So a "purist" who can wield such trivial
tools typically beats the sorry "latitudinarians" who can't.
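
A small sketch of both tools in ordinary use (standard C++17; the
function names are made up for illustration):

    #include <iostream>
    #include <optional>
    #include <string>
    #include <variant>
    #include <vector>

    // A tagged union: the result is either a value or an error message.
    std::variant<int, std::string> parse_int(const std::string& s) {
        try { return std::stoi(s); }
        catch (const std::exception&) { return "not a number: " + s; }
    }

    // Value-or-nothing, with no magic sentinel value.
    std::optional<int> first_even(const std::vector<int>& v) {
        for (int x : v)
            if (x % 2 == 0) return x;
        return std::nullopt;
    }

    int main() {
        auto r = parse_int("42");
        if (auto* p = std::get_if<int>(&r))
            std::cout << "parsed " << *p << '\n';
        if (auto e = first_even({1, 3, 4}))
            std::cout << "first even: " << *e << '\n';
    }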
"Öö Tiib" <ootiib@hot.ee>: Dec 16 10:46AM -0800

On Monday, 16 December 2019 07:18:47 UTC+2, Melzzzzz wrote:
> > in a bigger project. The other way is to write precisely the same thing yourself.
> > Did you have myFilesystem, myOptional and myVariant before C++17?
 
> Yes.
 
Variant and filesystem felt too boring and complicated to
write myself, so I always used some platform-specific or
portable library solution (like Qt or boost). Some "optional"
or "fallible" is simpler to write, but why bother when there
is boost already?
James Kuyper <jameskuyper@alumni.caltech.edu>: Dec 15 11:27PM -0500

> On Wed, 11 Dec 2019 03:13:44 -0800 (PST)
> James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
...
>>from a desktop machine that I maintained myself.
 
> The basic reliability of an OS should be independent of how well or otherwise
> it's administered, so long as patches are installed.
 
I'm curious - how in the world do you expect an OS to be designed to
make its reliability independent of the level of competence of the SA
maintaining it? About the only way I can see to do that is to give the
SA very few options for doing anything that he might do wrong, and a
system so-designed doesn't need any system administrators (for instance,
most of the electronics that run in my refrigerator don't need any
system administration). On powerful general purpose machines, that's
simply not an option - SAs have enormous power, and correspondingly
enormous capabilities for ruining a system if they're not sure what
they're doing.
boltar@nowhere.co.uk: Dec 16 10:02AM

On Sun, 15 Dec 2019 23:27:11 -0500
 
>I'm curious - how in the world do you expect an OS to be designed to
>make its reliability independent of the level of competence of the SA
>maintaining it? About the only way I can see to do that is to give the
 
I expect the kernel not to crash, and I expect statically linked core system
utilities (not a concept Windows is familiar with) that are currently running
to keep running even if a sysadmin deletes something he shouldn't have,
messes up a library upgrade, or runs rm -rf /.
