Monday, December 16, 2019

Digest for comp.lang.c++@googlegroups.com - 14 updates in 1 topic

Keith Thompson <Keith.S.Thompson+u@gmail.com>: Dec 16 10:47AM -0800

Robert Wessel <robertwessel2@yahoo.com> writes:
[...]
> Personally I think both 128-bit types and arbitrary precision
> arithmetic should be part of the language (whether in the language
> proper or the library).
 
128-bit integer types are already permitted by the C++ standard.
But adding a standard (extended or not) 128-bit integer type would
conflict with existing ABIs and libraries due to the effect on
[u]intmax_t.
 
I don't think making 128-bit integers mandatory would be practical,
given the difficulty (and questionable usefulness) of providing
them on small embedded systems. Note that gcc supports __int128,
but not on all targets.
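Where the extension exists it can be detected at compile time; a minimal sketch (the `uint128` alias name is my own, not anything standard):

```cpp
// gcc and clang predefine __SIZEOF_INT128__ on targets where the
// __int128 extension is available; other targets get no such macro.
#if defined(__SIZEOF_INT128__)
__extension__ typedef unsigned __int128 uint128;  // 16 bytes where supported
#else
// No __int128 on this target (e.g. many 32-bit and embedded ports);
// a library-provided type would be the fallback here.
#endif
```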
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
[Note updated email address]
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Dec 16 10:52AM -0800

David Brown <david.brown@hesbynett.no> writes:
[...]
> Personally, I think the whole "intmax_t" concept was a mistake. But
> it's easier to see that kind of thing afterwards.
[...]
 
Personally, I think intmax_t is useful (perhaps more in C than in C++).
For example:
 
some_integer_type x = some_value;
printf("x = %jd\n", (intmax_t)x);
 
The zoo of ugly format macros in <cinttypes> doesn't cover all
possible cases.
 
Of course in C++ you can just write:
 
std::cout << "x = " << x << "\n";
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
[Note updated email address]
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
"Öö Tiib" <ootiib@hot.ee>: Dec 16 10:56AM -0800

On Monday, 16 December 2019 17:32:34 UTC+2, David Brown wrote:
> should be enough for anybody!
 
> Personally, I think the whole "intmax_t" concept was a mistake. But
> it's easier to see that kind of thing afterwards.
 
Sure it was, it did not take into account how dimwittedly shortsighted
humans (even most bright engineers) are forever.
 
> know, compilers that actually implement extended integer types are rare
> or non-existent, so this could be done without breaking existing code or
> tools.
 
Non-existent, at least in the case of C++.
"Öö Tiib" <ootiib@hot.ee>: Dec 16 11:19AM -0800

On Monday, 16 December 2019 20:47:53 UTC+2, Keith Thompson wrote:
> But adding a standard (extended or not) 128-bit integer type would
> conflict with existing ABIs and libraries due to the effect on
> [u]intmax_t.
 
Sure, when humans make an ABI, do they make it non-extendable, immutable
and carved into rock? I don't buy it. Didn't we evolve beyond
troglodyte apes some time ago?
 
> given the difficulty (and questionable usefulness) of providing
> them on small embedded systems. Note that gcc supports __int128,
> but not on all targets.
 
Not mandatory. Even int_least128_t need not be mandatory ... but just adding
some __int128 that is only barely good enough, so that evil monster companies
can implement polished versions in their Javas, C#s, Gos and Swifts,
is somewhat insulting. Maybe we C and C++ programmers should go on a
two-month world-wide strike against those fat fuckers after February. ;)
Daniel <danielaparker@gmail.com>: Dec 16 11:56AM -0800

On Monday, December 16, 2019 at 10:21:36 AM UTC-5, David Brown wrote:
 
> With C++, it would not be difficult to put together a class that acts
> like a 128-bit integer type for most purposes - including support for
> literals and std::cout. Consider it a challenge to test your C++ knowledge.
 
That's one perspective, that of a lone C++ developer or team solving a
particular problem, with the tools at hand, using a home wrapped 128-bit
integer type here, some gcc extension there, and so on. Reading David's posts,
it often sounds like that is all of C++. But it's not, that's only part
of C++, and may not be the most important part.
 
Generally speaking, open source libraries have come to provide the features
missing from the C++ standard, some supported by vendors such as Microsoft and
Google, some by individuals, and of course Boost. Lord knows, the supply of
free software, even of very high quality, vastly exceeds the demand. The
ability of open source to provide clean APIs, though, depends on the coverage
of basic types that are standard. There is no point, for example, in a
database vendor shipping an API with a custom big decimal class. Consequently,
most C++ APIs have to be content with binding things like big integers and
big decimals and big floats and int128 to strings, leaving it up to the user
to convert the strings into some extension type supplied by their compiler, or
something else. Contrast that with practically all other modern languages
with richer type systems, where it is straightforward to provide libraries
that implement APIs for standard things such as CBOR or SQL that bind
directly into language types.
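The string-binding pattern Daniel describes might look like this; the `field` variant and `decode` functions are hypothetical, purely for illustration:

```cpp
#include <cstdint>
#include <string>
#include <variant>

// What a CBOR- or SQL-style C++ API can return today: without a standard
// 128-bit or bignum type, oversized values survive only as decimal text.
using field = std::variant<std::int64_t, double, std::string>;

field decode(std::int64_t small) { return small; }    // fits: typed
field decode(const std::string& big_digits) {         // doesn't fit: stringly
    return big_digits;  // e.g. "340282366920938463463374607431768211455"
}
```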
 
Daniel
David Brown <david.brown@hesbynett.no>: Dec 16 09:15PM +0100

On 16/12/2019 19:47, Keith Thompson wrote:
> given the difficulty (and questionable usefulness) of providing
> them on small embedded systems. Note that gcc supports __int128,
> but not on all targets.
 
gcc supports 128-bit integers on 64-bit systems, which is not
unreasonable. Support for "double size" integers can be done fairly
simply and efficiently. I think it would be fine to support them on
32-bit systems too, but it's more effort than it's worth to ask
compilers to support 128-bit types on 8-bit processors!
 
I'd like to see some standard way (with standard names) to have 128-bit
integer types without increasing intmax_t. I don't think there would be
a need to make them mandatory - compilers would support them where they
are practical and useful.
David Brown <david.brown@hesbynett.no>: Dec 16 09:18PM +0100

On 16/12/2019 20:56, Daniel wrote:
> integer type here, some gcc extension there, and so on. Reading David's posts,
> it often sounds like that is all of C++. But it's not, that's only part
> of C++, and may not be the most important part.
 
Of course it is not the most important part of C++. And I think it
would be nice if a 128-bit type was standardised in the C++ library so
that you don't have to write your own (or use a third-party library -
surely Boost has one).
 
The point is merely that you /can/ write such a class in C++, and it
will work fine as a 128-bit integer for most uses. And writing such a
class can be an interesting exercise - I think it might open Bart's eyes
to some of the things you can do with C++.
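A minimal sketch of such a class, unsigned-only and supporting just addition and stream output (hex output keeps it short; decimal output needs 128-by-64 division):

```cpp
#include <cstdint>
#include <iomanip>
#include <ostream>

// Two 64-bit halves; carry is propagated by hand.
struct u128 {
    std::uint64_t lo = 0, hi = 0;
    constexpr u128(std::uint64_t v = 0) : lo(v) {}
    constexpr u128(std::uint64_t h, std::uint64_t l) : lo(l), hi(h) {}
};

constexpr u128 operator+(u128 a, u128 b) {
    u128 r(a.hi + b.hi, a.lo + b.lo);
    if (r.lo < a.lo) ++r.hi;          // unsigned wraparound => carry out
    return r;
}

std::ostream& operator<<(std::ostream& os, u128 v) {
    std::ios state(nullptr);
    state.copyfmt(os);                // save the caller's formatting flags
    os << "0x" << std::hex;
    if (v.hi)                         // high half first, low half zero-padded
        os << v.hi << std::setw(16) << std::setfill('0');
    os << v.lo;
    os.copyfmt(state);                // restore flags
    return os;
}
```

The full operator set and literal support (the exercise proposed above) would build on the same pattern; a user-defined literal parsing a digit string is one way to handle constants beyond 64 bits.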
 
David Brown <david.brown@hesbynett.no>: Dec 16 09:24PM +0100

On 16/12/2019 19:52, Keith Thompson wrote:
> For example:
 
> some_integer_type x = some_value;
> printf("x = %jd\n", (intmax_t)x);
 
Fair enough. In my coding, I invariably know the sizes I am dealing
with (and I usually use the <stdint.h> or <cstdint> types). So it is
good to hear the opinions of people who do other kinds of programming.
 
> The zoo of ugly format macros in <cinttypes> doesn't cover all
> possible cases.
 
Indeed.
 
> Of course in C++ you can just write:
 
> std::cout << "x = " << x << "\n";
 
Yes.
 
In C, you can use _Generic to handle this, but it is not as easily
extended as std::cout.
Ben Bacarisse <ben.usenet@bsb.me.uk>: Dec 16 08:45PM

> should be enough for anybody!
 
> Personally, I think the whole "intmax_t" concept was a mistake. But
> it's easier to see that kind of thing afterwards.
 
I disagree. See below for at least one reason.
 
> know, compilers that actually implement extended integer types are rare
> or non-existent, so this could be done without breaking existing code or
> tools.
 
This puzzles me. gcc (I am talking mainly about C here as I don't know
the C++ standard well enough) implements extended integer types (like
__int128) but not in conforming mode because that would entail making
intmax_t 128 bits wide. Did you intend a rather narrower remark that
compilers that implement extended integer types in conforming mode are
rare or non-existent?
 
intmax_t is useful, in part because it solves the printf problem. You
can portably print any integer i using
 
printf("%jd\n", (intmax_t)i);
 
whether i is an extended type like __int128 or even some weird 99-bit
integer type.
 
--
Ben.
David Brown <david.brown@hesbynett.no>: Dec 16 10:00PM +0100

On 16/12/2019 21:45, Ben Bacarisse wrote:
> intmax_t 128 bits wide. Did you intend a rather narrower remark that
> compilers that implement extended integer types in conforming mode are
> rare or non-existent?
 
gcc supports 128-bit types even with "-std=c11 -pedantic" (or other C or
C++ standards), but requires "__extension__" to avoid a warning:
 
__extension__ typedef __int128 int128;
 
int foo(void) {
    return sizeof(int128);
}
 
 
__int128 is a gcc extension, and is treated as an "integer scalar type",
but it is /not/ an "extended integer type" as defined by the C
standards. That applies whether you are in conforming modes or not.
 
<https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html>
"""
GCC does not support any extended integer types.
"""
 
 
 
> printf("%jd\n", (intmax_t)i);
 
> whether i is an extended type like __int128 or even some weird 99-bit
> integer type.
 
Yes, Keith mentioned that use, which I had not thought about (it's not
something that turns up in my kind of coding).
Bart <bc@freeuk.com>: Dec 16 09:03PM

On 16/12/2019 15:21, David Brown wrote:
> where "long long" is 128-bit. (Or, hypothetically, 128-bit "long",
> "int", "short" or "char".) As you know full well, gcc does not have
> printf, since gcc is a compiler and printf is from the standard library.
 
So? printf is supposed to reflect the available primitive types. If a
compiler is extended to 128 bits, then printf should support that too.
How it does that is not the concern of the programmer.
 
(However printf currently doesn't even directly support int64_t, so
don't hold your breath for int128.)
 
> either. But the ecosystem around C is huge - you can't change the
> standard in a way that breaks compatibility, which is what would have to
> happen to make these types full integer types.
 
What exactly would be the problem in supporting an integer constant
bigger than 2**64-1? This part is inside the compiler not the library,
so if a type bigger than 2**64-1 exists, then the constant can be of
that type.
 
> I am not convinced that there is any great reason for having full
> support for 128-bit types here - how often do you really need them?
 
A few weeks ago there was a thread on clc that made use of 128-bit
numbers to create perfect hashes of words in a dictionary. And the
longest word in the dictionary, with the correct ordering of prime
numbers, would just fit into 128 bits.
 
Anyway, there has long been a tradition in programming languages with a
word-sized integer type of also providing a double-word-sized one.
 
So 32-bit ints were available on 16-bit hardware; and 64-bit on 32-bit
machines. Since most machines now are 64-bit (other than your specialist
processors), why not have a 128-bit type? Even my rubbish compiler can
turn this:
 
int128 a,b,c
a := b+c
 
into (Dn are 64-bit registers):
 
mov D0, [b]
mov D1, [b+8]
mov D2, [c]
mov D3, [c+8]
add D0, D2
adc D1, D3
mov [a], D0
mov [a+8], D1
 
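With gcc or clang on x86-64, the nonstandard __int128 extension produces essentially the same pair of instructions; a sketch (guarded, since not every target has it):

```cpp
#if defined(__SIZEOF_INT128__)        // present on gcc/clang 64-bit targets
// __int128 is a compiler extension, not a standard extended integer type;
// on x86-64 the addition below compiles to one add plus one adc,
// much like the listing above.
__extension__ typedef unsigned __int128 u128x;

u128x add128(u128x a, u128x b) { return a + b; }
#endif
```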
 
> There are plenty of use-cases for big integers, such as in cryptography,
> but then 128-bit is not nearly big enough. And for situations where
> 128-bit integers are useful, how often do you need literals of that size?
 
That would be like creating an arbitrary precision library without
having a way to write arbitrarily long constants. They can occur either
in input data (needing support via scanf, atol, etc.) or in
machine-generated code or {...} initialiser data.
 
Doing it via string conversions is crass.
 
>> So mere 128-bit integer-only support is trivial!)
 
> Arbitrary precision arithmetic is, I think, a lot more useful than
> 128-bit integers.
 
As I think Robert mentioned, it can have considerably more overheads,
especially mine. So a:=b+c might involve executing many hundreds of
instructions involving function calls, loops, and memory allocation;
compare with the code above.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Dec 16 09:13PM

>> Did you have myFilesystem, myOptional and myVariant before C++17?
 
> Optional and variant are both solutions looking for a problem. Just more noise
> added to the language to keep a tiny number of purists happy.
 
Wrong! You really are an ignorant twit.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who doesn't believe in any God the most. Oh, no..wait.. that never happens." – Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."
Vir Campestris <vir.campestris@invalid.invalid>: Dec 16 09:45PM

On 16/12/2019 16:54, Robert Wessel wrote:
> On the third hand, 128-bit integers allow a number of things that
> 64-bit integers don't - a useful type in which to perform currency
> calculations, usefully wide (and precise) times, and the like.
 
My mind boggles.
 
32 bit numbers are quite big enough for my financial needs, and the
world GDP is only about 40 bits (in USD). What currency work have you
done where 64 bit currency wasn't enough?
 
However - the argument against _not_ implementing std::int128_t here
seems to be that a lot of people have assumed that the biggest int is
only 64 bits. When will people learn...
 
I started in PCs when a pointer and an int were both 16 bit, and people
misused int. A few years later they were both 32 bit... now they're
generally a different size, and _still_ I see int x = sizeof(...)
 
Andy
David Brown <david.brown@hesbynett.no>: Dec 17 12:04AM +0100

On 16/12/2019 22:03, Bart wrote:
 
> So? printf is supposed to reflect the available primitive types. If a
> compiler is extended to 128 bits, then printf should support that too.
> How it does that is not the concern of the programmer.
 
The C standards form the contract. A C compiler supports a given C
standard, and a C library supports the given C standard. (A compiler
and library can also be designed together and made to support each
other.) If you have a C compiler that conforms to a standard and a C
library that conforms to a standard, then together they form a C
implementation for that standard. The same applies to C++.
 
It is entirely possible for one part of this to support features that
are not supported by the other. If these are extensions, not required
by the standards, then that's fine.
 
And printf - according to its definition in the standard - does not have
any support for integer sizes bigger than "intmax_t" as defined by the
implementation (generally, by the ABI for the platform). It doesn't
matter if the compiler has support for other types - the standard printf
does not support them.
 
 
> (However printf currently doesn't even directly support int64_t, so
> don't hold your breath for int128.)
 
Any conforming C99 printf will support "long long int", which is at
least 64 bits (and generally exactly 64 bits). You can't blame the
compiler just because /you/ happen to use it with an outdated and
non-conforming library. Anyone who uses gcc as part of a conforming
implementation has printf that supports int64_t.
 
(This discussion was somewhat interesting the first couple of times it
came up - it must have been explained to you a dozen times or more.)
 
> bigger than 2**64-1? This part is inside the compiler not the library,
> so if a type bigger than 2**64-1 exists, then the constant can be of
> that type.
 
I don't think the size of the constants itself is an issue. The problem
is that you can't have a fully supported integer type bigger than
intmax_t in C, and intmax_t is constrained by the ABI for the platform.
 
> numbers to create perfect hashes of words in a dictionary. And the
> longest word in the dictionary, with the correct ordering of prime
> numbers, would just fit into 128 bits.
 
You don't mean "perfect hash" here - you mean "hash". "Perfect hash
function" has a specific meaning.
 
128-bit hashes are, in general, pointless. They are far bigger than
necessary to avoid a realistic possibility of an accidental clash, and
far too small to avoid intentional clashes.
 
Of course it is possible to find uses of 128-bit integers - especially
for obscure and artificial problems like that thread. I did not suggest
they were never useful - I am suggesting they are /rarely/ useful. And
I am suggesting that it is even rarer that there would be need for
/full/ support for such types. I didn't follow the thread in question,
but I doubt if it needed constants of 128 bit length, or printf support
for them, or support for abs, div, or other integer-related functions in
the C standard library. (Since it was a C discussion, I assume it
didn't need any integer functions from the C++ library.)
 
> especially mine. So a:=b+c might involve executing many hundreds of
> instructions involving function calls, loops, and memory allocation;
> compare with the code above.
 
Of course arbitrary precision arithmetic involves more work. But that's
what you need for things like public key cryptography. I am not
suggesting that you should use arbitrary precision arithmetic to handle
128 bit numbers - I am saying that it is rare that you need something
bigger than 64-bit but will be satisfied with 128-bit.