Friday, May 28, 2021

Digest for comp.lang.c++@googlegroups.com - 24 updates in 5 topics

Juha Nieminen <nospam@thanks.invalid>: May 28 04:45PM

>>linear search.
 
> A vector miss will always be O(N). The average hit will be O(N/2). From a big-O
> perspective, they're identical complexities.
 
What do you mean by "vector miss"?
 
> And either the sort needs to be done after every insert or the insert
> complexity becomes high (due to the need to move all the higher
> elements up one in the vector).
 
I think the idea was that if you already have a sorted vector (for example
a dataset that you read eg. from a file and sorted once), making repeated
searches for different values will be as fast as, if not even slightly
faster than with std::set (not asymptotically faster, but faster in
terms of wallclock time).
Bonita Montero <Bonita.Montero@gmail.com>: May 28 06:46PM +0200

> One can have efficient search in vectors, beating std::set. ...
 
Use unordered_set if you don't need sorted iteration.
When the load factor is set correctly, that will be faster.
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 05:02PM


> One can have efficient search in vectors, beating std::set. But for that
> the vector must be sorted and one must use std::lower_bound(), not
> linear search.
 
Well, that is because a vector is more cache-friendly ;)
 
--
current job title: senior software engineer
skills: x86 assembler,c++,c,rust,go,nim,haskell...
 
press any key to continue or any other to quit...
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 05:05PM

> the vector sorted and use binary searching (std::lower_bound). Of course
> this applies mostly if you can generate the entire vector in one go and
> then you just need to search but don't need to insert or remove elements.
The problem is that if the binary tree doesn't reorganize itself internally
to be cache-friendly on every insert (balancing and keeping neighbouring
nodes close to each other in memory), traversing the set is awfully slow.
 
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 05:13PM

> elements up one in the vector).
 
> A miss on a balanced tree structure will always be O(LOG N), with
> some complexity on insert due to balancing.
While this is theoretically OK, sorting a list in place with a sequential
quicksort is faster than sorting it by relinking nodes. And after that,
traversal is much faster, as an in-place sort doesn't change the memory
order. Chasing pointers is unforgiving on modern hardware: you get less
than one instruction per cycle ...
 
 
 
scott@slp53.sl.home (Scott Lurndal): May 28 05:23PM


>> A vector miss will always be O(N). The average hit will be O(N/2). From a big-O
>> perspective, they're identical complexities.
 
>The description for std::lower_bound<T>() says
 
I missed the reference to lower_bound (i.e. binary search). It's true that
using a binary search on a sorted vector will be O(log N).
Bonita Montero <Bonita.Montero@gmail.com>: May 28 07:31PM +0200

> Problem is that if binary tree is not reorganizing internally to be cache
> friendly for every insert (balancing and making close nodes close to
> each other) traversing set is awfully slow.
 
A vector isn't cache-friendly either when you do a binary search.
Random-access memory-accesses are always slow.
scott@slp53.sl.home (Scott Lurndal): May 28 05:50PM

>And after that traversal is much faster as in place sort doesn't change
>memory order. Chasing pointers is unforgiving on modern hardware.
>You get less than one instruction per cycle ...
 
Not really unforgiving; we spend significant resources on tuning
the cache replacement algorithms and cache prefetchers to detect
and support pointer-chasing applications.
Bonita Montero <Bonita.Montero@gmail.com>: May 28 07:54PM +0200

>> each other) traversing set is awfully slow.
 
> A vector isn't cache-friendly either when you do a binary search.
> Random-access memory-accesses are always slow.
 
I just wrote a little test:
 
#include <iostream>
#include <set>
#include <vector>
#include <algorithm>
#include <random>
#include <chrono>

using namespace std;
using namespace chrono;

int main()
{
    using hrc_tp = time_point<high_resolution_clock>;
    size_t const ROUNDS = 10'000'000;
    set<int> si;
    for( int i = 1000; i--; )   // insert 0 .. 999 so every lookup can hit
        si.insert( i );
    mt19937_64 mt( (default_random_engine())() );
    uniform_int_distribution<int> uid( 0, 999 );
    int const *volatile pvi;
    hrc_tp start = high_resolution_clock::now();
    for( size_t r = ROUNDS; r--; )
        pvi = &*si.find( uid( mt ) );
    double ns = (int64_t)duration_cast<nanoseconds>(
        high_resolution_clock::now() - start ).count() / (double)ROUNDS;
    cout << "set: " << ns << endl;
    vector<int> vi( 1'000 );
    for( size_t i = 1'000; i--; vi[i] = (int)i );
    bool volatile found;
    start = high_resolution_clock::now();
    for( size_t r = ROUNDS; r--; )
        found = binary_search( vi.begin(), vi.end(), uid( mt ) );
    ns = (int64_t)duration_cast<nanoseconds>( high_resolution_clock::now()
        - start ).count() / (double)ROUNDS;
    cout << "vec: " << ns << endl;
}
 
On my Ryzen Threadripper 3990X the times for set are 52.2 ns and for vec
they are 48.3 ns. I expected the difference to be somewhat larger, but as
the set and map trees are usually red-black trees there's some kind of
binary lookup inside the nodes (up to 4 descendants), so the memory
access patterns become not so random.
scott@slp53.sl.home (Scott Lurndal): May 28 08:20PM

> set<int> si;
> for( int i = 1000; --i; )
> si.insert( i );
 
A thousand integers will fit in the L1 cache. Try
it with a hundred million.
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 09:31PM

>> each other) traversing set is awfully slow.
 
> A vector isn't cache-friendly either when you do a binary search.
> Random-access memory-accesses are always slow.
 
Look, caches are pretty large these days. You will rarely have a vector
larger than L3 ;)
For example, take a look at this bench:
[code]
~/.../examples/sort >>> ./list_sort 1000000
seed: 1622237019
N: 1000000
unsorted
 
0x102515e70 299531
0x102515e50 398346
0x102515e30 273381
0x102515e10 875323
0x102515df0 289978
0x102515dd0 399336
0x102515db0 21582
0x102515d90 874270
0x102515d70 114179
0x102515d50 778108
0x102515d30 451177
0x102515d10 570319
0x102515cf0 621189
0x102515cd0 138605
0x102515cb0 908552
0x102515c90 267934
array radix elapsed 0.029208 seconds
list radix elapsed 1.070269 seconds
list merge elapsed 0.329870 seconds
equal
 
equal
 
sorted
 
0x100d80710 0
0x101931470 1
0x101bb1dd0 4
0x10198ef10 6
0x101b8a350 6
0x102453390 7
0x1008aa4b0 10
0x101938ab0 10
0x100d8a650 11
0x102093f30 12
0x101a94eb0 13
0x10086a5f0 13
0x101538210 15
0x100876930 16
0x10079f010 16
0x100e9faf0 17
size of node 12, length 1000000
[/code]
That fits in cache (L3).
Now look at list sort vs. array sort with 8 million elements.
That goes beyond the cache size ;)
[code]
~/.../examples/sort >>> ./list_sort 8000000
seed: 1622237118
N: 8000000
unsorted
 
0x1112ff670 5357700
0x1112ff650 5227237
0x1112ff630 6764785
0x1112ff610 2977649
0x1112ff5f0 5339439
0x1112ff5d0 3588837
0x1112ff5b0 6605927
0x1112ff590 1268989
0x1112ff570 5989679
0x1112ff550 492070
0x1112ff530 4063176
0x1112ff510 2130048
0x1112ff4f0 2334049
0x1112ff4d0 5751483
0x1112ff4b0 1179952
0x1112ff490 973123
array radix elapsed 0.239062 seconds
list radix elapsed 10.192995 seconds
list merge elapsed 4.259887 seconds
equal
 
equal
 
sorted
 
0x10ffe8410 0
0x10e433430 0
0x107ba73b0 1
0x11021d0b0 2
0x10b008290 3
0x1034351f0 4
0x10f710f90 5
0x10b8be5b0 6
0x10395b170 9
0x102da6fd0 9
0x10e729ef0 10
0x110a91990 11
0x103f86a90 12
0x10853c310 13
0x1065f8890 16
0x104453390 16
size of node 12, length 8000000
[/code]
While the array does not fit in cache either, it is more cache-friendly
than the list. The same goes for a binary tree; that is why a B-tree is
so much faster than a plain ordinary binary tree on modern hardware ;)
But C++, because of its iterator guarantees, can't use B-trees for
std::set and std::map :(
Also, merge sort is much faster than radix sort on a list. Can you
believe that?
Now look at the trees benchmark: binary trees vs. B-tree.
[code]
~/.../rust/trees >>> ./target/release/binary_trees ±[●][master]
└── (0,0):(c:BLACK)
├── nil
└── nil
└── (0,0):(data)
├── nil
└── nil
└── (0,0):(c:BLACK)
├── nil
└── (1,1):(c:RED)
├── nil
└── nil
└── (0,0):(data)
├── nil
└── (1,1):(data)
├── nil
└── nil
└── (1,1):(c:BLACK)
├── (0,0):(c:RED)
│ ├── nil
│ └── nil
└── (2,2):(c:RED)
├── nil
└── nil
└── (0,0):(data)
├── nil
└── (1,1):(data)
├── nil
└── (2,2):(data)
├── nil
└── nil
└── (1,1):(c:BLACK)
├── (0,0):(c:BLACK)
│ ├── nil
│ └── nil
└── (2,2):(c:BLACK)
├── nil
└── (3,3):(c:RED)
├── nil
└── nil
└── (2,2):(data)
├── (1,1):(data)
│ ├── (0,0):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (3,3):(data)
├── nil
└── nil
└── (1,1):(c:BLACK)
├── (0,0):(c:BLACK)
│ ├── nil
│ └── nil
└── (3,3):(c:BLACK)
├── (2,2):(c:RED)
│ ├── nil
│ └── nil
└── (4,4):(c:RED)
├── nil
└── nil
└── (2,2):(data)
├── (1,1):(data)
│ ├── (0,0):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (3,3):(data)
├── nil
└── (4,4):(data)
├── nil
└── nil
└── (1,1):(c:BLACK)
├── (0,0):(c:BLACK)
│ ├── nil
│ └── nil
└── (3,3):(c:RED)
├── (2,2):(c:BLACK)
│ ├── nil
│ └── nil
└── (4,4):(c:BLACK)
├── nil
└── (5,5):(c:RED)
├── nil
└── nil
└── (2,2):(data)
├── (1,1):(data)
│ ├── (0,0):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (3,3):(data)
├── nil
└── (4,4):(data)
├── nil
└── (5,5):(data)
├── nil
└── nil
└── (1,1):(c:BLACK)
├── (0,0):(c:BLACK)
│ ├── nil
│ └── nil
└── (3,3):(c:RED)
├── (2,2):(c:BLACK)
│ ├── nil
│ └── nil
└── (5,5):(c:BLACK)
├── (4,4):(c:RED)
│ ├── nil
│ └── nil
└── (6,6):(c:RED)
├── nil
└── nil
└── (2,2):(data)
├── (1,1):(data)
│ ├── (0,0):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (5,5):(data)
├── (4,4):(data)
│ ├── (3,3):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (6,6):(data)
├── nil
└── nil
└── (3,3):(c:BLACK)
├── (1,1):(c:RED)
│ ├── (0,0):(c:BLACK)
│ │ ├── nil
│ │ └── nil
│ └── (2,2):(c:BLACK)
│ ├── nil
│ └── nil
└── (5,5):(c:RED)
├── (4,4):(c:BLACK)
│ ├── nil
│ └── nil
└── (6,6):(c:BLACK)
├── nil
└── (7,7):(c:RED)
├── nil
└── nil
└── (2,2):(data)
├── (1,1):(data)
│ ├── (0,0):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (5,5):(data)
├── (4,4):(data)
│ ├── (3,3):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (6,6):(data)
├── nil
└── (7,7):(data)
├── nil
└── nil
└── (3,3):(c:BLACK)
├── (1,1):(c:RED)
│ ├── (0,0):(c:BLACK)
│ │ ├── nil
│ │ └── nil
│ └── (2,2):(c:BLACK)
│ ├── nil
│ └── nil
└── (5,5):(c:RED)
├── (4,4):(c:BLACK)
│ ├── nil
│ └── nil
└── (7,7):(c:BLACK)
├── (6,6):(c:RED)
│ ├── nil
│ └── nil
└── (8,8):(c:RED)
├── nil
└── nil
└── (2,2):(data)
├── (1,1):(data)
│ ├── (0,0):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (5,5):(data)
├── (4,4):(data)
│ ├── (3,3):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (6,6):(data)
├── nil
└── (7,7):(data)
├── nil
└── (8,8):(data)
├── nil
└── nil
└── (3,3):(c:BLACK)
├── (1,1):(c:BLACK)
│ ├── (0,0):(c:BLACK)
│ │ ├── nil
│ │ └── nil
│ └── (2,2):(c:BLACK)
│ ├── nil
│ └── nil
└── (5,5):(c:BLACK)
├── (4,4):(c:BLACK)
│ ├── nil
│ └── nil
└── (7,7):(c:RED)
├── (6,6):(c:BLACK)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
└── (2,2):(data)
├── (1,1):(data)
│ ├── (0,0):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (5,5):(data)
├── (4,4):(data)
│ ├── (3,3):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (8,8):(data)
├── (7,7):(data)
│ ├── (6,6):(data)
│ │ ├── nil
│ │ └── nil
│ └── nil
└── (9,9):(data)
├── nil
└── nil
└── (3,3):(c:BLACK)
├── (1,1):(c:BLACK)
│ ├── (0,0):(c:BLACK)
│ │ ├── nil
│ │ └── nil
│ └── (2,2):(c:BLACK)
│ ├── nil
│ └── nil
└── (5,5):(c:BLACK)
├── (4,4):(c:BLACK)
│ ├── nil
│ └── nil
└── (7,7):(c:RED)
├── (6,6):(c:BLACK)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
true
0 1
1 2
2 3
3 4
4 5
0 1
1 2
2 3
3 4
4 5
0 1
1 2
2 3
3 4
4 5
valid true 9
valid true 8
valid true 7
valid true 6
valid true 5
valid true 4
valid true 3
valid true 2
valid true 1
valid true 0
valid true 9
valid true 8
valid true 7
valid true 6
valid true 5
valid true 4
valid true 3
valid true 2
valid true 1
valid true 0
valid true 9
└── (5,5):(c:BLACK)
├── (3,3):(c:BLACK)
│ ├── (1,1):(c:BLACK)
│ │ ├── nil
│ │ └── (2,2):(c:RED)
│ │ ├── nil
│ │ └── nil
│ └── (4,4):(c:BLACK)
│ ├── nil
│ └── nil
└── (7,7):(c:BLACK)
├── (6,6):(c:BLACK)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
valid true 8
└── (5,5):(c:BLACK)
├── (3,3):(c:BLACK)
│ ├── (2,2):(c:BLACK)
│ │ ├── nil
│ │ └── nil
│ └── (4,4):(c:BLACK)
│ ├── nil
│ └── nil
└── (7,7):(c:BLACK)
├── (6,6):(c:BLACK)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
valid true 7
└── (5,5):(c:BLACK)
├── (3,3):(c:BLACK)
│ ├── nil
│ └── (4,4):(c:RED)
│ ├── nil
│ └── nil
└── (7,7):(c:RED)
├── (6,6):(c:BLACK)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
valid true 6
└── (5,5):(c:BLACK)
├── (4,4):(c:BLACK)
│ ├── nil
│ └── nil
└── (7,7):(c:RED)
├── (6,6):(c:BLACK)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
valid true 5
└── (7,7):(c:BLACK)
├── (5,5):(c:BLACK)
│ ├── nil
│ └── (6,6):(c:RED)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
valid true 4
└── (7,7):(c:BLACK)
├── (6,6):(c:BLACK)
│ ├── nil
│ └── nil
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
valid true 3
└── (8,8):(c:BLACK)
├── (7,7):(c:BLACK)
│ ├── nil
│ └── nil
└── (9,9):(c:BLACK)
├── nil
└── nil
valid true 2
└── (8,8):(c:BLACK)
├── nil
└── (9,9):(c:RED)
├── nil
└── nil
valid true 1
└── (9,9):(c:BLACK)
├── nil
└── nil
valid true 0
empty tree
 
trees! 64
treest! 64
treesr! 64
average op 5360
t insert time 1.591332069
true
height 20
weight (422175,577824)
average op 5802
t1 insert time 1.722261029
true
height 26
weight (670296,329703)
average op 4334
t2 insert time 1.29017692
true
height 20
weight (422175,577824)
average op 4717
t3 insert time 1.401961993
true
height 23
weight (612266,387733)
counter 31780
average op 3397
bt insert time 1.014782747
true size 1000000
t find time 1.166620321
true
sum 1783293664
t1 find time 1.72787108
true
sum 1783293664
t2 find time 1.192506122
true
sum 1783293664
t3 find time 1.376387028
true
sum 1783293664
bt find time 1.042876439
true
sum 1783293664
average op 375
t iter time 0.110657066
true
sum 1783293664
average op 391
t1 iter time 0.115330211
true
sum 1783293664
average op 376
t2 iter time 0.110930662
true
sum 1783293664
average op 388
t3 iter time 0.114391853
true
sum 1783293664
average op 69
bt iter time 0.020547062
true
sum 1783293664
t delete time 1.806868028
true size 0
empty tree
t1 delete time 1.734909562
true size 0
empty tree
t2 delete time 1.47146888
true size 0
empty tree
t3 delete time 3.051412126
true size 0
counter 19
empty tree
bt delete time 1.052457722
true
[/code]
t, t1, t2, t3 are various binary trees, while bt is a B-tree.
Look how superior the B-tree is, because of cache-friendliness :)
 
 
 
 
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 09:41PM


> Not really unforgiving, we spend significant resources on tuning
> the cache replacement algorithms and cache prefetchers to detect
> and support pointer chasing applications.
+1
 
 
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 09:50PM

> as the set and map trees are usually red-black-trees there's some
> kind of binary-lookup inside the nodes (up to 4 descendants), so the
> memory access-patterns become not so random.
 
Your test is wrong: too many iterations over data that fits in cache.
Look at how it behaves when the data fits in cache:
~/.../bmaxa_data/examples >>> ./vecvsset
set: 45.2584
vec: 48.2688
So in this case set is even faster than vec, but this is an illusion.
Look how things look when the data does not fit in cache:
/.../bmaxa_data/examples >>> g++ -O2 vecvsset.cpp -march=native -o vecvsset
~/.../bmaxa_data/examples >>> ./vecvsset
set: 226.181
vec: 56.9002
Holy moly, vec is 4 times faster than set ;)
Do you understand now what I am talking about?
 
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 05:03PM


> When I do by hand, set some atomic flag (that thread checks if it
> should stop) and then join then I can measure, can mock that flag
> checking function. It is lot easier?
I think that using flags is more practical, as it is less error-prone ;)
 
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 05:06PM


> Have you ever checked how pthread_cleanup_push and pthead_cleanup_pop
> works ? These are macros and generate something with do / while inter-
> nally. What kind of bastard does design such ugly hacks ?
That is a POSIX thing. Very old.
 
 
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 05:08PM

> }
> }
 
> And I hardly doubt that this works on most Unices.
POSIX threads is a C API; when they designed it, no one thought
about exceptions and destructors ;)
Hell, I don't use exceptions because of that. More hassle
than they are worth ;)
 
scott@slp53.sl.home (Scott Lurndal): May 28 05:51PM

>> works ? These are macros and generate something with do / while inter-
>> nally. What kind of bastard does design such ugly hacks ?
>That is POSIX thing. Very old.
 
Please don't feed the troll.
Bonita Montero <Bonita.Montero@gmail.com>: May 28 08:00PM +0200

>>> nally. What kind of bastard does design such ugly hacks ?
>> That is POSIX thing. Very old.
 
> Please don't feed the troll.
 
Have I taken away one of your toys ?
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: May 28 07:54PM +0100

On Fri, 28 May 2021 17:30:52 +0200
> > You are making this quite hard work. For new threads (including the
> > main thread), cancellation is enabled by default, ...
 
> Ok, that's what I didn't consider.
 
And in case any readers think there is a race between the parent
thread's call to pthread_cancel and the worker thread's call to
pthread_setcancelstate as its first action, I should mention that there
isn't: this is because pthread_setcancelstate is not a cancellation
point. POSIX's thread cancellation, as extended for C++, is pretty well
thought through in my opinion.
Branimir Maksimovic <branimir.maksimovic@gmail.com>: May 28 09:42PM

>>> nally. What kind of bastard does design such ugly hacks ?
>>That is POSIX thing. Very old.
 
> Please don't feed the troll.
 
I believe she spends significant time showing her code here, so she is not
really a troll ;)
 
 
Real Troll <real.troll@trolls.com>: May 28 09:30PM +0100

> * The final pieces of |<chrono>|: new clocks, leap seconds, time
> zones, and parsing
> * Implementation of |<format>| for text formatting
 
<https://docs.microsoft.com/en-us/visualstudio/releases/2019/release-notes#16.10.0>
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 28 12:09PM -0700

> allows implementation-defined extended integer types, both signed
> and unsigned. For example, a compiler might provide signed
> and unsigned 128-bit integer types."
 
Right. Extended integer types are a nice idea, but I've never seen a
compiler that actually implements them.
 
> implementations from making their custom integer types that are not
> extended integer types. And lack of that turns the above into useless
> woo, as it says nothing as result. Not a thing. Waste of space.
 
gcc has __int128 *as an extension* (not supported for 32-bit targets).
The standard permits extensions:
 
A conforming implementation may have extensions (including
additional library functions), provided they do not alter the
behavior of any strictly conforming program.
 
I'm not sure how (or why!) you'd forbid extensions that happen to act
almost like integer types.
 
gcc doesn't support 128-bit integer constants. Also, making __int128 an
extended integer type would require intmax_t to be 128 bits, which would
cause serious problems with ABIs (there are standard library functions
that take arguments of type intmax_t). The alternative would have been
not to support 128-bit integers at all.
 
I'd like to see full support for 128-bit integers, but gcc's __int128 is
IMHO better than nothing (though to be honest I've never used it except
in small test programs).
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
scott@slp53.sl.home (Scott Lurndal): May 28 08:26PM


>I'd like to see full support for 128-bit integers, but gcc's __int128 is
>IMHO better than nothing (though to be honest I've never used it except
>in small test programs).
 
We use it when modeling 128-bit bus structures. We don't use
arithmetic operations on 128-bit data but do
use masking and shifting.
Udo Steinbach <trashcan@udoline.de>: May 28 10:01PM +0200

On 2021-05-28 at 04:02, Bonita Montero wrote:
> when closing a file-handle, file data is usually written back asynchronously
 
That's wrong for Windows NT, where I tested this, and I'd guess up to
today's Windows. Windows 95 did lose data then (and crash). And what about
formal correctness? CloseHandle() can return an error, so you have to
handle this condition if possible. Programmer's laziness does not stand
above usability.
The main exception that I see is an error condition while handling an
error; in C++, a new error on destruction while the stack unwinds due to
an error. Similarly with threads, I think: instead of the system, the
thread itself may return an error in response to a stop signal.
My rule: close, destroy, stop manually whenever possible.
--
Fahrradverkehr in Deutschland: http://radwege.udoline.de/
GPG: A245 F153 0636 6E34 E2F3 E1EB 817A B14D 3E7E 482E