Tuesday, November 5, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

Siri Cruise <chine.bleu@yahoo.com>: Nov 04 07:55PM -0700

In article <qppva0$cbj$1@news.albasani.net>,
 
> What would really speak against numeric_limits<type> including
> a flag for whether a type has two's complement and maybe another flag that
> says that a type has a two's complement wrap-around?
 
#define ONESCOMPLEMENT (-1==~1)
#define TWOSCOMPLEMENT (-1==~0)
 
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Nov 05 05:52AM +0100

On 05.11.2019 03:55, Siri Cruise wrote:
>> says that a type has a two's complement wrap-around?
 
> #define ONESCOMPLEMENT (-1==~1)
> #define TWOSCOMPLEMENT (-1==~0)
 
Nice idea.
 
But please use `constexpr` or `const` declarations, not macros.
 
Using macros conveys the idea that macros are a reasonable solution to
the problem of naming a compile time constant.
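
For example, a minimal sketch of the same tests as named constants:

constexpr bool ones_complement = (-1 == ~1);
constexpr bool twos_complement = (-1 == ~0);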
 
 
- Alf
Ian Collins <ian-news@hotmail.com>: Nov 05 06:17PM +1300

On 05/11/2019 17:52, Alf P. Steinbach wrote:
 
> But please use `constexpr` or `const` declarations, not macros.
 
> Using macros conveys the idea that macros are a reasonable solution to
> the problem of naming a compile time constant.
 
:)
 
--
Ian.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 05 06:56AM +0100

> #define TWOSCOMPLEMENT (-1==~0)
 
Or ...
 
 
#include <limits>
 
using namespace std;
 
constexpr bool hasTwosComplementAndWrapAround()
{
    return (numeric_limits<int>::max() + 1) == numeric_limits<int>::min();
}
 
constexpr bool hasSignedShift()
{
    return (-2 >> 2) == -1;
}
Christian Gollwitzer <auriocus@gmx.de>: Nov 05 08:24AM +0100

Am 05.11.19 um 06:56 schrieb Bonita Montero:
 
>     return (numeric_limits<int>::max() + 1) == numeric_limits<int>::min();
 
Couldn't this in theory format your hard drive (although in practice it
never will)? Or more realistically, always return a false negative from
a zealous optimizer?
 
Christian
Bonita Montero <Bonita.Montero@gmail.com>: Nov 05 08:35AM +0100

>> numeric_limits<int>::min();
 
> Or more realistically, always return a false negative from
> a zealous optimizer?
 
When it returns a false-negative, the function does what
it should: it says that you can't rely on the wrap-around.
"Öö Tiib" <ootiib@hot.ee>: Nov 04 11:36PM -0800

On Tuesday, 5 November 2019 09:24:17 UTC+2, Christian Gollwitzer wrote:
 
> Couldn't this in theory format your hard drive (although in practice it
> never will)? Or more realistically, always return a false negative from
> a zealous optimizer?
 
Yes, when numeric_limits<int>::is_modulo is false then the
numeric_limits<int>::max() + 1 is undefined behavior.
Judging by the progress of P0907, it seems that it will continue
being undefined behavior after C++20 too.
David Brown <david.brown@hesbynett.no>: Nov 05 10:31AM +0100

On 04/11/2019 23:39, Öö Tiib wrote:
>> as on most platforms?
 
> Language lawyers at isocpp.org seem quite uncertain what they want:
> <https://groups.google.com/a/isocpp.org/forum/#!msg/std-proposals/MZzCyAL1qRo/p493_UdUAgAJ>
 
I think there are a few points that can be taken from the discussions on
this topic in various places.
 
1. Two's complement representation, with no padding bits, should be
standardised. There simply is no need for anything else in the C++
world - there are no implementations of C++ with any other
representation, and no likelihood of them being used in the future.
Picking the one fixed representation simplifies a few things in the
standard.
 
2. Some people get very worked up about signed integer overflow
behaviour. There are also several widely believed, but incorrect, myths
about it - such as ideas that it used to be defined behaviour, or that
it always wraps on x86 systems, or that defining wrapping behaviour is
always a good idea, or that the undefined nature of signed integer
overflow is just a hangover from when different representations were
allowed. None of these is true.
 
3. Some code is written assuming that signed integers wrap on overflow,
and the code is incorrect if that is not the case for the implementation
used.
 
4. Some code would be easier to write in a correct and efficient way if
signed integer overflow wrapped.
 
5. Some code is more efficient if signed integer overflow is undefined
behaviour (assuming a suitable optimising compiler).
 
6. Some errors are significantly easier to detect (statically, or using
run-time tools like sanitizers) when signed integer overflow is
undefined behaviour.
 
7. Sometimes it would also be useful for unsigned integer overflow to
be undefined behaviour, for efficiency reasons or for detecting
errors in the code.
 
 
To my mind, this all says C++ should support both models - indeed, it
should support several.
 
I would propose a set of template classes, such as:
 
std::overflow_wrap<T>
std::overflow_undef<T>
std::overflow_saturate<T>
std::overflow_throw<T>
 
These could be used with any integer types, signed and unsigned. The
standard unsigned types are equivalent to the overflow_wrap types. The
standard signed types are overflow_undef. Implementations would be free
to implement overflow_undef (and therefore plain signed integer types)
in any way they want, with the expectation that it is efficient. But
compilers should document clearly if they consider plain ints to be
overflow_undef. And if they say they are overflow_wrap, then they must
be entirely consistent about it. (If you look in MSVC's documentation,
you'll get the impression that it wraps on overflow - except when it
optimises on the assumption that overflow doesn't occur.)
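
A minimal sketch of what one such wrapper might look like (an
illustration only - the exact members and the unsigned-arithmetic
trick are assumptions, not proposed wording):

#include <type_traits>

// Possible overflow_wrap<T>: do the arithmetic in the corresponding
// unsigned type, which is defined to wrap, then convert back. (The
// unsigned-to-signed conversion is implementation-defined before
// C++20; P0907 makes it modular.)
template <class T>
class overflow_wrap {
    using U = std::make_unsigned_t<T>;
    T value;
public:
    constexpr overflow_wrap(T v = 0) : value(v) {}
    constexpr T get() const { return value; }
    friend constexpr overflow_wrap operator+(overflow_wrap a, overflow_wrap b) {
        return static_cast<T>(static_cast<U>(a.value) + static_cast<U>(b.value));
    }
};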
 
I'd also be in favour of having a standardised pragma to change the
behaviour of plain int types, as an aid to dealing with existing code.
 
When people can't agree, and when every option has its advantages and
disadvantages, the only rational solution is to give people an explicit
choice.
David Brown <david.brown@hesbynett.no>: Nov 05 10:32AM +0100

On 04/11/2019 23:55, Bo Persson wrote:
>> says that a type has a two's complement wrap-around?
 
> Possibly that in C++20 all signed integers *are* two's complement.  :-)
 
> https://wg21.link/P0907
 
Two's complement representation, but /not/ wrapping.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 05 11:23AM +0100

> 3. Some code is written assuming that signed integers wrap on overflow,
> and the code is incorrect if that is not the case for the implementation
> used.
 
On which architectures today?
 
> 5. Some code is more efficient if signed integer overflow is undefined
> behaviour (assuming a suitable optimising compiler).
 
Which optimization?
Optimizing the following away:
 
int i;
...
if( i + 1 < i )
...
 
doesn't make sense.
James Kuyper <jameskuyper@alumni.caltech.edu>: Nov 05 09:19AM -0500

On 11/5/19 2:36 AM, Öö Tiib wrote:
 
>> Couldn't this in theory format your hard drive (although in practice it
>> never will)? Or more realistically, always return a false negative from
>> a zealous optimizer?
 
The member is_modulo is supposed to be true if signed
overflow is defined by the implementation to result in wrap-around as an
extension to C++ (21.3.4.1p62). Only if is_modulo is false is the
behavior defined by neither the standard nor the implementation. If
undefined behavior causes hasTwosComplementAndWrapAround() to return
false, that means that it does NOT wrap around, so the function is
returning precisely the result that it's supposed to return.
 
The important problem is the possibility of a false positive result:
hasTwosComplementAndWrapAround() returning true when signed overflow
does NOT wrap around, which is entirely possible, given that the
alternative to wrapping around is undefined behavior.
 
 
> Yes, when numeric_limits<int>::is_modulo is false then the
> numeric_limits<int>::max() + 1 is undefined behavior.
 
Note, in particular, that if numeric_limits<T>::is_modulo is false,
then signed overflow might, for instance, always give a result of
numeric_limits<T>::min(). That would cause Bonita's implementation of
hasTwosComplementAndWrapAround() to return true, despite the fact that
overflow does NOT wrap around.
David Brown <david.brown@hesbynett.no>: Nov 05 03:43PM +0100

On 05/11/2019 11:23, Bonita Montero wrote:
>> and the code is incorrect if that is not the case for the implementation
>> used.
 
> On which architectures today?
 
All architectures. Some (few) compilers specifically document that they
have wrapping behaviour on overflow - most don't, even if they /usually/
wrap.
 
Most hardware these days has wrapping on signed integer overflow, unless
you specifically choose instructions with different behaviour (like
trapping on overflow or saturation). So you often get it as a
side-effect of the compiler generating efficient code. But that does
not mean the language supports it, or the compiler supports it.
Remember, "undefined behaviour" includes "sometimes giving wrapped results".
 
Consider this code:
 
int foo(int x) {
    return (x * 20) / 5;
}
 
What is foo(214748367) (assuming 32-bit ints) ?
 
With x = 214748367, x * 20 is 4294967340, equal to 0x10000002c. So with
wrapping signed arithmetic, x * 20 is 44. Dividing by 5 gives 8
(rounding down).
 
 
On the other hand, it's easy to think that "foo" can be reduced to "x *
4". As long as there are no overflows in calculating "x * 20", that is
always true. So an optimising compiler that knows signed integer
overflow never happens will generate "x * 4" code for foo - it's just
an "lea" or a "shl" instruction on x86, and much more efficient than
doing a multiply by 20 and then a divide by 5.
 
Applying this, foo(214748367) is 858993468.
 
These are two completely different answers. One is with wrapping
overflow semantics, the other is with an efficient implementation but
relies on optimising from the assumption that undefined behaviour,
signed integer overflow, never occurs.
 
 
Let's look in practice.
 
Paste this into godbolt.org, and look at it for different compilers
(with optimisations enabled).
 
const int testval = 214748367;
 
int foo(int x) {
    return (x * 20) / 5;
}
 
int foo1(void) {
    return foo(testval);
}
 
MSVC generates "shl eax, 2" for "foo" - efficient code, for undefined
behaviour in signed integer overflow. "foo1" is "mov eax, 8" - when
doing compile-time calculation for the value, it uses wrapping semantics
and gives a different value from the result of using the compiled function.
 
There is nothing wrong with that, of course - it is perfectly good C and
C++ behaviour. But it would be unexpected for people who think signed
integers wrap.
 
 
gcc gives an "lea" instruction for "foo", which is effectively the same
as MSVC. For "foo1", it returns 858993468, which is the same value as
it gives running the instructions of foo, but does not match wrapping
behaviour.
 
gcc also has the option "-fwrapv" to give full wrapping semantics.
"foo" is then handled by a full multiply by 20 followed by a divide by
5, and "foo1" returns 8.
 
Again, this is all fine. If you want wrapping signed semantics, you can
ask for it and get it. If not, you get efficient code.
 
 
> if( i + 1 < i )
>    ...
 
> doesn't make sense.
 
Sure it makes sense. It's basic maths - if you add 1 to a number, with
normal arithmetic, you can't make it smaller. If you specifically have
a different arithmetic system, such as a modulo system, then it's a
different matter. But C and C++ don't have that for signed integers
(though any compiler is free to give you those semantics if it wants).
Bonita Montero <Bonita.Montero@gmail.com>: Nov 05 04:58PM +0100


>> doesn't make sense.
 
> Sure it makes sense. It's basic maths - if you add 1 to a number, with
> normal arithmetic, you can't make it smaller.
 
With the same reasoning you could say that unsigneds might never
wrap; but in fact they're specified to wrap. And that's not how
computers work. If someone writes the above intentionally,
there's only _one_ reason for it: he wants to check for wrap-around.
So this is an optimization which no one asked for.
"Öö Tiib" <ootiib@hot.ee>: Nov 05 09:02AM -0800

On Tuesday, 5 November 2019 11:31:39 UTC+2, David Brown wrote:
 
> When people can't agree, and when every option has its advantages and
> disadvantages, the only rational solution is to give people an explicit
> choice.
 
There are a lot of aspects that the majority of people involved seem to
agree on.
 
* in the majority of cases overflow (even unsigned) is a programming error
* trap (overflow that signals or throws) helps to find programming
 errors most reliably and quickly
* wrap (overflow with modular arithmetic) can be used intentionally
 and efficiently in some algorithms
* snap (overflow that results in a saturating NaN) is good when a
 damaged, disconnected or short-circuited (temperature?) sensor
 should not turn our device (airplane?) into a brick
* a compiler that assumes overflow does not occur (because some
 external logic ensures it) may allow noteworthy optimizations
* efficiency is of low importance for a lot of code
* efficiency of C++ is one of its design goals
* it is good when the programmer's intention can be indicated in code
 
So when you think about it, it is indeed a large puzzle how to resolve
this elegantly, and it continues to be bad to leave it all up to
implementations to decide and then have each developer struggle
with it on his own with "this standard imposes no requirements".
 
I myself would like most current operators to trap on overflow
(by raising SIGFPE or throwing std::overflow_error) both for
signed and unsigned.
For wrap, snap and "compiler may assume that the operation
does not overflow" cases I would like new operators.
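
Roughly, a trapping operation could behave like this sketch (an
illustration only; __builtin_add_overflow is a GCC/Clang extension,
not standard C++):

#include <csignal>

// Trapping add: raise SIGFPE on overflow, as proposed above. Every
// arithmetic operation would need such a check wherever the hardware
// does not trap by itself.
int trapping_add(int a, int b)
{
    int result;
    if (__builtin_add_overflow(a, b, &result))
        std::raise(SIGFPE);
    return result;
}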
scott@slp53.sl.home (Scott Lurndal): Nov 05 05:15PM


>* in the majority of cases overflow (even unsigned) is a programming error
>* trap (overflow that signals or throws) helps to find programming
> errors most reliably and quickly
 
While true, see efficiency, below.
 
>* a compiler that assumes overflow does not occur (because some
> external logic ensures it) may allow noteworthy optimizations
>* efficiency is of low importance for a lot of code
 
With this, I cannot agree.
 
 
>I myself would like most current operators to trap on overflow
>(by raising SIGFPE or throwing std::overflow_error) both for
>signed and unsigned.
 
Integer? Float? Both? Isn't that a function of the hardware? Or do you expect to
generate a conditional branch to inject a signal
after every arithmetic instruction (or sequence thereof)
that could overflow?
 
Note that ARM supports a cumulative overflow (floating point only)
that can be used to check if one of several consecutive operations
overflowed without needing to check each one.
 
Even someone who believes that "efficiency is of low importance"
wouldn't be willing to accept the performance degradation caused
by such checking code.
 
 
"Öö Tiib" <ootiib@hot.ee>: Nov 05 10:19AM -0800

On Tuesday, 5 November 2019 19:15:56 UTC+2, Scott Lurndal wrote:
> > external logic ensures it) may allow noteworthy optimizations
> >* efficiency is of low importance for a lot of code
 
> With this, I cannot agree.
 
Can you elaborate slightly more? Perhaps you disagreed before
reading "for a lot of code".
 
> >(by raising SIGFPE or throwing std::overflow_error) both for
> >signed and unsigned.
 
> Integer? Float? Both? Isn't that a function of the hardware?
 
I meant unsigned and signed integers. IEEE floating point has
exceptions disabled by default, so I would leave it like that.
I see no actual dichotomy between hardware and software.
 
> generate a conditional branch to inject a signal
> after every arithmetic instruction (or sequence thereof)
> that could overflow?
 
Where the hardware does not help, software has to emulate it.
 
> Note that ARM supports a cumulative overflow (floating point only)
> that can be used to check if one of several consecutive operations
> overflowed without needing to check each one.
 
Yes but I meant integers.
 
> Even someone who believes that "efficiency is of low importance"
> wouldn't be willing to accept the performance degradation caused
> by such checking code.
 
I meant (and tried to say) that "for a lot of code efficiency is
of low importance". It is about 90% of most code bases that
runs in less than 1% of the total run-time. That is a lot of code. Also
it is often less well tested, and defects in it sometimes
manifest as hard-to-reproduce instability.
 
What is worth optimizing, or where overflow is not an error, I
would like to be clearly indicated with syntax:

Manfred <noname@invalid.add>: Nov 05 07:31PM +0100

On 11/5/19 6:02 PM, Öö Tiib wrote:
 
>> When people can't agree, and when every option has its advantages and
>> disadvantages, the only rational solution is to give people an explicit
>> choice.
 
Except when giving a choice complicates the matter more than the
benefit it brings.
From this perspective I like compiler flags more than adding new source
code constructs. Maybe because I think that signed wrapping, unlike
unsigned, is of no practical use.
 
 
> There are a lot of aspects that the majority of people involved seem to
> agree on.
 
I, for one, find some of them questionable.
 
 
> * in the majority of cases overflow (even unsigned) is a programming error
That's my first - unsigned wrapping /can/ be useful in some contexts. I
can't say if it is about the majority of cases, but it is enough to keep
it in place.
 
> * trap (overflow that signals or throws) helps to find programming
> errors most reliably and quickly
I would say it is a better debugging aid than nothing, but as
reliability goes it can only catch what is executed, and it is often
impossible to test all possible combinations at runtime. On the other
hand, it would degrade efficiency to a possibly unacceptable level (as
Scott pointed out).
 
> * snap (overflow that results in a saturating NaN) is good when a
> damaged, disconnected or short-circuited (temperature?) sensor
> should not turn our device (airplane?) into a brick
I am not sure about this.
Floating point arithmetic does something of the kind (NaN propagates
through), but I think that if it is about a sensor failure, and
especially if it is about safety, then the hardware itself or the device
driver should handle this properly, i.e. with specific error conditions,
rather than the language.
 
> * a compiler that assumes overflow does not occur (because some
> external logic ensures it) may allow noteworthy optimizations
True.
 
> * efficiency is of low importance for a lot of code
Disagree. It is not always important, but it is important enough to be
one of the main reasons to choose C++, so it should be of primary
importance for the language.
 
> * efficiency of C++ is one of its design goals
> * it is good when the programmer's intention can be indicated in code
True, as long as verbosity is kept at a reasonable level.
 
 
> I myself would like most current operators to trap on overflow
> (by raising SIGFPE or throwing std::overflow_error) both for
> signed and unsigned.
I agree with Scott here.
 
> For wrap, snap and "compiler may assume that the operation
> does not overflow" cases I would like new operators.
 
See above, my impression is that this would result in excessive bloat of
the language. Better use compiler flags, or at most #pragmas for
sections of code.
Paavo Helde <myfirstname@osa.pri.ee>: Nov 05 09:45PM +0200

On 5.11.2019 17:58, Bonita Montero wrote:
 
> With the same reasoning you could say that unsigneds might never
> wrap; but in fact they're specified to wrap.
 
In retrospect, this (wrapping unsigneds) looks like a major design mistake.
 
IMO, wrapping integers (signed or unsigned) are an example of
"optimization which nobody asked for", and they are there basically only
because the hardware happened to support such operations.
"Öö Tiib" <ootiib@hot.ee>: Nov 05 11:55AM -0800

On Tuesday, 5 November 2019 20:31:45 UTC+2, Manfred wrote:
> That's my first - unsigned wrapping /can/ be useful in some contexts. I
> can't say if it is about the majority of cases, but it is enough to keep
> it in place.
 
Perhaps there is some difference in the meaning of the word "majority".
For me "majority" means "more than 50% of all", where "all" is
all such operations in the code base. You seem to be expressing
disagreement with something else.
 
> impossible to test all possible combinations at runtime. On the other
> hand, it would degrade efficiency to a possibly unacceptable level (as
> Scott pointed out).
 
To disagree you need to say what helps to find programming errors
involving arithmetic overflow more reliably and quickly.
Also, regarding efficiency, it would move compiler optimization
developers in the right direction.
Instead of reasoning that it is up to the programmer to ensure that
there is no overflow (so that if there is, this code can be erased),
they would try to prove themselves that overflow is impossible, so
that they can erase the trapping checks instead of generating them.
 
> especially if it is about safety, then the hardware itself or the device
> driver should handle this properly, i.e. with specific error conditions,
> rather than the language.
 
A physically damaged, disconnected or short-circuited temperature sensor
can in no way repair or reconnect itself. So the software of a device that
has such a sensor has to work with NaNs for the currently measured
temperature, or the device has to turn into a brick. There is nothing
else to do.
Hardware like that of the T-1000 from Terminator 2 has not been
invented yet.

 
> See above, my impression is that this would result in excessive bloat of
> the language. Better use compiler flags, or at most #pragmas for
> sections of code.
 
My reasoning is that operators (like say a +% b for wrapping add)
are most terse and can be most flexibly mixed with each other.
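
As an illustration (C++ has no such operator, so this sketch spells
a +% b as a function, with a made-up name):

#include <type_traits>

// Hypothetical wrapping add, i.e. what "a +% b" could mean: compute
// in the unsigned counterpart, which wraps by definition. The
// conversion back to T is implementation-defined before C++20 and
// modular from C++20 on.
template <class T>
constexpr T wrap_add(T a, T b)
{
    using U = std::make_unsigned_t<T>;
    return static_cast<T>(static_cast<U>(a) + static_cast<U>(b));
}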
David Brown <david.brown@hesbynett.no>: Nov 05 09:48PM +0100

On 05/11/2019 16:58, Bonita Montero wrote:
> > normal arithmetic, you can't make it smaller.
 
> With the same reasoning you could say that unsigneds might never
> wrap; but in fact they're specified to wrap.
 
Unsigneds are specified to wrap, yes - but almost all occurrences of
unsigned overflow are bugs in the code.
 
/Very/ occasionally, you want wrapping semantics for integer arithmetic.
It thus made sense for C (and C++) to provide a way to get wrapping
when you need it - and it was easy to specify it for unsigned types, but
would be unduly inefficient for signed types.
 
It makes no sense that if you have 4294967295 apples in a pile, add an
apple, you get 0 apples. It makes no sense that if you have 2147483647
apples in a pile, add an apple, you get -2147483648 apples.
 
 
> And that's not how
> computers work.
 
That is utterly irrelevant. Computers work with electrical signals, and
at a different level, with bits. That has no bearing on what makes
sense in a computer language or a program. The wrapping and the two's
complement format is simply the cheapest and fastest way to make the
hardware, nothing more than that.
 
> If someone writes the above intentionally,
> there's only _one_ reason for it: he wants to check for wrap-around.
> So this is an optimization which no one asked for.
 
The only reason to write code like that manually is through a
misunderstanding of the language.
 
But weird and irrational code can be generated as a result of inlining,
macros, templates, constant propagation, etc. And you /want/ the
compiler to optimise these and remove code that could not possibly run.
 
 
Oh, and please tell me you read the rest of my post and understood it -
both how it demonstrated an optimisation based on undefined signed
overflows, and how compilers do not treat signed integers as wrapping.
 
(While it would be nice to get an answer there, I know you won't give it
- I know you don't understand basic human qualities like politeness,
and I know you will do anything to avoid admitting that you were wrong
and prefer to remain ignorant. I wrote my posts hoping other people
will benefit from them too. But if you prove /me/ wrong by replying
properly to my posts and questions, I would be much obliged.)
Manfred <noname@add.invalid>: Nov 05 10:48PM +0100

On 11/5/2019 8:55 PM, Öö Tiib wrote:
> For me "majority" means "more than 50% of all". Also "all" is
> all such operations in code base. You seem to express disagreement
> with something else.
My point is about what to do with the fact that many times overflow is a
programming error: even if this is true I think that unsigned overflow
should have defined behavior (and wrap) rather than being handled as an
error by the compiler.
 
>> rather than the language.
 
> A physically damaged, disconnected or short-circuited temperature sensor
> can in no way repair or reconnect itself.
Undoubtedly, but that's not what I wrote.
 
> So the software of a device that
> has such a sensor has to work with NaNs for the currently measured
> temperature, or the device has to turn into a brick. There is nothing
> else to do.
> Hardware like that of the T-1000 from Terminator 2 has not been
> invented yet.
No need to call in Schwarzenegger for help.
My point is that rather than using NaNs, the hardware or driver should
raise specific error signals (like some error code on the control I/O
port, or at the API level).
David Brown <david.brown@hesbynett.no>: Nov 05 11:00PM +0100

On 05/11/2019 18:02, Öö Tiib wrote:
 
> There are a lot of aspects that the majority of people involved seem to
> agree on.
 
> * in the majority of cases overflow (even unsigned) is a programming error
 
Agreed.
 
> * trap (overflow that signals or throws) helps to find programming
> errors most reliably and quickly
 
Agreed (where "trap" could mean any kind of notification, exception,
error log, etc.). But this is something you might only want during
debugging - it has a significant efficiency cost.
 
> * wrap (overflow with modular arithmetic) can be used intentionally
> and efficiently in some algorithms
 
It is occasionally, but rarely, useful.
 
> * snap (overflow that results in a saturating NaN) is good when a
> damaged, disconnected or short-circuited (temperature?) sensor
> should not turn our device (airplane?) into a brick
 
Saturation in general can be useful.
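
For instance, a saturating add might look like this sketch (not a
standard facility; it assumes long long is wider than int):

#include <limits>

// Saturating add: widen to long long, where the sum of two ints
// cannot overflow, then clamp to int's range instead of wrapping.
constexpr int sat_add(int a, int b)
{
    long long s = static_cast<long long>(a) + b;
    if (s > std::numeric_limits<int>::max()) return std::numeric_limits<int>::max();
    if (s < std::numeric_limits<int>::min()) return std::numeric_limits<int>::min();
    return static_cast<int>(s);
}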
 
> * a compiler that assumes overflow does not occur (because some
> external logic ensures it) may allow noteworthy optimizations
 
Agreed.
 
> * efficiency is of low importance for a lot of code
 
Agreed.
 
> * efficiency of C++ is one of its design goals
 
Agreed.
 
> * it is good when the programmer's intention can be indicated in code
 
Agreed.
 
 
A couple of other overflow options that I didn't mention before are:
 
* NaN (signalling or quiet), so that you can do long calculations and
then check at the end if it worked.
 
* Unspecified value - the overflow will give a valid int (or whatever
type is in use) value, but the compiler gives no information about what
the value might be. This can be useful when you can detect something
has failed, and return (valid, value) pairs. It will allow the compiler
to take shortcuts and simplify calculations but not allow undefined
behaviour to "move" forwards or backwards.
 
 
> I myself would like most current operators to trap on overflow
> (by raising SIGFPE or throwing std::overflow_error) both for
> signed and unsigned.
 
I like that in debugging or finding problems - with tools like
sanitizers. But I would not want that in normal code. With this kind
of semantics, the compiler can't even simplify "x + 1 - 1" to "x". I
much prefer to be able to write code in whatever way is simplest,
clearest or most maintainable for /me/, knowing that the compiler will
turn it into the most efficient results. I intentionally use an
optimising compiler for C and C++ programming - when efficiency doesn't
matter, I'll program in Python where integers grow to avoid overflow.
 
> For wrap, snap and "compiler may assume that the operation
> does not overflow" cases I would like new operators.
 
I suggested new types for the different behaviour, but of course it is
the operations that have the behaviour, not the types. However, I can't
see a convenient way to specify overflow behaviour on operations - using
types is the best balance between flexibility and legible code.
queequeg@trust.no1 (Queequeg): Nov 05 11:31AM


> Everything gets killed and restarted when the config changes.
 
If you *really* must go down this path and can't change it to be written
properly (as others suggested), I'd suggest:
 
for (;;) pause();
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
queequeg@trust.no1 (Queequeg): Nov 05 11:36AM

> page tables. The OS can always reclaim the physical memory by
> swapping out dirty pages and replacing clean pages owned by the
> SIGSTOP'd process.
 
If there's swap. It's not obvious on an embedded platform.
 
And I agree, raise(SIGSTOP) would be better than looping on pause(). But
will the process be stopped before raise() returns? My test shows that
yes, but I don't know if it's guaranteed or only a coincidence.
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
scott@slp53.sl.home (Scott Lurndal): Nov 05 02:51PM


>And I agree, raise(SIGSTOP) would be better than looping on pause(). But
>will the process be stopped before raise() returns? My test shows that
>yes, but I don't know if it's guaranteed or only a coincidence.
 
Yes, it is guaranteed.
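
A sketch of the resulting pattern (POSIX, not standard C++): raise()
returns only after the signal has been delivered, and SIGSTOP cannot
be blocked, so the process stops on that line.

#include <csignal>
#include <unistd.h>

int main()
{
    std::raise(SIGSTOP);  // process stops here until it gets SIGCONT
    for (;;)              // only reached if the process is continued
        pause();
}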
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
