Friday, November 8, 2019

Digest for comp.lang.c++@googlegroups.com - 22 updates in 6 topics

Bonita Montero <Bonita.Montero@gmail.com>: Nov 08 04:30AM +0100

>> operators? I don't want to write everything on my own. I think it
>> should be done via enable_if ...
 
> std::vector::reserve.
 
I have to do resize for my purpose.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 08 04:32AM +0100

> This might be of interest. It does the copy ctor using
> std::memory_order_relaxed:
 
Good idea ...
... but isn't really important.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 08 10:58AM +0100

So this is the code for integral-xatomics:
 
#include <atomic>
#include <type_traits>
 
template <typename T, typename T2 = typename
          std::enable_if<std::is_integral<T>::value, T>::type>
struct xatomic : public std::atomic<T>
{
    xatomic() = default;
    xatomic( xatomic const &xa );

    using std::atomic<T>::operator =;
    using std::atomic<T>::is_lock_free;
    using std::atomic<T>::store;
    using std::atomic<T>::load;
    using std::atomic<T>::operator T;
    using std::atomic<T>::exchange;
    using std::atomic<T>::compare_exchange_weak;
    using std::atomic<T>::compare_exchange_strong;
    using std::atomic<T>::fetch_add;
    using std::atomic<T>::fetch_sub;
    using std::atomic<T>::operator ++;
    using std::atomic<T>::operator --;
    using std::atomic<T>::operator +=;
    using std::atomic<T>::operator -=;
    using std::atomic<T>::fetch_and;
    using std::atomic<T>::fetch_or;
    using std::atomic<T>::fetch_xor;
    using std::atomic<T>::operator &=;
    using std::atomic<T>::operator |=;
    using std::atomic<T>::operator ^=;
};
 
template<typename T, typename T2>
inline
xatomic<T, T2>::xatomic( xatomic const &xa )
{
    *this = (T)xa;
}
 
xatomic<int> xai;
 
But how can I have a second xatomic for non-integral types?
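
One possible answer (a sketch with made-up names, not from the thread):
instead of a defaulted enable_if parameter, dispatch on std::is_integral
with a bool template parameter and give each case its own partial
specialization, so both variants can coexist:

#include <atomic>
#include <type_traits>

template<typename T, bool = std::is_integral<T>::value>
struct xatomic2;

// integral types: expose the arithmetic/bitwise members
template<typename T>
struct xatomic2<T, true> : std::atomic<T>
{
    xatomic2() = default;
    xatomic2( xatomic2 const &xa ) { *this = (T)xa; }

    using std::atomic<T>::operator =;
    using std::atomic<T>::operator T;
    using std::atomic<T>::fetch_add;   // ... plus the other integral members
};

// non-integral types: only the generic atomic interface
template<typename T>
struct xatomic2<T, false> : std::atomic<T>
{
    xatomic2() = default;
    xatomic2( xatomic2 const &xa ) { *this = (T)xa; }

    using std::atomic<T>::operator =;
    using std::atomic<T>::operator T;
};

xatomic2<int>   xi;   // picks the integral specialization
xatomic2<float> xf;   // picks the non-integral one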
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Nov 08 01:06PM

On 08/11/2019 03:30, Bonita Montero wrote:
>>> should be done via enable_if ...
 
>> std::vector::reserve.
 
> I have to do resize for my purpose.
 
You can use std::vector::reserve in conjunction with std::vector::resize.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into
snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
Bonita Montero <Bonita.Montero@gmail.com>: Nov 08 03:20PM +0100

>> I have to do resize for my purpose.
 
> You can use std::vector::reserve in conjunction with std::vector::resize.
 
There are only stupid responses from you.
When I use std::vector::resize, a move- or copy-constructor call will
be generated, preferring a move-constructor if available, no matter
whether there's a ::reserve() before or not.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Nov 08 03:49PM

On 08/11/2019 14:20, Bonita Montero wrote:
> When I use std::vector::resize, a move- or copy-constructor call will
> be generated, preferring a move-constructor if available, no matter
> whether there's a ::reserve() before or not.
 
I'm stupid? If you use std::vector::reserve() you can then use
std::vector::emplace_back without reallocations being made, you fucktard.
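
For what it's worth, a minimal sketch of that pattern (widget is a
made-up element type): reserve() does one allocation up front, and
emplace_back() then constructs each element in place without any
reallocation or extra copy/move:

#include <vector>

struct widget {                 // made-up element type
    explicit widget(int id) : id(id) {}
    int id;
};

int main()
{
    std::vector<widget> v;
    v.reserve(1000);            // single allocation up front
    for (int i = 0; i < 1000; ++i)
        v.emplace_back(i);      // constructs in place, no reallocation
}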
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into
snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 08 03:19PM -0800

On 11/8/2019 1:58 AM, Bonita Montero wrote:
> {
>     *this = (T)xa;
> }
 
I have to try this, but copying an atomic basically requires a
memory_order. It seems like you are trying to hack it. Btw, be wary of
implied seq_cst ordering. Like:
 
std::atomic<int> a = 123;
std::atomic<int> b = a; // implied seq_cst!
 
http://www.cplusplus.com/reference/atomic/atomic/operator=/
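
For comparison, a sketch (hypothetical name, not from the thread) of a
copy constructor that spells out the ordering explicitly instead of
relying on the implied seq_cst load and store:

#include <atomic>

template<typename T>
struct xatomic_relaxed_copy : std::atomic<T>
{
    xatomic_relaxed_copy() = default;

    // Load the source and initialize the base with relaxed ordering,
    // instead of the seq_cst load/store implied by "*this = (T)xa;".
    xatomic_relaxed_copy( xatomic_relaxed_copy const &xa )
        : std::atomic<T>( xa.load( std::memory_order_relaxed ) )
    {
    }
};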
 
 
 
Robert Wessel <robertwessel2@yahoo.com>: Nov 08 02:38PM -0600

I appear to be missing the point of the distinction between a
three-way comparison with std::strong_ordering and weak_ordering. Or
perhaps more to the point, the purpose of substitutability.
 
Is this an issue of hidden(?) state that prevents objects from being
usable interchangeably? So if you had a rational_number class, and
two instances had values of 1/3 and 2/6, would weak ordering apply (1)
always, or (2) only if the numerator or denominator were visible?
 
Which also brings up the question of what the compiler is intended to
do with that information, and why there's not a
strong_partial_ordering.
"Öö Tiib" <ootiib@hot.ee>: Nov 08 03:12PM -0800

> I appear to be missing the point of the distinction between a
> three-way comparison with std::strong_ordering and weak_ordering. Or
> perhaps more to the point, the purpose of substitutability.
 
When we have a multi-collection of strong_ordering (or strong_equality)
values, then we basically may keep one value and a count instead of a
group of equal values, since any value is substitutable for a value
equal to it. With weak_ordering values we need to keep every value,
since the values may have differences despite comparing equal.
 
 
> usable interchangeably? So if you had a rational_number class, and
> two instances had values of 1/3 and 2/6, weak ordering would apply (1)
> always, or (2) only if the numerator or denominator were visible?
 
The state that makes weakly equal values non-substitutable
must be visible.
 
> Which also brings up the question of what the compiler is intended to
> do with that information, and why there's not a
> strong_partial_ordering.
 
That I don't know. Equal values being substitutable (strong equality)
and the existence of non-comparable values (partial ordering) seem
logically to be orthogonal properties.
 
Partial ordering means possible problems with finding minimums,
maximums or sorting, so it allows generic solutions to detect
and announce that at compile time. I mean things like floats not
being suitable for use as keys of a map.
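
To make the rational_number example concrete, a C++20 sketch
(hypothetical code, not from the thread): comparing by mathematical
value gives std::weak_ordering, since 1/3 and 2/6 compare equal even
though their visible numerator/denominator state differs, so they are
not substitutable:

#include <compare>
#include <cstdint>

struct rational_number
{
    std::int64_t num, den;   // den > 0, not necessarily reduced

    // 1/3 and 2/6 compare equal here although their members differ,
    // so equality does not imply substitutability -> weak_ordering.
    // (Overflow in the cross-multiplication is ignored in this sketch.)
    friend constexpr std::weak_ordering
    operator<=>( rational_number const &a, rational_number const &b )
    {
        return a.num * b.den <=> b.num * a.den;
    }

    friend constexpr bool
    operator==( rational_number const &a, rational_number const &b )
    {
        return ( a <=> b ) == 0;
    }
};

static_assert( rational_number{ 1, 3 } == rational_number{ 2, 6 } );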
aminer68@gmail.com: Nov 08 01:57PM -0800

Hello,
 
 
My Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well version 1.76 is here..
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a parallel implementation of a Conjugate Gradient dense linear system solver that is NUMA-aware and cache-aware and scales very well, and it also contains a parallel implementation of a Conjugate Gradient sparse linear system solver that is cache-aware and scales very well.
 
Sparse linear system solvers are ubiquitous in high performance computing (HPC) and are often the most computationally intensive parts of scientific computing codes. A few of the many applications relying on sparse linear solvers include fusion energy simulation, space weather simulation, climate modeling, environmental modeling, the finite element method, and large-scale reservoir simulations to enhance oil recovery by the oil and gas industry.
 
Conjugate Gradient is known to converge to the exact solution in n steps for a matrix of size n, and was historically first seen as a direct method because of this. However, after a while people figured out that it works really well if you just stop the iteration much earlier - often you will get a very good approximation after much fewer than n steps. In fact, we can analyze how fast Conjugate gradient converges. The end result is that Conjugate gradient is used as an iterative method for large linear systems today.
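
For illustration only, a plain single-threaded sketch (not the library's
code) of the basic Conjugate Gradient iteration for a dense symmetric
positive-definite system Ax = b, stopped early once the residual is small:

#include <vector>
#include <cmath>
#include <cstddef>

// Plain CG for a dense SPD matrix A (row-major, n x n); stops when the
// residual norm drops below tol or after max_iter iterations.
std::vector<double> conjugate_gradient(const std::vector<double>& A,
                                       const std::vector<double>& b,
                                       std::size_t n,
                                       double tol = 1e-10,
                                       std::size_t max_iter = 1000)
{
    auto dot = [n](const std::vector<double>& x, const std::vector<double>& y) {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i) s += x[i] * y[i];
        return s;
    };
    auto matvec = [&](const std::vector<double>& x) {
        std::vector<double> y(n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                y[i] += A[i * n + j] * x[j];
        return y;
    };

    std::vector<double> x(n, 0.0), r = b, p = b;
    double rr = dot(r, r);
    for (std::size_t k = 0; k < max_iter && std::sqrt(rr) > tol; ++k) {
        std::vector<double> Ap = matvec(p);
        double alpha = rr / dot(p, Ap);     // step length
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];           // update solution
            r[i] -= alpha * Ap[i];          // update residual
        }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;          // next search direction weight
        rr = rr_new;
        for (std::size_t i = 0; i < n; ++i)
            p[i] = r[i] + beta * p[i];
    }
    return x;
}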
 
Please download the zip file and read the readme file inside the zip to know how to use it.
 
 
You can download it from:
 
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
 
 
Thank you,
Amine Moulay Ramdane.
"Öö Tiib" <ootiib@hot.ee>: Nov 08 02:01AM -0800

On Wednesday, 6 November 2019 22:50:30 UTC+2, David Brown wrote:
> with "-ftrapv" optimises the two statements into "x". With
> "-fsanitize=signed-integer-overflow", the single expression "x + 1 - 1"
> is simplified to "x", but the two statements lead to checks.)
 
Is it all very similar to the issue where the compiler uses an 80-bit
long double (as best supported by the hardware) to do the mid-way
double arithmetic, and so the end result does not saturate into
INF or -INF on that platform but it does on another that uses
a 64-bit double all the way?
 
I feel dealing with such an issue to be a *lot* easier than dealing
with utter undefined behavior (which all such examples are by
the current scripture). For me every solution is better than
undefined behavior.
 
> -ftrapv and -fsanitize=signed-integer-overflow, perhaps with
> <https://gotbolt.org>, to see just how costly trapping semantics can
> be. They are very far from negligible.
 
I have looked at and measured its actual impact quite a lot. My
thinking about it (which is surely biased, but still) revolves around a
handful of points.
 
1) It is about default behavior; in most code its efficiency does
not matter, and where it does matter there indeed *must* be
alternatives.
 
2) Efficiency of the sub-steps of processing usually does matter
where we intensively process a lot of different data. With a lot of data
we do hit memory latency. Processing has been for decades so
much cheaper that it can be worth reducing memory-latency
problems by adding even more processing. For example, 1:4
decompression is faster than a copy.
Note: it is an ultra-rare case where the speed of memcpy is our actual
bottleneck and not the stupidity of copying the same data over and
over too much, or even copying it at all. The same goes for processing
the same thing pointlessly over and over.
 
3) The world evolves. The Swift, Rust, Go, and D communities
already ask for hardware support for trapping integer
conversions and trapping arithmetic. People like those languages.
If hardware starts to support this better, then C++ will seem
backwards without convenient ways to use trapping integers
in production code.
 
4) Compilers will evolve. What happened to "register" or
"inline"? The same can happen with the usage of the current "assume
that it does not overflow" operations. Just make those not the
default but performance optimizations, like register and
inline.
 
 
> Ah, okay. I think there is a big risk of confusion here, and for people
> to forget which operators do what. (There is also the risk of
> accidentally writing smilies in your code!).
 
It is not that. We see that wrapping and saturating are used rarely.
Also, in half of the cases wrapping is used without need, where
it can confuse, like "for (unsigned u = 10; u < 11; --u)".
There will not be a lot of mixed usages, just sometimes.
 
> wrapping type and a saturation type). Undefined or unspecified
> behaviour types can be silently promoted to other more defined behaviour
> as needed.
 
That saturating type perhaps has to be a separate type indeed. It
will have "magical" values and so needs separate protection.
In the rest of the cases operators feel best. As for too-implicit
conversions ... they are another source of hard-to-notice defects.
 
> That also has the advantage that it can be done today, with current C++.
 
Life has shown that library features (or, even worse, "debugging tools"),
as opposed to language features, are adopted slowly by programmers,
and the compiler writers are even slower to start thinking in the correct
direction.
David Brown <david.brown@hesbynett.no>: Nov 08 02:03PM +0100

On 08/11/2019 11:01, Öö Tiib wrote:
> double arithmetics and so the end result does not saturate into
> INF or -INF on that platform but it does on other that uses
> 64 bit double all the way.
 
Yes - and that is a real mess for standardisation and repeatability.
It's okay if you are not bothered about the portability and
repeatability of your floating point calculations, and are happy to
accept that an unrelated change in one part of your program can lead to
different results somewhere else just because it changed when floating
point registers get moved out into the stack. If that's fine, you can
use "-ffast-math" or equivalent switches and tell your compiler it is
free to make such code. (Personally, I use "-ffast-math" in my own code
- but it is not something everyone wants.)
 
I do not want that situation in integer code. I fully expect the
compiler to generate identical code - equally efficient, equal results,
equal levels of static checking - whether I break a calculation into
multiple statements or have it all as one expression. I want to make
the decisions about the arrangements based on the clarity of the code,
not on where I want the compiler to optimise or where I want it to
insert hidden range checks.
 
I am much happier with the function of
"-fsanitize=signed-integer-overflow" here - it generates more checks,
less efficient code, but the functionality is clear and consistent.
I'll use that when I am happy to take the performance hit in the name of
debugging code.
 
> with utter undefined behavior (that all such examples are by
> current scripture). For me every solution is better than
> undefined behavior.
 
The whole point is that you /don't/ hit undefined behaviour. Any time
your code attempts to execute something with undefined behaviour, it is
a bug in your code. Tools (like sanitizers) that help you find the
undefined behaviour are good. Static analysis can help too. Once you
have your code correct, you are not doing anything undefined, and there
is no need for run-time checks that can't ever be triggered.
 
To me, undefined behaviour is /better/ than the alternatives for most
purposes. It helps me write better and more correct code, and as a
bonus it is more efficient.
 
Maybe this is something that stems from my programming education - I was
taught in terms of specifications. Any function has a specification
that gives its pre-conditions and its post-conditions. The function
will assume the pre-conditions are valid, and establish the
post-conditions. If you don't ensure you satisfy the pre-conditions
before calling the function, you have no right to expect any known
behaviour at all from the function. This principle is known popularly
as "garbage in, garbage out". It has been understood since the birth of
the programmable computer:
 
"""
On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into
the machine wrong figures, will the right answers come out?' I am not
able rightly to apprehend the kind of confusion of ideas that could
provoke such a question.
"""
 
If you want to say that some programmers don't know that integer
overflow is undefined behaviour, and think it wraps, then blame the way
C and C++ are taught and what the books say, and do something about that
- don't try to dumb down and water down the language.
 
 
> 1) It is about default behavior, in most code its efficiency does
> not matter and for where it matters there indeed *must* be
> alternatives.
 
C++ (and C) are often chosen because of their efficiency. If you start
adding run-time tests everywhere, you lose that. Why would you then use
C and C++ at all? If you want extra safety and are willing to pay the
efficiency price, there are other languages to choose from - such as
Java or C#. When you choose to program in C or C++, you are accepting a
responsibility to program carefully and correctly. You are picking a
language that trusts the programmer. Why would you then want the
language or compiler to tell you "I /know/ you claim to be a good
programmer who knows the language and codes carefully - but I don't
believe you, and I'm going to nanny you all the way" ?
 
And once you start saying "C++ programmers can't be trusted to write
expressions that don't overflow", where do you stop? Can you trust them
to use pointers, or do you want run-time checks on all uses of pointers?
Can you trust them to use arrays? Function pointers? Memory
allocation? Multi-threading? There are many, many causes of faults and
bugs in C++ programs that are going to turn up a lot more often than
integer overflows (at least in the 32-bit world - most calculations
don't come close to that limit).
 
> If hardware starts to support it better then C++ will seem
> backwards without convenient ways to use trapping integers
> in production code.
 
I have nothing against using trapping integer arithmetic in cases where
it is useful. But it should only be as an active choice - either by
choosing special types in the code, or by enabling debug options in the
tools.
 
And hardware-assisted trapping cannot reach the efficiency of having
overflow as undefined behaviour, or even of wrapping, precisely for the
reasons I gave above about optimising "x + 1 - 1".
 
>> to forget which operators do what. (There is also the risk of
>> accidentally writing smilies in your code!).
 
> It is not that. We see wrapping and saturating is used rarely.
 
And thus it should be written explicitly, with types that have names.
Symbol combinations are easily mixed up and forgotten, especially if
they are rarely used.
 
One option - which could easily be implemented in C++ of today - would
be to write "x + wrapping + y" as a wrapping addition operator.
 
> Also wrapping is on half of cases used without need where
> it can confuse like "for (unsigned u = 10; u < 11; --u)".
 
That is an example of well-defined behaviour that is void of meaning.
 
> will have "magical" values and so needs separate protection.
> On rest of cases operators feel to be best. About too implicit
> conversions ... it is another source of hard-to-notice defects.
 
I would be careful about providing implicit conversions.
 
> as opposed to language features will be adopted slowly by programmers
> and the compiler writers are even slower to start thinking in correct
> direction.
 
Life has shown that library features are precisely how C++ has evolved
in the last decade or two - language features have mostly been added in
order to make library features safer to use, easier to use, more
efficient, or easier to implement. The next big leap for the C++
language - metaclasses - is precisely so that we can write libraries
that handle things that would previously need language changes.
"Öö Tiib" <ootiib@hot.ee>: Nov 08 12:08PM -0800

On Friday, 8 November 2019 15:03:54 UTC+2, David Brown wrote:
> use "-ffast-math" or equivalent switches and tell your compiler it is
> free to make such code. (Personally, I use "-ffast-math" in my own code
> - but it is not something everyone wants.)
 
Yes, and for me it is fine if the standard requires "x + 1 - 1" to trap
if x + 1 does overflow. Compilers will add their fast-math-like options
in an eye-blink anyway.
 
> I do not want that situation in integer code.
 
Then there won't be.
 
> less efficient code, but the functionality is clear and consistent.
> I'll use that when I am happy to take the performance hit in the name of
> debugging code.
 
That is not trapping math there but a debugging tool that catches
"undefined behavior". It does not help me in cases where I need
trapping math.
 
> > current scripture). For me every solution is better than
> > undefined behavior.
 
> The whole point is that you /don't/ hit undefined behaviour.
 
The whole issue that you seemingly don't understand is that there are
cases (and those seem to be the majority) where I need neither undefined
behavior nor wrapping behavior nor arbitrarily growing precision.
I want it to trap by default and sometimes to have snapping behavior.
 
But I have either undefined behavior with signed or wrapping
behavior with unsigned plus non-portable intrinsic functions.
So I have to choose which hack is uglier, writing the trapping or
snapping myself manually, and everyone else in the same situation
as me has to do the same.
 
> To me, undefined behaviour is /better/ than the alternatives for most
> purposes. It helps me write better and more correct code, and as a
> bonus it is more efficient.
 
It is better only when I know that it can in no way overflow, because the
values make sense in the thousands while the variables can count in billions.
 
> able rightly to apprehend the kind of confusion of ideas that could
> provoke such a question.
> """
 
Yes, and I want to turn a language full of specified undefined
behaviors and contradictions and useless features and defects into
programs where there are zero of those. When wrong figures
are put into my programs then I want the programs to give
answers that follow from those, and when contradicting figures
are put in then I want my programs to refuse in a civil manner.
For me a quarter of the good cases working is not good
enough quality. And I'm frustrated when I can't find a way to
do it and have to use some kind of nonconforming hacks from the
compiler vendor.
 
> overflow is undefined behaviour, and think it wraps, then blame the way
> C and C++ is taught and what the books say, and do something about that
> - don't try to dumb down and water out the language.
 
A language that has neither simple features nor convenient ways to
implement them is dumbed down by its makers, not by me.
 
> language or compiler to tell you "I /know/ you claim to be a good
> programmer who knows the language and codes carefully - but I don't
> believe you, and I'm going to nanny you all the way" ?
 
The languages C and C++ stopped trusting me ages ago. When I write
"register" they treat it as if I actually meant "I don't want pointers
to it", and when I write "inline" they read it as "I want to
define it in a header file". English is, yes, my 4th language, but did I
really write that?
 
> bugs in C++ programs that are going to turn up a lot more often than
> integer overflows (at least in the 32-bit world - most calculations
> don't come close to that limit).
 
Sometimes I want to write code where overflow is not my programming
error but an allowable contradiction in the input data. There I want to throw,
or to result in INF, on overflow, in a similar manner to how I want it to
throw or to result in NaN when kilograms are requested
to be added to meters or subtracted from degrees of angle, or the diameter
of the 11th apple out of 10 is requested. The choice between silent
and trapping failures depends on whether I want to process the whole data
regardless of inconsistencies or constraint violations in it, or to
interrupt on the first such problem in the data. Don't such use-cases make sense?
 
 
> And hardware-assisted trapping cannot reach the efficiency of having
> overflow as undefined behaviour, or even of wrapping, precisely for the
> reasons I gave above about optimising "x + 1 - 1".
 
Currently there is outright nothing. Integer math is in the same state
(or even worse) as it was in the eighties.
 
> they are rarely used.
 
> One option - which could easily be implemented in C++ of today - would
> be to write "x + wrapping + y" as a wrapping addition operator.
 
I see that I technically mostly use wrapping to implement trapping,
snapping or otherwise refusing math. The code does make everybody's
eyeballs bleed (sorry if you happen to be a fan of those
__builtin_add_overflow things), but if there were any better way to have
refusing math I would not touch wrapping math at all.
Robert Ramey started to write his safe throwing integer library a decade
ago and it still isn't added to Boost, I think.
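
For concreteness, a minimal sketch of that kind of refusing ("trapping")
add built on the GCC/Clang intrinsic mentioned above (non-portable, as
said):

#include <stdexcept>

// Throwing add: refuses to produce a wrapped result on overflow.
// __builtin_add_overflow is a GCC/Clang extension, not standard C++.
inline int checked_add( int a, int b )
{
    int result;
    if ( __builtin_add_overflow( a, b, &result ) )
        throw std::overflow_error( "integer overflow in checked_add" );
    return result;
}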
 
> efficient, or easier to implement. The next big leap for the C++
> language - metaclasses - is precisely so that we can write libraries
> that handle things that would previously need language changes.
 
Too slowly, and too much goes into useless semantic garbage. At least
std::byte gets useful in C++20. We may finally assign a whopping char
to it without UB in half of the cases. :D
https://wg21.cmeerw.net/cwg/issue2338
So there are tiny bits of progress and good news all the time.
Soviet_Mario <SovietMario@CCCP.MIR>: Nov 08 11:00AM +0100

When including wxWidgets in a stub of a program under
Code::Blocks, using wxSmith as the RAD plugin, the compiler
complains about not finding
 
WX/SETUP.H
 
After a scan of the whole machine, no such file was found
anywhere.
 
 
Also some other headers had to be edited manually, as the
actual folder of the wx install was not set;
 
for instance, I had to manually write
#include </usr/include/wx-3.0/wx/app.h>
instead of plain
#include <wx/app.h>
 
I can't find the place where to input the search path for
headers in general,
 
but in particular SETUP.H is just missing.
 
Has anybody met this same problem?
Where should I get it, and where should I put it?
 
Sorry for the trivial question :\
 
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
"Miguel Giménez" <me@privacy.net>: Nov 08 05:10PM +0100

El 08/11/2019 a las 11:00, Soviet_Mario escribió:
 
> has anybody met this same problem ?
> where should I get this and where should I put ?
 
> sorry for the trivial question :\
 
wx/setup.h is in the same folder as the compiled library; in Windows it
may look like:
 
c:\wxWidgets-3.1.3\lib\gcc_dll\mswu\wx\setup.h
 
You must select Project -> Build options in the menu, and add the search
paths to the Search directories -> Compiler pane:
 
/usr/include/wx-3.0
c:\wxWidgets-3.1.3\lib\gcc_dll\mswu <- change to your folder
 
--
Saludos
Miguel Giménez
"Miguel Giménez" <me@privacy.net>: Nov 08 05:14PM +0100

> the compiler complains about not finding WX/SETUP.H
 
You must search setup.h, not SETUP.H
 
--
Saludos
Miguel Giménez
Soviet_Mario <SovietMario@CCCP.MIR>: Nov 08 06:50PM +0100

On 08/11/2019 17:10, Miguel Giménez wrote:
 
> wx/setup.h is in the same folder of the compiled library, in
> Windows it may look like:
 
> c:\wxWidgets-3.1.3\lib\gcc_dll\mswu\wx\setup.h
 
 
No, the file is MISSING. I have scanned the whole machine
with "catfish" and it does not find any setup.h that can be
associated with wx (there are a lot, but they are distinct files
not related to wx).
 
 
> You must select Project -> Build options in the menu, and
> add the search paths to the Search directories -> Compiler pane:
 
TNX, this advice works, at least! Every other header is
found without having to manually edit the include directive.
But I still lack the damn setup.h :/
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Soviet_Mario <SovietMario@CCCP.MIR>: Nov 08 06:52PM +0100

On 08/11/2019 17:14, Miguel Giménez wrote:
>> the compiler complains about not finding WX/SETUP.H
 
> You must search setup.h, not SETUP.H
 
well, I just stressed the word in "bold" actually.
I made a case insensitive search (here on linux case matters
in filenames, so I had been wary)
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Keith Thompson <kst-u@mib.org>: Nov 08 09:58AM -0800

>> the compiler complains about not finding WX/SETUP.H
 
> You must search setup.h, not SETUP.H
 
It depends. If the OP is on Windows, setup.h and SETUP.H are the
same file.
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
"Miguel Giménez" <me@privacy.net>: Nov 08 07:02PM +0100

El 08/11/2019 a las 18:58, Keith Thompson escribió:
 
>> You must search setup.h, not SETUP.H
 
> It depends. If the OP is on Windows, setup.h and SETUP.H are the
> same file.
 
He uses Linux:
 
 
--
Saludos
Miguel Giménez
"Miguel Giménez" <me@privacy.net>: Nov 08 07:09PM +0100

The setup.h is copied during wxWidgets compilation, but the original for
your system should still be at /usr/include/wx-3.0/include/wx/gtk
 
--
Saludos
Miguel Giménez
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 07 03:38PM -0800

On 11/7/2019 3:06 PM, Melzzzzz wrote:
>> This damn error still persists. MSVC needs to fix it. Have not checked
>> the 2019 version yet. It has to do with reporting the size of an
>> allocated array using overloaded new and delete. Here is the code:
[...]
> Try array of objects with destructors. With destructors they have to
> place number of objects in array, so then it is possiblity they will
> report correct size...
 
Okay. Wrt the following code:
________________________
#include <cstdio>
#include <new>
 
struct custom_allocator {
static void* allocate(std::size_t size) {
void* const mem = ::operator new(size);
std::printf("custom_allocator::allocate(%p, %lu)\n",
(void*)mem, (unsigned long)size);
return mem;
}
 
static void deallocate(void* const mem, std::size_t size) {
std::printf("custom_allocator::deallocate(%p, %lu)\n",
(void*)mem, (unsigned long)size);
::operator delete(mem);
}
};
 
 
template<typename T>
struct allocator_base {
static void* operator new(std::size_t size) {
return custom_allocator::allocate(size);
}
 
static void* operator new[](std::size_t size) {
return custom_allocator::allocate(size);
}
 
static void operator delete(void* mem) {
if (mem) {
custom_allocator::deallocate(mem, sizeof(T));
}
}
 
static void operator delete [](void* mem, std::size_t size) {
if (mem) {
custom_allocator::deallocate(mem, size);
}
}
};
 
 
template<std::size_t T_size>
class buf {
char mem[T_size];
};
 
 
class buf2 : public buf<1234>, public allocator_base<buf2> {
char mem2[1000];
};
 
 
struct foo : public allocator_base<foo>
{
foo() { std::printf("(%p)->foo::foo()\n", (void*)this); }
~foo() { std::printf("(%p)->foo::~foo()\n", (void*)this); }
 
int a;
char b[10];
double c;
};
 
 
int main() {
buf2* b = new buf2;
delete b;
 
b = new buf2[5];
delete [] b;
 
foo* f = new foo[3];
delete [] f;
 
return 0;
}
 
________________________
 
 
On GCC I get the following output:
________________________
custom_allocator::allocate(00977320, 2234)
custom_allocator::deallocate(00977320, 2234)
custom_allocator::allocate(00977320, 11174)
custom_allocator::deallocate(00977320, 11174)
custom_allocator::allocate(00977320, 80)
(00977328)->foo::foo()
(00977340)->foo::foo()
(00977358)->foo::foo()
(00977358)->foo::~foo()
(00977340)->foo::~foo()
(00977328)->foo::~foo()
custom_allocator::deallocate(00977320, 80)
________________________
 
 
On MSVC 2017, I get:
________________________
custom_allocator::allocate(00C8A4C8, 2234)
custom_allocator::deallocate(00C8A4C8, 2234)
custom_allocator::allocate(00C8CFF0, 11170)
custom_allocator::deallocate(00C8CFF0, 2234)
custom_allocator::allocate(00C85D98, 76)
(00C85D9C)->foo::foo()
(00C85DB4)->foo::foo()
(00C85DCC)->foo::foo()
(00C85DCC)->foo::~foo()
(00C85DB4)->foo::~foo()
(00C85D9C)->foo::~foo()
custom_allocator::deallocate(00C85D98, 76)
________________________
 
Humm... Using the dtor forced MSVC to get it right wrt returning the
same size for a deallocation of an array. It still errors on the first
case. Interesting wrt the original allocation size and deallocation
sizes not matching. Thanks Melzzzzz. :^)
 
Also notice the difference between GCC:
 
custom_allocator::allocate(00977320, 11174)
custom_allocator::deallocate(00977320, 11174)
 
and MSVC:
 
custom_allocator::allocate(00C8CFF0, 11170)
custom_allocator::deallocate(00C8CFF0, 2234)
 
 
The 80 - 76 = 4 aspect wrt GCC and MSVC is interesting as well wrt the
size of the dynamic array. It seems like GCC uses 4 extra bytes because
2234 * 5 = 11170. I wonder if extra data allows it to get the correct
size on array deallocation. Humm...
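
A likely explanation for those extra bytes (a sketch, not from the
thread): when the element type has a non-trivial destructor, typical
implementations prepend an "array cookie" holding the element count, so
the size requested from operator new[] exceeds n * sizeof(element) and
the allocation address differs from the first element's address:

#include <cstdio>
#include <new>

struct with_dtor
{
    ~with_dtor() {}   // non-trivial dtor -> implementations store a cookie

    static void* operator new[]( std::size_t size )
    {
        void* mem = ::operator new( size );
        std::printf( "operator new[] asked for %lu bytes at %p\n",
                     (unsigned long)size, mem );
        return mem;
    }

    static void operator delete[]( void* mem )
    {
        ::operator delete( mem );
    }

    char c;
};

int main()
{
    with_dtor* a = new with_dtor[3];
    // On typical ABIs the element count sits just before the first
    // element, so 'a' is a few bytes past what operator new[] returned.
    std::printf( "first element at %p\n", (void*)a );
    delete [] a;
    return 0;
}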
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
