Saturday, December 19, 2015

Digest for comp.lang.c++@googlegroups.com - 23 updates in 3 topics

"Öö Tiib" <ootiib@hot.ee>: Dec 18 09:12PM -0800

> can't build C++ (with say gcc) on top of C++ built with ghs for instance
> (one of the specific compilers I can't opt out of). A C layer would have
> to be in between them for the standard ABI.
 
Yes, or some other layer (like SWIG). IOW, the C++ ABI isn't standardized,
but it may be one day.
 
> C++11 features for some of the code, but I'd feel less silly if I was
> doing that for Rust or D or something else entirely. Rust on C on C++
> looks less nuts somehow.
 
All the differences between common programming languages are irrelevant
syntax sugar. What is the difference if I have to write "int x = void;"
or "int_fast16_t x;" to get an uninitialized integer variable named "x"?
I fail to see major benefits in either syntax. Just type what is needed
and be done.
 
 
> Also, C++11 allowing me to write member x_ = 0 in my class declarations
> gets me close, but not 100% to where I'd rather be with respect to
> member variables.
 
It is actually unclear where you want to be with the data members of your
classes. The 'struct' of C is mostly fine. C++ added some convenience;
C++11 added even more convenience. I can already type and read fast
enough ... so the convenience gives next to no returns. IDEs
that try to be overly helpful and convenient on top of that (like Apple
Xcode) are becoming outright annoying with that help.
 
> the fastest establishment of invariants (like both int and vector<int>
> do as you pointed out). I don't know much about D or Rust yet, but I
> do know they both do this.
 
Is it really so hard to type "int_fast16_t x = INT_MIN;" if you need that
"x" to be initialized? To me, the issues of software development always
lie totally elsewhere.
 
> particular industry have long abandoned pure C++, opting for a mix of C++
> and something else (often lua or C#) I'm slowly & stubbornly realizing
> that perhaps they're not all just crazy kids.
 
Programming languages are software design tools. There's no point
whatsoever in changing C++ to be exactly like some other programming
language. People who like C# or lua should use those instead of C++.
 
> > class's constructor. That is what the constructor is for!
 
> That's not my issue either. I am neither unwilling to write local
> "int x = 0", nor am I unsure where I am required to initialize member x_.
 
In C++ you are not required to initialize anything.
 
> don't want to break the language to satisfy my desires in those
> specific contexts, and I accept that the suggestion of "int x is zero
> unless stated unspecified" would do just that.
 
In C++ int is zero when it is stated to be zero.
 
> highly optimized language. Might not be D or Rust. I've done
> nothing other than C++/C/ASM for my entire career, so other languages
> are very new ground for me.
 
If you need some sort of language to sit somewhere, then put it
there.
mark <mark@invalid.invalid>: Dec 19 09:29AM +0100

On 2015-12-19 06:12, Öö Tiib wrote:
> Is it really so hard to type "int_fast16_t x = INT_MIN;" if you need that
> "x" to be initialized?
 
I find it extremely funny that you post a buggy initialization snippet,
especially in the context of this discussion.
 
No, it's not hard to type. But programmers make mistakes. It's very easy
to forget an initialization somewhere and the infrastructure for
tracking that down is far from perfect. Security bugs from using
uninitialized variables are quite common.
"Öö Tiib" <ootiib@hot.ee>: Dec 19 07:39AM -0800

On Saturday, 19 December 2015 10:29:34 UTC+2, mark wrote:
> > "x" to be initialized?
 
> I find it extremely funny that you post a buggy initialization snippet,
> especially in the context of this discussion.
 
I would find it funny as well, but can you point out what defect you
mean? I was really afraid that the damn auto-almost-thinking-for-me IDEs
have already made me forget the syntax. :D
 
Code:
 
#include <iostream>
#include <stdint.h>
#include <limits.h>
 
int main()
{
    int_fast16_t x = INT_MIN;
    std::cout << x << "\r";
}
 
Output:
 
-2147483648
 
Was it to demonstrate how reviewers make mistakes as well?
 
> to forget an initialization somewhere and the infrastructure for
> tracking that down is far from perfect. Security bugs from using
> uninitialized variables are quite common.
 
I fully agree with all those points. Debug tools are better or worse, but
none is perfect, and some defects are harder to track down than others,
but none can be avoided entirely. The programmer is still the biggest
factor of success, yet produces defects rapidly at all levels.
 
However, how is it relevant? The most buggy and uselessly unmaintainable
code that I have ever seen was written in C#. The syntax sugar and quality
of tools are a matter of taste and a tiny difference in convenience. My own
feeling is that the almost too convenient tools try to write almost
everything in that almost too convenient syntax for me. :D USENET is good;
here I can at least type something myself.
mark <mark@invalid.invalid>: Dec 19 06:13PM +0100

On 2015-12-19 16:39, Öö Tiib wrote:
> }
 
> Output:
 
> -2147483648
 
int_fast16_t might be 16 bits only (and int 16-bit ones' complement or
32 bits) - so you have an integer overflow. One of the nasty kind, with
undefined behavior.
 
You would probably be saved by a compiler warning, someone else using
your code might not look at them.
 
I don't even have to dig up some obscure platform. Chromium's stdint.h
for Visual C++ (int is int32_t):
 
// 7.18.1.3 Fastest minimum-width integer types
typedef int8_t int_fast8_t;
typedef int16_t int_fast16_t;
typedef int32_t int_fast32_t;
typedef int64_t int_fast64_t;
typedef uint8_t uint_fast8_t;
typedef uint16_t uint_fast16_t;
typedef uint32_t uint_fast32_t;
typedef uint64_t uint_fast64_t;
"Öö Tiib" <ootiib@hot.ee>: Dec 19 10:58AM -0800

On Saturday, 19 December 2015 19:14:13 UTC+2, mark wrote:
 
> int_fast16_t might be 16 bits only (and int 16-bit ones' complement or
> 32 bits) - so you have an integer overflow. One of the nasty kind, with
> undefined behavior.
 
'int_fast16_t' is guaranteed to be the fastest integral type with a size of
at least 16 bits. 'int' is guaranteed to be the fastest integral type with
a range of at least [-32767,32767]. Now explain how it can be that
'int_fast16_t' cannot represent all values of 'int', including 'INT_MIN'.
 
> You would probably be saved by a compiler warning, someone else using
> your code might not look at them.
 
Warning about what? There was no defect in the first place.
 
> typedef uint16_t uint_fast16_t;
> typedef uint32_t uint_fast32_t;
> typedef uint64_t uint_fast64_t;
 
What C++ compiler/platform is Chromium? AFAIK it is a web browser.
It does seem to contain a stdint.h, but I can't find the code that
you describe:
 
https://chromium.googlesource.com/chromiumos/third_party/coreboot/+/5ef93446471589941c7bedc84bdd49a0ae3e2bbb/src/arch/x86/include/stdint.h
 
Quote:
 
typedef signed int int_fast16_t;
 
Perhaps post where you took your odd code from. The Internet is indeed
full of buggy implementations of any standard function, class, macro or
even typedef, but I do not use such.
David Brown <david.brown@hesbynett.no>: Dec 19 08:14PM +0100

On 19/12/15 19:58, Öö Tiib wrote:
> integral type with a range of at least [-32767,32767]. Now explain
> how it can be that 'int_fast16_t' cannot represent all values of
> 'int', including 'INT_MIN'.
 
Not quite - "int" is guaranteed to be the most /natural/ integer type
(with a range of at least -32767..32767) of a given platform, not
necessarily the fastest. And "fastest" might vary according to details
of the chip and system. For example, on the 68332 microcontroller,
"int" is 32-bit because it is the standard for that ISA (m68k) and the
natural size for the cpu, its ALU and its registers. However, the chip
has only a 16-bit bus - 16-bit values are measurably faster when storing
data in off-chip memory, and therefore a sensible choice for
"int_fast16_t" would be 16-bit. On other platforms, the compiler
implementer might have decided that the more efficient use of cache
means that 16-bit values are faster than 32-bit values, and implemented
int_fast16_t as a 16-bit short int.
 
(It is also quite possible to use C++ on a cpu in which normal "int" is
not two's complement, while int_fast16_t must be two's complement - but
that's getting rather hypothetical.)
 
However, my main objection to "int_fast16_t x = INT_MIN;" is in how it
reads - the code says you want x to have at least 16 bits, be as fast as
possible, and you don't care about anything more than 16 bits. By
coincidence of the way C++ is implemented on your system, the code works
- but it is a poor way of writing it.
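
For anyone curious, a quick way to see what the implementation at hand
chose (a minimal sketch; the output is platform-specific - glibc on
x86-64, for instance, makes int_fast16_t 8 bytes wide):

#include <cstdint>
#include <iostream>

int main()
{
    // Prints the widths the implementation picked; 'int' and
    // 'int_fast16_t' need not agree, as discussed above.
    std::cout << "int:          " << sizeof(int) << " bytes\n"
              << "int_fast16_t: " << sizeof(std::int_fast16_t)
              << " bytes\n";
}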
 
"Öö Tiib" <ootiib@hot.ee>: Dec 19 12:58PM -0800

On Saturday, 19 December 2015 21:15:07 UTC+2, David Brown wrote:
> implementer might have decided that the more efficient use of cache
> means that 16-bit values are faster than 32-bit values, and implemented
> int_fast16_t as a 16-bit short int.
 
Are there C++ compilers with 'int_fast16_t' of 16 bits and 'int'
32 bits? Ok, maybe asking for C++ is too much ... are there such C99
implementations?
 
> possible, and you don't care about anything more than 16 bits. By
> coincidence of the way C++ is implemented on your system, the code works
> - but it is a poor way of writing it.
 
OK. That is a good objection on its own. On the other hand, typing
"int_fast16_t x = std::numeric_limits<int_fast16_t>::min();" is sort of
on the ugly side.

Unsure what to do then. I usually leave variables whose value I am
uncertain about at the declaration point uninitialized. If there was really
some requirement to have everything initialized then I would like to
have the most useless and unlikely value, so code attempting to use it
will most likely blow up. The signed minimum feels perfect for that.
mark <mark@invalid.invalid>: Dec 19 10:15PM +0100

On 2015-12-19 19:58, Öö Tiib wrote:
 
> 'int_fast16_t' is guaranteed to be the fastest integral type with a size of
> at least 16 bits. 'int' is guaranteed to be the fastest integral type with
> a range of at least [-32767,32767]. Now explain how it can be that
> 'int_fast16_t' cannot represent all values of 'int', including 'INT_MIN'.
 
Yes. But 'int' can just as well be 32 bits with INT_MIN = -2147483648. A
16-bit 'int_fast16_t' can't represent that.
 
>> You would probably be saved by a compiler warning, someone else using
>> your code might not look at them.
 
> Warning about what? There was no defect at first place.
 
Warning about your BUG.
 
#include <iostream>
#include <stdint.h>
#include <limits.h>
 
int main() {
    int_fast16_t x = INT_MIN;
    std::cout << sizeof(x) << " " << x << "\n";
    std::cout << sizeof(int_fast16_t) << " " << sizeof(INT_MIN) << "\n";
}
 
 
$ g++ --version
g++.exe (Rev4, Built by MSYS2 project) 5.2.0
 
$ g++ int_fast16_t.cpp -o int_fast16_t.exe
int_fast16_t.cpp: In function 'int main()':
int_fast16_t.cpp:6:22: warning: overflow in implicit constant conversion
[-Woverflow]
int_fast16_t x = INT_MIN;
^
 
$ ./int_fast16_t.exe
2 0
2 4
"Öö Tiib" <ootiib@hot.ee>: Dec 19 01:32PM -0800

On Saturday, 19 December 2015 23:15:56 UTC+2, mark wrote:
 
> $ ./int_fast16_t.exe
> 2 0
> 2 4
 
Ok, thanks, mea culpa.
"Öö Tiib" <ootiib@hot.ee>: Dec 19 01:39PM -0800

On Saturday, 19 December 2015 22:59:32 UTC+2, Öö Tiib wrote:
> some requirement to have everything initialized then I would like to
> have the most useless and unlikely value, so code attempting to use it
> will most likely blow up. The signed minimum feels perfect for that.
 
Nah, discard that. It seems that there is an 'INT_FAST16_MIN' macro
available.
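
For the record, a corrected version of the earlier snippet using that
macro might look like this (a minimal sketch; 'INT_FAST16_MIN' comes
from <cstdint>):

#include <cstdint>
#include <iostream>

int main()
{
    // INT_FAST16_MIN matches whatever width int_fast16_t actually
    // has, so there is no overflow on any platform.
    std::int_fast16_t x = INT_FAST16_MIN;
    std::cout << x << "\n";
}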
David Brown <david.brown@hesbynett.no>: Dec 19 11:05PM +0100

On 19/12/15 21:58, Öö Tiib wrote:
 
> Are there C++ compilers with 'int_fast16_t' of 16 bits and 'int'
> 32 bits? Ok, maybe asking for C++ is too much ... are there such C99
> implementations?
 
I don't know of any off-hand, but I can't say I have had much use of
int_fast16_t on any of the tools I use. Usually I prefer the simpler
and more explicit int16_t form (or just "int" if appropriate).
 
 
> OK. That is a good objection on its own. On the other hand, typing
> "int_fast16_t x = std::numeric_limits<int_fast16_t>::min();" is sort of
> on the ugly side.
 
Yes, but "int x = 0;" is pretty easy to type, and usually a better
choice of initialiser than INT_MIN.
 
> some requirement to have everything initialized then I would like to
> have the most useless and unlikely value, so code attempting to use it
> will most likely blow up. The signed minimum feels perfect for that.
 
I can understand that. However, leaving them uninitialised and using
good warnings (if your compiler supports them) will mean that your code
"blows up" during compilation rather than when running, and that's the
best you can get.
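
For example, a minimal sketch of code such warnings catch (with g++ or
clang++, -Wall enables -Wuninitialized; g++ may additionally need
optimization enabled for the "maybe uninitialized" analysis):

int f(bool b)
{
    int x;     // deliberately left uninitialised
    if (b)
        x = 42;
    return x;  // warning: 'x' may be used uninitialized
}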
Jorgen Grahn <grahn+nntp@snipabacken.se>: Dec 19 01:14AM

On Fri, 2015-12-18, Flix wrote:
> class called ImVector.
 
> This class is commented in this way:
> // Lightweight std::vector<> like class to avoid dragging dependencies
 
A weird thing to say about a well-standardized part of C++ which has
been around for a few decades, and is used by pretty much everyone ...
 
> (also: windows implementation of STL with debug enabled is absurdly
> slow, so let's bypass it so our code runs fast in debug).
 
Even more bizarre, unless the author is sarcastic.
 
> // Our implementation does NOT call c++ constructors because we don't
> use them in ImGui.
 
Also bizarre; you can't write a C++ class (a template in this case)
without dealing with C++ constructors. It's like living without using
DNA. But probably the author means that ImVector<T> doesn't construct
any T objects.
 
> Don't use this class as a straight std::vector
> replacement in your code!
 
Sound advice ...
 
> I thought that char arrays had no C++ constructor/destructor.
> Why do Valgrind complains when resizing the array ?
 
Carefully read what valgrind says, and read the code. It shouldn't be
too hard to figure out -- valgrind is pretty good at saying what went
wrong. You don't tell /us/ what valgrind says, so we can't help.
(I would have expected use of uninitialized data rather than a memory
leak.)
 
In general, when valgrind complains, it has good reasons.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
mark <mark@invalid.invalid>: Dec 19 09:07AM +0100

On 2015-12-19 02:14, Jorgen Grahn wrote:
>> // Lightweight std::vector<> like class to avoid dragging dependencies
 
> A weird thing to say about a well-standardized part of C++ which has
> been around for a few decades, and is used by pretty much everyone ...
 
There are people who are using C++ for some useful language features and
don't want to drag in C++ libraries (including the standard library).
 
The story C++ has on binary compatibility is just pathetic.
 
>> (also: windows implementation of STL with debug enabled is absurdly
>> slow, so let's bypass it so our code runs fast in debug).
 
> Even more bizarre, unless the author is sarcastic.
 
Huh? They are exactly right.
 
std::vector push_back is just one giant clusterfuck. There are >100
calls being made in there. The amount of useless overhead is mind-boggling.
 
Even for VC++ std::vector operator[]:
 
call [...]::operator[]
--> call [...]::_Myfirst
--> call [...]::_Get_data
--> call [...]::_Get_second
 
for ImVector:
call [...]::operator[]
 
With debug builds without inlining, there is a huge difference.
 
 
> Also bizarre; you can't write a C++ class (a template in this case)
> without dealing with C++ constructors. It's like living without using
> DNA.
 
No, it's living without unnecessary overhead. They are only supporting
what C++ now calls trivially copyable types and are using memcpy.
 
I have written my own vector replacement (only supporting trivially
copyable types) and for many use cases it's significantly faster than
std::vector (not just in debug builds).
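
For illustration only (this is not mark's actual code), a minimal
sketch of such a container, restricted to trivially copyable types:

#include <cstdlib>
#include <cstring>
#include <type_traits>

template <typename T>
class TrivialVector {
    static_assert(std::is_trivially_copyable<T>::value,
                  "TrivialVector only supports trivially copyable types");
    T* data_ = nullptr;
    std::size_t size_ = 0;
    std::size_t cap_ = 0;
public:
    ~TrivialVector() { std::free(data_); }
    // Copy control and allocation-failure handling omitted for brevity.
    void push_back(const T& v) {
        if (size_ == cap_) {
            cap_ = cap_ ? cap_ * 2 : 8;
            data_ = static_cast<T*>(std::realloc(data_, cap_ * sizeof(T)));
        }
        std::memcpy(data_ + size_++, &v, sizeof(T));
    }
    // A single array access - no layered helper calls, even in debug.
    T& operator[](std::size_t i) { return data_[i]; }
    const T& operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return size_; }
};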
Jorgen Grahn <grahn+nntp@snipabacken.se>: Dec 19 10:34AM

On Sat, 2015-12-19, mark wrote:
> On 2015-12-19 02:14, Jorgen Grahn wrote:
> > On Fri, 2015-12-18, Flix wrote:
...
 
> Huh? They are exactly right.
 
> std::vector push_back is just one giant clusterfuck. There are >100
> calls being made in there. The amount of useless overhead is mind-boggling.
 
You're talking about non-optimized builds with a certain Microsoft
implementation of the standard library I guess. I honestly don't
care: I have a good implementation of std::vector here which I use.
I expect others to have a good C++ implementation too; they have been
freely available for at least a decade.
 
If you have a problem with std::vector, the solution is /not/ to
create your own incompatible replacement. Instead, don't use debug
builds, or get a better standard library. Or accept that debug builds
are ... debug builds.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
"Öö Tiib" <ootiib@hot.ee>: Dec 19 08:06AM -0800

On Saturday, 19 December 2015 10:08:07 UTC+2, mark wrote:
> >> slow, so let's bypass it so our code runs fast in debug).
 
> > Even more bizarre, unless the author is sarcastic.
 
> Huh? They are exactly right.
 
Yes but in a way that makes good trolls indistinguishable
from genuine kooks. (Morgan's maxim)
 
What is the point of using a debug build and then writing hand-optimized
code in it? The whole scenario is either eccentric (kook) or good sarcasm
(troll), but it is impossible to tell which.
omarcornut@gmail.com: Dec 19 10:50AM -0800

> You're talking about non-optimized builds with a certain
> Microsoft implementation of the standard library I guess.
> I honestly don't care:
 
Well, you don't, but people using said implementation in non-optimized builds care; it is as simple as that. And there are lots of those people, believe me.
 
> If you have a problem with std::vector, the solution is /not/ to
> create your own incompatible replacement. Instead, don't use
> debug builds, or get a better standard library.
 
Ridiculous and lazy.
 
You'd rather not use debug builds, and suggest to your hundreds/thousands of users that they not use debug builds and change their standard library, rather than write 30 lines of code?
 
Aside from the Microsoft debug implementation, any standard-compliant std::vector is likely to be slower than a finely tuned non-standard version for your own needs. std:: classes are great, but claiming that they are fit for all use cases is just lazy.
 
 
> What is the point of using a debug build and then writing hand-optimized
> code in it? The whole scenario is either eccentric (kook) or good sarcasm
> (troll), but it is impossible to tell which.
 
It is neither.
 
When you work on a game meant to run at real-time / interactive frame-rate, even more so a complex game (imagine your typical console title), having a debug build that runs at 5 fps is absolutely prohibitive. You want to maximize your debugging capabilities every minute of the day while minimizing your performance cost.
 
That library quoted above (which I wrote, and I wrote that comment) is meant to be always-on and always-available, and it would be a huge flaw if using the library in a debug build took 4 ms of your time every frame. Visual Studio in particular has really harsh (slow) settings by default nowadays, and this extra cost is unacceptable for a library that may process thousands of UI objects every frame.
 
People who find std::vector satisfactory probably haven't been faced with writing a highly efficient and complex video game in a highly competitive environment. Consider that GTA5 runs on the PS3; look up the PS3 specs on Wikipedia, and I can guarantee you they aren't using std::vector. When you get into the small details, the implementations of the STL classes are as varied as the implementations themselves, and this variation can actually be an issue when porting. Considering how simple it is to rewrite that sort of container, it makes sense for many more uses than you can imagine. I agree it seems weird to many users of C++, but it is also a reality for many developers, and it shows a lack of imagination and understanding of different ecosystems and requirements to consider rewriting your own set of containers so unusual or eccentric.
 
Back to the original post: if Valgrind reports something and there is a case of a legit memory leak, obviously this is a bug in the library and it should be reported on the GitHub page for said library.
omarcornut@gmail.com: Dec 19 11:37AM -0800

> I thought that char arrays had no C++ constructor/destructor.
> Why do Valgrind complains when resizing the array ?
 
char doesn't have a constructor/destructor, but the memory itself still needs to be freed of course. It's hard to tell what your issue is given the lack of a Valgrind report and/or code sample. I'd imagine the issue is probably that you have an ImVector whose destructor wasn't called (perhaps you have an ImVector on the heap somewhere).

There's no ImVector<char[4096]> in the ImGui codebase, so it doesn't seem to be code from the library itself?
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Dec 19 08:08PM


> That library quoted above (which I wrote, and I wrote that comment) is meant to be always-on and always-available, and it would be a huge flaw if using the library in a debug build took 4 ms of your time every frame. Visual Studio in particular has really harsh (slow) settings by default nowadays, and this extra cost is unacceptable for a library that may process thousands of UI objects every frame.
 
> People who find std::vector satisfactory probably haven't been faced with writing a highly efficient and complex video game in a highly competitive environment. Consider that GTA5 runs on the PS3; look up the PS3 specs on Wikipedia, and I can guarantee you they aren't using std::vector. When you get into the small details, the implementations of the STL classes are as varied as the implementations themselves, and this variation can actually be an issue when porting. Considering how simple it is to rewrite that sort of container, it makes sense for many more uses than you can imagine. I agree it seems weird to many users of C++, but it is also a reality for many developers, and it shows a lack of imagination and understanding of different ecosystems and requirements to consider rewriting your own set of containers so unusual or eccentric.
 
> Back to the original post: if Valgrind reports something and there is a case of a legit memory leak, obviously this is a bug in the library and it should be reported on the GitHub page for said library.
 
There is nothing wrong with using std::vector (or the STL in general)
for games programming if the implementation is of decent quality and, in
the case of std::vector, you provide a custom allocator that doesn't
perform value initialization if that is your use-case; there is no need
to write your own container to do a similar job.
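
A minimal sketch of such an allocator (assuming C++11; the name is
mine, not from any particular codebase) - it default-initialises
elements instead of value-initialising them, so resize() on a vector
of ints no longer zeroes the new elements:

#include <memory>
#include <utility>
#include <vector>

template <typename T, typename A = std::allocator<T>>
struct default_init_allocator : A {
    using A::A;
    template <typename U>
    struct rebind {
        using other = default_init_allocator<U,
            typename std::allocator_traits<A>::template rebind_alloc<U>>;
    };
    // Default-initialise (leaves fundamental types uninitialised)
    // rather than value-initialise.
    template <typename U>
    void construct(U* p) {
        ::new (static_cast<void*>(p)) U;
    }
    template <typename U, typename... Args>
    void construct(U* p, Args&&... args) {
        ::new (static_cast<void*>(p)) U(std::forward<Args>(args)...);
    }
};

// Usage: std::vector<int, default_init_allocator<int>> v; v.resize(n);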
 
Optimizing for debug builds is just plain craziness sausages.
 
/Flibble
"Öö Tiib" <ootiib@hot.ee>: Dec 19 12:16PM -0800

On Saturday, 19 December 2015 20:50:55 UTC+2, omarc...@gmail.com wrote:
 
Please do not erase attributions; this is Usenet and not some web forum,
so people won't know whom you quote or what you reply to if you erase
them. Restoring them ...
 
 
> When you work on a game meant to run at real-time / interactive
> frame-rate, even more so a complex game (imagine your typical console
> title), having a debug build that runs at 5 fps is absolutely prohibitive.
 
Do not use a debug build then.
 
> You want to maximize your debugging capabilities every minute of the
> day while minimizing your performance cost.
 
The debuggers work excellently with optimized builds. The only
thing that I avoid defining is NDEBUG (because that erases asserts).
 
 
> That library quoted above (which I wrote, and I wrote that comment) is
> meant to be always-on and always-available, and it would be a huge flaw
> if using the library in a debug build took 4 ms of your time every frame.
 
I use debug versions only in unit tests and automated tests. It does
not matter to me how long those run on the test farm.
 
> Visual Studio in particular has really harsh (slow) settings by default
> nowadays, and this extra cost is unacceptable for a library that may
> process thousands of UI objects every frame.
 
The slow run-time-checked debug versions of standard containers and
iterators are there to track down errors like where someone did a
'push_back' of an element into a 'vector' and forgot that it *may*
invalidate iterators. Such logic errors are all caught quickly by unit
tests, so I do not need those checks in versions that are debugged
manually.
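
For example, a minimal sketch of the kind of error those checked
builds exist to catch (with libstdc++, compile with -D_GLIBCXX_DEBUG;
MSVC debug builds check iterators by default):

#include <vector>

int main()
{
    std::vector<int> v(8, 0);
    auto it = v.begin();
    v.push_back(1);   // may reallocate and invalidate 'it'
    return *it;       // undefined behaviour; a checked build aborts here
}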
 
> a highly competitive environment. Consider that GTA5 runs on the PS3;
> look up the PS3 specs on Wikipedia, and I can guarantee you they aren't
> using std::vector.
 
What? Most people who use C++ for anything use it because they need
its excellent performance. Optimized (by a decent compiler) code using
'std::vector' is about as good as the same thing written in assembler.
 
> When you get into the small details, the implementations of the STL
> classes are as varied as the implementations themselves, and this
> variation can actually be an issue when porting.
 
Fear, Uncertainty and Doubt. There was maybe a case or two 20 years ago,
and the old women of the bus station still gossip about it. The world has
changed.
 
> Considering how simple it is to rewrite that sort of container, it
> makes sense for many more uses than you can imagine.
 
Good containers aren't simple to write correctly. I have seen enough
of both kinds. It is tricky to gain authority among guys like you.
Typically I argue with them, then let them waste a week on it, and
then demonstrate to them how well-used 'std' and/or 'boost' containers
are easier to debug but still beat their "container". Often 5:1.
There are lots of excellent containers written by the best and tested
by many. *Choosing* the correct one is the trick to learn, not how to
invent yet another square wheel.
 
> for many developers, and it shows a lack of imagination and understanding
> of different ecosystems and requirements to consider rewriting your own
> set of containers so unusual or eccentric.
 
I do not know what you imagine C++ is commonly used for by these
"many users". Good C++ developers are scarce, so most are used only to
write the most critical parts of anything.
 
 
> Back to the original post: if Valgrind reports something and there is a
> case of a legit memory leak, obviously this is a bug in the library and
> it should be reported on the GitHub page for said library.
 
Of course. I have also seen standard libraries leaking memory and
crashing in some situations, but it happens very rarely, and the defects
are repaired rather fast by vendors because a lot more people evaluate,
test and use that code.
Patricia Anaka <panakabear@gmail.com>: Dec 18 04:27PM -0800

> It looks like 5.04 is at least a year old; 5.05 and 5.06 have been
> released:
 
5.0.4 is still the newest one for Nintendo. I will see if I can make a simple demo of the bug and then email them about it.
Nobody <nobody@nowhere.invalid>: Dec 19 07:37AM

On Thu, 17 Dec 2015 11:07:43 -0800, Patricia Anaka wrote:
 
> Here's some sample data I pass the function.
 
> point 0: 221.000000 50.000000
> point 4: 221.000000 50.000000
 
The first point is coincident with the last. That's going to result in the
first iteration (i=0, j=nvert-1) calculating 0.0/0.0 in the test.
 
Printing some of the values may coincidentally change the way in which
calculations are performed, meaning that the function coincidentally
produces the desired result.
Paavo Helde <myfirstname@osa.pri.ee>: Dec 19 03:10AM -0600

Nobody <nobody@nowhere.invalid> wrote in
>> (testx < (points[j].x - points[i].x) * (testy - points[i].y) /
>> (points[j].y - points[i].y) + points[i].x))
>> c = !c;
 
 
The first part of the condition, (points[i].y>testy) != (points[j].y>testy),
should avoid division by zero.
 
If the compiler finds a couple of y values unequal in the first line and
yet gets zero when subtracting them on the third line, then I would think
it would be another compiler bug. But I am not a true floating-point
expert.
 
Anyway, it is easy for the OP to test what is actually happening. If the
bug is in optimizing away the j update as I suspect, then printing out j
after the loop should show it is still at the initial value nvert-1. And
any floating-point shenanigans can be avoided by varying the y values a
bit in the sample data.
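
(For reference, the function under discussion appears to be the
classic PNPOLY crossing-number test; a sketch of it, adapted to the
points[] style quoted above, with the guarding comparison and the
'j = i++' update:)

struct Point { float x, y; };

bool point_in_polygon(const Point* points, int nvert,
                      float testx, float testy)
{
    bool c = false;
    // j trails i as the index of the previous vertex; the first
    // comparison guards the division against points[j].y == points[i].y.
    for (int i = 0, j = nvert - 1; i < nvert; j = i++) {
        if (((points[i].y > testy) != (points[j].y > testy)) &&
            (testx < (points[j].x - points[i].x) * (testy - points[i].y)
                         / (points[j].y - points[i].y) + points[i].x))
            c = !c;
    }
    return c;
}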
 
Cheers
Paavo
David Brown <david.brown@hesbynett.no>: Dec 19 02:43PM +0100

> other. Which in my case, is not the case. And as I said before,
> that issue is certainly not causing the problem for which I was
> requesting help.
 
I understand now that the problem really was a compiler bug. It is a
rare thing, but it happens. The ARM CC compiler you have here is, if I
understand it correctly, Keil's compiler that ARM took over a number of
years ago. The only time in recent years that I have seen a compiler
generate clearly incorrect code (rather than just sub-optimal code) for
perfectly good source code was also with a Keil compiler - that time for
the 8051. I believe ARM is in the process of switching to clang/llvm for
their official compiler, but it is unlikely that the Nintendo platform
compiler will change soon.
 
 
But I stand by my assertion that the algorithm needs some work to be
suitable. You can well say that it won't be called with points close
together - but that is /now/, while you know of that issue. The issue
was clearly unknown during testing, or else it would not have been
tested with points that lie on vertical and horizontal lines. And you
can expect that the issue will also be unknown in the future when other
people are using the function.
 
So the code should be fixed, and tested on all sorts of unrealistic
cases as well as realistic ones - because what is "realistic" will
change. Failing that, it should be documented extraordinarily well,
with lots of assertions, and preferably a name change to something like
"point_in_polygon_restricted_cases".
 
Working around the compiler bug may be the first priority, but don't
leave bombs in the code for future users.
