- A right alternative to IEEE-754's format - 4 Updates
- How to model this problem? - 5 Updates
- To name, perchance pleasingly - 4 Updates
- Virtual functions - 2 Updates
- Alternatives to Visual Studio 2015 or later - 2 Updates
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 27 07:26AM -0700 I've been thinking about John Gustafson's unums, and something occur- red to me. All of our IEEE-754 floating point values are stored with either an explicit-1 or implicit-1, with the mantissa bits being ascribed to the bits after that 1, moving away from the left. What if we did it differently? What if we stored the bits from the right (like the way we would write our own numbers, of "125" being the equivalent of "125.", rather than "1.25e+2"? This would require along two additional pieces of information, which are (1) the length of the "mantissa" portion in bytes, and (2) position of the period (which moves left from the fully stored bit pattern), but we would gain the ability to have far greater precision, and to have precision out beyond our need to round. It would guarantee sufficient resolution based on the needs of the computation to never have invalid results in rounding in our computations. ----- It seems to me that IEEE-754 is working form the perspective of a binary 1.xxxxxx. This format I propose would be a "xxxxxxx." pat- tern. The period position would indicate how far from the right to move, such that if it were 0 it would be "xxxxxxx.", and if it were 2 it would be "xxxxx.xx" and so on. Instead of the explicit-1 or implicit-1 mantissa: [sign][exp][mantissa] [sign][exp]1.[mantissa] It would be either this or this (depending on whether or not we would even need to store the exponent portion using this methodology): [sign][exp][period][length][bits] [sign] [period][length][bits] And if we wanted to use a fixed format, the bits [length] could be removed and the assumed bit storage would be various bit sizes based on resolution. [sign][exp][period][bits] [sign] [period][bits] Given the nature of numbers represented in binary there could also be a few bits reserved for special case scenarios, like for repeating bit patterns so they can work out to whatever degree of precision is required, but are stored minimally, with low integer, zero, and plus and minus infinity values all stored with minimal bits, and so on. ----- It just makes sense to me to store the data you need for the thing, and to leave guesswork to something other than the computational math engine of your CPU. I think it would be desirable to integrate this kind of design alongside traditional IEEE-754 engines, so that you have their traditional support for fast/semi-reliable computation, but then to add this new engine which guarantees exact value computation (out to a rounding level), no matter how long the computation takes or how much memory it requires. MPFR already does this same sort of thing in software. I think adding it to hardware would be desirable at this point in our available transistor budgets and mature design toolsets. -- Rick C. Hodgin |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Mar 27 07:07PM +0100

On 27/03/2018 15:26, Rick C. Hodgin wrote:
> either an explicit-1 or implicit-1, with the mantissa bits being
> ascribed to the bits after that 1, moving away from the left.
> What if we did it differently?

The only thing wrong with IEEE-754 is allowing a representation of negative zero. There is no such thing as negative zero.

[snip tl;dr]

/Flibble

--
"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"

"I'd say, bone cancer in children? What's that about?" Fry replied. "How dare you? How dare you create a world in which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."

"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain? That's what I would say."
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 27 11:22AM -0700 On Tuesday, March 27, 2018 at 2:08:05 PM UTC-4, Mr Flibble wrote: > > What if we did it differently? > The only thing wrong with IEEE-754 is allowing a representation of > negative zero. There is no such thing as negative zero. There are many problems with IEEE-754. Different architectures aren't even required to produce the same result. And even in our C++ compilers, we have an option set to allow for compliant round- ing, or fast rounding, which results in an extra store-and-load being done in the 80387-based FPUs, because internally they do not round properly without a store. Stanford Seminar: Beyond Floating Point https://www.youtube.com/watch?v=aP0Y1uAA-2Y Beating Floats At Their Own Game https://www.youtube.com/watch?v=N05yYbUZMSQ https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868 -- Rick C. Hodgin PS -- This is not "evangelistic" except in the context of getting people to leave IEEE-754 standards. :-) |
"Öö Tiib" <ootiib@hot.ee>: Mar 27 11:26AM -0700 On Tuesday, 27 March 2018 21:08:05 UTC+3, Mr Flibble wrote: > > What if we did it differently? > The only thing wrong with IEEE-754 is allowing a representation of > negative zero. There is no such thing as negative zero. All these number formats are about efficiency of certain calculations, efficiency of calculations means transistors. Transistors however are somewhat under level of topicality here. |
alessio211734 <alessio211734@yahoo.it>: Mar 27 03:04AM -0700

Hello, I need to model a problem in C++. I have a region composed of a set of points, where every point has some properties and a prev and next point.

start ->p1<->p2<->p3<->p4<->end

I would like to split these points into more regions based on some nearby properties like curvature and height, and I would like every region to keep the adjacency information of every point.

I tried to write some code. I have some doubts about it: what happens if I delete some elements of the shared list? I think the iterators become invalid. With a solution like this I need to synchronize every region when I delete an element, and I still need to figure out what happens when I insert a new element.

    #include <list>
    #include <vector>

    template <class T>
    class SubRegion
    {
    public:
        typedef typename std::list<T>::iterator       ElementIterator;
        typedef typename std::list<T>::const_iterator ConstElementIterator;

        SubRegion(const std::list<ElementIterator>& lit) : pElements(lit) {}

        typename std::list<ElementIterator>::iterator begin() { return pElements.begin(); }
        typename std::list<ElementIterator>::iterator end()   { return pElements.end(); }

        T& element(typename std::list<ElementIterator>::iterator pelement)
        {
            return **pelement;  // iterator into pElements -> iterator into the shared list -> T
        }

        std::list<ElementIterator> pElements;
    };

    template <class T>
    class Region
    {
    public:
        void buildRegions(std::list<T>& elements)
        {
            // Suppose the list has 3 elements.
            shared_list = elements;  // this copies, so the stored iterators must
                                     // refer to the copy, not to 'elements'

            typename SubRegion<T>::ElementIterator it = shared_list.begin();

            std::list<typename SubRegion<T>::ElementIterator> subElement1;
            subElement1.push_back(it);

            std::list<typename SubRegion<T>::ElementIterator> subElement2;
            subElement2.push_back(++it);
            subElement2.push_back(++it);

            SubRegion<T> r1(subElement1);
            SubRegion<T> r2(subElement2);
            subRegion.push_back(r1);
            subRegion.push_back(r2);
        }

        SubRegion<T>& getRegion(int i) { return subRegion[i]; }

    protected:
        std::vector< SubRegion<T> > subRegion;
        std::list<T> shared_list;
    };
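A minimal sketch (not part of the question above) of the invalidation rule the doubt hinges on: erasing from a std::list invalidates only iterators and references to the erased element, so stored iterators to other elements stay usable, but any region that still holds an iterator to the erased element has to drop it.

    #include <cassert>
    #include <iterator>
    #include <list>

    int main()
    {
        std::list<int> shared = { 10, 20, 30 };

        std::list<int>::iterator first  = shared.begin();                // -> 10
        std::list<int>::iterator second = std::next(shared.begin());     // -> 20
        std::list<int>::iterator third  = std::next(shared.begin(), 2);  // -> 30

        shared.erase(second);   // only 'second' is invalidated

        assert(*first == 10);   // still valid
        assert(*third == 30);   // still valid
        // Dereferencing 'second' here would be undefined behaviour; any region
        // that stored it must be told to remove that stored iterator.
        return 0;
    }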
Jorgen Grahn <grahn+nntp@snipabacken.se>: Mar 27 11:08AM

On Tue, 2018-03-27, alessio211734 wrote:
> I have a region compose by a set of points where every point have
> some property and a prev and next point.
> start ->p1<->p2<->p3<->p4<->end

Are you simply trying to say you have a polygon defined by its vertices? And that each vertex has a property?

> I would like split in more regions this points based on some nearby
> property like curvature and high. I would like that in every region
> keep the adjacenty property of every point.

I can't guess what this means. Perhaps it's not a polygon, but then I don't know what it is. Also "based on some nearby property" sounds very vague.

You'd get better help if you could clarify the problem a bit more.

/Jorgen

--
// Jorgen Grahn <grahn@  Oo  o.   .     .
\X/  snipabacken.se>   O  o   .
alessio211734 <alessio211734@yahoo.it>: Mar 27 05:01AM -0700

On Tuesday, 27 March 2018 at 13:08:54 UTC+2, Jorgen Grahn wrote:
> --
> // Jorgen Grahn <grahn@  Oo  o.   .     .
> \X/  snipabacken.se>   O  o   .

It's a 2D section composed of 2D vertices. The section is a list of 2D vertices ordered counter-clockwise. I need to divide the initial region (the initial section) into more regions based on curvature and other properties. To calculate these regions I need to know the vertices adjacent to every vertex. Maybe after dividing it into subregions I will need to merge some regions back together.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Mar 27 12:30PM On Tue, 2018-03-27, alessio211734 wrote: > property To calculate this region I should know the adjacent vertex > of every vertex. Maybe after I divide in subregion I need to merge > some region together. I still don't understand if this is a polygon or not, but perhaps someone else does and can answer. Good luck! /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
"Öö Tiib" <ootiib@hot.ee>: Mar 27 11:05AM -0700 On Tuesday, 27 March 2018 15:31:14 UTC+3, Jorgen Grahn wrote: > I still don't understand if this is a polygon or not, but perhaps > someone else does and can answer. Good luck! What he described sounded to me like what is in computer graphics called "polyline" (not "polygon"). I may be wrong because I also did not understand his question. |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Mar 27 01:34PM On Mon, 2018-03-26, Richard wrote: > exercism.io, I often ended up with a single function that was the > solution. However, I wanted to exhibit that the only thing that > should be declared in the global namespace is main(), I personally don't believe in that rule. A namespace for a library: yes. A namespace for a subsystem in a larger program: yes. Namespaces just to get better-looking names in general: yes. But for the bulk of a program I think of the global namespace as the program's context, and am happy with it. But then I'm unusually sensitive to repetition and duplication in names. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
Daniel <danielaparker@gmail.com>: Mar 27 06:50AM -0700

On Monday, March 26, 2018 at 1:18:01 PM UTC-4, Daniel wrote:
> anagram x;
> }
> Error: a namespace name is not allowed

There's some discussion in the boost docs about the rationale for settling on "tuples" as the subnamespace for the tuple class, cf.
http://www.boost.org/doc/libs/1_55_0/libs/tuple/doc/design_decisions_rationale.html

"The final ... solution is now to have all definitions in namespace ::boost::tuples ... The subnamespace name tuples raised some discussion. The rationale for not using the most natural name 'tuple' is to avoid having an identical name with the tuple template. Namespace names are, however, not generally in plural form in boost libraries. First, no real trouble was reported for using the same name for a namespace and a class and we considered changing the name 'tuples' to 'tuple'. But we found some trouble after all. Both gcc and edg compilers reject using declarations where the namespace and class names are identical"

Daniel
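A minimal sketch (not from the boost docs) of the convention the quoted rationale settles on: give the namespace a plural name so it can never be identical to the class it contains. The namespace name below is a hypothetical stand-in for boost::tuples.

    namespace tuples_demo {                   // hypothetical stand-in for boost::tuples
        template <class A, class B>
        struct tuple {                        // the class keeps the singular name
            tuple(A a, B b) : first(a), second(b) {}
            A first;
            B second;
        };
    }

    // If the namespace had also been named 'tuple', a using-declaration in which the
    // namespace and class names are identical (e.g. "using lib::tuple::tuple;") is
    // exactly the construct the rationale says gcc and EDG rejected.

    int main()
    {
        tuples_demo::tuple<int, double> t(1, 2.5);   // no name collision to worry about
        return t.first == 1 ? 0 : 1;
    }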
Jorgen Grahn <grahn+nntp@snipabacken.se>: Mar 27 02:47PM On Mon, 2018-03-26, Daniel wrote: > On Monday, March 26, 2018 at 6:05:17 AM UTC-4, Jorgen Grahn wrote: ... >> the class itself. > I suppose cbor::value (msgpack::value) would be an option, but value by > itself seems uninspired. cbor::value feels much better to me (except I might wonder how "refined" this value is). If you keep the context (cbor) in mind, it reads as "the kind of value that CBOR operates on". To me the name is more demystifying and unsurprising than uninspired. > Or perhaps cbor::packed (msgpack::packed), as > these encapsulate packed arrays of bytes. Yes, if the packedness is important to the user. Not if he doesn't care if the value is unpacked when accessed, or unpacked when retrieved. > be sparsely responded to here :-) But at the same, there are clearly > best practices for naming, and with libraries it's helpful to follow > conventions, so feedback appreciated. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
legalize+jeeves@mail.xmission.com (Richard): Mar 27 04:18PM [Please do not mail me a copy of your followup] Jorgen Grahn <grahn+nntp@snipabacken.se> spake the secret code >But then I'm unusually sensitive to repetition and duplication in >names. I agree with the tension you describe. However, I'm purposefully doing this because I see far too many "C++ libraries" and other such things where they dump everything in the global namespace. I want namespaces to be visible, front-and-center, in my sample solutions. -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
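A minimal sketch (not from the post above, reusing the anagram exercise name that came up earlier in the thread) of the style being described: everything except main() lives in a named namespace, so the namespace stays visible, front-and-center, at every call site.

    #include <algorithm>
    #include <string>

    namespace anagram {                      // the exercise's own namespace
        bool are_anagrams(std::string a, std::string b)
        {
            std::sort(a.begin(), a.end());   // two words are anagrams iff their
            std::sort(b.begin(), b.end());   // sorted letters are identical
            return a == b;
        }
    }

    int main()                               // the only name in the global namespace
    {
        return anagram::are_anagrams("listen", "silent") ? 0 : 1;
    }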
legalize+jeeves@mail.xmission.com (Richard): Mar 26 03:59PM [Please do not mail me a copy of your followup] Jorgen Grahn <grahn+nntp@snipabacken.se> spake the secret code >I have trouble applying them in practical code, too, after 15 years >with C++. In the sense that run-time polymorphism (a better word for >the language feature as a whole, IMHO) is not needed in every program. I found that when practicing pure TDD, pure abstract interfaces show up more often in my code. That means virtual functions in C++. There are other design benefits to using pure abstract interfaces, but when testing behavioral classes (e.g. not simple value types like std::string) you tend to have one class collaborating with another and you want to mock out the collaborators. This is trivial if the collaboration is done through a pure virtual interface. A modern compiler may even notice that this "virtual" function is only ever called on one particular class implementation and eliminate the indirection through the vtable. -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
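A minimal sketch (not from the post above) of the pattern it describes: a behavioural class collaborates through a pure abstract interface, so a test can substitute a hand-rolled mock without any mocking framework.

    #include <string>
    #include <vector>

    class Logger {                       // pure abstract interface
    public:
        virtual ~Logger() = default;
        virtual void write(const std::string& line) = 0;
    };

    class Worker {                       // class under test, collaborator injected
    public:
        explicit Worker(Logger& log) : log_(log) {}
        void run() { log_.write("work done"); }
    private:
        Logger& log_;
    };

    class MockLogger : public Logger {   // mock used only by the test
    public:
        void write(const std::string& line) override { lines.push_back(line); }
        std::vector<std::string> lines;
    };

    int main() {                         // the "test"
        MockLogger mock;
        Worker w(mock);
        w.run();
        return (mock.lines.size() == 1 && mock.lines[0] == "work done") ? 0 : 1;
    }

If the final program only ever links one Logger implementation, the devirtualization the post mentions can remove the vtable indirection again.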
Cholo Lennon <chololennon@hotmail.com>: Mar 27 10:13AM -0300

On 26/03/18 17:20, Richard wrote:
>> Yes. That's one of the issues I have with TDD in C++: I have to make
>> my code more complex to use it.
> How does it become more complex?

Well, if you need to apply, for example, dependency injection in order to test your code, the complexity increases.

> with good abstractions, highly cohesive classes and loosely coupled
> deisgns. All of those things make for code that is simpler to
> understand.

Yes, in the end maybe you have better abstractions, but the code complexity is higher (more classes, interfaces, etc.). Don't get me wrong, unit testing is an integral part of my daily work, and it pays off. I really believe in testing (which is not the same as TDD; we've already discussed this in another thread in the past).

Regards

--
Cholo Lennon
Bs.As. ARG
wyniijj@gmail.com: Mar 26 11:03PM -0700

On Monday, 26 March 2018 at 19:58:52 UTC+8, Rick C. Hodgin wrote:
> never know for sure until you try.
> --
> Rick C. Hodgin

I read your original post again and found I had missed the point that you were trying to do 'remote debugging'. I had such experiences long ago on DOS (using WATCOM C/C++, IIRC; the so-called 'advanced' feature may have existed for a long time). Such things are no longer necessary on Linux.

From the video link you provided, I noticed a couple of things:

1: In a serious C++ program, a catch-all block in main is nearly necessary, IMO. Otherwise, the behaviour of some throws that happen in the program (possibly in included code) may effectively be undefined. try/catch should not be described (by the standard) and therefore understood as *the* standard error handling mechanism, because the general business of error handling cannot be done correctly and nicely with it alone (try/catch and setjmp were once called the advanced error handling mechanisms).

2: In C++, "what you see might not be what you think". This applies to general C++ programs and to point 1 as well, since the behaviour of C++ code depends heavily on what the header files contain, which is normally invisible to programmers and also subject to change. Another thing is classes containing virtual members (another topic in this forum mentioned this); there are also things hidden from the average understanding there, but I am not going to say more about that here.

Let's divide bugs into 3 categories:
a) bugs in the logic of solving the problem
b) bugs in the implementation of that problem-solving
c) bugs in the underlying libraries or the compiler

My habit is to limit development tool dependencies to an essential minimum. Using a debugger to find bugs is not the primary resort in general, except for c). Besides, nowadays programs are actually composed of many concurrently running sub-programs, where you can't use a debugger. When I have to do many 'step's while debugging, I just do it, even 1000 steps, considering that such a thing won't happen twice.
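A minimal sketch (not from the post above) of the catch-all block in main that point 1 recommends. One reason to catch everything there: if an exception is never caught, std::terminate is called, and whether the stack is unwound first is implementation-defined.

    #include <exception>
    #include <iostream>
    #include <stdexcept>

    int main()
    {
        try {
            // ... the real program would go here ...
            throw std::runtime_error("something failed");
        }
        catch (const std::exception& e) {
            std::cerr << "fatal: " << e.what() << '\n';   // known exception types
            return 1;
        }
        catch (...) {
            std::cerr << "fatal: unknown exception\n";    // the catch-all
            return 1;
        }
    }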
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 27 05:49AM -0700 > of many concurrently running sub-programs. You can't use debugger. > In time I had to do many 'step's in debuging, I just do it, even 1000 > steps, in considering that such a thing won't happen twice. It's fairly rare any more that I have algorithms which operate in a manner other than I expect. My developer skills are mature and I do know how to write the algorithms I intend, both syntactically and efficiently. So the errors are rarely in my coding skills. I presume this is true for nearly all mature developers as well. The things I check are mostly to make sure I typed in everything as I intended. Despite my best efforts in writing and reviewing what I wrote, I still have mistakes in coding get past me. I have no idea how it happens either because in my validation I can even re-read a thing I wrote as I intended it to be, only to have it mechanically translated through my fingers into something else. It's like I'm completely blind to what it really says because I'm remembering it from my intention in memory, rather than what's really there. This, of course, happens in the moment. If I come back to something a day later it's not like that. Then I read what's actually there. In any event, I typically write a few hundred lines of code, and then go in and single-step through it and test it all, then go on and continue on again. The edit-and-continue debugger allows me to fix the bugs rapidly as I'm sitting there with solid validation or verification that the code works or doesn't work. The Visual Studio Debugger is a very powerful tool. It's served as the model for what I intend to have in CAlive, though with my goal in CAlive to be a much faster debugger, one like what we saw back in the days of CodeView on text-based screens. -- Rick C. Hodgin |
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |