- No exceptions, no STL - 3 Updates
- High horse - 9 Updates
- calls to hash function in unordered_set - 3 Updates
- Another interesting Cpp.chat - 1 Update
Melzzzzz <Melzzzzz@zzzzz.com>: Jun 11 08:20PM Rant: my new job at embedded programming ;) So I am now C/C++ programmer... -- press any key to continue or any other to quit... |
woodbrian77@gmail.com: Jun 11 02:40PM -0700 On Monday, June 11, 2018 at 3:20:45 PM UTC-5, Melzzzzz wrote: > Rant: my new job at embedded programming ;) > So I am now C/C++ programmer... Probably if we had static exceptions: https://www.reddit.com/r/cpp/comments/8iw72i/p0709_r0_zerooverhead_deterministic_exceptions/ you could use both exceptions and STL. My hope is that these will be available before the committee formally adopts them. Brian Ebenezer Enterprises - In G-d we trust. http://webEbenezer.net |
Ian Collins <ian-news@hotmail.com>: Jun 12 09:46AM +1200 > you could use both exceptions and STL. My hope is > that these will be available before the committee > formally adopts them. For once I agree with you :) -- Ian. |
Juha Nieminen <nospam@thanks.invalid>: Jun 11 07:56AM > I once had a bug that hung around for 18 months. Once we understood it > we could set up the correct environment, and it would fail most days. > Not every day though. No reasonable unit test can be run that long. The idea of TDD is not to catch every single bug in existence. Its idea is to catch simple bugs *immediately* when they are written. The sooner in the development process a bug is caught, the better. (TDD also has the beneficial side effect of building checks and guards into the testing procedure: even if a particular piece of code doesn't fail the tests written explicitly for it, but it affects something else elsewhere that starts failing as a result, the tests written for that something else will potentially catch the problem immediately.) |
Daniel <danielaparker@gmail.com>: Jun 11 04:56AM -0700 On Monday, June 11, 2018 at 3:57:13 AM UTC-4, Juha Nieminen wrote: > The idea of TDD is not to catch every single bug in existence. Its idea > is to catch simple bugs *immediately* when they are written. The sooner > in the development process a bug is caught, the better. That's not how TDD evangelists such as Robert Martin characterize TDD. They characterize it as a methodology where the tests completely capture the requirements, and simultaneously validate those requirements. Martin doesn't budge on the tests being the specification for the application; check out some of the archived discussions on comp.software.extreme-programming. Also, do skilled developers make "simple bugs"? I don't observe that, at least for my understanding of "simple bugs". Check out some popular repositories on github: do you see simple bugs in their issue logs? You'll see issues for cross-platform compiler differences, and defects in attempts at solving complicated problems, but "simple bugs"? For example, in github projects that involve text processing, given the near infinite overhead of C++ streams (so much for "you don't pay for what you don't use"), there seems to be a recurring interest either in providing a complete replacement for C++ floating point conversion following netlib code and ACM papers (e.g. RapidJson), or alternatively in attempting naive floating point conversion (e.g. sajson, pjson, gayson) with code like

    if (*s == '.') {
        ++s;
        double fraction = 1;
        while (isdigit(*s)) {
            fraction *= 0.1;
            result += (*s++ - '0') * fraction;
        }
    }

The latter, of course, can never be made correct, no matter how many unit tests are made to pass, and even for the former it is hard to clear the issue log completely. Daniel |
boltar@cylonHQ.com: Jun 11 01:57PM On Mon, 11 Jun 2018 04:56:51 -0700 (PDT) >The latter, of course, can never be made correct, no matter how many unit >tests are made to pass, and the former is hard to clear the issue log >completely. I'm obviously missing something. Apart from being a pointless rewrite of atof() and having no limit on how long the fraction read can be, what exactly is wrong with the above code? As far as I can see it will produce the correct result until fraction becomes zero. You could also do "atoi(s++) / pow(10,strlen(s))" but I doubt it would be any more accurate or quicker. |
Paavo Helde <myfirstname@osa.pri.ee>: Jun 11 05:06PM +0300 On 11.06.2018 14:56, Daniel wrote: > Also, do skilled developers make "simple bugs"? I don't observe that, at > least for my understanding of "simple bugs". Check out some popular > repositories on github, do you see simple bugs in their issue logs? This is like proposing that pedestrian bridges should be 20 cm wide with no railings because skilled walkers can walk straight anyway. Yes they can, most of the time. I suspect that if there are no simple bugs in the issue logs this is because these have been caught by unit tests before commit, that's exactly what the unit tests are meant for. |
Daniel <danielaparker@gmail.com>: Jun 11 07:43AM -0700 > the above code? As far as I can see it will produce the correct result > until > fraction becomes zero. You could also do "atoi(s++) / > pow(10,strlen(s))" but I doubt it would be any more accurate or quicker. As you continue to perform the multiplication operations in the loop, you are going to compute values that cannot be accurately represented in floating-point variables. This error will accumulate over time and result in the output of incorrect digits. These issues are well known, and code written this way can never pass round trip tests of any breadth. Nevertheless, some projects do use naive conversion methods, motivated by the sheer slowness of the standard library conversion functions, such as sprintf, even compared to the dtoa implementation on netlib by David Gay (http://www.netlib.org/fp/dtoa.c) that is generally regarded as a safe implementation. For a discussion of the issues with floating point conversion, see http://kurtstephens.com/files/p372-steele.pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.4656&rep=rep1&type=pdf https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf Daniel |
Daniel <danielaparker@gmail.com>: Jun 11 08:11AM -0700 On Monday, June 11, 2018 at 10:07:10 AM UTC-4, Paavo Helde wrote: > I suspect that if there are no simple bugs in the issue logs this is > because these have been caught by unit tests before commit, that's > exactly what the unit tests are meant for. While popular github projects are generally well covered by unit tests, they don't appear to my eye to be following TDD practices, or to aspire to "100% coverage" (whatever that means). My own view is that the main reason there are no simple bugs on master is the disinclination of developers to embarrass themselves in front of the world, and a consequent increase in alertness and attentiveness. Daniel |
Paavo Helde <myfirstname@osa.pri.ee>: Jun 11 09:05PM +0300 On 11.06.2018 18:11, Daniel wrote: > While popular github projects are generally well covered by unit tests, they > don't appear to my eye to be following TDD practices, or aspire to "100% > coverage" (whatever that means.) If they are well covered, this means that the unit tests are there for a purpose, as nobody is forcing the developers to use them. That purpose might not be TDD (which is only tangentially related) or "100% test coverage" (which is not a meaningful purpose and is impossible anyway). For cross-platform projects an obvious purpose of unit tests would be to make sure that at least the basic functionality works after porting the code to a new hardware or software environment. However, one can just as well use the test suite for ensuring that one's changes have not broken anything important. This has nothing to do with TDD. |
Daniel <danielaparker@gmail.com>: Jun 11 11:30AM -0700 On Monday, June 11, 2018 at 2:05:43 PM UTC-4, Paavo Helde wrote: > code to a new hardware or software environment. However, one can as well > to use the test suite for ensuring that one's changes have not broken > anything important. This has nothing to do with TDD. Agreed, on all points :-) Particularly about the "100%", which I find especially irritating. There are projects that are reporting a "100%" coverage from something called "coveralls", but I'm pretty sure that's not actually a measure of, say, the kind of coverage that James Kuyper talked about in another post. Daniel |
"Öö Tiib" <ootiib@hot.ee>: Jun 11 01:33PM -0700 On Monday, 11 June 2018 21:30:34 UTC+3, Daniel wrote: > coverage from something called "coveralls", but I'm pretty sure that's not > actually a measure of, say, the kind of coverage that James Kuyper talked > about in another post. The 100% coverage typically means that every line of code was executed by tests at least once. That concept is perhaps useful for developers in interpreted languages, who don't have a compiler and so could otherwise commit code with syntax errors into the repo. For a good program that "coverall" number is certainly too low a bar, and so the whole metric is a red herring. |
Frank Tetzel <s1445051@mail.zih.tu-dresden.de>: Jun 11 02:48PM +0200 Hi, can anybody give me a reasonable explanation why the hash function for a simple unordered_set is called twice as often as I would expect? Example:

    #include <iostream>
    #include <unordered_set>

    struct MyHash{
        static int numberOfCalls;
        size_t operator()(int i) const noexcept{
            ++numberOfCalls;
            //return std::hash<int>{}(i);
            return i;
        }
    };
    int MyHash::numberOfCalls = 0;

    int main(){
        std::unordered_set<int,MyHash> set;
        std::cout << MyHash::numberOfCalls << '\n';
        set.reserve(1024);
        std::cout << MyHash::numberOfCalls << '\n';
        for(int i=0; i<1024; ++i){
            set.emplace(i);
        }
        std::cout << MyHash::numberOfCalls << '\n';
        return 0;
    }

output: 2047. It always seems to be (inserted_elements * 2) - 1. When adding 2 elements, it hashes 3 times. When adding 1 element, it only hashes once. What is happening here? Why is the hash calculated twice? What is so special about the last or first element? Btw, I'm running gcc 8.1.1. Best regards, Frank |
Ike Naar <ike@iceland.freeshell.org>: Jun 11 05:17PM > only hashes once. What is happening here? Why is the hash calculated > twice? What is so special about the last or first element? > Btw, I'm running gcc 8.1.1. Here, with g++ 4.8.4 it also hashes 2047 times, but if 'noexcept' is removed from MyHash::operator() it hashes 1024 times. |
"Öö Tiib" <ootiib@hot.ee>: Jun 11 01:13PM -0700 On Monday, 11 June 2018 15:49:10 UTC+3, Frank Tetzel wrote: > only hashes once. What is happening here? Why is the hash calculated > twice? What is so special about the last or first element? > Btw, I'm running gcc 8.1.1. The standard library caches potentially slow and throwing hashes. For example, in the first standard library that happened under my hand right now (and it seems at least 4 years old) I see in its bits/hashtable.h:

    template<typename _Tp, typename _Hash>
      using __cache_default
        = __not_<__and_<// Do not cache for fast hasher.
                        __is_fast_hash<_Hash>,
                        // Mandatory to make local_iterator default
                        // constructible and assignable.
                        is_default_constructible<_Hash>,
                        is_copy_assignable<_Hash>,
                        // Mandatory to have erase not throwing.
                        __detail::__is_noexcept_hash<_Tp, _Hash>>>;

So it does not cache your hashes and instead calls the hash function whenever it needs to. It needs to do so comparatively few times here because you called reserve; otherwise it would likely need to call it more times. Also, if you want your MyHash to be "slow" but noexcept, then in the standard library that I see it can be done by disabling its copy-assignment:

    MyHash& operator=(const MyHash&) = delete;

Good to know when you happen to have a relatively slow non-throwing hasher. ;) |
woodbrian77@gmail.com: Jun 11 08:49AM -0700 I enjoyed this Cpp.chat. Jon and Phil have Ben Deane and Gašper Ažman on and discuss an interesting proposal for 0202 ++C: https://duckduckgo.com/?q=+Cpp.chat+%22Ben+Deane%22++%22Ga%C5%A1per+A%C5%BEman%22&t=ffab&ia=videos&iax=videos&iai=xm4betb9xjc The only thing is it will probably be 7 or more years before I'm able to use this in my library code: https://github.com/Ebenezer-group/onwards Better late than never I guess. Brian Ebenezer Enterprises - Enjoying programming again. http://webEbenezer.net |