- neoGFX Sample Application - Chess - 3 Updates
- "GotW #97: Assertions (Difficulty: 4/10)" by Herb Sutter - 12 Updates
- Halting Problem Final Conclusion [2021 update to my 2004 statement](Boolean?) - 1 Update
- atomic<>-costs - 3 Updates
- 2019 wish list - 3 Updates
- Strange compiler warning... - 1 Update
- [Jesus Loves You] Time is approaching - 2 Updates
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Jan 06 07:18PM Hi! I feel like a slight diversion from core neoGFX work so I am going to create a chess game (including AI). This will be the second time I have created a chess engine, the first time being over a decade ago and that engine could beat Microsoft Chess Titans level 10. https://i.imgur.com/varREDJ.png Message ends. /Flibble -- π |
Bart <bc@freeuk.com>: Jan 06 07:45PM On 06/01/2021 19:18, Mr Flibble wrote: > created a chess engine, the first time being over a decade ago and that > engine could beat Microsoft Chess Titans level 10. > https://i.imgur.com/varREDJ.png Nice demo pic. Reminds me of the first time I had my hands on a colour display, around 1979 and working with a PDP11 plus Fortran. Displaying a chess board was one of my early tests (but since I couldn't play chess at all, it was limited to the graphics). On yours, you might want to tweak the colours though; one side (black?) looks like it's in black and white, and the other in colour. |
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Jan 06 08:26PM On 06/01/2021 19:45, Bart wrote: > Nice demo pic. Reminds me of the first time I had my hands on a colour display, around 1979 and working with a PDP11 plus Fortran. > Displaying a chess board was one of my early tests (but since I couldn't play chess at all, it was limited to the graphics). > On yours, you might want to tweak the colours though; one side (black?) looks like it's in black and white, and the other in colour. X11 colours goldenrod and silver applied as a gradient to a base texture. I will add shininess at some point. /Flibble -- π |
Paavo Helde <myfirstname@osa.pri.ee>: Jan 06 10:19AM +0200 06.01.2021 00:47 Tiib kirjutas: > misremembering, absent minded, tired and fallible. We just learn, > memorize, recall, concentrate our focus and correct if we notice > as we go. Agreed. Here we are talking about assert lines which are themselves meant for validating other code. If I leave them in in production, then someone would need to take extra care of validating the validations. > When dealing with code base of hundreds thousands > of lines it is important to let every line you altered to be validated > by other people as it will catch most issues earlier. In my experience, no. Validating my code myself by writing assert lines has detected many more bugs in my code than code review by other people. This probably also depends on the project type. Imagine implementing something like a JPEG compression algorithm: what are the chances a human reviewer would notice an off-by-one mistake somewhere deep in the algorithm? |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 06 09:26AM On Tue, 2021-01-05, Paavo Helde wrote: > Yes, in release builds these are disabled, both for speed and for > avoiding spurious false alarms (when you write thousands of asserts, > some of those are bound to be buggy (i.e. over-cautious)). For the record, I'm against ever disabling asserts, and I'm against debug/release builds in general. But there are different schools of thought. Most people at my workplace agree with Paavo. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
spudisnotyourbud@grumpysods.com: Jan 06 09:36AM On 6 Jan 2021 09:26:20 GMT >For the record, I'm against ever disabling asserts, and I'm against >debug/release builds in general. But there are different schools of >thought. Most people at my workplace agree with Paavo. There is often no right answer. Would you want the avionic software in an aircraft to assert and die at 30K feet possibly rendering the aircraft uncontrollable, or would you want it to continue running with bad data and possibly put the aircraft into an unrecoverable dive? Hello Boeing! |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 06 12:41PM > aircraft to assert and die at 30K feet possibly rendering the aircraft > uncontrollable, or would you want it to continue running with bad data > and possibly put the aircraft into an unrecoverable dive? Hello Boeing! I don't intend to join yet another long discussion about assertions, but just to provide one other viewpoint: If I have an 'assert(expr)' in that code, it means "When I wrote this code I assumed expr would be true at this point. If it's not, it means I made a horrible design error, and I have no idea what the code will do from now on. Whatever it is, it's nothing that has ever been tested or thought of." and then, yes, I would prefer the software to die at 30K feet! Provided that there is a fallback ... Which there probably needs to be, anyway. I don't see assertions as very different from other things which lead to crashes: segfaults, division by zero ... they all protect me from the craziness which would otherwise follow. The main difference is I have to add the assertions manually. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jan 06 03:51PM +0100 On 05.01.2021 12:25, Paavo Helde wrote: > Yes, in release builds these are disabled, both for speed and for > avoiding spurious false alarms (when you write thousands of asserts, > some of those are bound to be buggy (i.e. over-cautious)). Consider classifying asserts based on computational cost, and disabling only the very costly (e.g. O(n) time check) asserts in releases. There is a difference between e.g. jet fighter flight control software that should just keep on working no matter what, and say a billing system. Which can greatly influence the choice of leaving fast assertions enabled or not. But e.g. for the latter, it seems to me better that the system crashes than that it produces incorrect bills. - Alf |
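Alf's idea of cost-classified asserts could be sketched roughly like this. This is only an illustration of the idea; the macro names and the `average` example are invented, not anything from the thread:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

// Sketch of a two-tier assert scheme: cheap O(1) checks stay enabled even
// in release (NDEBUG) builds, while costly O(n) checks compile away there.
// The macro names are made up for illustration.
#ifdef NDEBUG
    #define CHEAP_ASSERT(expr)  ((expr) ? (void)0 : std::abort())
    #define COSTLY_ASSERT(expr) ((void)0)
#else
    #define CHEAP_ASSERT(expr)  assert(expr)
    #define COSTLY_ASSERT(expr) assert(expr)
#endif

// Example use: the O(n) finiteness scan is the kind of check one might
// disable in release builds; the O(1) emptiness check stays on.
double average(const std::vector<double>& v) {
    CHEAP_ASSERT(!v.empty());  // O(1), enabled in every build
    COSTLY_ASSERT(std::all_of(v.begin(), v.end(),
                              [](double d) { return std::isfinite(d); }));  // O(n), debug only
    double sum = 0.0;
    for (double d : v) sum += d;
    return sum / static_cast<double>(v.size());
}
```

The classification is a build-time decision, so the release binary pays nothing for the disabled checks.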
Manfred <noname@invalid.add>: Jan 06 04:52PM +0100 On 1/6/21 9:19 AM, Paavo Helde wrote: > Here we are talking about assert lines which are themselves meant for > validating other code. If I leave them in in the production, then > someone would need to take extra care of validating of validations. Agreed on the agreement :) I like the "validate the validation" part. > In my experience, no. Validating my code by myself by writing assert > lines has detected much more bugs in my code than code review by other > people. I am also a fan of asserts - I tend to spread them around a lot, and I agree that as far as debugging goes they are probably more effective than code reviews - probably related to my programming habits. Nonetheless code reviews do have their value, especially with respect to code maintainability - some extra pairs of eyes /can/ make code more clear and readable. They can occasionally spot mistakes, if the reviewers are members of the same team and share knowledge about the problem domain. Reviews in general, however, are /very/ useful, especially when working in a team. > This probably also depends on the project type. Imagine implementing > something like JPEG compression algorithm, what are the chances a human > reviewer would notice an one-off mistake somewhere deep in the algorithm? This is also true: the problem domain plays a significant role in asserts' effectiveness, I guess because of how it impacts code structure. |
Manfred <noname@invalid.add>: Jan 06 05:08PM +0100 On 1/6/21 3:51 PM, Alf P. Steinbach wrote: >> some of those are bound to be buggy (i.e. over-cautious)). > Consider classifying asserts based on computational cost, and disabling > only the very costly (e.g. O(n) time check) asserts in releases. Hmm. I don't think it depends on performance - I mean the choice of whether to leave an assert in place or not. > system. Which can greatly influence the choice of leaving fast > assertions enabled or not. But e.g. for the latter, it seems to me > better that the system crashes than that it produces incorrect bills. This is indeed a good example (unrelated to computational cost, btw). However, I doubt that 'assert' is a good name for some piece of code that detects an error in billing and is supposed to abort a transaction. It probably falls into the category of software exceptions; asserts are supposed to be strictly tested before release. I mean, you surely verify whether an assert triggers during testing. If you want to code some consistency check to be executed as part of the released product, most probably you should choose some other construct than 'assert'. |
Paavo Helde <myfirstname@osa.pri.ee>: Jan 06 08:29PM +0200 06.01.2021 16:51 Alf P. Steinbach kirjutas: > system. Which can greatly influence the choice of leaving fast > assertions enabled or not. But e.g. for the latter, it seems to me > better that the system crashes than that it produces incorrect bills. To be honest, I would not trust myself to write jet fighter software, I'm too impatient for that. Fortunately our software is much more mundane and if it crashes or produces wrong results, nobody dies (at least not directly). On the other hand, performance is critical for our software and enabling all asserts in production is not an option. Actually we have a second flavor of assert which is also present in release builds, but this is used very sparingly because the resulting error messages would be "too technical" for our customers. Instead, there are lots of condition checks and explicit exception throwing, most of which are never triggered in production. |
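A sketch of what such an always-on check might look like: a macro that is compiled into release builds too and reports failure by throwing rather than aborting, so one operation can fail (with a possibly "too technical" message) while the process keeps running. All names here are invented for illustration, not from Paavo's codebase:

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical always-enabled check: unlike assert(), it survives NDEBUG
// and throws instead of calling abort(), so a caller can abort a single
// transaction instead of terminating the whole process.
struct check_failure : std::runtime_error {
    using std::runtime_error::runtime_error;
};

#define RELEASE_CHECK(expr)                                      \
    do {                                                         \
        if (!(expr)) {                                           \
            std::ostringstream os_;                              \
            os_ << "check failed: " << #expr << " at "           \
                << __FILE__ << ':' << __LINE__;                  \
            throw check_failure(os_.str());                      \
        }                                                        \
    } while (0)

// Illustrative use in a billing-style computation.
double bill_total(double unit_price, int quantity) {
    RELEASE_CHECK(unit_price >= 0.0);  // never silently emit a nonsense bill
    RELEASE_CHECK(quantity > 0);
    return unit_price * quantity;
}
```

The caller decides what a failure means: log and retry, abort the request, or rethrow to a top-level handler.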
Makis <Makis@gkarvounis.eu>: Jan 06 07:38PM +0100 > aircraft to assert and die at 30K feet possibly rendering the aircraft > uncontrollable, or would you want it to continue running with bad data > and possibly put the aircraft into an unrecoverable dive? Hello Boeing! The maxim should be: Don't die on assert! In any software! |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jan 06 06:47PM On Wed, 2021-01-06, Manfred wrote: > On 1/6/21 3:51 PM, Alf P. Steinbach wrote: ... > detects an error in billing, which is supposed to abort a transaction. > It probably falls into the category of software exception; asserts are > supposed to be strictly tested before release. Are you talking about the assert() macro specifically? I think a lot of people use it in a wider sense, so that "an assert" which fails could e.g. log and abort a transaction. In fact I think Stroustrup uses it that way in TC++PL, when he recommends not using assert(). The most important part of an assert is IMO that it establishes a precondition or invariant -- not exactly what happens if it does not hold. > want to code some consistency check to be executed as part of the > released product, most probably you should choose some other construct > than 'assert'. Alf didn't write about any consistency checks. Note that preconditions and invariants can be found in any software system. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
Manfred <noname@invalid.add>: Jan 06 08:34PM +0100 On 1/6/21 7:47 PM, Jorgen Grahn wrote: >> It probably falls into the category of software exception; asserts are >> supposed to be strictly tested before release. > Are you talking about the assert() macro specifically? The discussion upthread is explicitly about asserts that result in process termination if the test is enabled and fails, and may or may not be disabled in production code. This fits the assert() macro specifically, or any other equivalent. > I think a lot > of people use it in a wider sense, so that "an assert" which fails > could e.g. log and abort a transaction. In fact I think Stroustrup > uses it that way in TC++PL, when he recommends not using assert(). I'm OK with a feature (function or macro) which asserts in a broader sense; however, given that assert() is so well established and known, I think it is confusing to call something 'assert' that is non-fatal. > The most important part of an assert is IMO that it establishes a > precondition or invariant -- not exactly what happens if it does not > hold. I guess you mean if it is not fatal? 'Cause that's what this is all about. >> than 'assert'. > Alf didn't write about any consistency checks. Note that > preconditions and invariants can be found in any software system. Maybe preconditions and invariants would be a better term - but that's semantics, not the point. My point is that if you want that kind of test delivered in the released product, use something other than assert(). That's better than using assert() and #undef NDEBUG. |
Ian Collins <ian-news@hotmail.com>: Jan 07 08:45AM +1300 On 06/01/2021 22:26, Jorgen Grahn wrote: > For the record, I'm against ever disabling asserts, and I'm against > debug/release builds in general. But there are different schools of > thought. Most people at my workplace agree with Paavo. I share your views and workplace battles... If asserts are used correctly, they are there to protect against unrecoverable errors, so a controlled exit with a nice stack dump and core is the only safe course of action. If asserts are disabled, how does the code know it is safe to proceed? -- Ian. |
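A minimal sketch of the "controlled exit with a nice stack dump" Ian describes, using the glibc-specific backtrace() facility (Linux only; symbol names in the trace need linking with -rdynamic). The macro and function names are invented for illustration:

```cpp
// Linux/glibc-specific sketch: an assert that prints a stack trace to
// stderr before aborting, so a failure leaves both a trace and (if core
// dumps are enabled) a core file.
#include <execinfo.h>  // backtrace, backtrace_symbols_fd (glibc extension)
#include <cstdio>
#include <cstdlib>

[[noreturn]] inline void traced_assert_fail(const char* expr,
                                            const char* file, int line) {
    std::fprintf(stderr, "assertion failed: %s (%s:%d)\n", expr, file, line);
    void* frames[64];
    int n = backtrace(frames, 64);       // capture raw return addresses
    backtrace_symbols_fd(frames, n, 2);  // write symbolized frames to fd 2 (stderr)
    std::abort();                        // controlled exit; dumps core if enabled
}

#define TRACED_ASSERT(expr) \
    ((expr) ? (void)0 : traced_assert_fail(#expr, __FILE__, __LINE__))
```

A passing TRACED_ASSERT costs one branch; a failing one answers Ian's question the honest way: the code does not know it is safe to proceed, so it stops where the evidence is.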
olcott <NoOne@NoWhere.com>: Jan 06 01:04PM -0600 On 1/6/2021 11:05 AM, Malcolm McLean wrote: > OK, so there's a good insight here. Halts() is reducible to either return true, > return false, or do neither. We don't need any other code, because we are > only interested in one outcome. No, this is a huge mistake. When we consider which of the two values Halts() can return to H_Hat(), the correct answer is neither, because the invocation of Halts() by H_Hat() would be infinitely recursive if Halts() did not abort this invocation. There is never ever any case where any function can correctly return a value to its infinitely recursive invocation, not even if we sprinkle magic fairy dust. Although a decider must always eventually indicate a decision, it is not required to return a result to every caller. That so many people in this forum seemed to think otherwise seems quite nuts. They might as well have said that all infinite loops only iterate ten times. The term decider doesn't really have a standard meaning. In fact, it is lamentable that Sipser chose the terms decider and recognizer, since they seem to confuse students. Yuval Filmus, Assistant Professor in the Henry and Marilyn Taub Department of Computer Science at the Technion – Israel Institute of Technology. https://cs.stackexchange.com/questions/84433/what-is-decider > So it's clear that we can neither return "true" nor "false", because H_Hat will > invert it and make it incorrect. Pathological Self-Reference (Olcott 2004) is defined as the case where (a) a declarative sentence, (b) a polar question, or (c) a logical proposition cannot be resolved to yes/no or true/false because of self-reference. When we ask the polar question of which Boolean value Halts() can correctly return to H_Hat(), we see that this question has Pathological Self-Reference (Olcott 2004). 
> or abort the program, pass up a result to a > higher level (Most languages allow for this sort of thing, but Turing machines > don't) You reject the Church-Turing thesis? -- Copyright 2021 Pete Olcott "Great spirits have always encountered violent opposition from mediocre minds." Einstein |
Bonita Montero <Bonita.Montero@gmail.com>: Jan 06 03:09PM +0100 I wrote a little benchmark that measures the cost of atomic operations when a number of hardware threads are constantly hammering the same cacheline. Here it is:

#include <iostream>
#include <atomic>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <cstdlib>
#include <vector>
#include <thread>

using namespace std;
using namespace chrono;

int main()
{
    unsigned hc = thread::hardware_concurrency();
    if( !hc )
        return EXIT_FAILURE;
    uint64_t const ROUNDS = 1'000'000;
    auto benchmark = [&]( bool xadd )
    {
        mutex mtx;
        condition_variable cvReady, cvStart;
        unsigned ready;
        bool start;
        uint64_t count;
        atomic<uint64_t> atomicValue;
        uint64_t totalTicks;
        uint64_t totalFails;
        auto atomicThread = [&]()
        {
            using hrc_tp = time_point<high_resolution_clock>;
            unique_lock<mutex> lock( mtx );
            ++ready;
            cvReady.notify_one();
            for( ; !start; cvStart.wait( lock ) );
            uint64_t n = count;
            lock.unlock();
            uint64_t ref = atomicValue;
            hrc_tp start = high_resolution_clock::now();
            uint64_t fails = 0;
            if( !xadd )
                for( ; n; --n )
                    for( ; !atomicValue.compare_exchange_weak( ref, ref + 1, memory_order_relaxed ); ++fails );
            else
                for( ; n; --n )
                    atomicValue.fetch_add( 1, memory_order_relaxed );
            uint64_t ns = duration_cast<nanoseconds>( high_resolution_clock::now() - start ).count();
            lock.lock();
            totalTicks += ns;
            totalFails += fails;
        };
        for( unsigned t = 1; t <= hc; ++t )
        {
            vector<thread> vt;
            vt.reserve( t );
            ready = 0;
            start = false;
            count = ROUNDS;
            totalTicks = 0;
            totalFails = 0;
            for( unsigned i = 1; i <= t; ++i )
                vt.emplace_back( atomicThread );
            unique_lock<mutex> lock( mtx );
            for( ; ready != t; cvReady.wait( lock ) );
            start = true;
            cvStart.notify_all();
            lock.unlock();
            for( thread &thr : vt )
                thr.join();
            double ns = (double)(int64_t)totalTicks / (int)t / ROUNDS;
            double failsPerSucc = (double)(int64_t)totalFails / (int)t / ROUNDS;
            cout << t << " threads: " << ns << "ns";
            if( !xadd )
                cout << ", avg fails: " << failsPerSucc;
            cout << endl;
        }
    };
    cout << "xchg:" << endl;
    benchmark( false );
    cout << "xadd:" << endl;
    benchmark( true );
}

I've got a 2.9 GHz 64-core Threadripper 3990X, and when all 128 threads are XCHGing on the same cacheline, each successful operation takes about 1,500 ns, i.e. about 4,350 clock cycles until there's a successful exchange. There are about 4 swap failures until a fifth succeeds. A successful XADD takes about 240 ns, i.e. about 700 clock cycles. So show me your data! |
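Bonita's CAS loop relies on a detail worth spelling out: when compare_exchange fails, it reloads the expected-value argument with the value actually observed, so the next attempt works with fresh data. A minimal single-threaded illustration (using the _strong variant, since _weak may fail spuriously; the helper function is invented for illustration):

```cpp
#include <atomic>
#include <cstdint>

// Demonstrates compare_exchange semantics: a deliberately stale guess
// fails and gets overwritten with the observed value, after which the
// retry succeeds - the same mechanism as the benchmark's inner CAS loop.
uint64_t cas_demo(uint64_t start) {
    std::atomic<uint64_t> value{start};
    uint64_t expected = 0;  // deliberately stale guess
    bool first = value.compare_exchange_strong(expected, expected + 1);
    // On failure, 'expected' now holds the value actually seen, so the
    // retry computes the correct desired value (expected + 1).
    if (!first) {
        value.compare_exchange_strong(expected, expected + 1);
    }
    return value.load();
}
```

Under contention this reload is what keeps every retry consistent; the benchmark counts each failed attempt in `fails` before the increment finally lands.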
Bonita Montero <Bonita.Montero@gmail.com>: Jan 06 05:55PM +0100 Sorry, under Windows this test doesn't work exactly as intended if you have more than 64 threads, since the scheduler can only handle up to 64 threads per processor group. So I did a dirty workaround:

#if defined(_MSC_VER)
#include <Windows.h>