- Simulating Halt Decider (SHD) Copyright (c) 2022 Mr Flibble - 1 Update
- Ben agrees that H(D,D)==0 according to its criterion - 2 Updates
- I was wrong about gcc's inlining behavior - 4 Updates
- Is Copy-and-swap idiom too slow in assignment operator? - 2 Updates
- converting a 700,000+ line Fortran 77 plus 50,000+ line C++ program to C++, part 1 - 1 Update
- why use static_cast ? - 1 Update
| Mr Flibble <flibble@reddwarf.jmc.corp>: Oct 24 07:19PM +0100

Signaling (Simulating) Halt Decider (SHD) Copyright (c) 2022 Mr Flibble:

I have an idea for a signaling simulating halt decider that forks the simulation into two branches if the input calls the halt decider, as per [Strachey 1965]'s "Impossible Program":

    void P(void (*x)())
    {
        if (H(x, x))
            infinite_loop: goto infinite_loop;
        return;
    }

    int main()
    {
        std::cout << "Input halts: " << H(P, P) << std::endl;
    }

When the simulator detects the call to H in P, it forks the simulation into a non-halting branch (returning 0 to P) and a halting branch (returning 1 to P) and continues the simulation of these two branches in parallel.

If the non-halting branch is determined to halt AND the halting branch is determined to not halt, then pathology is detected and reported via a sNaP (signaling Not a Program) signal, analogous to IEEE 754's sNaN (signaling Not a Number) signal.

If EITHER branch is determined to be correctly decided, then that will be the decision of the halting decider.

Crucially, this scheme will handle (and correctly decide) the following case, whereby the result of H is discarded by the input:

    void Px(void (*x)())
    {
        (void) H(x, x);
        return;
    }

Obviously my idea necessitates extending the definition of a halt decider:

1) Decider decision is HALTS if the input halts.
2) Decider decision is NON-HALTING if the input does not halt.
3) Decider rejects pathological input as invalid by signaling sNaP.

https://github.com/i42output/halting-problem#readme

/Flibble |
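A minimal sketch of how the two branch verdicts described above could be combined, purely to illustrate the decision rules in the post. The enum and function names are invented, the simulation machinery itself is not shown, and it assumes one reading of "correctly decided": that a branch's observed behaviour matches the answer that was fed to it.

    enum class Verdict  { Halts, NonHalting, Undecided };
    enum class Decision { Halts, NonHalting, SNaP, Undecided };

    // branchAssumingNonHalting: what the simulated input did when H returned 0 to it.
    // branchAssumingHalting:    what the simulated input did when H returned 1 to it.
    Decision combineBranches(Verdict branchAssumingNonHalting,
                             Verdict branchAssumingHalting)
    {
        // Pathology: the input contradicts H in both branches, so signal sNaP.
        if (branchAssumingNonHalting == Verdict::Halts &&
            branchAssumingHalting == Verdict::NonHalting)
            return Decision::SNaP;

        // A branch whose observed behaviour matches its assumed answer was
        // "correctly decided"; its verdict becomes the decider's decision.
        if (branchAssumingNonHalting == Verdict::NonHalting)
            return Decision::NonHalting;
        if (branchAssumingHalting == Verdict::Halts)
            return Decision::Halts;

        // Neither branch has reached a verdict; the post does not specify this case.
        return Decision::Undecided;
    }

Under this reading, P above yields sNaP (the branch given 0 halts while the branch given 1 loops forever), and Px yields HALTS, since both of its branches halt and the halting branch is consistent with its assumption.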
| olcott <polcott2@gmail.com>: Oct 23 07:59PM -0500

Ordinary code analysis proves that H(D,D)==0 according to its criterion. I have a friend with a master's degree in computer science who agreed to this after a 75-minute phone discussion carefully analyzing the first three pages of my paper. He also immediately agreed with the Sipser-approved criterion with no discussion needed.

Original message:
https://groups.google.com/g/comp.theory/c/YmACFEiAoNk/m/wujVvKPvAAAJ

On 10/17/2022 10:23 AM, Ben Bacarisse wrote:
>> But it doesn't meet the criteria, since it never correctly determines
>> that the correct simulation of its input is non-halting.
> Are you dancing round the fact that PO tricked the professor?

<Sipser Approved Verbatim Abstract>
MIT Professor Michael Sipser has agreed that the following verbatim paragraph is correct (he has not agreed to anything else in this paper):

If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D would never stop running unless aborted then H can abort its simulation of D and correctly report that D specifies a non-halting sequence of configurations.
</Sipser Approved Verbatim Abstract>

*to this paper: Rebutting the Sipser Halting Problem Proof*
https://www.researchgate.net/publication/364302709_Rebutting_the_Sipser_Halting_Problem_Proof

> -- the one no one cares about. D(D) halts (so H is not halt decider),
> but D(D) would not halt unless H stops the simulation.
> H /can/ correctly determine this silly criterion (in this one case)

This is the criterion that Ben erased from his reply:

On 10/17/2022 12:11 AM, olcott wrote:
> because..." was one way he used to put it before finding a more
> tricky wording. For years, the project has simply been to find
> words he can dupe people with.

*It is implausible that professor Sipser could be duped*
*into approving an abstract to a paper with this title*
*Rebutting the Sipser Halting Problem Proof* |
| olcott <polcott2@gmail.com>: Oct 24 01:17PM -0500

On 10/23/2022 7:59 PM, olcott wrote:
> *It is implausible that professor Sipser could be duped*
> *into approving an abstract to a paper with this title*
> *Rebutting the Sipser Halting Problem Proof*

I emailed Ben a copy of this and invited him to make a rebuttal. Since he responded to my first email I know that it reached him. I am sure that he knows there is no correct rebuttal and that is his reason for not responding.

--
Copyright 2022 Pete Olcott

"Talent hits a target no one else can hit;
Genius hits a target no one else can see."
Arthur Schopenhauer |
| Juha Nieminen <nospam@thanks.invalid>: Oct 24 06:45AM

> If your program is not big or release builds are not frequent, you can try -flto.
> It reduces difference between local and global things.

I don't think you understand. I didn't move the function implementations to the header and make them 'inline'. They were still in their own source file. The only thing I did was add the word 'inline' in front of the functions, and the code became as fast as previously.

When they were plain functions at global scope they weren't being inlined into each other. Adding 'inline' made gcc inline them, making the code fast once again. |
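A minimal sketch of the kind of change being described, assuming the functions live in a single source file and are only called from there (the class, member, and file names are invented for illustration):

    // matrix.cc -- hypothetical source file; nothing moves to a header, only
    // the 'inline' keyword is added to the definitions.
    #include <cstddef>

    class Matrix
    {
     public:
        double cell(std::size_t index) const;
        double sumOfCells(std::size_t amount) const;

     private:
        double mData[16];
    };

    // Adding 'inline' here is what nudges gcc into inlining the call below.
    // Note that an inline function must be defined in every translation unit
    // that uses it, so this is only safe while cell() is used from this file alone.
    inline double Matrix::cell(std::size_t index) const
    {
        return mData[index];
    }

    double Matrix::sumOfCells(std::size_t amount) const
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < amount; ++i)
            sum += cell(i);   // with cell() inlined, gcc can optimize this loop freely
        return sum;
    }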
| Juha Nieminen <nospam@thanks.invalid>: Oct 24 07:00AM

> In fact the effect of inlining is only that large because the calling
> convention is inefficient at least on x86/x64.

The original "classical" reason for inlining in the distant past was that it avoids the overhead of an extra function call, which adds many clock cycles. However, that's not the reason why inlining makes code faster nowadays. Function call overhead is extremely small, even inconsequential in most cases.

By far the main reason why inlining makes code faster nowadays, especially in number-crunching code, is that it allows the compiler to optimize the calling code with things like auto-vectorization. A function call (that doesn't get inlined) in a tight inner loop acts effectively as an optimization barrier: the compiler can't see what the function is doing, so it can't auto-vectorize, move things around, etc. (For example, if the function reads some value from an array and the compiler can inline that read, that helps it optimize those array accesses.)

>> It appears that gcc uses a different inlining strategy when dealing
>> with local and global functions (that have no 'inline' keyword).
> I guess it does for good reasons.

Well, it's enormously inconvenient. Having your number-crunching class become almost twice as slow just because you did some refactoring and cleaning up (that shouldn't affect its performance) is annoying. (Good thing I actually had a benchmark and noticed.)

I really had to go out of my way to make as many functions 'inline' as I reasonably could: moving some of the function implementations to the header, making others private (so that they could be declared 'inline' in the source file), and so on. Even then I had to compromise by having the class become about 20% slower (in order to avoid excessive amounts of code in the header file). Really annoying. |
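A small illustration of the "optimization barrier" point above. The file and function names are invented, and how much the visible version actually gains depends on the flags (e.g. -O3, and -ffast-math for reordering the floating-point reduction):

    // barrier.cc -- hypothetical example of a call acting as an optimization barrier.
    #include <cstddef>

    double transformOpaque(double x);   // defined in some other .cc; gcc cannot see its
                                        // body here, so every loop iteration must call it

    inline double transformVisible(double x)   // body visible here, so it gets inlined
    {
        return x * 1.5 + 2.0;
    }

    double sumOpaque(const double* data, std::size_t n)
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += transformOpaque(data[i]);   // opaque call: no vectorization across it,
                                               // no reordering, registers clobbered per the ABI
        return sum;
    }

    double sumVisible(const double* data, std::size_t n)
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += transformVisible(data[i]);  // inlined: the compiler sees plain arithmetic
                                               // and can unroll or vectorize the whole loop
        return sum;
    }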
| Michael S <already5chosen@yahoo.com>: Oct 24 10:11AM -0700

On Monday, October 24, 2022 at 9:45:51 AM UTC+3, Juha Nieminen wrote:
> When they were plain functions at global scope they weren't being inlined
> into each other. Adding 'inline' made gcc inline them, making the code
> fast once again.

Or maybe you don't understand. I am guessing that the decisive difference between your former variant and the slow one is that in the former variant the compiler was able to figure out that the function is called exactly once. For gcc that happens to be the strongest criterion for inlining, a criterion that beats almost anything short of -O0 and, maybe, in some situations, -Og.

When in your last variant you added the inline keyword, gcc did the inlining due to some other, weaker heuristic. Something like: "It's possibly called more than once and it's not particularly short, but it's not outrageously long either, so while I think that inlining here is not a very good idea, I am willing to respect the explicit wish of the programmer."

With -flto, on the other hand, it's possible that the compiler will again be able to figure out that the function in question is called once. |
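For reference, the "called exactly once" behaviour described above corresponds to gcc's -finline-functions-called-once option, which gcc turns on at its usual optimization levels. A sketch, with invented function names:

    // called_once.cc -- sketch of the "called exactly once" situation.

    static double expensiveHelper(const double* data, unsigned n)   // static, one call site
    {
        double sum = 0.0;
        for (unsigned i = 0; i < n; ++i)
            sum += data[i] * data[i];
        return sum;
    }

    double publicEntryPoint(const double* data, unsigned n)
    {
        // Because expensiveHelper() has internal linkage and only this one caller,
        // gcc will normally fold it into this function even though it is not
        // marked inline, and the out-of-line copy disappears entirely.
        return n != 0 ? expensiveHelper(data, n) / n : 0.0;
    }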
| scott@slp53.sl.home (Scott Lurndal): Oct 24 05:54PM

> For gcc that happens to be the strongest criterion for inlining, a
> criterion that beats almost anything short of -O0 and, maybe, in some
> situations, -Og.

The GCC criteria for inlining are fundamentally based on the size of the function being inlined, as shown in an earlier post. It certainly has no problem inlining the same function call several times in a single function, so long as the code footprint isn't too large.

And if you really, really want to blow up the function's icache footprint, GCC allows overriding its internal heuristics with the "always_inline" attribute. |
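A small sketch of that attribute in use (the function names are invented):

    // always_inline.cc -- forcing gcc's hand regardless of its size heuristics.

    __attribute__((always_inline))
    inline double squaredDistance(double x0, double y0, double x1, double y1)
    {
        const double dx = x1 - x0;
        const double dy = y1 - y0;
        return dx * dx + dy * dy;
    }

    double nearestSquaredDistance(const double* xs, const double* ys,
                                  unsigned n, double px, double py)
    {
        double best = 1e300;
        for (unsigned i = 0; i < n; ++i)
        {
            // Each of these calls is expanded in place; gcc reports an error
            // if it cannot inline an always_inline function.
            const double d = squaredDistance(xs[i], ys[i], px, py);
            if (d < best)
                best = d;
        }
        return best;
    }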
| Juha Nieminen <nospam@thanks.invalid>: Oct 24 07:10AM

> this->a = other.a;
> this->b = other.b;
> ...

It depends on the situation. For example, suppose you are assigning one (object similar to) std::string to another: if the target already has enough capacity to contain the source string, then no new allocation is needed and it's just a simple, straightforward string copy. If the copy-and-swap idiom had been used here, a new dynamic memory allocation would have been done and the existing one deleted, for no good reason.

The difference becomes even more drastic with classes like std::list (or any class with similar functionality): if the target already has elements in it, then assignment can just assign the source elements onto the existing target elements, thus avoiding unneeded extra allocations. The copy-and-swap idiom would make a dynamic allocation for every single element to be copied, and then delete the existing ones, for no reason.

Of course in other situations it doesn't really make much of a difference. |
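A minimal sketch contrasting the two assignment strategies for a string-like class; the class and member names are invented, and small-string optimization and similar refinements are ignored for brevity:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>

    class MyString
    {
     public:
        MyString() = default;

        MyString(const MyString& other)
            : mData(new char[other.mSize]), mSize(other.mSize), mCapacity(other.mSize)
        {
            if (mSize != 0)
                std::memcpy(mData, other.mData, mSize);
        }

        ~MyString() { delete[] mData; }

        // In-place assignment: reuses the existing buffer when it is big enough,
        // so the common "capacity already sufficient" case allocates nothing.
        MyString& operator=(const MyString& other)
        {
            if (this == &other)
                return *this;
            if (other.mSize > mCapacity)
            {
                delete[] mData;                   // reallocate only when forced to
                mData = nullptr;                  // keep the object sane if new throws
                mCapacity = 0;
                mData = new char[other.mSize];
                mCapacity = other.mSize;
            }
            mSize = other.mSize;
            if (mSize != 0)
                std::memcpy(mData, other.mData, mSize);
            return *this;
        }

        // Copy-and-swap alternative, shown for contrast: it always builds a fresh
        // allocation (in the by-value parameter) and discards the old buffer, even
        // when the old buffer could have been reused:
        //
        //     MyString& operator=(MyString other) { swap(*this, other); return *this; }

        friend void swap(MyString& a, MyString& b) noexcept
        {
            std::swap(a.mData, b.mData);
            std::swap(a.mSize, b.mSize);
            std::swap(a.mCapacity, b.mCapacity);
        }

     private:
        char* mData = nullptr;
        std::size_t mSize = 0;
        std::size_t mCapacity = 0;
    };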
| JiiPee <kerrttuPoistaTama11@gmail.com>: Oct 24 05:38PM +0300

On 24/10/2022 10:10, Juha Nieminen wrote:
> It depends on the situation.

Good answer, good point: don't just blindly follow a given rule. But I wonder why the experts in those videos don't say much about this, and instead present it as "this is the elegant way to do it"? |
| Tim Rentsch <tr.17687@z991.linuxsc.com>: Oct 24 12:42AM -0700

> We are converting a 700,000+ Fortran 77 lines of code plus 50,000+ C++
> lines of code engineering software product to C++. [...]

Which makes the posting not topical for comp.lang.c. |
| Lynn McGuire <lynnmcguire5@gmail.com>: Oct 23 11:14PM -0500

On 10/23/2022 5:04 PM, Michael S wrote:
> then why do you port to C++?
> Is not it easier to port to Modern Fortran (F20xx) or, may be, to port
> majority of code to Modern Fortran and minority to Julia ?

The maintenance on the mixed code is becoming cumbersome. And I cannot find a good Fortran / C++ compiler mix that meets my needs. And I need a 64-bit version of our software yesterday; seven of my users have now tried to move to 64-bit Excel.

I prefer C++. I've been programming in Fortran since 1975, Pascal since 1983, C since 1987, Smalltalk since 1993, and C++ since 1999. I will miss Fortran, but not totally, since we have had a Fortran interpreter embedded in our software since 1983.

I and two of my programmers converted our Smalltalk Windows user interface to C++ in 2001/2002. The port went well; the code base only increased from 250,000 lines to 350,000 lines.

Lynn |