"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 04 10:50PM -0700

On 5/2/2021 10:21 AM, olcott wrote:
> }
> Can a C/C++ function be defined that correctly decides whether or not
> H_Hat((u32)H_Hat) halts?

One suggestion... use a proper function pointer?
olcott <NoOne@NoWhere.com>: May 05 09:52AM -0500

On 5/5/2021 12:50 AM, Chris M. Thomasson wrote:
>> Can a C/C++ function be defined that correctly decides whether or not
>> H_Hat((u32)H_Hat) halts?
> One suggestion... use a proper function pointer?

I have to force everything to be 32 bits.

--
Copyright 2021 Pete Olcott
"Great spirits have always encountered violent opposition from mediocre minds." Einstein
James Kuyper <jameskuyper@alumni.caltech.edu>: May 04 11:42PM -0400

On 5/4/21 6:43 AM, Juha Nieminen wrote:
...
> When Knuth wrote "premature optimization is the root of all evil"
> in 1974, I doubt he was thinking about std::string::substr()
> vs. std::string::compare() in a language standardized in 1998.

The fact that the specifics of a particular optimization were unknown to him doesn't render his advice about optimization irrelevant to that decision. It was very general advice, not specific to any particular programming language or development environment.

> more efficient one is available, because "premature optimization
> is the root of all evil", and there's literally zero reason not to
> use the more efficient version.

If you say you've seen such behavior, I can't prove you wrong, but I would consider any developer who did such a thing insane. Such behavior is pretty much completely unrelated to what Knuth actually meant.

> manually inlining code, and other such hacks. The more such hacks
> are used, the less readable and maintainable the code becomes.

While those are the worst examples of premature optimization, any time you have a choice between a clear, easily understood algorithm and a more efficient algorithm that's less easy to understand, Knuth's adage quite rightly favors using the clearer one rather than the more efficient one, unless and until you've determined through such methods as profiling that it's sufficiently less efficient for the difference to matter.

I could easily imagine someone who was not sympathetic to such advice incorrectly describing such a decision the way you did above. The key thing that would be incorrect about doing so would be the claim that "there's literally zero reason not to use the more efficient version". Greater clarity can be a very legitimate reason for such a decision.
Juha Nieminen <nospam@thanks.invalid>: May 05 05:01AM

> key thing that would be incorrect about doing so would be the claim
> that "there's literally zero reason not to use the more efficient
> version". Greater clarity can be a very legitimate reason for such a
> decision.

The argument has been made that getting the program working and stable in a timely manner is more important than whether it happens to be a few seconds faster or not. After all, in this busy world we don't have time to waste on such trivialities as code optimization. We need a working product yesterday! We don't have time nor money to make it actually efficient!

The thing is, if that's the approach, then do you really think that such developers will later come back to the product and go through the time and effort to profile it and refactor it to make it more efficient? After all, writing the inefficient version and then the more efficient version is more time spent than merely writing the inefficient version and doing nothing more.

So, the best way to avoid this is to write the more efficient version from the get-go! Then there's no need to go back and profile the program and refactor it (which sometimes may mean extensive changes to the program).

Sure, maybe if the "KISS" version of the routine takes 1 minute to write and the more efficient version takes 3 hours to write, there might be an argument there. However, if the more efficient version takes 3 minutes to write, I would argue it's worth it.
David Brown <david.brown@hesbynett.no>: May 05 11:26AM +0200

On 05/05/2021 07:01, Juha Nieminen wrote:
> to waste on such trivialities as code optimization. We need a working
> product yesterday! We don't have time nor money to make it actually
> efficient!

That is certainly correct for some systems.

> such developers will later come back to the product and go through the
> time and effort to profile it and refactor it to make it more
> efficient?

For a great deal of code written, there is no "later". Many products - and the software in them - have short lifetimes. If you spend a lot of time developing efficient solutions, the product will never exist - while you have been ensuring your system reacts in 10 milliseconds instead of 100 milliseconds, competing companies have taken the market and you (or your client) are out of business. The competing company won't bother optimising their software efficiency because they will only be making the product for 6 months before moving on to something new.

Software development means getting the balance right between the different costs in time, money, resources, etc. If time to market is the key issue, then optimising the software development might mean using off-the-shelf embedded Linux boards running code in Python even though you are just controlling a few buttons and a small motor that could be controlled by a $1 microcontroller. If developer costs are the key issue, you use whatever language that one developer knows best. If production costs are the key issue, you use the cheapest microcontroller. If battery lifetimes are key, you optimise there.

There are only two things you can be sure about. One is that if you concentrate /solely/ on run-time speed efficiency of the code, you will be wrong more often than you are right. The other is that no matter what the time or money pressure, getting the software /right/ trumps all other considerations.

Knuth's observation is not about optimisation of code speed - it is about optimisation of the development process.

> After all, writing the inefficient version and then the more efficient
> version is more time spent than merely writing the inefficient version
> and doing nothing more.

And if writing the inefficient version takes less time than writing the efficient version, then often it is the right answer. Computer time is vastly cheaper than developer time - you don't spend even an hour of developer time to save a millisecond of run time unless that code is going to be run at least a million million times. /Some/ code is run that often, but most code is not.
James Kuyper <jameskuyper@alumni.caltech.edu>: May 05 09:53AM -0400

On 5/5/21 1:01 AM, Juha Nieminen wrote:
...
> don't have time to waste on such trivialities as code optimization.
> We need a working product yesterday! We don't have time nor money to
> make it actually efficient!

Code optimization can be very important - but it's also difficult and costly, which is why you should be concentrating your time optimizing those locations where most of the processing time is actually being spent. Diffusing your efforts by trying to optimize everything is a waste of expensive developer time.

> After all, writing the inefficient version and then the more
> efficient version is more time spent than merely writing the
> inefficient version and doing nothing more.

I've never worked on a project where the program wasn't undergoing constant revision. As soon as we deliver one version, we start working on the next. In my experience, adding new features in one version and improving them in a later version is the norm, not the exception.

> version from the get-go! Then there's no need to go back and profile
> the program and refactor it (which sometimes may mean extensive
> changes to the program).

If you spend time making a part of the program more complicated and harder to understand in the name of making it more efficient, without having first confirmed that it's a significant source of delays, that effort is far more likely than not to have been wasted. That time could have been better spent delivering other desired features, or improving the performance of a section that really is a major time sink. Also, having made the code harder to understand will pay off with bugs and maintenance nightmares in the future.
olcott <NoOne@NoWhere.com>: May 04 07:41PM -0500

On 5/4/2021 2:24 PM, Kaz Kylheku wrote:
> Each call Halts(P, P) is a new Turing machine, where the tape contents
> depend only on Halts and P, and not any global execution traces gathered
> by a larger surrounding machine.

int Halts(u32 P, u32 I)
{
    Simulate(P, I) until decided not halts or halts.
    if decided not halts return 0.
    else return 1.
}

--
Copyright 2021 Pete Olcott
"Great spirits have always encountered violent opposition from mediocre minds." Einstein
Kaz Kylheku <563-365-8930@kylheku.com>: May 05 12:55AM

["Followup-To:" header set to comp.theory.]

> if decided not halts return 0.
> else return 1.
> }

All you're showing is that Halts is a wrapper for Simulate. OK, so the problem is coming from Simulate. Simulate does not decide the same arguments the same way each time. It depends on the existing values of global variables, such as buffers of execution traces. To meet the conditions of the halting proof, any such globals must be reset to the same state before Simulate is called.