- Transforming "C" into a Turing equivalent language 001 [Providing unlimited memory access to C] - 6 Updates
- Preprocessor output into Class processor - 6 Updates
- Are there any asm-instructions to support OOP - 6 Updates
- Transforming "C" into a Turing equivalent language 001 [Providing unlimited memory access to C] - 4 Updates
olcott <NoOne@NoWhere.com>: Aug 30 08:35PM -0500 On 8/30/2020 6:23 PM, Mike Terry wrote: > system addressed through arbitrarily long URLs. (Ok, maybe URLs have a > max length, but we can envisage some new storage device addressed > through an IO subsystem not having this restriction. I only really need to know about computations that are TM equivalent: meaning, for example, that a computation performed by a "C" or x86 machine language program that is executed on an input derives output equivalent to that of a Turing machine on equivalent input. > thousands of posts so we'll see how it goes in this newsgroup. > Regards, > Mike. I was just having this exact same idea. This means that a single feature that I added to my x86-UTM environment makes this environment Turing complete for any computation that fits within its 32-bit memory space: u32* Allocate(u32 Size); -- Copyright 2020 Pete Olcott |
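The post gives only the prototype u32* Allocate(u32 Size). Below is a minimal sketch of how such a primitive might look in a hosted C environment; the malloc-based body and the small driver loop are assumptions for illustration, not the actual x86-UTM implementation.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint32_t u32;

/* The single added primitive: request Size fresh bytes on demand.
   Here it is just a thin wrapper over malloc(); the real x86-UTM
   environment presumably supplies its own heap.                   */
u32 *Allocate(u32 Size)
{
    return malloc(Size);
}

int main(void)
{
    /* Grow a structure with no bound fixed at compile time: each step
       asks for one more block and stops only when Allocate() reports
       that the underlying memory is exhausted (returns NULL).        */
    u32 *block;
    u32 count = 0;
    while (count < 4 && (block = Allocate(1024)) != NULL) {
        block[0] = count;
        printf("block %u at %p\n", (unsigned)block[0], (void *)block);
        count++;
    }
    return 0;
}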
olcott <NoOne@NoWhere.com>: Aug 30 09:35PM -0500 On 8/30/2020 8:30 PM, Ben Bacarisse wrote: > Note "may". In the end it is likely to come down to how one chooses to > interpret the standard. It's really not a clear-cut question. (See my > reply to Mike Terry about that.) On 8/30/2020 6:23 PM, Mike Terry wrote: > design or language considerations, > Regards, > Mike. So Kaz spurred an idea in you that spurred an idea in Mike that seems to complete the circle. If Mike is right, then any computation performed within the 32-bit 4GB address space of the x86 architecture that terminates normally would be equivalent to a computation on a Turing machine having equivalent input. Since the x86 machine language program was generated by a "C" compiler, the "C" version of the program is a Turing equivalent computation. As long as this reasoning is completely correct, this part of the analysis is finished. Every computation by a "C" language program on an input that terminates normally is equivalent to the computation of a Turing machine on equivalent input. Here is where the analysis gets extended: The input to the halt decider is the x86 machine language of itself. This means that the input to the Turing machine must be this same algorithm implemented as a conventional five-tuple Turing machine. Since the task of making a "C" to conventional five-tuple Turing machine translator is very difficult, we define some augmentations that create a Turing-equivalent virtual machine language that "C" maps to much more directly. When this virtual machine executes a halt decider that takes its own virtual machine description as its input, we can know by proxy that the x86 halt decider has equivalent input to an actual Turing machine. The input to the x86-based UTM derives output equivalent to the output of a virtual machine that is equivalent to a Turing machine and has input equivalent to the input to the x86-based UTM. This proves (by proxy) that input to an actual Turing machine that is equivalent to the input to the x86 UTM would derive equivalent output from the actual Turing machine. -- Copyright 2020 Pete Olcott |
olcott <NoOne@NoWhere.com>: Aug 30 11:55PM -0500 On 8/30/2020 9:35 PM, olcott wrote: > > design or language considerations, > > Regards, > > Mike. If we assume that Mike's analysis is correct then we can conclude that every x86 program that derives a result has an equivalent Turing machine computation on equivalent input deriving an equivalent result. If an x86 based halt decider derives a correct halting decision on the basis of its own x86 code then there exists an equivalent Turing machine (using the same algorithm) that correctly decides halting on its own Turing machine description. Now if we just had some computer science to back that up. -- Copyright 2020 Pete Olcott |
olcott <NoOne@NoWhere.com>: Aug 31 09:59AM -0500 On 8/30/2020 8:24 PM, Ben Bacarisse wrote: > Yes, you can switch larger integer indexes for larger pointers. The two > views are interchangeable. (Or have I missed the point you are > making?). If we had pointer names that were handled under-the-covers by a Turing-equivalent process, then we could still have the essence of "C" on an abstract machine with unlimited memory. This is not much more than a paraphrase of what Kaz already said. The difference is that we switched the context from the stack to the heap. I think that I just figured out how to make at least one linked list of truly unlimited length. It depends on unlimited-length integers. To explain this system I would use 1024-bit addressing that could be scaled upward at any point in time. That gives 2^1024 (about 1.797693134862315907729305190789e+308) addressable bytes of RAM:
179769313486231590772930519078902473361797697894230657273430081157732675805500963132708477322407536021120113879871393357658789768814416622492847430639474124377767893424865485276302219601246094119453082952085005768838150682342462881473913110540827237163350510684586298239947245938479716304835356329624224137216
Here is the maximum address value in hex:
0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
Implementing such a system on actual hardware requires some sort of relative addressing. Using every single atom in the whole universe to encode one single binary digit each would provide a woefully deficient amount of RAM for 1024-bit addressing: ...it is estimated that there are between 10^78 and 10^82 atoms in the known, observable universe... https://www.universetoday.com/36302/atoms-in-the-universe/# The key thing about this memory system is that, because it has no theoretical limit to its scalability, every Turing-equivalent machine tied to this abstract memory system that can be implemented in actual hardware would be computationally identical to its abstract-machine version until the concrete machine reached its memory limit. This provides a way to empirically test many problems in the theory of computation. -- Copyright 2020 Pete Olcott |
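One way to picture pointer names handled under-the-covers, with links of unbounded width, is a list node whose link field is an arbitrary-length integer rather than a hardware pointer. The sketch below is hypothetical: the Node layout, the Resolve() lookup and the tiny address table are invented for illustration; only the idea that nothing in the node format fixes the width of an address comes from the post.

#include <stdio.h>

/* Hypothetical sketch: a list node whose link is not a hardware pointer
   but an arbitrary-length "address" (a byte string that can grow without
   bound, like the 1024-bit addresses above).  Resolve() stands in for the
   under-the-covers process that maps such an address onto real storage.  */
typedef struct Node {
    unsigned char *next_addr;  /* base-256 digits, most significant first */
    size_t         next_len;   /* number of digit bytes in next_addr      */
    int            value;
} Node;

/* Toy resolver: here an address is just an index into a small table,
   but nothing in the node format limits how many digits it may have.  */
static Node *table[16];

static Node *Resolve(const unsigned char *addr, size_t len)
{
    size_t index = 0;
    for (size_t i = 0; i < len; i++)
        index = index * 256 + addr[i];
    return index < 16 ? table[index] : NULL;
}

int main(void)
{
    static Node a = { NULL, 0, 1 };
    static Node b = { NULL, 0, 2 };
    static unsigned char addr_of_b[] = { 0x01 };  /* could be any length */

    table[1]    = &b;
    a.next_addr = addr_of_b;
    a.next_len  = sizeof addr_of_b;

    for (Node *p = &a; p != NULL;
         p = p->next_addr ? Resolve(p->next_addr, p->next_len) : NULL)
        printf("%d\n", p->value);
    return 0;
}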
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Aug 31 11:18AM -0700 > to an abstract model of computation that is equivalent to a Turing > machine in such a way that any Turing complete computation can be > written in the "C" programming language. [...] I encourage anyone in comp.lang.c or comp.lang.c++ who wants to reply to these posts to take a look at comp.theory, which has been dominated by these arguments for years. -- Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com Working, but not speaking, for Philips Healthcare void Void(void) { Void(); } /* The recursive call of the void */ |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Aug 31 09:34PM ["Followup-To:" header set to comp.lang.c.] On Mon, 2020-08-31, Keith Thompson wrote: > I encourage anyone in comp.lang.c or comp.lang.c++ who wants to > reply to these posts to take a look at comp.theory, which has been > dominated by these arguments for years. Ugh. I took a look and now wish I hadn't. (Not that it seemed toxic or anything; it was just a very depressing ghost of a newsgroup.) /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
boltar@nuttyella.co.uk: Aug 31 08:54AM On Sun, 30 Aug 2020 21:59:12 +0200 >nowhere close to state of the art of low power consumption. A >Cortex-M0+ device (32-bit) will use much less power than a PIC or an >8051 for the same task. Let's see some figures then, because I find that very hard to believe, particularly if the PIC is operating in wake-on-interrupt mode. |
David Brown <david.brown@hesbynett.no>: Aug 31 11:33AM +0200 >> 8051 for the same task. > Lets see some figures then because I find that very hard to believe > particularly if the PIC is operating in wake-on-interrupt mode. Look them up yourself - the details all depend on exactly what you want the system to do. But here's a hint for you - the current consumption of the core during the deepest sleep modes is almost completely irrelevant in most battery powered devices. Or if you want proper help with embedded design, find someone who is willing to spoon-feed you and who also has more than your own "did a few courses twenty years ago with a single microcontroller" experience. Finding things "hard to believe" just because they are outside your own experiences seems to be a pattern for you. It does not encourage others to help you. You appear to be convinced that your very narrow and badly outdated thoughts on embedded development are all there is to know of the field - so there is little point in giving you advice or information. If you wake up one day and realise that perhaps you don't know everything, maybe you can ask some questions and learn something. |
boltar@nowhere.co.uk: Aug 31 09:37AM On Mon, 31 Aug 2020 11:33:57 +0200 >the field - so there is little point in giving you advice or information. >If you wake up one day and realise that perhaps you don't know >everything, maybe you can ask some questions and learn something. Translation: "Err, sorry, I can't provide those figures but I'll provide a load of self righteous bluster in its place and hope you go away". |
David Brown <david.brown@hesbynett.no>: Aug 31 12:11PM +0200 >> everything, maybe you can ask some questions and learn something. > Translation: "Err, sorry, I can't provide those figures but I'll provide > a load of self righteous bluster in its place and hope you go away". I certainly could provide figures. I could point you to a Cortex-M4 microcontroller with 20 nA sleep modes. I could point you to designs with code written in C++ that don't have batteries at all, but get enough energy from the user walking around. But how does that help you or anyone else? |
boltar@nowhere.co.uk: Aug 31 03:43PM On Mon, 31 Aug 2020 12:11:10 +0200 >> a load of self righteous bluster in its place and hope you go away". >I certainly could provide figures. I could point you to a Cortex-M4 >microcontroller with 20 nA sleep modes. I could point you to designs Yes, 20 nA, very good. However, not as good as these 8-bit PICs: http://www.farnell.com/datasheets/1523199.pdf "PIC microcontrollers with XLP technology feature the world's lowest active and sleep power consumption" "Active currents down to 50 μA/MHz; sleep current as low as 9 nA; battery lifetime > 20 years" Chew on that. |
David Brown <david.brown@hesbynett.no>: Aug 31 10:50PM +0200 > Sleep current as low as 9 nA > Battery lifetime > 20 years" > Chew on that. You missed the bit where I pointed out how utterly useless these figures are. If you knew the first thing about what you were doing, you'd understand that already. |
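The disagreement above about sleep-current figures can be made concrete with some rough arithmetic. The numbers below (CR2032 capacity, roughly 1%/year self-discharge, a once-a-minute wake-up burst) are assumed for illustration only, and the absolute lifetimes printed exceed any realistic shelf life; the point of the sketch is only that the difference between 9 nA and 20 nA of core sleep current barely moves the estimate once the other loads are included.

#include <stdio.h>

/* Back-of-the-envelope illustration with assumed, representative numbers:
   on a coin cell, the battery's own self-discharge and the active bursts
   dwarf a few nanoamps of difference in core sleep current.             */
int main(void)
{
    const double capacity_mAh   = 225.0;    /* typical CR2032                      */
    const double self_discharge = 0.26e-3;  /* ~1%/year average, in mA (~0.26 uA)  */
    const double sleep_9nA      = 9e-6;     /* 9 nA expressed in mA                */
    const double sleep_20nA     = 20e-6;    /* 20 nA expressed in mA               */
    const double active_avg     = 3.0 * (0.005 / 60.0);  /* 5 ms at 3 mA, once a minute */

    const double load_a = self_discharge + active_avg + sleep_9nA;
    const double load_b = self_discharge + active_avg + sleep_20nA;

    printf("average load with  9 nA sleep: %.5f mA -> %.1f years\n",
           load_a, capacity_mAh / load_a / (24 * 365));
    printf("average load with 20 nA sleep: %.5f mA -> %.1f years\n",
           load_b, capacity_mAh / load_b / (24 * 365));
    return 0;
}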
Terje Mathisen <terje.mathisen@tmsw.no>: Aug 31 07:50AM +0200 Juha Nieminen wrote: >> Thats not the kind of jerkiness I was talking about which if you had half >> a working braincell you'd have understood. > Notice how you are the only one here throwing insults at people. :-( > I wonder why that is. To all the people here "discussing" needed frame rates etc.: Please take a look at one or more of Michael Abrash's keynote presentations during Oculus Connect! They have done a _lot_ of research into this over the last decade; you can start at the 2014 video: https://www.youtube.com/watch?v=knQSRTApNcs or go directly to one of the later ones, which show what they figured out in the meantime. In order to avoid visual artifacts which destroy the AR/VR experience, they have to get a serious amount of processing done in a very short timeframe. Terje -- - <Terje.Mathisen at tmsw.no> "almost all programming can be viewed as an exercise in caching" |
Juha Nieminen <nospam@thanks.invalid>: Aug 31 06:45AM > To all the people here "discussing" needed frame rates etc.: Please take > a look at one or more of Michael Abrash's keynote presentations during > Oculus Connect! That's a good example. If it were indeed true that people can't distinguish anything beyond something like 24 (or 25, or 30) frames per second, then it shouldn't matter if a VR headset updates at e.g. 30 Hz. That would certainly be beneficial, since even lower-end PCs would be able to handle it. Yet something like an 80-90 Hz refresh rate is required so that the refresh rate itself doesn't contribute to nausea. Quite clearly people can distinguish between even 60 Hz and 80 Hz. |
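As a point of reference for the refresh rates being discussed, the per-frame time budget is just the reciprocal of the rate; the tiny sketch below only does that arithmetic for a few common rates.

#include <stdio.h>

/* Per-frame time budget at common display refresh rates. */
int main(void)
{
    const double rates_hz[] = { 24.0, 30.0, 60.0, 90.0, 120.0 };
    for (size_t i = 0; i < sizeof rates_hz / sizeof rates_hz[0]; i++)
        printf("%6.0f Hz -> %6.2f ms per frame\n",
               rates_hz[i], 1000.0 / rates_hz[i]);
    return 0;
}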
boltar@nuttyella.co.uk: Aug 31 08:51AM On Sun, 30 Aug 2020 17:11:07 +0000 (UTC) >> a working braincell you'd have understood. >Notice how you are the only one here throwing insults at people. >I wonder why that is. Because I get tired of arguing the toss with someone who doesn't understand the point despite me being very clear about it. Did you seriously think when I mentioned jerkiness in HIGH DEFINITION video it had anything to do with frame rate in 2020?? Try engaging your brain first next time. |
boltar@nuttyella.co.uk: Aug 31 08:53AM On Sun, 30 Aug 2020 11:01:58 -0700 (PDT) >> I wonder why that is. >People who try to pointlessly insult or belittle others do it usually >because of inferiority complex and a lack of self-respect. Thanks for your valuable insights there Yoda. Any more wisdom you can bless us with from your Dummies Guide to Psychology book? |
Juha Nieminen <nospam@thanks.invalid>: Aug 31 12:15PM > the point despite me being very clear about it. Did you seriously think when > I mentioned jerkiness in HIGH DEFINITION video it had anything to do with > frame rate in 2020?? Try engaging your brain first next time. No, what you are doing is trying to deflect and move goalposts, in order not to have to admit having made a mistake. Your original claim, which started the framerate discussion, was that it's useless for video games to go above 30 Hz refresh rates because humans can't distinguish anything higher, and that game developers aiming at higher framerates are just trying to show off. Now you are talking about some "high definition video", as if it had anything at all to do with your original assertion (which was about game framerates, absolutely nothing to do with video codecs). |
boltar@nowhere.co.uk: Aug 31 03:35PM On Mon, 31 Aug 2020 12:15:09 +0000 (UTC) >useless for video games to go above 30 Hz refresh rates because humans can't >distinguish anything higher, and that game developers aiming at higher >framerates are just trying to show off. Those are 2 entirely different threads, which you've clearly got mixed up in your head because A) you're an idiot or B) you're just looking for an argument. Here's my original post:
--------------
From: boltar@nuttyella.co.uk
Subject: Re: Are there any asm-instructions to support OOP
Message-ID: <rib6af$1kuh$1@gioia.aioe.org>
On Fri, 28 Aug 2020 10:34:17 -0400 Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>when the individual images are "perfectly" crisp (as is typically the
>case in sports on TV where it's recorded at higher rates to be able to
>replay in slow motion IIUC).
A current problem with video streams is the compression algorithm not being able to keep up with fast motion or panning and introducing a noticeable jerkiness to the output. This only seems to happen on HD, so possibly it's an artifact of H264, because SD streams just used to break up into blocks.
--------------
|
olcott <NoOne@NoWhere.com>: Aug 30 07:17PM -0500 On 8/30/2020 6:23 PM, Andy Walker wrote: >> equivalent. > You don't know that the machines are going to fail to halt, > in general. Halting problem. You don't know how long to wait to Yes, that is why a refutation of the conventional self-referential halting problem proofs would be significant. > you know that both machines halt, *then* you can find out whether > they are equivalent. If the machines are equivalent on some input, > you don't know whether they are equivalent on other inputs. Since That two machines always derive equivalent output from equivalent input and fail to halt on the same inputs proves that these two machines are equivalent. We may also know that two machines are equivalent on all inputs if each machine can simulate the other. > completely inscrutable, in general, given arbitrary data. > If you want to know things about programs, then you have to > write them in such a way that theorems about them can be proved. I only need to prove that certain computations that halt on specific inputs are equivalent to a Turing machine on equivalent input. My whole goal here is to prove that a halt decider written in "C" that analyzes the x86 machine language of itself could be directly applied to Turing machines. > Then you may be able to say interesting things about programs that > conform to your specification. Otherwise the future is somewhat > bleak. Any virtual machine that implements an abstract model of computation that is provably Turing equivalent will have computation identical to that of any concrete hardware implementation of this abstract model until a computation on this concrete model requires more RAM than it has. In simpler words, a program written in the virtual machine language will run exactly the same (identical execution trace) on the abstract machine as on the concrete machine unless and until the concrete machine runs out of RAM. If we did not hit the RAM limit, then we know that the execution of the program on the concrete machine would be equivalent to executing an equivalent program on an actual Turing machine. Every "C" language program that can be translated into the machine language of any virtual machine that is provably Turing equivalent would derive equivalent output from equivalent input on an actual Turing machine. This establishes the basis of the line of reasoning showing that my intuition that "C" programs always have been equivalent to Turing machines is probably correct. The above possibly creates a new term of the art of computer science, a Turing equivalent computation: A Turing equivalent computation is any computation that can be translated into the machine language of any virtual machine that is provably Turing equivalent. -- Copyright 2020 Pete Olcott |
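The claim that a concrete machine's run matches the abstract machine's run unless and until memory runs out can be sketched as a computation whose only path to memory is an allocation primitive that may fail. Everything named below (the Outcome enum, run_computation, the toy summation) is invented for illustration: if no allocation fails, the concrete result is by construction the abstract machine's result; if one does fail, the run reports resource exhaustion instead of producing a wrong answer.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: a computation whose only access to memory is an
   allocation step that can fail.  If every allocation succeeds, the run
   is the same as it would be on the abstract, unlimited-memory machine;
   if one fails, the run says so instead of producing a wrong answer.   */

enum Outcome { FINISHED, OUT_OF_MEMORY };

static enum Outcome run_computation(size_t steps, long *result)
{
    long sum = 0;
    for (size_t i = 0; i < steps; i++) {
        long *cell = malloc(sizeof *cell);   /* the "Allocate" step       */
        if (cell == NULL)
            return OUT_OF_MEMORY;            /* concrete memory limit hit */
        *cell = (long)i;
        sum += *cell;
        free(cell);
    }
    *result = sum;
    return FINISHED;
}

int main(void)
{
    long result;
    if (run_computation(1000, &result) == FINISHED)
        printf("result %ld (agrees with the abstract machine)\n", result);
    else
        printf("memory limit reached: abstract-machine result not determined\n");
    return 0;
}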
Richard Damon <Richard@Damon-Family.org>: Aug 30 08:43PM -0400 On 8/30/20 4:19 PM, olcott wrote: > Or we could construe not halting as an empty set of output. As long as > the two machines produced equivalent output when they halted or failed > to halt on equivalent input the two machines are black box equivalent. But just because neither machine has halted YET doesn't mean that neither of them would halt if we waited some arbitrary period of time longer. Any definition based on comparing the outputs at some finite time is incomplete, and possibly inaccurate. |
olcott <NoOne@NoWhere.com>: Aug 30 08:21PM -0500 On 8/30/2020 7:43 PM, Richard Damon wrote: > some arbitrary period of time more that neither of them might halt. Any > definition based on comparing the outputs at some finite time is > incomplete, and possibly inaccurate. On the other hand any definition based on comparing the results after both machines have halted would be correct for the computations that result in halting. -- Copyright 2020 Pete Olcott |
olcott <NoOne@NoWhere.com>: Aug 31 08:47AM -0500 On 8/31/2020 3:59 AM, Andy Walker wrote: >> That two machines always derive equivalent output from equivalent >> input and fail to halt on the same inputs proves that these two >> machines are equivalent. That is why you have to read everything that I said and not merely glance at a few words before forming your rebuttal. On 8/30/2020 7:17 PM, olcott wrote: > We may also know that two machines are equivalent on all > inputs if each machine can simulate the other. The easy way to know that a machine is Turing equivalent is to start with a Turing machine and then only add features to this exact same model of computation that make it easier to work with without adding more computational power. If you mess up and add computational power then this "mistake" refutes the Church-Turing thesis. > It would, but there is no effective general method to show > that. You're going round in circles. But we knew that. You ignored the way to show that, reiterated above. -- Copyright 2020 Pete Olcott |