- Yet more about what we can take as enough precision, and about computational complexity.. - 1 Update
- I will again add more logical rigor to my post about computational complexity - 1 Update
- I correct again one last typo, here is my final post about computational complexity - 1 Update
- I correct a typo, here is my final post about computational complexity - 1 Update
- I continue about computational complexity by being more and more rigorous, read again - 1 Update
- Yet more rigorous about computational complexity.. - 1 Update
- I correct a mistake in my post - 2 Updates
aminer68@gmail.com: Jan 11 04:20PM -0800

Hello,

Yet more about what we can take as enough precision, and about computational complexity..

As you have noticed, I said before that 2+2=4 inherently contains what we call enough precision, keeping in mind that this is judged and dictated by our human minds. But when the mind sees a time complexity of n*log(n) or n, it measures them by reference to the other time complexities that exist, and we notice that they are average time complexities if we compare them with an exponential time complexity on one side and with a log(n) time complexity on the other. So this measure dictates that the time complexities of n*log(n) and n have an average resistance (by analogy with material resistance, read below to understand). But our human minds also notice that this average resistance is not an exact resistance, so it is missing precision and exactitude. So, like the example of the obese person that I give below, we can call time complexities such as n*log(n) and n fuzzy, and this looks like probability calculations.
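To make this comparison between complexity classes concrete, here is a minimal sketch in Python (my own choice of language for illustration, not something taken from the thread; the input sizes are arbitrary). It tabulates the growth of the usual classes for a few values of n, which is why n and n*log(n) sit in the middle band between log(n) on one side and n^2 and 2^n on the other:

import math

def growth(n):
    # Growth of the usual complexity classes at input size n; the exponential
    # column is capped because 2^n overflows a float almost immediately.
    return {
        "log n":   math.log2(n),
        "n":       float(n),
        "n log n": n * math.log2(n),
        "n^2":     float(n) ** 2,
        "2^n":     2.0 ** n if n <= 64 else float("inf"),
    }

for n in (10, 1_000, 1_000_000):
    print(f"n = {n}:")
    for name, value in growth(n).items():
        print(f"  {name:8s} ~ {value:.3e}")

Running it shows the ordering I describe above: log(n) barely grows, n and n*log(n) grow moderately, and n^2 and 2^n blow up.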
Read the rest of all my previous thoughts to understand:

I will again add more logical rigor to my post about computational complexity:

As you have noticed in my previous post (read below), I said the following: time complexities such as n*log(n) and n are fuzzy. But we have to be more logically rigorous: what can we take as enough precision? I think that when we say 2+2=4, there is no missing part of precision, so this fact is enough precision. But if we state a time complexity of n*log(n), there is a missing part of precision, because n*log(n) depends on a reality that in this case needs more precision, namely an exact measure of the resistance of the algorithm (read below my analogy with material resistance). This is why we can affirm that time complexities such as n*log(n) and n are fuzzy: there is a missing part of precision. But even though there is a missing part of precision, there is enough precision to predict that their resistance in reality is an average resistance.

Read the rest of my previous thoughts to understand:

I correct again one last typo, here is my final post about computational complexity:

I continue about computational complexity by being more and more rigorous, read again:

I said previously (read below) that, for example, time complexities such as n*log(n) and n are fuzzy, because we can say that n*log(n) is an average resistance (read below to understand the analogy with material resistance), or we can say that n*log(n) is faster than it would be with a quadratic or exponential complexity, but given a time complexity of n or n*log(n) we cannot say how fast the algorithm actually is for a given input size n. So, since it is not an exact prediction, it is fuzzy; but this level of fuzziness, like in the example below of the obese person, permits us to predict important things in reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permit us to predict, since computational complexity can predict whether the resistance of the algorithm is high, low, or average (by analogy with material resistance, read below to understand).

Read my previous thoughts to understand:

What is science? And is computational complexity science?

You have just seen me talking about computational complexity, but we need to answer two questions: what is science, and is computational complexity science? I think we have to be smarter about this, because science works with higher-level abstractions; in some of those abstractions we have exact precision, but there are also fuzzier precisions that are useful and that are also science. To help you understand, let me give you an example: if I say that a person is obese, then he has a high risk of getting a disease because he is obese. You can see that with this abstraction we are not exactly precise, we are fuzzier, but this fuzziness is useful and its level of precision is also useful. Is it science? I think such probabilistic calculations are also science, since they permit us to predict that the obese person has a high risk of getting a disease. These probabilistic calculations are higher-level abstractions that lack exact precision but are still useful precisions.

This is what computational complexity and its higher-level abstractions look like: a time complexity of O(n*log(n)) or O(n) is like an average level of resistance (read below to know why I call it resistance, by analogy with material resistance) when n grows large; an exponential time complexity is a low level of resistance when n grows large; and a log(n) time complexity is a high level of resistance when n grows large. So those time complexities are higher-level abstractions that are fuzzy, but their fuzziness, like in the example above of the obese person, permits us to predict important things in reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permit us to predict.

Read the rest of my previous thoughts to understand better:

The why of computational complexity..

Here is my previous answer about computational complexity, and the rest of my current answer is below:

=====================================================================

Horand gassmann wrote:

"Where your argument becomes impractical is in the statement "n becomes large". This is simply not precise enough for practical use. There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex? That would be interesting theoretically, but practically --- not so much."

I don't agree. Take a look below at how I computed the time complexity of binary search: it is a divide-and-conquer algorithm, and it is log(n), and a log(n) time complexity is excellent in practice when n becomes large, so this information is practical. And when you look at insertion sort, you will notice that it has a quadratic time complexity of n^2; here again the information is practical, because a quadratic time complexity is not so good when n becomes large. So, as you can see, knowing that the time complexities are log(n) and n^2 is useful in practice, and for the remaining time complexities you can also benchmark the algorithm in the real world to get an idea of how it performs, as the sketch below illustrates.

=================================================================
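To show what that kind of real-world benchmarking can look like, here is a minimal sketch in Python (again my own illustrative choice; the input sizes and the helper name insertion_sort are assumptions for the example, not something from the discussion). It times a quadratic insertion sort against the built-in sort, which runs in O(n*log(n)):

import random
import time

def insertion_sort(a):
    # Classic O(n^2) insertion sort, used here only to have a quadratic
    # algorithm to benchmark against the O(n*log(n)) built-in sort.
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

for n in (500, 1_000, 2_000, 4_000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    insertion_sort(data)
    t1 = time.perf_counter()
    sorted(data)  # built-in O(n*log(n)) sort
    t2 = time.perf_counter()
    print(f"n={n:5d}  insertion sort: {t1 - t0:.4f}s  built-in sort: {t2 - t1:.4f}s")

On a typical machine, doubling n should roughly quadruple the insertion sort time while the built-in sort grows much more slowly, which is exactly the kind of coarse but useful prediction that the complexities give us.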
I think I am understanding Lemire and Horand gassmann better: they say that if it does not have the exact precision needed in practice, then it is not science or engineering. I don't agree with this, because science and engineering can also work with higher-level abstractions that do not give the exact precision needed in practice but that are still useful precision in practice; it is a fuzzy precision that is useful. This is why I think probabilistic calculations are also scientific: they are useful in practice because they give us important information about reality that can also be practical. And this is why computational complexity is also useful in practice: it is a higher-level abstraction that does not give all the needed practical precision, but it is a precision that is still useful in practice. This is why, like probabilistic calculations, I think computational complexity is also science.

Read the rest of my previous thoughts to understand better:

More on computational complexity..

Notice how Horand gassmann answered in the sci.math newsgroup. Horand gassmann wrote the following:

"You are right, of course, on one level. An O(log n) algorithm is better than an O(n) algorithm *for large enough inputs*. Lemire understands that, and he addresses it in his blog. The important consideration is that _theoretical_ performance is a long way from _practical_ performance."

And notice what Lemire wrote about computational complexity:

"But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically."

So, as you can see, both of them want to say that computational complexity is far from practical. I don't agree with them, because time complexity is like material resistance, and it informs us about important practical things, for example that an algorithm with a log(n) time complexity is much more resistant than an O(n) algorithm when n becomes large, and I think this kind of information from time complexity is practical. This is why I don't agree with Lemire and Horand gassmann: time complexity is scientific and it is also engineering.

Read the rest of my post to understand more of what I want to say:

More precision about computational complexity, read again:

I have just read the following webpage of a PhD computer scientist and researcher from Montreal, Canada, where I have been living since 1989; here is the webpage, read it carefully:

Better computational complexity does not imply better speed

https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/

And here is his resume:

https://lemire.me/pdf/resume/resumelemire.pdf

As you can see, on the webpage above he says the following about computational complexity:

"But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically."

But I don't agree with him, because I think he is not understanding the goal of computational complexity. When we say that an algorithm has a time complexity of n*log(n), you have to understand that it is, by logical analogy, like stating the material resistance in physics: the n*log(n) says how well the algorithm "amortizes" (that is, reduces) the time that it takes, taking as a reference of measure the average time complexity of an algorithm. I said that time complexity is like material resistance in physics because, when n grows large, it is like a big force applied to the material, which is, by logical analogy, the algorithm; so if the time complexity is log(n), the algorithm amortizes the time it takes very well, and in physics this is like a good material resistance that amortizes very well the big force applied to it. And we can easily notice that an algorithm becomes faster, for the data given to it, by going from an exponential time complexity towards a logarithmic time complexity. So we notice that time complexity is "universal" and that it measures how well the algorithm amortizes (that is, reduces) the time that it takes, taking as a reference of measure the average time complexity of an algorithm. This is why computational complexity is scientific and also engineering, and why it gives us information about the physical world.

So, to give an interesting example of science in computing, we can ask what the time complexity of the binary search algorithm is, and here is my mathematical calculation of its time complexity:

The recurrence relation of binary search is:

T(n) = T(n/2) + 1

The "+ 1" is the one comparison that we do in each step of the divide-and-conquer method of the binary search algorithm.

Unrolling the recurrence gives:

1st step => T(n) = T(n/2) + 1
2nd step => T(n/2) = T(n/4) + 1 ...... [ T(n/4) = T(n/2^2) ]
3rd step => T(n/4) = T(n/8) + 1 ...... [ T(n/8) = T(n/2^3) ]
.
.
kth step => T(n/2^(k-1)) = T(n/2^k) + 1

Adding all the equations, the intermediate terms cancel and we get:

T(n) = T(n/2^k) + k   (the 1 added k times)

This is the final equation. So how many times do we need to divide by 2 until we have only one element left? It must be that:

n/2^k = 1

This gives n = 2^k, and taking log (base 2) on both sides gives k = log n.

Putting k = log n in the final equation above gives:

T(n) = T(1) + log n
T(n) = 1 + log n

[We know that T(1) = 1, because it is the base case: we are left with only one element in the array, and one last comparison checks whether it is the element being searched for.]

Keeping the dominant term, which is log n here, gives:

T(n) = O(log n)

This is how we get the "log n" time complexity of binary search.
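As a sanity check of this derivation, here is a minimal sketch in Python (my own illustrative choice, not part of the original derivation) of a binary search that counts its halving steps and compares the worst observed count against floor(log2(n)) + 1:

import math
import random

def binary_search(a, target):
    # Standard iterative binary search on a sorted list; also returns the
    # number of halving steps, which is what T(n) = T(n/2) + 1 counts.
    lo, hi = 0, len(a) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid, steps
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

for n in (1_000, 1_000_000):
    a = list(range(n))
    worst = max(binary_search(a, random.randrange(n))[1] for _ in range(1000))
    # The step count never exceeds floor(log2(n)) + 1, which matches
    # T(n) = 1 + log n from the recurrence above.
    print(f"n={n:8d}  worst observed steps={worst}  floor(log2(n))+1={math.floor(math.log2(n)) + 1}")

Under these assumptions the observed step count tracks the 1 + log n bound from the recurrence, which is the kind of coarse but reliable prediction the complexity gives.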
Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Jan 11 03:02PM -0800 Hello, I will add again more logical rigor to my post about about computational complexity: As you have just noticed in my previous post (read below), i said the following: That time complexities such as n*log(n) and n are fuzzy. But we have to be more logical rigor: But what can we take as enough precision ? I think that when we say 2+2=4, it has no missing part of precision, so this fact is enough precision, but if we say a time complexity of n*log(n), there is a missing part of precision, because n*log(n) is dependent on reality that needs in this case more precision about an exact precision about the resistance of the algorithm(read below my analogy with material resistance), so this is why we can affirm that time complexities such as n*log(n) and n are fuzzy, because there is a missing part of precision, but eventhough there is a missing part of precision, there is enough precision that permits to predict that there resistance in reality are average resistance. Read the rest of my previous thoughts to understand: I correct again one last typo, here is my final post about computational complexity: I continu about computational complexity by being more and more rigorous, read again: I said previously(read below) that for example the time complexities such as n*(log(n)) and n are fuzzy, because we can say that n*(log(n) is an average resistance(read below to understand the analogy with material resistance) or we can say that n*log(n) is faster than if it was a quadratic complexity or exponential complexity, but we can not say giving a time complexity of n or n*log(n) how fast it is giving the input of the n of the time complexity, so since it is not exact prediction, so it is fuzzy, but this level of fuzziness, like in the example below of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict, since computational complexity can predict the resistance of the algorithm if it is high or low or average (by analogy with material resistance, read below to understand). Read my previous thoughts to understand: What is science? and is computational complexity science ? You just have seen me talking about computational complexity, but we need to answer the questions of: What is science ? and is computational complexity science ? I think that we have to be more smart because there is like higher level abstractions in science, and we can be in those abstractions exact precisions of science, but we can be more fuzzy precisions that are useful and that are also science, to understand me more, let me give you an example: If i say that a person is obese, so he has a high risk to get a disease because he is obese. Now you are understanding more that with this abstraction we are not exact precision, but we are more fuzzy , but this fuzziness is useful and its level of precision is also useful, but is it science ? i think that this probabilistic calculations are also science that permits us to predict that the obese person has a high risk to get a disease. And this probabilistic calculations are like a higher level abstractions that lack exact precision but they are still useful precisions. 
This is how look like computational complexity and its higher level abstractions, so you are immediately understanding that a time complexity of O(n*log(n)) or a O(n) is like a average level of resistance(read below to know why i am calling it resistance by analogy with material resistance) when n grows large, and we can immediately notice that an exponential time complexity is a low level resistance when n grows large, and we can immediately notice that a log(n) time complexity is a high level of resistance when n grows large, so those time complexities are like a higher level abstractions that are fuzzy but there fuzziness, like in the example above of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict. Read the rest of my previous thoughts to understand better: The why of computational complexity.. Here is my previous answer about computational complexity and the rest of my current answer is below: ===================================================================== Horand gassmann wrote: "Where your argument becomes impractical is in the statement "n becomes large". This is simply not precise enough for practical use. There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex? That would be interesting theoretically, but practically --- not so much." I don't agree, because take a look below at how i computed the binary search time complexity, it is a divide and conquer algorithm, and it is log(n), but we can notice that a log(n) is good when n becomes large, so this information is practical because a log(n) time complexity is excellent in practice when n becomes large, and when you look at an insertion sort you will notice that it is a quadratic time complexity of n^2, here again, you can feel that it is practical because an quadratic time complexity is not so good when n becomes large, so you can say that n^2 is not so good in practice when n becomes large, so as you are noticing having time complexities of log(n) and n^2 are useful in practice, and for the rest of the the time complexities you can also benchmark the algorithm in the real world to have an idea at how it is performing. ================================================================= I think i am understanding better Lemire and Horand gassmann, they say that if it is not exact needed practical precision, so it is not science or engineering, but i don't agree with this, because science and engineering can be like working with more higher level abstractions that are not exact needed practical precision calculations, but they can still be useful precision in practice, it is like being a fuzzy precision that is useful, this is why i think that probabilistic calculations are also scientific , because probabilistic calculations are useful in practice because they can give us important informations on the reality that can also be practical, this is why computational complexity is also useful in practice because it is like a higher level abstractions that are not all the needed practical precision, but it is precision that is still useful in practice, this is why like probabilistic calculations i think computational complexity is also science. 
Read the rest of my previous thoughts to understand better: More on computational complexity.. Notice how Horand gassmann has answered in sci.math newsgroup: Horand gassmann wrote the following: "You are right, of course, on one level. An O(log n) algorithm is better than an O(n) algorithm *for large enough inputs*. Lemire understands that, and he addresses it in his blog. The important consideration is that _theoretical_ performance is a long way from _practical_ performance." And notice how what Lemire wrote about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." So as you are noticing that both of them want to say that computational complexity is far from practical, but i don't agree with them, because time complexity is like material resistance, and it informs us on important practical things such as an algorithm of log(n) time complexity is like much more resistant than O(n) when n becomes large, and i think this kind of information of time complexity is practical, this is why i don't agree with Lemire and Horand gassmann, because as you notice that time complexity is scientific and it is also engineering. Read the rest of my post to understand more what i want to say: More precision about computational complexity, read again: I have just read the following webpage of a PhD Computer Scientist and researcher from Montreal, Canada where i am living now from year 1989, here is the webpage and read it carefully: Better computational complexity does not imply better speed https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/ And here is his resume: https://lemire.me/pdf/resume/resumelemire.pdf So as you are noticing on the webpage above he is saying the following about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." 
But i don't agree with him because i think he is not understanding the goal of computational complexity, because when we say that an algorithm has a time complexity of n*log(n), you have to understand that it is by logical analogy like saying in physics what is the material resistance, because the n*log(n) means how well the algorithm is "amortizing"(it means reducing) the "time" that it takes taking as a reference of measure the average time complexity of an algorithm, i said that the time complexity is like the material resistance in physics, because if the time complexity n grows large, it is in physics like a big force that you apply to the material that is by logical analogy the algorithm, so if the time complexity is log(n), so it will amortize the time that it takes very well, so it is in physics like the good material resistance that amortizes very well the big force that is applied to it, and we can easily notice that an algorithm becomes faster, in front of the data that is giving to him, by going from an exponential time complexity towards a logarithmic time complexity, so we are noticing that the time complexity is "universal" and it measures how well the algorithm amortizes(that means it reduces) the time that it takes taking as a reference of measure the average time complexity of an algorithm, so this is is why computational complexity is scientific and also it is engineering and it gives us information on the physical world. So to give an interesting example of science in computing, we can ask of what is the time complexity of a binary search algorithm, and here is my mathematical calculations of its time complexity: Recurrence relation of a binary search algorithm is: T(n)=T(n/2)+1 Because the "1" is like a comparison that we do in each step of the divide and conquer method of the binary search algorithm. So the calculation of the recurrence equation gives: 1st step=> T(n)=T(n/2) + 1 2nd step=> T(n/2)=T(n/4) + 1 ……[ T(n/4)= T(n/2^2) ] 3rd step=> T(n/4)=T(n/8) + 1 ……[ T(n/8)= T(n/2^3) ] . . kth step=> T(n/2^k-1)=T(n/2^k) + 1*(k times) Adding all the equations we get, T(n) = T(n/2^k) + k times 1 This is the final equation. So how many times we need to divide by 2 until we have only one element left? So it must be: n/2^k= 1 This gives: n=2^k this give: log n=k [taken log(base 2) on both sides ] Put k= log n in the final equation above and it gives: T(n) = T(1) + log n T(n) = 1 + log n [we know that T(1) = 1 , because it's a base condition as we are left with only one element in the array and that is the element to be searched so we return 1] So it gives: T(n) = O(log n) [taking dominant polynomial, which is n here) This is how we got "log n" time complexity for binary search. Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jan 11 01:33PM -0800 Hello.. I correct again one last typo, here is my final post about computational complexity: I continu about computational complexity by being more and more rigorous, read again: I said previously(read below) that for example the time complexities such as n*(log(n)) and n are fuzzy, because we can say that n*(log(n) is an average resistance(read below to understand the analogy with material resistance) or we can say that n*log(n) is faster than if it was a quadratic complexity or exponential complexity, but we can not say giving a time complexity of n or n*log(n) how fast it is giving the input of the n of the time complexity, so since it is not exact prediction, so it is fuzzy, but this level of fuzziness, like in the example below of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict, since computational complexity can predict the resistance of the algorithm if it is high or low or average (by analogy with material resistance, read below to understand). Read my previous thoughts to understand: What is science? and is computational complexity science ? You just have seen me talking about computational complexity, but we need to answer the questions of: What is science ? and is computational complexity science ? I think that we have to be more smart because there is like higher level abstractions in science, and we can be in those abstractions exact precisions of science, but we can be more fuzzy precisions that are useful and that are also science, to understand me more, let me give you an example: If i say that a person is obese, so he has a high risk to get a disease because he is obese. Now you are understanding more that with this abstraction we are not exact precision, but we are more fuzzy , but this fuzziness is useful and its level of precision is also useful, but is it science ? i think that this probabilistic calculations are also science that permits us to predict that the obese person has a high risk to get a disease. And this probabilistic calculations are like a higher level abstractions that lack exact precision but they are still useful precisions. This is how look like computational complexity and its higher level abstractions, so you are immediately understanding that a time complexity of O(n*log(n)) or a O(n) is like a average level of resistance(read below to know why i am calling it resistance by analogy with material resistance) when n grows large, and we can immediately notice that an exponential time complexity is a low level resistance when n grows large, and we can immediately notice that a log(n) time complexity is a high level of resistance when n grows large, so those time complexities are like a higher level abstractions that are fuzzy but there fuzziness, like in the example above of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict. Read the rest of my previous thoughts to understand better: The why of computational complexity.. 
Here is my previous answer about computational complexity and the rest of my current answer is below: ===================================================================== Horand gassmann wrote: "Where your argument becomes impractical is in the statement "n becomes large". This is simply not precise enough for practical use. There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex? That would be interesting theoretically, but practically --- not so much." I don't agree, because take a look below at how i computed the binary search time complexity, it is a divide and conquer algorithm, and it is log(n), but we can notice that a log(n) is good when n becomes large, so this information is practical because a log(n) time complexity is excellent in practice when n becomes large, and when you look at an insertion sort you will notice that it is a quadratic time complexity of n^2, here again, you can feel that it is practical because an quadratic time complexity is not so good when n becomes large, so you can say that n^2 is not so good in practice when n becomes large, so as you are noticing having time complexities of log(n) and n^2 are useful in practice, and for the rest of the the time complexities you can also benchmark the algorithm in the real world to have an idea at how it is performing. ================================================================= I think i am understanding better Lemire and Horand gassmann, they say that if it is not exact needed practical precision, so it is not science or engineering, but i don't agree with this, because science and engineering can be like working with more higher level abstractions that are not exact needed practical precision calculations, but they can still be useful precision in practice, it is like being a fuzzy precision that is useful, this is why i think that probabilistic calculations are also scientific , because probabilistic calculations are useful in practice because they can give us important informations on the reality that can also be practical, this is why computational complexity is also useful in practice because it is like a higher level abstractions that are not all the needed practical precision, but it is precision that is still useful in practice, this is why like probabilistic calculations i think computational complexity is also science. Read the rest of my previous thoughts to understand better: More on computational complexity.. Notice how Horand gassmann has answered in sci.math newsgroup: Horand gassmann wrote the following: "You are right, of course, on one level. An O(log n) algorithm is better than an O(n) algorithm *for large enough inputs*. Lemire understands that, and he addresses it in his blog. The important consideration is that _theoretical_ performance is a long way from _practical_ performance." And notice how what Lemire wrote about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. 
To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." So as you are noticing that both of them want to say that computational complexity is far from practical, but i don't agree with them, because time complexity is like material resistance, and it informs us on important practical things such as an algorithm of log(n) time complexity is like much more resistant than O(n) when n becomes large, and i think this kind of information of time complexity is practical, this is why i don't agree with Lemire and Horand gassmann, because as you notice that time complexity is scientific and it is also engineering. Read the rest of my post to understand more what i want to say: More precision about computational complexity, read again: I have just read the following webpage of a PhD Computer Scientist and researcher from Montreal, Canada where i am living now from year 1989, here is the webpage and read it carefully: Better computational complexity does not imply better speed https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/ And here is his resume: https://lemire.me/pdf/resume/resumelemire.pdf So as you are noticing on the webpage above he is saying the following about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." But i don't agree with him because i think he is not understanding the goal of computational complexity, because when we say that an algorithm has a time complexity of n*log(n), you have to understand that it is by logical analogy like saying in physics what is the material resistance, because the n*log(n) means how well the algorithm is "amortizing"(it means reducing) the "time" that it takes taking as a reference of measure the average time complexity of an algorithm, i said that the time complexity is like the material resistance in physics, because if the time complexity n grows large, it is in physics like a big force that you apply to the material that is by logical analogy the algorithm, so if the time complexity is log(n), so it will amortize the time that it takes very well, so it is in physics like the good material resistance that amortizes very well the big force that is applied to it, and we can easily notice that an algorithm becomes faster, in front of the data that is giving to him, by going from an exponential time complexity towards a logarithmic time complexity, so we are noticing that the time complexity is "universal" and it measures how well the algorithm amortizes(that means it reduces) the time that it takes taking as a reference of measure the average time complexity of an algorithm, so this is is why computational complexity is scientific and also it is engineering and it gives us information on the physical world. 
So to give an interesting example of science in computing, we can ask of what is the time complexity of a binary search algorithm, and here is my mathematical calculations of its time complexity: Recurrence relation of a binary search algorithm is: T(n)=T(n/2)+1 Because the "1" is like a comparison that we do in each step of the divide and conquer method of the binary search algorithm. So the calculation of the recurrence equation gives: 1st step=> T(n)=T(n/2) + 1 2nd step=> T(n/2)=T(n/4) + 1 ……[ T(n/4)= T(n/2^2) ] 3rd step=> T(n/4)=T(n/8) + 1 ……[ T(n/8)= T(n/2^3) ] . . kth step=> T(n/2^k-1)=T(n/2^k) + 1*(k times) Adding all the equations we get, T(n) = T(n/2^k) + k times 1 This is the final equation. So how many times we need to divide by 2 until we have only one element left? So it must be: n/2^k= 1 This gives: n=2^k this give: log n=k [taken log(base 2) on both sides ] Put k= log n in the final equation above and it gives: T(n) = T(1) + log n T(n) = 1 + log n [we know that T(1) = 1 , because it's a base condition as we are left with only one element in the array and that is the element to be searched so we return 1] So it gives: T(n) = O(log n) [taking dominant polynomial, which is n here) This is how we got "log n" time complexity for binary search. Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jan 11 01:05PM -0800 Hello, I correct a typo, here is my final post about computational complexity: I continu about computational complexity by being more and more rigorous, read again: I said previously(read below) that for example a time complexity such as n*(log(n)) and n are fuzzy, because we can say that n*(log(n) is an average resistance(read below to understand the analogy with material resistance) or we can say that n*log(n) is faster than if it was a quadratic complexity or exponential complexity, but we can not say giving a time complexity of n or n*log(n) how fast it is giving the input of the n of the time complexity, so since it is not exact prediction, so it is fuzzy, but this level of fuzziness, like in the example below of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict, since computational complexity can predict the resistance of the algorithm if it is high of low or average (by analogy with material resistance, read below to understand). Read my previous thoughts to understand: What is science? and is computational complexity science ? You just have seen me talking about computational complexity, but we need to answer the questions of: What is science ? and is computational complexity science ? I think that we have to be more smart because there is like higher level abstractions in science, and we can be in those abstractions exact precisions of science, but we can be more fuzzy precisions that are useful and that are also science, to understand me more, let me give you an example: If i say that a person is obese, so he has a high risk to get a disease because he is obese. Now you are understanding more that with this abstraction we are not exact precision, but we are more fuzzy , but this fuzziness is useful and its level of precision is also useful, but is it science ? i think that this probabilistic calculations are also science that permits us to predict that the obese person has a high risk to get a disease. And this probabilistic calculations are like a higher level abstractions that lack exact precision but they are still useful precisions. This is how look like computational complexity and its higher level abstractions, so you are immediately understanding that a time complexity of O(n*log(n)) or a O(n) is like a average level of resistance(read below to know why i am calling it resistance by analogy with material resistance) when n grows large, and we can immediately notice that an exponential time complexity is a low level resistance when n grows large, and we can immediately notice that a log(n) time complexity is a high level of resistance when n grows large, so those time complexities are like a higher level abstractions that are fuzzy but there fuzziness, like in the example above of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict. Read the rest of my previous thoughts to understand better: The why of computational complexity.. Here is my previous answer about computational complexity and the rest of my current answer is below: ===================================================================== Horand gassmann wrote: "Where your argument becomes impractical is in the statement "n becomes large". 
This is simply not precise enough for practical use. There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex? That would be interesting theoretically, but practically --- not so much." I don't agree, because take a look below at how i computed the binary search time complexity, it is a divide and conquer algorithm, and it is log(n), but we can notice that a log(n) is good when n becomes large, so this information is practical because a log(n) time complexity is excellent in practice when n becomes large, and when you look at an insertion sort you will notice that it is a quadratic time complexity of n^2, here again, you can feel that it is practical because an quadratic time complexity is not so good when n becomes large, so you can say that n^2 is not so good in practice when n becomes large, so as you are noticing having time complexities of log(n) and n^2 are useful in practice, and for the rest of the the time complexities you can also benchmark the algorithm in the real world to have an idea at how it is performing. ================================================================= I think i am understanding better Lemire and Horand gassmann, they say that if it is not exact needed practical precision, so it is not science or engineering, but i don't agree with this, because science and engineering can be like working with more higher level abstractions that are not exact needed practical precision calculations, but they can still be useful precision in practice, it is like being a fuzzy precision that is useful, this is why i think that probabilistic calculations are also scientific , because probabilistic calculations are useful in practice because they can give us important informations on the reality that can also be practical, this is why computational complexity is also useful in practice because it is like a higher level abstractions that are not all the needed practical precision, but it is precision that is still useful in practice, this is why like probabilistic calculations i think computational complexity is also science. Read the rest of my previous thoughts to understand better: More on computational complexity.. Notice how Horand gassmann has answered in sci.math newsgroup: Horand gassmann wrote the following: "You are right, of course, on one level. An O(log n) algorithm is better than an O(n) algorithm *for large enough inputs*. Lemire understands that, and he addresses it in his blog. The important consideration is that _theoretical_ performance is a long way from _practical_ performance." And notice how what Lemire wrote about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." 
So as you are noticing that both of them want to say that computational complexity is far from practical, but i don't agree with them, because time complexity is like material resistance, and it informs us on important practical things such as an algorithm of log(n) time complexity is like much more resistant than O(n) when n becomes large, and i think this kind of information of time complexity is practical, this is why i don't agree with Lemire and Horand gassmann, because as you notice that time complexity is scientific and it is also engineering. Read the rest of my post to understand more what i want to say: More precision about computational complexity, read again: I have just read the following webpage of a PhD Computer Scientist and researcher from Montreal, Canada where i am living now from year 1989, here is the webpage and read it carefully: Better computational complexity does not imply better speed https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/ And here is his resume: https://lemire.me/pdf/resume/resumelemire.pdf So as you are noticing on the webpage above he is saying the following about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." But i don't agree with him because i think he is not understanding the goal of computational complexity, because when we say that an algorithm has a time complexity of n*log(n), you have to understand that it is by logical analogy like saying in physics what is the material resistance, because the n*log(n) means how well the algorithm is "amortizing"(it means reducing) the "time" that it takes taking as a reference of measure the average time complexity of an algorithm, i said that the time complexity is like the material resistance in physics, because if the time complexity n grows large, it is in physics like a big force that you apply to the material that is by logical analogy the algorithm, so if the time complexity is log(n), so it will amortize the time that it takes very well, so it is in physics like the good material resistance that amortizes very well the big force that is applied to it, and we can easily notice that an algorithm becomes faster, in front of the data that is giving to him, by going from an exponential time complexity towards a logarithmic time complexity, so we are noticing that the time complexity is "universal" and it measures how well the algorithm amortizes(that means it reduces) the time that it takes taking as a reference of measure the average time complexity of an algorithm, so this is is why computational complexity is scientific and also it is engineering and it gives us information on the physical world. 
So to give an interesting example of science in computing, we can ask of what is the time complexity of a binary search algorithm, and here is my mathematical calculations of its time complexity: Recurrence relation of a binary search algorithm is: T(n)=T(n/2)+1 Because the "1" is like a comparison that we do in each step of the divide and conquer method of the binary search algorithm. So the calculation of the recurrence equation gives: 1st step=> T(n)=T(n/2) + 1 2nd step=> T(n/2)=T(n/4) + 1 ……[ T(n/4)= T(n/2^2) ] 3rd step=> T(n/4)=T(n/8) + 1 ……[ T(n/8)= T(n/2^3) ] . . kth step=> T(n/2^k-1)=T(n/2^k) + 1*(k times) Adding all the equations we get, T(n) = T(n/2^k) + k times 1 This is the final equation. So how many times we need to divide by 2 until we have only one element left? So it must be: n/2^k= 1 This gives: n=2^k this give: log n=k [taken log(base 2) on both sides ] Put k= log n in the final equation above and it gives: T(n) = T(1) + log n T(n) = 1 + log n [we know that T(1) = 1 , because it's a base condition as we are left with only one element in the array and that is the element to be searched so we return 1] So it gives: T(n) = O(log n) [taking dominant polynomial, which is n here) This is how we got "log n" time complexity for binary search. Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jan 11 01:01PM -0800 Hello, I continu about computational complexity by being more and more rigorous, read again: I said previously(read below) that for example a time complexity such as n*(log(n)) and n are fuzzy, because we can say that n*(log(n) is an average resistance(read below to understand the analogy with material resistance) or we can say that n*log(n) is faster than if it was a quadratic complexity or than an exponential complexity, but we can not say giving a time complexity of n or n*log(n) how fast it is giving the input of the n of the time complexity, so since it is not exact prediction, so it is fuzzy, but this level of fuzziness, like in the example below of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict, since computational complexity can predict the resistance of the algorithm if it is high of low or average (by analogy with material resistance, read below to understand). Read my previous thoughts to understand: What is science? and is computational complexity science ? You just have seen me talking about computational complexity, but we need to answer the questions of: What is science ? and is computational complexity science ? I think that we have to be more smart because there is like higher level abstractions in science, and we can be in those abstractions exact precisions of science, but we can be more fuzzy precisions that are useful and that are also science, to understand me more, let me give you an example: If i say that a person is obese, so he has a high risk to get a disease because he is obese. Now you are understanding more that with this abstraction we are not exact precision, but we are more fuzzy , but this fuzziness is useful and its level of precision is also useful, but is it science ? i think that this probabilistic calculations are also science that permits us to predict that the obese person has a high risk to get a disease. And this probabilistic calculations are like a higher level abstractions that lack exact precision but they are still useful precisions. This is how look like computational complexity and its higher level abstractions, so you are immediately understanding that a time complexity of O(n*log(n)) or a O(n) is like a average level of resistance(read below to know why i am calling it resistance by analogy with material resistance) when n grows large, and we can immediately notice that an exponential time complexity is a low level resistance when n grows large, and we can immediately notice that a log(n) time complexity is a high level of resistance when n grows large, so those time complexities are like a higher level abstractions that are fuzzy but there fuzziness, like in the example above of the obese person, permits us to predict important things in the reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permits us to predict. Read the rest of my previous thoughts to understand better: The why of computational complexity.. Here is my previous answer about computational complexity and the rest of my current answer is below: ===================================================================== Horand gassmann wrote: "Where your argument becomes impractical is in the statement "n becomes large". This is simply not precise enough for practical use. 
There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex? That would be interesting theoretically, but practically --- not so much." I don't agree, because take a look below at how i computed the binary search time complexity, it is a divide and conquer algorithm, and it is log(n), but we can notice that a log(n) is good when n becomes large, so this information is practical because a log(n) time complexity is excellent in practice when n becomes large, and when you look at an insertion sort you will notice that it is a quadratic time complexity of n^2, here again, you can feel that it is practical because an quadratic time complexity is not so good when n becomes large, so you can say that n^2 is not so good in practice when n becomes large, so as you are noticing having time complexities of log(n) and n^2 are useful in practice, and for the rest of the the time complexities you can also benchmark the algorithm in the real world to have an idea at how it is performing. ================================================================= I think i am understanding better Lemire and Horand gassmann, they say that if it is not exact needed practical precision, so it is not science or engineering, but i don't agree with this, because science and engineering can be like working with more higher level abstractions that are not exact needed practical precision calculations, but they can still be useful precision in practice, it is like being a fuzzy precision that is useful, this is why i think that probabilistic calculations are also scientific , because probabilistic calculations are useful in practice because they can give us important informations on the reality that can also be practical, this is why computational complexity is also useful in practice because it is like a higher level abstractions that are not all the needed practical precision, but it is precision that is still useful in practice, this is why like probabilistic calculations i think computational complexity is also science. Read the rest of my previous thoughts to understand better: More on computational complexity.. Notice how Horand gassmann has answered in sci.math newsgroup: Horand gassmann wrote the following: "You are right, of course, on one level. An O(log n) algorithm is better than an O(n) algorithm *for large enough inputs*. Lemire understands that, and he addresses it in his blog. The important consideration is that _theoretical_ performance is a long way from _practical_ performance." And notice how what Lemire wrote about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." 
So as you are noticing that both of them want to say that computational complexity is far from practical, but i don't agree with them, because time complexity is like material resistance, and it informs us on important practical things such as an algorithm of log(n) time complexity is like much more resistant than O(n) when n becomes large, and i think this kind of information of time complexity is practical, this is why i don't agree with Lemire and Horand gassmann, because as you notice that time complexity is scientific and it is also engineering. Read the rest of my post to understand more what i want to say: More precision about computational complexity, read again: I have just read the following webpage of a PhD Computer Scientist and researcher from Montreal, Canada where i am living now from year 1989, here is the webpage and read it carefully: Better computational complexity does not imply better speed https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/ And here is his resume: https://lemire.me/pdf/resume/resumelemire.pdf So as you are noticing on the webpage above he is saying the following about computational complexity: "But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically." But i don't agree with him because i think he is not understanding the goal of computational complexity, because when we say that an algorithm has a time complexity of n*log(n), you have to understand that it is by logical analogy like saying in physics what is the material resistance, because the n*log(n) means how well the algorithm is "amortizing"(it means reducing) the "time" that it takes taking as a reference of measure the average time complexity of an algorithm, i said that the time complexity is like the material resistance in physics, because if the time complexity n grows large, it is in physics like a big force that you apply to the material that is by logical analogy the algorithm, so if the time complexity is log(n), so it will amortize the time that it takes very well, so it is in physics like the good material resistance that amortizes very well the big force that is applied to it, and we can easily notice that an algorithm becomes faster, in front of the data that is giving to him, by going from an exponential time complexity towards a logarithmic time complexity, so we are noticing that the time complexity is "universal" and it measures how well the algorithm amortizes(that means it reduces) the time that it takes taking as a reference of measure the average time complexity of an algorithm, so this is is why computational complexity is scientific and also it is engineering and it gives us information on the physical world. 
So, to give an interesting example of science in computing, we can ask what the time complexity of a binary search algorithm is. Here is my mathematical calculation of its time complexity.

The recurrence relation of binary search is:

T(n) = T(n/2) + 1

The "+ 1" is the single comparison that we do in each step of the divide-and-conquer method of binary search.

Expanding the recurrence gives:

1st step => T(n) = T(n/2) + 1
2nd step => T(n/2) = T(n/4) + 1 ......[ n/4 = n/2^2 ]
3rd step => T(n/4) = T(n/8) + 1 ......[ n/8 = n/2^3 ]
.
.
kth step => T(n/2^(k-1)) = T(n/2^k) + 1

Adding all the equations, the intermediate terms cancel and we get:

T(n) = T(n/2^k) + k

This is the final equation. So how many times do we need to divide by 2 until we have only one element left? It must be that:

n/2^k = 1

This gives n = 2^k, and taking log base 2 on both sides gives k = log n.

Putting k = log n in the final equation above gives:

T(n) = T(1) + log n
T(n) = 1 + log n

[We know that T(1) = 1, because it is the base case: we are left with only one element in the array, and one last comparison checks whether it is the element we are searching for.]

So, keeping the dominant term, which is log n:

T(n) = O(log n)

This is how we get the "log n" time complexity of binary search.

Thank you,
Amine Moulay Ramdane.
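As a small check on the derivation above, here is a minimal Python sketch (my own illustration; the function name binary_search_count is hypothetical) of an iterative binary search that counts its comparisons, so you can verify that the count stays close to the predicted 1 + log2(n):

import math

# Minimal sketch: iterative binary search that counts comparisons,
# to check the T(n) = 1 + log2(n) prediction derived above.
def binary_search_count(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    comparisons = 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if sorted_items[mid] == target:
            return mid, comparisons
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

n = 1_000_000
data = list(range(n))
index, comps = binary_search_count(data, n - 1)  # search near the worst case
print(comps, "comparisons, predicted about", 1 + math.ceil(math.log2(n)))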
aminer68@gmail.com: Jan 11 12:01PM -0800

Hello,

Yet more rigorous about computational complexity..

I said previously (read below) that time complexities such as n*log(n) and n are fuzzy: we can say that n*log(n) is an average resistance (read below to understand the analogy with material resistance), or that n*log(n) is faster than a quadratic or an exponential complexity, but from a time complexity of n or n*log(n) alone we cannot say how fast the algorithm actually is for a given input n. Since it is not an exact prediction, it is fuzzy. But this level of fuzziness, like in the example below of the obese person, still permits us to predict important things about reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permit us to predict.

Read my previous thoughts to understand:

What is science? And is computational complexity science?

You have just seen me talking about computational complexity, but we need to answer these questions: what is science? And is computational complexity science?

I think we have to be smarter about this, because there are higher-level abstractions in science: within those abstractions we can have exact precision, but we can also have fuzzier precision that is still useful and that is also science. To make this clearer, let me give you an example: if I say that a person is obese, then he has a high risk of getting a disease because he is obese. With this abstraction we are not being exact, we are being fuzzy, but this fuzziness is useful and its level of precision is also useful. Is it science? I think this kind of probabilistic calculation is also science, because it permits us to predict that the obese person has a high risk of getting a disease. Such probabilistic calculations are higher-level abstractions that lack exact precision but still give useful precision.

This is what computational complexity and its higher-level abstractions look like. You immediately understand that a time complexity of O(n*log(n)) or O(n) is an average level of resistance (read below to know why I call it resistance, by analogy with material resistance) when n grows large, that an exponential time complexity is a low level of resistance when n grows large, and that a log(n) time complexity is a high level of resistance when n grows large. So those time complexities are higher-level abstractions that are fuzzy, but their fuzziness, like in the example above of the obese person, permits us to predict important things about reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permit us to predict.

Read the rest of my previous thoughts to understand better:

The why of computational complexity..

Here is my previous answer about computational complexity, and the rest of my current answer is below:

=====================================================================

Horand Gassmann wrote:

"Where your argument becomes impractical is in the statement "n becomes large". This is simply not precise enough for practical use. There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex?
That would be interesting theoretically, but practically --- not so much."

I don't agree. Take a look below at how I computed the time complexity of binary search: it is a divide-and-conquer algorithm, and its time complexity is log(n). A log(n) time complexity is excellent in practice when n becomes large, so that information is practical. Likewise, when you look at insertion sort you will notice that it has a quadratic time complexity of n^2, and a quadratic time complexity is not so good when n becomes large, so you can say that n^2 does not behave well in practice when n becomes large. So time complexities such as log(n) and n^2 are useful in practice, and for the other time complexities you can also benchmark the algorithm in the real world to get an idea of how it performs.

=================================================================

I think I am understanding Lemire and Horand Gassmann better: they say that if something does not give the exact precision needed in practice, then it is not science or engineering. I don't agree with this, because science and engineering can also work with higher-level abstractions that do not give exact practical precision but still give precision that is useful in practice; it is a fuzzy precision that is useful. This is why I think probabilistic calculations are also scientific: they are useful in practice because they give us important information about reality. In the same way, computational complexity is useful in practice: it is a higher-level abstraction that does not carry all the precision needed in practice, but the precision it does carry is still useful. This is why, like probabilistic calculations, I think computational complexity is also science.

Read the rest of my previous thoughts to understand better:

More on computational complexity..

Notice how Horand Gassmann answered in the sci.math newsgroup. He wrote the following:

"You are right, of course, on one level. An O(log n) algorithm is better than an O(n) algorithm *for large enough inputs*. Lemire understands that, and he addresses it in his blog. The important consideration is that _theoretical_ performance is a long way from _practical_ performance."

And notice what Lemire wrote about computational complexity:

"But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically."
So, as you can see, both of them want to say that computational complexity is far from practical. I don't agree with them, because time complexity is like material resistance: it informs us about important practical things, for example that an algorithm with log(n) time complexity is much more resistant than an O(n) algorithm when n becomes large. I think this kind of information is practical, and this is why I don't agree with Lemire and Horand Gassmann: time complexity is scientific and it is also engineering.

Read the rest of my post to understand more of what I want to say:

More precision about computational complexity, read again:

I have just read the following webpage of a PhD computer scientist and researcher from Montreal, Canada, where I have been living since 1989. Here is the webpage, read it carefully:

Better computational complexity does not imply better speed
https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/

And here is his resume:
https://lemire.me/pdf/resume/resumelemire.pdf

As you can see on the webpage above, he says the following about computational complexity:

"But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically."

I don't agree with him, because I think he is missing the goal of computational complexity. When we say that an algorithm has a time complexity of n*log(n), it is, by logical analogy, like stating the material resistance in physics: the n*log(n) expresses how well the algorithm "amortizes" (that is, reduces) the time it takes, taking the average time complexity of an algorithm as the reference of measure. I say that time complexity is like material resistance in physics because when n grows large it is like a big force applied to the material, which is, by analogy, the algorithm. If the time complexity is log(n), the algorithm amortizes the time it takes very well; in physics this is like a good material resistance that absorbs a big force very well. And we can easily notice that an algorithm becomes faster, for the data given to it, by going from an exponential time complexity towards a logarithmic time complexity. So time complexity is "universal", and it measures how well the algorithm amortizes (that is, reduces) the time it takes relative to the average time complexity of an algorithm. This is why computational complexity is scientific, it is also engineering, and it gives us information about the physical world.
So, to give an interesting example of science in computing, we can ask what the time complexity of a binary search algorithm is. Here is my mathematical calculation of its time complexity.

The recurrence relation of binary search is:

T(n) = T(n/2) + 1

The "+ 1" is the single comparison that we do in each step of the divide-and-conquer method of binary search.

Expanding the recurrence gives:

1st step => T(n) = T(n/2) + 1
2nd step => T(n/2) = T(n/4) + 1 ......[ n/4 = n/2^2 ]
3rd step => T(n/4) = T(n/8) + 1 ......[ n/8 = n/2^3 ]
.
.
kth step => T(n/2^(k-1)) = T(n/2^k) + 1

Adding all the equations, the intermediate terms cancel and we get:

T(n) = T(n/2^k) + k

This is the final equation. So how many times do we need to divide by 2 until we have only one element left? It must be that:

n/2^k = 1

This gives n = 2^k, and taking log base 2 on both sides gives k = log n.

Putting k = log n in the final equation above gives:

T(n) = T(1) + log n
T(n) = 1 + log n

[We know that T(1) = 1, because it is the base case: we are left with only one element in the array, and one last comparison checks whether it is the element we are searching for.]

So, keeping the dominant term, which is log n:

T(n) = O(log n)

This is how we get the "log n" time complexity of binary search.

Thank you,
Amine Moulay Ramdane.
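On Gassmann's break-even point n_0 quoted above: Big-O does hide the constant factors, but once you assume (or measure) the constants of two competing implementations, the crossover can be estimated numerically. Here is a minimal Python sketch with made-up constants c_log and c_lin (assumptions for illustration only, not measurements of any real code):

import math

# Minimal sketch: find the break-even input size n_0 where an algorithm costing
# roughly c_log * log2(n) becomes cheaper than one costing roughly c_lin * n.
# The constants below are made up purely to illustrate the idea.
def break_even(c_log, c_lin, n_max=10**7):
    for n in range(2, n_max + 1):
        if c_log * math.log2(n) < c_lin * n:
            return n
    return None  # no crossover found below n_max

# Example: the logarithmic algorithm has a 100x larger constant factor.
print(break_even(c_log=100.0, c_lin=1.0))

The Big-O tells you which side wins eventually, while measuring the constants (by benchmarking, as suggested above) tells you where the crossover actually lies.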
aminer68@gmail.com: Jan 11 07:38AM -0800

Hello..

I correct a mistake in my post: I mean that insertion sort, with its time complexity of n^2, is quadratic. Read again:

What is science? And is computational complexity science?

You have just seen me talking about computational complexity, but we need to answer these questions: what is science? And is computational complexity science?

I think we have to be smarter about this, because there are higher-level abstractions in science: within those abstractions we can have exact precision, but we can also have fuzzier precision that is still useful and that is also science. To make this clearer, let me give you an example: if I say that a person is obese, then he has a high risk of getting a disease because he is obese. With this abstraction we are not being exact, we are being fuzzy, but this fuzziness is useful and its level of precision is also useful. Is it science? I think this kind of probabilistic calculation is also science, because it permits us to predict that the obese person has a high risk of getting a disease. Such probabilistic calculations are higher-level abstractions that lack exact precision but still give useful precision.

This is what computational complexity and its higher-level abstractions look like. You immediately understand that a time complexity of O(n*log(n)) or O(n) is an average level of resistance (read below to know why I call it resistance, by analogy with material resistance) when n grows large, that an exponential time complexity is a low level of resistance when n grows large, and that a log(n) time complexity is a high level of resistance when n grows large. So those time complexities are higher-level abstractions that are fuzzy, but their fuzziness, like in the example above of the obese person, permits us to predict important things about reality, and this level of fuzziness of computational complexity is also science, because it is like probability calculations that permit us to predict.

Read the rest of my previous thoughts to understand better:

The why of computational complexity..

Here is my previous answer about computational complexity, and the rest of my current answer is below:

=====================================================================

Horand Gassmann wrote:

"Where your argument becomes impractical is in the statement "n becomes large". This is simply not precise enough for practical use. There is a break-even point, call it n_0, but it cannot be computed from the Big-O alone. And even if you can compute n_0, what if it turns out that the breakeven point is larger than a googolplex? That would be interesting theoretically, but practically --- not so much."
I don't agree. Take a look below at how I computed the time complexity of binary search: it is a divide-and-conquer algorithm, and its time complexity is log(n). A log(n) time complexity is excellent in practice when n becomes large, so that information is practical. Likewise, when you look at insertion sort you will notice that it has a quadratic time complexity of n^2, and a quadratic time complexity is not so good when n becomes large, so you can say that n^2 does not behave well in practice when n becomes large. So time complexities such as log(n) and n^2 are useful in practice, and for the other time complexities you can also benchmark the algorithm in the real world to get an idea of how it performs.

=================================================================

I think I am understanding Lemire and Horand Gassmann better: they say that if something does not give the exact precision needed in practice, then it is not science or engineering. I don't agree with this, because science and engineering can also work with higher-level abstractions that do not give exact practical precision but still give precision that is useful in practice; it is a fuzzy precision that is useful. This is why I think probabilistic calculations are also scientific: they are useful in practice because they give us important information about reality. In the same way, computational complexity is useful in practice: it is a higher-level abstraction that does not carry all the precision needed in practice, but the precision it does carry is still useful. This is why, like probabilistic calculations, I think computational complexity is also science.

Read the rest of my previous thoughts to understand better:

More on computational complexity..

Notice how Horand Gassmann answered in the sci.math newsgroup. He wrote the following:

"You are right, of course, on one level. An O(log n) algorithm is better than an O(n) algorithm *for large enough inputs*. Lemire understands that, and he addresses it in his blog. The important consideration is that _theoretical_ performance is a long way from _practical_ performance."

And notice what Lemire wrote about computational complexity:

"But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically."

So, as you can see, both of them want to say that computational complexity is far from practical. I don't agree with them, because time complexity is like material resistance: it informs us about important practical things, for example that an algorithm with log(n) time complexity is much more resistant than an O(n) algorithm when n becomes large. I think this kind of information is practical, and this is why I don't agree with Lemire and Horand Gassmann: time complexity is scientific and it is also engineering.
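Since the argument above mentions insertion sort and real-world benchmarking, here is a minimal Python sketch (my own illustration, not a rigorous benchmark) that counts the comparisons insertion sort makes on random inputs of growing size; the count roughly quadrupling when n doubles is the quadratic behaviour described above:

import random

# Minimal sketch: count the comparisons made by insertion sort on random input.
# Doubling n should roughly quadruple the count, which is the quadratic growth
# discussed above (an illustration, not a rigorous benchmark).
def insertion_sort_count(items):
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

for n in (1_000, 2_000, 4_000):
    data = [random.random() for _ in range(n)]
    print(n, insertion_sort_count(data))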
Read the rest of my post to understand more of what I want to say:

More precision about computational complexity, read again:

I have just read the following webpage of a PhD computer scientist and researcher from Montreal, Canada, where I have been living since 1989. Here is the webpage, read it carefully:

Better computational complexity does not imply better speed
https://lemire.me/blog/2019/11/26/better-computational-complexity-does-not-imply-better-speed/

And here is his resume:
https://lemire.me/pdf/resume/resumelemire.pdf

As you can see on the webpage above, he says the following about computational complexity:

"But it gets worse: these are not scientific models. A scientific model would predict the running time of the algorithm given some implementation, within some error margin. However, these models do nothing of the sort. They are purely mathematical. They are not falsifiable. As long as they are mathematically correct, then they are always true. To be fair, some researchers like Knuth came up with models that closely mimic reasonable computers, but that's not what people pursuing computational complexity bounds do, typically."

I don't agree with him, because I think he is missing the goal of computational complexity. When we say that an algorithm has a time complexity of n*log(n), it is, by logical analogy, like stating the material resistance in physics: the n*log(n) expresses how well the algorithm "amortizes" (that is, reduces) the time it takes, taking the average time complexity of an algorithm as the reference of measure. I say that time complexity is like material resistance in physics because when n grows large it is like a big force applied to the material, which is, by analogy, the algorithm. If the time complexity is log(n), the algorithm amortizes the time it takes very well; in physics this is like a good material resistance that absorbs a big force very well. And we can easily notice that an algorithm becomes faster, for the data given to it, by going from an exponential time complexity towards a logarithmic time complexity. So time complexity is "universal", and it measures how well the algorithm amortizes (that is, reduces) the time it takes relative to the average time complexity of an algorithm. This is why computational complexity is scientific, it is also engineering, and it gives us information about the physical world.

So, to give an interesting example of science in computing, we can ask what the time complexity of a binary search algorithm is. Here is my mathematical calculation of its time complexity.

The recurrence relation of binary search is:

T(n) = T(n/2) + 1

The "+ 1" is the single comparison that we do in each step of the divide-and-conquer method of binary search.

Expanding the recurrence gives:

1st step => T(n) = T(n/2) + 1
2nd step => T(n/2) = T(n/4) + 1 ......[ n/4 = n/2^2 ]
3rd step => T(n/4) = T(n/8) + 1 ......[ n/8 = n/2^3 ]
.
.
kth step => T(n/2^(k-1)) = T(n/2^k) + 1

Adding all the equations, the intermediate terms cancel and we get:

T(n) = T(n/2^k) + k

This is the final equation. So how many times do we need to divide by 2 until we have only one element left?
It must be that:

n/2^k = 1

This gives n = 2^k, and taking log base 2 on both sides gives k = log n.

Putting k = log n in the final equation above gives:

T(n) = T(1) + log n
T(n) = 1 + log n

[We know that T(1) = 1, because it is the base case: we are left with only one element in the array, and one last comparison checks whether it is the element we are searching for.]

So, keeping the dominant term, which is log n:

T(n) = O(log n)

This is how we get the "log n" time complexity of binary search.

Thank you,
Amine Moulay Ramdane.
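To double-check the closed form T(n) = 1 + log2(n) derived above, here is a tiny Python sketch (my own check, assuming n is a power of two so that the halving is exact) that evaluates the recurrence T(n) = T(n/2) + 1 directly and compares it with the formula:

import math

# Tiny sketch: evaluate the recurrence T(n) = T(n/2) + 1 with T(1) = 1 directly,
# and compare it with the closed form 1 + log2(n). Assumes n is a power of two.
def T(n):
    if n == 1:
        return 1
    return T(n // 2) + 1

for n in (1, 2, 8, 1024, 2**20):
    print(n, T(n), 1 + int(math.log2(n)))

Both columns agree, which is just a numerical restatement of the derivation above.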