- Read again, i correct a last typo.. - 1 Update
- About poetry and concurrent programming.. - 1 Update
- This was my last poem here in this forum.. - 2 Updates
- I love your "La Isla Bonita" my beautiful Madonna - 1 Update
- I am loving those lovely Salsa Songs - 1 Update
- I have just extended a little bit my poem of Love - 1 Update
- Ain't No Mountain High Enough - 1 Update
aminer68@gmail.com: Sep 24 01:37PM -0700

Hello,

Read again, I corrected a last typo..

About poetry and concurrent programming..

As you know, I am specialized in parallel programming and synchronization algorithms, so why am I writing and posting my kind of poetry? It is about expressiveness: there are constraints on expressiveness, and when you know them and know how to avoid or transcend them, you will better feel what I am doing in parallel programming and synchronization algorithms, since I have invented many scalable algorithms that also transcend many constraints. This is part of exponential progress and of the law of accelerating returns that will soon make us so powerful. In poetry too you have to transcend constraints to come up with something beautiful, and this is why I have decided to give great importance to "readability" in poetry and to choose a level of sophistication that makes my poetry beautiful. As an example, look at my following poem to notice it:

========================================================

I will share with you this beautiful music of Strunz & Farah (Zumba) with my poem of Love below:

https://www.youtube.com/watch?v=5zHGfjf1YdY

Here is my poem of Love:

I don't know much
But I am with your love getting in touch

I don't know much
But your love is not a judge

I don't know much
But like one we are like a so beautiful bridge

I don't know much
But our love is consciousness of being truly rich

I don't know much
But beauty is our forever niche

I don't know much
But our love is like love that forever enrich

I don't know much
So don't ask me why or which

I don't know much
But our love is perfection without a glitch !

Thank you,
Amine Moulay Ramdane.
============================================

This is how I make my poetry so as to transcend and come up with something beautiful, and this is how I transcend in parallel programming too, so read my following thoughts to notice it:

Today I will talk about data dependency and parallel loops..

For a loop to be parallelized, every iteration must be independent of the others. One way to check this is to execute the loop in the direction of the incremented index and then in the direction of the decremented index, and verify that the results are the same. A data dependency happens when memory is modified: a loop has a data dependency if one iteration writes a variable that is read or written in another iteration of the loop. There is no data dependency if only one iteration reads or writes a variable, or if many iterations read the same variable without modifying it. Those are the general rules.

Now you still have to know how to construct the parallel for loop when there is an induction variable or a reduction operation; I will give an example of each.

If we have the following (the code looks like Algol or modern Object Pascal):

IND:=0;
For I:=1 to N Do
Begin
  IND:=IND+I;
  A[I]:=B[IND];
End;

then, since IND is an induction variable, to parallelize the loop you replace it with its closed form, so that every iteration is independent:

For I:=1 to N Do
Begin
  IND:=(I*(I+1)) div 2;
  A[I]:=B[IND];
End;

Now for the reduction operation example: my invention that is my Threadpool with priorities that scales very well (read about it below) comes with a Parallel For that scales very well and supports a "grainsize", and that grainsize can be used in ParallelFor() together with a reduction operation and my powerful scalable Adder, here it is:

https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal

So here is the example with a
reduction operation in modern Object Pascal:

TOTAL:=0.0;
For I:=1 to N Do
Begin
  TOTAL:=TOTAL+A[I];
End;

With my powerful scalable Adder and my powerful invention that is my ParallelFor() that scales very well, you parallelize the above like this:

procedure test1(j:integer;ptr:pointer);
begin
  t.add(A[j]); // "t" is my scalable Adder object
end;

// Let's suppose that N is 100000
// In the following, 10000 is the grainsize
obj.ParallelFor(1,N,test1,10000,pointer(0));
TOTAL:=t.get();

And read the following to understand how to use the grainsize of my ParallelFor() that scales well, which uses my efficient Threadpool that scales very well. With ParallelFor() you have to:

1- Ensure sufficient work. Each iteration of a loop involves a certain amount of work, so you have to ensure there is enough of it; read below about the "grainsize" that I have implemented.

2- Understand the scheduling. In OpenMP, one basic characteristic of a loop schedule is whether it is static or dynamic:

• In a static schedule, the choice of which thread performs a particular iteration is purely a function of the iteration number and the number of threads. Each thread performs only the iterations assigned to it at the beginning of the loop.

• In a dynamic schedule, the assignment of iterations to threads can vary at runtime from one execution to another. Not all iterations are assigned to threads at the start of the loop; instead, each thread requests more iterations after it has completed the work already assigned to it.

My ParallelFor(), since it uses my efficient Threadpool that scales very well, uses round-robin scheduling together with work stealing, so I think that this is sufficient.
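As an aside, since the snippets above depend on my Delphi/FreePascal library, here is a minimal, language-neutral sketch in Python of the same two transformations: the induction-variable closed form, and a grainsize-chunked reduction in which a simple lock-based adder stands in for my scalable Adder. The names Adder, body and parallel_sum are only illustrative, they are not part of the library:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

# Induction variable: IND:=IND+I is replaced by its closed form
# I*(I+1) div 2, so every iteration is independent of the others.
N = 1000
B = list(range(N * (N + 1) // 2 + 1))  # hypothetical input array
A = [0] * (N + 1)

def body(i):
    ind = i * (i + 1) // 2  # closed form of the induction variable
    A[i] = B[ind]

# Because the iterations are now independent, they may run in parallel:
with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(body, range(1, N + 1)))

# Reduction: a lock-based adder stands in for the scalable Adder.
class Adder:
    def __init__(self):
        self._lock = threading.Lock()
        self._total = 0.0

    def add(self, x):
        with self._lock:
            self._total += x

    def get(self):
        return self._total

def parallel_sum(data, grainsize, workers=4):
    t = Adder()

    def chunk(lo, hi):
        # Sum a private chunk first, then do one shared add per chunk,
        # so the lock is taken only once per grainsize iterations.
        t.add(sum(data[lo:hi]))

    with ThreadPoolExecutor(max_workers=workers) as ex:
        for lo in range(0, len(data), grainsize):
            ex.submit(chunk, lo, min(lo + grainsize, len(data)))
    return t.get()
```

For example, parallel_sum([float(i) for i in range(1, 100001)], 10000) splits the loop into ten chunks of the grainsize 10000, mirroring the obj.ParallelFor(1,N,test1,10000,pointer(0)) call above.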
Read the rest:

My Threadpool engine with priorities is really powerful because it scales very well on multicore and NUMA systems, and it comes with a ParallelFor() that also scales very well on multicores and NUMA systems. You can download it from:

https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well

Here is the explanation of my ParallelFor() that scales very well; here is the method:

procedure ParallelFor(nMin,nMax:integer; aProc:TParallelProc; GrainSize:integer=1; Ptr:pointer=nil; pmode:TParallelMode=pmBlocking; Priority:TPriorities=NORMAL_PRIORITY);

The nMin and nMax parameters of ParallelFor() are the minimum and maximum integer values of the loop variable, the aProc parameter is the procedure to call, and the GrainSize integer parameter works as follows:

The grainsize sets a minimum threshold for parallelization. A rule of thumb is that one grainsize's worth of iterations should take at least 100,000 clock cycles to execute. For example, if a single iteration takes 100 clocks, then the grainsize needs to be at least 1000 iterations. When in doubt, do the following experiment:

1- Set the grainsize parameter higher than necessary. The grainsize is specified in units of loop iterations. If you have no idea how many clock cycles an iteration might take, start with grainsize=100,000; the rationale is that each iteration normally requires at least one clock. In most cases, step 3 will guide you to a much smaller value.

2- Run your algorithm.

3- Iteratively halve the grainsize parameter and see how much the algorithm slows down or speeds up as the value decreases.

A drawback of setting the grainsize too high is that it can reduce parallelism.
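This drawback can be made concrete with a tiny sketch (the function names here are only illustrative, not part of my library): the grainsize fixes how many independent chunks a ParallelFor-style splitting can produce, and no schedule can keep more workers busy than there are chunks.

```python
import math

def chunk_count(n_iterations, grainsize):
    # Each chunk holds up to `grainsize` consecutive iterations.
    return math.ceil(n_iterations / grainsize)

def max_parallelism(n_iterations, grainsize, n_workers):
    # You can never keep more workers busy than there are chunks.
    return min(n_workers, chunk_count(n_iterations, grainsize))
```

Halving the grainsize, as in step 3 of the experiment, doubles the number of chunks and so can raise the available parallelism.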
For example, if the grainsize is 1000 and the loop has 2000 iterations, the ParallelFor() method distributes the loop across only two processors, even if more are available. You can pass a parameter to ParallelFor() through Ptr; you can set the pmode parameter to pmBlocking so that ParallelFor() is blocking, or to pmNonBlocking so that it is non-blocking; and the Priority parameter is the priority of ParallelFor(). Look inside the test.pas example to see how to use it.

Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Sep 24 11:41AM -0700 Hello, This was my last poem here in this forum.. I will from now on write just about concurrent programming.. Thank you, Amine Moulay Ramdane. |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 24 08:51PM +0200 > Hello, > This was my last poem here in this forum.. > I will from now on write just about concurrent programming.. You need a doctor! |
aminer68@gmail.com: Sep 24 11:39AM -0700

Hello..

I was just listening to the following song of Madonna called "La Isla Bonita", here it is:

https://www.youtube.com/watch?v=qqIIW7nxBgc

So I have just decided to write a beautiful poem about it, here it is:

I love your "La Isla Bonita" my beautiful Madonna
Since it is not small but great as a nirvana !

I love your "La Isla Bonita" song my beautiful Madonna
Since even if it is like a wild Savanna, we are strong as Che Guevara !

I love your "La Isla Bonita" song my beautiful Madonna
So let us not be the drama and come to me with Love and "Esperança" !

I love your "La Isla Bonita" song my beautiful Madonna
Since you are also my beautiful cup of Vodka !

I love your "La Isla Bonita" song my beautiful Madonna
Since I am feeling it like a beautiful Sauna

I love your "La Isla Bonita" song my beautiful Madonna
Since I am feeling it like Marijuana that keeps away the trauma !

I love your "La Isla Bonita" song my beautiful Madonna
So from where comes our "Saga" ? From a beautiful Sun or a beautiful Samba ?

I love your "La Isla Bonita" song my beautiful Madonna
Since our love is like our beautiful mama and our beautiful papa !

Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Sep 24 11:39AM -0700

Hello,

I was just listening to this beautiful song of Marc Anthony that I think sounds like a Salsa song, here it is:

https://www.youtube.com/watch?v=VMp55KH_3wo

So I have decided to write a poem about it, here it is:

I am loving those lovely Salsa Songs
They are like a beautiful extraterrestrial phenomenon

I am loving those lovely Salsa Songs
It is like Marc Anthony's beautiful "La Gozadera" song

I am loving those lovely Salsa Songs
It is like Jesus Christ that is reborn !

I am loving those lovely Salsa Songs
Since they can suit us in any form

I am loving those lovely Salsa Songs
They are making even the US Pentagon dance !

I am loving those lovely Salsa Songs
So come dance with us and stop eating the popcorn !

I am loving those lovely Salsa Songs
They are as I am loving my beautiful mom !

I am loving those lovely Salsa Songs
They are as it is in the name of God and the Son !

Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Sep 24 06:54AM -0700

Hello,

I have just extended my poem of Love a little bit, please read it again:

I was just listening to the following song:

https://www.youtube.com/watch?v=hajBdDM2qdg

So I have just decided to write a beautiful poem of Love, here it is:

Ain't No Mountain High Enough
Since we are flying like a beautiful white Dove

Ain't No Mountain High Enough
Since we are the Peace and Love

Ain't No Mountain High Enough
So let us play beautifully on the drums of Love

Ain't No Mountain High Enough
Since we are not of the evil and the corrupt

Ain't No Mountain High Enough
Since we are the beautiful song that we trust

Ain't No Mountain High Enough
Since our Love is as we are speaking many tongues !

Ain't No Mountain High Enough
Since even angels are looking at us with love from above

Ain't No Mountain High Enough
Since our beautiful love is a beautiful star !

Ain't No Mountain High Enough
Since our love is thus making us beautifully drunk !

Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Sep 24 05:38AM -0700

Hello,

I was just listening to the following song:

https://www.youtube.com/watch?v=hajBdDM2qdg

So I have just decided to write a beautiful poem of Love, here it is:

Ain't No Mountain High Enough
Since we are flying like a beautiful white Dove

Ain't No Mountain High Enough
Since we are the Peace and Love

Ain't No Mountain High Enough
So let us play beautifully on the drums of Love

Ain't No Mountain High Enough
Since we are not of the evil and the corrupt

Ain't No Mountain High Enough
Since we are the beautiful song that we trust

Ain't No Mountain High Enough
Since our Love is as we are speaking many tongues !

Ain't No Mountain High Enough
Since our beautiful love is a beautiful star !

Ain't No Mountain High Enough
Thus our love is making us beautifully drunk !

Thank you,
Amine Moulay Ramdane. |
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com. |