Sunday, April 24, 2016

Digest for comp.programming.threads@googlegroups.com - 6 updates in 4 topics

Ramine <ramine@1.1>: Apr 23 05:40PM -0700

Hello,
 
 
Professor Jeremy O'Brien, Director of the Centre for Quantum Photonics
at the University of Bristol, predicts: "In less than ten years quantum
computers will begin to outperform everyday computers, leading to
breakthroughs in artificial intelligence, the discovery of new
pharmaceuticals and beyond. The very fast computing power given by
quantum computers has the potential to disrupt traditional businesses
and challenge our cyber-security. Businesses need to be ready for a
quantum future because it's coming."
 
 
Please read the article and watch the video here; this is real and it will happen!
 
http://www.enterrasolutions.com/2016/03/are-we-in-the-stretch-run-towards-quantum-computing.html
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Apr 23 03:43PM -0700

Hello..
 
 
Nano-Biological Computing – Quantum Computer Alternative!
 
https://www.youtube.com/watch?v=xcHcNyC6O84
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Apr 23 02:24PM -0700

Hello...
 
I will make the USL methodology clear...
 
Using the USL methodology is like playing a probability game...
 
From my previous proof I can say that the USL methodology, which uses
nonlinear regression, is like playing a probability game...
 
Because with the USL methodology, in the much greater part of cases the
fit will probabilistically hold, giving us the possibility of
forecasting up to 10X the maximum number of cores and threads in the
performance data measurements, so it is a better approximation.
 
And in a much smaller part of cases the fit will only hold for
forecasting up to 5X the maximum number of cores and threads in the
performance data measurements.
 
So forecasting up to 10X the maximum number of cores and threads of the
performance data measurements is the limit of the USL methodology. So if
you want to optimize the criterion of the cost, forecast up to 10X the
maximum number of cores and threads of your performance measurements and
look at the tendency. If it says that you can scale more and more, for
example on a NUMA architecture, then when you buy bigger NUMA systems,
make sure that you buy them with the right configuration that permits
adding more processors and more memory. Then go step by step, buying
more processors and memory, and at each step you will be able to test
again empirically the NUMA system that you have bought with my USL
programs, to better forecast the scalability farther and to optimize the
criterion of the cost further. So as you have noticed, my USL programs
are great tools and important tools!
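 
The fit-then-forecast procedure described above can be sketched in a few
lines. This is only a minimal illustration in Python, not the author's
Delphi/FreePascal programs, and the measurement values are hypothetical.
It uses Gunther's USL model, C(N) = N / (1 + alpha*(N-1) + beta*N*(N-1)),
which can be linearized so that plain least squares recovers the two
coefficients; the fitted model is then evaluated at 10X the largest
measured core count:
 
```python
# Sketch: fit the USL to measured (cores, relative capacity) points,
# then forecast up to 10X the largest measured core count.
# Rearranging the model gives N/C(N) - 1 = alpha*(N-1) + beta*N*(N-1),
# which is linear in (alpha, beta), so the 2x2 normal equations suffice.

def usl(n, alpha, beta):
    """Universal Scalability Law: relative capacity at n cores/threads."""
    return n / (1.0 + alpha * (n - 1) + beta * n * (n - 1))

def fit_usl(measurements):
    """measurements: list of (cores, relative_capacity) pairs.
    Returns (alpha, beta) via least squares on the linearized model."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for n, c in measurements:
        if n == 1:
            continue  # the baseline point (1, 1) carries no information
        x1 = n - 1.0          # contention term
        x2 = n * (n - 1.0)    # coherency term
        y = n / c - 1.0
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        b1 += x1 * y;   b2 += x2 * y
    det = s11 * s22 - s12 * s12
    alpha = (b1 * s22 - b2 * s12) / det
    beta = (s11 * b2 - s12 * b1) / det
    return alpha, beta

# Hypothetical measurements up to 16 cores, generated here from
# alpha=0.05 (contention) and beta=0.001 (coherency delay):
data = [(n, usl(n, 0.05, 0.001)) for n in (1, 2, 4, 8, 16)]
a, b = fit_usl(data)

# Forecast at 10X the largest measured core count, as the text suggests:
forecast = usl(160, a, b)
```
 
Whether such a forecast can be trusted out to 10X, rather than only 5X,
is exactly the probability game discussed above; re-fitting after each
step of added hardware keeps the extrapolation honest.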
 
I have included the 32-bit and 64-bit Windows executables of my
programs inside the zip file to make the job easy for you.
 
You can download my USL programs version 3.0 with the source code from:
 
https://sites.google.com/site/aminer68/universal-scalability-law-for-delphi-and-freepascal
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
