Thursday, February 18, 2021

Digest for comp.programming.threads@googlegroups.com - 5 updates in 5 topics

Amine Moulay Ramdane <aminer68@gmail.com>: Feb 17 04:08PM -0800

Hello,
 
 
Here are my new powerful inventions of scalable algorithms..
 
I am a white arab, and i think i am smart since i have also
invented many scalable algorithms and algorithms..
 
I have just updated my powerful inventions LW_Fast_RWLockX and Fast_RWLockX, two powerful scalable RWLocks that are FIFO fair and starvation-free and costless on the reader side (that means with no atomics and no fences on the reader side). They use sys_membarrier() expedited on Linux and FlushProcessWriteBuffers() on Windows, and they now work on both Linux and Windows. I think my inventions are really smart, since the following PhD researcher says:
 
"Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees;"
 
Read more here:
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html
 
So as you have just noticed he says the following:
 
"Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees;"
 
So i think that my above powerful inventions of scalable reader-writer locks are efficient and FIFO fair and Starvation-free.
 
LW_Fast_RWLockX is a lightweight scalable Reader-Writer Mutex that uses a technique that looks like Seqlock, but without looping on the reader side as Seqlock does, and this is what permits the reader side to be costless; it is fair and of course starvation-free, and it does spin-wait. Fast_RWLockX is a lightweight scalable Reader-Writer Mutex that uses the same technique, so its reader side is also costless; it is fair and of course starvation-free, and it does not spin-wait but waits on my SemaMonitor, so it is energy efficient.
 
You can read about them and download them from my website here:
 
https://sites.google.com/site/scalable68/scalable-rwlock
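
To make the "costless reader side" idea above concrete, here is a minimal C++ sketch of the general asymmetric technique that such locks are built on: readers publish themselves with plain stores and no fences, and the writer pays instead for a process-wide barrier (sys_membarrier() on Linux, FlushProcessWriteBuffers() on Windows). It is an illustration only, it is not my LW_Fast_RWLockX or Fast_RWLockX code, and it omits FIFO fairness, starvation-freedom, writer/writer exclusion and the spin/wait policies; all names in it are made up for the example.

// Simplified asymmetric reader-writer lock sketch (illustration only).
// Assumes the process issued MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED at
// startup on Linux; writer/writer exclusion is omitted.
#include <atomic>

#ifdef _WIN32
#include <windows.h>
static void process_wide_barrier() { FlushProcessWriteBuffers(); }
#else
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
static void process_wide_barrier() {
    syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
}
#endif

constexpr int kMaxReaders = 64;   // illustrative fixed number of reader slots

struct AsymmetricRWLock {
    std::atomic<int> reader_active[kMaxReaders];
    std::atomic<int> writer_pending{0};

    AsymmetricRWLock() {
        for (auto& r : reader_active) r.store(0, std::memory_order_relaxed);
    }

    void read_lock(int slot) {
        for (;;) {
            reader_active[slot].store(1, std::memory_order_relaxed);   // plain store, no fence
            if (writer_pending.load(std::memory_order_relaxed) == 0)
                return;                                                // fast path: no writer
            // A writer may be in progress: retract, wait, retry.
            reader_active[slot].store(0, std::memory_order_relaxed);
            while (writer_pending.load(std::memory_order_relaxed) != 0) { /* spin or block */ }
        }
    }

    void read_unlock(int slot) {
        reader_active[slot].store(0, std::memory_order_release);
    }

    void write_lock() {
        writer_pending.store(1, std::memory_order_seq_cst);
        // Force a barrier on every running thread of the process, so each reader
        // either observes writer_pending == 1 or has its slot store visible here.
        process_wide_barrier();
        for (int i = 0; i < kMaxReaders; ++i)
            while (reader_active[i].load(std::memory_order_acquire) != 0) { /* spin */ }
    }

    void write_unlock() {
        writer_pending.store(0, std::memory_order_release);
    }
};

The cost asymmetry is the whole point: on the fast path a reader executes only plain stores and one plain load, while the writer absorbs the expensive process-wide barrier, which is acceptable when reads vastly outnumber writes.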
 
Also my other inventions are the following scalable RWLocks that are
FIFO fair and starvation-free:
 
Here is my invention of a scalable and starvation-free and FIFO fair and lightweight Multiple-Readers-Exclusive-Writer Lock called LW_RWLockX; it works across processes and threads:
 
https://sites.google.com/site/scalable68/scalable-rwlock-that-works-accross-processes-and-threads
 
And here are my inventions of new variants of scalable RWLocks that are FIFO fair and starvation-free:
 
https://sites.google.com/site/scalable68/new-variants-of-scalable-rwlocks
 
More about the energy efficiency of Transactional memory and more..
 
I have just read the following PhD paper, which is also about the energy efficiency of Transactional memory; here it is:
 
Techniques for Enhancing the Efficiency of Transactional Memory Systems
 
http://kth.diva-portal.org/smash/get/diva2:1258335/FULLTEXT02.pdf
 
And I think it describes the best known energy-efficient algorithm for Transactional memory, but I think it is not good enough, since notice how, for 64 cores, the Beta parameter can be 16 cores. So I think I am smart, and I have just invented a much more energy-efficient and powerful scalable fast Mutex, and I have also just invented scalable RWLocks that are starvation-free and fair; read about them in my writing and thoughts below:
 
More about deadlocks and lock-based systems and more..
 
I have just read the following from a software engineer from Quebec, Canada:
 
A deadlock-detecting mutex
 
https://faouellet.github.io/ddmutex/
 
And I have quickly understood his algorithm, but I think his algorithm is not efficient at all, since we can detect whether a graph has a cycle (a non-trivial strongly connected component) in O(V+E) time, so the algorithm above of the engineer from Quebec, Canada takes around O(n*(V+E)) time, so it is not good.
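
For clarity, here is a minimal C++ sketch of that O(V+E) check (it is just the textbook depth-first search over a wait-for graph, it is not the DelphiConcurrent code mentioned below): a deadlock shows up as a cycle, i.e. a back edge found during the search.

// O(V+E) cycle detection in a wait-for graph (illustration only).
#include <vector>

enum class Color { White, Grey, Black };

// Returns true if a cycle is reachable from node u.
static bool dfs_has_cycle(int u, const std::vector<std::vector<int>>& adj,
                          std::vector<Color>& color) {
    color[u] = Color::Grey;                        // u is on the current DFS path
    for (int v : adj[u]) {
        if (color[v] == Color::Grey) return true;  // back edge: cycle, hence deadlock
        if (color[v] == Color::White && dfs_has_cycle(v, adj, color)) return true;
    }
    color[u] = Color::Black;
    return false;
}

// adj[i] lists the nodes (threads/locks) that node i waits for.
bool has_deadlock(const std::vector<std::vector<int>>& adj) {
    std::vector<Color> color(adj.size(), Color::White);
    for (int u = 0; u < static_cast<int>(adj.size()); ++u)
        if (color[u] == Color::White && dfs_has_cycle(u, adj, color))
            return true;
    return false;
}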
 
So a much better way is to use my following way of detecting deadlocks:
 
DelphiConcurrent and FreepascalConcurrent are here
 
Read more here in my website:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
And i will soon enhance much more DelphiConcurrent and FreepascalConcurrent to support both Communication deadlocks
and Resource deadlocks.
 
About Transactional memory and locks..
 
 
I have just read the following paper about Transactional memory and locks:
 
http://sunnydhillon.net/assets/docs/concurrency-tm.pdf
 
 
I don't agree with the above paper; read my following thoughts to understand why:
 
I have just invented a new powerful scalable fast mutex, and it has the following characteristics:
 
1- Starvation-free
2- Tunable fairness
3- It keeps its cache coherence traffic efficient and very low
4- Very good fast path performance
5- It has good preemption tolerance
6- It is faster than the scalable MCS lock
7- It solves the problem of lock convoying
 
So my new invention also solves the following problem:
 
The convoy phenomenon
 
https://blog.acolyer.org/2019/07/01/the-convoy-phenomenon/
 
 
And here is my other new invention, a scalable RWLock that works across processes and threads and is starvation-free and fair; I will soon enhance it much more and it will become really powerful:
 
https://sites.google.com/site/scalable68/scalable-rwlock-that-works-accross-processes-and-threads
 
And about Lock-free versus Lock, read my following post:
 
https://groups.google.com/forum/#!topic/comp.programming.threads/F_cF4ft1Qic
 
And about deadlocks, here is also how I have solved them, and I will soon enhance DelphiConcurrent and FreepascalConcurrent much more:
 
DelphiConcurrent and FreepascalConcurrent are here
 
Read more here in my website:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
 
So I think that, with my above scalable fast mutex and my scalable RWLocks that are starvation-free and fair, and by reading the following about the composability of lock-based systems, you will notice that lock-based systems are still useful.
 
 
"About composability of lock-based systems..
 
 
Design your systems to be composable. Among the more galling claims of
the detractors of lock-based systems is the notion that they are somehow
uncomposable: "Locks and condition variables do not support modular
programming," reads one typically brazen claim, "building large programs
by gluing together smaller programs[:] locks make this impossible."9 The
claim, of course, is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and operating
systems into larger systems that remain entirely unaware of lower-level
locking.
 
There are two ways to make lock-based systems completely composable, and
each has its own place. First (and most obviously), one can make locking
entirely internal to the subsystem. For example, in concurrent operating
systems, control never returns to user level with in-kernel locks held;
the locks used to implement the system itself are entirely behind the
system call interface that constitutes the interface to the system. More
generally, this model can work whenever a crisp interface exists between
software components: as long as control flow is never returned to the
caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in
per-instance state, and it must be up to consumers of the subsystem to
assure that they do not access their instance in parallel. By leaving
locking up to the client of the subsystem, the subsystem itself can be
used concurrently by different subsystems and in different contexts. A
concrete example of this is the AVL tree implementation used extensively
in the Solaris kernel. As with any balanced binary tree, the
implementation is sufficiently complex to merit componentization, but by
not having any global state, the implementation may be used concurrently
by disjoint subsystems—the only constraint is that manipulation of a
single AVL tree instance must be serialized."
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
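
To illustrate the second approach of the quote above with a small C++ sketch (the names here are invented for the example, they are not from the article): the subsystem holds no global state and takes no locks of its own, and each consumer serializes access to its own instance.

// Subsystem with no global state and no internal locking (illustration only).
#include <mutex>
#include <set>

template <typename T>
class OrderedIndex {
    std::set<T> items_;            // per-instance state only
public:
    void insert(const T& v) { items_.insert(v); }
    bool contains(const T& v) const { return items_.count(v) != 0; }
};

// The consumer owns one instance and one lock, so disjoint consumers can use
// the subsystem concurrently without the subsystem knowing about threads.
struct Consumer {
    std::mutex m;                  // per-instance serialization chosen by the client
    OrderedIndex<int> index;

    void record(int v) {
        std::lock_guard<std::mutex> g(m);
        index.insert(v);
    }
};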
 
 
About mathematics and about abstraction..
 
 
I think my specialization is also that I have invented many software algorithms and software scalable algorithms, and I am still inventing other scalable algorithms and algorithms. Those scalable algorithms and algorithms that I have invented are like mathematical theorems that you prove and present at a higher level of abstraction, but not only that: they are presented in the form of higher-level software abstractions that hide the complexity of my scalable algorithms and algorithms, and that is the part that interests me the most. For example, notice how I am constructing a higher-level abstraction in my following tutorial, as a methodology that first permits modeling the synchronization objects of parallel programs with logic primitives (If-Then-OR-AND) so that they are easy to translate into Petri nets, which in turn makes it possible to detect deadlocks in parallel programs. Please take a look at it in my following web link, because this tutorial of mine is a way of learning by higher-level abstraction:
 
 
How to analyse parallel applications with Petri Nets
 
 
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
 
So notice that my methodology is a generalization that solves communication deadlocks and resource deadlocks in parallel programs.
 
1- Communication deadlocks that result from incorrect use of
event objects or condition variables (i.e. wait-notify
synchronization).
 
 
2- Resource deadlocks, a common kind of deadlock in which a set of
threads blocks forever because each thread in the set is waiting to
acquire a lock held by another thread in the set (see the small sketch
just below).
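
Here is a tiny C++ illustration of that second kind (it is not taken from my tutorial, it is just the classic two-lock example): each thread holds one mutex and blocks forever waiting for the mutex held by the other.

// Classic resource deadlock: locks acquired in opposite orders (illustration only).
#include <chrono>
#include <mutex>
#include <thread>

std::mutex A, B;

int main() {
    std::thread t1([] {
        std::lock_guard<std::mutex> la(A);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::lock_guard<std::mutex> lb(B);   // waits for t2, which is waiting for A
    });
    std::thread t2([] {
        std::lock_guard<std::mutex> lb(B);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::lock_guard<std::mutex> la(A);   // waits for t1, which is waiting for B
    });
    t1.join();                               // with the sleeps, both threads almost
                                             // always end up blocked forever, so the
    t2.join();                               // joins never return: a deadlock
}

The classic cure is to always acquire the locks in one fixed global order, or in C++ to take them together with std::scoped_lock(A, B), which acquires both without deadlocking.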
 
 
This is what interests me in mathematics: I want to work efficiently in mathematics at a much higher level of abstraction. I will give you an example of what I am doing in mathematics so that you understand: look at how I am implementing mathematics as software parallel conjugate gradient linear system solvers that scale very well, and I am presenting them at a higher level of abstraction; this is how I am abstracting their mathematics. Read the following about it to notice it:
 
About SOR and Conjugate gradient mathematical methods..
 
I have just looked at SOR (the Successive Over-Relaxation method), and I think it is much less powerful than the Conjugate Gradient method; read the following to notice it:
 
COMPARATIVE PERFORMANCE OF THE CONJUGATE GRADIENT AND SOR METHODS
FOR COMPUTATIONAL THERMAL HYDRAULICS
 
https://inis.iaea.org/collection/NCLCollectionStore/_Public/19/055/19055644.pdf?r=1&r=1
 
 
This is why i have implemented in both C++ and Delphi my Parallel Conjugate Gradient Linear System Solver Library that scales very well, read my following thoughts about it to understand more:
 
 
About the convergence properties of the conjugate gradient method
 
The conjugate gradient method can theoretically be viewed as a direct method, as it produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix, in the absence of round-off error. However, the conjugate gradient method is unstable with respect to even small perturbations, e.g., most directions are not in practice conjugate, and the exact solution is never obtained. Fortunately, the conjugate gradient method can be used as an iterative method as it provides monotonically improving approximations to the exact solution, which may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear and its speed is determined by the condition number κ(A) of the system matrix A: the larger is κ(A), the slower the improvement.
 
Read more here:
 
http://pages.stat.wisc.edu/~wahba/stat860public/pdf1/cj.pdf
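
For reference, the standard textbook bound (it is a general result, not something specific to my library) makes that dependence on the condition number explicit. In the A-norm:

||x_k - x*||_A <= 2 * ( (sqrt(kappa(A)) - 1) / (sqrt(kappa(A)) + 1) )^k * ||x_0 - x*||_A

So the larger kappa(A) is, the closer the contraction factor is to 1 and the slower the convergence, which matches the statement above.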
 
 
So i think my Conjugate Gradient Linear System Solver Library
that scales very well is still very useful, read about it
in my writing below:
 
Read the following interesting news:
 
The finite element method finds its place in games
 
Read more here:
 
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fhpc.developpez.com%2Factu%2F288260%2FLa-methode-des-elements-finis-trouve-sa-place-dans-les-jeux-AMD-propose-la-bibliotheque-FEMFX-pour-une-simulation-en-temps-reel-des-deformations%2F
 
But you have to be aware that the finite element method uses the Conjugate Gradient method for the solution of finite element problems; read here to notice it:
 
Conjugate Gradient Method for Solution of Large Finite Element Problems on CPU and GPU
 
https://pdfs.semanticscholar.org/1f4c/f080ee622aa02623b35eda947fbc169b199d.pdf
 
 
This is why i have also designed and implemented my Parallel Conjugate Gradient Linear System Solver library that scales very well,
here it is:
 
My Parallel C++ Conjugate Gradient Linear System Solver Library
that scales very well version 1.76 is here..
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a Parallel implementation of a Conjugate Gradient Dense Linear System Solver that is NUMA-aware and cache-aware and that scales very well, and it also contains a Parallel implementation of a Conjugate Gradient Sparse Linear System Solver that is cache-aware and that scales very well.
 
Sparse linear system solvers are ubiquitous in high performance computing (HPC) and are often the most computationally intensive parts of scientific computing codes. A few of the many applications relying on sparse linear solvers include fusion energy simulation, space weather simulation, climate modeling, environmental modeling, the finite element method, and large-scale reservoir simulations to enhance oil recovery in the oil and gas industry.
 
Conjugate Gradient is known to converge to the exact solution in n steps for a matrix of size n, and was historically first seen as a direct method because of this. However, after a while people figured out that it works really well if you just stop the iteration much earlier - often you will get a very good approximation after much fewer than n steps. In fact, we can analyze how fast Conjugate gradient converges. The end result is that Conjugate gradient is used as an iterative method for large linear systems today.
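
For readers who want to see the bare algorithm, here is a minimal serial C++ sketch of the textbook Conjugate Gradient iteration (it is only an illustration; it is not the parallel, NUMA-aware and cache-aware code of my library). It solves A x = b for a symmetric positive-definite matrix A and stops early once the residual is small, exactly in the iterative spirit described above.

// Textbook Conjugate Gradient for a dense SPD system (illustration only).
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

static Vec matvec(const Mat& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

// In exact arithmetic CG terminates in at most n steps; in practice it is
// stopped as soon as the residual norm drops below the tolerance.
Vec conjugate_gradient(const Mat& A, const Vec& b, double tol = 1e-10) {
    Vec x(b.size(), 0.0), r = b, p = r;
    double rs_old = dot(r, r);
    for (std::size_t it = 0; it < b.size(); ++it) {
        Vec Ap = matvec(A, p);
        double alpha = rs_old / dot(p, Ap);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rs_new = dot(r, r);
        if (std::sqrt(rs_new) < tol) break;      // early stop: iterative use of CG
        double beta = rs_new / rs_old;
        for (std::size_t i = 0; i < p.size(); ++i) p[i] = r[i] + beta * p[i];
        rs_old = rs_new;
    }
    return x;
}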
 
Please download the zip file and read the readme file inside the zip to know how to use it.
 
You can download it from:
 
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
--
 
As you have noticed, I have just written above about my Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well, but here are my Parallel Delphi and FreePascal Conjugate Gradient Linear System Solver Libraries that scale very well:
 
Parallel implementation of Conjugate Gradient Dense Linear System solver library that is NUMA-aware and cache-aware that scales very well
 
https://sites.google.com/site/scalable68/scalable-parallel-implementation-of-conjugate-gradient-dense-linear-system-solver-library-that-is-numa-aware-and-cache-aware
 
PARALLEL IMPLEMENTATION OF CONJUGATE GRADIENT SPARSE LINEAR SYSTEM SOLVER LIBRARY THAT SCALES VERY WELL
 
https://sites.google.com/site/scalable68/scalable-parallel-implementation-of-conjugate-gradient-sparse-linear-system-solver
 
 
Thank you,
Amine Moulay Ramdane.
"Christian Hanné" <the.hanne@gmail.com>: Feb 14 03:45AM +0100

> so i think i am smart and i have just invented a much more energy efficient and powerful scalable fast Mutex and i have also just invented scalable RWLocks that are starvation-free and fair, read about them in my below writing and thoughts:
 
RW-Locks are never fair and starvation-free.
Amine Moulay Ramdane <aminer68@gmail.com>: Feb 17 11:44AM -0800

Hello..
 
 
More of my philosophy about the exponential progress of our humanity..
 
I am a white arab, and i think i am smart since i have also
invented many scalable algorithms and algorithms..
 
I think that the white supremacists and neo-nazis and many humans are not understanding correctly the exponential progress of our humanity. Notice in my following news that the exponential progress of our humanity is now going faster and faster: look for example at the new discovery below that will permit computers and phones to run thousands of times faster, and you will notice that it will make artificial intelligence and the like much more powerful. So I think we will soon be able to enhance our human genetics much more, and will be able to do much more than that, because the exponential progress of our humanity will soon make us really much more powerful, so we have to be much more optimistic. So you have to read my following thoughts below about the exponential progress of our humanity, and you have to read my following news below:
 
More political philosophy about: Are humans smart ?
 
So are humans smart ?
 
 
I am very positive about humans, and i think that humans are smart, since we are also advancing by the following process of Swarm Intelligence that is so efficient, read about it here:
 
 
How Swarm Intelligence Is Making Simple Tech Much Smarter
 
 
https://singularityhub.com/2018/02/08/how-swarm-intelligence-is-making-simple-tech-much-smarter/
 
 
So i think that "collectively", we humans, we are smart, and i think that you will soon notice it by the following exponential progress of our humanity, read about in my following thoughts:
 
 
More political philosophy about do we have to be pessimistic..
 
 
I think it is a beautiful day in history, and how can you understand it ?
 
 
First you will notice that the exponential progress of our humanity is going very fast and is going faster and faster , look at the following video to understand it:
 
 
Exponential Progress: Can We Expect Mind-Blowing Changes In The Near Future
 
 
https://www.youtube.com/watch?v=HfM5HXpfnJQ&t=144s
 
 
So as you are noticing that you have to take this exponential progress of our humanity into account and be much more optimistic.
 
 
And you have to know that this exponential progress of our humanity also comes from the following process:
 
 
How Swarm Intelligence Is Making Simple Tech Much Smarter
 
 
https://singularityhub.com/2018/02/08/how-swarm-intelligence-is-making-simple-tech-much-smarter/
 
 
And here is my new poem called: "This beautiful exponential progress is not a distress"
 
 
--
 
 
This beautiful exponential progress is not a distress
 
 
Because it is a beautiful maturity but not the adolescence
 
 
This beautiful exponential progress is not a distress
 
 
And we have to beautifully tune it with the beautiful consensus
 
 
This beautiful exponential progress is not a distress
 
 
So let us not be just guesses but technicality and science
 
 
This beautiful exponential progress is not a distress
 
 
So let us be a beautiful expressiveness that is not helpless
 
 
This beautiful exponential progress is not a distress
 
 
So let us take together this beautiful breakfast
 
 
This beautiful exponential progress is not a distress
 
 
Since you are also like my so beautiful Princess
 
 
This beautiful exponential progress is not a distress
 
 
So let us be a beautiful presence for present and future acceptance
 
 
--
 
 
Read my following news:
 
 
With the following new discovery computers and phones could run thousands of times faster..
 
Prof Alan Dalton in the School of Mathematical and Physical Sciences at the University of Sussex, said:
 
"We're mechanically creating kinks in a layer of graphene. It's a bit like nano-origami.
 
"Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future.
 
"This kind of technology -- "straintronics" using nanomaterials as opposed to electronics -- allows space for more chips inside any device. Everything we want to do with computers -- to speed them up -- can be done by crinkling graphene like this."
 
Dr Manoj Tripathi, Research Fellow in Nano-structured Materials at the University of Sussex and lead author on the paper, said:
 
"Instead of having to add foreign materials into a device, we've shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate."
 
The development is a greener, more sustainable technology. Because no additional materials need to be added, and because this process works at room temperature rather than high temperature, it uses less energy to create.
 
Read more here:
 
https://www.sciencedaily.com/releases/2021/02/210216100141.htm
 
 
AI system optimally allocates workloads across thousands of servers to
cut costs, save energy
 
Read more here:
 
https://techxplore.com/news/2019-08-ai-optimally-allocates-workloads-thousands.html
 
It is related to the following article, i invite you to read it:
 
Why Energy Is A Big And Rapidly Growing Problem For Data Centers
 
It's either a breakthrough in our compute engines, or we need to get
deadly serious about doubling the number of power plants on the planet.
 
Read more here:
 
https://www.forbes.com/sites/forbestechcouncil/2017/12/15/why-energy-is-a-big-and-rapidly-growing-problem-for-data-centers/#1d126295a307
 
And it is related to my following thoughts, i invite you to read them:
 
About how to beat Moore's Law and about Energy efficiency..
 
I am a white arab and i am also an inventor of many scalable algorithms
and algorithms, and now i will talk about: "How to beat Moore's Law ?"
and more about: "Energy efficiency"..
 
How to beat Moore's Law ?
 
I think with the following discovery, Graphene can finally be used in
CPUs, and it is a scale out method, read about the following discovery
and you will notice it:
 
New Graphene Discovery Could Finally Punch the Gas Pedal, Drive Faster CPUs
 
Read more here:
 
https://www.extremetech.com/computing/267695-new-graphene-discovery-could-finally-punch-the-gas-pedal-drive-faster-cpus
 
The scale out method above with Graphene is very interesting, and here
is the other scale up method with multicores and parallelism:
 
Beating Moore's Law: Scaling Performance for Another Half-Century
 
Read more here:
 
https://www.infoworld.com/article/3287025/beating-moore-s-law-scaling-performance-for-another-half-century.html
 
Also read the following:
 
"Also Modern programing environments contribute to the problem of
software bloat by placing ease of development and portable code above
speed or memory usage. While this is a sound business model in a
commercial environment, it does not make sense where IT resources are
constrained. Languages such as Java, C-Sharp, and Python have opted for
code portability and software development speed above execution speed
and memory usage, while modern data storage and transfer standards such
as XML and JSON place flexibility and readability above efficiency.
 
The Army can gain significant performance improvements with existing
hardware by treating software and operating system efficiency as a key
performance parameter with measurable criteria for CPU load and memory
footprint. The Army should lead by making software efficiency a priority
for the applications it develops. Capability Maturity Model Integration
(CMMI) version 1.3 for development processes should be adopted across
Army organizations, with automated code analysis and profiling being
integrated into development. Additionally, the Army should shape the
operating system market by leveraging its buying power to demand a
secure, robust, and efficient operating system for devices. These
metrics should be implemented as part of the Common Operating
Environment (COE)."
 
And about improved Algorithms:
 
Hardware improvements mean little if software cannot effectively use the
resources available to it. The Army should shape future software
algorithms by funding basic research on improved software algorithms to
meet its specific needs. The Army should also search for new algorithms
and techniques which can be applied to meet specific needs and develop a
learning culture within its software community to disseminate this
information."
 
Read the following:
 
https://smallwarsjournal.com/jrnl/art/overcoming-death-moores-law-role-software-advances-and-non-semiconductor-technologies
 
More about Energy efficiency..
 
You have to be aware that parallelization of the software
can lower power consumption, and here is the formula
that permits you to calculate the power consumption of
"parallel" software programs:
 
Power consumption of the total cores = (The number of cores) * (1/(Parallel speedup))^3 * (Power consumption of the single core).
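
For example, plugging numbers into the formula as stated: with 4 cores and a parallel speedup of 4, the total power is 4 * (1/4)^3 * P = P/16, that is, one sixteenth of the power consumption of the single core.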
 
Also read the following about energy efficiency:
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not
always equate to energy efficiency. Compiled languages like C, C++, Rust,
and Ada ranked as some of the most energy efficient languages out there,
and Java and FreePascal are also good at Energy efficiency.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow, relative to CPUs
 
And "memory" usage efficiency is important for mobile devices.
 
So Delphi and FreePascal compilers are also still "useful" for mobile
devices, because Delphi and FreePascal are good if you are considering
time and memory or energy and memory, and the following pascal benchmark
was done with FreePascal, and the benchmark shows that C, Go and Pascal
do rather better if you're considering languages based on time and
memory or energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
Using artificial intelligence to find new uses for existing medications
 
Read more here:
 
https://techxplore.com/news/2021-01-artificial-intelligence-medications.html
 
New research shows machine learning could lop a year off technology
design cycle
 
Read more here:
 
https://techxplore.com/news/2021-01-machine-lop-year-technology.html
 
Accelerating AI computing to the speed of light
 
Read more here:
 
https://techxplore.com/news/2021-01-ai.html
 
At a major AI research conference, one researcher laid out how existing
AI techniques might be used to analyze causal relationships in data.
 
Read more here:
 
https://www.technologyreview.com/2019/05/08/135454/deep-learning-could-reveal-why-the-world-works-the-way-it-does/
 
Also read the following news:
 
Researchers engineer a tiny antibody capable of neutralizing the coronavirus
 
Read more here:
 
https://phys.org/news/2021-02-tiny-antibody-capable-neutralizing-coronavirus.html?fbclid=IwAR0B7TKas-la17aRdsYiZVLw7nYwrLlKF3ldkiduV3W0oTGwKDGPAnpHcrE
 
 
Scientists uncover potential antiviral treatment for COVID-19
 
Read more here:
 
https://medicalxpress.com/news/2021-02-scientists-uncover-potential-antiviral-treatment.html?fbclid=IwAR18LKpb4CIG5lhhe4XD0Rvr6is_-KaraqfitniXEoFMJiyOgdsMan-bRgQ
 
 
Computer model makes strides in search for COVID-19 treatments
 
Read more here:
 
https://medicalxpress.com/news/2021-02-covid-treatments.html?fbclid=IwAR1AYnulQoHxXifEkP_qQWMOrZDdFAw4HoWbWwPPP__LEkvyGKpfb9jWNGk
 
Look at this interesting video:
 
Are Hydrogen-Powered Cars The Future?
 
https://www.youtube.com/watch?v=NfkfLRiYgac&fbclid=IwAR2Yh84hKWElluUoqsApfyQQkbE578PzQHqhCa9vsUDRbc2h0eqnlc-JTF4
 
More about Protein Folding and more of my news..
 
Look at the following interesting video:
 
Has Protein Folding Been Solved?
 
https://www.youtube.com/watch?v=yhJWAdZl-Ck
 
And read the following news about Protein Folding:
 
DeepMind may just have cracked one of the grandest challenges in
biology. One that rivals the discovery of DNA's double helix. It could
change biomedicine, drug discovery, and vaccine development forever.
 
Read more here:
 
https://singularityhub.com/2020/12/15/deepminds-alphafold-is-close-to-solving-one-of-biologys-greatest-challenges/
 
 
Here is a new important discovery and more news..
 
Solving complex physics problems at lightning speed
 
"A calculation so complex that it takes twenty years to complete on a
powerful desktop computer can now be done in one hour on a regular
laptop. Physicists have now designed a new method to calculate the
properties of atomic nuclei incredibly quickly."
 
Read more here:
 
https://www.sciencedaily.com/releases/2021/02/210201090810.htm
 
 
Why is MIT's new "liquid" AI a breakthrough innovation?
 
Read more here:
 
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F312174%2FPourquoi-la-nouvelle-IA-liquide-de-MIT-est-elle-une-innovation-revolutionnaire-Elle-apprend-continuellement-de-son-experience-du-monde%2F
 
And here is Ramin Hasani, Postdoctoral Associate (he is an Iranian):
 
https://www.csail.mit.edu/person/ramin-hasani
 
And here he is:
 
http://www.raminhasani.com/
 
 
He is the study's lead author of the following new study:
 
New 'Liquid' AI Learns Continuously From Its Experience of the World
 
Read more here:
 
https://singularityhub.com/2021/01/31/new-liquid-ai-learns-as-it-experiences-the-world-in-real-time/
 
 
And read the following interesting news:
 
 
Global race for artificial intelligence: the EU continues to fall behind
the leading US and China
 
Read more here:
 
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F312189%2FCourse-mondiale-a-l-intelligence-artificielle-l-UE-continue-de-se-laisser-distancer-par-les-Etats-Unis-premiers-en-la-matiere-et-par-la-Chine-qui-se-rapproche-du-sommet-a-grande-vitesse%2F
 
 
And read my following news:
 
More precision about Metformin and COVID 19 and more..
 
I have just read the following study about Metformin and COVID 19,
i invite you to read it:
 
https://www.thelancet.com/action/showPdf?pii=S2666-7568%2820%2930033-7
 
I think that the above study was not yet precise, so it can not be
generalized.
 
Here is the study that can be generalized for age, sex, race, obesity,
and hypertension or chronic kidney disease and heart failure:
 
"Use of the diabetes drug metformin -- before a diagnosis of COVID-19 --
is associated with a threefold decrease in mortality in COVID-19
patients with Type 2 diabetes, according to a new study. Diabetes is a
significant comorbidity for COVID-19. This beneficial effect remained,
even after correcting for age, sex, race, obesity, and hypertension or
chronic kidney disease and heart failure.
 
"This beneficial effect remained, even after correcting for age, sex,
race, obesity, and hypertension or chronic kidney disease and heart
failure," said Anath Shalev, M.D., director of UAB's Comprehensive
Diabetes Center and leader of the study.
 
"Since similar results have now been obtained in different populations
from around the world -- including China, France and a
Amine Moulay Ramdane <aminer68@gmail.com>: Feb 17 10:58AM -0800

Hello...
 
 
With the following discovery computers and phones could run thousands of times faster..
 
Prof Alan Dalton in the School of Mathematical and Physical Sciences at the University of Sussex, said:
 
"We're mechanically creating kinks in a layer of graphene. It's a bit like nano-origami.
 
"Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future.
 
"This kind of technology -- "straintronics" using nanomaterials as opposed to electronics -- allows space for more chips inside any device. Everything we want to do with computers -- to speed them up -- can be done by crinkling graphene like this."
 
Dr Manoj Tripathi, Research Fellow in Nano-structured Materials at the University of Sussex and lead author on the paper, said:
 
"Instead of having to add foreign materials into a device, we've shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate."
 
The development is a greener, more sustainable technology. Because no additional materials need to be added, and because this process works at room temperature rather than high temperature, it uses less energy to create.
 
Read more here:
 
https://www.sciencedaily.com/releases/2021/02/210216100141.htm
 
 
AI system optimally allocates workloads across thousands of servers to
cut costs, save energy
 
Read more here:
 
https://techxplore.com/news/2019-08-ai-optimally-allocates-workloads-thousands.html
 
It is related to the following article, i invite you to read it:
 
Why Energy Is A Big And Rapidly Growing Problem For Data Centers
 
It's either a breakthrough in our compute engines, or we need to get
deadly serious about doubling the number of power plants on the planet.
 
Read more here:
 
https://www.forbes.com/sites/forbestechcouncil/2017/12/15/why-energy-is-a-big-and-rapidly-growing-problem-for-data-centers/#1d126295a307
 
And it is related to my following thoughts, i invite you to read them:
 
About how to beat Moore's Law and about Energy efficiency..
 
I am a white arab and i am also an inventor of many scalable algorithms
and algorithms, and now i will talk about: "How to beat Moore's Law ?"
and more about: "Energy efficiency"..
 
How to beat Moore's Law ?
 
I think with the following discovery, Graphene can finally be used in
CPUs, and it is a scale out method, read about the following discovery
and you will notice it:
 
New Graphene Discovery Could Finally Punch the Gas Pedal, Drive Faster CPUs
 
Read more here:
 
https://www.extremetech.com/computing/267695-new-graphene-discovery-could-finally-punch-the-gas-pedal-drive-faster-cpus
 
The scale out method above with Graphene is very interesting, and here
is the other scale up method with multicores and parallelism:
 
Beating Moore's Law: Scaling Performance for Another Half-Century
 
Read more here:
 
https://www.infoworld.com/article/3287025/beating-moore-s-law-scaling-performance-for-another-half-century.html
 
Also read the following:
 
"Also Modern programing environments contribute to the problem of
software bloat by placing ease of development and portable code above
speed or memory usage. While this is a sound business model in a
commercial environment, it does not make sense where IT resources are
constrained. Languages such as Java, C-Sharp, and Python have opted for
code portability and software development speed above execution speed
and memory usage, while modern data storage and transfer standards such
as XML and JSON place flexibility and readability above efficiency.
 
The Army can gain significant performance improvements with existing
hardware by treating software and operating system efficiency as a key
performance parameter with measurable criteria for CPU load and memory
footprint. The Army should lead by making software efficiency a priority
for the applications it develops. Capability Maturity Model Integration
(CMMI) version 1.3 for development processes should be adopted across
Army organizations, with automated code analysis and profiling being
integrated into development. Additionally, the Army should shape the
operating system market by leveraging its buying power to demand a
secure, robust, and efficient operating system for devices. These
metrics should be implemented as part of the Common Operating
Environment (COE)."
 
And about improved Algorithms:
 
Hardware improvements mean little if software cannot effectively use the
resources available to it. The Army should shape future software
algorithms by funding basic research on improved software algorithms to
meet its specific needs. The Army should also search for new algorithms
and techniques which can be applied to meet specific needs and develop a
learning culture within its software community to disseminate this
information."
 
Read the following:
 
https://smallwarsjournal.com/jrnl/art/overcoming-death-moores-law-role-software-advances-and-non-semiconductor-technologies
 
More about Energy efficiency..
 
You have to be aware that parallelization of the software
can lower power consumption, and here is the formula
that permits you to calculate the power consumption of
"parallel" software programs:
 
Power consumption of the total cores = (The number of cores) * (1/(Parallel speedup))^3 * (Power consumption of the single core).
 
Also read the following about energy efficiency:
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not
always equate to energy efficiency. Compiled languages like C, C++, Rust,
and Ada ranked as some of the most energy efficient languages out there,
and Java and FreePascal are also good at Energy efficiency.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow, relative to CPUs
 
And "memory" usage efficiency is important for mobile devices.
 
So Delphi and FreePascal compilers are also still "useful" for mobile
devices, because Delphi and FreePascal are good if you are considering
time and memory or energy and memory, and the following pascal benchmark
was done with FreePascal, and the benchmark shows that C, Go and Pascal
do rather better if you're considering languages based on time and
memory or energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
Using artificial intelligence to find new uses for existing medications
 
Read more here:
 
https://techxplore.com/news/2021-01-artificial-intelligence-medications.html
 
New research shows machine learning could lop a year off technology
design cycle
 
Read more here:
 
https://techxplore.com/news/2021-01-machine-lop-year-technology.html
 
Accelerating AI computing to the speed of light
 
Read more here:
 
https://techxplore.com/news/2021-01-ai.html
 
At a major AI research conference, one researcher laid out how existing
AI techniques might be used to analyze causal relationships in data.
 
Read more here:
 
https://www.technologyreview.com/2019/05/08/135454/deep-learning-could-reveal-why-the-world-works-the-way-it-does/
 
Also read the following news:
 
Researchers engineer a tiny antibody capable of neutralizing the coronavirus
 
Read more here:
 
https://phys.org/news/2021-02-tiny-antibody-capable-neutralizing-coronavirus.html?fbclid=IwAR0B7TKas-la17aRdsYiZVLw7nYwrLlKF3ldkiduV3W0oTGwKDGPAnpHcrE
 
 
Scientists uncover potential antiviral treatment for COVID-19
 
Read more here:
 
https://medicalxpress.com/news/2021-02-scientists-uncover-potential-antiviral-treatment.html?fbclid=IwAR18LKpb4CIG5lhhe4XD0Rvr6is_-KaraqfitniXEoFMJiyOgdsMan-bRgQ
 
 
Computer model makes strides in search for COVID-19 treatments
 
Read more here:
 
https://medicalxpress.com/news/2021-02-covid-treatments.html?fbclid=IwAR1AYnulQoHxXifEkP_qQWMOrZDdFAw4HoWbWwPPP__LEkvyGKpfb9jWNGk
 
Look at this interesting video:
 
Are Hydrogen-Powered Cars The Future?
 
https://www.youtube.com/watch?v=NfkfLRiYgac&fbclid=IwAR2Yh84hKWElluUoqsApfyQQkbE578PzQHqhCa9vsUDRbc2h0eqnlc-JTF4
 
More about Protein Folding and more of my news..
 
Look at the following interesting video:
 
Has Protein Folding Been Solved?
 
https://www.youtube.com/watch?v=yhJWAdZl-Ck
 
And read the following news about Protein Folding:
 
DeepMind may just have cracked one of the grandest challenges in
biology. One that rivals the discovery of DNA's double helix. It could
change biomedicine, drug discovery, and vaccine development forever.
 
Read more here:
 
https://singularityhub.com/2020/12/15/deepminds-alphafold-is-close-to-solving-one-of-biologys-greatest-challenges/
 
 
Here is a new important discovery and more news..
 
Solving complex physics problems at lightning speed
 
"A calculation so complex that it takes twenty years to complete on a
powerful desktop computer can now be done in one hour on a regular
laptop. Physicists have now designed a new method to calculate the
properties of atomic nuclei incredibly quickly."
 
Read more here:
 
https://www.sciencedaily.com/releases/2021/02/210201090810.htm
 
 
Why is MIT's new "liquid" AI a breakthrough innovation?
 
Read more here:
 
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F312174%2FPourquoi-la-nouvelle-IA-liquide-de-MIT-est-elle-une-innovation-revolutionnaire-Elle-apprend-continuellement-de-son-experience-du-monde%2F
 
And here is Ramin Hasani, Postdoctoral Associate (he is an Iranian):
 
https://www.csail.mit.edu/person/ramin-hasani
 
And here he is:
 
http://www.raminhasani.com/
 
 
He is the study's lead author of the following new study:
 
New 'Liquid' AI Learns Continuously From Its Experience of the World
 
Read more here:
 
https://singularityhub.com/2021/01/31/new-liquid-ai-learns-as-it-experiences-the-world-in-real-time/
 
 
And read the following interesting news:
 
 
Global race for artificial intelligence: the EU continues to fall behind
the leading US and China
 
Read more here:
 
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F312189%2FCourse-mondiale-a-l-intelligence-artificielle-l-UE-continue-de-se-laisser-distancer-par-les-Etats-Unis-premiers-en-la-matiere-et-par-la-Chine-qui-se-rapproche-du-sommet-a-grande-vitesse%2F
 
 
And read my following news:
 
More precision about Metformin and COVID 19 and more..
 
I have just read the following study about Metformin and COVID 19,
i invite you to read it:
 
https://www.thelancet.com/action/showPdf?pii=S2666-7568%2820%2930033-7
 
I think that the above study was not yet precise, so it can not be
generalized.
 
Here is the study that can be generalized for age, sex, race, obesity,
and hypertension or chronic kidney disease and heart failure:
 
"Use of the diabetes drug metformin -- before a diagnosis of COVID-19 --
is associated with a threefold decrease in mortality in COVID-19
patients with Type 2 diabetes, according to a new study. Diabetes is a
significant comorbidity for COVID-19. This beneficial effect remained,
even after correcting for age, sex, race, obesity, and hypertension or
chronic kidney disease and heart failure.
 
"This beneficial effect remained, even after correcting for age, sex,
race, obesity, and hypertension or chronic kidney disease and heart
failure," said Anath Shalev, M.D., director of UAB's Comprehensive
Diabetes Center and leader of the study.
 
"Since similar results have now been obtained in different populations
from around the world -- including China, France and a UnitedHealthcare
analysis -- this suggests that the observed reduction in mortality risk
associated with metformin use in subjects with Type 2 diabetes and
COVID-19 might be generalizable," Shalev said.
 
After controlling for other covariates, age, sex and metformin use
emerged as independent factors affecting COVID-19-related mortality.
Interestingly, even after controlling for all these other covariates,
death was significantly less likely -- with an odds ratio of 0.33 -- for
Type 2 diabetes subjects taking metformin, compared with those who did
not take metformin."
 
Read more here:
 
https://www.sciencedaily.com/releases/2021/01/210114164004.htm
 
About Metformin and COVID 19 and more..
 
People who suffer from diabetes are at greater risk of death from
diseases like COVID 19. Researchers found that those taking metformin (a
cheap and popular anti-diabetes drug) are protected from COVID 19.
 
Future studies will have to explore how metformin might confer these
protective effects, provide a careful risk benefit assessment and
determine whether the indications for metformin treatment should be
broadened in the face of the ongoing COVID-19 pandemic.
 
Read more here:
 
https://www.medrxiv.org/content/10.1101/2020.07.29.20164020v1
 
About Metformin and Statin and cancers..
 
Metformin found not to prevent prostate cancer:
 
Read more here:
 
https://www.renalandurologynews.com/home/news/urology/prostate-cancer/metformin-found-not-to-prevent-prostate-cancer/
 
 
But Metformin, Statin Combination Reduces Mortality in High-Risk
Prostate Cancer
 
Read more here to notice:
 
"Patients who received the combination also experienced a reduction in
risk for all-cause mortality (32%) and prostate cancer mortality (54%).
However, metformin alone did not have any significant effects on
all-cause and prostate cancer mortality.
 
https://www.curetoday.com/view/metformin-statin-combination-reduces-mortality-in-highrisk-prostate-cancer
 
And notice that the longer half-lives of rosuvastatin, atorvastatin,
pitavastatin, and pravastatin allow these agents to maintain a
therapeutic drug concentration over a 24-hour period and allow alternate
administration times.
 
 
And taking Metformin prevents other cancers, read here to notice it:
 
 
"Indeed, observational studies with time-related biases have reported
extraordinary effects of ranging from 20 to 94% reductions in the risk
of cancer (10), suggesting that metformin may be more effective at
preventing or treating cancer than preventing the cardiovascular
complications of diabetes."
 
https://care.diabetesjournals.org/content/37/7/1786#:~:text=Indeed%2C%20observational%20studies%20with%20time,the%20cardiovascular%20complications%20of%20diabetes.
 
 
And more precision about I3C (Indole-3-carbinol) and cancer..
 
I have just read the following article, i invite you to read it:
 
Broccoli and Brussels sprouts: Cancer foes
 
https://news.harvard.edu/gazette/story/2019/05/beth-israel-researchers-uncover-anti-cancer-drug-mechanism-in-broccoli/
 
But i think that the above article is not speaking about the following
research that says the following about I3C (Indole-3-carbinol):
 
"In vivo studies showed that I3C inhibits the development of different
cancers in several animals when given before or in parallel to a
carcinogen. However, when I3C was given to the animals after the
carcinogen, I3C promoted carcinogenesis. This concern regarding the
long-term effects of I3C treatment on cancer risk in humans resulted in
Amine Moulay Ramdane <aminer68@gmail.com>: Feb 17 10:18AM -0800

Hello..
 
 
It is my last post about philosophy and politics here in this newsgroup.
 
More philosophy about my personality and more..
 
I am a white arab, and i think i am smart since i have also
invented many scalable algorithms and algorithms..
 
I said that i think i am smart, and first, i will ask a smart question
so that you understand:
 
If i say to you the following:
 
I am a white arab and i love France.
 
or
 
If i say to you:
 
I am a white arab and i love USA.
 
Do you think it is convincing ?
 
So notice carefully my new smart proverb that says:
 
"There is an important difference between the appearance of a reality and the truth of a reality, this is why in science you have not to be confident with the appearances, since in science you have to understand the truth, so, to be able to understand the truth you have to know how to be patience before understanding the truth and not to rush in like a fool by lack of wisdom "
 
So do you understand it clearly ?
 
So now you are understanding that if you say to a person or to a country: I love you.
 
You have to know that, even in science, we must not trust the appearances of a reality, so the country or person has to be convinced that you are actually doing something that benefits them. But you have to be smarter than that: you have to know the psychology of people and how people behave, hence I said the following in my writing below:
 
"So when you are a human and you look at a beautiful flower that we call a rose, you will have the tendency to like it, so it is like philosophy, we have the tendency to want to eat good food and to be more and more perfection so that to be the right perfection and strenght, and it is in philosophy like an engine that pushes humans forward towards more and more perfection, and as i am explaining it below in my philosophy that this strenght that brings perfection brings stability in this instability of life"
 
So as you are noticing, we can logically infer from it that the psychology and behavior of people is such that people demand "high" standards of "quality", even when they are not themselves high standards of quality. This is why I am making you understand my personality by writing my thoughts: so that you understand that I am benefiting people with some of the high-quality scalable algorithms and algorithms that I have invented and am sharing with you, or making you understand the thoughts of my philosophy, because I know that, for people to be confident in me, I have to meet high standards of quality.
 
More philosophy about economic growth..
 
I am a white arab, and i think i am smart since i have also
invented many scalable algorithms and algorithms..
 
What factors contribute to economic growth?
 
"Increases in "productivity" are the major factor responsible for per capita economic growth—this has been especially evident since the mid-19th century. Most of the economic growth in the 20th century was due to increased output per unit of labor, materials, energy, and land (less input per widget)."
 
Read more here:
 
https://en.wikipedia.org/wiki/Economic_growth#:~:text=Increases%20in%20productivity%20are%20the,(less%20input%20per%20widget).
 
 
So as you are noticing, increases in productivity are the major factor responsible for per capita economic growth, which is a very important fact that you have to take into account. Economic growth is important because it is the means by which we can improve the quality of our standard of living; it also enables us to cater for any increase in our population without having to lower our standard of living. So here are my thoughts about productivity:
 
I have also just posted about the following thoughts from the following PhD computer scientist:
 
 
https://lemire.me/blog/about-me/
 
Read more here his thoughts about productivity:
 
 
https://lemire.me/blog/2012/10/15/you-cannot-scale-creativity/
 
 
And i think he is making a mistake:
 
 
Since we have that Productivity = Output/Input
 
 
But better human training and/or better tools and/or better human smartness and/or better human capacity can make the Parallel productivity part much bigger than the Serial productivity part, so it can scale much more (it is like Gustafson's Law).
 
 
And it looks like the following:
 
 
About parallelism and about Gustafson's Law..
 
 
Gustafson's Law:
 
• If you increase the amount of work done by each parallel
task then the serial component will not dominate
• Increase the problem size to maintain scaling
• Can do this by adding extra complexity or increasing the overall
problem size
 
 
Scaling is important, as the more a code scales the larger a machine it
can take advantage of:
 
• can consider weak and strong scaling
• in practice, overheads limit the scalability of real parallel programs
• Amdahl's law models these in terms of serial and parallel fractions
• larger problems generally scale better: Gustafson's law
 
 
Load balance is also a crucial factor.
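
For concreteness, here are the standard textbook statements of the two laws named above (they are general formulas, not specific to my algorithms), with s the serial fraction and N the number of processors:

Amdahl (strong scaling): Speedup(N) = 1 / ( s + (1 - s)/N )
Gustafson (weak scaling): Scaled speedup(N) = s + (1 - s) * N

For example, with s = 0.1 and N = 64, Amdahl's law caps the speedup at about 8.8, while Gustafson's scaled speedup is 57.7, which is why increasing the problem size with the machine keeps the serial component from dominating.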
 
So as said above:
 
Since we have that Productivity = Output/Input
 
But better human training and/or better tools and/or better human smartness and/or better human capacity can make the Parallel productivity part much bigger than the Serial productivity part, so it can scale much more (it is like Gustafson's Law).
 
Also, productivity focuses on the quantity produced, created, or completed, but throughput is the rate of production or the rate at which something can be processed (throughput = output / duration). Throughput is a measure of the comparative effectiveness of a process or an operation, e.g. a count of items completed per month.
 
So throughput is important for productivity, and I am a white Arab and I think I am smart since I have also invented many scalable algorithms that greatly enhance parallelism, which is so important for throughput.
 
Here is some of my scalable algorithms that i have invented:
 
https://sites.google.com/site/scalable68/scalable-mlock
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
https://sites.google.com/site/scalable68/scalable-rwlock
 
https://sites.google.com/site/scalable68/scalable-rwlock-that-works-accross-processes-and-threads
 
https://groups.google.com/forum/#!topic/comp.programming.threads/VaOo1WVACgs
 
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
 
Also, I have decided not to share some others of my scalable algorithms, since there is competition, so I am seeking a balance between collaboration and competition.
 
Also an important factor in economic growth comes from the following:
 
"Heiner Rindermann and James Thompson uncovered that the "smart fraction" of a country is quite influential in impacting the performance of that country, for example, its GDP."
 
Read the following to notice it:
 
Let's look for example at the USA; read the following from Jonathan Wai, who holds a Ph.D., it says:
 
"Heiner Rindermann and James Thompson uncovered that the "smart fraction" of a country is quite influential in impacting the performance of that country, for example, its GDP."
 
And it also says the following:
 
""According to recent population estimates, there are about eight Chinese and Indians for every American in the top 1 percent in brains." But consider that the U.S. benefits from the smart fractions of every other country in the world because it continues to serve as a magnet for brainpower, something that is not even factored into these rankings.
 
What these rankings clearly show is America is likely still in the lead in terms of brainpower. And this is despite the fact federal funding for educating our smart fraction is currently zero. Everyone seems worried Americans are falling behind, but this is because everyone is focusing on average and below average people. Maybe it's time we started taking a closer look at the smartest people of our own country."
 
Read more here:
 
https://www.psychologytoday.com/us/blog/finding-the-next-einstein/201312/whats-the-smartest-country-in-the-world
 
So as you are noticing, it's immigrants (and there are about eight Chinese and Indians for every American in the top 1 percent in brains) that are making the USA a rich country.
 
And read also the following to understand more:
 
Why Silicon Valley Wouldn't Work Without Immigrants
 
There are many theories for why immigrants find so much success in tech. Many American-born tech workers point out that there is no shortage of American-born employees to fill the roles at many tech companies. Researchers have found that more than enough students graduate from American colleges to fill available tech jobs. Critics of the industry's friendliness toward immigrants say it comes down to money — that technology companies take advantage of visa programs, like the H-1B system, to get foreign workers at lower prices than they would pay American-born ones.
 
But if that criticism rings true in some parts of the tech industry, it misses the picture among Silicon Valley's top companies. One common misperception of Silicon Valley is that it operates like a factory; in that view, tech companies can hire just about anyone from anywhere in the world to fill a particular role.
 
But today's most ambitious tech companies are not like factories. They're more like athletic teams. They're looking for the LeBrons and Bradys — the best people in the world to come up with some brand-new, never-before-seen widget, to completely reimagine what widgets should do in the first place.
 
"It's not about adding tens or hundreds of thousands of people into manufacturing plants," said Aaron Levie, the co-founder and chief executive of the cloud-storage company Box. "It's about the couple ideas that are going to be invented that are going to change everything."
 
Why do tech honchos believe that immigrants are better at coming up with those inventions? It's partly a numbers thing. As the tech venture capitalist Paul Graham has pointed out, the United States has only 5 percent of the world's population; it stands to reason that most of the world's best new ideas will be thought up by people who weren't born here.
 
If you look at some of the most consequential ideas in tech, you find an unusual number that were developed by immigrants. For instance, Google's entire advertising business — that is, the basis for the vast majority of its revenues and profits, the engine that allows it to hire thousands of people in the United States — was created by three immigrants: Salar Kamangar and Omid Kordestani, who came to the United States from Iran, and Eric Veach, from Canada.
 
But it's not just a numbers thing. Another reason immigrants do so well in tech is that people from outside bring new perspectives that lead to new ideas.
 
Read more here:
 
https://www.nytimes.com/2017/02/08/technology/personaltech/why-silicon-valley-wouldnt-work-without-immigrants.html
 
So the tools that permit making the parallel productivity part bigger are also automation and robots and parallelism and artificial intelligence.. read my following thoughts about automation and about artificial intelligence etc.:
 
More precision about capitalism and about National Vanguard..
 
I have just read the following article from a white supremacist website called National Vanguard:
 
 
Why Capitalism Fails
 
 
https://nationalvanguard.org/2015/07/why-capitalism-fails/
 
 
And it is saying the following about why capitalism fails:
 
 
"Capitalism permits inheritance, the command transfer of private property to a designated new owner upon the death of the previous owner. And therein is the flaw: inherited wealth isn't earned by its owner, yet it leads to a class segregation of men that has nothing to do with how much wealth they have earned; i.e., nothing to do with how much or how well or how significantly they have worked."
 
 
I am a white Arab and I think I am smart since I have invented many scalable algorithms, and I will answer with my fluid intelligence: I think the above article is not taking into account the risk factor and the smartness factor. There have to be mechanisms, like engines, that "encourage" or "make" a part of the people work by taking risks or great risks and by doing their best (so as to become rich), or/and that "encourage" or "make" the smartest give their best with their smartness (so as to become rich). I think capitalism has those mechanisms in the form of rewards, by allowing people to become "rich" and by allowing inheritance, the command transfer of private property to a designated new owner upon the death of the previous owner: it "encourages" or "makes" a part of the people work by taking risks and doing their best (so as to become rich), or/and it encourages or makes the smartest give their best with their smartness (so as to become rich).
 
 
And notice that i am also defining taking a "risk" as working "hard".
 
 
And the above article is saying the following:
 
 
"Capitalism constantly looks for ways to reduce labor costs. Automation made human labor less necessary than it had been when capitalism first appeared. When automation did appear, people who had the talent, the skills, and the motivation to make contributions began to find no jobs, or to become uncompetitive with mass-production if they tried to employ themselves."
 
 
I think it is not true; read the following:
 
 
https://singularityhub.com/2019/01/01/ai-will-create-millions-more-jobs-than-it-will-destroy-heres-how/
 
 
And read the following:
 
 
Here are the advantages and disadvantages of automation:
 
 
Following are some of the advantages of automation:
 
 
1. Automation is the key to the shorter workweek. Automation will allow
the average number of working hours per week to continue to decline,
thereby allowing greater leisure hours and a higher quality life.
 
 
2. Automation brings safer working conditions for the worker. Since
there is less direct physical participation by the worker in the
production process, there is less chance of personal injury to the worker.
 
 
3. Automated production results in lower prices and better products. It
has been estimated that the cost to machine one unit of product by
conventional general-purpose machine tools requiring human operators may
be 100 times the cost of manufacturing the same unit using automated
mass-production techniques. The electronics industry offers many
examples of improvements in manufacturing technology that have
significantly reduced costs while increasing product value (e.g., colour
TV sets, stereo equipment, calculators, and computers).
 
 
4. The growth of the automation industry will itself provide employment
opportunities. This has been especially true in the computer industry,
as the companies in this industry have grown (IBM, Digital Equipment
Corp., Honeywell, etc.), new jobs have been created.
These new jobs include not only workers directly employed by these
companies, but also computer programmers, systems engineers, and other
needed to use and operate the computers.
 
 
5. Automation is the only means of increasing standard of living. Only
through productivity increases brought about by new automated methods of
production, it is possible to advance standard of living. Granting wage
increases without a commensurate increase in productivity
will results in inflation. To afford a better society, it is a must to
increase productivity.
 
 
Following are some of the disadvantages of automation:
 
