Amine Moulay Ramdane <aminer68@gmail.com>: Feb 18 12:13PM -0800

Hello,

Samsung has developed high-bandwidth memory with integrated AI processing which, according to the company, will double the performance of AI systems. Read more here:

https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F312714%2FSamsung-a-developpe-une-memoire-a-large-bande-passante-avec-traitement-IA-integre-qui-doublera-la-performance-des-systemes-d-IA-selon-la-societe%2F

And read all my following news and thoughts:

Breakthrough mRNA vaccine developed for cancer immunotherapy by Chinese scientists

Breakthrough research from China was able to reprogram the immune system to shrink tumour cells and prevent tumours from spreading. Read more here:

https://news.sky.com/story/breakthrough-mrna-vaccine-developed-for-cancer-immunotherapy-by-chinese-scientists-12220758?fbclid=IwAR1RE1GEv7--pYrPVTGAzxhXByWq0TN0Rr3HgX1_hyam_iR1PCSDTdjNYtE#

Yet more precision, and more of my philosophy about the exponential progress of our humanity..

I am posting the following new video for French speakers, since I am a white Arab who speaks and writes English, French, and Arabic. The video is about the exponential progress of our humanity, so I invite you to watch it:

Le futur selon Ray Kurzweil, l'icône mondiale du transhumanisme et de la singularité ("The future according to Ray Kurzweil, the world icon of transhumanism and the singularity")
https://www.youtube.com/watch?v=8FBgfmdNhSY

And look below at my following news; I think that by reading them you will be able to feel that we are advancing quickly, with exponential progress..

More of my philosophy about the exponential progress of our humanity..

I think I am smart, since I have also invented many algorithms, including many scalable ones..
I think that white supremacists, neo-nazis, and many other people do not correctly understand the exponential progress of our humanity. Notice in my following news that this exponential progress is now going faster and faster: look, for example, at the new discovery below that will permit computers and phones to run thousands of times faster. You can see that it will make artificial intelligence much more powerful, so I think we will soon be able to enhance human genetics much further, and we will be able to do much more than that, because the exponential progress of our humanity will very soon (in about 10 or 15 years from now) make us much more powerful. So we have to be much more optimistic. Read my thoughts below about the exponential progress of our humanity, and read my news below.

More political philosophy about: are humans smart?

So, are humans smart? I am very positive about humans, and I think that humans are smart, since we are also advancing by the following process of swarm intelligence, which is very efficient. Read about it here:

How Swarm Intelligence Is Making Simple Tech Much Smarter
https://singularityhub.com/2018/02/08/how-swarm-intelligence-is-making-simple-tech-much-smarter/

So I think that "collectively" we humans are smart, and I think you will soon notice it in the exponential progress of our humanity; read about it in my following thoughts:

More political philosophy about whether we have to be pessimistic..

I think it is a beautiful day in history, and how can you understand it?
First, you will notice that the exponential progress of our humanity is going very fast, and faster and faster. Look at the following video to understand it:

Exponential Progress: Can We Expect Mind-Blowing Changes In The Near Future
https://www.youtube.com/watch?v=HfM5HXpfnJQ&t=144s

So you have to take this exponential progress of our humanity into account and be much more optimistic. And you have to know that this exponential progress also comes from the following process:

How Swarm Intelligence Is Making Simple Tech Much Smarter
https://singularityhub.com/2018/02/08/how-swarm-intelligence-is-making-simple-tech-much-smarter/

And here is my new poem about the exponential progress of our humanity, called "This beautiful exponential progress is not a distress":

--

This beautiful exponential progress is not a distress
Because it is a beautiful maturity but not the adolescence

This beautiful exponential progress is not a distress
And we have to beautifully tune it with the beautiful consensus

This beautiful exponential progress is not a distress
So let us not be just guesses but technicality and science

This beautiful exponential progress is not a distress
So let us be a beautiful expressiveness that is not helpless

This beautiful exponential progress is not a distress
So let us take together this beautiful breakfast

This beautiful exponential progress is not a distress
Since you are also like my so beautiful Princess

This beautiful exponential progress is not a distress
So let us be a beautiful presence for present and future acceptance

--

Read my following news:

With the following new discovery, computers and phones could run thousands of times faster..

Prof Alan Dalton, in the School of Mathematical and Physics Sciences at the University of Sussex, said: "We're mechanically creating kinks in a layer of graphene. It's a bit like nano-origami.

"Using these nanomaterials will make our computer chips smaller and faster.
It is absolutely critical that this happens, as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future.

"This kind of technology -- 'straintronics', using nanomaterials as opposed to electronics -- allows space for more chips inside any device. Everything we want to do with computers -- to speed them up -- can be done by crinkling graphene like this."

Dr Manoj Tripathi, Research Fellow in Nano-structured Materials at the University of Sussex and lead author on the paper, said: "Instead of having to add foreign materials into a device, we've shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate."

The development is a greener, more sustainable technology: because no additional materials need to be added, and because the process works at room temperature rather than high temperature, it uses less energy. Read more here:

https://www.sciencedaily.com/releases/2021/02/210216100141.htm

AI system optimally allocates workloads across thousands of servers to cut costs, save energy
Read more here:
https://techxplore.com/news/2019-08-ai-optimally-allocates-workloads-thousands.html

It is related to the following article; I invite you to read it:

Why Energy Is A Big And Rapidly Growing Problem For Data Centers
"It's either a breakthrough in our compute engines, or we need to get deadly serious about doubling the number of power plants on the planet."
Read more here:
https://www.forbes.com/sites/forbestechcouncil/2017/12/15/why-energy-is-a-big-and-rapidly-growing-problem-for-data-centers/#1d126295a307

And it is related to my following thoughts; I invite you to read them:

About how to beat Moore's Law, and about energy efficiency..
I am an inventor of many algorithms, including many scalable ones, and now I will talk about "How to beat Moore's Law?" and, further, about energy efficiency.

How to beat Moore's Law?

I think that with the following discovery graphene can finally be used in CPUs; it is a scale-out method. Read about the discovery and you will notice it:

New Graphene Discovery Could Finally Punch the Gas Pedal, Drive Faster CPUs
Read more here:
https://www.extremetech.com/computing/267695-new-graphene-discovery-could-finally-punch-the-gas-pedal-drive-faster-cpus

The scale-out method above with graphene is very interesting, and here is the other, scale-up, method with multicores and parallelism:

Beating Moore's Law: Scaling Performance for Another Half-Century
Read more here:
https://www.infoworld.com/article/3287025/beating-moore-s-law-scaling-performance-for-another-half-century.html

Also read the following:

"Also, modern programming environments contribute to the problem of software bloat by placing ease of development and portable code above speed or memory usage. While this is a sound business model in a commercial environment, it does not make sense where IT resources are constrained. Languages such as Java, C-Sharp, and Python have opted for code portability and software development speed above execution speed and memory usage, while modern data storage and transfer standards such as XML and JSON place flexibility and readability above efficiency. The Army can gain significant performance improvements with existing hardware by treating software and operating system efficiency as a key performance parameter with measurable criteria for CPU load and memory footprint. The Army should lead by making software efficiency a priority for the applications it develops.
Capability Maturity Model Integration (CMMI) version 1.3 for development processes should be adopted across Army organizations, with automated code analysis and profiling being integrated into development. Additionally, the Army should shape the operating system market by leveraging its buying power to demand a secure, robust, and efficient operating system for devices. These metrics should be implemented as part of the Common Operating Environment (COE)."

And about improved algorithms:

"Hardware improvements mean little if software cannot effectively use the resources available to it. The Army should shape future software algorithms by funding basic research on improved software algorithms to meet its specific needs. The Army should also search for new algorithms and techniques which can be applied to meet specific needs and develop a learning culture within its software community to disseminate this information."

Read the following:
https://smallwarsjournal.com/jrnl/art/overcoming-death-moores-law-role-software-advances-and-non-semiconductor-technologies

More about energy efficiency..

You have to be aware that parallelizing software can lower power consumption. Here is the formula that permits you to calculate the power consumption of "parallel" software programs:

Power consumption of the total cores = (number of cores) * (1 / (parallel speedup))^3 * (power consumption of a single core)

Also read the following about energy efficiency:

Energy efficiency isn't just a hardware problem: your programming-language choices can have serious effects on the efficiency of your energy consumption. As the researchers discovered, CPU-based energy consumption always represents the majority of the energy consumed. What Pereira et al. found wasn't entirely surprising: speed does not always equate to energy efficiency.
Compiled languages like C, C++, Rust, and Ada ranked as some of the most energy-efficient languages out there, and Java and FreePascal are also good at energy efficiency. Read more here:
https://jaxenter.com/energy-efficient-programming-languages-137264.html

RAM is still expensive and slow relative to CPUs, and memory-usage efficiency is important for mobile devices. So the Delphi and FreePascal compilers are still useful for mobile devices, because Delphi and FreePascal do well if you are considering time and memory, or energy and memory. The Pascal benchmark in that article was done with FreePascal, and it shows that C, Go, and Pascal do rather better if you rank languages on time and memory, or energy and memory. Read again here to notice it:
https://jaxenter.com/energy-efficient-programming-languages-137264.html

Using artificial intelligence to find new uses for existing medications
Read more here:
https://techxplore.com/news/2021-01-artificial-intelligence-medications.html

New research shows machine learning could lop a year off technology design cycle
Read more here:
https://techxplore.com/news/2021-01-machine-lop-year-technology.html

Accelerating AI computing to the speed of light
Read more here:
https://techxplore.com/news/2021-01-ai.html

At a major AI research conference, one researcher laid out how existing AI techniques might be used to analyze causal relationships in data.
Read more here:
https://www.technologyreview.com/2019/05/08/135454/deep-learning-could-reveal-why-the-world-works-the-way-it-does/

Also read the following news:

Researchers engineer a tiny antibody capable of neutralizing the coronavirus
Read more here:
https://phys.org/news/2021-02-tiny-antibody-capable-neutralizing-coronavirus.html?fbclid=IwAR0B7TKas-la17aRdsYiZVLw7nYwrLlKF3ldkiduV3W0oTGwKDGPAnpHcrE

Scientists uncover potential antiviral treatment for COVID-19
Read more here:
https://medicalxpress.com/news/2021-02-scientists-uncover-potential-antiviral-treatment.html?fbclid=IwAR18LKpb4CIG5lhhe4XD0Rvr6is_-KaraqfitniXEoFMJiyOgdsMan-bRgQ

Computer model makes strides in search for COVID-19 treatments
Read more here:
https://medicalxpress.com/news/2021-02-covid-treatments.html?fbclid=IwAR1AYnulQoHxXifEkP_qQWMOrZDdFAw4HoWbWwPPP__LEkvyGKpfb9jWNGk

Look at this interesting video:

Are Hydrogen-Powered Cars The Future?
https://www.youtube.com/watch?v=NfkfLRiYgac&fbclid=IwAR2Yh84hKWElluUoqsApfyQQkbE578PzQHqhCa9vsUDRbc2h0eqnlc-JTF4

More about protein folding, and more of my news..

Look at the following interesting video:

Has Protein Folding Been Solved?
https://www.youtube.com/watch?v=yhJWAdZl-Ck

And read the following news about protein folding:

DeepMind may just have cracked one of the grandest challenges in biology, one that rivals the discovery of DNA's double helix. It could change biomedicine, drug discovery, and vaccine development forever.
Read more here:
https://singularityhub.com/2020/12/15/deepminds-alphafold-is-close-to-solving-one-of-biologys-greatest-challenges/

Here is a new important discovery, and more news..

Solving complex physics problems at lightning speed

"A calculation so complex that it takes twenty years to complete on a powerful desktop computer can now be done in one hour on a regular laptop. Physicists have now designed a new method to calculate the properties of atomic nuclei incredibly quickly."
Read more here:
https://www.sciencedaily.com/releases/2021/02/210201090810.htm

Why is MIT's new "liquid" AI a breakthrough innovation?
Read more here:
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F312174%2FPourquoi-la-nouvelle-IA-liquide-de-MIT-est-elle-une-innovation-revolutionnaire-Elle-apprend-continuellement-de-son-experience-du-monde%2F

And here is Ramin Hasani, Postdoctoral Associate (he is Iranian):
https://www.csail.mit.edu/person/ramin-hasani
And here he is:
http://www.raminhasani.com/

He is the lead author of the following new study:

New 'Liquid' AI Learns Continuously From Its Experience of the World
Read more here:
https://singularityhub.com/2021/01/31/new-liquid-ai-learns-as-it-experiences-the-world-in-real-time/

And read the following interesting news: Global race for artificial ...
Amine Moulay Ramdane <aminer68@gmail.com>: Feb 18 11:07AM -0800

Hello,

More precision about my new inventions of scalable algorithms..

My inventions LW_Fast_RWLockX and Fast_RWLockX are two powerful scalable RWLocks that are FIFO-fair and starvation-free, and costless on the reader side (that means no atomics and no fences on the reader side). They use sys_membarrier expedited on Linux and FlushProcessWriteBuffers() on Windows, and I have just updated them so that they now work on both Linux and Windows. If you look at the source code of my LW_Fast_RWLockX.pas and Fast_RWLockX.pas inside the zip file, you will notice that on Linux they call two functions, membarrier1() and membarrier2(): membarrier1() registers the process's intent to use MEMBARRIER_CMD_PRIVATE_EXPEDITED, and membarrier2() executes a memory barrier on each running thread belonging to the same process as the calling thread. Read more here to understand:

https://man7.org/linux/man-pages/man2/membarrier.2.html

I think my inventions are really smart, because the following PhD researcher says:

"Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees;"

Read more here:
http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html

So I think that my scalable reader-writer locks above are efficient, FIFO-fair, and starvation-free.
LW_Fast_RWLockX is a lightweight scalable reader-writer mutex that uses a technique that looks like a seqlock, but without looping on the reader side as a seqlock does; this is what permits the reader side to be costless. It is fair, it is of course starvation-free, and it spin-waits. Fast_RWLockX is the same kind of lightweight scalable reader-writer mutex, but it does not spin-wait: it waits on my SemaMonitor, so it is energy-efficient.

You can read about them and download them from my website here:
https://sites.google.com/site/scalable68/scalable-rwlock

My other inventions are the following scalable RWLocks, also FIFO-fair and starvation-free:

Here is my invention of a scalable, starvation-free, FIFO-fair, lightweight multiple-readers-exclusive-writer lock called LW_RWLockX; it works across processes and threads:
https://sites.google.com/site/scalable68/scalable-rwlock-that-works-accross-processes-and-threads

And here are my new variants of scalable RWLocks that are FIFO-fair and starvation-free:
https://sites.google.com/site/scalable68/new-variants-of-scalable-rwlocks

More about the energy efficiency of transactional memory, and more..
I have just read the following PhD thesis, which is also about the energy efficiency of transactional memory; here it is:

Techniques for Enhancing the Efficiency of Transactional Memory Systems
http://kth.diva-portal.org/smash/get/diva2:1258335/FULLTEXT02.pdf

I think it presents the best known energy-efficient algorithm for transactional memory, but I think it is not good: notice how, for 64 cores, the Beta parameter can be 16 cores. So I think I am smart, and I have invented a much more energy-efficient and powerful scalable fast mutex, and I have also invented scalable RWLocks that are starvation-free and fair; read about them in my writing and thoughts below.

More about deadlocks and lock-based systems, and more..

I have just read the following from a software engineer from Quebec, Canada:

A deadlock-detecting mutex
https://faouellet.github.io/ddmutex/

I quickly understood his algorithm, but I think it is not efficient at all: we can find whether a graph has a cycle (via its strongly connected components) in O(V+E) time, whereas the engineer's algorithm above takes around O(n*(V+E)) time, so it is not good. A much better way is my following way of detecting deadlocks:

DelphiConcurrent and FreepascalConcurrent are here
Read more here on my website:
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent

And I will soon enhance DelphiConcurrent and FreepascalConcurrent much more, to support both communication deadlocks and resource deadlocks.

About transactional memory and locks..
I have just read the following paper about transactional memory and locks:
http://sunnydhillon.net/assets/docs/concurrency-tm.pdf

I don't agree with the above paper; read my following thoughts to understand why.

I have just invented a new powerful scalable fast mutex, and it has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic efficiently very low
4- Very good fast-path performance
5- Good preemption tolerance
6- Faster than the scalable MCS lock
7- It solves the problem of lock convoying

So my new invention also solves the following problem:

The convoy phenomenon
https://blog.acolyer.org/2019/07/01/the-convoy-phenomenon/

And here is my other new invention, a scalable RWLock that works across processes and threads and that is starvation-free and fair; I will soon enhance it much more and it will become really powerful:
https://sites.google.com/site/scalable68/scalable-rwlock-that-works-accross-processes-and-threads

And about lock-free versus locks, read my following post:
https://groups.google.com/forum/#!topic/comp.programming.threads/F_cF4ft1Qic

And about deadlocks, here is how I have solved them (and I will soon enhance DelphiConcurrent and FreepascalConcurrent much more):

DelphiConcurrent and FreepascalConcurrent are here
Read more here on my website:
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent

So I think that with my scalable fast mutex above and my scalable RWLocks that are starvation-free and fair, and by reading the following about the composability of lock-based systems, you will notice that lock-based systems are still useful.

"About composability of lock-based systems..

Design your systems to be composable.
Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible." [9] The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

There are two ways to make lock-based systems completely composable, and each has its own place.

First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts.

A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel.
As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized."

Read more here: https://queue.acm.org/detail.cfm?id=1454462

About mathematics and about abstraction..

I think my specialization is that I have invented many software algorithms, including scalable ones, and I am still inventing others. The algorithms I have invented are like mathematical theorems that you prove and present at a higher level of abstraction; but beyond that, my algorithms are delivered in the form of a higher-level software abstraction that hides their complexity, and that is the part that interests me most. For example, notice how I construct a higher-level abstraction in my following tutorial: a methodology that first models the synchronization objects of parallel programs with logic primitives (If-Then-OR-AND), so that they are easy to translate to Petri nets, so that deadlocks in parallel programs can be detected. Please take a look at it at the following web link, because this tutorial of mine teaches by higher-level abstraction:

How to analyse parallel applications with Petri Nets
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets

So notice that my methodology is a generalization that handles both communication deadlocks and resource deadlocks in parallel programs:

1- Communication deadlocks, which result from incorrect use of event objects or condition variables (i.e. wait-notify synchronization).
2- Resource deadlocks, a common kind of deadlock in which a set of threads blocks forever because each thread in the set is waiting to acquire a lock held by another thread in the set.

This is what interests me in mathematics: I want to work efficiently at a much higher level of abstraction. I will give you an example of what I am doing in mathematics so that you understand. Look at how I implement mathematics as software parallel conjugate gradient linear system solvers that scale very well, presented at a higher level of abstraction; read the following to notice how I abstract their mathematics:

About the SOR and conjugate gradient mathematical methods..

I have just looked at SOR (the successive over-relaxation method), and I think it is much less powerful than the conjugate gradient method; read the following to notice it:

COMPARATIVE PERFORMANCE OF THE CONJUGATE GRADIENT AND SOR METHODS FOR COMPUTATIONAL THERMAL HYDRAULICS
https://inis.iaea.org/collection/NCLCollectionStore/_Public/19/055/19055644.pdf?r=1&r=1

This is why I have implemented, in both C++ and Delphi, my Parallel Conjugate Gradient Linear System Solver Library that scales very well. Read my following thoughts about it to understand more:

About the convergence properties of the conjugate gradient method:

The conjugate gradient method can theoretically be viewed as a direct method, as it produces the exact solution after a finite number of iterations, not larger than the size of the matrix, in the absence of round-off error. However, the conjugate gradient method is unstable with respect to even small perturbations; e.g., most directions are not in practice conjugate, and the exact solution is never obtained.
Fortunately, the conjugate gradient method can be used as an iterative method, as it provides monotonically improving approximations to the exact solution, which may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear, and its speed is determined by the condition number κ(A) of the system matrix A: the larger κ(A) is, the slower the improvement.

Read more here: http://pages.stat.wisc.edu/~wahba/stat860public/pdf1/cj.pdf

So I think my Conjugate Gradient Linear System Solver Library that scales very well is still very useful; read about it in my writing below.

Read the following interesting news:

The finite element method finds its place in games
Read more here:
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fhpc.developpez.com%2Factu%2F288260%2FLa-methode-des-elements-finis-trouve-sa-place-dans-les-jeux-AMD-propose-la-bibliotheque-FEMFX-pour-une-simulation-en-temps-reel-des-deformations%2F

But you have to be aware that the finite element method uses the conjugate gradient method for the solution of finite element problems; read here to notice it:

Conjugate Gradient Method for Solution of Large Finite Element Problems on CPU and GPU
https://pdfs.semanticscholar.org/1f4c/f080ee622aa02623b35eda947fbc169b199d.pdf

This is why I have also designed and implemented my Parallel Conjugate Gradient Linear System Solver library that scales very well. Here it is:

My Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well, version 1.76, is here..

Author: Amine Moulay Ramdane

Description: This library contains a parallel implementation of a conjugate gradient dense linear system solver that is NUMA-aware and cache-aware and scales very well, and it also contains a parallel implementation of a conjugate gradient sparse linear system solver that is cache-aware and scales very well.
Sparse linear system solvers are ubiquitous in high-performance computing (HPC), and often they are the most computationally intensive parts of scientific computing codes. A few of the many applications relying on sparse linear solvers include fusion energy simulation, space weather simulation, climate modeling, environmental modeling, the finite element method, and large-scale reservoir simulations to enhance oil recovery in the oil and gas industry.

Conjugate gradient is known to converge to the exact solution in n steps for a matrix of size n, and because of this it was historically first seen as a direct method. However, after a while people figured out that it works really well if you just stop the iteration much earlier: often you will get a very good approximation after far fewer than n steps. In fact, we can analyze how fast conjugate gradient converges. The end result is that conjugate gradient is used today as an iterative method for large linear systems.

Please download the zip file and read the readme file inside to know how to use the library. You can download it from:

https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library

Language: GNU C++ and Visual C++ and C++Builder

Operating systems: Windows, Linux, Unix and Mac OS X on (x86)

--

As you have noticed, I have just written above about my Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well, but here are my Parallel Delphi and FreePascal Conjugate Gradient Linear System Solver Libraries that scale very well: ...
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.