- Read again, I corrected a last typo - 1 Update
- More about my diploma, my education, and my way of doing things - 1 Update
- The finite element method finds its place in games - 1 Update
- How to beat Moore's Law? - 1 Update
- 96% of jobs, those that do not call for human ingenuity, are destined to disappear due to AI, according to Garry Kasparov - 1 Update
- A Promising Antiviral Is Being Tested for the Coronavirus—but Results Are Not Yet Out - 1 Update
- More about discrete-event simulation (DES) and more.. - 1 Update
aminer68@gmail.com: Feb 29 01:39PM -0800

Hello,

Read again, I corrected a last typo.

Read the following interesting news:

The finite element method finds its place in games

Read more here: https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fhpc.developpez.com%2Factu%2F288260%2FLa-methode-des-elements-finis-trouve-sa-place-dans-les-jeux-AMD-propose-la-bibliotheque-FEMFX-pour-une-simulation-en-temps-reel-des-deformations%2F

But you have to be aware that the finite element method commonly uses the conjugate gradient method to solve the linear systems it produces; read the following to notice it:

Conjugate Gradient Method for Solution of Large Finite Element Problems on CPU and GPU

https://pdfs.semanticscholar.org/1f4c/f080ee622aa02623b35eda947fbc169b199d.pdf

This is why I have also designed and implemented my Parallel Conjugate Gradient Linear System Solver library that scales very well. Here it is:

My Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well, version 1.76.

Author: Amine Moulay Ramdane

Description: This library contains a parallel implementation of a Conjugate Gradient Dense Linear System Solver that is NUMA-aware and cache-aware and that scales very well, and it also contains a parallel implementation of a Conjugate Gradient Sparse Linear System Solver that is cache-aware and that scales very well.

Sparse linear system solvers are ubiquitous in high performance computing (HPC) and are often the most computationally intensive parts of scientific computing codes. A few of the many applications relying on sparse linear solvers include fusion energy simulation, space weather simulation, climate modeling, environmental modeling, the finite element method, and large-scale reservoir simulations to enhance oil recovery in the oil and gas industry.

In exact arithmetic, conjugate gradient converges to the exact solution in at most n steps for an n x n symmetric positive definite matrix, and for this reason it was historically first seen as a direct method. However, it was later realized that it works very well if you stop the iteration much earlier: you often get a very good approximation after far fewer than n steps, and its convergence rate can be analyzed. The end result is that conjugate gradient is used today as an iterative method for large linear systems. (A minimal sketch of the iteration follows after this post.)

Please download the zip file and read the readme file inside the zip to know how to use the library. You can download it from:

https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library

Language: GNU C++, Visual C++ and C++Builder

Operating Systems: Windows, Linux, Unix and Mac OS X on x86

--

As I have just written above about my Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well, here are also my Parallel Delphi and FreePascal Conjugate Gradient Linear System Solver libraries that scale very well:

Parallel implementation of a Conjugate Gradient Dense Linear System Solver library that is NUMA-aware and cache-aware and that scales very well

https://sites.google.com/site/scalable68/scalable-parallel-implementation-of-conjugate-gradient-dense-linear-system-solver-library-that-is-numa-aware-and-cache-aware

Parallel implementation of a Conjugate Gradient Sparse Linear System Solver library that scales very well

https://sites.google.com/site/scalable68/scalable-parallel-implementation-of-conjugate-gradient-sparse-linear-system-solver

Thank you,
Amine Moulay Ramdane. |
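[Editor's illustration] To make the conjugate gradient discussion in the post above concrete, here is a minimal sequential sketch in C++ for a dense symmetric positive definite system. It only illustrates the iteration that such a solver parallelizes (the matrix-vector product and the dot products are the parts worth distributing across cores); it is not the API of the library above, and all names in it are made up for the example.

#include <vector>
#include <cmath>
#include <cstddef>

// y = A*x for a dense n x n matrix stored row-major in a flat vector.
static std::vector<double> matvec(const std::vector<double>& A,
                                  const std::vector<double>& x, std::size_t n) {
    std::vector<double> y(n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            y[i] += A[i * n + j] * x[j];
    return y;
}

static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Solves A*x = b for a symmetric positive definite A, stopping early once the
// residual norm falls below tol (the "iterative method" use discussed above).
std::vector<double> conjugateGradient(const std::vector<double>& A,
                                      const std::vector<double>& b,
                                      std::size_t n,
                                      std::size_t maxIter = 1000,
                                      double tol = 1e-10) {
    std::vector<double> x(n, 0.0);   // initial guess x0 = 0
    std::vector<double> r = b;       // residual r0 = b - A*x0 = b
    std::vector<double> p = r;       // first search direction
    double rsOld = dot(r, r);
    for (std::size_t k = 0; k < maxIter && std::sqrt(rsOld) > tol; ++k) {
        std::vector<double> Ap = matvec(A, p, n);
        double alpha = rsOld / dot(p, Ap);        // step length along p
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rsNew = dot(r, r);
        for (std::size_t i = 0; i < n; ++i)       // next A-conjugate direction
            p[i] = r[i] + (rsNew / rsOld) * p[i];
        rsOld = rsNew;
    }
    return x;
}

Stopping on the residual norm rather than running all n steps is exactly what turns the method into the practical iterative solver described in the post.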
aminer68@gmail.com: Feb 29 01:28PM -0800

Hello..

More about my diploma, my education, and my way of doing things..

As you have noticed, I am a white Arab, and I have lived in Quebec, Canada, since 1989.

Now, if you ask me how I make money in order to live, you have to understand my way of doing things. I obtained my diploma in microelectronics and informatics in 1988; it is not a college-level diploma but a university-level diploma, comparable to an Associate degree or the French DEUG. Read here about the Associate degree:

https://en.wikipedia.org/wiki/Associate_degree

After obtaining my diploma, I also completed one year of pure mathematics at the university level, so I studied and passed three years at the university level.

After that I came to Canada in 1989, where I continued to study software development and network administration in Quebec, and then I worked as a network administrator for many years. Around 2001 and 2002 I started to implement some of my software, such as PerlZip, which was similar to PKZIP from the PKWARE software company but implemented for Perl; I implemented the dynamic link libraries of PerlZip that perform the compression and decompression with the Delphi compiler, so my PerlZip product was very fast and very efficient. In 2002 I posted the beta version on the internet; as a proof, please read about it here:

http://computer-programming-forum.com/52-perl-modules/ea157f4a229fc720.htm

After that I sold the release version of my PerlZip product to many companies and individuals around the world, including several banks in Europe, and with that I made more money.

I then worked as a software development consultant and a network administrator; the name of my company was and is CyberNT Communications. Here is my company in Quebec (Canada), called CyberNT Communications, where I worked as a software developer and as a network administrator; read the proof here:

https://opencorporates.com/companies/ca_qc/2246777231

Also read the following part of a somewhat old O'Reilly book called Perl for System Administration by David N. Blank-Edelman, and you will notice that it contains my name and speaks about some of my Perl modules:

https://www.oreilly.com/library/view/perl-for-system/1565926099/ch04s04.html

And here is one of my new software projects: my powerful Parallel Compression Library, which was updated to version 4.4. You can download it from:

https://sites.google.com/site/scalable68/parallel-compression-library

And read more about it below:

Author: Amine Moulay Ramdane

Description: The Parallel Compression Library implements parallel LZ4, parallel LZMA, and parallel Zstd algorithms using my Thread Pool Engine.

- It supports memory streams, file streams and files
- 64-bit support: it lets you create archive files over 4 GB, supports archives up to 2^63 bytes, and compresses and decompresses files up to 2^63 bytes
- Parallel compression and parallel decompression are extremely fast
- It now supports processor groups on Windows, so it can use more than 64 logical processors, and it scales well
- It is NUMA-aware and NUMA-efficient on Windows (it parallelizes the reads and writes on NUMA nodes)
- It efficiently minimizes contention, so it scales well
- It provides both compression and decompression rate indicators
- You can test the integrity of your compressed file or stream
- It is thread-safe, which means that its methods can be called from multiple threads
- Easy programming interface
- Full source code available

My Parallel Compression Library is now optimized for NUMA (it parallelizes the reads and writes on NUMA nodes), it supports processor groups on Windows, and it uses only two threads for the I/O (which do not contend with each other), so that contention is reduced as much as possible and the library scales well. The process of calculating the CRC is also much more optimized and fast, as is the process of testing the integrity.

I have done a quick calculation of the scalability prediction for my Parallel Compression Library, and I think it is good: it can scale beyond 100x on NUMA systems.

The dynamic link libraries for Windows and the dynamic shared libraries for Linux containing the compression and decompression algorithms of my Parallel Compression Library and of my Parallel Archiver were compiled from C with optimization level 2 enabled, so they are very fast.

Here are the parameters of the constructor:

The first parameter is the number of cores that you specify to run the compression algorithm in parallel.

The second parameter is a boolean parameter, processorgroups, that enables support for processor groups on Windows; if it is set to true it will let you scale beyond 64 logical processors and it will be NUMA-efficient.

Just look at the Easy Compression Library, for example; as you may have noticed, it is not a parallel compression library:

http://www.componentace.com/ecl_features.htm

And look at its pricing:

http://www.componentace.com/order/order_product.php?id=4

My Parallel Compression Library costs you $0, and it is a parallel compression library. (A small sketch of the general idea of parallel block compression follows below.)
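[Editor's illustration] As a rough sketch of the general technique of parallel block compression, here is a minimal C++ example that splits a buffer into independent blocks and compresses them concurrently using zlib. It is not the library described above, whose design differs (thread pool engine, NUMA-aware I/O, LZ4/LZMA/Zstd); the function names and parameters here are made up for the example, and a real archiver would also record each block's compressed size so the stream could be decompressed later.

#include <zlib.h>
#include <algorithm>
#include <cstddef>
#include <future>
#include <stdexcept>
#include <vector>

// Compresses one block with zlib and returns the compressed bytes.
std::vector<unsigned char> compressBlock(const unsigned char* src, std::size_t len) {
    uLongf destLen = compressBound(static_cast<uLong>(len));
    std::vector<unsigned char> out(destLen);
    if (compress2(out.data(), &destLen, src, static_cast<uLong>(len), Z_BEST_SPEED) != Z_OK)
        throw std::runtime_error("compression failed");
    out.resize(destLen);
    return out;
}

// Splits 'data' into roughly equal blocks and compresses them concurrently.
std::vector<std::vector<unsigned char>>
parallelCompress(const std::vector<unsigned char>& data, unsigned numWorkers) {
    if (numWorkers == 0) numWorkers = 1;
    std::size_t blockSize = (data.size() + numWorkers - 1) / numWorkers;
    std::vector<std::future<std::vector<unsigned char>>> tasks;
    for (std::size_t off = 0; off < data.size(); off += blockSize) {
        std::size_t len = std::min(blockSize, data.size() - off);
        // Each block is independent, so it can be compressed on its own thread.
        tasks.push_back(std::async(std::launch::async, compressBlock, data.data() + off, len));
    }
    std::vector<std::vector<unsigned char>> blocks;
    for (auto& t : tasks) blocks.push_back(t.get());
    return blocks;
}

Because the blocks are independent, the compression work scales with the number of workers, which is the same basic reason a parallel compression library can exploit many cores.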
Also, I am an inventor of many scalable algorithms; read my following thoughts to notice it.

Here is my other new invention. As you have noticed, I have just implemented my EasyList here:

https://sites.google.com/site/scalable68/easylist-for-delphi-and-freepascal

I have just enhanced its algorithm to be scalable in the Add() method and in the search methods, but that is not all: for that I will use my new invention of generally scalable counting networks, and its parallel sort algorithm will also become much more scalable, because I will use my other invention, my fully scalable threadpool, together with a fully scalable parallel merging algorithm. Read below about my new invention of generally scalable counting networks.

Here is my previous new invention of a scalable algorithm. I have read the following PhD paper about the invention called counting networks, which are better than software combining trees:

Counting Networks

http://people.csail.mit.edu/shanir/publications/AHS.pdf

And I have read the following PhD paper:

http://people.csail.mit.edu/shanir/publications/HLS.pdf

As you notice, they say in the conclusion that: "Software combining trees and counting networks which are the only techniques we observed to be truly scalable". But I have found that the counting networks algorithm is not generally scalable, and I have the logical proof of it; this is why I have come up with a new invention that enhances the counting networks algorithm to be generally scalable. I think I will sell my new algorithm of generally scalable counting networks to Microsoft, Google, Embarcadero, or similar software companies. So you have to be careful with the existing counting networks algorithm, which is not generally scalable.

My other new invention is my scalable reference counting, and here it is:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

And my other new invention is my scalable Fast Mutex, which is really powerful. Here it is:

About fair and unfair locking..

I have just read the following from a lead engineer at Amazon:

Highly contended and fair locking in Java

https://brooker.co.za/blog/2012/09/10/locking.html

As you notice, you can use unfair locking, which can suffer from starvation, or fair locking, which is slower than unfair locking. (A small sketch contrasting a fair ticket lock with an unfair test-and-set lock follows after this post.) I think that Microsoft synchronization objects such as the Windows critical section use unfair locking, so they can still exhibit starvation. But I do not think this is the right way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my Fast Mutex you are able to tune the fairness of the lock, and my Fast Mutex is capable of more than that; read about it in my following thoughts.

More about research and software development..

I have just watched the following new video:

Why is coding so hard...

https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like the techlead in the video above, because I am also an inventor who has invented many scalable algorithms and their implementations, and I also invent effective abstractions. I will give you an example. Read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks

https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have invented a new Fast Mutex that is adaptive and much better; I think mine is the best, and I think you will not find it anywhere else. My new Fast Mutex has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic very low
4- Very good fast-path performance
5- Good preemption tolerance

This is how I am an inventor, and I have also invented other scalable algorithms, such as a scalable reference counting with efficient support for weak references, a fully scalable threadpool, and a fully scalable FIFO queue, as well as other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft, Google, Embarcadero, or similar software companies.

Thank you,
Amine Moulay Ramdane. |
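[Editor's illustration] To make the fair/unfair locking trade-off discussed in the post above concrete, here is a minimal C++ sketch contrasting a fair ticket spinlock with an unfair test-and-set spinlock. It is not the Fast Mutex described above; it has neither tunable fairness nor preemption tolerance, and it only illustrates the two basic policies.

#include <atomic>
#include <thread>

// Fair: threads acquire the lock in FIFO order, so no thread starves,
// but every handoff causes cache-coherence traffic on 'nowServing'.
class TicketLock {
    std::atomic<unsigned> nextTicket{0};
    std::atomic<unsigned> nowServing{0};
public:
    void lock() {
        unsigned ticket = nextTicket.fetch_add(1, std::memory_order_relaxed);
        while (nowServing.load(std::memory_order_acquire) != ticket)
            std::this_thread::yield();   // wait for our turn
    }
    void unlock() {
        nowServing.fetch_add(1, std::memory_order_release);   // hand off to next ticket
    }
};

// Unfair: whichever thread wins the atomic exchange gets the lock; it has a
// very fast uncontended path but can starve unlucky threads under contention.
class TasLock {
    std::atomic<bool> held{false};
public:
    void lock() {
        while (held.exchange(true, std::memory_order_acquire))
            while (held.load(std::memory_order_relaxed))
                std::this_thread::yield();   // spin on a plain load to limit traffic
    }
    void unlock() {
        held.store(false, std::memory_order_release);
    }
};

Under heavy contention the ticket lock guarantees FIFO ordering (no starvation) at the cost of a cache-line handoff per acquisition, while the test-and-set lock has a faster uncontended path but no ordering guarantee; a tunable-fairness design sits between these two extremes.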
aminer68@gmail.com: Feb 29 12:04PM -0800

Hello,

How to beat Moore's Law?

I think that with the following discovery graphene can finally be used in CPUs, and it is a scale-out method; read about the discovery and you will notice it:

New Graphene Discovery Could Finally Punch the Gas Pedal, Drive Faster CPUs

Read more here: https://www.extremetech.com/computing/267695-new-graphene-discovery-could-finally-punch-the-gas-pedal-drive-faster-cpus

The scale-out method above with graphene is very interesting, and here is the other method, scaling up with multicores and parallelism:

Beating Moore's Law: Scaling Performance for Another Half-Century

Read more here: https://www.infoworld.com/article/3287025/beating-moore-s-law-scaling-performance-for-another-half-century.html

Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Feb 29 11:36AM -0800

Hello,

96% of jobs, those that do not call for human ingenuity, are destined to disappear due to AI, according to Garry Kasparov.

Kasparov wants to be clear: artificial intelligence is a tool that will dominate in closed systems designed by humans. The former chess champion also put the issue of universal basic income on the table, to help those left unemployed by the introduction of artificial intelligence.

While everyone agrees that AI and automation will cut many jobs, the disagreement lies in the impact on the overall level of employment. In other words, could the number of jobs created in other sectors compensate for the job losses in the sectors affected by automation? Sam Altman (co-founder of OpenAI) answered the question at the New York Times New Work Summit in the middle of last year: artificial intelligence will probably replace most of today's jobs, but it should pave the way for more personalized jobs and a massive increase in "material abundance" that could increase world GDP by 50% per year for a few decades.

Read more here: https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F295132%2F96-pourcent-des-emplois-ceux-qui-ne-sollicitent-pas-l-ingeniosite-humaine-sont-appeles-a-disparaitre-du-fait-de-l-IA-d-apres-Gary-Kasparov-le-champion-du-jeu-d-echecs-vaincu-par-Deep-Blue-d-IBM-en-1997%2F

Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Feb 29 11:02AM -0800

Hello,

A Promising Antiviral Is Being Tested for the Coronavirus—but Results Are Not Yet Out

The drug remdesivir is effective against many other viruses, and some experts are optimistic that it — or similar compounds — may work for the pathogen responsible for COVID-19.

Read more here on Scientific American:

https://www.scientificamerican.com/article/a-promising-antiviral-is-being-tested-for-the-coronavirus-but-results-are-not-yet-out/

Thank you,
Amine Moulay Ramdane. |
aminer68@gmail.com: Feb 29 09:20AM -0800

Hello,

More about discrete-event simulation (DES) and more..

I am not only a serious software developer specialized in parallel programming and synchronization algorithms; I have also studied operational research and more mathematics. So here is an interesting free simulation framework for Delphi and FreePascal called OpenSIMPLY:

https://opensimply.org/

I have also implemented an M/M/n queuing model simulation in Object Pascal; here it is (a small illustrative sketch of such a simulation also follows after this post):

https://sites.google.com/site/scalable68/m-m-n-queuing-model-simulation-with-object-pascal

I have also written a tutorial on how to solve a Jackson network problem with mathematical modeling; you can download it from my website:

https://sites.google.com/site/scalable68/jackson-network-problem

And I have implemented a Maxflow algorithm for Delphi and FreePascal; here it is:

https://sites.google.com/site/scalable68/maxflow-algorithm-for-delphi-and-freepascal

Thank you,
Amine Moulay Ramdane. |
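[Editor's illustration] For readers unfamiliar with discrete-event simulation, here is a minimal C++ sketch of an M/M/n queue (Poisson arrivals, exponential service times, n identical servers) driven by an event list. It is not the Object Pascal simulation linked above; the rates, seed, and variable names are made up for the example.

#include <functional>
#include <iostream>
#include <queue>
#include <random>
#include <utility>
#include <vector>

int main() {
    const double lambda = 0.9;        // arrival rate
    const double mu     = 0.5;        // service rate per server
    const int    n      = 2;          // number of servers
    const int    numCustomers = 100000;

    std::mt19937 rng(42);
    std::exponential_distribution<double> interarrival(lambda), service(mu);

    int busy = 0;                     // servers currently busy
    std::queue<double> waiting;       // arrival times of queued customers
    double totalWait = 0.0;

    // Event list ordered by time: type 0 = arrival, type 1 = departure.
    using Event = std::pair<double, int>;
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> events;

    double t = 0.0;
    events.push({interarrival(rng), 0});      // schedule the first arrival
    int arrived = 0, served = 0;

    while (served < numCustomers && !events.empty()) {
        auto [time, type] = events.top();
        events.pop();
        t = time;
        if (type == 0) {                      // arrival event
            if (++arrived < numCustomers)
                events.push({t + interarrival(rng), 0});
            if (busy < n) {                   // a server is free: start service now
                ++busy;
                events.push({t + service(rng), 1});
            } else {
                waiting.push(t);              // all servers busy: join the queue
            }
        } else {                              // departure event
            ++served;
            if (!waiting.empty()) {           // next queued customer starts service
                totalWait += t - waiting.front();
                waiting.pop();
                events.push({t + service(rng), 1});
            } else {
                --busy;
            }
        }
    }
    std::cout << "average waiting time: " << totalWait / served << "\n";
}

With lambda = 0.9, mu = 0.5, and n = 2, the offered load is lambda/(n*mu) = 0.9, so the queue is stable, and the printed average waiting time can be checked against the Erlang-C formula for the M/M/n queue.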
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com. |