Saturday, June 17, 2017

Digest for comp.programming.threads@googlegroups.com - 5 updates in 5 topics

rami18 <coco@coco.com>: Jun 16 10:40PM -0400

Hello,
 
Read again my final, corrected post; I have fixed some typos because I
write fast.
 
About scalability: this is an interesting subject.
 
To become more proficient at scalability, more precisely to make my
algorithms scalable on NUMA and multicore systems, I have invented new
algorithms. Look at the new algorithms in my Scalable Parallel C++
Conjugate Gradient Linear System Solver Library here:
 
https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
I invented these new algorithms to be cache-aware and scalable on NUMA
and multicore systems, but what I want you to understand is that by
doing so, it has become easy for me to understand Deep Learning
Artificial Intelligence: at the very heart of Deep Learning you have to
minimize an error function, and this results in matrix calculations that
are the most expensive part and that you can parallelize. This is why
Deep Learning is scalable, and why we use GPUs to scale Deep Learning
software. It became easy for me to understand because my Scalable
Parallel C++ Conjugate Gradient Linear System Solver Library above
performs the same kind of expensive, scalable matrix calculations, so
this has enhanced my knowledge, and now I am also capable of scaling
Deep Learning Artificial Intelligence.
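For readers who want to see what the expensive part looks like, here is
the textbook serial conjugate gradient iteration for a dense symmetric
positive-definite system; this is the standard algorithm, not the
library's actual code, and the hot spots a parallel solver distributes
across cores are the matrix-vector product and the dot products:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Textbook conjugate gradient for a dense SPD system A*x = b.
// The matvec and the dot products dominate the cost and are the
// parts worth parallelizing.
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

static Vec matvec(const Mat& A, const Vec& x) {
    Vec y(x.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

Vec conjugate_gradient(const Mat& A, const Vec& b, double tol = 1e-10) {
    Vec x(b.size(), 0.0);            // initial guess x0 = 0
    Vec r = b;                       // residual r = b - A*x0 = b
    Vec p = r;                       // first search direction
    double rs = dot(r, r);
    for (std::size_t k = 0; k < 10 * b.size() && std::sqrt(rs) > tol; ++k) {
        Vec Ap = matvec(A, p);
        double alpha = rs / dot(p, Ap);   // step length along p
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rs_new = dot(r, r);
        double beta = rs_new / rs;        // direction update factor
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];
        rs = rs_new;
    }
    return x;
}
```

In exact arithmetic the loop converges in at most n iterations for an
n-by-n system, which is why even this serial version is efficient; the
parallel versions keep the same structure and split the loops.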
 
 
 
 
Thank you,
Amine Moulay Ramdane.
rami18 <coco@coco.com>: Jun 16 07:26PM -0400

Hello,
 
The essence of software programming..
 
As you know, I am a white Arab and a software programmer. I have also
done operational research, sold my software, worked as a software
consultant and as a software programmer, and I have learned network
administration and worked as a network administrator at many companies.
 
Now I was asking myself a question:
 
What is software programming?
 
To answer this question, take a look at my tutorial on Petri Nets, in
which I have written about some of my new techniques; you can download
and read the tutorial from here:
 
https://sites.google.com/site/aminer68/how-to-analyse-parallel-applications-with-petri-nets
 
Doing smart Petri Net model checking is also like doing operational
research. So, as you have noticed, to do software programming you have
to be capable of something like operational research: software
programming cannot be limited to programming alone. It has to solve
scheduling problems with linear programming, it has to solve the
shortest-path problem with graph theory or with linear programming, and
it has to solve problems from queueing theory, and so on. This is why I
have done operational research: to enhance my professionalism in
software programming.
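As an illustration of the graph-theory side, here is a standard Dijkstra
shortest-path sketch; this is the textbook algorithm, shown only to make
the point concrete (the same problem can also be posed as a linear
program):

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's algorithm on a weighted directed graph with
// non-negative edge weights: returns the shortest distance
// from `source` to every node.
using Edge = std::pair<int, double>;           // (target, weight)
using Graph = std::vector<std::vector<Edge>>;

std::vector<double> dijkstra(const Graph& g, int source) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);
    using Item = std::pair<double, int>;       // (distance, node)
    // min-heap ordered by tentative distance
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[source] = 0.0;
    pq.push({0.0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;             // stale heap entry
        for (auto [v, w] : g[u])
            if (dist[u] + w < dist[v]) {       // relax edge u -> v
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```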
 
What else do I want to say?
 
You might say that a threadpool is just a threadpool, but a threadpool
can be sophisticated, like my more scalable threadpool: if you are
using one producer and many worker threads, my threadpool is scalable.
Read about it in the following explanation:
 
Please read this:
 
"Just as with normal thread pool usage, the main program thread may
create Tasks that will get queued on the global queue (e.g. Task1 and
Task2) and threads will grab those Tasks typically in a FIFO manner.
Where things diverge is that any new Tasks (e.g. Task3) created in the
context of the executing Task (e.g. Task2) end up on a local queue for
that thread pool thread."
 
Read more here:
 
http://www.danielmoth.com/Blog/New-And-Improved-CLR-4-Thread-Pool-Engine.aspx
 
You will notice that on the Microsoft CLR threadpool engine there is
contention on the global queue from both the producer threads and the
consumer threads, and that is not good.
 
But please look at the source code of my threadpool engine that scales
well: it eliminates the contention on the consumer side by using
techniques such as lock striping, among others.
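Lock striping itself is a standard technique: instead of one lock
guarding one shared queue, the jobs are spread over several sub-queues,
each with its own lock, so threads rarely contend on the same lock. A
minimal sketch of the idea (a generic illustration, not the engine's
actual source):

```cpp
#include <atomic>
#include <cstddef>
#include <deque>
#include <mutex>
#include <optional>
#include <vector>

// Lock-striped job queue: jobs are distributed round-robin over N
// sub-queues, each guarded by its own mutex, so producers and
// consumers spread their lock traffic over N locks instead of one.
template <typename Job>
class StripedQueue {
    struct Stripe {
        std::mutex m;
        std::deque<Job> q;
    };
    std::vector<Stripe> stripes_;
    std::atomic<std::size_t> push_idx_{0};
public:
    explicit StripedQueue(std::size_t n) : stripes_(n) {}

    void push(Job j) {
        std::size_t i = push_idx_++ % stripes_.size();
        std::lock_guard<std::mutex> lk(stripes_[i].m);
        stripes_[i].q.push_back(std::move(j));
    }

    // Try each stripe in turn, starting from a caller-chosen hint so
    // that different workers start at different stripes; scanning the
    // other stripes when your "own" one is empty is a simple form of
    // work stealing.
    std::optional<Job> try_pop(std::size_t hint) {
        for (std::size_t k = 0; k < stripes_.size(); ++k) {
            Stripe& s = stripes_[(hint + k) % stripes_.size()];
            std::lock_guard<std::mutex> lk(s.m);
            if (!s.q.empty()) {
                Job j = std::move(s.q.front());
                s.q.pop_front();
                return j;
            }
        }
        return std::nullopt;
    }
};
```

A worker would typically pass its own index as the hint, so each worker
drains a different stripe first and contention stays low.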
 
And my efficient threadpool that scales well supports the following:
 
- It can use processor groups on Windows, so it can use more than 64
logical processors, and it scales well.
 
- The worker threads enter a wait state when there is no job in the
concurrent FIFO queues, for more efficiency.
 
- You can distribute your jobs to the worker threads and call any method
with the threadpool's execute() method.
 
- It uses work stealing to be more efficient.
 
- You can configure it to use stacks or FIFO queues; when you use
stacks it is cache-efficient.
 
- It distributes the jobs across multiple FIFO queues or stacks so that
it scales well.
 
- You can wait for the jobs to finish with the wait() method.
 
- It is NUMA-aware and NUMA-efficient.
 
You can download it from:
 
https://sites.google.com/site/aminer68/an-efficient-threadpool-engine-that-scales-well
 
And you can download my efficient Threadpool engine with priorities that
scales well from:
 
https://sites.google.com/site/aminer68/an-efficient-threadpool-engine-with-priorities-that-scales-well
 
 
Also:
 
 
Efficient morality is optimization, and optimization is performance and
reliability, and reliability includes security.
 
Why go for optimization?
 
Shorter horizons. Planning horizons are now typically around three
months, thanks to the reporting period required by the gnomes of Wall
Street. Only efficient optimization is compatible with that kind of
requirement.
 
Tactical planning. It's already a jungle out there when it comes to
developing or installing large-scale distributed applications. That,
combined with shorter development times and launch horizons, requires
efficient optimization.
 
Capital efficiency. Efficient optimization is not just about the future.
Efficient optimization addresses the serious need to squeeze more out of
your current capital equipment.
 
Better tools. To solve performance problems in datacenters involving
1000s of servers spanning multiple tiers, we need tools that go beyond
simple reporting and enable better data discovery.
 
And here are some of the useful software tools that I have created:
 
My Universal Scalability Law program has been updated to version 3.12.
 
Author: Amine Moulay Ramdane
 
It now compiles correctly on LLVM-based Delphi compilers.
 
Where do you use it?
 
You use it, for example, to further optimize the cost/performance ratio
on multicore and manycore systems.
 
The -nlr option means that the problem will be solved with mathematical
nonlinear regression, using the simplex method for minimization; if you
don't specify -nlr, the problem is solved by default with mathematical
polynomial regression. Since it uses regression, you can, for example,
characterize your system on many more cores from just a few measured
points, and from those points the regression searches for the
cost/performance ratio that is optimal for you.
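For context, the Universal Scalability Law is Gunther's model
C(N) = N / (1 + α(N−1) + βN(N−1)), where α is the contention
(serialization) penalty and β the coherency (crosstalk) penalty; the
regression fits α and β to the measured points. A minimal sketch of
evaluating it (the coefficient values in the checks are illustrative,
not measurements):

```cpp
#include <cmath>

// Gunther's Universal Scalability Law: relative capacity C(N) for
// N processors. alpha models contention (serialization), beta models
// coherency (crosstalk). Once fitted, C(N) extrapolates to core
// counts you have not measured.
double usl_capacity(double n, double alpha, double beta) {
    return n / (1.0 + alpha * (n - 1.0) + beta * n * (n - 1.0));
}

// Capacity peaks near N* = sqrt((1 - alpha) / beta); beyond that,
// adding cores reduces throughput, which is where the
// cost/performance trade-off lives.
double usl_peak(double alpha, double beta) {
    return std::sqrt((1.0 - alpha) / beta);
}
```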
 
Please read more about my Universal Scalability Law program for Delphi
and FreePascal; it comes with both a graphical and a command-line
program.
 
I have included 32-bit and 64-bit Windows executables, called usl.exe
and usl_graph.exe, inside the zip; please read the readme file to learn
how to use them. It is a very powerful tool.
 
You can read about it and download the new version 3.12 from here:
 
https://sites.google.com/site/aminer68/universal-scalability-law-for-delphi-and-freepascal
 
And:
 
You have to appreciate my invention, my C++ synchronization objects
library..
 
Here is why:
 
Here is the problem with optimistic transactional memory:
 
If there are many conflicts between reads and writes, you have to roll
back, serialize, and so on, and this is less energy-efficient and slower
than pessimistic locking mechanisms.
 
So I think that my C++ synchronization objects library and my Delphi
synchronization objects are still really useful.
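The rollback cost is easy to see with a compare-and-swap retry loop,
used here as a crude stand-in for transactional retry; this is a generic
illustration, not code from the library:

```cpp
#include <atomic>

// Optimistic update: read, compute, then try to publish with a CAS.
// On conflict the computed value is thrown away and redone (a
// rollback): that discarded work is the cost the text points at.
// A pessimistic lock would instead make each thread wait its turn
// and do the work exactly once. Returns the number of attempts,
// i.e. 1 + the number of rollbacks.
int optimistic_increment(std::atomic<long>& x) {
    int attempts = 0;
    long seen = x.load();
    for (;;) {
        ++attempts;
        long next = seen + 1;                 // the "transaction" body
        // compare_exchange_weak reloads `seen` on failure
        if (x.compare_exchange_weak(seen, next)) return attempts;
        // conflict: another thread changed x; roll back and retry
    }
}
```

Under heavy write contention the attempt count grows, and each extra
attempt is wasted computation and wasted energy.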
 
You can download it from:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
And:
 
Scalable Parallel C++ Conjugate Gradient Linear System Solver
Library version 1.64
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a scalable parallel implementation of a
Conjugate Gradient dense linear system solver that is NUMA-aware
and cache-aware, and it also contains a scalable parallel
implementation of a Conjugate Gradient sparse linear system
solver that is cache-aware.
 
Please download the zip file and read the readme file inside the
zip to know how to use it.
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
 
You can download it from:
 
https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
You can download my other projects from:
 
https://sites.google.com/site/aminer68/
 
 
And here are also some of my other thoughts about quality..
 
Production must also take into account norms such as:
 
- Ensuring quality
- Easing maintenance
- Raising productivity
- Easing documentation
 
These norms must be fulfilled, and the criteria of technical skill and
methodology are important in this regard.
 
To ensure quality, audits and tests are required, and the quality
factors must be met, factors such as:
 
1- Correctness
2- Reliability
3- Efficiency
4- Integrity
5- Usability
6- Maintainability
7- Testability
8- Flexibility
9- Reusability
10- Interoperability
11- Portability
 
And you have to know that ease of use is positive for manageability,
but it can be detrimental to security, so you have to know how to
balance the two.
 
And to further enhance correctness, reliability, and maintainability, I
said this:
 
How do you also improve the quality and timeliness of software?
 
You have to abstract the graph that offers deeper insight into the
nature of the objects and the system.
 
And this has to be done smartly..
 
The abstract graph of the system must reveal the following:
 
1- It has to reveal the hierarchy of roles and the breadth of those
roles.
 
2- It has to offer deeper insight into the nature of the objects; the
abstract graph must easily reveal the why-what-how triads.
 
3- The abstract graph must be clear, and it has to clearly, efficiently,
and plainly communicate what the roles do.
 
4- It has to have high unity; a role has high unity if all the actions
described in the role serve one purpose.
 
5- You have to have good coupling between objects and methods. Coupling
is the visibility and degree to which two objects or methods communicate
with each other: the degree of interdependence between software modules,
a measure of how closely connected two routines or modules are. Coupling
must be low; low coupling is a sign of a well-structured software
system.
 
And about efficiency, the four major components of efficiency in
software engineering are:
 
1- User efficiency:
 
The amount of time and effort users will spend to learn how to use
the program, how to prepare the data, and how to interpret and use the
output.
 
2- Maintenance Efficiency:
 
The amount of time and effort maintenance programmers will spend
reading a program and its accompanying technical documentation
in order to understand it well enough to make any necessary
modifications.
 
3- Algorithmic complexity:
 
The inherent efficiency of the method itself, regardless of which
machine we run it on or how we code it.
 
4- Coding efficiency:
 
This is the traditional efficiency measure. Here we are concerned
with how much processor time and memory space a computer program
requires to produce a correct answer.
 
Twenty years ago, the most expensive aspect of programming was computer
costs; consequently, we tended to "optimize for the machine." Today the
most expensive aspect of programming is programmer costs, because today
programmers cost more money than hardware.
 
Computer programs should be written with these goals in mind:
 
1- To be correct and reliable
 
2- To be easy to use for its intended end-user population
 
3- To be easy to understand and easy to change.
 
Here are, among other things, the key aspects of end-user efficiency:
 
1- Program robustness
2- Program generality
3- Portability
4- Input/output behavior
5- User documentation
 
Here are the key points in achieving maintenance efficiency:
 
1- A clear, readable programming style
2- Adherence to structured programming
3- A well-designed, functionally modular solution
4- A thoroughly tested and verified program with built-in debugging
and testing aids
5- Good technical documentation
 
You have to know that I have used a top-down methodology to design my
projects. The top-down methodology begins with the overall goals of the
program (what we wish to achieve, rather than how), and after that it
gets into more detail and how to implement it.
 
And in my objects and modules I have taken care of the following
characteristics:
 
- Logical coherence
 
- Independence:
 
This is like writing more pure functions, as in functional programming,
to avoid side effects and to ease the maintenance and testing steps.
 
- Object-oriented design and coding
 
- And also structured design and coding, with sequences, iterations, and
conditionals.
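The independence point can be made concrete with a small example: a pure
function reads and writes nothing outside its arguments, so it is easy
to test in isolation and safe to call from any thread (a generic
illustration, not code from my projects):

```cpp
#include <numeric>
#include <vector>

// Pure function: the result depends only on the argument, and
// nothing outside the function is read or written, so it has no
// side effects, is trivially testable, and is thread-safe.
double mean(const std::vector<double>& xs) {
    if (xs.empty()) return 0.0;
    return std::accumulate(xs.begin(), xs.end(), 0.0)
           / static_cast<double>(xs.size());
}
```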
 
 
 
Thank you,
Amine Moulay Ramdane.
rami18 <coco@coco.com>: Jun 16 04:57PM -0400

Hello,
 
 
How to build software for a computer 50 times faster than anything in
the world
 
Read more here:
 
https://www.sciencedaily.com/releases/2017/06/170615133233.htm
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
