Saturday, December 17, 2016

Digest for comp.lang.c++@googlegroups.com - 6 updates in 4 topics

Jorgen Grahn <grahn+nntp@snipabacken.se>: Dec 17 12:26PM

On Thu, 2016-12-15, Scott Lurndal wrote:
>>> MMU is not a requirement for malloc/new.
 
>>As I recall the original Amiga used an MC68000 processor, with a 24-bit
>>address bus and MMU?
 
24-bit bus yes, but no MMU.
 
>>The 68000 was very nice.
 
Indeed.
 
> IIRC, 68030 was the first 68k with an integrated MMU. Even the
> 88100 needed an 88200 for the MMU functionality.
 
> The Amiga 1000 (sold mine recently) had a 7 MHz 68000.
 
Yes, and even the later Amigas with 68030 or 68040 CPUs didn't use the
MMU. All processes used one common, flat, shared memory space.
 
(There may have been Amiga-like or -branded computers after Commodore
folded in 1994. I don't know about them.)
 
As a programmer you learned to clean up your resources. As a user,
you learned not to trust the system after one process had crashed.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
David Brown <david.brown@hesbynett.no>: Dec 17 11:56PM +0100

On 15/12/16 19:46, bitrex wrote:
 
> Thanks. Often on small uPs simply disabling interrupts (assuming that a
> timer interrupt is where the scheduler "tick" is coming from) and
> re-enabling after read/write is enough to ensure the operation is atomic.
 
Yes, that could be an efficient implementation of the operations on
std::atomic<int> on such devices.
Tim Rentsch <txr@alumni.caltech.edu>: Dec 16 11:00PM -0800

>> for linked lists.
 
> Due to poor pivot choice worst case performance will manifest more
> often and that is quadratic complexity.
 
There is no reason that the choice of a pivot value has to be any
worse in a linked list quicksort than an array quicksort. In
particular, the entire linked list can be scanned, any constant
number of times, looking for a pivot value, without changing the
order of the algorithm.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Dec 17 08:17PM

On 17/12/2016 07:00, Tim Rentsch wrote:
> particular, the entire linked list can be scanned, any constant
> number of times, looking for a pivot value, without changing the
> order of the algorithm.
 
Nah.
 
/Flibble
Ramine <toto@toto.net>: Dec 16 04:02PM -0800

Hello.........
 
As you know, I am an Arab..
 
And I am also an inventor...
 
I have invented many algorithms; here they are:
 
1- My SemaMonitor
2- My SemaCondvar
3- and my MLock
4- and my many variants of scalable RWLocks
5- My Scalable SeqlockX: a variant of Seqlock that eliminates livelock.
6- My scalable Parallel Conjugate Gradient linear system
solver library
7- Also my Parallel archiver can be considered an invention, read
about it on my site..
8- My Parallel Varfiler
9- My StringTree
 
I have grouped many of my inventions in my C++ synchronization objects
library; you can download it from here:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
I have also grouped some of my inventions in my Scalable Parallel
C++ Conjugate Gradient Linear System Solver Library; you can download it
from here:
 
https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
And you can download all my projects with the source code from here:
 
https://sites.google.com/site/aminer68/
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <toto@toto.net>: Dec 16 03:38PM -0800

Hello..........
 
 
Yet more about my projects...
 
The four major components of efficiency are:
 
1- User efficiency:
 
The amount of time and effort users will spend to learn how to use
the program, how to prepare the data, and how to interpret and use the
output.
 
2- Maintenance Efficiency:
 
The amount of time and effort maintenance programmers will spend
reading a program and its accompanying technical documentation
in order to understand it well enough to make any necessary
modifications.
 
3- Algorithmic complexity:
 
The inherent efficiency of the method itself, regardless of which
machine we run it on or how we code it.
 
4- Coding efficiency:
 
This is the traditional efficiency measure. Here we are concerned
with how much processor time and memory space a computer program
requires to produce a correct answer.
 
Twenty years ago, the most expensive aspect of programming was computer
cost, so we tended to "optimize for the machine." Today, the most
expensive aspect of programming is programmer cost, because programmers
now cost more than the hardware they use.
 
Computer programs should be written with these goals in mind:
 
1- To be correct and reliable
 
2- To be easy to use for its intended end-user population
 
3- To be easy to understand and easy to change.
 
Here are, among other things, the key aspects of end-user efficiency:
 
1- Program robustness
2- Program generality
3- Portability
4- Input/Output behavior
5- User documentation.
 
Here are the key points in achieving maintenance efficiency:
 
1- A clear, readable programming style
2- Adherence to structured programming.
3- A well-designed, functionally modular solution
4- A thoroughly tested and verified program with built-in debugging
and testing aids
5- Good technical documentation.
 
You have to know that I have used a Top-Down methodology to design my
projects.. the Top-Down methodology begins with the overall goals of the
program - what we wish to achieve rather than how - and only then moves
on to the details and how to implement them.
 
And I have taken care that my objects and modules have the following
characteristics:
 
- Logical coherence
 
- Independence:
 
It is like the pure functions of functional programming: avoiding
side effects to ease the maintenance and testing steps.
 
- Object oriented design and coding
 
- and also structured design and coding with sequence, iteration and
conditionals.
 
And about the testing phase read the following:
 
Alexandre Machado wrote:
 
>- You don't have both, unit and performance tests
>Have you ever considered this? I'm sure that it would make
>make it easier for other Delphi devs to start using it, no?
 
You have to know that I have also used the following method of testing,
called black-box testing:
 
https://en.wikipedia.org/wiki/Black-box_testing
 
This is why I have written this:
 
I have thoroughly tested and stabilized my parallel archiver over many
years, and I now think that it is stable and efficient, so I think that
you can be more confident with it.
 
This is also true for all my other projects; I have followed black-box
testing with them as well...
 
As for race conditions, I think that for a programmer experienced in
parallel programming like me, avoiding them is not such a difficult
task.
 
For sequential consistency I have also written this:
 
I have implemented my inventions with the FreePascal and Delphi
compilers, which do not reorder loads and stores even with compiler
optimization enabled; this is less error prone than C++, which follows
a relaxed memory model when compiled with optimization. So I have
finally compiled my algorithm implementations with FreePascal into
dynamic link libraries that are used from C++ in the form of my C++
Object Synchronization Library.
 
So it is much easier to achieve correct sequential consistency with
Delphi and FreePascal, because they are less error prone.
 
Other than that, you have to know that I am also an experienced
programmer in parallel programming, so I think that my projects are
stable and fast.
 
You can download all my projects from:
 
https://sites.google.com/site/aminer68/
 
 
 
Thank you,
Amine Moulay Ramdane..
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
