Friday, August 26, 2016

Digest for comp.lang.c++@googlegroups.com - 25 updates in 11 topics

Jerry Stuckle <jstucklex@attglobal.net>: Aug 25 07:37PM -0400

On 8/25/2016 5:07 PM, Mr Flibble wrote:
> to use exceptions.
 
> So it seems your fractal wrongness happily continues unabated.
 
> /Flibble
 
Yes, I know why they prohibit it. But if you could READ, you would see
how exceptions are being misused - like you do. And the line I quoted
is a very good example of misuse of exceptions.
 
The pros and cons are not specific to Google. They are general guides
followed by GOOD C++ programmers - which you are not.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Lynn McGuire <lynnmcguire5@gmail.com>: Aug 25 08:00PM -0500

On 8/24/2016 11:30 AM, Richard wrote:
> business analysts and product owners to create acceptance criteria in
> a way that not only made sense to them but also became an executable
> unit that could directly verify the functionality.
 
Thanks for the reference of FitNesse.
 
We invoke all phases of our testing from the command line and use /COMMANDS for directions. For the first three phases, we
perform a detailed analysis of the output files, all 1,100+ of them, using a custom-built C++ tool.
 
The fourth phase of the testing is just an endurance test using 13,000+ input files to make sure that we do not crash. No analysis
is done of the results.
 
So, adding more command line options is a natural for us.
 
Thanks,
Lynn
Lynn McGuire <lynnmcguire5@gmail.com>: Aug 25 08:04PM -0500

On 8/22/2016 2:15 AM, Richard wrote:
> testing. This is the so-called "Humble Dialog". Put the automated
> testing to work on the underlying business logic that produces results
> shown by the UI.
 
BTW, we do use a CAD front end that looks kinda like Visio. But this is our own, which we have been writing since 1987. Our calculation
engine dates back to the early 1960s.
https://www.winsim.com/screenshots.html (man, I need to update these !)
 
Thanks,
Lynn
Gareth Owen <gwowen@gmail.com>: Aug 26 03:03PM +0100


>>Evidence?
 
> Every measured comparison I've seen says they come out about the
> same.
 
Ditto. Now, how many of our anecdotes do we need to make some data?
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Aug 26 04:53PM +0100

On 26/08/2016 00:37, Jerry Stuckle wrote:
 
> Yes, I know why they prohibit it. But if you could READ, you would see
> how exceptions are being misused - like you do. And the line I quoted
> is a very good example of misuse of exceptions.
 
Again your fractal wrongness manifests. The line you quoted about not
throwing an exception for invalid user input is simply wrong: invalid
user input is an error condition which typically causes some modal
process to be cancelled and is handled perfectly using exceptions.
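 
A minimal illustration of that point (my own sketch, not code from
either poster): the validation throws on bad input, and the catch at
the level that started the modal operation simply cancels it.
 
#include <iostream>
#include <stdexcept>
#include <string>
 
// Hypothetical validation helper for a value typed by the user.
int parse_port(const std::string& text)
{
    std::size_t pos = 0;
    int port = std::stoi(text, &pos);   // itself throws on non-numeric input
    if (pos != text.size() || port < 1 || port > 65535)
        throw std::invalid_argument("port must be a number between 1 and 65535");
    return port;
}
 
// The "modal process": it is simply cancelled when the input is rejected.
void run_connect_dialog(const std::string& user_text)
{
    try {
        std::cout << "connecting to port " << parse_port(user_text) << "\n";
    } catch (const std::exception& e) {
        std::cout << "dialog cancelled: " << e.what() << "\n";
    }
}
 
int main()
{
    run_connect_dialog("8080");     // succeeds
    run_connect_dialog("eighty");   // cancelled via the exception path
}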
 
 
> The pros and cons are not specific to Google. They are general guides
> followed by GOOD C++ programmers - which you are not.
 
Nope. The Google style guide is a guide for contributing to Google open
source projects, not a guide for writing good-quality modern C++.
 
/Flibble
legalize+jeeves@mail.xmission.com (Richard): Aug 26 04:52PM

[Please do not mail me a copy of your followup]
 
Lynn McGuire <lynnmcguire5@gmail.com> spake the secret code
>is ours that we have been writing since 1987. Our calculation
>engine dates back to the early 1960s.
> https://www.winsim.com/screenshots.html (man, I need to update these !)
 
Yum, chemical engineering. I love it!
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
legalize+jeeves@mail.xmission.com (Richard): Aug 26 04:56PM

[Please do not mail me a copy of your followup]
 
Lynn McGuire <lynnmcguire5@gmail.com> spake the secret code
>input files to make sure that we do not crash. No analysis
>is done of the results.
 
>So, adding more command line options is a natural for us.
 
Yeah, you've got enough infrastructure already that adding new
command-line options is a small incremental cost.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Lynn McGuire <lynnmcguire5@gmail.com>: Aug 26 01:21PM -0500

On 8/26/2016 11:56 AM, Richard wrote:
 
>> So, adding more command line options is a natural for us.
 
> Yeah, you've got enough infrastructure already that adding new
> command-line options is a small incremental cost.
 
One of my programmers is trying to add a new test mode that rolls through all of the dialogs connected to the symbols in a particular
drawing. That should be the test that smokes out a few problems in our new sparse data item structure. We just want to instantiate
each dialog and then click OK. That should be interesting.
 
Thanks,
Lynn
Ramine <ramine@1.1>: Aug 26 02:15PM -0400

Hello........
 
 
C++ synchronization objects library
 
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains 9 synchronization objects:
 
1. My scalable SeqlockX, a variant of Seqlock that eliminates the
weakness of Seqlock, namely the "livelock" of the readers when there
are more writers.
 
2. My scalable MLock, which is a scalable lock.
 
3. My SemaMonitor, which combines all the characteristics of a
semaphore, an eventcount, a Windows manual-reset event and a Windows
auto-reset event.
 
4. My scalable DRWLock, a scalable reader-writer lock that is
starvation-free and spin-waits.
 
5. My scalable DRWLockX, a scalable reader-writer lock that is
starvation-free and does not spin-wait, but waits on Event objects and
my SemaMonitor, so it is energy efficient.
 
6. My scalable asymmetric DRWLock, which doesn't use any atomic
operations and/or StoreLoad style memory barriers on the reader side,
so it looks like RCU, and it is fast. This scalable Asymmetric
Distributed Reader-Writer Mutex is FIFO fair on the writer side and
FIFO fair on the reader side, it is of course starvation-free, and it
spin-waits.
 
7. My scalable asymmetric DRWLockX, which doesn't use any atomic
operations and/or StoreLoad style memory barriers on the reader side,
so it looks like RCU, and it is fast. This scalable Asymmetric
Distributed Reader-Writer Mutex is FIFO fair on the writer side and
FIFO fair on the reader side, it is of course starvation-free, and it
does not spin-wait, but waits on Event objects and my SemaMonitor, so
it is energy efficient.
 
8. My LW_Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer
Mutex that uses a technique that looks like Seqlock without looping on
the reader side like Seqlock, which has permitted the reader side to
be costless. It is FIFO fair on the writer side and FIFO fair on the
reader side, it is of course starvation-free, and it spin-waits.
 
9. My Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer
Mutex that uses a technique that looks like Seqlock without looping on
the reader side like Seqlock, which has permitted the reader side to
be costless. It is FIFO fair on the writer side and FIFO fair on the
reader side, it is of course starvation-free, and it does not
spin-wait, but waits on my SemaMonitor, so it is energy efficient.
 
 
You can download my library from:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
My scalable Asymmetric Reader-Writer Mutex calls the Windows
FlushProcessWriteBuffers() just one time, but my scalable Asymmetric
Distributed Reader-Writer Mutex calls FlushProcessWriteBuffers() many
times.
 
I have implemented my inventions with the FreePascal and Delphi
compilers, which don't reorder loads and stores even with compiler
optimization; this is less error prone than C++, which follows a
relaxed memory model when compiled with optimization. So I have
finally compiled my algorithm implementations with FreePascal into
Dynamic Link Libraries that are used from C++ in the form of my C++
Object Synchronization Library.
 
If you take a look at the zip file, you will notice that it contains
the DLLs' Object Pascal source code. To compile those dynamic link
libraries you will have to download my SemaMonitor Object Pascal
source code, my SeqlockX Object Pascal source code, my scalable MLock
Object Pascal source code and my scalable DRWLock Object Pascal source
code from here:
 
https://sites.google.com/site/aminer68/
 
I have compiled and included the 32 bit and 64 bit Windows Dynamic
Link Libraries inside the zip file. If you want to compile the dynamic
link libraries for Unix, Linux and OS X on x86, please download the
source code of my SemaMonitor, my scalable SeqlockX, my scalable MLock
and my scalable DRWLock and compile them yourself.
 
The SemaMonitor of my C++ synchronization objects library is easy to
use; it combines all the characteristics of a semaphore, an
eventcount, a Windows manual-reset event and a Windows auto-reset
event. Here is its C++ interface:
 
class SemaMonitor {
  SemaMonitor(bool state, long1 InitialCount1 = 0,
              long1 MaximumCount1 = INFINITE);
  ~SemaMonitor();
 
  void wait(signed long mstime = INFINITE);
  void signal();
  void signal_all();
  void signal(long1 nbr);
  void setSignal();
  void resetSignal();
  long2 WaitersBlocked();
};
 
So when you set the first parameter of the constructor, state, to
true, it will add the characteristic of a semaphore to the eventcount,
so the signal will not be lost if the threads are not waiting on the
SemaMonitor object; but when you set the first parameter of the
constructor to false, it will not behave like a semaphore, because if
the threads are not waiting on the SemaCondvar or SemaMonitor, the
signal will be lost.
 
The parameters InitialCount1 and MaximumCount1 are the semaphore's
InitialCount and MaximumCount.
 
The wait() method lets threads wait on the SemaMonitor object until it
is signaled.
 
The signal() method will signal one waiting thread on the SemaMonitor
object.
 
The signal_all() method will signal all the threads waiting on the
SemaMonitor object.
 
The signal(nbr) method will signal nbr waiting threads.
 
The setSignal() and resetSignal() methods behave like the Windows
Event object's SetEvent() and ResetEvent() methods.
 
And WaitersBlocked() will return the number of threads waiting on the
SemaMonitor object.
 
As you have noticed, my SemaMonitor is a powerful synchronization object.
 
Please read the readme files inside the zip file to learn more about them.
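 
Here is a minimal usage sketch based only on the interface shown
above; the scenario and names are mine, it assumes the library's
header is available, and the semantics of wait()/signal() are as
described in this post:
 
#include <thread>
 
void example()
{
    // true: semaphore-like behaviour, the signal is not lost if no
    // thread is waiting yet.
    SemaMonitor sema(true);
 
    std::thread worker([&]{
        sema.wait();      // block until signaled (INFINITE timeout by default)
        // ... do the work ...
    });
 
    sema.signal();        // wake exactly one waiting thread
    worker.join();
}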
 
Here is my new invention, which is my new algorithm:
 
I have invented a new algorithm for my scalable Asymmetric Distributed
Reader-Writer Mutex, and this one is costless on the reader side: it
doesn't use any atomic operations and/or StoreLoad style memory
barriers on the reader side. My new algorithm adds a technique that
looks like Seqlock, but this technique doesn't loop as Seqlock does.
Here is my algorithm:
 
On the reader side we have this:
 
--
procedure TRWLOCK.RLock(var myid:integer);
var
  myid1: integer;
  id: long;
begin
  myid1 := 0;
  id := FCount5^.fcount5;
  if (id mod 2) = 0
    then FCount1^[myid1].fcount1 := 1
    else FCount1^[myid1].fcount1 := 2;
  if ((FCount3^.fcount3 = 0) and (id = FCount5^.fcount5) and
      (FCount1^[myid1].fcount1 = 1))
    then  { fast path: nothing more to do on the reader side }
  else
    begin
      LockedExchangeAdd(nbr^.nbr, 1);
      if FCount1^[myid1].fcount1 = 2
        then LockedExchangeAdd(FCount1^[myid1].fcount1, -2)
      else if FCount1^[myid1].fcount1 = 1
        then LockedExchangeAdd(FCount1^[myid1].fcount1, -1);
      event2.wait;
      LockedExchangeAdd(FCount1^[myid1].fcount1, 1);
      LockedExchangeAdd(nbr^.nbr, -1);
    end;
end;
--
 
The writer side will increment FCount5^.fcount5 just as a Seqlock
does, and the reader side will grab a copy of FCount5^.fcount5 into
the id variable. If (id mod 2) is equal to zero, that means the writer
side has not yet modified FCount3^.fcount3; the reader side then tests
again that FCount3^.fcount3 equals 0, that id = FCount5^.fcount5 has
not changed, and that the FCount1^[myid1].fcount1 we assigned before
has not changed, and that means we are sure the writer side will block
on FCount1^[myid1].fcount1 equal to 1.
 
And notice that I am not looping as in a Seqlock.
 
The rest of my algorithm is easy to understand.
 
This technique, which looks like a Seqlock but without looping like a
Seqlock, lets us be sure that although the x86 architecture may
reorder the loads inside the reader critical section, those loads will
not move before the load of FCount5^.fcount5, and this allows my
algorithm to work correctly.
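 
Here is a small C++ sketch (mine, not the library's Object Pascal
code) that only mirrors the control flow described above: take a
snapshot of the sequence counter, publish the reader flag, then
re-check once instead of looping like a Seqlock. The names are
invented, the memory orderings are deliberately relaxed, and on its
own this is not a complete or proven lock; the real algorithm's
correctness argument rests on x86 load ordering plus
FlushProcessWriteBuffers() on the writer side, not on the C++ memory
model.
 
#include <atomic>
 
std::atomic<unsigned> seq{0};           // FCount5: bumped by the writer, Seqlock-style
std::atomic<int>      writer_active{0}; // FCount3: non-zero while a writer is in progress
std::atomic<int>      reader_flag{0};   // FCount1[myid]: the flag the writer waits on
 
// Returns true if the reader may take the fast, costless path; false
// means it must fall back to the slow path (atomic add + wait on the
// event), as in the else branch of RLock() above.
bool try_fast_reader_entry()
{
    unsigned id = seq.load(std::memory_order_relaxed);   // snapshot of the counter
    reader_flag.store((id % 2 == 0) ? 1 : 2,             // publish intent based on the snapshot
                      std::memory_order_relaxed);
    // One re-check instead of the Seqlock retry loop: if nothing
    // changed, the writer is guaranteed to block on reader_flag == 1.
    return writer_active.load(std::memory_order_relaxed) == 0
        && seq.load(std::memory_order_relaxed) == id
        && reader_flag.load(std::memory_order_relaxed) == 1;
}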
 
My algorithm is FIFO fair on the writer side and FIFO fair on the
reader side, and of course it is starvation-free, and it is suitable
for realtime critical systems.
 
My Asym_RWLockX and LW_Asym_RWLockX algorithms work the same.
 
You will find the source code of my new algorithm here:
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
 
It is version 2 that is my own algorithm.
 
And you can download the source code of my Asym_RWLockX and
LW_Asym_RWLockX algorithms, which work the same, from here:
 
https://sites.google.com/site/aminer68/scalable-rwlock
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X on (x86)
 
 
 
Thank you,
Amine Moulay Ramdane.
bleachbot <bleachbot@httrack.com>: Aug 26 07:01PM +0200

bleachbot <bleachbot@httrack.com>: Aug 26 07:28PM +0200

bleachbot <bleachbot@httrack.com>: Aug 26 07:44PM +0200

bleachbot <bleachbot@httrack.com>: Aug 26 08:03PM +0200

bleachbot <bleachbot@httrack.com>: Aug 26 08:10PM +0200

bleachbot <bleachbot@httrack.com>: Aug 26 08:14PM +0200

Ramine <ramine@1.1>: Aug 26 02:11PM -0400

Hello....
 
 
My Scalable Parallel C++ Conjugate Gradient Linear System Solver Library
was updated to version 1.5.
 
Now it supports processor groups on Windows, so it will allow you to
scale beyond 64 logical processors, and it will be NUMA efficient.
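 
As an aside, here is a minimal sketch (mine, not the library's code)
of the standard Win32 processor-group calls that make scaling beyond
64 logical processors possible; it just lists the groups and pins the
current thread to group 1 if one exists (Windows 7 or later):
 
#include <windows.h>
#include <cstdio>
 
int main()
{
    WORD groups = GetActiveProcessorGroupCount();
    for (WORD g = 0; g < groups; ++g)
        std::printf("group %u: %lu logical processors\n",
                    (unsigned)g, GetActiveProcessorCount(g));
 
    if (groups > 1) {
        DWORD count = GetActiveProcessorCount(1);
        GROUP_AFFINITY ga = {};
        ga.Group = 1;
        ga.Mask  = (count >= sizeof(KAFFINITY) * 8)
                       ? ~KAFFINITY(0)
                       : ((KAFFINITY(1) << count) - 1);
        // Move the current thread onto the processors of group 1.
        SetThreadGroupAffinity(GetCurrentThread(), &ga, nullptr);
    }
}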
 
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a scalable parallel implementation of a
Conjugate Gradient Dense Linear System Solver that is NUMA-aware and
cache-aware, and it also contains a scalable parallel implementation
of a Conjugate Gradient Sparse Linear System Solver that is
cache-aware.
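 
For readers who don't know the method, here is a minimal, serial
sketch (mine, not the library's code) of the conjugate gradient
iteration for a dense symmetric positive definite system A*x = b; the
library parallelizes this and makes it NUMA-aware and cache-aware:
 
#include <vector>
#include <cmath>
 
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;
 
static double dot(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}
 
static Vec matvec(const Mat& A, const Vec& x)
{
    Vec y(x.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j) y[i] += A[i][j] * x[j];
    return y;
}
 
Vec conjugate_gradient(const Mat& A, const Vec& b, double tol = 1e-10)
{
    Vec x(b.size(), 0.0), r = b, p = r;   // x0 = 0, so r0 = b - A*x0 = b
    double rr = dot(r, r);
    for (std::size_t k = 0; k < b.size() && std::sqrt(rr) > tol; ++k) {
        Vec Ap = matvec(A, p);
        double alpha = rr / dot(p, Ap);   // step length along direction p
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = dot(r, r);
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + (rr_new / rr) * p[i];   // new conjugate direction
        rr = rr_new;
    }
    return x;
}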
 
Please download the zip file and read the readme file inside the zip
to know how to use it.
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and OSX on (x86)
 
 
You can download it from:
 
https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Aug 26 02:04PM -0400

Hello,
 
Read again my corrected post about coroutines.
 
In this post I will explain what I have written as 64 bit assembler
routines for my stackful coroutines library.
 
But first things first: why have I decided to implement these
assembler routines for my stackful coroutines library?
 
I decided to do it because I have learned how the operating system
saves and restores the registers and the stacks when it works with
threads; coroutines are not far from threads, and you can learn a lot
about implementing threads by implementing coroutines.
 
Now, what are the important advantages of my coroutines library?
 
- Context switching is expensive with threads; with coroutines it is
really fast.
 
- Coroutines eliminate race conditions, and that's really interesting
on embedded systems and in other applications.
 
- The mutex and semaphore of my coroutines library are much faster
than the mutex and semaphore used with processes and threads.
 
 
Now, how have I implemented it with 64 bit assembler routines?
 
First, you have to replace the RSP and RBP registers with a pointer to
dynamically allocated memory, to be able to save the local variables
of the coroutines.
 
Second, you have to avoid generating stack frames, to simplify the
implementation in assembler; this simplifies the saving of the RBP
register, for example. You also have to find the return IP address
through the RSP register, to be able to return from the yield()
routine; other than that, you have to save some registers and restore
them afterwards.
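 
To make the idea concrete without reproducing the assembler, here is
a small sketch of a stackful coroutine switch using the portable POSIX
<ucontext.h> API instead of hand-written 64 bit assembler; it is my
own illustration, not code from the library, but it shows the same
ingredients: a private, heap-allocated stack per coroutine and a
switch that saves and restores the register state:
 
#include <ucontext.h>
#include <cstdio>
#include <vector>
 
static ucontext_t main_ctx, coro_ctx;
 
static void coro_body()
{
    std::puts("coroutine: step 1");
    swapcontext(&coro_ctx, &main_ctx);   // "yield" back to the caller
    std::puts("coroutine: step 2");
}                                        // returning resumes uc_link (main)
 
int main()
{
    std::vector<char> stack(64 * 1024);  // the coroutine's private stack
 
    getcontext(&coro_ctx);
    coro_ctx.uc_stack.ss_sp   = stack.data();
    coro_ctx.uc_stack.ss_size = stack.size();
    coro_ctx.uc_link          = &main_ctx;
    makecontext(&coro_ctx, coro_body, 0);
 
    swapcontext(&main_ctx, &coro_ctx);   // run until the first yield
    std::puts("main: coroutine yielded");
    swapcontext(&main_ctx, &coro_ctx);   // resume after the yield
    std::puts("main: coroutine finished");
}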
 
You can download my Stackful coroutines library for Delphi and
FreePascal from:
 
https://sites.google.com/site/aminer68/stackful-coroutines-library-for-delphi-and-freepascal
 
You can download my Object oriented Stackful coroutines library for
Delphi and FreePascal from:
 
https://sites.google.com/site/aminer68/object-oriented-stackful-coroutines-library-for-delphi-and-freepascal
 
 
That's all for today.
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Aug 26 01:45PM -0400

Hello,
 
I must be frank about Deep Learning in artificial intelligence...
 
I think that this invention is not so bright, because the basis of
Deep Learning in artificial intelligence is how to minimize the error
of the cost function, which is the difference between the desired
output and the actual output. So what you are actually doing is
looking for the global minimum, and this can be done by iterative
methods like PSO in combination with simulated annealing to be able
to converge, or with a direct method like stochastic gradient descent;
this is the very important thing to know. So I don't think that Deep
Learning is a bright invention: it's something easy that permits us to
recognize objects and high level constructs even if there are hazards
in them, and Deep Learning is like low level programming, like using
assembler, so there is a need to simplify it with a way to do high
level programming with it.
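 
To illustrate what "looking for the minimum of the cost function"
means in the simplest possible setting, here is a tiny gradient
descent example (my own illustration, not code from any library): it
learns a single weight w so that w*x matches the desired output y,
which is the one-dimensional version of what stochastic gradient
descent does to a network's weights.
 
#include <cstdio>
 
int main()
{
    const double x = 2.0, y = 6.0;  // one training pair: input x, desired output y
    double w = 0.0;                 // the single "weight" we are learning
    const double lr = 0.05;         // learning rate
 
    for (int step = 0; step < 100; ++step) {
        double error = w * x - y;        // output minus desired output
        double grad  = 2.0 * error * x;  // derivative of (w*x - y)^2 w.r.t. w
        w -= lr * grad;                  // move toward the minimum of the cost
    }
    std::printf("learned w = %f (the exact minimum is at w = 3)\n", w);
}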
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Aug 26 01:02PM -0400

Hello.....
 
As you have noticed, I discussed artificial intelligence yesterday,
but today I will explain something important.
 
What you are seeing today such as Deep Leaning is the low level
programming of artificial intelligence; it's low level like assembler
programming. What you must know is that Deep Learning in artificial
intelligence is like the low level circuits that permit us to
construct high level programs in artificial intelligence that do
natural language recognition, speech recognition and natural
reasoning. What Deep Learning has brought is that it can recognize
objects even though the objects have some hazard in them, so the
learning process of Deep Learning will try to adapt the weights of the
neural nets to be able to recognize the objects even if they have some
hazard in them. But Deep Learning has evolved into a much more
optimized and faster network, called Deep Convolutional Networks,
which is not so difficult to understand.
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Aug 26 01:29PM -0400

On 8/26/2016 1:02 PM, Ramine wrote:
 
> As you have noticed, I discussed artificial intelligence yesterday,
> but today I will explain something important.
 
> What you are seeing today such as Deep Leaning is the low level
 
I correct: I mean: Deep Learning, not Deep Leaning.
 
 
Alan Mackenzie <acm@muc.de>: Aug 26 02:16PM

Release Announcement
CC Mode Version 5.33
Alan Mackenzie
 
This message announces the availability of a new version of CC Mode,
an Emacs and XEmacs mode for editing C (ANSI and K&R), C++,
Objective-C, Java, CORBA's IDL, Pike and AWK code.
 
A list of user-visible changes is detailed in the NEWS file and at the
URL listed below. More information, including links to download the
source, is available on the CC Mode web page:
 
<http://cc-mode.sourceforge.net/>
 
Send email correspondence to
 
bug-cc-mode@gnu.org
 
For a list of changes please see
 
<http://cc-mode.sourceforge.net/changes-533.php>
 
--
Alan Mackenzie (Nuremberg, Germany).
Marcel Mueller <news.5.maazl@spamgourmet.org>: Aug 26 10:33AM +0200

On 25.08.16 22.36, Cholo Lennon wrote:
> Well, in my particular case I just need the same behavior on Windows,
> Linux and Solaris, so I suppose my code is "portable" across those
> platforms.
 
All of them have a directory structure and will look in the current
folder first.
 
I doubt that any platform exists which has a concept of a directory
structure and does not fulfill your requirement.
 
 
Marcel
"Öö Tiib" <ootiib@hot.ee>: Aug 26 01:43AM -0700

On Wednesday, 24 August 2016 21:46:16 UTC+3, Cholo Lennon wrote:
 
> # include < h-char-sequence> new-line
 
> with the identical contained sequence (including > characters, if any)
> from the original directive./
 
The typical trick to achieve portability in the sense you seem to be
after is to have every code file in the whole product with a name that
starts with a Latin character, contains only Latin characters, Arabic
numerals, minuses, underscores and dots *AND* is case-insensitively
unique (regardless of the directory, library or module it is part of).
 
That sounds simple to follow, so it may be a surprise how hard it is
in practice in some situations and with bigger projects. However ...
it will reward you with no need to know all those details about every
compiler and file system targeted.
thomas.grund.1975@gmail.com: Aug 26 01:38AM -0700

Am Donnerstag, 25. August 2016 17:38:56 UTC+2 schrieb bitrex:
 
> Under the latest GCC with -std=C++11, it causes a runtime error:
> std::logic_error essentially complaining that you're trying to construct
> a string from a nullptr
 
Good to know!
 
Thanks,
Thomas
Tim Rentsch <txr@alumni.caltech.edu>: Aug 25 08:33PM -0700

> }
 
> const int A::table[] = {0, 1, 2, 3, 4};
> ---------------------------------------------------
 
Something that acts like a lookup table can be provided pretty
straightforwardly using a function-like macro. Here is a simple
sketch (I chose different numbers for the example values):
 
#define table(k) (      \
    (k) == 0 ?  27 :    \
    (k) == 1 ?  28 :    \
    (k) == 2 ?  29 :    \
    (k) == 3 ?  35 :    \
    (k) == 4 ?  47 :    \
    -1                  \
)
 
(Obviously we might want to choose a better name in actual code.)
 
Calls to the table() macro are usable as constant expressions
when the "index" is a constant expression:
 
enum { SOMETHING = 3 };
char foo[ table(SOMETHING) ];
 
char bas[ table((int)2.0) ];
 
char bar[ table(12-11) ];
 
const int fantastic = 4;
char foobas[ table(fantastic) ];
 
Calls to table() with a constant argument also can be used as
'case' labels, if that is wanted.
 
If dynamic lookup is needed in addition to compile-time lookup,
this can be done by providing an inline function that accesses an
array:
 
extern const int table_values_array[];
 
static inline int
(table)( int index ){
    return table_values_array[ index ];
}
 
Any "indexing" where the index is a dynamic value (eg, an auto
variable) rather than a compile-time constant may use the 'table'
function rather than the macro (this is done in the usual way by
putting parentheses around the function name):
 
int
dumb_example( int foo, int bas ){
    return (table)(foo-bas);
}
 
Even low levels of optimization (eg, -O1 in gcc) should turn this
function call into just an array access.
 
The initializers for elements of table_values_array[] may be
written using repeated calls to the table() macro, to ensure
synchronization:
 
const int table_values_array[] = {
    table(0),
    table(1),
    table(2),
    table(3),
    table(4),
};
 
That's everything. Not very high-powered, and some people may
blanch at using the preprocessor, but it works and seems easy
enough to understand.
 
The method shown above is simple, but it does have a problem of
sorts - the number of calls to table() in the array initializer needs
to be kept in sync with the macro definition "by hand", as it were.
There are various techniques for avoiding this problem.
Here is one method, showing first how to define the values and
the table() macro:
 
#define TABLE_ENTRIES_FOREACH(k,WHAT) \
    WHAT(k,0,27) \
    WHAT(k,1,28) \
    WHAT(k,2,29) \
    WHAT(k,3,35) \
    WHAT(k,4,47) \
    /* end of TABLE_ENTRIES_FOREACH */
 
#define table(k) ( TABLE_ENTRIES_FOREACH( k, TABLE_VALUE_CHOOSE ) -1 )
#define TABLE_VALUE_CHOOSE( k, key, value ) (k) == (key) ? (value) :
 
And now the definition of the values array, with initializers:
 
const int table_values_array[] = {
#define TABLE_ARRAY_INITIALIZER( _1, _2, value ) (value),
    TABLE_ENTRIES_FOREACH( 0, TABLE_ARRAY_INITIALIZER )
#undef TABLE_ARRAY_INITIALIZER
};
 
Obviously this scheme is more complicated in terms of how much
preprocessor machinery is used. Its compensating advantage is
that now all the values and indices are in only one place.
Different circumstances may favor one or the other, depending
on a variety of forces.
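 
As a side note for C++ (my own addition, not part of the method
above): since C++11 the same compile-time lookup can also be written
as a constexpr function, which stays usable in constant expressions
without involving the preprocessor:
 
// Mirrors the table() macro above as a constexpr function.
constexpr int table_cx(int k)
{
    return k == 0 ? 27
         : k == 1 ? 28
         : k == 2 ? 29
         : k == 3 ? 35
         : k == 4 ? 47
         : -1;
}
 
char foo_cx[ table_cx(3) ];   // array bound, just like table(SOMETHING) above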
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
