Saturday, February 6, 2016

Digest for comp.lang.c++@googlegroups.com - 18 updates in 6 topics

Jorgen Grahn <grahn+nntp@snipabacken.se>: Feb 06 06:07PM

On Tue, 2016-02-02, Jerry Stuckle wrote:
> invalid input rejected. You need to duplicate error conditions, some of
> which can terminate the test - and finally, evaluate the results.
 
> Thorough testing is more than adding one line to a make file.
 
I misunderstood -- I thought you described a scenario where the tests
existed (as code) and the only thing missing was a way to build and
execute them.
 
(I'm not sure what you /did/ mean, but never mind.)
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Jerry Stuckle <jstucklex@attglobal.net>: Feb 06 04:17PM -0500

On 2/6/2016 1:07 PM, Jorgen Grahn wrote:
> execute them.
 
> (I'm not sure what you /did/ mean, but never mind.)
 
> /Jorgen
 
Tests have to be created for each unit. They should be created based
on the design, not the code. They should include not only expected
input, but also unexpected input, which should be rejected. The purpose
should be to try to make the unit fail, not just to ensure that it
works. Some of these failure scenarios, for instance, may cause a
segfault, terminating the test prematurely.
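For illustration only (parse_age and its std::optional error convention
are made up, not from any real project), such a test might look like:

#include <cassert>
#include <cctype>
#include <optional>
#include <string>

// Hypothetical unit under test: parses a non-negative age,
// rejecting anything that is not a plain decimal number.
std::optional<int> parse_age(const std::string& s)
{
    if (s.empty()) return std::nullopt;
    int value = 0;
    for (char c : s) {
        if (!std::isdigit(static_cast<unsigned char>(c)))
            return std::nullopt;
        value = value * 10 + (c - '0');
    }
    return value;
}

int main()
{
    // Unexpected input that must be rejected.
    assert(!parse_age("").has_value());
    assert(!parse_age("abc").has_value());
    assert(!parse_age("-5").has_value());
    // The expected case must still work.
    assert(parse_age("42").value() == 42);
}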
 
You can use a make file to run the tests. However, you still need to
validate the output. And if this is a change to an already tested
module (i.e. to add a feature), you also need to validate the output
against the previous tests (regression testing).
 
This cannot be done with a make file.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Ian Collins <ian-news@hotmail.com>: Feb 07 10:22AM +1300

Jerry Stuckle wrote:
> module (i.e. to add a feature), you also need to validate the output
> against the previous tests (regression testing).
 
> This cannot be done with a make file.
 
There is software to do all of the donkey work, known as unit test
frameworks. If the tests pass, the make succeeds. If they fail, the
make fails.
 
Read up on them and you might learn something.
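Without naming a particular framework, a minimal sketch of the idea
(the test cases are made up): the test binary returns non-zero when
anything fails, so the make rule that runs it fails exactly when a
test does.

#include <cstdio>

// Two made-up test cases; a real framework would register and
// report these for you.
bool test_addition()    { return 1 + 1 == 2; }
bool test_subtraction() { return 2 - 1 == 1; }

int main()
{
    int failures = 0;
    failures += !test_addition();
    failures += !test_subtraction();
    std::printf("%d test(s) failed\n", failures);
    return failures;   // non-zero exit status fails the make rule
}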
 
 
--
Ian Collins
Wouter van Ooijen <wouter@voti.nl>: Feb 06 11:31PM +0100

On 06-Feb-16 at 10:22 PM, Ian Collins wrote:
> frameworks. If the tests pass, the make succeeds. If they fail, the
> make fails.
 
> Read up on them and you might learn something.
 
Breaking in on this: is there a convenient way to make a test that
should fail compilation?
 
Wouter van Ooijen
Ian Collins <ian-news@hotmail.com>: Feb 07 11:46AM +1300

Wouter van Ooijen wrote:
 
>> Read up on them and you might learn something.
 
> Breaking in on this: is there a convenient way to make a test that
> should fail compilation?
 
I can think of two off hand: calling a function that hasn't been
written yet, and a static assert.
 
Calling a function or a specific overload that hasn't been written yet
may sound daft, but it's a good check if you are writing your tests
first. If you don't think you have written the function and the test
compiles, chances are there's a conversion operator in the mix.
 
A static assert would probably be most useful in template code; I
haven't had cause to use one in tests.
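For example (all names made up), a static_assert gives a deliberate
compile-time failure, and a call to an overload that doesn't exist yet
fails the same way:

#include <type_traits>

template <typename T>
T half(T x)
{
    // Reject integer instantiations at compile time.
    static_assert(std::is_floating_point<T>::value,
                  "half() requires a floating-point type");
    return x / 2;
}

int main()
{
    half(1.0);   // fine
    // half(1);  // uncommenting this is a test that must fail to compile
}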
 
--
Ian Collins
Zaphod Beeblebrox <a.laforgia@gmail.com>: Feb 05 04:00PM -0800

On Friday, 5 February 2016 15:32:40 UTC, Mr Flibble wrote:
 
> Who is this Jerry Stuckle guy? He is giving me a headache.
 
Someone very good at designing websites:
 
http://www.jdscomputer.net/
Jerry Stuckle <jstucklex@attglobal.net>: Feb 05 07:14PM -0500

On 2/5/2016 7:00 PM, Zaphod Beeblebrox wrote:
 
>> Who is this Jerry Stuckle guy? He is giving me a headache.
 
> Someone very good at designing websites:
 
> http://www.jdscomputer.net/
 
Absolutely nothing to do with me - which you could see if you weren't so
stoopid.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Zaphod Beeblebrox <a.laforgia@gmail.com>: Feb 06 07:04AM -0800

On Saturday, 6 February 2016 00:14:36 UTC, Jerry Stuckle wrote:
 
[...]
> Absolutely nothing to do with me - which you could see if you weren't so
> stoopid.
 
Watch your language, mr "Jerry I've got a big penis Stuckle".
Prroffessorr Fir Kenobi <profesor.fir@gmail.com>: Feb 06 09:17AM -0800

On Saturday, 6 February 2016 at 16:04:51 UTC+1, Zaphod Beeblebrox wrote:
> > Absolutely nothing to do with me - which you could see if you weren't so
> > stoopid.
 
> Watch your language, mr "Jerry I've got a big penis Stuckle".
 
watch yours, moron
Jerry Stuckle <jstucklex@attglobal.net>: Feb 06 04:12PM -0500

On 2/6/2016 10:04 AM, Zaphod Beeblebrox wrote:
>> Absolutely nothing to do with me - which you could see if you weren't so
>> stoopid.
 
> Watch your language, mr "Jerry I've got a big penis Stuckle".
 
The truth hurts, doesn't it?
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Öö Tiib" <ootiib@hot.ee>: Feb 06 02:24PM -0800

On Saturday, 6 February 2016 23:12:35 UTC+2, Jerry Stuckle wrote:
> >> stoopid.
 
> > Watch your language, mr "Jerry I've got a big penis Stuckle".
 
> The truth hurts, doesn't it?
 
http://scienceblogs.com/tetrapodzoology/2008/02/22/he-loved-pigs-too-much/
Marcel Mueller <news.5.maazl@spamgourmet.org>: Feb 06 06:11PM +0100

On 05.02.16 23.26, Lynn McGuire wrote:
> 2. many of the DataItem instances are exactly alike since they are
> snapshots of a user's workspace
 
You really perform a deep copy to take a snapshot of your data, and
you do this quite often?
No wonder you run out of memory.
 
I have some similar requirements: an application that takes a snapshot
of the entire database on each user action to ensure data consistency.
The snapshot is nothing but a unique UTC time stamp. Each object
instance has a time stamp too. If an object is too new, an older copy
is used instead. For this to work, foreign keys never refer directly to
the target instance; they are just small key objects.
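A rough sketch of that scheme (all names invented for illustration):
a key resolves to the newest version that is not newer than the
snapshot's time stamp.

#include <cstdint>
#include <iterator>
#include <map>
#include <string>
#include <utility>

using Timestamp = std::uint64_t;   // unique UTC time stamp per user action

struct Record { std::string payload; };

// All versions of one logical object, keyed by the time stamp at which
// each version was created. Foreign keys store only the object's key,
// never a pointer to a concrete version.
class VersionedObject {
public:
    void put(Timestamp t, Record r) { versions_[t] = std::move(r); }

    // Resolve against a snapshot: the newest version that is not newer
    // than the snapshot's time stamp, or nullptr if none exists yet.
    const Record* get(Timestamp snapshot) const {
        auto it = versions_.upper_bound(snapshot);
        if (it == versions_.begin()) return nullptr;
        return &std::prev(it)->second;
    }

private:
    std::map<Timestamp, Record> versions_;
};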
 
> 3. all kinds of data: strings, integers, doubles, string arrays, double
> arrays, integer array, strings larger than 300 characters are compressed
> using zlib
 
The data in your items looks like you are trying to implement
polymorphism with something like a custom union. Turning that into C++
polymorphism could save memory. It looks a bit over-engineered to store
an #Int with some hundred bytes.
 
> 4. DataItems are stored in a hierarchical object system using a primary
> key in DataGroup objects
 
So you have two levels. The DataGroup and the Key.
 
> 5. not sure what you are asking
> 6. no database backend
> 7. no concurrency (yet)
 
You probably never want to implement multi-threading with this data model.
 
> file or closing a file
> 9. I think that it is DataItems but will not know for sure until
> completion of the current deduplication project
 
A memory profiler should give you the answer.
 
> Here is part of the declaration for the DataItem and DesValue classes. There are no member variables in the ObjPtr class.
 
The size of the ObjPtr will not be zero, because C++ forbids this.
 
> {
> private:
 
> int datatype; // Either #Int, #Real, #String or #Enumerated
 
Polymorphism?
 
> int vectorFlag; // Flag indicating value contains an Array.
 
Using one field with all flags rather than individual integers saves
memory.
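For instance (field names invented), bit-fields keep all of these in a
single word instead of one int per flag:

struct ItemFlags {
    unsigned vector_flag     : 1;  // value contains an array
    unsigned scratch_changed : 1;  // scratch value was modified
    unsigned is_set          : 1;  // value is defined
    // further flags share the same word
};

int main()
{
    ItemFlags f{};             // all flags cleared
    f.vector_flag = 1;
    return f.scratch_changed;  // still 0
}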
 
> int descriptorName; // name of Corresponding DataDescriptor
 
Redundant? You have DataDescriptor * below.
 
> // DataGroup * owner; // The DataGroup instance to which this
> item belongs
 
Is this not always part of caller context?
 
> std::vector <DataGroup *> owners; // The DataGroup instance(s) to
> which this item belongs
 
Either this or the above is redundant.
If you do not need the uplinks for your business logic, a reference
counter might be enough.
 
> DesValue * inputValue; // DesValue containing permanent input value
> DesValue * scratchValue; // DesValue containing scratch input value
 
Do not clobber the data store with intermediate data used for editing.
Use subclassing or shadow instances for this purpose. I guess only a few
instances have reasonable differences between inputValue and scratchValue.
 
By the way. What about the memory consumption of the DesValue objects?
They are bulky too.
 
> int writeTag; // a Long representing the object for purposes of
> reading/writing
 
No idea. Maybe this is also better placed in a mutable subclass.
 
> int unitsClass; // nil or the symbol of the class
> std::string unitsArgs; // a coded string of disallowed units
 
What about the storage of the strings behind this?
 
> std::map <int, std::vector <int> > dependentsListMap;
 
And this one could grow really large.
The nested structure is really not recommended. At least std::multimap
would be better.
 
Depending on the number of entries, std::map is not the best choice. It
allocates an object for each item. If the number of entries is quite
small or if it changes rarely, a sorted vector performs better.
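A minimal sketch of that alternative (the element types are only for
illustration): a std::vector of pairs kept sorted by key and searched
with std::lower_bound.

#include <algorithm>
#include <utility>
#include <vector>

// One contiguous allocation instead of one tree node per entry.
using Dependents = std::vector<std::pair<int, int>>;   // (key, dependent)

// Lookup in a vector that is kept sorted by key.
const int* find_first_dependent(const Dependents& d, int key)
{
    auto it = std::lower_bound(d.begin(), d.end(), key,
        [](const std::pair<int, int>& e, int k) { return e.first < k; });
    return (it != d.end() && it->first == key) ? &it->second : nullptr;
}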
 
> DataDescriptor * myDataDescriptor;
 
See above.
 
> BOOL scratchChangedComVector; // if the scratch value was changed
 
See flags above.
 
> virtual int isDataItem () { return true; };
 
Your class is virtual anyway, so using subclasses for different types
is straightforward.
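For example (class names invented), the hand-rolled datatype
discriminator disappears once each type gets its own subclass:

#include <memory>
#include <string>
#include <utility>

// Base class replaces the "datatype" integer.
class Item {
public:
    virtual ~Item() = default;
};

class IntItem : public Item {
public:
    explicit IntItem(int v) : value(v) {}
    int value;
};

class StringItem : public Item {
public:
    explicit StringItem(std::string v) : value(std::move(v)) {}
    std::string value;
};

int main()
{
    // Each item carries only the storage its concrete type needs.
    std::unique_ptr<Item> item = std::make_unique<IntItem>(42);
}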
 
> {
> public:
 
> int datatype; // Either #Int, #Real, or #String.
 
Again, hand-made polymorphism.
 
> unsigned char * compressedData;
> unsigned long compressedDataLength;
> std::vector <unsigned long> uncompressedStringLengths;
 
Really bad design. If there are many objects of this type, then you
know where your problem is.
 
Use subclasses with only one of the fields and avoid the additional
allocations for the referenced value objects.
 
> DesValue is defined or undefined. If isSet is false, getValue returns
> nil despite the contents of value, while getString and getUnits return
> the empty string despite the contents of stringValue and units.
 
Join all flags into one field.
 
> (bottom)
> std::string errorMessage; // message about last conversion of
> string to value
 
This again looks like transient data that should not be part of the
persistent data model.
 
> std::string unitsArgs; // a coded string of disallowed units
 
 
Adjusting the data model would probably save at least 50% of the
memory even without deduplication. Especially the DesValue objects are
unnecessarily bulky. If you also do not create excessive deep copies of
the structures, you should be fine. No need for deduplication as far as
I can see; COW should be sufficient.
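A rough, single-threaded sketch of copy-on-write with the standard
library (Payload stands in for whatever the items hold); copies share
one buffer until somebody writes.

#include <memory>
#include <string>
#include <utility>

struct Payload { std::string data; };

class CowValue {
public:
    explicit CowValue(std::string s)
        : p_(std::make_shared<Payload>(Payload{std::move(s)})) {}

    // Reading shares the underlying object between all copies.
    const Payload& read() const { return *p_; }

    // Writing clones the payload only while it is shared.
    Payload& write() {
        if (p_.use_count() > 1)
            p_ = std::make_shared<Payload>(*p_);
        return *p_;
    }

private:
    std::shared_ptr<Payload> p_;
};

int main()
{
    CowValue a("snapshot");
    CowValue b = a;           // cheap: both share one Payload
    b.write().data = "edit";  // the deep copy happens only here
}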
 
It looks like you are trying to reinvent Excel. Well, considering the
speed, its design is probably similar. ;-)
 
 
Marcel
Lynn McGuire <lmc@winsim.com>: Feb 06 03:10PM -0600

On 2/6/2016 11:11 AM, Marcel Mueller wrote:
 
> It looks like you are trying to reinvent Excel. Well considering the
> speed its design is probably similar. ;-)
 
> Marcel
 
Thanks !
 
Lynn
Ramine <ramine@1.1>: Feb 06 01:21PM -0800

Hello..........
 
 
Parallel implementation of Conjugate Gradient Sparse Linear System
Solver library was updated to version 1.32
 
https://sites.google.com/site/aminer68/parallel-implementation-of-conjugate-gradient-sparse-linear-system-solver
 
Read here:
 
https://en.wikipedia.org/wiki/Sparse_matrix
 
As you have noticed it says:
 
"When storing and manipulating sparse matrices on a computer, it is
beneficial and often necessary to use specialized algorithms and data
structures that take advantage of the sparse structure of the matrix.
Operations using standard dense-matrix structures and algorithms are
slow and inefficient when applied to large sparse matrices as processing
and memory are wasted on the zeroes. Sparse data is by nature more
easily compressed and thus requires significantly less storage. Some very
large sparse matrices are infeasible to manipulate using standard
dense-matrix algorithms."
 
I have taken care of that in my new algorithm: I have used my
ParallelIntHashList data structure to store the sparse matrices of the
linear systems, so that it becomes very fast and doesn't waste memory
on the zeros. In fact, my new algorithm doesn't store the zeros of the
sparse matrix of the linear system.
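For illustration only (this is a generic sketch, not my
ParallelIntHashList data structure): keep only the non-zero entries of
each row, so the zeros are never stored at all.

#include <cstddef>
#include <unordered_map>
#include <vector>

// One hash map per row, holding only the non-zero columns.
class SparseMatrix {
public:
    explicit SparseMatrix(std::size_t rows) : rows_(rows) {}

    void set(std::size_t r, std::size_t c, double v) {
        if (v != 0.0) rows_[r][c] = v;    // zeros are simply never stored
    }

    double get(std::size_t r, std::size_t c) const {
        auto it = rows_[r].find(c);
        return it == rows_[r].end() ? 0.0 : it->second;
    }

    // y = A * x, touching only the stored (non-zero) entries.
    std::vector<double> multiply(const std::vector<double>& x) const {
        std::vector<double> y(rows_.size(), 0.0);
        for (std::size_t r = 0; r < rows_.size(); ++r)
            for (const auto& entry : rows_[r])
                y[r] += entry.second * x[entry.first];
        return y;
    }

private:
    std::vector<std::unordered_map<std::size_t, double>> rows_;
};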
 
I have also implemented another scalable parallel algorithm that is
cache-aware and NUMA-aware and that is scalable on NUMA architectures.
It is designed for the dense matrices that you find in linear equations
arising from integral equation formulations; this one does store the
zeros of the matrix of the linear system. Here it is:
 
Scalable Parallel implementation of Conjugate Gradient Linear System
solver library that is NUMA-aware and cache-aware was updated to version
1.23
 
https://sites.google.com/site/aminer68/scalable-parallel-implementation-of-conjugate-gradient-linear-system-solver-library-that-is-numa-aware-and-cache-aware
 
 
 
Thank you,
Amine Moulay Ramdane.
bleachbot <bleachbot@httrack.com>: Feb 06 07:19PM +0100

Robert Wessel <robertwessel2@yahoo.com>: Feb 06 04:31AM -0600

On Fri, 05 Feb 2016 18:00:30 +0100, Marcel Mueller
 
>> Doesn't matter, the doubles in A require 8-byte alignment, which 28 is not.
 
>Think, you are a bit too fast.
>A 32 bit platform never needs 64 bit alignment.
 
 
There is certainly no such rule. S/360 required 64-bit alignment both
for a number of integer operations that accessed memory and for
doubles, and IIRC, Alpha required doubles to be 64-bit aligned even
when the system was used as a 32-bit one, as the versions of Windows
for Alpha did. Requiring more than 32-bit alignment for doubles is not
that uncommon in hardware, and it is preferred in many cases for
performance reasons even if the hardware does not require it.
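A quick way to see what a given implementation actually requires (the
numbers will differ between platforms and compilers):

#include <cstddef>
#include <cstdio>

struct A {
    char   c;   // 1 byte
    double d;   // placed at the next multiple of its required alignment
};

int main()
{
    std::printf("alignof(double) = %zu\n", alignof(double));
    std::printf("sizeof(A) = %zu, alignof(A) = %zu\n",
                sizeof(A), alignof(A));
    std::printf("offset of A::d = %zu\n", offsetof(A, d));
}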
Paavo Helde <myfirstname@osa.pri.ee>: Feb 06 03:20PM +0200

On 6.02.2016 12:31, Robert Wessel wrote:
> for Alpha did. Requiring more than 32-bit alignment for doubles is
> not that uncommon in hardware, and is preferred in many cases for
> performance reasons even if the hardware does not require it.
 
And SSE2 et al require or strongly encourage 128-bit alignment also on
32-bit x86.
Marcel Mueller <news.5.maazl@spamgourmet.org>: Feb 06 03:13PM +0100

On 06.02.16 14.20, Paavo Helde wrote:
>> performance reasons even if the hardware does not require it.
 
> And SSE2 et al require or strongly encourage 128-bit alignment also on
> 32-bit x86.
 
Well, SSE2 is not really 32-bit, just as the 64-bit Alpha is not.
 
 
Marcel
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
