Sunday, March 25, 2018

Digest for comp.lang.c++@googlegroups.com - 14 updates in 5 topics

computer45 <computer45@cyber.com>: Mar 25 04:31PM -0400

Hello....
 
 
This is only one post here about Islam.
 
I will give my explanation of Islam and the Koran.
 
The interpretation of ISIS is that the Koran is the words of Allah,
but this is false; it is not the correct interpretation. The truth is
that prophet Mohamed was enhanced by the extraterrestrial called
archangel Gabriel so that he could write the Koran. From this you can
understand that the Koran can contain many "errors", because it was
written by a human, the prophet Mohamed, who was enhanced by an
"extraterrestrial". But you have to be aware that those
extraterrestrials want us to believe in and to follow a "superior"
morality; those extraterrestrials only somewhat "enhanced" prophet
Mohamed (asw) so that he could write the Koran. This is the right
interpretation: it means that archangel Gabriel is "only" an
"extraterrestrial", like those in the following videos:
 
 
Is it time for the Pentagon to take UFOs seriously?
 
Look at this video:
 
http://video.foxnews.com/v/5757403109001/
 
Former CIA Agent on his Deathbed Reveals Alien & UFO TRUTH
 
Look at this video:
 
https://www.youtube.com/watch?v=kF8_KPV3XPY&t=6s
 
 
So I think extraterrestrials do exist, and that they enhanced the
human we call prophet Mohamed so that he could write the Koran.
 
 
Thank you,
Amine Moulay Ramdane.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 25 04:29PM -0700

On Sunday, March 25, 2018 at 4:31:32 PM UTC-4, computer45 wrote:
> I will give my explanation of Islam and the Koran..
 
Here is mine:
 
Satan is a liar.
He deceives all who will not seek the truth.
For some he uses Islam.
For others atheism.
For some agnosticism.
For others something else.
 
He doesn't care what he uses to deceive you. In fact, he tailors
his attack against you based on what you will believe. His goal is
to keep everyone away from Christ, to keep them unforgiven for their
sin and solidly on the path to Hell.
 
--
Rick C. Hodgin
Christiano <christiano@engineer.com>: Mar 25 04:01PM -0300

Page 313 has the following program:
 
auto old_state = cin.rdstate(); // remember the current state of cin
cin.clear(); // make cin valid
process_input(cin); // use cin
cin.setstate(old_state); // now reset cin to its old state
 
But in my opinion this program is wrong. Imagine this situation:
 
The first line stores {fail=1, eof=1, bad=0} in the old_state variable.
The second line clears cin's flags.
Suppose the third line leaves cin's flags as {fail=1, eof=0, bad=1}.
The fourth line then sets the flags to {fail=1, eof=1, bad=1}, because
setstate() ORs its argument into the current state rather than
replacing it.
 
However, {fail=1, eof=1, bad=1} != {fail=1, eof=1, bad=0}.
So, this program is wrong (I think).
 
Can you confirm, please? Thank you.
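 
A minimal sketch of the failure mode (hedged: an istringstream stands
in for cin, and the intermediate state is forced with setstate()
rather than by a real process_input()):
 
#include <iostream>
#include <sstream>
 
int main()
{
    std::istringstream in("42");
    int x;
    in >> x;                          // succeeds, sets eofbit
    in >> x;                          // fails at end of input: failbit too
    auto old_state = in.rdstate();    // {fail=1, eof=1, bad=0}
    in.clear();                       // make the stream valid again
    // pretend process_input() left the stream like this:
    in.setstate(std::ios_base::badbit | std::ios_base::failbit);
    in.setstate(old_state);           // ORs: the badbit above survives
    std::cout << std::boolalpha
              << (in.rdstate() == old_state) << '\n';  // prints false
}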
 
 
------------------------------------------
 
I can paste part of the book according to the law of fair use:
https://www.copyright.gov/fair-use/more-info.html
 
Lippman 5:
https://www.pearson.com/us/higher-education/program/Lippman-C-Primer-5th-Edition/PGM270560.html
Christiano <christiano@engineer.com>: Mar 25 04:04PM -0300

On 03/25/18 16:01, Christiano wrote:
> https://www.copyright.gov/fair-use/more-info.html
 
> Lippman 5:
> https://www.pearson.com/us/higher-education/program/Lippman-C-Primer-5th-Edition/PGM270560.html
 
In my opinion the correct program should be:
 
auto old_state = cin.rdstate(); // remember the current state of cin
cin.clear(); // make cin valid
process_input(cin); // use cin
cin.clear(old_state); // now reset cin to its old state
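 
For reference: clear(state) replaces the entire stream state with its
argument, while setstate(state) ORs the argument into the existing
state. So in the sketch above, replacing the final setstate(old_state)
with in.clear(old_state) makes the comparison print true.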
computer45 <computer45@cyber.com>: Mar 25 02:44PM -0400

On 3/24/2018 5:21 PM, Chris M. Thomasson wrote:
 
> Fwiw, here is a little JavaScript vector field plotter I made:
 
> http://webpages.charter.net/appcore/fractal/field
 
> Actually, this can be used to map out a network...
 
 
 
Read again about using a network of SPSC queues to form an MPMC queue:
 
https://books.google.ca/books?id=jZG_DQAAQBAJ&pg=PA276&lpg=PA276&dq=SPSC+and+queue+and+MPMC&source=bl&ots=KwfRYpYWW3&sig=GYE7Sn7ZlhNsJISvTjV4bXnjvDc&hl=en&sa=X&ved=0ahUKEwia4fKKjojaAhUCuVkKHZnbBvMQ6AEIjgEwCQ#v=onepage&q=SPSC%20and%20queue%20and%20MPMC&f=false
 
 
As you have noticed, a matrix of (N-1)*(N-1) SPSC queues is needed to
compose an MPMC queue, and strict FIFO order can be broken in it. This
is not good, which is why I said before:
 
 
 
Apart from the strict FIFO order that can be broken, here is another
problem with the distributed network of SPSC queues:
 
 
--
 
5. Finally, SPSC may not be a good fit for massive ITC (inter-thread
communication):
 
Because the space complexity goes as O(N^2), where N is the number of
threads. It is not rare to see a server with 1k or 2k hardware
threads, and "many core" is where CPUs are clearly heading.
 
---
 
 
Read all of the following webpage and the responses to it to understand:
 
https://www.infoq.com/articles/High-Performance-Java-Inter-Thread-Communications
 
 
So I think my scalable FIFO queue algorithm and its implementation are
an invention, and still useful.
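 
To make the space cost concrete, here is a hedged sketch (not the
poster's algorithm; SpscRing and MpmcGrid are hypothetical names) of
composing an MPMC channel from per-pair SPSC rings, using one ring per
ordered (producer, consumer) pair for simplicity:
 
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>
#include <vector>
 
template <typename T, std::size_t Capacity>
class SpscRing {                       // one producer, one consumer
    std::array<T, Capacity> buf_{};
    std::atomic<std::size_t> head_{0}; // advanced only by the consumer
    std::atomic<std::size_t> tail_{0}; // advanced only by the producer
public:
    bool push(const T& v) {
        auto t = tail_.load(std::memory_order_relaxed);
        auto next = (t + 1) % Capacity;
        if (next == head_.load(std::memory_order_acquire))
            return false;              // ring is full
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);
        return true;
    }
    std::optional<T> pop() {
        auto h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return std::nullopt;       // ring is empty
        T v = buf_[h];
        head_.store((h + 1) % Capacity, std::memory_order_release);
        return v;
    }
};
 
template <typename T, std::size_t N, std::size_t Cap = 256>
class MpmcGrid {
    // One SPSC ring per (producer, consumer) pair: N*N rings,
    // hence the O(N^2) space complexity criticized above.
    std::vector<SpscRing<T, Cap>> rings_{N * N};
public:
    bool push(std::size_t producer, std::size_t consumer, const T& v) {
        return rings_[producer * N + consumer].push(v);
    }
    std::optional<T> pop_any(std::size_t consumer) {
        // A consumer must scan all N inbound rings; FIFO order across
        // different rings is not preserved, which is the ordering
        // problem described above.
        for (std::size_t p = 0; p < N; ++p)
            if (auto v = rings_[p * N + consumer].pop())
                return v;
        return std::nullopt;
    }
};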
 
 
Thank you,
Amine Moulay Ramdane.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 25 05:08AM -0700

On Sunday, March 25, 2018 at 7:18:41 AM UTC-4, Rick C. Hodgin wrote:
> to while working on this isolated component.
 
> I've caught more bugs that way than I've ever caught with unit
> testing, and probably even with release-to-testers testing.
 
I've also looked at it as a cost factor. If a high-level developer
is paid X, and a unit tester is paid k*X where k < 1, then there is a
labor saving in having the unit tester do some of the labor.
 
But consider the time spent in testing, and the familiarity with the
system being written and developed. The unit tester approaches the
"black box" given only the specs from the developer as to what needs
to be tested and how, which means some additional up-front labor on
behalf of the developer anyway (to document those needs), and now you
have a "blind" unit tester writing code to validate something you had
previously been intimately involved with. You had the algorithms open,
the purpose of the program flow was in your mind, you were "in the
zone" in the code ... so why sacrifice that tremendous asset only to
pass on to the unit tester some ability to test your code when you
could test it right there?
 
And now you're back to paying X for your labor and for your testing,
rather than X for your labor and k*X where k < 1 for the testing; but
your X labor goes up slightly to document what the tester needs, which
means you're probably back on par, cost-wise, with what you would have
spent had you done the testing yourself.
 
And then there's productivity per developer.
 
In my experience, it's far more desirable for a developer to spend a
little extra time double- and triple-checking their changes with some
last minute manual testing than it is to release something with a bug
and come back and fix it later. The developer is there in the full
mindset of the code, and is fluid during that last double- and
triple-check time. But when it's released into a build, built, and
tested by someone else (or even something else, as a unit test later),
the developer must re-enter that fluid state, which takes time and
increases the risk of error, because it's unlikely the developer will
reach the full fluid state without a protracted effort; and if the bug
being examined is non-trivial (highly likely, if a skilled developer
missed it despite double- and triple-checks), then it's going to
require a notably longer period of time to fix than it should.
 
In the end, I conclude that unit testing has validity in some types
of code, and in other types of code it's a waste of time, material,
and resources.
 
That being said, I am pushing for more unit testing in my life as I
get older because I am not able to enter that "fluid zone" as easily
or as deeply as I could when I was younger. I can at times, but not
as a matter of course as before. As a result, I would like to have
more testing to validate that I haven't, through my own aging-related
lapses, broken something I previously fixed.
 
-----
One additional factor: in Visual Studio 2015 there was a new native
tool introduced into the base VS package, called "Code Analysis." It
would go through your code, perform smart fuzzy testing on your
algorithms, and walk through the "what-if" scenarios. It would say,
"Okay, here's a logical test for an if block... what if it were true,
what would happen?" Then it would examine it and report on its
findings. It would then come back and say, "Okay, now what if it were
false, what would happen?" And it would examine that condition.
 
It rigorously iterates through all possible choices and looks for
a wide range of errors in your code, such as uninitialized variables,
use after free, etc.
 
Using this new tool, on a C/C++ code base about 30K lines long, I
found about 15 places where potential errors existed. In all but
a couple cases, they were conditions that would never exist because
there was some check higher up in the code which precluded the error
condition from ever being a possibility. But in a couple cases,
there were legitimate bugs that it found where, had that particular
scenario happened, it would've errored out or produced invalid
results.
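 
For illustration, here is a hedged example of the kinds of defects
such path-by-path analysis flags (illustrative only, not taken from
the code base described above):
 
#include <cstdlib>
 
int risky(int n)
{
    int result;                         // note: not initialized
    int* p = static_cast<int*>(std::malloc(sizeof(int)));
    if (!p)
        return -1;
    *p = n;
    if (n > 0)
        result = *p * 2;                // only this branch assigns result
    std::free(p);
    if (n > 100)
        *p = 0;                         // use after free on the n > 100 path
    return result;                      // uninitialized read when n <= 0
}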
 
And to this day, in that algorithm, I still have a place somewhere
in my code where it writes beyond the buffer limits on something.
I have spent days trying to find it and cannot. I've downloaded
memory tools to look for buffer overruns, and ported the code to
other compilers to see if they could find any errors. I've written
my own manual memory subsystem (which I reported on in the
comp.lang.c thread a year or two back) so I could walk the heap on
each of my allocations and releases.
 
I still haven't found it despite all this testing. All I've been
able to do is modify my malloc() algorithms to use my_malloc()
and allocate an extra 16 bytes before and after each requested
memory block, then return a pointer to the 16 bytes after the
start. That has solved it, but it's a hack workaround and not
a true fix.
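 
A hedged sketch of that guard-byte workaround (the actual my_malloc()
is not shown in the thread, so the details here are assumptions):
 
#include <cstdlib>
#include <cstring>
 
const std::size_t kGuard = 16;  // padding before and after each block
 
void* my_malloc(std::size_t n)
{
    // Allocate room for both guards, then return a pointer just past
    // the front one, so small overruns in either direction land in
    // the padding instead of in a neighboring allocation.
    unsigned char* raw =
        static_cast<unsigned char*>(std::malloc(n + 2 * kGuard));
    if (!raw)
        return nullptr;
    std::memset(raw, 0xAB, kGuard);              // front guard pattern
    std::memset(raw + kGuard + n, 0xAB, kGuard); // rear guard pattern
    return raw + kGuard;
}
 
void my_free(void* p)
{
    // Checking the guard patterns here would require recording n per
    // block (omitted for brevity); we only undo the front offset.
    if (p)
        std::free(static_cast<unsigned char*>(p) - kGuard);
}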
 
Every now and again, when I get a few spare bits of time, I go back
in and try to find it. It's been probably four years now and I cannot
find it.
 
I'm nearly convinced there isn't a problem in my code as I understand
how things work, but that something I believe to be a legal use of a
pointer or block is actually illegal, resulting in undefined behavior
at that point. And until I'm able to learn what it is that's causing
the error, the extra buffer around the allocated memory blocks is a
sufficient work-around.
 
-----
To be honest, those are the areas where I could see some type of
natural-language testing ability being of greatest benefit: being
able to traverse your code rigorously, looking at all of the possible
code paths and reporting on things used before initialization, or
after release, and many other such tests; coupled with the ability to
get a digital map of everything done in the memory subsystem, to
state at compile time the only range where a pointer should operate,
and to trap any use beyond that range during a special compilation
mode which adds the checks for such things.
 
Those kinds of developer tools would help me more than unit testing
of my own algorithms, as those are fairly easy to check with a few
manual cases applied during development.
 
At least that's been my experience.
 
--
Rick C. Hodgin
bartc <bc@freeuk.com>: Mar 25 01:48PM +0100

On 24/03/2018 23:13, Rick C. Hodgin wrote:
> those code bases? The only one I am aware of is Intel's C++
> compiler, and it's $1600 for the professional version per year,
> which I also think is outrageous.
 
$1600 in a professional environment is peanuts.
 
How much is the annual salary of the person doing the programming? How
much are the on-going business costs per employee for premises,
computers, networking and everything else?
 
If Intel's compiler offers improved code (over thousands of copies of
applications), or even just a faster, more productive development cycle,
then it may pay for itself anyway.
 
> Intel should give away their
> compilers because it would encourage people to use their CPUs
> instead of other manufacturer CPUs.
 
I don't think Intel are doing too badly, but I hardly think a $1600 cost
for a tool is a significant factor when designing and manufacturing a
new product.
 
 
--
bartc
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 25 06:31AM -0700

On Sunday, March 25, 2018 at 8:48:53 AM UTC-4, bartc wrote:
> I don't think Intel are doing too badly, but I hardly think a $1600 cost
> for a tool is a significant factor when designing and manufacturing a
> new product.
 
It's outside of my reach for a personal additional expense.
 
I bought an Intel C++ Compiler 4.5 back in the day, but have lost
it since then. It integrated with Visual Studio 2003 IIRC. I
have downloaded and tried their current compiler on a free trial
basis, but I have not been able to get the developer environment
I need.
 
I keep looking, but I'm thinking CAlive is still my best bet,
assuming I can ever get it completed. :-)
 
--
Rick C. Hodgin
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Mar 25 03:54PM +0100

On 25/03/2018 11:58, Rick C. Hodgin wrote:
> And as I say, to me it's a coding style. The style of coding you
> do involves writing unit tests or not.
 
> Just my opinion.
 
Your opinion is wrong. Writing unit tests is NOT a coding style; it
is an essential part of the software development process used when
creating non-toy software.
 
Your software is obviously a toy, of little inherent value beyond it
being a disguised form of proselytization that has value only to you
and your pet fish.
 
/Flibble
 
--
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates
a world that is so full of injustice and pain. That's what I would say."
Chris Ahlstrom <OFeem1987@teleworm.us>: Mar 25 12:24PM -0400

bartc wrote this copyrighted missive and expects royalties:
 
>> compiler, and it's $1600 for the professional version per year,
>> which I also think is outrageous.
 
> $1600 in a professional environment is peanuts.
 
It's too much if gcc or clang can do the job 98% as well.
 
 
> I don't think Intel are doing too badly, but I hardly think a $1600 cost
> for a tool is a significant factor when designing and manufacturing a
> new product.
 
Indeed.
 
--
The mind is its own place, and in itself
Can make a Heav'n of Hell, a Hell of Heav'n.
-- John Milton
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 25 09:56AM -0700

On Sunday, March 25, 2018 at 10:55:08 AM UTC-4, Mr Flibble wrote:
> an essential part of the software development process used when creating
> non-toy software.
 
> Your software is obviously toy and of little inherent value...
 
There's a wide base of software developers who disagree with you.
There's also a wide base who agree with you. It's an opinion held
by all involved based on their needs.
 
My last word on the subject.
 
--
Rick C. Hodgin
Ian Collins <ian-news@hotmail.com>: Mar 26 07:24AM +1300

On 03/25/2018 11:36 PM, Rick C. Hodgin wrote:
 
> I was never able to justify that degree of labor spent on writing
> tests when you, as a developer, are supposed to test the code you
> write as you go.
 
That's because you were doing it wrong: there shouldn't be dedicated
unit-test writers; the developers should write their own tests.
 
> up its value or member list, and as you hover over each part
> it drops down into the child member contents, and into that
> member's children, etc. It all takes a few seconds to navigate.
 
Manually, not something you can do after every change on every build.
 
> (1b) Make changes to the existing code,
> (1c) Test your new / changed algorithm manually anyway,
> (1d) Write the test, or update the test code
 
Done correctly, it will reduce the development cost. Developers will
spend more time writing code rather than chasing bugs. You will also
find bugs faster, and the closer to its introduction a bug is found,
the cheaper it is to fix.
 
> (2b) Changes are made to code
> (2c) Possibly some additional changes need to be made
> to the tests
 
Errors in tests are usually the result of misunderstandings of or
ambiguities in the requirements. Both need fixing fast!
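 
To make the practice concrete, here is a minimal sketch of a
developer-written unit test (no framework is named in the thread, so
plain assert() is used, and sum_of() is a hypothetical function under
test):
 
#include <cassert>
#include <numeric>
#include <vector>
 
// Code under test: sums the elements of a vector.
int sum_of(const std::vector<int>& v)
{
    return std::accumulate(v.begin(), v.end(), 0);
}
 
int main()
{
    assert(sum_of({}) == 0);         // edge case: empty input
    assert(sum_of({1, 2, 3}) == 6);  // typical case
    assert(sum_of({-1, 1}) == 0);    // cancellation
    return 0;
}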
 
<snip>
 
> comprehensive in my memory and abilities as I was when I was much
> younger. I do make mistakes as I get older, and I can see the
> value in testing more and more because of it.
 
Good!
 
--
Ian.
Daniel <danielaparker@gmail.com>: Mar 25 10:51AM -0700

I've been skimming the boost include directory for inspiration for
naming subdirectories, namespaces, and class names in cases where the
most natural name would be the same for all three. As in boost, assume
an all-lower-case naming convention, so there's no help from
mixed-case differentiation.
 
Some examples from boost:
 
subdirectory name: bimap
namespace name: bimaps
class name: bimap
 
subdirectory name: iterator
namespace name: iterators
class name: iterator
 
subdirectory name: function_types
namespace name: function_types
class name: function_type
 
subdirectory name: optional
namespace name: optional_ns
class name: optional
 
Any thoughts? My immediate context is one where the most natural name
for the subdirectory, namespace, and class is "cbor" (a class
encapsulating the Concise Binary Object Representation and related
algorithms).
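 
For what it's worth, a hedged sketch of the collision (the layout and
decode() function are hypothetical):
 
#include <cstddef>
#include <vector>
 
// Hypothetical layout: header cbor/cbor.hpp, namespace cbor, class cbor.
namespace cbor {
 
class cbor {  // the class shares its name with the enclosing namespace
    // ...
};
 
// Inside the namespace, plain "cbor" now names the class, not the
// namespace, so further qualification must be written carefully.
inline cbor decode(const std::vector<unsigned char>& /*bytes*/)
{
    return cbor{};
}
 
} // namespace cbor
 
int main()
{
    std::vector<unsigned char> bytes{0xA0};  // an empty CBOR map
    // At the call site the class reads as cbor::cbor, which is the
    // main cost of reusing the name; the boost::iterators::iterator
    // pattern (pluralized namespace) avoids it.
    cbor::cbor value = cbor::decode(bytes);
    (void)value;
    return 0;
}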
 
Thanks,
Daniel
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
