Wednesday, June 6, 2018

Digest for comp.lang.c++@googlegroups.com - 24 updates in 10 topics

Sky89 <Sky89@sky68.com>: Jun 06 06:22PM -0400

Hello,
 
 
Yet again about my programming philosophy

I hope you have understood my previous programming philosophy about
"reliability", and I hope you have read my criticism of C++ and C..

Now I would like you to understand me a little more..

You have noticed that I code in the modern Object Pascal of Delphi and
FreePascal and Lazarus (with FreePascal). I am an experienced modern
Object Pascal programmer, but I am also an "inventor", because I am
enhancing Delphi and FreePascal and Lazarus with "scalable" algorithms
that I have "invented". For example, if you take a look at C++ and
Boost, you will notice that their reference counting and their
shared_ptr and weak_ptr implementations are not "scalable"; this is why
I have invented my scalable reference counting with efficient support
for weak references for Delphi and FreePascal. Read about it here and
you will notice that it is powerful:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
As you will notice, my invention does not exist in C++ or C, nor does
it exist in Rust or in Ada, etc.

So, as you can see, I am "enhancing" Delphi and FreePascal with the
scalable algorithms that I have invented; you can look at my other
scalable algorithms on my website here:
 
https://sites.google.com/site/scalable68/
 
 
And you will notice that I am enhancing Delphi and FreePascal with my
scalable algorithms and with my other projects, such as my Parallel
archiver and my Parallel Compression Library; read about them and you
will notice that they, too, are really powerful. Also look at my other
projects on the website above..


And I think I will sell some of my other new scalable algorithms and
their implementations to software companies such as Embarcadero,
Microsoft and Google.
 
 
This is my programming philosophy and my way of thinking..
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Jun 06 05:42PM -0400

Hello,
 
You have to understand my programming philosophy..

As you have noticed in my previous posts, I was talking about
"reliability". I think that C and C++ programmers don't understand the
philosophy of Ada and modern Object Pascal: Ada and modern Object
Pascal are brothers, and they give much more "importance" to
"reliability" than C or C++, because from the start they were designed
for more reliability than C++ or C. I love Ada because I love modern
Object Pascal; the modern Object Pascal of Delphi and FreePascal gives
a "decent" level of reliability, while Ada is more restrictive so as to
reach a higher level of reliability! I am still studying Ada and Rust
to find out which is better, and I will come back with more information
about that. And why am I coding in the modern Object Pascal of Delphi
and FreePascal? I am using Delphi and FreePascal (in Delphi mode)
because they have become RAD, which is good for productivity, and they
have become more powerful; and since Delphi and FreePascal are
conservative compilers, they have a "decent" performance, a "decent"
reliability and a "decent" portability. This is why I love them. I also
love Ada because it teaches me "higher" reliability; this is why I am
also following what is happening in Ada and Rust.

Please read the rest of my previous posts below to know my thoughts
better:
 
C++ is out of talk

I will not waste my time with C++, because from the start C++ was
handicapped: it inherited the deficiencies of C, as I have explained in
my previous posts. I think C is "not" a good programming language,
because it is "too" weakly typed and it allows implicit type
conversions that are bad for reliability, which looks like the mess of
assembler; C was "too" low level for reliability, and since C++
inherited from C, C++ inherited those too-low-level parts that are not
good for reliability. So I will not waste my time with C++ or with C,
and I will continue to code in the "modern" Object Pascal of Delphi and
FreePascal, which is more conservative, because it has a "decent"
reliability and a "decent" performance, and the Delphi and FreePascal
compilers are "powerful" today. And I will also work with "Java",
because Mono is not keeping pace with the development of C# and is not
as portable as Java.
 
And here is what I wrote about C++ and Delphi and FreePascal and Ada:
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not
always equate to energy efficiency. Compiled languages like C, C++,
Rust, and Ada ranked as some of the most energy-efficient languages out
there.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow, relative to CPUs
 
And "memory" usage efficiency is important for mobile devices.
 
So the Delphi and FreePascal compilers are also still "useful" for
mobile devices, because Delphi and FreePascal are good if you are
considering time and memory, or energy and memory. The following
Pascal benchmark was done with FreePascal, and it shows that C, Go and
Pascal do rather better if you're ranking languages by time and memory,
or by energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
Also, Delphi is still better for many things, and you have to get more
"technical" to understand that; this is why you should look at the
following video about Delphi, which is more technical:
 
Why are C# Developers choosing Delphi to create Mobile applications
 
https://www.youtube.com/watch?v=m8ToSr4zOVQ
 
 
And I think there is still a "big" problem with C++ and C..

Look at C++ explicit conversion functions: they were introduced in
C++11, but they do not come by "default" in C++ as they do in the
modern Object Pascal of Delphi and FreePascal and in Ada, because in
C++ you have to write the explicit conversion functions yourself, and
that is not good for reliability. C++ also does not come by "default"
with range checking or with run-time checks that catch conversion from
negative signed to unsigned and arithmetic overflow; you have, for
example, to add and use the SafeInt library for that. And C++ does not
by "default" catch out-of-bounds indices of dynamic and static arrays.
This is why C++ is not good for reliability.

But Delphi and FreePascal, like Ada, come with range checking and
run-time checks that catch conversion from negative signed to
unsigned, out-of-bounds indices of dynamic and static arrays,
arithmetic overflow, etc., and you can also catch the ERangeError
exception dynamically; and they do not allow those bad implicit type
conversions of C++ that are not good for reliability.
 
And you should carefully read the following; it is very important:
 
https://critical.eschertech.com/2010/07/07/run-time-checks-are-they-worth-it/
 
 
And about Escher C++ Verifier, read carefully:
 
"Escher C Verifier enables the development of formally-verifiable
software in a subset of C (based on MISRA-C 2012)."
 
Read here:
 
http://www.eschertech.com/products/index.php
 
 
So it verifies just a "subset" of C, and that is not good for C++,
because for programs that are not written in that subset of C it
cannot, for example, perform run-time checks. So we run again into the
problem that C++ and C do not have range checking and many run-time
checks, and that is not good, because it is not good for reliability
and it is not good for safety-critical systems.


So, for all the reasons above, I think I will stop coding in C++ and
I will quit C++.
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Jun 06 04:40PM -0400

Hello,
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Jun 06 03:38PM -0400

Hello,
 
More precision; read again..

This is my last post here on the C++ and C forums, because
I will quit C++ and stop coding in C++..
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Jun 06 03:16PM -0400

Hello,
 
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Ian Collins <ian-news@hotmail.com>: Jun 06 02:52PM +1200

On 06/06/18 02:19, Dan Cross wrote:
 
> My apologies if that wasn't clear.
 
> That said, I have yet to see a strong argument that TDD is the
> most efficient way to get a robust body of tests.
 
From my personal experience, the best argument has been the dislike
programmers have for writing tests for the sake of writing tests. I have
had a number of colleagues who would, if they could get away with it,
flat out refuse to write tests; that, to them, was the testers' job.
They were, however, quite happy to adopt TDD, because there the tests
were part of writing the code... Go figure!
 
> On a side note, I'm curious what your thoughts are on this:
> https://pdfs.semanticscholar.org/7745/5588b153bc721015ddfe9ffe82f110988450.pdf
 
I'll parse and comment back later.
 
--
Ian.
Paavo Helde <myfirstname@osa.pri.ee>: Jun 06 02:51PM +0300

On 6.06.2018 1:02, Vir Campestris wrote:
> Jeffreys et. al. is the turn-it-up-to-11 Extreme Programming thing -
> where you have to have absolute faith that your units tests will catch
> every bug.
 
Unit tests cannot catch every bug, because for that the unit tests
would have to be effectively bug-free themselves, which is hard to
achieve given that unit tests are typically not tested themselves.

For example, some weeks ago I discovered a buggy unit test which had
been in our test suite for over 10 years. It was testing that some
particular operations fail as expected, and this seemed to work fine
for years. Alas, it turned out that it was not the operations that were
failing: part of the test itself was buggy and failed every time,
regardless of which operation it tested. Moreover, after fixing the
test, it turned out that half of the operations it tested should
actually succeed instead of failing. So that was one buggy unit test.
 
Also, given that the unit test must run in a limited time, it can only
test a finite set of parameters. So, a particularly mischievous "unit"
could just play back recorded responses for this finite set during
testing and fail miserably whenever called with other parameters. See
also: VW exhaust testing; see also: self-learning AI.
"Öö Tiib" <ootiib@hot.ee>: Jun 06 09:42AM -0700

On Wednesday, 6 June 2018 14:51:31 UTC+3, Paavo Helde wrote:
> which operation it tested. Moreover, after fixing the test it appeared
> that half of the operations it tested should actually succeed instead of
> failing. So that was one buggy unit test.
 
We have to make it a habit at least to code-review the unit tests. It
happens quite often that a unit is fed mock data that is logically
inconsistent (typically because of incomplete edits after copy-paste),
and the test then checks that the unit gives nice results. Then, when
someone corrects the unit to sanity-check its input, the unit test
breaks.

The thing that works is quality-driven development. Unit tests are a
good tool, but people who push them as silver bullets against numerous
(all) problems should be treated like all the other snake oil salesmen.
legalize+jeeves@mail.xmission.com (Richard): Jun 06 04:43PM

[Please do not mail me a copy of your followup]
 
Ian Collins <ian-news@hotmail.com> spake the secret code
>programmers have for writing tests for the sake of writing tests. I have
>had a number of colleagues who would, if they could get away with it,
>flat out refuse to write tests, that was the testers job.
 
This is what I meant when I referred to writing tests after the
implementation as feeling like a "developer tax". I'd already written
my code and debugged it and the test was at that point of no benefit
to me as a developer. It's overhead.
 
>They were
>however quite happy to adopt TDD because the tests were part of writing
>the code... Go figure!
 
...because when I write the test first, I get something out of it as a
developer and it's helping me write correct code (and therefore
spending less time in debugging sessions).
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
legalize+jeeves@mail.xmission.com (Richard): Jun 06 04:47PM

[Please do not mail me a copy of your followup]
 
Paavo Helde <myfirstname@osa.pri.ee> spake the secret code
>> Jeffreys et. al. is the turn-it-up-to-11 Extreme Programming thing -
>> where you have to have absolute faith that your units tests will catch
>> every bug.
 
I can't speak for Jeffries, but this subtle "catch *every* bug"
phrasing sounds like you're overstating the case. I've never heard
any TDD (or unit testing) advocate suggest that *every* bug will be
caught.
 
It's easy to dismiss UT/TDD advocates when you overstate their position.
 
>could just play back recorded responses for this finite set during
>testing and fail miserably whenever called with other parameters. See
>also: VW exhaust testing; see also: self-learning AI.
 
Unit tests are great for testing control flow logic. They're not so
good at things like numerical computation where the input domain is
huge and it's infeasible to test every combination of inputs and
verify the calculation for the outputs. You can use numerical insight
into the computation ("white box" testing) to sprinkle example inputs
at interesting points, but ultimately example-based testing isn't
going to give high confidence. I think this is where approaches like
fuzz testing or property-based testing have an advantage over example
based testing.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
"Öö Tiib" <ootiib@hot.ee>: Jun 06 10:06AM -0700

On Wednesday, 6 June 2018 19:47:52 UTC+3, Richard wrote:
> any TDD (or unit testing) advocate suggest that *every* bug will be
> caught.
 
> It's easy to dismiss UT/TDD advocates when you overstate their position.
 
We see the damage that has been done by such positions in industry.

It is typical that if you send a good quality-assurance specialist for
a week to check the product of a team that has worked with the dream of
silver-bullet unit tests (that assure quality), then the release will
slip by a month or two, because architectural-level issues have crept
in unnoticed.

Because of that damage, it is sometimes difficult to manage the
expectations of the stakeholders involved. It is irrelevant what the
Jeffrieses and Uncle Bobs meant; what matters is how the pointy-haired
fatsos heard it.
wyniijj@gmail.com: Jun 05 10:30PM -0700

struct B {
    B() {}
};

class AList {
    B *m_ptr;
public:
    AList() : m_ptr(0) {}
    const B* ptr() const { return m_ptr; }
    B* ptr() { return m_ptr; }
};

int main()
{
    AList aa;
    B** bpp = &aa.ptr();  // how to resolve this?

    return 0;
}
 
$g++ test.cpp
test.cpp:16:18: error: lvalue required as unary '&' operand
B** bpp=&aa.ptr();
^
----------
Bo Persson <bop@gmb.dk>: Jun 06 07:53AM +0200

> B** bpp=&aa.ptr();
> ^
> ----------
 
Why do you need this in the first place? Storing pointers to pointers is
not all that useful.
 
Anyway, the problem is that the ptr() call returns a temporary which
goes away at the ';'. Taking the address of something that immediately
goes away is not very useful.
 
If you must have a pointer to a pointer, you have to store the original
pointer first:
 
B* ptr = aa.ptr();
B** bpp = &ptr;
 
 
Bo Persson
Barry Schwarz <schwarzb@dqel.com>: Jun 05 11:07PM -0700

> AList() : m_ptr(0) {};
> const B* ptr() const { return m_ptr; };
> B* ptr() { return m_ptr; };
 
Why do you have two versions of ptr()?
 
>{
> AList aa;
> B** bpp=&aa.ptr(); // how to resolve this?
 
ptr() returns a value, not an object. It makes no sense to take
the address of a value. Think about
    int *p = &2;  // error: cannot take the address of a literal
 
If you want to store the address of the member m_ptr of a particular
object of type AList, you will need a member or friend function that
evaluates that address. Alternately, you could make m_ptr public and
then code
bpp = &aa.m_ptr;
 
The usual reason for a data member to be private is to prevent a user
from manipulating it except by provided interfaces. If bpp contains
the address of some m_ptr, then you have lost control of that object.
The usual approach is to have a get and a set member function that
will access the object for you (and perform the appropriate validity
checks).
 
 
--
Remove del for email
"Fred.Zwarts" <F.Zwarts@KVI.nl>: Jun 06 09:22AM +0200

wrote in message
news:b9872b81-a24e-4c06-b417-9bdd85421807@googlegroups.com...
> B** bpp=&aa.ptr();
> ^
>----------
 
Change
    B* ptr() { return m_ptr; };
into
    B*& ptr() { return m_ptr; };

bpp will now receive the address of m_ptr.
 
Whether it is wise to expose the address of a private member is another
question.
wyniijj@gmail.com: Jun 06 12:37AM -0700

F.Zwarts wrote on Wednesday, June 6, 2018 at 15:22:40 UTC+8:
 
> bpp will now receive the addres of m_ptr.
 
> Whether it is wise to expose the address of a private member is another
> question.
 
Thanks. That works.
"Fred.Zwarts" <F.Zwarts@KVI.nl>: Jun 06 10:44AM +0200

"Barry Schwarz" wrote in message
news:sfuehd1rnj6c9q5chbdki04vuek5avqiq4@4ax.com...
>> const B* ptr() const { return m_ptr; };
>> B* ptr() { return m_ptr; };
 
>Why do you have two versions of ptr()?
 
These are different functions. Note the "const": one is used for const
objects, the other for non-const objects.

const AList Alconst;
const B* Bl = Alconst.ptr(); // Will use the const version.

AList Al;
B* Cl = Al.ptr(); // Will use the non-const version.
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 06 07:02AM -0400

> B** bpp=&aa.ptr();
> ^
> ----------
 
Here are two alternative solutions. Whether either of them is suitable
depends upon the reason why you need bpp:
 
B** ptrptr() { return &m_ptr; }
...
B** bpp = aa.ptrptr();
 
Note that, functionally, this is equivalent to Fred Zwarts' suggestion.
Alternatively,
 
B* bp = aa.ptr();
B** bpp = &bp;
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 06 03:34PM +0200

> test.cpp:16:18: error: lvalue required as unary '&' operand
> B** bpp=&aa.ptr();
> ^
 
Eventually you'll advance to three star programmer level, and these
early problems will seem trivial.
 
<url: http://wiki.c2.com/?ThreeStarProgrammer>
 
 
Cheers!,
 
- Alf
Sky89 <Sky89@sky68.com>: Jun 05 09:15PM -0400

Hello,
 
 
I think C++ is the future. I have learned the Delphi and FreePascal
dialects of modern Object Pascal and I have coded in C++, and I think I
now understand C++ better; I find C++ "easy" because I am an
experienced computer programmer. And since I am experienced with modern
Object Pascal, which is good on readability, modern Object Pascal has
taught me to write "clear" and less cryptic code; an important
requirement is to write clear and less cryptic code, and this is why I
am well placed to write clear C++ code that eases maintenance and
enhances reliability. Also, in the last days I have thought about those
implicit type conversions in C++ that were inherited from C, and I said
to myself that they are not good for reliability; but as I learned more
C++, I discovered that you can control and disallow those implicit type
conversions, as I showed you in my previous post, using operator
overloading etc. I have also thought about bounds checking on plain
arrays: C++ lacks bounds checking on plain arrays, and this is not good
for reliability, but I discovered that you can solve this problem by
using STL vectors, which perform bounds checking when the .at() member
function is called. I have also thought about integer overflow and
underflow and conversion from negative signed to unsigned, and I have
shown you that you can solve those problems with C++ SafeInt here:

https://github.com/dcleblanc/SafeInt


So I have finally come to the conclusion that C++ is really powerful,
and I have decided that I will continue to bring the best to C++..


And here is what I wrote in my previous posts about that; read them
carefully to understand me better:
 
I have thought more about C++, and I think C++ is really powerful,
because STL vectors perform bounds checking when the .at() member
function is called; and for integer overflow or underflow, conversion
from negative signed to unsigned, or more efficient strict type
safety, here is how to do it with SafeInt:

https://github.com/dcleblanc/SafeInt


I have just written the following program using SafeInt; please look at
it and try it, and you will notice that C++ is powerful:
 
===
 
#include "SafeInt.hpp"
#include <climits>
#include <iostream>
#include <sstream>
#include <stdexcept>
using namespace std;

class my_exception : public std::runtime_error {
    std::string msg;
public:
    my_exception(const std::string &arg, const char *file, int line)
        : std::runtime_error(arg) {
        std::ostringstream o;
        o << file << ":" << line << ": " << arg;
        msg = o.str();
    }
    ~my_exception() throw() {}
    const char *what() const throw() {
        return msg.c_str();
    }
};

#define throw_line(arg) throw my_exception(arg, __FILE__, __LINE__)

class CMySafeIntException : public SafeIntException
{
public:
    static void SafeIntOnOverflow()
    {
        cout << "Caught a SafeInt Overflow exception!" << endl;
        throw_line("SafeInt exception");
    }
    static void SafeIntOnDivZero()
    {
        cout << "Caught a SafeInt Divide By Zero exception!" << endl;
        throw_line("SafeInt exception");
    }
};

void a1(SafeInt<unsigned __int8, CMySafeIntException> a)
{
    cout << (int)a << endl;
}

int main()
{
    try {
        unsigned __int8 i1 = 250;
        unsigned __int8 i2 = 150;
        SafeInt<unsigned __int8, CMySafeIntException> si1(i1);
        SafeInt<unsigned __int8, CMySafeIntException> si2(i2);
        // 250 + 150 overflows an unsigned 8-bit value, so this throws:
        SafeInt<unsigned __int8, CMySafeIntException> siResult = si1 + si2;
        cout << (int)siResult << endl;

        a1(-1);  // the negative-to-unsigned conversion is caught too
    }
    catch (const std::runtime_error &ex) {
        std::cout << ex.what() << std::endl;
    }
}
 
 
====
 
 
 
Also, I have said before that C++ allows some "implicit" type
conversions and that this is not good; but I have learned more C++ and
now I understand it better, and I think C++ is really powerful!
You can "control" implicit conversions (that is, disallow them) by
doing the following in C++; look carefully at the following C++ code,
which you can extend:
 
 
===
 
#include <iostream>
#include <stdexcept>

struct controlled_int {
    // allow creation from int
    controlled_int(int x) : value_(x) {}
    controlled_int& operator=(int x) { value_ = x; return *this; }

    // disallow assignment from bool; you might want to make this a
    // compile-time error with BOOST_STATIC_ASSERT instead
    controlled_int& operator=(bool) {
        std::cout << "Invalid assignment of bool to controlled_int"
                  << std::endl;
        throw std::logic_error("invalid assignment of bool to controlled_int");
    }

    // creation from bool shouldn't happen silently
    explicit controlled_int(bool b) : value_(b) {}

    // conversion to int is allowed
    operator int() { return value_; }

    // conversion to bool errors out; you might want to make this a
    // compile-time error with BOOST_STATIC_ASSERT instead
    operator bool() {
        std::cout << "Invalid conversion of controlled_int to bool"
                  << std::endl;
        throw std::logic_error("invalid conversion of controlled_int to bool");
    }

private:
    int value_;
};

int main()
{
    controlled_int a(42);

    // This is rejected at run time:
    // bool b = a;
    // This is rejected as well:
    try {
        a = true;
    } catch (const std::logic_error &ex) {
        std::cout << "Caught: " << ex.what() << std::endl;
    }

    std::cout << "Size of controlled_int: " << sizeof(a) << std::endl;
    std::cout << "Size of int: " << sizeof(int) << std::endl;

    return 0;
}
 
 
===
 
 
 
And as you have noticed, I have invented my C++ synchronization objects
library for Windows and Linux here:
 
https://sites.google.com/site/scalable68/c-synchronization-objects-library
 
And I have invented my Scalable Parallel C++ Conjugate Gradient Linear
System Solver Library for Windows and Linux here:
 
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
My next invention, coming soon, is the following:
 
I am finishing a C++ implementation of "scalable" reference counting
with "scalable" implementations of shared_ptr and weak_ptr, because
the implementations in Boost and the C++ standard library are not
scalable. I will bring you my new scalable algorithms soon.
 
 
So stay tuned !
 
 
Thank you,
Amine Moulay Ramdane.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jun 06 03:01AM -0700

You would do yourself benefit to seek God and listen to His
advice, and change your ways:
 
https://www.biblegateway.com/passage/?search=Proverbs+18%3A13&version=KJV
 
Proverbs 18:13 -- He that answereth a matter before he
heareth it, it is folly and shame unto him.
 
You repeatedly post in haste, only to later correct yourself.
And:
 
https://www.biblegateway.com/passage/?search=James+1%3A19&version=KJV
 
James 1:19 -- Wherefore, my beloved brethren, let every man
be swift to hear, slow to speak, slow to wrath:
 
You post things before (1) knowing enough about the
subject matter, (2) with grammar errors indicating it's
an improperly considered reply that is too hasty, and (3) you do
not seek to learn from others but only espouse your current
thinking on things, which you've demonstrated repeatedly is
wrong at least as often as it's right. You also try to compare
other things to C++ which are not even in the same league.
 
You are harming yourself, Ramine. Repeatedly.
 
Slow down. Ask questions. Breathe. Pause. And above all
seek the truth. Don't respond in the moment. Give yourself
at least 10 minutes to consider your post BEFORE posting.
 
Don't let the enemy destroy your credibility by using you in
a manner that is against God's guidance. Seek God and let
Him move you from where you are to where you should be.
Let Him be your schoolmaster. You'll find Him amazing, Ramine.
All His ways lead to love manifesting, and people helping people.
 
That's only God the Father, Son, and Holy Spirit, Ramine.
None other. No great deceivers.
 
--
Rick C. Hodgin
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jun 06 03:24AM -0700

Ramine, I have tried to reply to you privately in email, but
you post with an invalid email address.
 
Why?
 
--
Rick C. Hodgin
Sky89 <Sky89@sky68.com>: Jun 05 08:13PM -0400

Hello,
 
Now that I have understood more of C++, I see that it is really powerful;
look at my previous post about C++ implicit type conversions to notice it.
I have also said that STL vectors perform bounds checking when the
.at() member function is called, and for integer overflow or underflow,
or for more efficient strict type safety, here is how to do it with C++
SafeInt:
 
https://github.com/dcleblanc/SafeInt
 
So C++ is safe and really powerful, and I will continue to bring the
best to C++..
 
So as you have noticed, I have invented my C++ synchronization objects
library for Windows and Linux here:
 
https://sites.google.com/site/scalable68/c-synchronization-objects-library
 
And I have invented my Scalable Parallel C++ Conjugate Gradient Linear
System Solver Library for Windows and Linux here:
 
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
My next invention, coming soon, is the following:
 
I am finishing a C++ implementation of "scalable" reference counting
with "scalable" implementations of shared_ptr and weak_ptr, because
the implementations in Boost and the C++ standard library are not
scalable. I will bring you my new scalable algorithms soon.
 
 
So stay tuned !
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: Jun 05 08:00PM -0400

Hello,
 
I think C++ is really powerful, read the following:
 
I have said before that C++ allows some "implicit" conversions
and that is not good, but I have learned more C++ and now I
understand it better, and I think C++ is really powerful,
because you can "control" implicit conversions (that is, disallow
implicit conversions) by doing the following in C++; look carefully at
the following C++ code, which you can extend:
 
 
===
 
#include <iostream>
#include <stdexcept>

struct controlled_int {
  // allow creation from int
  controlled_int(int x) : value_(x) { }
  controlled_int& operator=(int x) { value_ = x; return *this; }

  // disallow assignment from bool; you might want to use
  // BOOST_STATIC_ASSERT instead
  controlled_int& operator=(bool) {
    std::cout << "Exception: Invalid assignment of bool to controlled_int"
              << std::endl;
    throw std::logic_error("Invalid assignment of bool to controlled_int");
  }

  // creation from bool shouldn't happen silently
  explicit controlled_int(bool b) : value_(b) { }

  // conversion to int is allowed
  operator int() { return value_; }

  // conversion to bool errors out; you might want to use
  // BOOST_STATIC_ASSERT instead
  operator bool() {
    std::cout << "Invalid conversion of controlled_int to bool" << std::endl;
    throw std::logic_error("Invalid conversion of controlled_int to bool");
  }

private:
  int value_;
};

int main()
{
  controlled_int a(42);

  // This is rejected at run time:
  // bool b = a;
  // This is rejected at run time as well:
  try {
    a = true;
  } catch (const std::logic_error &ex) {
    std::cout << ex.what() << std::endl;
  }

  std::cout << "Size of controlled_int: " << sizeof(a) << std::endl;
  std::cout << "Size of int: " << sizeof(int) << std::endl;

  return 0;
}
 
 
===
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
