Thursday, June 14, 2018

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

boltar@cylonHQ.com: Jun 14 09:58AM

On Wed, 13 Jun 2018 11:49:24 -0400
>> windows.
 
>split window was added to ios several years ago, however, it doesn't
>make a lot of sense to have more than two windows on a tablet and
 
Doesn't it? I had 3 open on my tablet last night. Don't confuse your own use
cases with everyone else's.
 
>certainly not a phone.
 
Can't disagree there, unless the phone supports a separate monitor.
 
>nevertheless, *four* apps can currently be run at the same time on an
>ipad, although it's not particularly useful:
><https://photos5.appleinsider.com/gallery/21608-24920-IMG_0391-l.jpg>
 
Wow, so iOS has reached Windows 1.0 functionality levels. Will wonders never
cease?
 
>the key is that a tablet is *not* a laptop. it's good for a *different*
>set of tasks than a laptop, although there is some overlap.
 
Unless it's a large tablet with a bluetooth keyboard.
 
 
>false.
 
>nothing on the android side comes anywhere close to an ipad pro with an
>apple pencil.
 
LOL, yeah ok :)
boltar@cylonHQ.com: Jun 14 10:00AM

On Wed, 13 Jun 2018 11:49:25 -0400
>> objective-C which should have been strangled at birth).
 
>apple's prices are not absurd, the desktop market is shrinking and they
>aren't interested in the server market, which is even smaller.
 
It might be smaller, but the margins are much healthier than desktop.
 
 
>mobile is the future.
 
Sure it is. All development and other types of content creation will one day be
done on a pokey 3 inch screen and a virtual keyboard. Right.
Rosario19 <Ros@invalid.invalid>: Jun 14 11:55AM +0200

On Wed, 13 Jun 2018 20:01:34 +0100, Chris Vine wrote:
>> ... // code using p
>> ~T::p(); // call manually the default destructor for p
>> }
 
i imagined
 
if(p)
{T::p();// call manually the default void arg constructor for p
... // code using p
~T::p(); // call manually the default destructor for p
}
 
or
if(p)
{p.();// call manually the default void arg constructor for p
... // code using p
p.~(); // call manually the default destructor for p
}
 
but they are wrong

>> on memory?
 
>You should use placement new:
>https://en.cppreference.com/w/cpp/language/new#Placement_new
 
thank you very much; it appears I don't know the C++ idioms, so it would be this:
 
T *p=malloc(sizeof(T));
if(p)
{p->T(); // call manually the default void arg constructor for p
... // code using p and *p
p->~T();// call manually the default destructor for p
}
 
and the system does not call them automatically... this can be useful for building
arrays of type T using malloc
Rosario19 <Ros@invalid.invalid>: Jun 14 11:57AM +0200

On Thu, 14 Jun 2018 11:55:49 +0200, Rosario19 wrote:
 
 
> ... // code using p and *p
> p->~T();// call manually the default destructor for p
> }
 
 
T *p=malloc(sizeof(T));
if(p)
{p->T(); // call manually the default void arg constructor for p
... // code using p and *p
p->~T();// call manually the default destructor for p
free(p);
}
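For reference, the placement-new idiom from the cppreference page Chris Vine linked looks like this (a minimal sketch; `T` here is just a stand-in default-constructible type for illustration):

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

struct T {
    int value = 42;
};

int demo() {
    void* mem = std::malloc(sizeof(T));
    if (!mem)
        return -1;
    T* p = new (mem) T;   // placement new: run the constructor in the raw storage
    int v = p->value;     // ... code using p and *p
    p->~T();              // explicit destructor call
    std::free(mem);       // release the raw storage
    return v;
}
```

The member-function-style `p->T()` constructor call is not valid C++; placement new is the only way to run a constructor on storage that has already been allocated.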
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 13 07:30PM -0400

On 06/13/2018 09:23 AM, Daniel wrote:
> On Wednesday, June 13, 2018 at 5:17:54 AM UTC-4, Ian Collins wrote:
...
>> only specification
 
> Robert Martin has, or as near as, check out the old discussions in
> comp.software.extreme-programming.
 
You know what you're referring to - anyone else would have to guess. If
you could identify a specific message in which he made such a claim,
that would make it a lot easier for other people to check it out and
either agree with you or explain to you how you've misinterpreted his words.
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:26PM -0700

The links you posted made interesting reading. Thank you
for posting them.
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:29PM -0700


> On a side note, I'm curious what your thoughts are on this:
> https://pdfs.semanticscholar.org/7745/5588b153bc721015ddfe9ffe82f110988450.pdf
 
Another very interesting read. Keep those good links a comin'.
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:41PM -0700

> they have said, in particular, what Robert Martin has said, is that
> specifications can be fully expressed by tests, and that the tests can fully
> validate the specifications thus expressed.
 
Do you mean verify rather than validate? The tests verify (ie, when
they pass) that the implementation conforms for those tests, but do
not validate the tests themselves. To validate a specification (be
it expressed in tests or otherwise) requires the client to agree that
the specification expresses what they want.
 
> This would fall under the
> heading of being a bold claim, and, in my opinion, one that cannot be
> substantiated.
 
The idea that any finite set of tests can fully verify a function
with an unbounded input is, to put it bluntly, wrong.
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:43PM -0700


>> Are you suggesting that TDD is the one and only way to write code
>> that you know "works"?
 
> No.
 
FWIW that is how the statement struck me when I read your
posting. Just fyi.
Daniel <danielaparker@gmail.com>: Jun 13 06:49PM -0700

On Wednesday, June 13, 2018 at 2:28:09 PM UTC-4, Paavo Helde wrote:
> drift. But that's easy to fix: if it hurts, don't do that! There is no
> reason why a number should be converted from binary to decimal and back
> thousands of times.
 
Paavo, I don't think you understand the problem with the naive floating
point conversion code that I posted. Consequently, I don't think your
comment makes sense :-) Let me post a full example, using the gason string
to double implementation (but the same difficulties apply to all naive
conversions including sajson and pjson):
 
#include <iostream>
#include <string>
#include <stdlib.h>
#include <ctype.h> // for isdigit, used by string2double below
#include <iomanip>
#include <limits>
 
// From https://github.com/vivkin/gason
static double string2double(char *s, char **endptr) {
    char ch = *s;
    if (ch == '-')
        ++s;

    double result = 0;
    while (isdigit(*s))
        result = (result * 10) + (*s++ - '0');

    if (*s == '.') {
        ++s;
        double fraction = 1;
        while (isdigit(*s)) {
            fraction *= 0.1;
            result += (*s++ - '0') * fraction;
        }
    }

    if (*s == 'e' || *s == 'E') {
        ++s;
        double base = 10;
        if (*s == '+')
            ++s;
        else if (*s == '-') {
            ++s;
            base = 0.1;
        }

        unsigned int exponent = 0;
        while (isdigit(*s))
            exponent = (exponent * 10) + (*s++ - '0');
        double power = 1;
        for (; exponent; exponent >>= 1, base *= base)
            if (exponent & 1)
                power *= base;
        result *= power;
    }

    *endptr = s;
    return ch == '-' ? -result : result;
}
 
void test1()
{
    char* endp = nullptr;
    char value[] = "123456789.12345678"; // writable array: string literals are const in C++11

    double d1 = string2double(value, &endp);
    double d2 = strtod(value, &endp);

    int precision = std::numeric_limits<double>::max_digits10; // 17

    std::cout << value << ",\t" << std::setprecision(precision) << d1 << ",\t" << d2 << std::endl;
}

void test2()
{
    char* endp = nullptr;
    char value[] = "1.15507e-173";

    double d1 = string2double(value, &endp);
    double d2 = strtod(value, &endp);

    int precision = std::numeric_limits<double>::max_digits10; // 17

    std::cout << value << ",\t" << std::setprecision(precision) << d1 << ",\t" << d2 << std::endl;
}

void test3()
{
    char* endp = nullptr;
    char value[] = "42.229999999999997";

    double d1 = string2double(value, &endp);
    double d2 = strtod(value, &endp);

    int precision = std::numeric_limits<double>::max_digits10; // 17

    std::cout << value << ",\t" << std::setprecision(precision) << d1 << ",\t" << d2 << std::endl;
}

void test4()
{
    char* endp = nullptr;
    char value[] = "13.449999999999999";

    double d1 = string2double(value, &endp);
    double d2 = strtod(value, &endp);

    int precision = std::numeric_limits<double>::max_digits10; // 17

    std::cout << value << ",\t" << std::setprecision(precision) << d1 << ",\t" << d2 << std::endl;
}

int main()
{
    std::cout << "Input,\t\t\tgason,\t\t\tstrtod" << std::endl;
    test1();
    test2();
    test3();
    test4();
}
 
Output:
 
Input, gason, strtod
123456789.12345678, 123456789.12345678, 123456789.12345678
1.15507e-173, 1.1550700000000217e-173, 1.15507e-173
42.229999999999997, 42.230000000000011, 42.229999999999997
13.449999999999999, 13.450000000000001, 13.449999999999999
 
Notice that the C function strtod round trips for the specified precision,
which I've set to the number of base-10 digits needed to uniquely represent
all distinct double values. gason, on the other hand, fails to round trip in
three of my (carefully chosen!) test cases because of cumulative errors in the naive code. Many years ago, popular C/C++ compilers including Visual C++ and
GNU also failed some round trip tests, but today you can be certain that
they get that right.
 
As the author of an open source json thing, I can assure you that if
floating point conversions did not round trip, users would report defects in
the issues log (as they have for sajson, for example), and with a naive converter there's no way to fix them. Bad to have that in the issues log.
 
Best regards,
Daniel
Daniel <danielaparker@gmail.com>: Jun 13 07:24PM -0700

On Wednesday, June 13, 2018 at 7:30:14 PM UTC-4, James Kuyper wrote:
 
> > Robert Martin has, or as near as, check out the old discussions in
> > comp.software.extreme-programming.
 
> You know what you're referring to - anyone else would have to guess.
 
Or do a search on the newsgroup I referred to.
 
> If you could identify a specific message in which he made such a claim,
> that would make it a lot easier for other people to check it out and
> either agree with you or explain to you how you've misinterpreted his words.
 
Here are a few quotes. These are in the context of XP; if you search
around, you should be able to find more in the context of TDD. These aren't
the ones I was specifically looking for, I was looking for an exchange I
had with him when he was at his most extreme about the tests *being* the
specification. Unfortunately he used quotes credited to me in his sig,
so it's very difficult for me to find anything substantive searching on my
name and "Robert Martin".
 
https://groups.google.com/forum/#!searchin/comp.software.extreme-programming/robert$20martin$20specification%7Csort:date/comp.software.extreme-programming/AWHY1EkwFvw/-IwaWj7nJ0EJ
 
"Unit tests *specify* how the logic of the code should work. The
distinction between "check" and "specify" is important. In XP, the
unit tests are written before the code that they specify. The unit
tests are very low level and very details specifications of the
behavior of a module." - RM
 
"Again, it's a *specification*, not a check. The acceptance tests are
written first and act as the true requirements."
 
>- Acceptance test check that the logic of the code match the customer
> needs
 
Again, it's a *specification*, not a check. The acceptance tests are
written first and act as the true requirements. - RM
 
https://groups.google.com/forum/#!searchin/comp.software.extreme-programming/robert$20martin$20specification%7Csort:date/comp.software.extreme-programming/wrbX0dpiFnU/LhKYllu25k4J
 
"... human language is not particularly good at capturing
the kind of precision necessary for documenting requirements. The XP
solution to this is to have the customer write acceptance tests for
each story (requirement). These acceptance tests are written in a
very high level scripting language that is designed specifically for
the application. They are executable. When run against the system
they demonstrate that the system conforms to the requirements.
 
When a requirements document is executable, and when it is run against
the system every day, it cannot lie. It is precise and unambiguous.
When the acceptance tests are written in high level scripting
langages, they are readable by developers and customers alike. They
become a very powerful and useful form of requirements documentation."
 
- RM
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 14 01:02AM -0400

On 06/13/2018 10:24 PM, Daniel wrote:
>>> comp.software.extreme-programming.
 
>> You know what you're referring to - anyone else would have to guess.
 
> Or do a search on the newsgroup I referred to
 
That's where the guess work I was talking about would come in - we have
to guess what it is that you're referring to, and try to guess what
search strings might bring it up. Unless you're sufficiently specific
about which messages you're talking about, it's always possible that
other people will only locate messages in which a more reasonable point
of view is being expressed.
Ian Collins <ian-news@hotmail.com>: Jun 14 05:04PM +1200

On 14/06/18 14:24, Daniel wrote:
> behavior of a module." - RM
 
> "Again, it's a *specification*, not a check. The acceptance tests are
> written first and act as the true requirements."
 
Isn't that what I and others have been saying? The tests are "very low
level and very detailed specifications", they are not the only
specification.
 
>> needs
 
> Again, it's a *specification*, not a check. The acceptance tests are
> written first and act as the true requirements. - RM
 
That's how I have used acceptance tests in the past, they were written
by the customer or a proxy and were the last word on the functionality,
superseding or used to update the original written specification.
 
> langages, they are readable by developers and customers alike. They
> become a very powerful and useful form of requirements documentation."
 
> - RM
 
Also true. Who better than the customer to confirm that the product
does what they have asked and paid for? This may not be adequate in a
highly specified environment such as medical software, but it is for a
large segment of the market.
 
None of the above claims TDD test to be the one and only specification.
 
--
Ian.
Ian Collins <ian-news@hotmail.com>: Jun 14 05:05PM +1200

On 14/06/18 13:43, Tim Rentsch wrote:
 
>> No.
 
> FWIW that is how the statement struck me when I read your
> posting. Just fyi.
 
It matched the tone of the OP...
 
--
Ian.
Paavo Helde <myfirstname@osa.pri.ee>: Jun 14 10:16AM +0300

On 14.06.2018 4:49, Daniel wrote:
> 1.15507e-173, 1.1550700000000217e-173, 1.15507e-173
> 42.229999999999997, 42.230000000000011, 42.229999999999997
> 13.449999999999999, 13.450000000000001, 13.449999999999999
 
And the last line, when printed out in the rounded form with "%.1f" for
the user convenience, would become:
 
13.4 13.5 13.4
 
So what? From where does the number like 13.449999999999999 come? Is it
a result of calculations or is it a user input? If it is user input,
then it is originally in string form, so store it in string form if it
needs to be json-ed or whatever. No round-trip problems. If it is a
result of calculations then it might deviate from its exact value for a
number of other reasons, so the last digits do not really matter and
discrepancies in the end results are to be expected. If string format is
needed, just print them with enough precision to not lose the signal in
the noise.
 
> three of my (carefully chosen!) test cases, because of cumulative errors in naive code. Many years ago, popular C/C++ compilers including Visual C++ and
> GNU also failed some round trip tests, but today you can be certain that
> they get that right.
 
Then it seems fortunate that I am using strtod_l() in my code and have
not tried to develop fancy alternatives. But to be honest, even so we
have had our own fights with test results coming out 13.4 on one machine
and 13.5 on another. The trick is to not round them up and to compare
them with an epsilon.
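The "compare them with an epsilon" trick can be sketched like this (one common relative-plus-absolute-tolerance formulation; the tolerance values are illustrative, not canonical):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Approximate equality: a relative tolerance scaled by the larger operand,
// with an absolute tolerance as a fallback for values near zero.
bool nearly_equal(double a, double b,
                  double rel_eps = 1e-9, double abs_eps = 1e-12) {
    double diff = std::fabs(a - b);
    if (diff <= abs_eps)          // handles a == b == 0 and tiny values
        return true;
    return diff <= rel_eps * std::max(std::fabs(a), std::fabs(b));
}
```

With this, 13.45 and 13.449999999999999 compare equal, while 13.4 and 13.5 do not.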
 
 
> As the author of an open source json thing, I can assure you that if
> floating point conversions did not round trip, users would report defects in
> the issues log (like they have for sajson, for example), and there's no way to fix a naive converter. Bad to have that in the issues log.
 
I bet these are the same people who demanded for years that Excel fixed
their "10*0.1 bug".
 
A perfect round trip would be nice, but not really essential. I can
easily see in some situations one could prefer performance instead.
 
Cheers
Paavo
Juha Nieminen <nospam@thanks.invalid>: Jun 14 09:06AM

> floating-point representation because they have different
> implementations of "text to floating-point" and "floating-point to
> text".
 
I think it's a bad idea to transmit floating point values between systems
in decimal ascii representation, because there's an enormously high chance
that information will be lost and the actual value will be different in
the different systems.
 
One way to circumvent that is to use a hexadecimal ascii representation
instead. (There might be obscure architectures out there where this still
will lose information, but at least with standard IEEE floating point
it ought to be an exact 1-to-1 representation without loss of information.)
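The hexadecimal representation is directly supported by the standard library: `printf`'s `%a` conversion emits it, and `strtod` has accepted hexadecimal floating constants since C99 (and C++11). A sketch of an exact round trip on IEEE-754 doubles:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Serialize a double in hexadecimal form ("%a") and parse it back.
// For IEEE-754 doubles this round trip is exact: no decimal rounding occurs.
double hex_round_trip(double d) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%a", d); // e.g. "0x1.ae66666666666p+3"
    return std::strtod(buf, nullptr);        // strtod parses hex floats (C99)
}
```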
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:33PM -0700

>> of 'goto', even though it's one of the simplest features to implement.
 
> With a compiler that optimises away tail recursion, functions become labels, and
> applications become gotos.
 
But with an important difference - a tail call returns from its
enclosing function right then and there, whereas a goto may
wander in the weeds inside the same function, going who knows
where. A tail call is always a very structured goto.
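The "structured goto" nature of a tail call shows up when you put a tail-recursive function next to the loop a tail-call-optimising compiler may effectively turn it into (a textbook example, not tied to any particular compiler):

```cpp
#include <cassert>

// Tail-recursive form: the recursive call is the last action taken,
// so the call can be replaced by a jump back to the top of the function.
long gcd_rec(long a, long b) {
    return b == 0 ? a : gcd_rec(b, a % b);
}

// The loop the optimisation effectively produces: the "goto" always
// targets the function entry, never an arbitrary point in the body.
long gcd_loop(long a, long b) {
    while (b != 0) {
        long t = a % b;
        a = b;
        b = t;
    }
    return a;
}
```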
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:34PM -0700

>> i think 'Object Oriented Programming' when you sue it - making
>> objects passing pointers
 
> Object-oriented programming doesn't necessitate passing objects by pointer.
 
I find this statement somewhat laughable. It's no accident
that 'this' is a pointer rather than a value.
Siri Cruise <chine.bleu@yahoo.com>: Jun 13 08:09PM -0700

In article <kfnmuvyf8z2.fsf@x-alumni2.alumni.caltech.edu>,
> enclosing function right then and there, whereas a goto may
> wander in the weeds inside the same function, going who knows
> where. A tail call is always a very structured goto.
 
Write a program with a single while loop and a single case:
 
int state := 1;
while state>0 do
case state in
1: (first executed basic block; state := n),
...
n: (nth basic block; state := m),
...
r: (final executed basic block: state := 0)
esac
od
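That pseudocode renders directly into C++; the "basic blocks" below are placeholders that just record the order in which the states fire:

```cpp
#include <cassert>

// A goto-free state machine: one loop, one switch, the next state as data.
int run_machine() {
    int state = 1;
    int trace = 0;                 // records the visit order: 1, then 2, then 3
    while (state > 0) {
        switch (state) {
        case 1: trace = trace * 10 + 1; state = 2; break; // first basic block
        case 2: trace = trace * 10 + 2; state = 3; break; // nth basic block
        case 3: trace = trace * 10 + 3; state = 0; break; // final basic block
        }
    }
    return trace;
}
```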
 
--
:-<> Siri Seal of Disavowal #000-001. Disavowed. Denied. Deleted. @
'I desire mercy, not sacrifice.' /|\
I'm saving up to buy the Donald a blue stone This post / \
from Metebelis 3. All praise the Great Don! insults Islam. Mohammed
"Öö Tiib" <ootiib@hot.ee>: Jun 13 11:09PM -0700

On Thursday, 14 June 2018 04:34:23 UTC+3, Tim Rentsch wrote:
 
> > Object-oriented programming doesn't necessitate passing objects by pointer.
 
> I find this statement somewhat laughable. It's no accident
> that 'this' is a pointer rather than a value.
 
Stroustrup did somewhere explain that 'this' is a reference that uses
pointer's syntax because reference wasn't yet invented when 'this' was
introduced into language.
Juha Nieminen <nospam@thanks.invalid>: Jun 14 09:00AM

>> Object-oriented programming doesn't necessitate passing objects by pointer.
 
> I find this statement somewhat laughable. It's no accident
> that 'this' is a pointer rather than a value.
 
I quite often use object-oriented programming, sometimes with virtual
functions and all, without handling objects by pointer, but by value.
 
'this' might act similarly to a const pointer, but it's rare to even
need to refer to it explicitly. Even when you do, it can be the only
exception.
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:11PM -0700

>> occupations from developer to witchdoctor.
 
> The fact that comparing floating point values with == is inherently
> brittle is a *fact*, and there is nothing superstitious with it.
 
I'm sorry you didn't understand the point I was making.
 
Incidentally, to be a question of fact the question must be
objectively decidable. Whether using == with floating point
values is "inherently brittle" involves subjective judgment,
because it depends on what someone thinks "inherently brittle"
means. Rational people may reasonably disagree. In such cases
the question cannot be a question of fact.
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 06:14PM -0700

> appears to be a bug in the compiler, triggered indeed by the presence
> of std::numeric_limits<double>::max() in the code (albeit the bug was
> a different and more interesting one from what I had imagined).
 
Your experience seems to have led you to a superstitious and
erroneous conclusion. The problem has nothing to do with
using std::numeric_limits<double>::max(), or using equality
comparison.
Juha Nieminen <nospam@thanks.invalid>: Jun 14 08:56AM

> actually needed calculations involve things like gigahertzes or
> picometers which would not be easily representable/convertible in that
> system.
 
The other advantage of floating point, besides the larger range, is that
the accuracy of the most significant digits doesn't depend on the magnitude
of the number. In other words, you will have about 15 (decimal) most
significant digits of accuracy regardless of whether your values are
in the magnitude of 1e1, 1e10, or 1e100, for instance.
 
(Floating point values should always be thought of as "the n most significant
digits of the actual value being represented", rather than as an absolutely
exact value. This amount is always the same, regardless of the magnitude,
sans perhaps the absolute edge cases.)
 
As for comparing floating point values with operator ==, it depends.
In some situations it's completely reliable, such as:
 
double d = 1.0;
if(d == 1.0) ...
 
That will always evaluate to true. (I sometimes get the feeling that
some people have the misconception that floating point values operate under
quantum physics and the Heisenberg uncertainty principle, in that you can
never trust them to have an exact value at any given point. Obviously
that's not the case.)
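Both halves of "it depends" fit in a few lines; the second comparison is the classic brittle case, since 0.1 has no exact binary representation and the error accumulates:

```cpp
#include <cassert>

// Assigning a representable literal and comparing with == : fully reliable.
bool exact_case() {
    double d = 1.0;
    return d == 1.0;          // always true
}

// Accumulating a non-representable decimal fraction: the sum of ten 0.1s
// drifts away from 1.0, so == fails on IEEE-754 doubles.
bool brittle_case() {
    double sum = 0.0;
    for (int i = 0; i < 10; ++i)
        sum += 0.1;
    return sum == 1.0;        // false: sum is 0.9999999999999999
}
```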
Tim Rentsch <txr@alumni.caltech.edu>: Jun 13 05:53PM -0700


>> static constexpr Bas t5();
>> };
 
> That's not a static member. It's a static function declaration.
 
Oh yes. I had forgotten this delightful ambiguity.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
