Monday, January 28, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Daniel <danielaparker@gmail.com>: Jan 28 10:05AM -0800

Suppose we have some classes,
 
struct A
{
typedef char char_type;
};
 
struct B
{
typedef wchar_t char_type;
};
 
and some functions that take instances of these classes along with a
string. Would you be inclined to write those functions like this:
 
// (*)
template <class T>
void f(const T& t, const std::basic_string<typename T::char_type>& s)
{}
 
or like this
 
// (**)
template <class T, class CharT>
typename std::enable_if<std::is_same<typename T::char_type,CharT>::value,void>::type
g(const T& t, const std::basic_string<CharT>& s)
{}
 
or some other way? Criteria are technical reasons and ease of documentation.
 
(In this construction, it is ruled out that A and B can be
specializations of a common class templated on CharT.)
 
Thanks,
Daniel
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 06:13PM

On 28/01/2019 18:05, Daniel wrote:
> {
> typedef wchar_t char_type;
> };
 
Don't use wchar_t as it isn't portable, so it's next to useless. We have
Unicode now.
 
/Flibble
 
--
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Byrne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
Daniel <danielaparker@gmail.com>: Jan 28 11:03AM -0800

On Monday, January 28, 2019 at 1:14:08 PM UTC-5, Mr Flibble wrote:
> > };
 
> Don't use wchar_t as it isn't portable so is next to useless. We have
> Unicode now.
 
In practice, char everywhere and 16 bit wchar_t on Windows only
are the only character types that currently matter for a library
writer that aspires to having users. You may regret that wchar_t
exists, but there is a substantial amount of code that has it. In
any case, feel free to substitute char16_t or char32_t, or
another type altogether, the question is the same.
 
Daniel
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 09:27PM

On 28/01/2019 19:03, Daniel wrote:
> exists, but there is a substantial amount of code that has it. In
> any case, feel free to substitute char16_t or char32_t, or
> another type altogether, the question is the same.
 
In practice, if you know what you are doing, a library writer that aspires
to having users will have as little OS specific (Windows) code in their
libraries as possible. I have removed most of the wchar_t from "neolib"
and only have one instance of wchar_t in "neoGFX" (which is much larger
than "neolib").
 
/Flibble
 
Manfred <invalid@add.invalid>: Jan 28 12:34AM +0100

On 1/27/19 10:33 PM, Daniel wrote:
> In the meantime I think it's fair to say that authors of accounting and portfolio management software have by and large abandoned C++ as their language of choice.
 
Bjarne Stroustrup is a Managing Director in the technology division of
Morgan Stanley.
I am pretty sure they use a lot of C++ in there.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 12:19AM

On 27/01/2019 21:33, Daniel wrote:
> In the meantime I think it's fair to say that authors of accounting and portfolio management software have by and large abandoned C++ as their language of choice.
 
Again with the negative vibes. Where do you get this anti-C++ stuff from?
Are you just making it up as you go along? Backup your assertions or they
can be dismissed with no further thought.
 
/Flibble
 
Daniel <danielaparker@gmail.com>: Jan 27 05:28PM -0800

On Sunday, January 27, 2019 at 6:34:55 PM UTC-5, Manfred wrote:
 
> Bjarne Stroustrup is a Managing Director in the technology division of
> Morgan Stanley.
> I am pretty sure they use a lot of C++ in there.
 
Retail or wholesale banking? I doubt they use C++ for accounting or
client account management in retail, which is the subject of my last
post. Retail is less complicated but bigger than wholesale. In
wholesale, I doubt if they use C++ for account management either.
 
In wholesale, in quantitative modelling and risk, there are many valuation
models which are analogous to the heat equation in physics, and monte
carlo simulations of risk factors that are heavily compute intensive, and
which rely on massively parallel grid calculation. These do not use fixed
decimal arithmetic, as pennies are irrelevant for these purposes, and
happily get by with doubles. The major UK based commercial product in this
space is written in C#, but does use some C++ for simulation. Internal
development in this space tends to be a mix of C# and C++.
 
Daniel
Robert Wessel <robertwessel2@yahoo.com>: Jan 28 12:18AM -0600

On Sun, 27 Jan 2019 21:48:44 +0100, "Alf P. Steinbach"
>to represent the foreign debt of the USA to the penny. It's right at the
>limit today. Next year, or in a few years, it will not suffice.
 
>Is this like a year-2000 problem or the C time problem coming up on us?
 
 
As a first order approximation, no one should be using float for
currency.
 
As for databases, any serious relational database supports "DECIMAL"
datatypes (the internal format is not relevant, and is often binary,
but the precision is specified as a number of decimal digits, plus
a decimal scaling factor).
 
https://dev.mysql.com/doc/refman/8.0/en/fixed-point-types.html
 
Languages like Java and C# have support for similar datatypes. And
that's one reason they see heavy business use - they actually can
reasonably accommodate currency, without the programmer jumping
through hoops.
Daniel <danielaparker@gmail.com>: Jan 28 06:16AM -0800

On Sunday, January 27, 2019 at 12:27:10 PM UTC-5, Manfred wrote:
> dominant: that's where they make money. So if the market wants a Decimal
> type, they'll provide such a type with /some/ internal representation,
> it doesn't matter /which/ specific representation.
 
DBMS vendors wouldn't make any money at all if they expected users to
use their vendor specific APIs, let alone vendor specific types.
That's why when new DBMS products come on the market, they generally
implement C drivers to standard or de facto standard API's, such as JDBC or
ODBC, so users can use these databases through broadly supported interfaces.
In Java and C#, these interfaces are quite pleasant to use, and there is an
enormous amount of tooling available around them, both open source and
commercial. In C++, they are not.
 
In C++, there are open source API's over the de facto standard ODBC,
such as nanodbc https://github.com/nanodbc/nanodbc/issues but they
are not as pleasant to use. Partly because C++ doesn't have big integer,
big decimal, or date time. (Also because C++ doesn't have introspection,
but that's a separate matter.)
 
There are some very specialized and very expensive DBMS products where
people might build with their libraries and use their own API's, but
I believe that's quite rare. When I've used Netezza in C# or C++, for
example, I've used ODBC bindings, and I think most people do.
> representation of a native type: such representation is key to
> performance and efficiency, which is the drive of the language, unlike
> for commercial vendors.
 
Of course, a language that has C++ streams cannot be said to be all about
"performance and efficiency". It is remarkable that C++ introduced an
abstraction that appears to be so inherently slow, especially for
text/number conversions. If a language does not have good abstractions that
map into user space, the things that users build will be neither performant
nor efficient.
 
> hardware level, and anyway there is no common such representation among
> architectures, the language does not provide such a type as a native
> type.
 
No architectures provide anything that maps directly into C++ streams or
filesystem directory_entry either. There is conversion and abstraction into
a form that is more convenient for the user.
 
> In this context, choosing one representation would possibly be
> efficient for one architecture, but inefficient for another one or for a
> future one.
 
If a user needs, for example, to convert a floating point number into
a text representation with the fewest decimal digits after the dot, that's not
handled in hardware either, but C++ is going to do that (with to_chars).
C++ already does a lot of things that take you away from the hardware.
I don't think big integer, big decimal and date time are any different.
 
Daniel
scott@slp53.sl.home (Scott Lurndal): Jan 28 02:26PM

>> at the application or library level.
 
 
>It raises the question of what representation common database APIs in
>the financial world use.
 
They don't use binary floating point, as a rule. On the Z-series,
it's BCD (packed decimal).
scott@slp53.sl.home (Scott Lurndal): Jan 28 02:29PM

>accounting or client account management in retail, which is the subject of
>my last post. Retail is less complicated but bigger than wholesale. In
>wholesale, I doubt if they use C++ for account management either.
 
Citi, Morgan, Credit Suisse, etc. use C++ for BS modelling amongst other things.
Very little effort is needed for simple account management, and many still
use COBOL on Z-series for that. The bulk of software development in the big
houses is now aimed at internet, apps and modelling.
 
https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model
"Öö Tiib" <ootiib@hot.ee>: Jan 28 07:08AM -0800

On Monday, 28 January 2019 16:16:45 UTC+2, Daniel wrote:
> text/number conversions. If a language does not have good abstractions that
> map into user space, the things that users build will be neither performant
> nor efficient.
 
Can you illustrate that claim of slowness with code? Perhaps you do
something odd in it.
 
C++ streams usually read and write as fast as the hardware lets them
read and write. For example on this macbook I happen to have at hand
it is about 3 GB/s read and 2 GB/s write. Only thing that seems
crappy is that std::filesystem is not supported on it. But Apple has
always been unfriendly to developers so no biggie.
scott@slp53.sl.home (Scott Lurndal): Jan 28 03:24PM


>Can you illustrate that claim of slowness with code? Perhaps you do
>something odd in it.
 
>C++ streams usually read and write how fast hardware lets these to read
 
That may, perhaps, under certain conditions, be true. However, the
overhead required by C++ streams (and/or stdio) takes cycles away from
other jobs on the system.
 
>and to write. For example on this macbook I happen to have under hand
>it is about 3 GB/s read and 2 GB/s write.
 
Provide the methodology used for these tests, including a description
of the hardware and test software. No spinning rust can provide those
datarates; you have to go to high-end NVMe SSD setups. Do include
measurements of the system overhead when compared with using
lower-overhead I/O mechanisms like mmap or O_DIRECT.
 
C++ streams, aside from the silliness of the syntax, have significant run-time
overhead (especially when sitting on top of stdio streams) (and leaving
aside the kernel buffers into which the data is first read).
Daniel <danielaparker@gmail.com>: Jan 28 07:28AM -0800

On Monday, January 28, 2019 at 9:29:49 AM UTC-5, Scott Lurndal wrote:
 
> Citi, Morgan, Credit Suiesse, etc. use C++ for BS modelling amongst other
> things.
 
I don't think they use Black-Scholes for valuing trades anymore. The closed
form Black-Scholes model is simple and inexpensive to calculate, but doesn't
adapt to changing volatility surfaces. Most quant desks use iterative finite
difference methods that model price movements and volatility movements
simultaneously, or monte carlo methods. All of it is somewhat problematic,
as it reduces to asset prices and volatility assumed to be driven by
parameterized stochastic processes. In physics, the parameters of a model
can be computed once and for all, while in finance they're recalibrated
daily, and tend to jump around. The models tend to work quite well in the
good times, basically relying on arbitrage arguments in the presence of
perfect liquidity, and fall apart in the bad, when there is no liquidity, as
seen in 2006.
 
Black-Scholes is still used in risk, though, especially for monte carlo
methods, and credit risk, where individual trades may have to be valued
10,000 times for each of say 250 time buckets. Less so in historical VaR,
though.
 
While desk quants traditionally developed in C++, much of it is now written
in C#. The vendor products tend to be written in C# or Java, with some C++
in the most compute intensive components.
 
> Very little effort is needed for simple account management, and many still
> use COBOL on Z-series for that. The bulk of software development in the
> big houses is now aimed at internet, apps and modelling.
 
There's a lot of effort in deep learning and deep neural nets, some of it
targeting fraud, and some of it the chat person that you meet when you log
onto a retail account. In the fraud area, much of the effort uses Python,
which affords a rich interface to deep neural net components. Here the
effort is more on accessing data and training the net than actual
programming.
 
And yes, internet and apps. The banks are afraid of future competition from
the technology companies, such as we see from Alibaba in China, and are
investing heavily to try to ward that off.
 
Daniel
Daniel <danielaparker@gmail.com>: Jan 28 08:36AM -0800

On Monday, January 28, 2019 at 10:08:26 AM UTC-5, Öö Tiib wrote:
> it is about 3 GB/s read and 2 GB/s write. Only thing that seems
> crappy is that std::filesystem is not supported on it. But Apple has
> always been unfriendly to developers so no biggie.
 
Reading and writing chunks of text from a C++ stream is quite fast, and
comparable to fread and fwrite. But
 
double x;
uint64_t m;
is >> x;
is >> m;
 
or
 
double y;
uint64_t n;
os << y;
os << n;
 
is inordinately slow. That's why in the category of JSON parsers, for
example, none of the reasonably performing ones use streams for that
purpose, even though streams are more convenient because they can be
initialized with the 'C' locale. Many use the C function sprintf for output
(jsoncpp) and sscanf or strtold for input (jsoncpp, nlohmann, jsoncons), and
go through the extra step of undoing the effect of whatever locale is in
effect (unfortunately the _l versions of the C functions are not available
on all platforms.) But the C functions, even with the format parsing and
backing out the locale, are measurably much faster than the stream
alternatives.
 
nlohmann recently dropped sprintf for fp output (for platforms that support
IEEE fp) and changed to a slightly modified version of Florian Loitsch's
implementation of grisu2 https://florian.loitsch.com/publications. This
improved their performance by about 2 1/2 times over what they were doing
previously with sprintf, by my benchmarks. Other JSON libraries have also
moved to grisu3, for platforms that support IEEE fp, with a backup
implementation using sprintf and strtod in the rare cases where grisu3
rejects.
 
Note that grisu2 and grisu3 have an additional advantage over the stream
and C functions, in that they guarantee the fewest decimal digits after the
dot almost always (grisu2) or always unless rejected (grisu3). Careful JSON
libraries (such as cJSON) convert doubles to string with 15 digits of
precision, then convert back to double, check if that round trips, and if not
convert to string with 17 digits of precision.
 
JSON libraries that are not cognisant of these things tend to end up with
unhappy users that express their unhappiness in the issues logs.
 
Everybody is hopeful for to_chars, but it's not widely available yet.
 
Daniel
Daniel <danielaparker@gmail.com>: Jan 28 08:41AM -0800

On Monday, January 28, 2019 at 11:36:58 AM UTC-5, Daniel wrote:
 
> Careful JSON libraries (such as cJSON) convert doubles to string with 15
> digits of precision, than convert back to text,
 
Errata: convert back to double
 
"Öö Tiib" <ootiib@hot.ee>: Jan 28 10:00AM -0800

On Monday, 28 January 2019 18:36:58 UTC+2, Daniel wrote:
> uint64_t n;
> os << y;
> os << n;
 
Oh that. That is misuse. It was perhaps meant for human interface.
One side serializes with German locale and other side blows up.
 
> on all platforms.) But the C functions, even with the format parsing and
> backing out the locale, are measurably much faster than the stream
> alternatives.
 
Typical "good case" oriented thinking. AFAIK the "number" in JSON is full
precision and does not have INFs or NaNs. So the stuff overlaps only
partially and so the standard library tools can't handle it anyway.

 
> JSON libraries that are not cognisant of these things tend to end up with
> unhappy users that express their unhappiness in the issues logs.
 
> Everybody is hopeful for to_chars, but it's not widely available yet.
 
Perhaps things like GMP or MPIR can be used with JSON. The doubles
and ints will eventually cause the JSON-discussion with some Python
script to break. Hopefully nothing does meltdown then as result.
Paavo Helde <myfirstname@osa.pri.ee>: Jan 28 08:50PM +0200

On 28.01.2019 18:36, Daniel wrote:
> go through the extra step of undoing the effect of whatever locale is in
> effect (unfortunately the _l versions of the C functions are not available
> on all platforms.)
 
When comparing with the C *_l versions C++ streams need to be imbued
with an explicit locale, otherwise it's not a fair comparison. I suspect
they would still be slower, but maybe not so much.
Daniel <danielaparker@gmail.com>: Jan 28 10:54AM -0800

On Monday, January 28, 2019 at 1:01:01 PM UTC-5, Öö Tiib wrote:
 
> Oh that. That is misuse. It was perhaps meant for human
> interface.
> One side serializes with German locale and other side blows up.
 
stringstream provides exactly the same capabilities as the C
functions, including precision and formatting, except it's easier
to take control of locale. If you go through the issues logs
of the libraries that use the "C" functions, you'll find
most of them have had issues with the German/Spanish/whatever
locale, and had to fix it. Sometimes they've regressed. It's just
that the streams are so slow. I've benchmarked "os << d" against
a C implementation by David Gay on netlib which is accepted as
being correct, and the stream implementation was 40 times slower.
I tried implementing my own versions of istringstream and
ostringstream, with a buffer that could be reused, and fewer
conditionals, but it only increased performance by a factor of 2,
so not worth it.
 
 
> Typical "good case" oriented thinking. AFAIK the "number" in JSON is full
> precision
 
The JSON specification is ambiguous about that, but quality of
implementation considerations suggest it should be full
precision, but it should also be a number with the fewest decimal
digits after the dot that represents the number. Nobody wants
to see 17 digits after the decimal point when a shorter version
represents the same number. Using streams or the C functions,
finding the shortest representation requires a round trip test,
starting with less precision and going with full precision when
that fails. Most libraries that use the C functions go with 15
digits of precision, and skip the round trip test, but this will
occasionally fail round trip. cJSON is careful in this respect,
but it comes at a performance cost.
 
The libraries that have gone with grisu2 and grisu3 implementations
(RapidJSON, nlohmann, jsoncons), when the floating point architecture is
IEEE 754, can achieve what JSON requires and what users want, but at the
cost of introducing more non-standard "stuff" into the libraries. How big
do you want your json library to be?
 
> and does not have INFs or NaNs. So the stuff overlaps only
> partially and so the standard library tools can't handle it anyway.
 
All JSON libraries test the double value for nan or inf before
writing it, and most output a JSON null value if it is nan or
inf. Some make it a user option what to output; some users want
to substitute a text string "NaN" or "Inf" or "-Inf" when
encoding JSON, and recover the nan or inf when decoding again.
This is irrespective of whether they are using stringstream
or C functions or custom code.
 
Daniel
Daniel <danielaparker@gmail.com>: Jan 28 11:56AM -0800

On Monday, January 28, 2019 at 1:50:37 PM UTC-5, Paavo Helde wrote:
 
> When comparing with the C *_l versions C++ streams need to be imbued
> with an explicit locale, otherwise it's not a fair comparison. I suspect
> they would still be slower, but maybe not so much.
 
When comparing with a standard number format like JSON, streams
need to be imbued with the 'C' locale. Most JSON libraries that use
the C functions use the regular versions and reverse the effects
of the global locale as a second step. They don't use the C *_l
versions, because they're not standard and unavailable on some
platforms, and any benefits from #ifdef's are too small to be
worth it. But the C functions still handily beat stringstream, or
any custom stream class with a more efficient streambuf.
 
From a user's perspective, in many cases the performance of
streams would be good enough, "good enough for government work"
is the expression. But in a language that's supposed to be fast,
it's an embarrassment to the language that it has something this
slow. How is it even possible to design something that slow?
Maybe it's gotten better since I last benchmarked, maybe my
benchmarks on VS and Windows x64 architecture aren't
representative of what could be done with gnu or clang on linux
x64, but it's factual that library writers everywhere have
abandoned streams.
 
Daniel
Paavo Helde <myfirstname@osa.pri.ee>: Jan 28 11:22PM +0200

On 28.01.2019 21:56, Daniel wrote:
> representative of what could be done with gnu or clang on linux
> x64, but it's factual that library writers everywhere have
> abandoned streams.
 
Same here, using sprintf() and friends instead of streams. Though to be
honest, I have never seen 40x speed differences, at most ca 3x IIRC.
 
For curiosity, I just did some benchmarks (Windows x64 VS2017 Release).
With an actual output file fprintf() won over std::ofstream by a factor
of 1.7x:
 
C++ stream: 0.999657 s
fprintf() : 0.595898 s
 
(test code below).
 
With just numeric conversions the difference was a bit larger, 2.1x
 
stringstream: 1.00417 s
sprintf() : 0.481967 s
 
These tests are single-threaded so the global locale dependencies
probably do not play a great role; things might be different in a
heavily multithreaded regime.
 
-----------------------------
 
// first test
#include <iostream>
#include <fstream>
#include <iomanip>
#include <chrono>
#include <stdexcept>   // std::runtime_error
#include <stdio.h>
#include <string.h>    // strerror
#include <errno.h>

const int N = 1000000;

void test1(std::ostream& os) {
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        os << f << "\n";
        f += 0.5;
    }
}

void test2(FILE* os) {
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        if (fprintf(os, "%.15g\n", f)<0) {
            throw std::runtime_error(strerror(errno));
        }
        f += 0.5;
    }
}

int main() {
    try {
        std::ofstream os1;
        os1.exceptions(std::ofstream::failbit | std::ofstream::badbit);
        os1.open("C:\\tmp\\tmp1.txt", std::ios_base::binary);
        os1 << std::setprecision(15);

        auto start1 = std::chrono::steady_clock::now();
        test1(os1);
        auto finish1 = std::chrono::steady_clock::now();

        FILE* os2 = fopen("C:\\tmp\\tmp2.txt", "wb");
        if (!os2) {
            throw std::runtime_error(strerror(errno));
        }
        auto start2 = std::chrono::steady_clock::now();
        test2(os2);
        auto finish2 = std::chrono::steady_clock::now();
        fclose(os2);

        std::chrono::duration<double> diff1 = finish1-start1;
        std::chrono::duration<double> diff2 = finish2-start2;

        std::cout << "C++ stream: " << diff1.count() << " s\n";
        std::cout << "fprintf() : " << diff2.count() << " s\n";
    } catch (const std::exception& e) {
        std::cerr << "ERROR: " << e.what() << "\n";
    }
}
 
------------------------------
// second test
#include <iostream>
#include <sstream>
#include <iomanip>
#include <chrono>
#include <stdexcept>   // std::runtime_error
#include <stdio.h>

const int N = 1000000;

unsigned int test1() {
    // x is mostly for keeping the compiler
    // from optimizing the whole code away.
    unsigned int x = 0;
    std::ostringstream os;
    os << std::setprecision(15);
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        os << f;
        f += 0.5;
        x += os.str()[0];
        os.str(std::string());
    }
    return x;
}

unsigned int test2() {
    unsigned int x = 0;
    char buff[32];
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        sprintf(buff, "%.15g", f);
        f += 0.5;
        x += buff[0];
    }
    return x;
}

int main() {
    try {
        auto start1 = std::chrono::steady_clock::now();
        auto res1 = test1();
        auto finish1 = std::chrono::steady_clock::now();

        auto start2 = std::chrono::steady_clock::now();
        auto res2 = test2();
        auto finish2 = std::chrono::steady_clock::now();

        if (res1!=res2) {
            throw std::runtime_error("results differ");
        }

        std::chrono::duration<double> diff1 = finish1-start1;
        std::chrono::duration<double> diff2 = finish2-start2;

        std::cout << "stringstream: " << diff1.count() << " s\n";
        std::cout << "sprintf() : " << diff2.count() << " s\n";

    } catch (const std::exception& e) {
        std::cerr << "ERROR: " << e.what() << "\n";
    }
}
Lynn McGuire <lynnmcguire5@gmail.com>: Jan 28 01:44PM -0600

"The State of C++ on Windows"
https://kennykerr.ca/2019/01/25/the-state-of-cpp-on-windows/
 
According to Microsoft.
 
Lynn
JiiPee <no@notvalid.com>: Jan 28 07:09AM

On 27/01/2019 14:09, Mr Flibble wrote:
 
> IDs for objects or object types? Use uint32_t or uint64_t for object
> ID and UUID for object type. Enums are hard coded and brittle.
 
> /Flibble
 
I am talking about naming objects, like: PLAYER, BALL....
"Öö Tiib" <ootiib@hot.ee>: Jan 28 02:29AM -0800

On Saturday, 26 January 2019 16:49:34 UTC+2, JiiPee wrote:
 
> {
 
>     // draw walk up
 
> }
 
If you need to inherit types then use classes (or structs or
class template instantiations) for such types. Unions or enums
can't be inherited from. Your code, translated to usage of struct:
 
#include <iostream>

struct AnimationNameBase
{
    enum Value {NoName};
    AnimationNameBase(int v) : v_(v) {}
    AnimationNameBase& operator=(int v) {v_ = v; return *this;}
    bool operator==(int v) const {return v_ == v;}
    int v_{NoName};
};

struct Animation
{
    AnimationNameBase m_name{AnimationNameBase::NoName};

    void doAnimation(AnimationNameBase name)
    {
        m_name = name;
    }
};

struct PlayerAnimationNames : AnimationNameBase
{
    enum Value {WalkUp = NoName + 1, WalkDown, WalkRight};
};

int main()
{
    Animation playerAnimation;

    playerAnimation.doAnimation(PlayerAnimationNames::WalkUp);

    // code.....then in some function:

    if(playerAnimation.m_name == PlayerAnimationNames::WalkUp)
    {
        // draw walk up
    }
    std::cout << "kind of works™\n";
}
 
Personally to me such whole logic feels messed up, regardless if it is
enum or struct.
"Öö Tiib" <ootiib@hot.ee>: Jan 28 01:33AM -0800

On Friday, 25 January 2019 21:26:05 UTC+2, JiiPee wrote:
> > animation. The vector will also allow you to update position too.
 
> its not about the direction or moving... its about having enum in a
> class. how can I use an enum in class...
 
You put it up in a sort of messy way. Start from the basics.
What is your program data structure and relations?
You have data types "texture", "player", "direction" and "animation"?
What other data there is? What is possible state of objects of
those types?
Is some of these types property of other type? How?
Can there be none? many? Always fixed amount or changing?
What of those relations can change during program run?
Then what are the actions with objects of those data types?
What can do what using what?
The "texture" can perhaps be "drawn"?
The "animation" can perhaps be "animated"?
The "player" can perhaps "move" to "direction"?
What other actions are there?
What action is part of what action?
Write that all down in plain prose English first.
Do not make templates before it is clearly compile-
time fixed relation, value or property.