Friday, June 5, 2020

Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jun 04 07:06PM -0700

On 6/4/2020 1:16 PM, Alf P. Steinbach wrote:
>>>> On 6/2/2020 11:39 PM, Chris M. Thomasson wrote:
>>>>> On 6/2/2020 9:13 PM, Alf P. Steinbach wrote:
>>>>>> On 02.06.2020 08:30, Chris M. Thomasson wrote:
[...]
 
Wrt fast, check this out wrt obtaining the hypotenuse:
 
https://forums.parallax.com/discussion/147522/dog-leg-hypotenuse-approximation
 
Using that for the roots might be good for generating a faster rendering
of a fractal, however, it might not be such a good thing to use in my
fractal root storage experiment thing here:
 
https://github.com/ChrisMThomasson/fractal_cipher/blob/master/RIFC/cpp/ct_rifc_sample.cpp
 
Btw, can you run this?
 
It really should be using an arbitrary precision floating point package.
Or perhaps, rationals might work as well.
Ralf Goertz <me@myprovider.invalid>: Jun 05 10:01AM +0200

Am Thu, 4 Jun 2020 22:16:41 +0200
> cos_value*cos_value ); return (sgn( v )*abs_sin_value)*cos_value;
> }
> };
 
Did you know that with g++ you have `void sincos(double x, double *sin,
double *cos)` available? It is much faster than computing sin and cos
separately.
 
> -> double
> { return pow( power, 1.0/base ); }
> };
 
I find your naming convention a bit confusing. In b^e=p I know b as the
base, e as the exponent and p the power.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 05 10:52AM +0200

On 05.06.2020 10:01, Ralf Goertz wrote:
 
> Did you know that with g++ you have `void sincos(double x, double *sin,
> double *cos)` available? It is much faster than computing sin and cos
> separately.
 
Thanks, I didn't know, but should have guessed.
 
There is also an x86 instruction, fsincos, but an SO answer claims that
 
"As CPUs become faster, old hardware trig instructions like fsin have
not kept pace. With some CPUs, a software function, using a polynomial
approximation for sine or another trig function, is now faster than a
hardware instruction. In short, fsincos is too slow."
<url: https://stackoverflow.com/a/24470801>
 
 
>> };
 
> I find your naming convention a bit confusing. In b^e=p I know b as the
> base, e as the exponent and p the power.
 
Yes, I kept the naming in the OP's original code up-thread, where "r is
the root to find in the base b".
 
I believe the standard term for the n in n'th root is "degree". The OP's
use of "base" for the n in a root can be viewed as consistent with use
of "base" for the fixed parameter of a logarithm function; a view where
"base" refers to the fixed parameter. However, the standard use of
"base" in this context refers to the base of an exponentiation, e.g. x
is the "base" in both x^n=p and n=logₓp.
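
With that conventional terminology, the up-thread helper might read like this hypothetical rename (my sketch, not the original code):

```cpp
#include <cmath>

// Conventional naming: in x^n = p, x is the base, n the exponent,
// p the power; for roots, n is the "degree". Hypothetical rename
// of the up-thread helper, not the OP's actual code.
double nth_root(double degree, double x)
{
    return std::pow(x, 1.0 / degree);   // nth_root(3.0, 27.0) ~ 3
}
```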
 
The function name `func` supports general templated measuring code.
 
- Alf
David Brown <david.brown@hesbynett.no>: Jun 05 11:32AM +0200

On 05/06/2020 10:01, Ralf Goertz wrote:
 
> Did you know that with g++ you have `void sincos(double x, double *sin,
> double *cos)` available? It is much faster than computing sin and cos
> separately.
 
No, "g++" does not have such a function - it is part of glibc, not the
compiler, and it is a non-standard extension in <math.h>. That means
you need to define the macro "_GNU_SOURCE" before including <math.h> to
get access to it.
 
And it is /not/ faster than calculating sin and cos separately, because
gcc is smart enough to combine calls to sin and cos with the same
operand to a call to sincos.
 
You can see it here (with test code that is suitable for C and C++, in
case there was a difference):
 
#define _GNU_SOURCE
#include <math.h>
 
typedef struct SC {
double s;
double c;
} SC;
 
SC test1(double x) {
SC sc;
sc.s = sin(x);
sc.c = cos(x);
return sc;
}
 
SC test2(double x) {
SC sc;
sincos(x, &sc.s, &sc.c);
return sc;
}
 
With -O2, gcc (via <https://godbolt.org>) gives the same code for both
functions:
 
test1:
sub rsp, 24
mov rsi, rsp
lea rdi, [rsp+8]
call sincos
movsd xmm0, QWORD PTR [rsp+8]
movsd xmm1, QWORD PTR [rsp]
add rsp, 24
ret
 
(clang and Intel icc do the same optimisation.)
 
 
"sincos" is like the "div" function in <stdlib.h> - a relic from the
past when compilers were weaker, that was useful at the time but has
lost its purpose. (Unless, of course, you feel its use makes your code
clearer in some way - in which case, great.)
Ralf Goertz <me@myprovider.invalid>: Jun 05 11:51AM +0200

Am Fri, 5 Jun 2020 11:32:28 +0200
> the compiler, and it is a non-standard extension in <math.h>. That
> means you need to define the macro "_GNU_SOURCE" before including
> <math.h> to get access to it.
 
Well, that's why I wrote that the function was available with g++. Since
Alf mentioned MinGW, he'd be using glibc in that case.
 
> And it is /not/ faster than calculating sin and cos separately,
> because gcc is smart enough to combine calls to sin and cos with the
> same operand to a call to sincos.
 
That might very well be. I remember having played around with this back
when it still mattered.
David Brown <david.brown@hesbynett.no>: Jun 05 12:35PM +0200

On 05/06/2020 11:51, Ralf Goertz wrote:
>> <math.h> to get access to it.
 
> Well, that's why I wrote that the function was available with g++. As
> Alf mentioned MinGW he'd be using glibc in that case.
 
For his case, perhaps. In almost all /my/ uses of g++, I don't have
glibc. It is not always easy to know if someone is talking about a very
specific implementation, or in more general terms.
 
>> same operand to a call to sincos.
 
> That might very well be. I remember to have played around with this when
> it still mattered.
 
It may still matter for weaker tools or special situations, but IMHO you
are usually best aiming at writing the code in a clear and simple way,
with high-level algorithmic efficiency in mind, and letting the compiler
figure out the low-level efficiency details.
 
And if you want fast floating point in gcc, "-ffast-math" can make a
/lot/ of difference to some code.
Manfred <noname@add.invalid>: Jun 05 01:50PM +0200

On 6/5/2020 11:32 AM, David Brown wrote:
 
> And it is /not/ faster than calculating sin and cos separately, because
> gcc is smart enough to combine calls to sin and cos with the same
> operand to a call to sincos.
 
Yes, it /is/ faster than calculating sin and cos separately; that is why
the gcc optimizer replaces separate calls to sin and cos with a single
sincos call.
Which, by the way, only works when the optimizer can detect that such
calls are effectively part of the same operation.
 
> past when compilers were weaker, that was useful at the time but has
> lost its purpose.  (Unless, of course, you feel its use makes your code
> clearer in some way - in which case, great.)
 
I wouldn't call it a relic - in fact the need to calculate sin and cos
in pairs is very common, so it is a very useful function.
David Brown <david.brown@hesbynett.no>: Jun 05 02:08PM +0200

On 05/06/2020 13:50, Manfred wrote:
 
> Yes, it /is/ faster than calculating sin and cos separately, that is why
> the gcc optimizer replaces separate calls to sin and cos into a single
> sincos call.
 
I don't care how the compiler and/or library implements sin, cos or
sincos. It could use polynomials, or FPU instructions, or a Cordic
algorithm, or a circle and a ruler. It doesn't matter if the compiler
and/or library calculates sincos in a single cycle, or if it calculates
sin first then cos. The speed of the user code is /identical/ when you
write the sin and cos separately, or when you write a call to sincos,
because the code generated by the compiler is identical.
 
> Which, by the way, only works when the optimizer can detect that such
> calls are effectively part of the same operation.
 
Which they are in this case - and any case where you could reasonably
expect sincos to be an alternative.
 
>> code clearer in some way - in which case, great.)
 
> I wouldn't call it a relic - in fact the need to calculate sin and cos
> in pairs is very common, so it is a very useful function.
 
It is a relic at the user level - because it is not needed to calculate
sin and cos efficiently in pairs, and will typically lead to uglier code.
 
From Alf's code (with more conventional syntax):
 
#include <math.h>
double foo1(double v) {
return sin(v) * cos(v);
}
 
Using sincos:
 
#define _GNU_SOURCE
#include <math.h>
double foo2(double v) {
double s, c;
sincos(v, &s, &c);
return s * c;
}
 
Improvements in the compiler mean you no longer need to write awkward
and convoluted code like foo2 - you can write it in a neater manner like
foo1 and get /identical/ performance. That is why sincos is a relic.
 
(Of course the library function sincos itself is still important to get
the efficiency - but that is an implementation detail. It can be hidden
from the user - it could be called __sin_and_cos, or implemented as
inline assembly or a builtin function. None of that matters to the user.)
Melzzzzz <Melzzzzz@zzzzz.com>: Jun 05 12:20PM

> sin first then cos. The speed of the user code is /identical/ when you
> write the sin and cos separately, or when you write a call to sincos,
> because the code generated by the compiler is identical.
 
So compiler uses sincos when appropriate?
 
> foo1 and get /identical/ performance. That is why sincos is a relic.
 
calculating both with sincos is much cheaper, as you get both results
for roughly the cost of calculating only one.
 
> the efficiency - but that is an implementation detail. It can be hidden
> from the user - it could be called __sin_and_cos, or implemented as
> inline assembly or a builtin function. None of that matters to the user.)
 
So instead of using sincos when needed we call both sin and cos and
expect the compiler to figure it out?
 
 
--
current job title: senior software engineer
skills: c++,c,rust,go,nim,haskell...
 
press any key to continue or any other to quit...
There is nothing I enjoy as much as my status as an INVALID -- Zli Zec
We are all witnesses - about 3 years of intensive propaganda is enough to drive a nation mad -- Zli Zec
In the Wild West there actually wasn't that much violence, precisely
because everyone was armed. -- Mladen Gogala
David Brown <david.brown@hesbynett.no>: Jun 05 02:55PM +0200

On 05/06/2020 14:20, Melzzzzz wrote:
>> write the sin and cos separately, or when you write a call to sincos,
>> because the code generated by the compiler is identical.
 
> So compiler uses sincos when appropriate?
 
Yes, it should do. (Compilers are not perfect, of course.)
 
>> foo1 and get /identical/ performance. That is why sincos is a relic.
 
> calculating both sincos is much cheaper as you get both results when
> calculating only one.
 
Yes - but that is the implementation.
 
>> inline assembly or a builtin function. None of that matters to the user.)
 
> So instead of using sincos when needed we call both sin and cos and
> expect from compiler to figure out?
 
Write the code in the clear and logical fashion - let the compiler
figure out the low-level details.
Manfred <noname@add.invalid>: Jun 05 03:11PM +0200

On 6/5/2020 10:52 AM, Alf P. Steinbach wrote:
> On 05.06.2020 10:01, Ralf Goertz wrote:
<...>
> approximation for sine or another trig function, is now faster than a
> hardware instruction. In short, fsincos is too slow."
> <url: https://stackoverflow.com/a/24470801>
 
There's more to that: "All 64-bit SIMD integer instructions use MMX
registers, which share register state with the x87 floating point
stack." (from e.g. the Intel® 64 and IA-32 Architectures Optimization
Reference Manual)
This means that in order to use any of the x87 instructions with SIMD
code (most common on x64) special care must be taken by the compiler.
Speed in floating point mode is a tricky subject, for example the x87
added the hardware parallel pipeline for FPU instructions, while SIMD
instructions follow the main CPU pipeline, so it really depends on
details of the code generated by the compiler.
As far as I know, on x64 the x87 instructions have mostly fallen out of
use because of the spread of SIMD code, which is partly incompatible
with (or at least impractical to mix with) x87 instructions.
 
However, most algorithms that calculate trigonometric functions do
produce the combined sin/cos pair much faster than the separate ones.
Melzzzzz <Melzzzzz@zzzzz.com>: Jun 05 01:18PM

> registers, which share register state with the x87 floating point
> stack." (from e.g. the Intel® 64 and IA-32 Architectures Optimization
> Reference Manual)
 
That's for MMX, not for SSE...
 
 
Manfred <noname@add.invalid>: Jun 05 04:52PM +0200

On 6/5/2020 3:18 PM, Melzzzzz wrote:
>> stack." (from e.g. the Intel® 64 and IA-32 Architectures Optimization
>> Reference Manual)
 
> That's for MMX, not sor SSE...
 
I believe my quote was clear enough: "All 64-bit SIMD integer
instructions...", which are part of SSE.
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 05 11:03AM -0400

On 6/5/20 5:32 AM, David Brown wrote:
> On 05/06/2020 10:01, Ralf Goertz wrote:
...
 
> And it is /not/ faster than calculating sin and cos separately, because
> gcc is smart enough to combine calls to sin and cos with the same
> operand to a call to sincos.
 
Actually, it's a little too smart about that - the way it's implemented
can break well-formed code even when compiling with every option turned
on to make g++ conform as accurately as possible. See
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=46926>. It's still an open
bug nearly a decade later - no one has been assigned to work on it yet.
 
That was a bug report against gcc about its handling of C code.
However, I just rewrote the code using C++, and it fails the same way
when compiled using g++. g++ --version says
 
(Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
 
Oddly enough, I can no longer reproduce this bug using gcc.
 
Summary:
A third-party library I was linking to defined its own function named
sincos(). The definition of that function didn't use any of the tricks
that glibc's sincos() does to make it significantly faster than separate
calls to sin() and cos(). It simply made separate calls to sin() and
cos(). That's dumb, but the code was strictly conforming C (and my
modified version is well-formed C++). sincos is not a reserved
identifier in either language, so there's nothing wrong with naming a
function that way.
My own code had a pair of sin() and cos() function calls with the same
argument.
In both cases, gcc noticed the sin() and cos() calls, and converted them
into a single call to sincos(). This made the third-party library's
sincos() function infinitely recursive, and caused my function to call
that one - the program ended up aborting when it ran out of stack.
gcc was invoked in a fully-conforming mode which means this was not a
valid optimization. It should either have converted the sin() and cos()
calls to a call to a function with a name reserved for the implementation,
such as _Sincos(), or it shouldn't have fiddled with the sin() and cos()
calls at all.
David Brown <david.brown@hesbynett.no>: Jun 05 05:36PM +0200

On 05/06/2020 17:03, James Kuyper wrote:
 
> Actually, it's a little too smart about that - the way it's implemented
> can break well-formed code even when compiling with every option turned
> on to make g++ conform as accurately as possible.
 
Yes, I saw that bug - and I agree it appears to be a bug. I've seen
similar recursive calls generated from home-made memcpy functions (the
compiler sees the contents of the function, recognizes it as a pattern
matching memcpy, so turns the loop into a call to memcpy...). But in
that case, the compiler has the excuse that memcpy is a standard
function, and not one you should be implementing.
 
I think the way to solve these things would be to have implementation
functions in the library use internal (reserved identifier) names, and
make the user-visible names be weak aliases. That way you could define
your own sincos() function, and if the compiler figured out the best
implementation was to call __library_internal_sincos(), there would be
no problem.
 
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Jun 05 11:58AM -0700

>> <math.h> to get access to it.
 
> Well, that's why I wrote that the function was available with g++. As
> Alf mentioned MinGW he'd be using glibc in that case.
 
I thought MinGW used Microsoft's C runtime, not glibc.
 
In any case, the sincos() function is provided by the runtime
library, not by the compiler. It's available if you use clang++
with glibc, and it's provided by other library implementations as
well (musl and newlib both provide it).
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jun 05 01:42PM -0700

On 6/4/2020 7:06 PM, Chris M. Thomasson wrote:
> [...]
 
> Wrt fast, check this out wrt obtaining the hypotenuse:
 
> https://forums.parallax.com/discussion/147522/dog-leg-hypotenuse-approximation
 
Humm... Fwiw, imvho, it would be fun to use this interesting
approximation for normalization of vectors in my experimental field.
sqrt is used to obtain the length of a vector for normalization; well,
that makes a triangle, which can be used with the approx...
 
For instance the following animation uses the test vector field with
sqrt. It uses cos and sin as well to generate the spirals, interpolating
from 0...pi/2 where 0 is neutral and pi/2 is the equipotential in a 2d field.
However, I want to focus on the sqrt approx for now. Actually, I do not
need cos and sin to gain the perpendicular field in 2d. I use them to be
able to smoothly animate an interpolation of angles from 0...pi/2.
 
https://youtu.be/JIM-QioOhdY
 
Wonder what the approximation would look like visually when compared to
the original... Humm...
 
Fwiw, this can go 3d, humm... Need to add a z parameter to the dog leg
approx... In 2d it is always zero. Here is an example in 3d using sqrt:
 
https://skfb.ly/6QU86
 
Here is a field where the lines are drawn from a 3d grid:
 
https://skfb.ly/6St9p
 
 
alelvb <alelvb@inwind.it>: Jun 05 02:45PM +0200

Hello folks,
I've written the following code (reported here in full)
in which I'm not able to find the bug.
 
Let me explain the scenario for better comprehension.
 
Originally this code was implemented using a std::vector of pointers to
T (the code uses templates).
I rewrote the code using a std::vector of std::vector<T> but I get the
same error: it computes the determinant of 1-, 2- and 3-dimensional
matrices but fails for larger dimensions.
The error given is reported in the subject of this message and is
obviously related to memory deletion.
But to avoid handling raw pointers I switched to std::vector, so I don't
know what else I can do other than ask more expert programmers.
 
Thank you
 
//********** code follows ***********
 
#include <initializer_list>
#include <iostream>
#include <vector>
 
using uInt = long long unsigned int;
 
template <typename T>
class Matrix
{
/* ===================
* the Matrix class
* =================== */
public:
 
/* ===================
* the constructors
* =================== */
 
Matrix(std::initializer_list<std::initializer_list<T>> lil);
Matrix(std::initializer_list<T> il, char type = 'r');
Matrix(const Matrix<T>& m) = default;
Matrix(Matrix<T>&& r) = default;
Matrix(uInt r, uInt c);
Matrix() = delete;
~Matrix() = default;
 
/* ===================
* assignment oper.
* =================== */
 
Matrix<T>& operator=(const Matrix<T>& m) = default;
Matrix<T>& operator=(Matrix<T>&& m) = default;
 
/* ===================
* friend functions
* =================== */
 
friend std::ostream& operator<< (std::ostream& out, const Matrix<T>& m)
// the put in operator
{
for(uInt i=0; i < m.row; i++)
{
for(uInt j=0; j < m.col; j++)
{
out << m.data[i][j] << ' ';
}
out << '\n';
}
return out;
}
 
friend Matrix<T> operator* (const double s, const Matrix<T>& m)
{
Matrix<T> result(m.row, m.col);
 
for(uInt i=0; i < m.row; i++)
{
for(uInt j=0; j < m.col; j++)
{
if(m[i][j] != 0)
result[i][j] = s * m[i][j];
}
}
return result;
}
 
friend T determinant(Matrix<T> &matrix, uInt dim)
{
T Det = 0;
 
Matrix<T> mC(matrix.row, matrix.col);
 
if(matrix.row != matrix.col)
{
std::cout << "The determinant of a matrix is defined "
<< "for square matrices only." << '\n';
exit(0);
}
if(dim == 1)
{
return matrix[0][0];
}
else
{
for(uInt i=0; i < matrix.col; i++)
{
if(matrix[0][i] != 0)
{
mC = matrix.submatrix(i);
Det += mC.sign(0,i) * matrix[0][i] * determinant(mC, dim-1);
}
}
}
return Det;
}
 
/* ===================
* member functions
* =================== */
 
uInt rows()
{
return row;
}
uInt columns()
{
return col;
}
 
T trace();
Matrix<T> traspose();
Matrix<T> submatrix(uInt c) const;
 
std::vector<T> operator[] (const uInt r) const
{
return data[r];
}
std::vector<T>& operator[] (const uInt r)
{
return data[r];
}
 
Matrix<T> operator+(const Matrix<T>& m);
Matrix<T> operator-(const Matrix<T>& m);
Matrix<T> operator- ();
 
Matrix<T> operator*(T s);
Matrix<T> operator*(const Matrix<T>& m);
 
/* ===================
* data members
* =================== */
 
private:
std::vector<std::vector<T>> data;
uInt row;
uInt col;
 
int sign(uInt r, uInt c)
{
return ((r+c)%2 == 0) ? 1 : -1;
}
};
 
/* ===================
* constructor implem
* =================== */
 
template <typename T>
Matrix<T>::Matrix(std::initializer_list<std::initializer_list<T>> lil)
{
uInt r=0, count=0, maxcols=0;
 
for(auto y : lil)
{
r++;
std::vector<T> v;
data.push_back(v);
 
for(auto x : y)
{
count++;
}
 
if(count > maxcols)
{
maxcols = count;
}
count=0;
}
 
row = r;
col = maxcols;
 
// initialization
int i=0;
for(auto y : lil)
{
for(auto x : y)
{
data[i].push_back(x);
}
i++;
}
}
 
template <typename T>
Matrix<T>::Matrix(std::initializer_list<T> li, char type)
{
uInt i=0;
std::vector<T> d;
 
switch(type)
{
case 'r':
for(auto x : li)
{
d.push_back(x);
i++;
}
row = 1;
col = i;
 
data.push_back(d);
break;
 
case 'c':
i=0;
for(auto x : li)
{
d.erase(d.begin(), d.end());
d.push_back(x);
data.push_back(d);
i++;
}
col = 1;
row = i;
break;
 
default:
std::cout << "possible values are:"
"\nc for column vector"
"\nr for row vector" << '\n';
exit(0);
break;
}
}
 
template <typename T>
Matrix<T>::Matrix(uInt r, uInt c)
: row (r),
col (c)
{
for(uInt i=0; i < r; i++)
{
std::vector<T> v;
 
for(uInt j=0; j < c; j++)
{
v.push_back(static_cast<T>(0));
}
 
data.push_back(v);
}
}
 
/* ===================
* member funct impl
* =================== */
 
template <typename T>
Matrix<T> Matrix<T>::submatrix(uInt c) const
{
uInt dim = row;
 
if(c >= dim)
{
std::cout << "column index out of range" << '\n';
exit(-1);
}
 
Matrix<T> m(dim-1, dim-1);
 
for(uInt i=0; i < m.row; i++)
{
for(uInt j=0; j <= c; j++)
{
m[i][j] = data[i+1][j];
}
for(uInt j=c; j < m.col; j++)
{
m[i][j] = data[i+1][j+1];
}
}
return m;
}
 
template <typename T>
Matrix<T> Matrix<T>::operator+(const Matrix<T>& m)
{
if((col == m.col) and (row == m.row))
{
Matrix<T> result(row, col);
 
for(uInt i=0; i < row; i++)
{
for(uInt j=0; j < col; j++)
{
result[i][j] = ( ((*this)[i])[j] + m[i][j]);
}
}
return result;
}
else
{
std::cout << "To sum two matrices they have to have the same "
<< "dimensions" << '\n';
exit(-1);
}
}
 
template <typename T>
Matrix<T> Matrix<T>::operator- (const Matrix<T>& m)
// binary minus operator
{
return ((*this) + (m * -1));
}
 
template <typename T>
Matrix<T> Matrix<T>::operator- ()
// unary minus operator
{
return ((*this) * -1);
}
 
template <typename T>
Matrix<T> Matrix<T>::operator*(const T s)
{
Matrix<T> result(row, col);
 
for(uInt i=0; i < row; i++)
{
for(uInt j=0; j < col; j++)
{
if((*this)[i][j] != 0)
result[i][j] = s * (*this)[i][j];
}
}
return result;
}
 
template <typename T>
Matrix<T> Matrix<T>::operator*(const Matrix<T>& m)
{
Matrix<T> result(row, m.col);
T sum = 0;
 
if(col == m.row)
{
for(uInt i=0; i < row; i++)
{
for(uInt j=0; j < row; j++)
{
for(uInt k=0; k < col; k++)
{
sum += (*this)[i][k] * m[k][j];
}
result[i][j] = sum;
sum = 0;
}
}
return result;
}
else
{
std::cout << "the number of columns of the left matrix operand must\n"
<< "equal the number of rows of the right matrix operand" << '\n';
exit(-1);
}
}
 
template <typename T>
T Matrix<T>::trace()
{
if(row != col)
{
std::cout << "Trace operation is defined "
<< "for square matrices only." << '\n';
exit(0);
}
 
T sum = 0;
 
for(uInt i=0; i < row; i++)
{
sum += data[i][i];
}
return sum;
}
 
template <typename T>
Matrix<T> Matrix<T>::traspose()
{
Matrix<T> tr(col, row);
 
for(uInt i=0; i < row; i++)
{
for(uInt j=0; j < col; j++)
{
tr[j][i] = data[i][j];
}
}
return tr;
}
 
/* ===================
* the main function
* =================== */
 
int main()
{
Matrix<double> m {{0,0,0,2},{0,0,1,1},{2,1,0,1},{0,1,1,0}}; //NO
//Matrix<double> m {{0,1,0},{0,1,2},{1,0,1}}; //OK
//Matrix<double> m {{-1,2},{0,1}}; //OK
//Matrix<double> m {2}; //OK
 
std::cout << m << '\n';
std::cout << "Determinant = " << determinant(m, m.rows());
}
Paavo Helde <eesnimi@osa.pri.ee>: Jun 05 07:08PM +0300

05.06.2020 15:45 alelvb kirjutas:
> But to avoid to handle raw pointers I switched to std::vector so I don't
> know what else can I do if not ask to more expert programmers.
 
> Thank you
 
The debugger shows index out of bounds assert failure in submatrix()
function, with j = 3.
 
for (uInt j = 0; j <= c; j++) {
m[i][j] = data[i+1][j];
}
 
I guess you want 'j < c' instead of 'j <= c'.
alelvb <alelvb@inwind.it>: Jun 05 09:41PM +0200

Il 05/06/20 18:08, Paavo Helde ha scritto:
> m[i][j] = data[i+1][j];
> }
 
> I guess you want 'j < c' instead of 'j <= c'.
 
Thank you Mr Helde,
that was exactly the problem. Now it works. Would you believe me if I
told you that I stared at the code for a couple of days without a clue?
 
Thank you so so much.
 
But the curious thing is that it worked for sizes smaller than 4.
thank you
queequeg@trust.no1 (Queequeg): Jun 05 11:39AM


> Spacecraft displays should be 4:3 and NOT widescreen to ensure
> consistency with 2001: A Space Odyssey.
 
And what exactly does it have to do with C++?
 
--
Honesty is the best policy - unless, of course, you are dealing with
your wife, your girlfriend, your banker, your employer, the I.R.S.,
your creditors...
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 05 02:14PM +0100

On 05/06/2020 12:39, Queequeg wrote:
 
>> Spacecraft displays should be 4:3 and NOT widescreen to ensure
>> consistency with 2001: A Space Odyssey.
 
> And what exactly does it have to do with C++?
 
Fuck. Off.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who doesn't believe in any God the most. Oh, no..wait.. that never happens." – Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jun 05 11:54AM -0700

On 6/4/2020 10:53 AM, Mr Flibble wrote:
> Hi!
 
> Spacecraft displays should be 4:3 and NOT widescreen to ensure
> consistency with 2001: A Space Odyssey.
 
lol.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jun 05 05:47AM

On Tue, 2020-06-02, Bonita Montero wrote:
 
> [I wrote]
 
>> std::pair<bool, bool>
 
> This never would be 2 bits large.
 
Like I indicated in the part you snipped. But the OP never explicitly
asked for that, and never AFAICT came back with more information.
 
You can't answer the question "how to define 2 bit variable" without
knowing its purpose, and /that's/ what we should have told the OP.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Juha Nieminen <nospam@thanks.invalid>: Jun 05 09:26AM

>>>    combining of pairs of two bits yourself.
 
>> Very inefficient !
 
> It's space efficient. Sometimes that is what's needed.
 
It's space efficient only when you need a lot of bits (I'd say at a very
minimum in the hundreds).
 
If you know at compile time the number of bits you need, then
std::bitset is much more efficient and better than std::vector<bool>.