Saturday, January 9, 2021

Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

TDH1978 <thedeerhunter1978@movie.uni>: Jan 08 06:48PM -0500

On 2020-12-31 22:24:46 +0000, Keith Thompson said:
 
> problem (what you posted won't compile because uint8_t is not defined),
> along with the exact command line you used to compile it and the output
> of "gcc --version"?
 
Sorry for my late response, but I've been trying, in vain, to get you a
self-contained source file. Every time I move the code around, the
warning disappears. After much trial and error, I got to the point
where if I don't change anything in my project, and simply add the line
'std::string xyz = "abc";' in some totally unrelated function, the
warning in the clear_mac() function disappears. This leads me to
believe that this is indeed a compiler bug; I have no other explanation.
 
As I mentioned earlier, I did not get this warning with gcc/g++ 10.2.0.
I will wait for gcc/g++ 10.2.2 and try again. For now, I will live
with this warning.
 
Here is the compiler info you requested:
 
cmd> gcc --version
gcc (GCC) 10.2.1 20201125 (Red Hat 10.2.1-9)
Brian Wood <woodbrian77@gmail.com>: Jan 08 08:25PM -0800

> > default tool for fixed-size arrays. It's the /other/ solutions that
> > need a motivation.
> When you actually have an answer get back to me.
 
I like hearing from him here. His perspective is
usually interesting.
 
Brian
Manfred <noname@add.invalid>: Jan 09 07:14PM +0100

On 1/7/2021 8:15 AM, Öö Tiib wrote:
> be bit bigger than raw multidimensional because of such padding. So
> that (however little) resource/performance issue is certainly allowed by
> standard. I would first like to see it demonstrated.
 
As you say below, data layout is a much more common issue than a
performance hit being significant due to a few padding bytes. /If/
performance is suspected to be an actual problem, then demonstration
first, sure.
 
> Can raw array somehow be used to solve those common and real problems?
> I would love to see that demonstrated too as I do not even imagine it being
> possible.
 
One might think of some multidimensional (e.g. image) processing (or
compression) algorithm that assumes no padding. Then it is cleaner and
easier to use a raw array than an array of arrays, rather than rewriting
a possibly complex algorithm to handle some arbitrary padding between lines.
I mean, the sport of using std::array<std::array<...>, ...> wouldn't
justify such a rewrite per se.
Although, for this specific example, with a multidim raw array the UB
fluff of the C++ standard might get in the way, so the case is not
trivial either.
 
A probably more realistic example is some API between modules, possibly
even written in different languages, that exchange binary data whose
layout is part of the contract. Then you want full control of such layout,
including padding.
In such a scenario I believe you don't want a std::array as a member of
some struct that is passed through such interface. This is independent
of performance, of course.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 01 07:12PM +0100

> and could be in a tight loop where performance matters. Constantly doing
> bounds checking on a fixed size array in this situation is a waste of
> CPU cycles and needlessly slows the program down.
 
I think he has in mind the iterator-debugging facility when you arm
them with the proper #defines.
Ian Collins <ian-news@hotmail.com>: Jan 02 08:21AM +1300

> and could be in a tight loop where performance matters. Constantly doing
> bounds checking on a fixed size array in this situation is a waste of
> CPU cycles and needlessly slows the program down.
 
std::array is no less efficient in this situation. The bounds checking
is a compile time operation.
 
--
Ian.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 01 08:35PM +0100

>> CPU cycles and needlessly slows the program down.
 
> std::array is no less efficient in this situation.
> The bounds checking is a compile time operation.
 
Bounds-checking can be compile-time only if the index is a constant.
olcott <NoOne@NoWhere.com>: Jan 09 11:58AM -0600

On 1/9/2021 11:30 AM, Ben Bacarisse wrote:
>> good to know.
 
> When H_Hat is called as H_Hat(H_Hat), the call to Halts in H_Hat does
> not return, making H_Hat(H_Hat) a non-terminating computation.
 
When this same computation is executed in a simulator the Halt decider
determines that the execution trace of H_Hat() meets this specification:
 
Infinite Recursion detection AXIOM
(a) Within an execution trace
(b) The same function is called
(c) From the same machine address
(d) With the same data // Richard Damon credit
(e) A second time
(f) Without any control flow instructions in between
 
Thus meeting this mutually agreed to definition of non halting behavior:
 
On 11/27/2020 9:02 PM, Ben Bacarisse wrote:
> A computation that would not halt if its simulation were not
> halted is indeed a non-halting computation.
 
On Saturday, November 28, 2020 at 2:00:28 PM UTC-8, olcott wrote:
> Every computation that would not halt if its simulation
> were not halted is by logical necessity a non-halting computation.
 
The halt decider then stops this otherwise infinite recursion and
decides not halting.
 
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein
olcott <NoOne@NoWhere.com>: Jan 09 03:25PM -0600

On 1/9/2021 2:07 PM, Kaz Kylheku wrote:
 
>> If you are actually paying attention to what I am saying and not simply
 
> I don't care about what you're saying; I'm pointing to what your *code*
> *isn't* saying.
 
Infinite Recursion detection AXIOM
(a) Within an execution trace
(b) The same function is called
(c) From the same machine address
(d) With the same data // Richard Damon credit
(e) A second time
(f) Without any control flow instructions in between
 
The execution trace of H_Hat() proves that H_Hat() meets the above
specification, thereby proving that H_Hat can be correctly decided as
not halting on this basis without needing to look at the source code.
 
I will try to clean this up a little. Each of my recent execution traces
had some key detail missing.
 
Having to look at 320 pages of source code is not going to make this
any clearer.
 
Simply understanding that correct halt deciding criteria for H_Hat()
exists [Infinite Recursion detection AXIOM] and seeing that the
execution trace of H_Hat() meets this criterion is all that is really
needed for complete proof that Halts() decides H_Hat() correctly.
 
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein
olcott <NoOne@NoWhere.com>: Jan 09 05:07PM -0600

On 1/9/2021 4:28 PM, Kaz Kylheku wrote:
 
> You have not; your program indicates that H_Hat(H_Hat) fails to
> terminate, yet in fact there is every reason to believe that
> that expression terminates.
 
The conventional halting problem undecidability proofs conclude that no
universal halt decider can possibly exist on the basis that a particular
class of universal halt deciders cannot possibly exist.
 
It seems to be true that no universal halt decider exists that attempts
to divide all of its inputs into halting and not halting on the basis of
answering the question:
 
Does the input program halt on its input?
 
On the other hand: A universal halt decider that attempts to divide all
of its inputs into halting and not halting on the basis of answering the
question:
 
Does the simulation of the input have to be stopped to prevent its
otherwise infinite execution?
 
does not suffer from the pathological self-reference (Olcott 2004) and
can thus be defined.
 
Because this revised criterion does divide its inputs into halting and
non-halting by this criterion:
 
On 11/27/2020 9:02 PM, Ben Bacarisse wrote:
> A computation that would not halt if its simulation were not
> halted is indeed a non-halting computation.
 
On Saturday, November 28, 2020 at 2:00:28 PM UTC-8, olcott wrote:
> Every computation that would not halt if its simulation
> were not halted is by logical necessity a non-halting computation.
 
it still directly applies to the actual halting problem itself.
 
 
--
Copyright 2021 Pete Olcott
 
"Great spirits have always encountered violent opposition from mediocre
minds." Einstein
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 08 04:42PM -0800

On 1/8/2021 12:04 AM, Bonita Montero wrote:
>> Its an odd way to design things. You know, all threads might
>> not startup at the same time... ready, start and count, ect... ...
 
> There's no false sharing.
 
Nit pick, there can be.
 
totalTicks += ns;
totalFails += fails;
 
can interfere with atomicValue. Think about it.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 08 04:48PM -0800

On 1/8/2021 1:28 AM, Bonita Montero wrote:
>     bool fits = ((uintptr_t)&::a & 64 - 1) == 0;
>     cout << (fits ? "fits" : "doesn't fit") << endl;
> }
 
That should do it. :^)
Bonita Montero <Bonita.Montero@gmail.com>: Jan 09 06:37AM +0100

> totalTicks += ns;
> totalFails += fails;
> can interfere with atomicValue. Think about it.
 
There's nothing which interferes.
Make the atomicValue aligned, and you don't measure
different results.
 
But here's the result of a
 
* 2x Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
* 2x AMD EPYC 7551 32-Core Processor
* 2x AMD EPYC 7401 24-Core Processor
 
https://easyupload.io/4jaazg
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 08 09:44PM -0800

On 1/8/2021 9:37 PM, Bonita Montero wrote:
>> totalFails += fails;
>> can interfere with atomicValue. Think about it.
 
> There's nothing which interferes.
 
Is atomicValue completely aligned on a boundary and on a different
cache line than anything else, think of totalTicks, totalFails?
 
 
 
> Make the atomicValue aligned, and you don't measure
> different results.
 
Do it anyway.
 
 
Bonita Montero <Bonita.Montero@gmail.com>: Jan 09 07:43AM +0100

> Is atomicValue completely aligned on a boundary and on a different
> cache line than anything else, think of totalTicks, totalFails?
 
No, it isn't, but it doesn't need to be. While the threads are running
and incrementing atomicValue nothing happens to the same cacheline.
Align it and see that you won't get any different results.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 08 10:58PM -0800

On 1/8/2021 10:43 PM, Bonita Montero wrote:
>> different cache line than anything else, think of totalTicks, totalFails?
 
> No, it isn't, but it doesn't need to be. While the threads are running
> and incrementing atomicValue nothing happens to the same cacheline.
 
Even when some threads finish iteration and mess with totalTicks and
totalFails while others are still iterating?
 
Bonita Montero <Bonita.Montero@gmail.com>: Jan 09 09:22AM +0100

>> and incrementing atomicValue nothing happens to the same cacheline.
 
> Even when some threads finish iteration and mess with totalTicks and
> totalFails while others are still iterating?
 
That's not relevant since the loops overlap by 99% or more.
scott@slp53.sl.home (Scott Lurndal): Jan 09 04:55PM


>No, it isn't, but it doesn't need to be. While the threads are running
>and incrementing atomicValue nothing happens to the same cacheline.
>Align it and see that you won't get any different results.
 
You are wrong. But since you insist on removing context and attributions
from your posts, I'll not try to explain to you why.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 09 06:25PM +0100

>> Align it and see that you won't get any different results.
 
> You are wrong. But since you insist on removing context and attributions
> from your posts, I'll not try to explain to you why.
 
No, I'm not wrong. At least 99% of the time the threads are pounding on
atomicValue they're not contending with other threads doing different
things in the same cacheline. The small skew of the threads after being
started when they do the unlock() isn't relevant.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 09 12:25PM -0800

On 1/9/2021 9:25 AM, Bonita Montero wrote:
> atomicValue they're not contending with other threads doing different
> things in the same cacheline. The small skew of the threads after being
> started when they do the unlock() isn't relevant.
 
I see what you mean, but there can be false sharing: Keep in mind that I
am being rather pedantic here Bonita. Okay. The main point is that using
XADD when you can, is generally "better" than a CMPXCHG loop. Also, you
are using compare_exchange_weak. This means that on platforms other than
x86, perhaps using LL/SC, it can spuriously fail even if the CAS should
have succeeded. So, on a PPC, it would be fun to test
compare_exchange_weak vs compare_exchange_strong.
 
I remember a scenario a long time ago where a compare_exchange_strong
was needed to get a state machine to work properly. It was well before
C++11, but these problems are nothing new. Where a failed CAS actually
meant it really failed on its comparand, not spuriously failed when it
should have succeeded. The state machine depended on this difference!
 
Basically, isolation via proper aligning and padding is just a good
habit to get into. Especially on those other platforms. False sharing
can invade the reservation granule and cause actual livelock-like
scenarios. On x86 it won't cause a CMPXCHG to fail, but it causes
unnecessary cache thrashing, destroying performance. On a PPC, false
sharing on the reservation can be rather deadly... it can livelock when
the system is under heavy load.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 09 12:52PM -0800

On 1/8/2021 1:28 AM, Bonita Montero wrote:
>     bool fits = ((uintptr_t)&::a & 64 - 1) == 0;
>     cout << (fits ? "fits" : "doesn't fit") << endl;
> }
 
Works fine. Now I am thinking about dynamic memory a la new. Probably
have to use aligned_alloc and placement new? My old alignment code way
back in the day still works, but it's definitely not standard!
 
https://pastebin.com/raw/f37a23918
 
Back in 2009. Damn, I am getting older!
Bonita Montero <Bonita.Montero@gmail.com>: Jan 09 09:56PM +0100

> I see what you mean, but there can be false sharing:
> Keep in mind that I am being rather pedantic here Bonita.
 
The threads don't become released by cvStart synchronously. cvStart is
signalled for all threads at the same time, but the threads also need to
get hold of the mutex that is associated with this CV, and this can be
done only one thread after another as the threads release the mutex. So
the first thread releases the second thread, and so forth. But this
results only in a small skew which is insignificant relative to the
time the threads are doing the atomic operation.
What would be more appropriate here would be a semaphore which is
released by the number of waiting threads. But there are no semaphores
until C++20.
 
 
> The main point is that using
> XADD when you can, is generally "better" than a CMPXCHG loop.
 
My test doesn't imply which is better when you are using either
operation for synchronization purposes. It simply tests their
effectiveness and nothing more.
 
> are using compare_exchange_weak. This means that on platforms other than
> x86, perhaps using LL/SC, it can spuriously fail even if the CAS should
> of succeeded.
 
That's intended, because if you're using LL/SC for synchronization
you can't do anything against spurious fails other than repeating
the operation as well.
 
> Basically, isolation via proper aligning and padding is just a good
> habit to get into. ...
 
It's not necessary for this benchmark. It won't get less accurate
beyond the inaccuracy you have from other measurement error effects.
So the whole discussion is nonsense.
Bonita Montero <Bonita.Montero@gmail.com>: Jan 09 09:57PM +0100

> Works fine. Now I am thinking about dynamic memory ala new. Probably
> have to use aligned_alloc and placement new? My old alignment code way
> back in the day still works, but its definitely not standard!
 
The newed memory would be aligned also.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 09 01:01PM -0800

On 1/9/2021 12:57 PM, Bonita Montero wrote:
>> have to use aligned_alloc and placement new? My old alignment code way
>> back in the day still works, but its definitely not standard!
 
> The newed memory would be aligned also.
 
Really? When you find some free time, can you code me up a quick
example? I am still using my old code:
 
https://pastebin.com/raw/f37a23918
 
It would be really fun to spice it up with C++11! I think you can help
me out, since I am not totally familiar with some exotic C++ features. I
was almost always in C. I still have code in x86 asm that implemented my
atomic lib.
 
So, is there a way to convert my simple ralloc region allocator into
something that is 100% standard compliant?
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 09 01:38PM -0800

On 1/9/2021 12:56 PM, Bonita Montero wrote:
 
> That's intended because if you're using LL/SC for synchronization
> you can't do anything against spuriuous fails other than repeating
> the operation as well.
 
True. However, the retry logic can be more "efficient" using
compare_exchange_strong rather than trying to emulate the same thing
with compare_exchange_weak.
 
 
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jan 08 05:49PM -0800

Fwiw, here is a highly experimental and very crude test of my vector
field algorithm that produces a file called ct_plane.ppm. The code was
quickly ported from pure C99. However, it's going to make a C++
programmer's eyes bleed! Using defines and C io, etc... My port was
basically just enough to get it to compile in C++11. I need to work on
it to make proper classes out of ct_canvas, ct_axes, ct_plane and ct_rgb.
 
I am just wondering if anybody else can get it to compile on their C++11
compiler of choice? Does it produce the ppm; can you view it? How many
warnings do you get?
 
 
Any advice?
 
 
Here is the code:
__________________________________
/*
Spiral Fractal Experiment for Alf
By: Chris M. Thomasson
Version: Pre-Alpha (0.0.0)
 
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
 
*_____________________________________________________________*/
 
 
#include <cstdlib>
#include <cstdio>
#include <cassert>
#include <complex>
#include <cmath>
 
 
#define CT_WIDTH (1920 * 1)
#define CT_HEIGHT (1080 * 1)
 
#define CT_PI 3.1415926535897932384626433832795
#define CT_PI2 (CT_PI * 2)
 
 
typedef std::complex<double> ct_complex;
 
 
struct ct_rgb_meta
{
double red;
double blue;
double green;
double alpha;
};
 
 
#define CT_RGB_META(mp_red, ct_blue, ct_green, ct_alpha) \
{ (mp_red), (ct_blue), (ct_green), (ct_alpha) }
 
 
 
struct ct_rgb
{
unsigned char r;
unsigned char g;
unsigned char b;
struct ct_rgb_meta meta; // meta data for this color
};
 
 
#define CT_RGB(mp_red, mp_green, mp_blue) \
{ (mp_red), (mp_green), (mp_blue), CT_RGB_META(0., 0., 0., 0.) }
 
 
struct ct_canvas
{
unsigned long width;
unsigned long height;
struct ct_rgb* buf;
};
 
bool
ct_canvas_create(
struct ct_canvas* const self,
unsigned long width,
unsigned long height
) {
std::size_t size = width * height * sizeof(*self->buf);
 
self->buf = (ct_rgb*)std::calloc(1, size);
 
if (self->buf)
{
self->width = width;
self->height = height;
 
return true;
}
 
return false;
}
 
void
ct_canvas_destroy(
struct ct_canvas const* const self
) {
std::free(self->buf);
}
 
 
 
double
ct_canvas_log_density_post_processing_get_largest(
struct ct_canvas const* const self
) {
double largest = 0;
 
std::size_t size = self->width * self->height;
 
for (std::size_t i = 0; i < size; ++i)
{
struct ct_rgb const* c = self->buf + i;
 
if (largest < c->meta.alpha)
{
largest = c->meta.alpha;
}
}
 
return largest;
}
 
 
void
ct_canvas_log_density_post_processing(
struct ct_canvas* const self
) {
double largest =
ct_canvas_log_density_post_processing_get_largest(self);
std::size_t size = self->width * self->height;
 
printf("largest = %f\n", largest);
 
for (std::size_t i = 0; i < size; ++i)
{
struct ct_rgb* color = self->buf + i;
 
if (color->meta.alpha > 0)
{
struct ct_rgb_meta c = color->meta;
 
double dense = log10(c.alpha) / c.alpha;
 
c.red *= dense;
if (c.red > 1.0) c.red = 1.0;
 
c.green *= dense;
if (c.green > 1.0) c.green = 1.0;
 
c.blue *= dense;
if (c.blue > 1.0) c.blue = 1.0;
 
color->r = (unsigned char)(c.red * 255);
color->g = (unsigned char)(c.green * 255);
color->b = (unsigned char)(c.blue * 255);
}
}
}
 
 
 
 
bool
ct_canvas_save_ppm(
struct ct_canvas const* const self,
char const* fname
) {
std::FILE* fout = std::fopen(fname, "wb");
 
if (fout)
{
char const ppm_head[] =
"P6\n"
"# Chris M. Thomasson Simple 2d Plane ver:0.0.0.0 (pre-alpha)";
 
std::fprintf(fout, "%s\n%lu %lu\n%u\n",
ppm_head,
self->width, self->height,
255U);
 
std::size_t size = self->width * self->height;
 
for (std::size_t i = 0; i < size; ++i)
{
struct ct_rgb* c = self->buf + i;
std::fprintf(fout, "%c%c%c", c->r, c->g, c->b);
}
 
if (! std::fclose(fout))
{
return true;
}
}
 
return false;
}
 
 
 
 
struct ct_axes
{
double xmin;
double xmax;
double ymin;
double ymax;
};
 
struct ct_axes
ct_axes_from_point(
ct_complex z,
double radius
) {
struct ct_axes axes = {
z.real() - radius, z.real() + radius,
z.imag() - radius, z.imag() + radius
};
 
return axes;
}
 
 
struct ct_plane
{
struct ct_axes axes;
double xstep;
double ystep;
};
 
 
void
ct_plane_init(
struct ct_plane* const self,
struct ct_axes const* axes,
unsigned long width,
unsigned long height
) {
self->axes = *axes;
 
double awidth = self->axes.xmax - self->axes.xmin;
double aheight = self->axes.ymax - self->axes.ymin;
 
assert(width > 0 && height > 0 && awidth > 0.0);
 
double daspect = fabs((double)height / width);
double waspect = fabs(aheight / awidth);
 
if (daspect > waspect)
{
double excess = aheight * (daspect / waspect - 1.0);
self->axes.ymax += excess / 2.0;
self->axes.ymin -= excess / 2.0;
}
 
else if (daspect < waspect)
{
double excess = awidth * (waspect / daspect - 1.0);
self->axes.xmax += excess / 2.0;
self->axes.xmin -= excess / 2.0;
}
 
self->xstep = (self->axes.xmax - self->axes.xmin) / width;
self->ystep = (self->axes.ymax - self->axes.ymin) / height;
}
 
 
 
struct ct_plot
{
struct ct_plane plane;
struct ct_canvas* canvas;
};
 
 
void
ct_plot_init(
struct ct_plot* const self,
struct ct_axes const* axes,
struct ct_canvas* canvas
) {
ct_plane_init(&self->plane, axes, canvas->width - 1, canvas->height
- 1);
self->canvas = canvas;
}
 
 
bool
ct_plot_point(
struct ct_plot* const self,
ct_complex z,
struct ct_rgb const* color
) {
long x = (long)((z.real() - self->plane.axes.xmin) /
self->plane.xstep);
long y = (long)((self->plane.axes.ymax - z.imag()) /
self->plane.ystep);
 
if (x > -1 && x < (long)self->canvas->width &&
y > -1 && y < (long)self->canvas->height)
{
// Now, we can convert to index.
std::size_t i = x + y * self->canvas->width;
 
assert(i < self->canvas->height * self->canvas->width);
 
self->canvas->buf[i] = *color;
return true;
}
 
return false;
}
 
 
 
bool
ct_plot_add(
struct ct_plot* const self,
ct_complex z,
struct ct_rgb const* color
) {
long x = (long)((z.real() - self->plane.axes.xmin) /
self->plane.xstep);
long y = (long)((self->plane.axes.ymax - z.imag()) /
self->plane.ystep);
 
if (x > -1 && x < (long)self->canvas->width &&
y > -1 && y < (long)self->canvas->height)
{
// Now, we can convert to index.
std::size_t i = x + y * self->canvas->width;
 
assert(i < self->canvas->height * self->canvas->width);
 
struct ct_rgb* exist = &self->canvas->buf[i];
 
exist->meta.red += color->meta.red;
exist->meta.green += color->meta.green;
exist->meta.blue += color->meta.blue;
exist->meta.alpha += 1.0;
 
return true;
}
 
return true;
}
 
 
// slow, so what for now... ;^)
void ct_circle(
struct ct_plot* const plot,
ct_complex c,
double radius,
unsigned int n
) {
double abase = CT_PI2 / n;
 
for (unsigned int i = 0; i < n; ++i)
{
double angle = abase * i;
 
ct_complex z = {
c.real() + cos(angle) * radius,
c.imag() + sin(angle) * radius
};
 
struct ct_rgb rgb = { 255, 255, 255 };
 
ct_plot_point(plot, z, &rgb);
}
}
 
 
void ct_line(
struct ct_plot* const plot,
ct_complex p0,
ct_complex p1,
unsigned int n
) {
ct_complex dif = p1 - p0;
 
double normal_base = 1. / n;
 
for (unsigned int i = 0; i < n; ++i)
{
double normal = normal_base * i;
 
ct_complex z = p0 + dif * normal;
 
struct ct_rgb rgb = { 255, 255, 0 };
 
ct_plot_point(plot, z, &rgb);
}
}
 
 
 
 
 
/* Vector Field
By: Chris M. Thomasson
______________________________________________________*/
#include <vector>
 
struct ct_vfield_point
{
ct_complex p;
double m;
ct_rgb color;
};
 
typedef std::vector<ct_vfield_point> ct_vfield_vector;
 
 
ct_complex
ct_normalize(
ct_complex p
) {
double dis = std::abs(p);
 
if (! std::isnan(dis) && dis > 0.0000001)
{
p /= dis;
}
 
return p;
}
 
ct_complex
ct_vfield_normal(
ct_vfield_vector const& self,
ct_complex p,
double npow
) {
ct_complex g = { 0.0, 0.0 };
 
const int imax = self.size();
 
for (int i = 0; i < imax; ++i)
{
ct_complex dif = self[i].p - p;
 
double sum = dif.real() * dif.real() + dif.imag() * dif.imag();
double mass = std::pow(sum, npow);
 
if (! std::isnan(mass) && mass > 0.000001)
{
g.real(g.real() + self[i].m * dif.real() / mass);
g.imag(g.imag() + self[i].m * dif.imag() / mass);
}
}
 
return ct_normalize(g);
}
 
 
double ct_pi_normal(
ct_complex p,
double start_angle = 0.0
) {
double angle = std::arg(p) + start_angle;
if (angle < 0.0) angle = angle + CT_PI2;
angle = angle / CT_PI2;
return angle;
}
 
 
 
void ct_vfield_plot_line(
ct_vfield_vector const& self,
struct ct_plot* const plot,
ct_complex p0,
double power,
double astart,
unsigned int n,
ct_rgb const& color
) {
double radius = .028; // 1 / (n - 0.0) * CT_PI;// .01;
ct_complex z = p0;
 
for (unsigned int i = 0; i < n; ++i)
{
//double ir = i / (n - 1.0);
 
ct_complex vn = ct_vfield_normal(self, z, power);
 
 
double angle = astart + arg(vn);
 
//double xxxx = ct_pi_normal(angle, 0);
 
ct_complex zn = {
z.real() + std::cos(angle) * radius,
z.imag() + std::sin(angle) * radius
};
 
//ct_rgb color_angle = color;// CT_RGBF(1, 1, 1);
 
// plot.line(z, zn, 1.9, color_angle);
 
ct_line(plot, z, zn, 128);
 
z = zn;
}
}
 
 
void ct_vfield_plot_line_para(
ct_vfield_vector const& self,
struct ct_plot* const plot,
ct_complex p0,
double power,
double astart,
double astart_radius,
unsigned int n,
ct_rgb lcolor
) {
double abase = CT_PI2 / n;
double radius = astart_radius;
 
//ct_rgb bcolor = { 1, 0, 0, { 0, 0, 0 } };
 
ct_complex xxxSTART = ct_vfield_normal(self, p0, power);
double angleXXX = std::arg((p0 + p0 * xxxSTART) - p0);
 
for (unsigned int i = 0; i < n; ++i)
{
//double ir = i / (n - 1.);
 
double angle = abase * i + .0001 + angleXXX;
 
ct_complex p = {
std::cos(angle) * radius + p0.real(),
std::sin(angle) * radius + p0.imag()
};
 
ct_rgb xcolor = lcolor;
 
ct_line(plot, p0, p, 256);
 
ct_vfield_plot_line(self, plot, p, power, astart, 1024, xcolor);
}
}
 
 
// Plotter Lines
void ct_vfield_plot_lines(
ct_vfield_vector const& self,
struct ct_plot* const plot,
double power,
double astart,
double astart_radius,
unsigned int n
) {
std::printf("\n\nct_vfield_plot_lines plotting...\n\n");
 
for (unsigned int i = 0; i < self.size(); ++i)
{
//double ir = i / (self.size() - 1.);
if (self[i].m > 0.0) continue;
ct_rgb lcolor = self[i].color;
ct_vfield_plot_line_para(self, plot, self[i].p, power, astart,
astart_radius, n, lcolor);
std::printf("line:(%u of %llu)\r", i, (unsigned long
long)self.size() - 1);
}
}
 
 
 
 
 
 
 
 
/* Spiral Fractal
By: Chris M. Thomasson
______________________________________________________*/
 
void ct_spiral_fractal(
unsigned int ri,
unsigned int rn,
struct ct_plot* const plot,
ct_vfield_vector& vfield,
ct_complex p0,
ct_complex p1,
unsigned int n
) {
if (ri >= rn) return;
 
ct_complex dif = p1 - p0;
 
double sangle = std::arg(dif);
double sradius = std::abs(dif);
 
double rbase = sradius / n;
double abase = CT_PI / 2 / n;
 
for (unsigned long i = 0; i < n; ++i)
{
double radius = rbase * i;
 
double angle = 0 + std::sin(abase * i) * CT_PI2 + sangle;
double anglenx = 0 + std::sin(abase * (i + 1)) * CT_PI2 + sangle;
 
ct_complex s0_raw = { std::cos(angle), std::sin(angle) };
ct_complex s1_raw = { std::cos(anglenx), std::sin(anglenx) };
 
ct_complex s0 = p0 + s0_raw * radius;
ct_complex s1 = p0 + s1_raw * (radius + rbase);
 
ct_rgb color = { 0, 0, 0, { .7, .6, .5 } };
 
ct_plot_add(plot, s0, &color);
ct_plot_add(plot, s1, &color);
 
//ct_plot_add(plot, -s0, &color);
// ct_plot_add(plot, -s1, &color);
 
 
if (ri == rn - 1)
{
//ct_line(plot, s0, s1, 128);
 
vfield.push_back({ s0, -1, { 1, 0, 0, { 0, 0, 0 } } });
//vfield.push_back({ s1, -1, { 1, 0, 0, { 0, 0, 0 } } });
}
 
 
if (!((i + 1) % 2))
{
ct_spiral_fractal(ri + 1, rn, plot, vfield, s0, s1, n * 2);
}
 
else
{
ct_spiral_fractal(ri + 1, rn, plot, vfield, s1, s0, n * 2);
}
}
}
 
 
 
 
/* Entry
______________________________________________________*/
int main(void)
{
std::printf("\n\nChris M. Thomasson's Vector Field based on Spiral Plot\n\n");
 
 
struct ct_canvas canvas;
 
if (ct_canvas_create(&canvas, CT_WIDTH, CT_HEIGHT))
{
ct_complex plane_origin = { 0, 0 };
double plane_radius = 2.0;
 
struct ct_axes axes = ct_axes_from_point(plane_origin,
plane_radius);
 
struct ct_plot plot;
ct_plot_init(&plot, &axes, &canvas);
 
// Our unit circle
ct_circle(&plot, { 0, 0 }, 1.0, 2048);
ct_circle(&plot, { 2, 0 }, 2.0, 2048);
ct_circle(&plot, { -2, 0 }, 2.0, 2048);
ct_circle(&plot, { 0, 2 }, 2.0, 2048);
ct_circle(&plot, { 0, -2 }, 2.0, 2048);
 
{
ct_vfield_vector vfield;
 
std::printf("\n\nct_spiral_fractal plotting...\n\n");
ct_spiral_fractal(0, 3, &plot, vfield, { -1, 0 }, { 1, 0 }, 3);