Tuesday, May 16, 2023

Digest for comp.lang.c++@googlegroups.com - 7 updates in 4 topics

Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: May 15 11:01PM -0400

Chris M. Thomasson wrote:
 
>> g++ (GCC) 13.1.1 20230429 on x86_64 GNU/Linux
 
> Nice! I do not have that version installed. How does it respond to the
> program that uses a std::vector?
The vector prog's output is empty, so everything is good.
 
> MSVC likes it and everything is padded
> and aligned. Iirc, std::vector should honor alignas, right?
 
I don't think it is the vector specifically that honors alignas; I
rather think the data structure of type ct_page, whether allocated by
vector or in any other legal way, can get aligned according to its
alignment specifier (on my system, it's implementation specific as
alignof(std::max_align_t) is 16, so the alignments of 64 are "extended"
-- but happen to be supported).
 
On a side note, see if your code benefits from using
std::hardware_constructive_interference_size instead of hardcoded 64 for
the cacheline_size.
 
HTH
-Pavel
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 16 03:46PM -0700

On 5/15/2023 8:01 PM, Pavel wrote:
 
>> Nice! I do not have that version installed. How does it respond to the
>> program that uses a std::vector?
> The vector prog's output is empty, so everything is good.
 
Excellent. Thanks for taking the time to give it a go, Pavel. I really
need to install a recent GCC. Stuck on MSVC right now for a project.
 
 
> alignment specifier (on my system, it's implementation specific as
> alignof(std::max_align_t) is 16 so the alignments of 64 are "extended"
> -- but happen to be supported.
 
Remember those early hyperthreaded Pentiums? There was something called
the aliasing problem that would make two hyperthreaded threads falsely
share cache lines and destroy performance. Iirc, the L2 cache lines
were 128 bytes, split into two 64-byte regions. The workaround was to
offset the stacks of each thread using alloca.
 
 
> On a side note, see if your code benefits from using
> std::hardware_constructive_interference_size instead of hardcoded 64 for
> the cacheline_size.
 
It will definitely benefit. I am wondering whether
std::hardware_constructive_interference_size is always guaranteed to be
the L2 cache line size.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 16 03:27PM -0700

>> the damn thing, being nice... In the mean time, its digging a grave site
>> to bury your job at...
 
> What?
 
The title of the song I linked to is:
 
How Could - U - Love a Killa
 
I was trying to compare the AI to a potential job killer. Yet, people
love it and will gladly teach it, even though there is a high
probability that it will take their damn jobs away in the near future.
Does that make better sense?
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 16 03:37PM -0700


>> https://youtu.be/ao-Sahfy7Hg?t=75
 
>> Bow down before the one you serve...
 
> Ah, thats what your previous post was about!
 
Indeed! Sorry for being so cryptic. ;^o
 
> Haven't heard that track for a while. Funk metal was big for a while though
> seems to have vanished now. Mordred and Mindfunk were good examples of that
> genre.
 
Agreed. I had not heard of Mordred or Mindfunk until now, thanks. Need
to check them out. Fwiw, I have been on a long programming binge in C++
since last night. Got some pretty results:
 
 
https://i.ibb.co/B3HQ258/image.png
 
:^)
Vir Campestris <vir.campestris@invalid.invalid>: May 16 09:43PM +0100

On 14/05/2023 02:46, Bonita Montero wrote:
>> modern machines?
 
> There are not many differences in access times on old HDDs.
> And DRAM access times haven't changed much in the last 20 yrs.
 
Remind me what was the memory cycle time on an IBM360?
 
I started off on mainframes. The system would quite happily perform IO
on several disc drives simultaneously without affecting the CPU at all.
IIRC our big system had ~40 disc drives.
 
The speeds of all these things have gone up, but the ratios between the
speeds have changed too. Some of us do go back a long way.
 
Andy
scott@slp53.sl.home (Scott Lurndal): May 16 09:57PM


>Remind me what was the memory cycle time on an IBM360?
 
>I started off on mainframes. The system would quite happily perform IO
>on several disc drives simultaneously without affecting the CPU at all.
 
I think that depends on the model. The lower-end models didn't have
real DMA IIRC.
 
>IIRC our big system had ~40 disc drives.
 
The more important consideration for bandwidth is the channel
(bus/tag) capacity and the ability of the host storage controller
(channel program) to interleave accesses to multiple drives on
a channel (sometimes splitting seek from read to allow the heads to
position while transferring from a different drive on the string).
 
Burroughs disk subsystems could multiplex eight (1MByte/s) host channels to a
string of 16 drives, so any eight could be transferring into memory
simultaneously (well, interleaved by the I/O controller when
accessing main memory).
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 15 10:12PM -0700

On 5/12/2022 12:16 PM, Chris M. Thomasson wrote:
> Using my experimental vector field to generate a fractal formation. Here
> is generation two:
 
> https://fractalforums.org/gallery/1612-120522191048.png
 
Fwiw, a worm hole fractal field of mine:
 
https://i.ibb.co/LPq6tz8/image.png
 
FB:
https://www.facebook.com/photo?fbid=967529867739345&set=pcb.967523291073336
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
