Sunday, January 20, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 11 topics

queequeg@trust.no1 (Queequeg): Jan 20 09:03PM


> (people of somewhat higher quality than you-
 
Do you consider yourself a person of a "higher quality" than Rick?
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
David Brown <david.brown@hesbynett.no>: Jan 14 12:10AM +0100

On 12/01/2019 20:25, fir wrote:
>> people try to use it that way, and that some people think that was what
>> it used to be - those are mistakes made from ignorance and misunderstanding.
 
> lol, imo you get into realm of lies here.. maybe its your ignorance?
 
Fortunately for me, no one considers your opinion to be worth the pixels
it is written on.
 
> c was intendet to give a control on machine resources (two famous of them - (b)its and (c)ycles)
 
C was never intended to give control of cycles.
queequeg@trust.no1 (Queequeg): Jan 14 12:25PM


> end it moron, too much bandwidth wasted
 
Said fir, quoting 167 lines or 9 kB of text.
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
gazelle@shell.xmission.com (Kenny McCormack): Jan 15 01:14PM

In article <729758a6-375c-4c88-9472-62e653e1f4df@googlegroups.com>,
 
>Pick one. The worst, most ridiculous teaching you see me teach.
>We'll examine why I teach it ... and you can prove me wrong systematically.
 
All of it.
 
As fir notes, all you have to do is to do a search on your name, collect
all of it into a file, and you will have a corpus of 100% BS. All for
your perusal.
 
Quite simple, in fact.
 
--
"There are two things that are important in politics.
The first is money and I can't remember what the second one is."
- Mark Hanna -
scott@slp53.sl.home (Scott Lurndal): Jan 14 02:22PM


>> I personally think it's desirable to add a version of memcpy that's defined to do a byte-at-a-time, at least optionally memory_order_relaxed, atomic copy. I think most standard memcpy implementations would already be usable as an implementation. And that would solve this problem for the case in which there is no processing inside the reader critical section, and the data is trivially copyable. But so far that's only a personal opinion. And not all the details are clear.
 
>That sort of memcpy would be very useful. And I agree that a lot of
>existing memcpy impls should be okay, or rather easily adapted to work.
 
Why a memcpy? Just code it directly as an assignment loop. Byte-wise bulk
copies are horribly inefficient.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Jan 14 08:54PM -0800

>> least.
 
> The memcpy function I was proposing would do roughly that, but it would not give you the guarantee that the load of e.g. the member "a" is atomic. You could instead read parts of "a" written by different concurrent writes. It would guarantee that each byte read corresponds to a byte of "a" that was actually written at some point.
 
> This greatly reduces the cost on some hardware. And it's sufficient for simple seq_lock critical sections. If the read of "a" reads from multiple writes, you're going to throw the result away anyway.
 
Sounds good enough, will work fine for a seq_lock.
 
 
> The actual implementation of this memcpy would read at whatever granularity was convenient for the hardware, just like memcpy does now.
 
Should it be a compiler barrier? I think so. Humm...
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Jan 14 10:54PM -0800

On 1/14/2019 8:54 PM, Chris M. Thomasson wrote:
 
>> The actual implementation of this memcpy would read at whatever
>> granularity was convenient for the hardware, just like memcpy does now.
 
> Should it be a compiler barrier? I think so. Humm...
 
A memcpy using byte-level relaxed atomic loads can work on bytes, or on
whatever granularity is fastest for the underlying system.
 
A memcpy using member-by-member relaxed atomic loads always works on the
actual members, just like my simple snapshot example code.
 
They would be different, but it does not matter wrt just reading in a
seq_lock read-side critical section. However, if there were some
processing within the section, the member-by-member load, like my
snapshot, might be "better", in a sense.
 
All are compiler barriers, just like a std::atomic operation.
 
Or, perhaps even a magical atomic_memcpy<std::memory_order> wrt relaxed,
acquire or seq_cst? All of the membars would be compatible with
std::atomic::load...
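 
To make that concrete, here is a rough byte-granularity sketch of such an
atomic_memcpy using C++20 std::atomic_ref. The name atomic_memcpy_relaxed
and its signature are made up for illustration, not a standard or proposed
API, and src is taken as non-const only because atomic_ref (before C++26)
must refer to a non-const object:
 
------------------------------------------------------------------------
#include <atomic>
#include <cstddef>

inline void atomic_memcpy_relaxed( void* dst, void* src, std::size_t n )
{
    unsigned char* d = static_cast<unsigned char*>( dst );
    unsigned char* s = static_cast<unsigned char*>( src );

    for ( std::size_t i = 0; i < n; ++i )
    {
        // Byte-granularity atomic load: no data race, but a multi-byte
        // member may still be stitched together from different writes;
        // a seqlock reader detects that via the sequence number and retries.
        d[i] = std::atomic_ref<unsigned char>( s[i] )
                   .load( std::memory_order_relaxed );
    }
}
------------------------------------------------------------------------
 
A seqlock reader would load the sequence counter, do this copy, then reload
the counter and retry if it changed or was odd. A real implementation would
read at whatever granularity the hardware makes cheap.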
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Jan 14 10:20PM -0800

On 1/4/2019 6:37 PM, Chris M. Thomasson wrote:
>         w->process();
 
>         // Now, we can gain the next pointer.
>         ct_work* next = w->get_next();
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
Just an idea... What about combining the user processing with the loading
of the next pointer? Processing could then convert the code above into
something like:
 
ct_work* next = w->process();
 
It seems rather horrible to intrude into user processing. Damn.
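 
A tiny sketch of that shape, with a made-up node layout and drain loop, and
ignoring the memory-ordering details of the original code:
 
------------------------------------------------------------------------
struct ct_work
{
    ct_work* m_next = nullptr;   // hypothetical member, for illustration only

    virtual ~ct_work() = default;

    // User processing folded together with the load of the next pointer,
    // so the caller never has to touch m_next itself.
    virtual ct_work* process()
    {
        // ... user work on this node goes here ...
        return m_next;
    }
};

void drain( ct_work* w )
{
    while ( w != nullptr )
    {
        w = w->process();   // processing and next-pointer load combined
    }
}
------------------------------------------------------------------------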
 
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jan 15 12:17AM +0100

On 12.01.2019 16:26, Unto Sten wrote:
 
> Okay, I have to study more about lambdas. Could you refer to some
> C++ example code that demonstrates the functor property of lambda
> functions?
 
The following code aids in unwrapping the exception messages of a C++11
exception that can contain a nested exception that can contain..., and
so on. In this code cppx::C_str is a `typedef` of `char const*`. Usually
it's ungood to name pointer types but I've found this one useful, so.
 
 
------------------------------------------------------------------------
#pragma once    // Source encoding: UTF-8 with BOM (π is a lowercase Greek "pi").

#include <cppx-core/collections/is_empty.hpp>                  // cppx::is_empty
#include <cppx-core/core-language/$use_from_namespace.hpp>     // CPPX_USE_STD
#include <cppx-core/text/C_str_.hpp>                           // cppx::C_str

#include <exception>        // std::(exception, rethrow_exception)
#include <functional>       // std::function
#include <utility>          // std::move

namespace cppx
{
    CPPX_USE_STD( exception, function, move, rethrow_if_nested, string );

    inline void call_with_description_lines_from(
        const exception&                        x,
        const function<void( const C_str )>&    f
        )
    {
        f( x.what() );
        try
        {
            rethrow_if_nested( x );
        }
        catch( const exception& rx )
        {
            call_with_description_lines_from( rx, f );
        }
        catch( ... )
        {
            f( "<a non-standard exception>" );
        }
    }

    inline auto description_lines_from( const exception& x )
        -> string
    {
        string result;
        const auto add = [&]( const C_str s ) -> void
        {
            if( not is_empty( result ) )
            {
                result += '\n';
            }
            result += s;
        };
        call_with_description_lines_from( x, add );
        return result;
    }

}  // namespace cppx
------------------------------------------------------------------------
 
 
Here the lambda in `description_lines_from` adds lines of text to the
local variable in that function, `result`.
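 
For a smaller, self-contained demonstration of the "functor property"
itself, here is a sketch (separate from the code above, names made up)
showing that a lambda behaves like an object of a hand-written class with
an operator():
 
------------------------------------------------------------------------
#include <iostream>
#include <string>

struct Append       // hand-written function object (functor)
{
    std::string& result;
    void operator()( const char* s ) const { result += s; result += '\n'; }
};

int main()
{
    std::string r1, r2;

    Append append_f{ r1 };                      // explicit functor object
    auto   append_l = [&r2]( const char* s )    // lambda: a generated functor
    {
        r2 += s;
        r2 += '\n';
    };

    append_f( "one" );  append_f( "two" );
    append_l( "one" );  append_l( "two" );

    std::cout << ( r1 == r2? "same output\n" : "different output\n" );
}
------------------------------------------------------------------------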
 
 
[snip]
 
 
Cheers!,
 
- Alf
Keith Thompson <kst-u@mib.org>: Jan 13 10:38PM -0800


>> I don't see Rick's postings, but it's hard for my killfile to keep
>> up with all the people who feel the need to reply to him publicly.
 
> NO.
 
Yes.
 
> We frequently get that old "don't respond to him and he will go away".
 
He won't go away whether we respond to him or not. I've solved the
direct problem for myself; as far as I'm concerned, he effectively
doesn't exist. The responses continue to be annoying. (And they're
not all in threads with obviously off-topic subject lines.)
 
You won't accomplish anything by publicly engaging him. (And I
probably won't accomplish anything by talking to you about it.)
 
[...]
 
--
Keith Thompson (The_Other_Keith) kst@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
gazelle@shell.xmission.com (Kenny McCormack): Jan 14 11:04AM

In article <ln4labeoay.fsf@kst-u.example.com>,
>>> up with all the people who feel the need to reply to him publicly.
 
>> NO.
 
>Yes.
 
NO
 
>not all in threads with obviously off-topic subject lines.)
 
>You won't accomplish anything by publicly engaging him. (And I
>probably won't accomplish anything by talking to you about it.)
 
That's pretty much what it boils down to, isn't it? You say one thing,
your opponents say another. Who's to say? Who's the judge? Answer: No one.
 
As I said before:
 
You should probably look into setting up your newsreader to have a
whitelist (i.e., a list of the few people who post here whose stuff you
actually will deign to read) rather than trying to maintain a blacklist.
 
--
The last time a Republican cared about you, you were a fetus.
Elephant Man <conanospamic@gmail.com>: Jan 20 09:00PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 20 09:01PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 20 09:01PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 20 09:01PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 20 10:23PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 20 10:23PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 20 10:23PM

Cancellation article issued by a JNTP moderator via Nemo.
Vir Campestris <vir.campestris@invalid.invalid>: Jan 20 09:56PM

On 20/01/2019 05:30, Real Troll wrote:
 
> <https://www.cs.helsinki.fi/u/luontola/tdd-2009/kalvot/02-Code-Quality.pdf>
 
"No lower-case L or upper-case o, ever. int elapsedTimeInDays;"
 
Oh look, a lower case L.
 
"camel case hard to read"
 
While I appreciate his efforts - and largely agree with them - he could
at least be self-consistent.
 
Incidentally, the bugs I've had over the years that have been hardest to
track down are those that are not amenable to testing. Things like race
conditions, uninitialised data, and wild pointers.
 
Andy
Ian Collins <ian-news@hotmail.com>: Jan 21 07:52AM +1300

On 21/01/2019 05:31, Scott Lurndal wrote:
> of python2 code, which becomes a second-class citizen in newer distributions.
> Nobody is going to spend the time or dollars to upgrade the python code
> for python3.
 
Pity the poor sods who run Windows Server 2003... As for Python 3, why
bother?
 
> We have also discontinued use of various C++ features in certain
> parts of the codebase (such as soi disant smart pointers) for performance reasons.
 
I still find your claims about smart pointers hard to believe, unless
you were doing something seriously wrong with them.
 
--
Ian.
David Brown <david.brown@hesbynett.no>: Jan 20 08:22PM +0100

On 20/01/2019 17:30, James Kuyper wrote:
> using a corresponding identifier defined by such programmers. Of course,
> if you're absolutely certain that your code will never have to work with
> code written by such people, then you don't need such prefixes.
 
How about if you are absolutely certain that such code is rare, and not
your fault or your problem, and that neither you nor others reading your
code should suffer extra ugliness because of such rare code?
 
You can't protect against people being malicious or stupid. It can make
sense to take precautions against more likely accidental mistakes, but
this is not one of them. And the :: prefix to "std" will not help if
someone puts their own code in "std" - it would only help against
someone making a new nested namespace called "std". It is simply an
ugly way for some C++ programmers to make it look like they are smarter
than others, when they in fact are not.
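 
A minimal sketch of the one case the prefix does guard against, with
made-up names, just to be concrete:
 
------------------------------------------------------------------------
#include <vector>

namespace app
{
    namespace std    // legal, if perverse: a *nested* namespace named std
    {
        template< class T > class vector {};
    }

    void f()
    {
        // std::vector<int> a;   // inside app, this would find app::std::vector
        ::std::vector<int> b;    // the leading :: always names the real one
        (void) b;
    }
}

int main()
{
    app::f();
}
------------------------------------------------------------------------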
Vir Campestris <vir.campestris@invalid.invalid>: Jan 20 09:30PM

> I've wondered
> if 2011 C++ is the new C.
 
You can't run any reasonable subset of C++ without an RTL.
 
I've run C bare metal. No RTL at all, and a few lines of assembler to
start it up.
 
Andy
queequeg@trust.no1 (Queequeg): Jan 20 09:11PM


> 2) The output is "yes x = 0 yes". Why does the call to print() work
> after the destructor has been called to destroy the object?
 
The method call works, because methods are not destroyed when the object
is destroyed. Only the data is. I don't know what the standard says about
it, but I've seen this behavior many times.
 
Access to `x` within print() works because you're fortunate and nothing
has overwritten that memory yet. It's not guaranteed to work.
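 
For what it's worth, here is a minimal reconstruction (not the original
poster's code) of the kind of situation being described; the second call
to print() is undefined behaviour even though it appears to work:
 
------------------------------------------------------------------------
#include <iostream>

struct S
{
    int x = 42;
    void print() const { std::cout << "x = " << x << '\n'; }
    ~S() { x = 0; }
};

int main()
{
    S* p = new S;
    p->print();            // fine: prints x = 42
    p->~S();               // the lifetime of *p ends here
    p->print();            // undefined behaviour; often still prints x = 0
    operator delete( p );  // release the storage without running ~S again
}
------------------------------------------------------------------------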
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
Horizon68 <horizon@horizon.com>: Jan 20 11:55AM -0800

Hello..
 
 
I have just looked at the following Scalable hash map:
 
https://groups.google.com/forum/#!topic/lock-free/qCYGGkrwbcA
 
 
As you can read, its cost for a read transaction (a find operation) is
about 30 cycles, which is what makes it interesting.
 
But you have to know that I have "invented" the following scalable
algorithms and their implementations; read about them:
 
"LW_Asym_RWLockX that is a lightweight scalable Asymmetric Reader-Writer
Mutex that uses a technic that looks like Seqlock without looping on the
reader side like Seqlock, and this has permited the reader side to be
costless, it is FIFO fair on the writer side and FIFO fair on the reader
side and it is of course Starvation-free and it does spin-wait, and my
Asym_RWLockX, a lightweight scalable Asymmetric Reader-Writer Mutex that
uses a technic that looks like Seqlock without looping on the reader
side like Seqlock, and this has permited the reader side to be costless,
it is FIFO fair on the writer side and FIFO fair on the reader side and
it is of course Starvation-free and it does not spin-wait, but waits on
my SemaMonitor, so it is energy efficient."
 
You can download them from my website:
 
https://sites.google.com/site/scalable68/c-synchronization-objects-library
 
 
And as you have noticed, since my scalable algorithms above are costless
on the reader side, I will use them in my following scalable
parallel hashtable to make it scalable and costless on the reader side
of my scalable Asymmetric Reader-Writer Mutex:
 
https://sites.google.com/site/scalable68/scalable-parallel-hashlist
 
 
And I will use them inside my following scalable Parallel Varfiler to
make it scalable and costless on the reader side of my scalable
Asymmetric Reader-Writer Mutex:
 
https://sites.google.com/site/scalable68/scalable-parallel-varfiler
 
 
And I have just "enhanced" my Scalable Parallel Varfiler benchmarks;
please run the following multicore benchmark for my scalable Parallel
Varfiler, called "test3.exe", which you will find inside the zip file.
You can download the zip file from:
 
https://sites.google.com/site/scalable68/parallel-varfiler-benchmarks
 
 
And you can download my Scalable Parallel Varfiler from:
 
https://sites.google.com/site/scalable68/scalable-parallel-varfiler
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jan 20 11:51AM -0800

Hello...
 
 
My new "invention" that is an enhanced fully scalable algorithm is
finished and is coming soon..
 
I have just enhanced "much" more my "invention" of a scalable algorithm
of a scalable reference counting with efficient support for weak
references, i think i am the only one who has invented this scalable
algorithm, because it is the only one who is suited for non-garbage
collecting languages such as C++ and Rust and Delphi, and i have just
made my enhanced algorithm fully scalable on manycores and multicores
and NUMA systems by using a clever scalable algorithm, so i think i will
"sell" my new invention that is my enhanced scalable reference counting
algorithm with efficient support for weak references and its
implementation to Microsoft or to Google or to Intel or Embarcadero
 
And about memory safety and memory leaks in programming languages..
 
Memory safety is the state of being protected from various software bugs
and security vulnerabilities when dealing with memory access, such as
buffer overflows and dangling pointers.
 
I am also working with Delphi and FreePascal and C++, and as you have
noticed I have invented a scalable reference counting with efficient
support for weak references that is really powerful; read about it and
download it from here (it is the Delphi and FreePascal implementation):
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
And you have to understand that this invention of mine solves the
problem of dangling pointers and the problem of memory leaks, and this
reference counting of mine is also "scalable". I think that this
invention of mine is the only one of its kind that you will find; you
will not find it in C++ and you will not find it in Rust.
 
Also, Delphi and FreePascal detect out-of-bounds accesses in arrays and
strings when range checks are enabled:
 
In the {$R+} state, all array and string-indexing expressions are
verified as being within the defined bounds, and all assignments to
scalar and subrange variables are checked to be within range. If a
range check fails, an ERangeError exception is raised (or the program is
terminated if exception handling is not enabled).
 
Range checks are OFF by default. To enable them, you can add this directive
to your code:
 
{$RANGECHECKS ON}
 
You can also use generic (template) style containers for bounds checking;
read my following writing to understand more:
 
About C++ and Delphi and FreePascal generic (template) style containers..
 
Generics.Collections of Delphi and FreePascal for generic (template)
style containers that you can download from here:
 
https://github.com/maciej-izak/generics.collections
 
TList of Generics.Collections of Delphi and FreePascal is implemented
the same way as C++ STL vectors: they are array-based. And since the data
structures are the same, performance should also be comparable.
 
So I've done a small test between TList of Generics.Collections of
Delphi and FreePascal and a C++ vector: the addition of 3,000,000
records of 16 bytes each, in one loop. Here are the results:
 
TList time  = 344 ms
Vector time = 339 ms
 
It seems they are about the same; the test uses only one function on each
side (List.add, vector.push_back).
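 
For reference, the C++ side of such a test looks roughly like this (my own
sketch, not the actual benchmark; the 16-byte Record layout is assumed):
 
------------------------------------------------------------------------
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

struct Record { std::uint64_t a, b; };   // 16 bytes per record

int main()
{
    std::vector<Record> v;

    const auto t0 = std::chrono::steady_clock::now();
    for ( int i = 0; i < 3000000; ++i )
        v.push_back( Record{ std::uint64_t( i ), std::uint64_t( i ) } );
    const auto t1 = std::chrono::steady_clock::now();

    const auto ms =
        std::chrono::duration_cast<std::chrono::milliseconds>( t1 - t0 ).count();
    std::cout << v.size() << " records in " << ms << " ms\n";
}
------------------------------------------------------------------------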
 
STL vectors with at() and the TList of Generics.Collections of Delphi
and FreePascal perform bounds checking.
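 
A minimal sketch of that bounds checking on the C++ side: at() throws
std::out_of_range instead of silently reading past the end:
 
------------------------------------------------------------------------
#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v{ 1, 2, 3 };
    try
    {
        std::cout << v.at( 10 ) << '\n';   // index out of bounds
    }
    catch ( const std::out_of_range& e )
    {
        std::cout << "range check failed: " << e.what() << '\n';
    }
}
------------------------------------------------------------------------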
 
 
So I think that with my invention above and with all my other inventions,
my scalable algorithms and their implementations in C++ and Delphi and
FreePascal that you will find on my following website, Delphi and
FreePascal have become powerful:
 
https://sites.google.com/site/scalable68/
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
