Wednesday, December 19, 2018

Digest for comp.lang.c++@googlegroups.com - 17 updates in 6 topics

"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: Dec 18 01:30PM -0800

> // Copyright (C) 2000 - 2009, Richard J. Wagner
> // All rights reserved.
> //
 
Fwiw: It really does not create "true" random numbers. Instead, it
outputs pseudo random numbers. If you are interested in a crypto safe
(e.g., wrt a good crypto safe hash, sha384?) HMAC based generation of
pseudo random numbers, look here:
 
http://funwithfractals.atspace.cc/ct_cipher
 
Here is a sample C impl with a hardcoded in-memory secret key of
"Password" and sha256 for the hash used with the HMAC:
 
https://groups.google.com/d/topic/comp.lang.c/a53VxN8cwkY/discussion
(please, read _all_ if interested...)
 
The ciphertext can actually be used for crypto safe pseudo-random numbers.
 
The PRNG stream would be password protected, such that different
passwords produce radically different streams. Also, the random numbers
are what actually encrypt the files; so we can encrypt files created
from the output of an actual TRNG. The seed for the CSPRNG is the
password together with the choice of hash algorithm.
 
Interested?
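 
A minimal sketch of the HMAC-counter idea itself (this is not the
ct_cipher code; it assumes OpenSSL's one-shot HMAC() is available, a
build linked with -lcrypto, and that in real use the key would come from
a proper password KDF rather than being used raw):
 
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>
 
// Concatenate HMAC-SHA256(key, counter) blocks to form a pseudo-random stream.
static std::vector<unsigned char> hmac_stream(const std::string& key,
                                              std::uint64_t nblocks)
{
    std::vector<unsigned char> out;
    unsigned char block[EVP_MAX_MD_SIZE];
    for (std::uint64_t ctr = 0; ctr < nblocks; ++ctr)
    {
        unsigned int len = 0;
        HMAC(EVP_sha256(),
             key.data(), static_cast<int>(key.size()),
             reinterpret_cast<const unsigned char*>(&ctr), sizeof ctr,
             block, &len);
        out.insert(out.end(), block, block + len);
    }
    return out;
}
 
int main()
{
    for (unsigned char b : hmac_stream("Password", 2))   // 64 pseudo-random bytes
        std::printf("%02x", b);
    std::printf("\n");
}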
David Brown <david.brown@hesbynett.no>: Dec 19 09:44AM +0100

On 18/12/18 22:30, Chris M. Thomasson wrote:
> On 12/13/2018 10:27 PM, jmani.20@gmail.com wrote:
 
> Fwiw: It really does not create "true" random numbers. Instead, it
> outputs pseudo random numbers.
 
/All/ deterministic algorithms generate pseudo random sequences.
 
> (e.g., wrt a good crypto safe hash, sha384?) HMAC based generation of
> pseudo random numbers, look here:
 
> http://funwithfractals.atspace.cc/ct_cipher
 
First, /you/ are not qualified to judge what is a "crypto safe"
algorithm - you want your algorithm to be studied deeply by a whole lot
of experts looking for weak points.
 
Secondly, your algorithm does not generate true random numbers either -
assuming your encryption algorithm is good enough, you have a
cryptographically secure pseudo random number generator. (Of course, a
CSPRNG is usually just as good as a TRNG.)
 
I don't mean to disparage your work, and I don't really doubt that your
algorithm is secure - but in this field, you need serious justification
and confirmation from experts before you can make claims about being secure.
 
Juha Nieminen <nospam@thanks.invalid>: Dec 19 12:13PM

> Fwiw: It really does not create "true" random numbers.
 
I wish people stopped saying this, because it causes confusion and
misunderstandings. When this is repeated over and over for PRNGs,
even cryptographically strong ones, the vast majority of people get
the false impression that the "randomness" of these PRNGs is somehow
"poorer" than that of a "real" random number source.
 
In reality, the output of a (cryptographically strong) PRNG is essentially
indistinguishable from a true source of randomness, with the exception that
the PRNG will give you the same number stream for the same initial seed.
Otherwise, however, they are indistinguishable.
 
If you are given two very large streams of random numbers (like millions
or even billions of numbers), one from a (cryptographically strong) PRNG
and another from a true source of randomness, there is no test you can
apply to tell for certain which one was produced by which type of
generator. (Such a test may exist for non-strong PRNGs, but one of the
requisites for cryptographic strength is that there really is no
distinguishable pattern or predictability, just like true randomness.)
 
Thus for all intents and purposes, for pretty much any practical
application, it makes exactly zero difference that a PRNG "does not
create true random numbers". For all intents and purposes, in practice,
it does.
 
I wish people stopped claiming it doesn't.
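 
A minimal illustration of the seed point, using std::mt19937_64 (which is
not cryptographically strong, but the seeding behaviour is what matters
here):
 
#include <iostream>
#include <random>
 
int main()
{
    // Two generators seeded identically produce the same stream...
    std::mt19937_64 a(42), b(42);
    bool same_first  = (a() == b());
    bool same_second = (a() == b());
    std::cout << same_first << ' ' << same_second << '\n';   // prints "1 1"
 
    // ...while seeding from an entropy source gives a different stream each run.
    std::random_device rd;
    std::mt19937_64 c(rd());
    std::cout << c() << '\n';
}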
jameskuyper@alumni.caltech.edu: Dec 19 04:38AM -0800

On Wednesday, December 19, 2018 at 7:13:34 AM UTC-5, Juha Nieminen wrote:
> > Fwiw: It really does not create "true" random numbers.
 
> I wish people stopped saying this, because it causes confusion and
> misunderstandings. When this is repeated over and over for PRNGs,
...
> create true random numbers". For all intents and purposes, in practice,
> it does.
 
> I wish people stopped claiming it doesn't.
 
"true random numbers" are a theoretical concept; appealing to
"practicality" doesn't make the distinction disappear. The right
solution is not to pretend that pseudo-random numbers are truly random,
but rather to understand that the fact that they aren't truly random
isn't a problem for most practical purposes.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Dec 19 10:30PM

On Tue, 2018-12-18, David Brown wrote:
> between 1 and 100000000, and a vulnerability causing a buffer overflow
> if it is called with negative values, that will not matter because the
> code will not be used with unchecked values.
 
I guessed something like that, but PRNGs tend to be created once, with
limited user-provided input, and then only stepped in response to
user-provided stimuli, without user data actually entering the
generator. There's not a lot the user can do to cause a buffer overflow.
 
(Contradicting myself: if you can predict the random number sequence,
you can perhaps overload something, just like you can misbalance a
hash table by knowing the hash function and feeding it doctored data.)
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
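 
A sketch of that doctored-data effect, assuming an implementation such as
libstdc++ or libc++ where std::hash<int> is the identity function:
 
#include <iostream>
#include <unordered_set>
 
int main()
{
    std::unordered_set<int> s;
    s.reserve(1000);                          // no rehash during the insertions below
    const std::size_t buckets = s.bucket_count();
 
    // With an identity hash, keys that are multiples of the bucket count all
    // land in bucket 0, so the table degrades into one long chain.
    for (int i = 0; i < 1000; ++i)
        s.insert(static_cast<int>(i * buckets));
 
    std::cout << "buckets: " << s.bucket_count()
              << ", size of bucket 0: " << s.bucket_size(0) << '\n';
}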
Daniel <danielaparker@gmail.com>: Dec 18 11:07PM -0800

Suppose I have
 
#include <cstdint>
#include <iostream>
#include <iterator>
#include <type_traits>
#include <vector>
 
template <class T, class S, class Enable = void>
struct call_traits
{
    static void f(const S& val)
    {
        // dependent condition, so the assertion only fires if this primary
        // template is actually instantiated
        static_assert(!sizeof(S), "No match here");
    }
};
 
// is_sequence_container_of
 
template <class T, class S, class Enable = void>
struct is_sequence_container_of : std::false_type {};
 
template <class T, class S>
struct is_sequence_container_of<T, S,
    typename std::enable_if<std::is_same<
        typename std::iterator_traits<typename S::iterator>::value_type, T>::value
    >::type>
    : std::true_type {};
 
template <class T, class S>
struct call_traits<T, S,
    typename std::enable_if<is_sequence_container_of<T, S>::value>::type>
{
    static void f(const S& val)
    {
        std::cout << "general sequence container\n";
    }
};
 
struct own_vector : std::vector<int64_t>
{
    using std::vector<int64_t>::vector;
};
struct a_vector : std::vector<int64_t> { using std::vector<int64_t>::vector; };
 
// (1)
template <>
struct call_traits<int64_t, own_vector>
{
    static void f(const own_vector& val)
    {
        std::cout << "my own vector\n";
    }
};
 
int main(int argc, char **argv)
{
    call_traits<int64_t, own_vector>::f(own_vector());
    call_traits<int64_t, a_vector>::f(a_vector());
    call_traits<int64_t, std::vector<int64_t>>::f(std::vector<int64_t>());
}
 
This compiles and gives the expected results:
 
my own vector
general sequence container
general sequence container
 
However, if I replace (1) with the more general
 
// (2)
 
template <class T>
struct call_traits<T, own_vector>
{
    static void f(const own_vector& val)
    {
        std::cout << "my own vector\n";
    }
};
 
I get
 
Error 'call_traits<int64_t,own_vector,void>': more than one partial specialization matches the template argument list
 
could be 'call_traits<T,S,std::enable_if<is_sequence_container_of<T,S,void>::value,void>::type>'
 
or 'call_traits<T,own_vector,void>'
 
Suggestions for how to make (2) work?
 
Thanks,
Daniel
Sam <sam@email-scan.com>: Dec 19 07:05AM -0500

Daniel writes:
 
> 'call_traits<T,S,std::enable_if<is_sequence_container_of<T,S,void>::value,void>::type>'
 
> or 'call_traits<T,own_vector,void>'
 
> Suggestions for how to make (2) work?
 
You'll need to have your (2) also use SFINAE, but with a logically opposite
result to the novel that's written for your existing specialization. So,
for containers where the existing specialization resolves, your (2)'s SFINAE
fails and takes itself out of overload resolution.
 
Perhaps, take:
 
> std::enable_if<std::is_same<typename std::iterator_traits<typename
> S::iterator>::value_type,T>::value
> >::type>
 
Lop off the enclosing std::enable_if< … ::type>, put the remaining condition
into its own traits class, then use a negation of it (std::negation, or
simply !…::value): the plain condition in the existing specialization, and
the negated version in your own_vector specialization.
Daniel <danielaparker@gmail.com>: Dec 19 10:32AM -0800

On Wednesday, December 19, 2018 at 7:05:54 AM UTC-5, Sam wrote:
> result to the novel that's written for your existing specialization. So,
> for containers where the existing specialization resolves, your (2)'s SFINAE
> fails and takes itself out of overload resolution.
 
That doesn't give the desired result, though. The goal is to get the same
behaviour as (1), not to have the own_vector specialization take itself out
of consideration.
 
Daniel
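 
One sketch that does reproduce (1)'s output, under the assumption that the
only requirement is to keep own_vector out of the general partial
specialization (so the exclusion goes into the general case rather than
into (2); the primary template is left as a mere declaration here for
brevity):
 
#include <cstdint>
#include <iostream>
#include <iterator>
#include <type_traits>
#include <vector>
 
struct own_vector : std::vector<int64_t> { using std::vector<int64_t>::vector; };
struct a_vector   : std::vector<int64_t> { using std::vector<int64_t>::vector; };
 
template <class T, class S, class Enable = void>
struct call_traits;   // only declared: unmatched combinations will not compile
 
template <class T, class S, class Enable = void>
struct is_sequence_container_of : std::false_type {};
 
template <class T, class S>
struct is_sequence_container_of<T, S,
    typename std::enable_if<std::is_same<
        typename std::iterator_traits<typename S::iterator>::value_type, T>::value
    >::type>
    : std::true_type {};
 
// general sequence containers, with own_vector explicitly carved out
template <class T, class S>
struct call_traits<T, S,
    typename std::enable_if<is_sequence_container_of<T, S>::value &&
                            !std::is_same<S, own_vector>::value>::type>
{
    static void f(const S&) { std::cout << "general sequence container\n"; }
};
 
// (2) exactly as posted, no longer ambiguous
template <class T>
struct call_traits<T, own_vector>
{
    static void f(const own_vector&) { std::cout << "my own vector\n"; }
};
 
int main()
{
    call_traits<int64_t, own_vector>::f(own_vector());
    call_traits<int64_t, a_vector>::f(a_vector());
    call_traits<int64_t, std::vector<int64_t>>::f(std::vector<int64_t>());
}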
"AL.GleeD NICE" <al.glead.aa@gmail.com>: Dec 19 10:11AM -0800

· Defragment tables with lots of accumulated events
Juha Nieminen <nospam@thanks.invalid>: Dec 18 07:10PM

> and store their pointers, how is that fragmenting anything? Or
> if I allocate all 1000 objects in a single allocation and
> divide the allocation up?
 
If all your program ever allocates is those 1000 objects, then maybe.
However, programs do a lot more than that.
 
> And, for that matter, who cares if virtual memory is "fragmented"?
> (Physical memory is, of course, always "fragmented" into pages).
 
Traversing RAM in a random order is slower than traversing it
consecutively (regardless of whether it's physical or virtual
memory). When traversing it consecutively, the hardware prefetches
data, among other things.
 
 
> 1) How does it increase overall memory consumption? Are you
> referring to the 8-16 byte overhead per allocation? That's
> in the noise.
 
Because memory fragmentation causes many gaps of free memory to
form between allocated blocks. When these gaps are too small for
a new allocation, additional memory will be allocated even though
there may be a lot of it free. It's just that all that free memory
is in such small gaps that the new allocation may not fit in any of
them.
 
There's a reason why many garbage-collected runtime systems (such
as the Java runtime) perform memory compaction. (Memory compaction
is rearranging all the allocated blocks into consecutive memory,
removing all the free gaps.)
 
> 2) How does it 'increase the amount (number) of cache misses',
> particularly when each object consumes multiple cache lines?
 
Because traversing memory in random order causes more cache misses
than traversing it consecutively.
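 
A rough micro-benchmark sketch of that claim (the numbers depend heavily
on the hardware and its prefetchers, as discussed elsewhere in the
thread):
 
#include <algorithm>
#include <chrono>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>
 
// Sum the same data through an index array that is either sorted
// (sequential walk) or shuffled (random walk over the same addresses).
static long long walk(const std::vector<int>& data,
                      const std::vector<std::size_t>& order)
{
    long long sum = 0;
    for (std::size_t i : order) sum += data[i];
    return sum;
}
 
int main()
{
    const std::size_t n = 1 << 24;            // ~64 MiB of ints, larger than most LLCs
    std::vector<int> data(n, 1);
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
 
    auto time = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        volatile long long s = walk(data, order);   // volatile: keep the call alive
        auto t1 = std::chrono::steady_clock::now();
        (void)s;
        std::cout << label << ": "
                  << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
                  << " ms\n";
    };
 
    time("sequential");
    std::shuffle(order.begin(), order.end(), std::mt19937_64{12345});
    time("shuffled");
}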
scott@slp53.sl.home (Scott Lurndal): Dec 18 08:33PM

>> divide the allocation up?
 
>If all your program ever allocates is those 1000 objects, then maybe.
>However, programs do a lot more than that.
 
Most of mine are performance sensitive and allocate upon
startup, or use pools.
 
 
 
>Traversing RAM in a random order is slower than traversing it
>consecutively (regardless of whether it's physical or virtual
>memory).
 
That's certainly not accurate. Access latency to DRAM is not
address sensitive (modulo minor effects such as leaving pages
open in a DIMM, any memory controller interleaving behavior,
and internal fifo backpressure).
 
The various levels (L1I, L1D, L2, LLC) of cache are designed
to reduce the average access latency to DRAM.
 
>When traversing it consecutively, the hardware prefetches
>data, among other things.
 
The team I work with is responsible for architecting
prefetchers and we do extensive real-world workload tracing
in order to have sufficient data to decide whether a particular
prefetcher is achieving the required performance. Few
interesting server workloads access memory using regular strides;
random accesses are much more common (large graph data structures,
for example).
 
>Because traversing memory in random order causes more cache misses
>than traversing it consecutively.
 
That's entirely dependent upon the stride distance, which in real-world
workloads is generally larger than the cache line size.
Vir Campestris <vir.campestris@invalid.invalid>: Dec 18 09:12PM

On 14/12/2018 14:02, Scott Lurndal wrote:
> avoided. I prefer to think that if you don't understand
> how to use them properly, you shouldn't be programming in
> a language that supports them.
 
I may be one of them.
 
I believe _raw_ pointers should be avoided.
 
Certainly if I find code that says
 
... * foo = new ...
 
alarm bells go off. unique_ptr is almost always a better solution.
 
Andy
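 
A minimal sketch of the preferred form (Widget is just an illustrative
name):
 
#include <iostream>
#include <memory>
#include <string>
 
struct Widget
{
    std::string name;
    explicit Widget(std::string n) : name(std::move(n)) {}
};
 
int main()
{
    // Widget* w = new Widget("legacy"); ... delete w;   // the alarm-bell version
    auto w = std::make_unique<Widget>("modern");          // ownership is explicit
    std::cout << w->name << '\n';
}                                                          // destroyed automatically here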
Vir Campestris <vir.campestris@invalid.invalid>: Dec 18 09:16PM

On 14/12/2018 11:25, JiiPee wrote:
> Yes, it has like 1000 slots to start with. But the main point is I wanted
> vector because it's easy to use (add an element in the middle, push_back,
> etc.). array is not easy to use.
 
If you are inserting things in the middle frequently, vector may not be
the right type either.
 
It involves moving all the objects after the insertion up by one.
 
Andy
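 
A small illustration, with arbitrary values; a node-based container such
as std::list splices the element in instead, at the cost of locality and
a per-node allocation:
 
#include <iostream>
#include <iterator>
#include <list>
#include <vector>
 
int main()
{
    std::vector<int> v { 1, 2, 4, 5 };
    v.insert(v.begin() + 2, 3);               // O(n): elements 4 and 5 are moved up
 
    std::list<int> l { 1, 2, 4, 5 };
    l.insert(std::next(l.begin(), 2), 3);     // O(1) once the position is known
 
    for (int x : v) std::cout << x << ' ';
    std::cout << '\n';
}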
Lynn McGuire <lynnmcguire5@gmail.com>: Dec 18 03:24PM -0600

On 12/14/2018 5:23 AM, JiiPee wrote:
> (push_back, remove, clear, insert). array does not have these, so it's
> difficult to use. how do you easily add an element in the middle of an
> array? :)
 
Yup, the key word there is "easily".
 
Lynn
Juha Nieminen <nospam@thanks.invalid>: Dec 19 12:06PM

> I believe _raw_ pointers should be avoided.
 
It depends on what you are doing.
 
If you are creating your own STL-style dynamic data container, raw pointers
*inside* the class may be complete A-ok, and the best way to go (after all,
that's what all standard library data container implementations do).
 
Also, sometimes raw pointers can be used for more than just handling
dynamically allocated memory. For example "iterators" to a static array
are raw pointers, and there's nothing wrong in using them there.
(In fact, quite often std::vector iterators are nothing more than
typedeffed raw pointers, so you are probably using them even without
knowing.)
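 
A minimal example of raw pointers serving as iterators over a built-in
array:
 
#include <algorithm>
#include <cstdio>
#include <iterator>
 
int main()
{
    int data[] = { 3, 1, 4, 1, 5, 9, 2, 6 };
 
    int* first = std::begin(data);   // both are plain int*
    int* last  = std::end(data);
 
    std::sort(first, last);          // pointers satisfy the random-access requirements
 
    for (const int* p = first; p != last; ++p)
        std::printf("%d ", *p);
    std::printf("\n");
}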
ram@zedat.fu-berlin.de (Stefan Ram): Dec 18 06:58PM

Is this possible: A bug (memory leak) in GCC?
 
What is your opinion about this?
 
Udo S. recently reported in the German C++ newsgroup
"comp.lang.iso-c++" that GCC's library contains code like:
 
template<typename _Facet> locale::locale(const locale& __other, _Facet* __f)
{ _M_impl = new _Impl(*__other._M_impl, 1);
 
Let me remind you: The locales contain reference-counting
memory management for facets. When the last locale
referencing a facet (by a pointer) goes away, the facet also
will be deleted. (Unless the facet was constructed with
constructor argument »1«.)
 
Now, Udo said that the locale cannot possibly delete the
facet when the »new« quoted above throws, so there'd be
a leak. He said that the situation with LLVM was similar:
 
void locale::__install_ctor(const locale& other, facet* f, long id)
{ if (f)
__locale_ = new __imp(*other.__locale_, f, id);
else
 
. I then wrote a program for a recent GCC to demonstrate
this supposed leak.
 
The following output of my program shows a situation where
»new _Impl« does not throw, and there is no leak:
 
Allocate facet with new:
New 736 bytes at 0x3ca940.
Create new locale with facet:
Delete 736 bytes at 0x3ca940.
Closing.
Closed.
Everything ok, no memory leak.
 
. The following output of my program shows a situation where
»new _Impl« does throw, and there is a leak:
 
Allocate facet with new:
New 736 bytes at 0x3ca940.
Create new locale with facet:
Throwing:
*** Caught bad_alloc. ***
*** Oh no! memory leak detected! ***
 
You can see my source code below. BTW: Note that an old version
of clang emitted a seemingly spurious warning for my code:
 
main.cpp:112:3: warning: This statement is never executed [clang-analyzer-alpha.deadcode.UnreachableCode]
{ try{ main1(); }
^
 
. Source code (might not be portable and only work with a
recent [as of 2018-12] version of gcc):
 
#include <iostream>
#include <locale>
#include <sstream>
#include <new>
#include <cstdlib> /* malloc, free */
#include <cstddef> /* size_t */
 
static void * traced_memory = nullptr; /* the address of the facet allocated */
 
/* DO NOT change "trace_next_allocation" here, but only in main1, directly before the allocation! */
static auto trace_next_allocation { 0 };
 
/* DO NOT change "simulate_bad_alloc" here, but only in main1, directly before the allocation! */
static auto simulate_bad_alloc { 0 };
 
/* replace standard operator "new" by the following customized version */
void * operator new( size_t const size )
{ if( simulate_bad_alloc )
{ simulate_bad_alloc = 0;
::std::cout << "Throwing:\n";
throw ::std::bad_alloc {}; }
auto const mem = malloc(size);
if( trace_next_allocation )
{ ::std::cout << "New " << size << " bytes at " << mem << ".\n";
traced_memory = mem;
trace_next_allocation = 0; }
return mem; }
 
/* replace standard operator "delete" by the following customized version */
void operator delete( void * const ptr )noexcept
{ if( ptr == traced_memory )::std::cout << "Delete bytes at " << ptr << ".\n";
free( ptr );
if( ptr == traced_memory )traced_memory = nullptr; /* ok, it did NOT leak */ }
 
/* replace standard operator "delete (sized)" by the following customized version */
void operator delete( void * const ptr, std::size_t const size )noexcept
{ if( ptr == traced_memory )::std::cout << "Delete " << size << " bytes at " << ptr << ".\n";
free( ptr );
if( ptr == traced_memory )traced_memory = nullptr; /* ok, it did NOT leak */ }
 
static void check()
{ ::std::cout <<
( traced_memory?
"*** Oh no! memory leak detected! ***\n":
"Everything ok, no memory leak.\n" ); }
 
struct csv_whitespace : std::ctype<wchar_t>{};
 
static void main1()
{ { ::std::istringstream const istringstream{ "" };
::std::cout << "Allocate facet with new:" << '\n';
trace_next_allocation = 1;
auto * const csv_whitespace_facet = new csv_whitespace; /* is this allocation leaked? */
 
std::cout << "Create new locale with facet:" << '\n';
simulate_bad_alloc = 1;
std::locale( istringstream.getloc(), csv_whitespace_facet ); /* object intentionally destroyed immediately after creation */
 
std::cout << "Closing.\n"; }
std::cout << "Closed.\n"; }
 
int main()
{ try{ main1(); }
catch( ::std::bad_alloc& ba ){ ::std::cout << "*** Caught bad_alloc. ***\n"; }
check(); }
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Dec 18 10:20PM

On 18 Dec 2018 18:58:14 GMT
 
> Now, Udo said that the locale cannot possibly delete the
> facet when the »new« quoted above throws, so there'd be
> a leak.
 
There may be a bug but I do not think it lies in the code you have
posted.
 
If new throws std::bad_alloc then there is no leak with respect to that
particular allocation because there has been none, and surely neither
locale's constructor nor _Impl's constructor will run? If the _Impl
constructor above throws then the new expression is required to clean
up any memory the operator new() allocated for the _Impl object in
question.
 
Furthermore if the _Impl constructor throws then the locale object's
constructor will execute no further, so it cannot increment __f's
reference count. As the __f object is not passed to _Impl's
constructor I don't immediately see where the problem arises there
either (possibly the object accessed through the __other reference does
something amiss with other facets, who knows). I think you are going
to have to dig deeper into where the leak you have found arises (I have
not examined whether your leak detector works correctly and whether the
bug lies there instead).
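 
A small self-contained illustration of that rule, with a logging operator
new/delete and a constructor that always throws (the names are only for
the demo; build as C++14 or later):
 
#include <cstdio>
#include <cstdlib>
#include <new>
#include <stdexcept>
 
void* operator new(std::size_t n)
{
    void* p = std::malloc(n);
    if (!p) throw std::bad_alloc{};
    std::printf("operator new: %zu bytes at %p\n", n, p);
    return p;
}
 
void operator delete(void* p) noexcept
{
    std::printf("operator delete: %p\n", p);
    std::free(p);
}
 
void operator delete(void* p, std::size_t) noexcept
{
    std::printf("operator delete (sized): %p\n", p);
    std::free(p);
}
 
struct Throws
{
    Throws() { throw std::runtime_error("constructor failed"); }
};
 
int main()
{
    // The new-expression releases the storage before the exception propagates,
    // so operator delete is logged even though construction never completed.
    try { new Throws; }
    catch (const std::exception& e) { std::printf("caught: %s\n", e.what()); }
}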
