Friday, October 5, 2018

Digest for comp.lang.c++@googlegroups.com - 21 updates in 5 topics

Horizon68 <horizon@horizon.com>: Oct 04 02:50PM -0700

Hello..
 
Read this:
 
 
I think human consciousness comes from quantum physics world..
 
Read this very interesting webpage to understand more:
 
The strange link between the human mind and quantum physics
 
http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics
 
 
Thank you,
Amine Moulay Ramdane.
Paavo Helde <myfirstname@osa.pri.ee>: Oct 05 09:51AM +0300

On 5.10.2018 0:50, Horizon68 wrote:
 
> Read this very interesting webpage to understand more:
 
> The strange link between the human mind and quantum physics
 
> http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics
 
Of course there is a link between the mind and quantum physics: the
brain consists of atoms, which are best described by quantum physics at
a low level. The same holds for a piece of rock, for example.
 
+1 for the article mentioning decoherence and explaining that it makes
quantum effects at room temperature vanish about 1E16 times faster than
it takes a neuron to fire a single signal.
 
The article also contains repeated remarks in the form "But there is no
evidence that such a thing is remotely feasible."
 
So I think you are mistaken about the article: it is not about any
mystical link, it is about lunatics seeking such links. Basically all
their mumbo-jumbo comes down to "I do not understand this, therefore
quantum!". It remains unclear why not "therefore spaghetti monster!" or
"therefore 7 dwarfs!".
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Oct 05 11:02AM +0200

On 05.10.2018 08:51, Paavo Helde wrote:
> evidence that such a thing is remotely feasible."
 
> So I think you are mistaken about the article, it is not about any
> mystical link, it is about lunatics seeking such links.
 
I agree about the lunatics. However, do note that they include Roger
Penrose, chair of the math department (whatever) at Oxford University.
He collaborated with Stephen Hawking on the results Stephen is most
famous for, about black holes.
 
I'd go even further and call the idea of black holes as places with
unusual time and space axis directions sheer lunacy: it fails to
recognize that a breakdown in the math means the math or the model
is wrong, not that reality is. Which implies... that good old Stephen
Hawking was misled into insanity by Roger Penrose?
 
Hm. Anyway, I once wrote a little ironical piece about Roger Penrose's
insane views on AI. <url:
https://alfps.wordpress.com/2010/06/03/an-ironclad-proof-that-you-are-smarter-than-roger-penrose/>
:)
 
 
 
 
> their mumbo-jumbo comes down to "I do not understand this, therefore
> quantum!". It remains unclear why not "therefore spaghetti monster!" or
> "therefore 7 dwarfs!".
 
Yes.
 
 
Cheers!,
 
- Alf (offtopic mode, before first coffee!)
James Kuyper <jameskuyper@alumni.caltech.edu>: Oct 05 08:21AM -0400

On 10/05/2018 05:02 AM, Alf P. Steinbach wrote:
...
> understand that a breakdown in math results means the math or the model
> is wrong, not reality is. Which implies... That good old Stephen Hawking
> was misled into insanity by Roger Penrose?
 
The singularity at the center of a black hole is unambiguously a place
where gravitational fields get so strong that our understanding of
physics breaks down, because we've never been able to conduct
experiments in fields that strong. That's acknowledged by most of the
authorities in the field.
However, that time and space axes undergo distortion due to strong
gravitational fields is a fundamental aspect of the General theory of
Relativity (GR). Physicists have gone out of their way to locate
evidence that distinguishes GR from any proposed alternative, and the
evidence they've collected fits GR better than any meaningfully
different theory (at least one of the alternatives that has been
proposed turned out to be mathematically equivalent to GR, just
described in a different way). If this be lunacy, then I see nothing
wrong with being this kind of a lunatic.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Oct 05 09:23PM +0200

On 05.10.2018 14:21, James Kuyper wrote:
> proposed turned out to be mathematically equivalent to GR, just
> described in a different way). If this be lunacy, then I see nothing
> wrong with being this kind of a lunatic.
 
Roger Penrose is a lunatic for his belief that strong AI is impossible
without some quantum-level help (a belief he's written two or maybe
three books about), and for his ability to simply not see any
inconsistency in his attempted mathematical proofs of it. They're
trivial errors, not recognized or /seen/ by the chair of the Oxford
math department. But then he is in good company of otherwise brilliant
mathematicians and thinkers, such as Kurt Gödel, who tried to
mathematically prove the existence of the Christian religion's god in
1941¹. Not to mention Descartes. However, according to Pascal,
Descartes was only religious on three occasions, namely when he tried
to gain social favor by proving the existence of that same god, less
rigorously than Gödel later on.
 
Now regarding willy-nilly /directions/ of the time axis within a black
hole: you know, the grandfather paradox. Bang. That's it: the theory is
provably not applicable to this case.
 
The insanity lies in believing that nature is that perverse. It isn't.
Our models of nature can be, because they're idealizations,
simplifications that necessarily gloss over some details lest they be
as complex as nature itself – and some of those details become very
significant in the extreme regime of a black hole.
 
 
Cheers!,
 
- Alf (still in off-topic mode, or mood :-) )
 
 
Notes:
¹ <url: http://en.wikipedia.org/wiki/Gödel's_ontological_proof>
Melzzzzz <Melzzzzz@zzzzz.com>: Oct 05 07:36PM

> books about, and his ability to simply not see any inconsistency in his
> attempted mathematical proofs of that. They're trivial errors, not
> recognized or /seen/ by the chair of the Oxford math department.
 
As they are trivial, care to show them?
Before Penrose, I heard that strong AI is impossible because there is no
algorithm to make algorithms... from my mathematical logic professor.
That was in 1987, before Penrose wrote his book...
 
--
press any key to continue or any other to quit...
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Oct 05 09:54PM +0200

On 05.10.2018 21:36, Melzzzzz wrote:
>> attempted mathematical proofs of that. They're trivial errors, not
>> recognized or /seen/ by the chair of the Oxford math department.
 
> As they are trivial care to show them?
 
I already linked, up-thread, to something I blogged in 2010¹. That was
just one of his attempted proofs. I don't have his books here, and I'm
not sure I still have the first book -- as I recall someone borrowed it
and didn't return it. But I recall the errors were (1) myriad, and (2)
mostly trivial. It's amazing what a religious-like belief can do.
 
 
> Before Penrose, I heard that strong AI is impossible because there is no
> algorithm to make algorithms... from my mathematical logic professor.
> That was in 1987, before Penrose wrote his book...
 
Yes, he was not the first.
 
There were a bunch of philosophers very attracted by the notion of
adding mathematical rigor to their arguments. But they mainly used math
as an impressive-looking /notation/ to express their handwaving
arguments in. Penrose differed by actually trying to do things
mathematically, but as with a purported proof of a perpetuum mobile you
don't need to even look at the details to know it's wrong, and the
problem (for his sanity) is that he didn't draw that conclusion.
 
 
Cheers!,
 
- Alf
 
¹
https://alfps.wordpress.com/2010/06/03/an-ironclad-proof-that-you-are-smarter-than-roger-penrose/
Melzzzzz <Melzzzzz@zzzzz.com>: Oct 05 07:59PM

> mathematically, but as with a proof of a perpetuum mobile machine you
> don't need to even look at the details to know it's wrong, and the
> problem (for his sanity) is that he didn't draw that conclusion.
 
The problem is that an algorithm is a finite set of steps. You can't
program creativity.
 
 
--
press any key to continue or any other to quit...
"Öö Tiib" <ootiib@hot.ee>: Oct 05 01:21PM -0700

On Friday, 5 October 2018 22:59:41 UTC+3, Melzzzzz wrote:
> > problem (for his sanity) is that he didn't draw that conclusion.
 
> Problem is that algorithm is finite set of steps. You can't program
> creativity.
 
That is a trivial fallacy. One of the oldest programming languages is
Lisp, which makes little distinction between code and data. Now look
around us ... the world is full of data. For all practical purposes it
is infinite. Train that data into a program and perhaps it acquires
creativity in the process. ;)
https://tmrwedition.com/2017/02/06/artificial-intelligence-is-already-a-better-artist-than-you-are/
Melzzzzz <Melzzzzz@zzzzz.com>: Oct 05 08:31PM

> Train that data into program and it perhaps acquires creativity in the
> process. ;)
> https://tmrwedition.com/2017/02/06/artificial-intelligence-is-already-a-better-artist-than-you-are/
It is not trivial. My professor started with the question of whether
there is an algorithm for proving valid logic formulae. The result is
that there isn't one. Then he derived that an algorithm for making
algorithms does not exist...
Your example is a trivial fallacy, since it talks about something else,
unrelated to the subject.
I hope we will be alive in 20 years and can talk about this then.
 
 
--
press any key to continue or any other to quit...
"Öö Tiib" <ootiib@hot.ee>: Oct 05 03:12PM -0700

On Friday, 5 October 2018 23:31:35 UTC+3, Melzzzzz wrote:
> It is not trivial. My professor started with the question of whether
> there is an algorithm for proving valid logic formulae. The result is
> that there isn't one.
 
I meant there is data. Explanations that explain data and fit with
other data are theories. Rarely is there just one theory; more often
there are several that contradict each other, and sometimes there is
data that is not explained by any theory.
 
From contradicting theories, pick the one that seems simplest.
Decisions can be made in that domain of knowledge by applying that
theory (and heuristics derived from it).
 
When a theory is missing, accept ignorance. When a decision is needed
in that domain of knowledge, throw dice or apply a theory from some
other domain of knowledge.
 
Accept that most theories can never be proven but may be falsified by
observations made in the future, so carve nothing into stone besides
abstract mathematics.
 
> Then he derived that algorithm for making algorithms does not
> exists...
 
Why? Every theory can be implemented as an algorithm, and throwing
dice can be implemented as an algorithm too.
 
> Your example is trivial fallacy, since it talks about something else,
> unrelated to the subject.
 
I do not understand. You said there is no creativity, so I gave an example.
 
> I hope we will alive in 20 years and talk about this then.
 
Oh, there will likely be opportunities to talk about this every year,
several times over.
 
What is thinking? I believe that thinking is the capability to form,
build, fit, and apply theories; to propose puzzles within a domain of
knowledge that are tricky to solve; and to find heuristics that
simplify solving them.
What is intuition? It is the capability to apply a theory (or its
heuristics) to some neighboring but unfamiliar domain of knowledge.
What is creativity? It is to do the same with a distant or unrelated
domain of knowledge, without direct need.
ram@zedat.fu-berlin.de (Stefan Ram): Oct 05 07:04PM

>+1
 
If it's important that a type has a two's complement
representation, can't this be expressed by a concept?
 
(The following example assumes a C++ implementation where
»int« and »long« have a two's complement representation.)
 
#include <type_traits>
 
/* The following concept has to be provided/adjusted for each
implementation when it's using a list with »is_same«.
Maybe a requires clause can be used to figure out the
representation at compile time. */
template< typename T >concept UsesTwosComplementRepresentation =
::std::is_same< T, int >::value || ::std::is_same< T, long >::value;
 
void example( UsesTwosComplementRepresentation auto x ){}
ram@zedat.fu-berlin.de (Stefan Ram): Oct 05 08:40PM

>physics breaks down, because we've never been able to conduct
>experiments in fields that strong. That's acknowledged by most of the
>authorities in the field.
 
It's true for linguistic reasons alone, because "a place
where our understanding of physics breaks down" is the
/meaning/ of the word "singularity".
 
>However, that time and space axes undergo distortion due to
>strong gravitational fields is a fundamental aspect of the
>General theory of Relativity (GR).
 
GR actually explains that the gravitational field /is/
a distortion of spacetime. The cause is the energy-
momentum tensor (EMT). It does not matter how large the
EMT is, as long as it's not zero everywhere (and in fact
it is not zero everywhere).

>distinguishes GR from any proposed alternative, and the
>evidence they've collected fits GR better than any
>meaningfully different theory
 
Correct!
 
WRT the subject: the eye today is seen as a protrusion
(i.e., a part) of the brain ("The mammalian eye is formed
from a collapsed ventricle of the brain." - CompVisNotes.pdf).
 
A photon can cause a cis-trans isomerization of the
11-cis-retinal chromophore in the G-protein-coupled receptor
rhodopsin in this brain protrusion. This isomerization is a
process which one might deem to be describable only with
quantum mechanics.
ram@zedat.fu-berlin.de (Stefan Ram): Oct 05 09:36PM

> representation at compile. */
>template< typename T >concept UsesTwosComplementRepresentation =
>::std::is_same< T, int >::value || ::std::is_same< T, long >::value;
 
Maybe something like the following?
 
template< typename T >
concept bool UsesTwosComplementsRepresentation =
requires{ new int[ -( T( -5 )!=(( ~T( 5 ) )+ 1 ) )]; };
ram@zedat.fu-berlin.de (Stefan Ram): Oct 05 09:40PM

Supersedes: <requires-20181005223530@ram.dialup.fu-berlin.de>
[correction of concept definition]
 
> representation at compile. */
>template< typename T >concept UsesTwosComplementRepresentation =
>::std::is_same< T, int >::value || ::std::is_same< T, long >::value;
 
Maybe something like the following?
 
template< typename T >
concept UsesTwosComplementRepresentation =
requires{ new int[ -( T( -5 )!=(( ~T( 5 ) )+ 1 ) )]; };
Horizon68 <horizon@horizon.com>: Oct 05 01:26PM -0700

Hello,
 
 
Read the following interesting webpage:
 
Memory Models: x86 is TSO, TSO is Good
 
Essentially, the conclusion is that x86 in practice implements the old
SPARC TSO memory model.
 
The big take-away from the talk for me is that it confirms the
observation, made many times before, that SPARC TSO seems to be the
optimal memory model. It is sufficiently understandable that
programmers can write correct code without having barriers everywhere.
It is sufficiently weak that you can build fast hardware
implementations that scale to big machines.
 
 
Read more here:
 
https://jakob.engbloms.se/archives/1435
 
 
Thank you,
Amine Moulay Ramdane.
Tim Rentsch <txr@alumni.caltech.edu>: Oct 05 08:29AM -0700


> Standard does describe situations. Those are situations that cause
> malloc to return null pointer or errno to be set ENOMEM or ENOSPC
> and the like. What else these are?
 
That isn't what I meant by "situation", but let's take a look at
it. The C++ standard says this (in n4659 23.10.11 p2) about the
<cstdlib> allocation functions (which includes malloc):
 
Effects: These functions have the semantics specified in the
C standard library.
 
The C standard, in n1570 7.22.3 p2, says this about the behavior
of memory management functions (which includes malloc) [in part;
the full paragraph is much longer, but this sentence is the only
relevant one]:
 
If the space cannot be allocated, a null pointer is returned.
 
Two points are important here. First, the condition "space
cannot be allocated" is stated in the text of the C standard,
which means it is something known to the abstract machine. In
fact we don't know what it means in terms of running on actual
hardware. The condition "being out of stack space" is exactly
the opposite of that: it is defined only in terms of what's
going on in the actual hardware, and not something that is known
to the abstract machine. Isn't it true that C++ programs, like C
programs, are defined in terms of behavior in the abstract
machine?
 
Second, the dependence (of what malloc(), etc., do) on this
condition is explicit in the text of the standard. Unless there
is some explicit statement to the contrary, the stated semantics
are meant to apply without exception. For example, 7.22.2.1 p2
gives these semantics for rand():
 
The rand function computes a sequence of pseudo-random
integers in the range 0 to RAND_MAX.
 
This statement doesn't mean rand() gives a pseudo-random value
unless one "cannot be generated"; it means rand() always gives a
pseudo-random value, or more precisely that the definition of
rand() specifies that it always gives a pseudo-random value.
Or here is another example. Consider the code fragment:
 
unsigned a = 10, b = -3, c = a+b;
 
The definition of the + operator specifies the result of
performing the addition, and that definition is unconditional.
If it happens that the chip running the executable has a hot spot
that flips a bit in one of its registers, that doesn't make the
expression 'a+b' have undefined behavior. The behavior is
defined regardless of whether running the program is carried out
correctly.
 
There is no explicit statement in the C standard, or AFAIAA the
C++ standard, giving a dependence (in the stated semantics) on
some condition like "running out of stack". Hence whether that
condition is true cannot change whether a program has defined
behavior or undefined behavior.
 
>> doesn't change that.
 
> Anything that is not defined by any passage of standard *is*
> undefined behavior in C++. That may be different in C.
 
The same is true in C, with the understanding that any _behavior_
that is not defined is undefined behavior, which is also the
case for C++. If there is any explicit definition of behavior,
then the behavior is not undefined.
 
> Is it in C that because standard does not state that automatic
> storage has limits and what happens when the limits are exhausted
> then the automatic storage is unlimited in C?
 
AFAICT neither standard has any statement about automatic storage
having limits, let alone a statement about what happens when such
limits are exhausted.
 
The C++ standard gives this general statement, in 4.1 p2.1:
 
If a program contains no violations of the rules in this
International Standard, a conforming implementation shall,
within its resource limits, accept and correctly execute
that program.
 
There is no requirement that an implementation correctly execute
a program that exceeds any resource limit, including the resource
of automatic storage. But that doesn't change whether the
semantics of said program are defined: if the C/C++ standards
define the semantics, they are still defined whether the program
can be correctly executed or not.
 
> some local variable breaks automatic storage limits then what
> actually happens is undefined behavior exactly because standard
> did not define it.
 
That isn't right. In both standards, the rule is that when there
is a definition for the semantics of a particular construct, that
definition applies unconditionally unless there is an explicit
provision to the contrary. It is only when a construct has no
definition that the behavior is undefined.
 
To put this in concrete terms, consider the following program:
 
#include <stdio.h>
 
unsigned ribbet( unsigned );
 
int
main( int argc, char *argv[] ){
unsigned u = ribbet( 0U - argc );
printf( "%u\n", u );
return 0;
}
 
unsigned
ribbet( unsigned u ){
if( u == 0 ) return 1234567;
return 997 + ribbet( u + 1000003 );
}
 
Going through it a piece at a time:
 
In main():
 
The expression '0U - argc' has explicitly defined behavior,
and the definition is unconditional.
 
The function call 'ribbet( 0U - argc )' has explicitly
defined behavior, and the definition is unconditional.
 
The initializing declaration 'unsigned u = ...' has
explicitly defined behavior, and the definition is
unconditional.
 
The statement calling printf() has explicitly defined
behavior, and the definition is unconditional (assuming
a hosted implementation in C, which IIUC C++ requires).
 
The 'return 0;' has explicitly defined behavior, and the
definition is unconditional.
 
In ribbet():
 
The expression 'u == 0' has explicitly defined behavior, and
the definition is unconditional.
 
The controlled statement 'return 1234567;' has explicitly
defined behavior, and the definition is unconditional.
 
The if() statement has explicitly defined behavior, and the
definition is unconditional.
 
The expression 'u + 1000003' has explicitly defined behavior,
and the definition is unconditional.
 
The recursive call to ribbet() has explicitly defined
behavior, and the definition is unconditional.
 
The expression '997 + ribbet( ... )' has explicitly defined
behavior, and the definition is unconditional.
 
The final return statement has explicitly defined behavior,
and the definition is unconditional.
 
Every piece of the program has its behavior explicitly defined,
and in every case there are no exceptions given for the stated
semantics. There is therefore no undefined behavior. If a
particular implementation runs out of some resource trying to
execute the program, it may not execute correctly, but that
doesn't change the definition of the program's semantics (which
is to say, its behavior). The key point is that the behavior is
_defined_: we know what the program is supposed to do, even if
running the program doesn't actually do that. Pulling the plug,
having the processor catch on fire, the OS killing the process
because it ran out of swap space, or running out of automatic
storage, all can affect what program execution actually does;
but none of those things changes what the standard says the
program is meant to do. Undefined behavior means the standard
doesn't say anything about what is meant to happen, and that
is simply not the case here.
Tim Rentsch <txr@alumni.caltech.edu>: Oct 05 08:45AM -0700

>> defined, then where is it defined?"
 
> For stack overflow, the behavior might be defined by the
> implementation.
 
In fact the behavior is already defined by the Standard(s).
Please see my longer reply just recently posted.
 
> documentation.
 
> Note that if the behavior were defined by the C++ standard, MSVC++
> could not implement their own behavior (at least not legally).
 
I think you're assuming that behavior being defined implies
correct execution. That isn't the case. This distinction
is touched on in my other posting. Please let me know if
you would like some clarification.
"Öö Tiib" <ootiib@hot.ee>: Oct 05 09:26AM -0700

On Friday, 5 October 2018 18:29:37 UTC+3, Tim Rentsch wrote:
> semantics of said program are defined: if the C/C++ standards
> define the semantics, they are still defined whether the program
> can be correctly executed or not.
 
Your position appears to be that the standard imposes no requirements
on a program that exceeds some (stated or otherwise) limit, yet the
behavior is still defined. My position is that it is undefined in that
situation. Perhaps we have to agree to disagree about it?
 
Ralf Goertz <me@myprovider.invalid>: Oct 05 08:56AM +0200

Am Thu, 4 Oct 2018 17:15:29 +0100
 
> I don't seem to feel the angst others do with how the standard streams
> print the value of the fixed or minimum sized integer types provided
> by stdint.h.
 
[OT] As a German it is always a bit strange to see or hear the word
angst in a conversation in English. According to dict.leo.org it
literally translates to its counterpart Angst with the modifiers
"Lebensangst" (which describes a mood of hopelessness and fear of the
future, and which is rarely used AFAICT) or panisch (panic). I have a
hard time recognising either of those meanings as appropriate here.
This is probably another case of a difference between dictionary
definition and usage. One other example being "idiosyncratic", for
which the primary dictionary meaning, apart from the medical term,
seems to be something like "unbearably disgusting", which rarely fits
in the contexts where I read or hear this word.
 
> somewhat extreme to break congruity with C and make all the fixed and
> minimum width types in stdint.h their own distinct types just to
> please operator << and >>.
 
But you wouldn't need to break congruity, would you? I don't know that
much C, but it doesn't have the *stream* operators << and >>. So it
would have been possible to make [u]int8_t a separate type which
behaves like [un]signed char in all aspects shared between C and C++,
and still define those operators to behave the way you would expect
from integer types.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Oct 05 12:39PM +0100

On Fri, 5 Oct 2018 08:56:54 +0200
> Am Thu, 4 Oct 2018 17:15:29 +0100
> schrieb Chris Vine <chris@cvine--nospam--.freeserve.co.uk>:
[snip]
> have been possible to make [u]int8_t a separate type which behaves like
> [un]signed char in all aspects shared between C and C++ and still define
> those operators to behave the way you would expect from integer types.
 
I do not think that it is possible to "make [u]int8_t a separate type
which behaves like [un]signed char in all aspects shared between C and
C++ and still define [operators << and >>] to behave the way you would
expect from integer types". To provide overloading on operators <<
and >>, uint8_t would need to be a distinct type, rather as
wchar_t is (and presumably you would apply this to each fixed or
minimum width type). But if you did that, then uint8_t and the other
fixed and minimum width integers _would_ behave differently between C
and C++. In particular, code which does not break the strict aliasing
rule in C might break it in C++.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
