Thursday, February 11, 2021

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 11 02:13AM -0800

On 2/11/2021 12:50 AM, Melzzzzz wrote:
> I hadn't got a single warning about this, and puzzlingly
> there is another Data in another cpp file which didn't
> cause a problem. Sheesh, 5 hours of debugging...
 
No warning at all?
 
struct xxx { };
struct xxx { };
 
should barf up something in a compiler.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 11 02:14AM -0800

On 2/11/2021 12:50 AM, Melzzzzz wrote:
> I hadn't got a single warning about this, and puzzlingly
> there is another Data in another cpp file which didn't
> cause a problem. Sheesh, 5 hours of debugging...
 
Humm.... Have not tried it using a precompiled header...
Sam <sam@email-scan.com>: Feb 11 07:36AM -0500

Melzzzzz writes:
 
> I hadn't got a single warning about this, and puzzlingly
> there is another Data in another cpp file which didn't
> cause a problem. Sheesh, 5 hours of debugging...
 
A one definition rule violation is undefined behavior, no diagnostic
required. If a compiler/linker manages to detect it, consider it just as an
extra bonus. However the standard does not require a diagnostic.
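 
For concreteness, a minimal sketch of that kind of silent ODR violation (hypothetical file and member names, not the code from the thread): two translation units each define their own struct Data, everything compiles and typically links cleanly, and the behaviour is undefined with no diagnostic required.
 
// a.cpp -- one translation unit with "my" Data (hypothetical example)
struct Data {                 // definition #1
    int id;
    double value;
};
int data_size_a() { return sizeof(Data); }
 
// b.cpp -- another translation unit (e.g. behind a library header)
struct Data {                 // definition #2: not identical, so this
    char name[64];            // violates the one definition rule
};
int data_size_b() { return sizeof(Data); }
 
// Both files compile, and the program usually links without complaint,
// yet the two definitions of Data differ, so the behaviour is undefined:
// exactly the "no diagnostic required" case described above.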
Melzzzzz <Melzzzzz@zzzzz.com>: Feb 11 01:32PM


> struct xxx { };
> struct xxx { };
 
> should barf up something in a compiler.
 
Not if it is just
struct xxxx;
The fucking MS DNS API is in stdafx,
and my Data is in one obj file while MS's Data is in another DLL.
 
 
--
current job title: senior software engineer
skills: x86 assembler,c++,c,rust,go,nim,haskell...
 
press any key to continue or any other to quit...
Melzzzzz <Melzzzzz@zzzzz.com>: Feb 11 01:34PM


> A one definition rule violation is undefined behavior, no diagnostic
> required. If a compiler/linker manages to detect it, consider it just as an
> extra bonus. However the standard does not require a diagnostic.
 
Fucking morons. They forward declare struct Data; in a header which
every GUI file includes, that is, the precompiled header.
 
Bunch of amateurs.
 
--
current job title: senior software engineer
skills: x86 assembler,c++,c,rust,go,nim,haskell...
 
press any key to continue or any other to quit...
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 11 01:45PM -0800

On 2/11/2021 5:32 AM, Melzzzzz wrote:
> struct xxxx;
> The fucking MS DNS API is in stdafx,
> and my Data is in one obj file while MS's Data is in another DLL.
 
Wow. Imvho, the compiler should feel like puking its guts out. Yet it
hides this from you. Just, wow.
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 11 12:29PM

Hi!
 
I am starting work on creating a new Python implementation from scratch using "neos" my universal compiler that can compile any programming language. I envision this implementation to be significantly faster than the currently extant Python implementations (which isn't a stretch given how poorly they perform).
 
Sample neos session (parsing a fibonacci program, neoscript rather than Python in this case):
 
neos 1.0.0.0 ED-209
] help
h(elp)
s(chema) <path to language schema>    Load language schema
l(oad) <path to program>              Load program
list                                  List program
c(ompile)                             Compile program
r(un)                                 Run program
![<expression>]                       Evaluate expression (enter interactive mode if expression omitted)
:<input>                              Input (as stdin)
q(uit)                                Quit neos
lc                                    List loaded concept libraries
t(race) <0|1|2|3|4|5> [<filter>]      Compiler trace
m(etrics)                             Display metrics for running programs
] lc
[neos.core] (file:///D:\code\neos\build\win32\vs2019\x64\Release\core.ncl)
[neos.boolean]
[neos.language]
[neos.logic]
[neos.math]
[neos.module]
[neos.object]
[neos.string]
[neos.math.universal] (file:///D:\code\neos\build\win32\vs2019\x64\Release\core.math.universal.ncl)
] s neoscript
Loading schema 'neoscript'...
Language: Default neoGFX scripting language
Version: 1.0.0
Copyright (C) 2019 Leigh Johnston
neoscript] l examples/neoscript/fibonacci.neo
neoscript] list
File 'examples/neoscript/fibonacci.neo':
-- neoscript example: Fibonacci
 
using neos.string;
using neos.stream;
 
import fn to_string(x : i32) -> string;
import fn to_integer(s : string) -> i32;
import proc input(s : out string);
import proc print(s : in string);
 
-- functions are pure
def fn add(x, y : i32) -> i32
{
    return x + y;
}
def fn fib(x : i32) -> i32
{
    if (x < 2)
        return 1;
    else
        return add(fib(x-1), fib(x-2));
}
 
-- procedures are impure
def proc main()
    s : string;
{
    print("Enter a positive "
          "integer: ");
    input(s);
    print("Fibonacci(" + s + ") = " + to_string(fib(to_integer(s))) + "\n");
}
neoscript] t 1
neoscript] c
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(g)
folding: string.utf8(g) <- string.utf8.character.alpha()
folded: string.utf8(g) <- string.utf8.character.alpha() = string.utf8(gn)
folding: string.utf8(gn) <- string.utf8.character.alpha()
folded: string.utf8(gn) <- string.utf8.character.alpha() = string.utf8(gni)
folding: string.utf8(gni) <- string.utf8.character.alpha()
folded: string.utf8(gni) <- string.utf8.character.alpha() = string.utf8(gnir)
folding: string.utf8(gnir) <- string.utf8.character.alpha()
folded: string.utf8(gnir) <- string.utf8.character.alpha() = string.utf8(gnirt)
folding: string.utf8(gnirt) <- string.utf8.character.alpha()
folded: string.utf8(gnirt) <- string.utf8.character.alpha() = string.utf8(gnirts)
folding: string.utf8(gnirts) <- string.utf8.character.period()
folded: string.utf8(gnirts) <- string.utf8.character.period() = string.utf8(gnirts.)
folding: string.utf8(gnirts.) <- string.utf8.character.alpha()
folded: string.utf8(gnirts.) <- string.utf8.character.alpha() = string.utf8(gnirts.s)
folding: string.utf8(gnirts.s) <- string.utf8.character.alpha()
folded: string.utf8(gnirts.s) <- string.utf8.character.alpha() = string.utf8(gnirts.so)
folding: string.utf8(gnirts.so) <- string.utf8.character.alpha()
folded: string.utf8(gnirts.so) <- string.utf8.character.alpha() = string.utf8(gnirts.soe)
folding: string.utf8(gnirts.soe) <- string.utf8.character.alpha()
folded: string.utf8(gnirts.soe) <- string.utf8.character.alpha() = string.utf8(gnirts.soen)
folding: source.package.name() <- string.utf8(gnirts.soen)
folded: source.package.name() <- string.utf8(gnirts.soen) = source.package.name(neos.string)
folding: source.package.import() <- source.package.name(neos.string)
folded: source.package.import() <- source.package.name(neos.string) = source.package.import(neos.string)
folding: source.package.import(neos.string) <- source.package.import(neos.string)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(g)
folding: string.utf8(g) <- string.utf8.character.alpha()
folded: string.utf8(g) <- string.utf8.character.alpha() = string.utf8(gn)
folding: string.utf8(gn) <- string.utf8.character.alpha()
folded: string.utf8(gn) <- string.utf8.character.alpha() = string.utf8(gni)
folding: string.utf8(gni) <- string.utf8.character.alpha()
folded: string.utf8(gni) <- string.utf8.character.alpha() = string.utf8(gnir)
folding: string.utf8(gnir) <- string.utf8.character.alpha()
folded: string.utf8(gnir) <- string.utf8.character.alpha() = string.utf8(gnirt)
folding: string.utf8(gnirt) <- string.utf8.character.alpha()
folded: string.utf8(gnirt) <- string.utf8.character.alpha() = string.utf8(gnirts)
folding: string.utf8(gnirts) <- string.utf8.character.underscore()
folded: string.utf8(gnirts) <- string.utf8.character.underscore() = string.utf8(gnirts_)
folding: string.utf8(gnirts_) <- string.utf8.character.alpha()
folded: string.utf8(gnirts_) <- string.utf8.character.alpha() = string.utf8(gnirts_o)
folding: string.utf8(gnirts_o) <- string.utf8.character.alpha()
folded: string.utf8(gnirts_o) <- string.utf8.character.alpha() = string.utf8(gnirts_ot)
folding: language.identifier() <- string.utf8(gnirts_ot)
folded: language.identifier() <- string.utf8(gnirts_ot) = language.identifier(to_string)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(x)
folding: language.identifier() <- string.utf8(x)
folded: language.identifier() <- string.utf8(x) = language.identifier(x)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(r)
folding: string.utf8(r) <- string.utf8.character.alpha()
folded: string.utf8(r) <- string.utf8.character.alpha() = string.utf8(re)
folding: string.utf8(re) <- string.utf8.character.alpha()
folded: string.utf8(re) <- string.utf8.character.alpha() = string.utf8(reg)
folding: string.utf8(reg) <- string.utf8.character.alpha()
folded: string.utf8(reg) <- string.utf8.character.alpha() = string.utf8(rege)
folding: string.utf8(rege) <- string.utf8.character.alpha()
folded: string.utf8(rege) <- string.utf8.character.alpha() = string.utf8(reget)
folding: string.utf8(reget) <- string.utf8.character.alpha()
folded: string.utf8(reget) <- string.utf8.character.alpha() = string.utf8(regetn)
folding: string.utf8(regetn) <- string.utf8.character.alpha()
folded: string.utf8(regetn) <- string.utf8.character.alpha() = string.utf8(regetni)
folding: string.utf8(regetni) <- string.utf8.character.underscore()
folded: string.utf8(regetni) <- string.utf8.character.underscore() = string.utf8(regetni_)
folding: string.utf8(regetni_) <- string.utf8.character.alpha()
folded: string.utf8(regetni_) <- string.utf8.character.alpha() = string.utf8(regetni_o)
folding: string.utf8(regetni_o) <- string.utf8.character.alpha()
folded: string.utf8(regetni_o) <- string.utf8.character.alpha() = string.utf8(regetni_ot)
folding: language.identifier() <- string.utf8(regetni_ot)
folded: language.identifier() <- string.utf8(regetni_ot) = language.identifier(to_integer)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(s)
folding: language.identifier() <- string.utf8(s)
folded: language.identifier() <- string.utf8(s) = language.identifier(s)
folded: source.package.import(neos.string) <- source.package.import(neos.string) = ()
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(m)
folding: string.utf8(m) <- string.utf8.character.alpha()
folded: string.utf8(m) <- string.utf8.character.alpha() = string.utf8(ma)
folding: string.utf8(ma) <- string.utf8.character.alpha()
folded: string.utf8(ma) <- string.utf8.character.alpha() = string.utf8(mae)
folding: string.utf8(mae) <- string.utf8.character.alpha()
folded: string.utf8(mae) <- string.utf8.character.alpha() = string.utf8(maer)
folding: string.utf8(maer) <- string.utf8.character.alpha()
folded: string.utf8(maer) <- string.utf8.character.alpha() = string.utf8(maert)
folding: string.utf8(maert) <- string.utf8.character.alpha()
folded: string.utf8(maert) <- string.utf8.character.alpha() = string.utf8(maerts)
folding: string.utf8(maerts) <- string.utf8.character.period()
folded: string.utf8(maerts) <- string.utf8.character.period() = string.utf8(maerts.)
folding: string.utf8(maerts.) <- string.utf8.character.alpha()
folded: string.utf8(maerts.) <- string.utf8.character.alpha() = string.utf8(maerts.s)
folding: string.utf8(maerts.s) <- string.utf8.character.alpha()
folded: string.utf8(maerts.s) <- string.utf8.character.alpha() = string.utf8(maerts.so)
folding: string.utf8(maerts.so) <- string.utf8.character.alpha()
folded: string.utf8(maerts.so) <- string.utf8.character.alpha() = string.utf8(maerts.soe)
folding: string.utf8(maerts.soe) <- string.utf8.character.alpha()
folded: string.utf8(maerts.soe) <- string.utf8.character.alpha() = string.utf8(maerts.soen)
folding: source.package.name() <- string.utf8(maerts.soen)
folded: source.package.name() <- string.utf8(maerts.soen) = source.package.name(neos.stream)
folding: source.package.import() <- source.package.name(neos.stream)
folded: source.package.import() <- source.package.name(neos.stream) = source.package.import(neos.stream)
folding: source.package.import(neos.stream) <- source.package.import(neos.stream)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(t)
folding: string.utf8(t) <- string.utf8.character.alpha()
folded: string.utf8(t) <- string.utf8.character.alpha() = string.utf8(tu)
folding: string.utf8(tu) <- string.utf8.character.alpha()
folded: string.utf8(tu) <- string.utf8.character.alpha() = string.utf8(tup)
folding: string.utf8(tup) <- string.utf8.character.alpha()
folded: string.utf8(tup) <- string.utf8.character.alpha() = string.utf8(tupn)
folding: string.utf8(tupn) <- string.utf8.character.alpha()
folded: string.utf8(tupn) <- string.utf8.character.alpha() = string.utf8(tupni)
folding: language.identifier() <- string.utf8(tupni)
folded: language.identifier() <- string.utf8(tupni) = language.identifier(input)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(s)
folding: language.identifier() <- string.utf8(s)
folded: language.identifier() <- string.utf8(s) = language.identifier(s)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(t)
folding: string.utf8(t) <- string.utf8.character.alpha()
folded: string.utf8(t) <- string.utf8.character.alpha() = string.utf8(tn)
folding: string.utf8(tn) <- string.utf8.character.alpha()
folded: string.utf8(tn) <- string.utf8.character.alpha() = string.utf8(tni)
folding: string.utf8(tni) <- string.utf8.character.alpha()
folded: string.utf8(tni) <- string.utf8.character.alpha() = string.utf8(tnir)
folding: string.utf8(tnir) <- string.utf8.character.alpha()
folded: string.utf8(tnir) <- string.utf8.character.alpha() = string.utf8(tnirp)
folding: language.identifier() <- string.utf8(tnirp)
folded: language.identifier() <- string.utf8(tnirp) = language.identifier(print)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(s)
folding: language.identifier() <- string.utf8(s)
folded: language.identifier() <- string.utf8(s) = language.identifier(s)
folded: source.package.import(neos.stream) <- source.package.import(neos.stream) = ()
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(g)
folding: string.utf8(g) <- string.utf8.character.alpha()
folded: string.utf8(g) <- string.utf8.character.alpha() = string.utf8(gn)
folding: string.utf8(gn) <- string.utf8.character.alpha()
folded: string.utf8(gn) <- string.utf8.character.alpha() = string.utf8(gni)
folding: string.utf8(gni) <- string.utf8.character.alpha()
folded: string.utf8(gni) <- string.utf8.character.alpha() = string.utf8(gnir)
folding: string.utf8(gnir) <- string.utf8.character.alpha()
folded: string.utf8(gnir) <- string.utf8.character.alpha() = string.utf8(gnirt)
folding: string.utf8(gnirt) <- string.utf8.character.alpha()
folded: string.utf8(gnirt) <- string.utf8.character.alpha() = string.utf8(gnirts)
folding: string.utf8(gnirts) <- string.utf8.character.underscore()
folded: string.utf8(gnirts) <- string.utf8.character.underscore() = string.utf8(gnirts_)
folding: string.utf8(gnirts_) <- string.utf8.character.alpha()
folded: string.utf8(gnirts_) <- string.utf8.character.alpha() = string.utf8(gnirts_o)
folding: string.utf8(gnirts_o) <- string.utf8.character.alpha()
folded: string.utf8(gnirts_o) <- string.utf8.character.alpha() = string.utf8(gnirts_ot)
folding: language.identifier() <- string.utf8(gnirts_ot)
folded: language.identifier() <- string.utf8(gnirts_ot) = language.identifier(to_string)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(x)
folding: language.identifier() <- string.utf8(x)
folded: language.identifier() <- string.utf8(x) = language.identifier(x)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(r)
folding: string.utf8(r) <- string.utf8.character.alpha()
folded: string.utf8(r) <- string.utf8.character.alpha() = string.utf8(re)
folding: string.utf8(re) <- string.utf8.character.alpha()
folded: string.utf8(re) <- string.utf8.character.alpha() = string.utf8(reg)
folding: string.utf8(reg) <- string.utf8.character.alpha()
folded: string.utf8(reg) <- string.utf8.character.alpha() = string.utf8(rege)
folding: string.utf8(rege) <- string.utf8.character.alpha()
folded: string.utf8(rege) <- string.utf8.character.alpha() = string.utf8(reget)
folding: string.utf8(reget) <- string.utf8.character.alpha()
folded: string.utf8(reget) <- string.utf8.character.alpha() = string.utf8(regetn)
folding: string.utf8(regetn) <- string.utf8.character.alpha()
folded: string.utf8(regetn) <- string.utf8.character.alpha() = string.utf8(regetni)
folding: string.utf8(regetni) <- string.utf8.character.underscore()
folded: string.utf8(regetni) <- string.utf8.character.underscore() = string.utf8(regetni_)
folding: string.utf8(regetni_) <- string.utf8.character.alpha()
folded: string.utf8(regetni_) <- string.utf8.character.alpha() = string.utf8(regetni_o)
folding: string.utf8(regetni_o) <- string.utf8.character.alpha()
folded: string.utf8(regetni_o) <- string.utf8.character.alpha() = string.utf8(regetni_ot)
folding: language.identifier() <- string.utf8(regetni_ot)
folded: language.identifier() <- string.utf8(regetni_ot) = language.identifier(to_integer)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(s)
folding: language.identifier() <- string.utf8(s)
folded: language.identifier() <- string.utf8(s) = language.identifier(s)
folding: string.utf8() <- string.utf8.character.alpha()
folded: string.utf8() <- string.utf8.character.alpha() = string.utf8(t)
folding: string.utf8(t) <-
Paavo Helde <myfirstname@osa.pri.ee>: Feb 11 04:51PM +0200

11.02.2021 14:29 Mr Flibble wrote:
> language.  I envision this implementation to be significantly faster
> than the currently extant Python implementations (which isn't a stretch
> given how poorly they perform).
 
Could you please take care to have it properly multithreadable? A GIL
lock is not something worth replicating. A good principle would be to
have no global static state at all.
mickspud@potatofield.co.uk: Feb 11 03:46PM

On Thu, 11 Feb 2021 16:51:34 +0200
 
>> I am starting work on creating a new Python implementation from scratch
>> using "neos" my universal compiler that can compile any programming
>> language.  I envision this implementation to be significantly faster
 
Any? How's it going with declarative languages such as SQL or Prolog then?
 
 
>Could you please take care to have it properly multithreadable? A GIL
>lock is not something worth replicating. A good principle would be to
>have no global static state at all.
 
Or better yet, forget threads and go multiprocess, if you're developing
on a proper OS anyway; on Windows I imagine the pain isn't worth it.
David Brown <david.brown@hesbynett.no>: Feb 11 06:13PM +0100

On 11/02/2021 15:51, Paavo Helde wrote:
 
> Could you please take care to have it properly multithreadable? A GIL
> lock is not something worth replicating. A good principle would be to
> have no global static state at all.
 
Removing the GIL always sounds like a great idea, but remember it would
make a lot of Python code incompatible. Some people writing Python code
will assume that certain operations - assignment being a key example -
are atomic and safe to do in threaded code. The GIL can therefore make
it easier to write thread-safe code in Python, because you don't need
explicit locks for many uses of shared state.
 
You can certainly argue that the GIL was an unfortunate implementation
choice when Python was first created, and you can certainly argue that
relying on it in this way is questionable practice. But there is plenty
of Python code that /does/ rely on it that way.
 
Standard practice for efficient Python coding is to avoid doing too much
hard work in pure Python. Anything really time-consuming should be done
externally - such as OS calls, extension libraries for image processing,
databases, numpy, etc. These all release the GIL before doing their
work, thus giving you good multi-threading performance. It's only if
you are trying to do a lot of calculations or time-consuming work in
pure Python in multiple threads that the GIL is a problem. And if
that's the case, you can usually use the "multiprocessing" library instead of
the "threading" library and get your multi-processing that way.
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 11 05:37PM

>>> using "neos" my universal compiler that can compile any programming
>>> language.  I envision this implementation to be significantly faster
 
> Any? Hows it going with declarative languages such as SQL or Prolog then?
 
Yes, any. Why do you feel the need to ask about declarative languages specifically?
 
>> have no global static state at all.
 
> Or better yet forget threads and go multiprocess. If you're developing
> on a proper OS anyway, on Windows the pain isn't worth it I imagine.
 
You serious, bruv? Threads are essential.
 
/Flibble
 
--
😎
Paavo Helde <myfirstname@osa.pri.ee>: Feb 11 08:13PM +0200

11.02.2021 19:13 David Brown wrote:
>> have no global static state at all.
 
> Removing the GIL always sounds like a great idea, but remember it would
> make a lot of Python code incompatible.
 
I'm sure Mr Flibble does not care about such things very much ;-)
 
> Some people writing Python code
> will assume that certain operations - assignment being a key example -
> are atomic and safe to do in threaded code.
 
There is zero need for a scripting language to see the same data state in
all threads. Actually it's the opposite, because any kind of shared
state needs synchronization and should be avoided as much as possible.
 
> The GIL can therefore make
> it easier to write thread-safe code in Python, because you don't need
> explicit locks for many uses of shared state.
 
Locking data state at the script level does not scale anyway. A much
better approach is to use message queues for communicating between threads.
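 
For illustration, a minimal C++ sketch of that message-queue approach (the names MessageQueue, push and pop are invented for the example, not taken from any particular library):
 
// A thread-safe message queue: producers push, a consumer blocks in pop().
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
 
template <typename T>
class MessageQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();                      // wake one waiting consumer
    }
    T pop() {                                  // blocks until a message arrives
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};
 
int main() {
    MessageQueue<std::string> q;
    std::thread producer([&] { q.push("hello from the producer"); });
    std::cout << q.pop() << '\n';              // the queue is the only shared state
    producer.join();
}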
 
> choice when Python was first created, and you can certainly argue that
> relying on it in this way is questionable practice. But there is plenty
> of Python code that /does/ rely on it that way.
 
This does not make this code right.
 
> pure Python in multiple threads that the GIL is a problem. And if
> that's the case, you can usually use the "multiprocessing" library instead of
> the "threading" library and get your multi-processing that way.
 
All this "standard practice" is the consequence of the fact that Python
does not support multithreading properly.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 11 07:58PM +0100

Python isn't used for performance-critical code at all when it
comes to the speed of the interpretation of the language itself,
so your work would be worthless in this sense. But sometimes
people use native libraries with Python, which results in a
performant solution, and in that case your solution would be
worthless as well.
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 11 07:01PM

On 11/02/2021 18:58, Bonita Montero wrote:
> people use native libraries with Python, which results in a
> performant solution, and in that case your solution would be
> worthless as well.
 
You think my solution is worthless because you have no clue whatsoever about what problem it is solving, dear.
 
/Flibble
 
--
😎
Bonita Montero <Bonita.Montero@gmail.com>: Feb 11 08:15PM +0100

> You think my solution is worthless because you have no clue whatsoever
> about what problem it is solving, dear.
 
Then tell me what the typical performance-sensitive applications
written in Python are that depend on the speed of the language
itself and not the libraries.
I think you're a dreaming idiot who isn't capable of mastering
such a huge project.
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 11 07:37PM

On 11/02/2021 19:15, Bonita Montero wrote:
> and not the libraries.
> I think you're a dreaming idiot who isn't capable of mastering
> such a huge project.
 
Obviously as you have not learnt anything new since your last reply you simply repeat yourself as you still have no clue.
 
/Flibble
 
--
😎
Bonita Montero <Bonita.Montero@gmail.com>: Feb 11 08:47PM +0100

>> a huge project.
 
> Obviously as you have not learnt anything new since your last reply
> you simply repeat yourself as you still have no clue.
 
You have nothing to argue against what I've said.
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 11 07:50PM

On 11/02/2021 19:47, Bonita Montero wrote:
 
>> Obviously as you have not learnt anything new since your last reply
>> you simply repeat yourself as you still have no clue.
 
> You have nothing to argue against what I've said.
 
What you said is not even wrong so there is no suitable response other than simple dismissal. You have no clue about what I am doing or why.
 
/Flibble
 
--
😎
Bonita Montero <Bonita.Montero@gmail.com>: Feb 11 08:53PM +0100

> You have no clue about what I am doing or why.
 
You're simply not capable of writing a Python implementation.
Ben Bacarisse <ben.usenet@bsb.me.uk>: Feb 11 09:14PM


> I am starting work on creating a new Python implementation from
> scratch using "neos" my universal compiler that can compile any
> programming language.
 
Is there a published paper describing this work? I thought the idea of
a universal compiler had bitten the dust, so I'm curious to see what
your approach is.
 
--
Ben.
David Brown <david.brown@hesbynett.no>: Feb 11 10:18PM +0100

On 11/02/2021 19:13, Paavo Helde wrote:
 
>> Removing the GIL always sounds like a great idea, but remember it would
>> make a lot of Python code incompatible. 
 
> I'm sure Mr Flibble does not care about such things very much ;-)
 
Well, it's entirely up to him how he handles this. I can merely make
some comments that he can take into consideration. I don't know Mr.
Flibble's priorities here - perhaps existing Python code is irrelevant
for his uses.
 
 
> There is zero need for a scripting language to see the same data state in
> all threads. Actually it's the opposite, because any kind of shared
> state needs synchronization and should be avoided as much as possible.
 
Python is not a scripting language. It is a general purpose language
that works well for short scripts, amongst other things - it is used for
a /huge/ range of program types.
 
My point is that because of the GIL, a lot of shared state in Python
does not need any kind of synchronisation effort - the GIL gives you
that naturally.
 
There are, of course, good reasons for minimising shared state, and I
agree that message queues (which are easy in Python, and work almost
identically between threads or processes if you really need active
multitasking in plain Python) are an excellent way to handle information
sharing between threads and processes.
 
But I am not arguing about what is the "best" way to structure code in
Python or any other language. I am pointing out how some code /is/
written in Python. And relying on the GIL in this way is not entirely
unreasonable for code, because you know that's how the GIL works.
 
>> relying on it in this way is questionable practice.  But there is plenty
>> of Python code that /does/ rely on it that way.
 
> This does not make this code right.
 
Of course. But saying the code is "not right", or at least "not written
as well as it should be" does not mean such code is not written and run
by many people.
 
>> the "threading" library and get your multi-processing that way.
 
> All this "standard practice" is the consequence of the fact that Python
> does not support multithreading properly.
 
No - it is a consequence of Python being an interpreted and dynamic
language, emphasising flexibility and fast development over run-time
efficiency. Different languages have their strengths and weaknesses,
and run-time speed is not a strength of Python. Threading is an
irrelevant issue for a great deal of Python code. For example, modern
Python frameworks for server applications are often based on
asynchronous structures in the Python code. The underlying cpu
intensive (or network or disk intensive) work may be done in multiple
threads, but the Python code can be single-threaded asynchronous code.
The GIL or a lack of it makes no difference.
 
You can reasonably say that the popularity of the "multiprocessing" library
for some kind of work is a consequence of the GIL, as you get one GIL
per process and can thus do simultaneous work in the Python code.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 11 01:36PM -0800

On 2/11/2021 4:29 AM, Mr Flibble wrote:
> given how poorly they perform).
 
> Sample neos session (parsing a fibonacci program, neoscript rather than
> Python in this case):
[...]
 
Is this Python 3? If so, I have a program for you to test on it. It's a
sample implementation of my HMAC cipher. The code creates a test vector
for a sanity check against one of my C implementations. Here is my
Python 3 test vector code:
 
https://pastebin.com/raw/NAnsBJAZ
 
If everything is working correctly, you will get the following ciphertext:
 
72 3F 81 1F 5E B4 3D F4 E0 DD 50 CC 20 16 FF 8F
14 F5 31 37 14 31 35 F3 1C 99 EB 00 30 DA 44 CB
C6 AC E1 D9 BE 23 68 58 DE
 
Here is the C version:
 
https://pastebin.com/raw/feUnA3kP
 
(uncomment PYTHON_TEST_VECTOR to get the same ciphertext as the Python 3
test vector)
 
Afaict, it might be a good thing to test your Python impl against?
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 11 08:55PM +0100

On 11.02.2021 09:34, Chris M. Thomasson wrote:
> https://paulmck.livejournal.com
 
> He is hyper smart... Big time.
 
And a free book! Great! Except... I'm not so much into shared memory
parallel programming, but others here may be.
 
Cheers!,
 
- Alf
David Brown <david.brown@hesbynett.no>: Feb 11 12:17PM +0100

(This came from the "The weirdest compiler bug" thread in c.l.c++, but I
think it makes more sense as a thread in itself, and is equally relevant
to C and C++. Although it is gcc-specific, the principles are general
for all C11 and C++11 tools.)
 
 
Like most compilers, gcc ships with some "language support" libraries -
not the actual C or C++ runtime libraries, but code to support parts of
the language. Here I have been looking at gcc's code for supporting
atomic accesses. I'd like to get thoughts and opinions about the
correctness of the implementation. I'm particularly interested in the
targets I use myself, but I believe the problems affect any target.
 
 
From the gcc Wiki page <https://gcc.gnu.org/wiki/Atomic/GCCMM> there is
a link to a "sample libatomic", which was perhaps just meant as a simple
starting point. But it is shipped with gcc now, AFAICS. (It is
certainly shipped with the "gnu arm embedded" toolchain, but that
includes many things besides the compiler. If it turns out that it is
that toolchain packaging that is the problem rather than gcc, I'll
happily move my complaining to them!)
 
The "sample" libatomic library is here:
 
<https://gcc.gnu.org/wiki/Atomic/GCCMM?action=AttachFile&do=view&target=libatomic.c>
 
Basically, operations are done "lock-free" (using __atomic_ builtins) if
the size has __atomic_always_lock_free. Otherwise a simple user-mode
busy waiting spin lock is used, picked from an array of 16 locks using a
hash on the address.
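 
A rough C++ model of that fallback path (simplified from the description above, not the actual libatomic source; the names are invented for the sketch):
 
// Simplified sketch: a small table of user-mode spin locks, selected by
// hashing the object's address, guards operations on types that are not
// always lock-free.
#include <atomic>
#include <cstdint>
#include <cstring>
 
static const std::size_t kNumLocks = 16;        // 16 locks, as described above
static std::atomic<bool> locks[kNumLocks];      // zero-initialised: all unlocked
 
static std::size_t lock_index(const void* addr) {
    // Hash the address down to one of the locks.
    return (reinterpret_cast<std::uintptr_t>(addr) >> 4) % kNumLocks;
}
 
// "Atomic load" fallback for an object too large to be lock-free.
void atomic_load_locked(const void* src, void* dst, std::size_t size) {
    std::atomic<bool>& lock = locks[lock_index(src)];
    while (lock.exchange(true, std::memory_order_acquire)) {
        // busy-wait; harmless on a multi-core desktop, but this is the spin
        // that can hang a strictly priority-scheduled single-core RTOS, as
        // discussed below
    }
    std::memcpy(dst, src, size);
    lock.store(false, std::memory_order_release);
}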
 
In typical use on a modern multi-core system, you are going to be making
sure you use sizes that are lock-free in most cases (when you can get
64-bit, and perhaps 128-bit, that's normally enough). If you have a
type that is bigger, this library will use the locks. The likelihood of
there being a contention on the locks is very low in practice,
especially with the multiple locks with address hashing. And /if/ there
is contention, it means the losing thread will busy-wait while the other
thread finishes its work. Since the other thread is likely to be on a
different core, and the actual operation is fast, a short busy-wait is
not a problem. The only time when it would be a long delay is if one
thread took the lock, then got pre-empted before releasing it. But with
multiple cores, typical big OS "almost everything at the same level"
thread priorities, and a scheduler that boosts low priority threads that
have been stuck for a while, you can be confident that sooner or later -
usually sooner - the locking thread will run and release the lock.
 
 
Smaller embedded systems are a different world. As an example (because
it is the processor I am using), consider a Cortex-M processor and an
RTOS (real-time operating system). In this kind of system, you have a
single cpu core, and you have a strict hierarchy in thread priorities.
The highest priority runnable thread is always running (with interrupts
being somewhat akin to even higher priority threads). When you use
"real" locks - mutexes supplied by the OS - there are priority boosting
mechanisms to avoid deadlocks or blocks when a high priority thread
wants a lock that is held by a low priority thread.
 
But with spin locks, that doesn't work. So if a low priority thread
takes a lock, and then it is pre-empted (such as by an external
interrupt from a device) and a high priority thread or interrupt routine
wants the lock, that high priority thread will spin forever. It will
never give the low priority thread the chance to run and release the lock.
 
All this means that if you use atomics in your embedded system,
everything looks fine and all your testing will likely succeed. But
there is always the chance of a deadlock happening.
 
 
The efficient way to do general atomic operations (ones that can't be
handled by a single instruction or a restartable load/store exclusive
loop) on single-core devices is to disable interrupts temporarily. It
won't help for multi-core systems - there the locking must be done with
OS help.
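 
A sketch of that interrupt-masking approach for a single-core Cortex-M target, assuming the CMSIS intrinsics __get_PRIMASK(), __set_PRIMASK() and __disable_irq() are available via your (hypothetical) device header; adjust for your toolchain:
 
// Generic "atomic load" for an object too large for a single instruction or
// a restartable LDREX/STREX loop, protected by masking interrupts.
// #include "your_device.h"   // hypothetical device header providing the
//                            // CMSIS __get_PRIMASK()/__disable_irq() intrinsics
#include <cstdint>
#include <cstring>
 
void atomic_load_irq(const void* src, void* dst, std::size_t size)
{
    std::uint32_t primask = __get_PRIMASK();   // remember whether IRQs were masked
    __disable_irq();                           // nothing else can run on this core
    std::memcpy(dst, src, size);               // the actual multi-word operation
    __set_PRIMASK(primask);                    // restore: re-enables interrupts
                                               // only if they were enabled on entry
}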
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 10 05:30PM -0800

On 2/9/2021 10:35 PM, red floyd wrote:
 
>> There are a lot of smart people commenting on that thread. Heck, even
>> I am there. ;^)
 
> Not quite a compiler bug, but a library bug.
 
Well, the compiler optimization would break POSIX. I thought it was a
bit odd because gcc is used for the Linux Kernel.
 
 
 
> The fix was to declare it const volatile:
 
> const volatile char *const register = (const char *) 0x12345678L;
 
> That one took a while to find.
 
Oh yeah. I have definitely had to debug code like that. Back in the day,
volatile was a workaround. Today, using std::atomic can be a better option.
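 
As a small illustration of that last point (a minimal sketch, not code from the thread): for a flag shared between threads, std::atomic gives the ordering and atomicity guarantees that volatile never did, while volatile remains the right tool for memory-mapped registers.
 
#include <atomic>
#include <iostream>
#include <thread>
 
// volatile bool ready = false;        // the old "workaround": no atomicity or
                                       // ordering guarantees between threads
std::atomic<bool> ready{false};        // well-defined inter-thread behaviour
int payload = 0;
 
int main() {
    std::thread producer([] {
        payload = 42;                                   // plain write...
        ready.store(true, std::memory_order_release);   // ...published by the store
    });
    while (!ready.load(std::memory_order_acquire)) {    // acquire pairs with release
        // spin until the producer has published the payload
    }
    std::cout << payload << '\n';                       // guaranteed to print 42
    producer.join();
}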