Monday, September 30, 2019

Digest for comp.lang.c++@googlegroups.com - 15 updates in 4 topics

legalize+jeeves@mail.xmission.com (Richard): Sep 30 07:38PM

[Please do not mail me a copy of your followup]
 
David Brown <david.brown@hesbynett.no> spake the secret code
 
>The programming world has a big problem with its image, and often it is
>seen as not being inclusive enough.
 
The programming community is as inclusive as it needs to be in that
it doesn't exclude anyone. You got the skills and the motivation
and you're in. It's as simple as that. It has always been this way.
Maybe in the deep south during the Jim Crow era, you would be refused
a job as a programmer in a "data center", but everyone has access to
computers and programming has never been easier and the Jim Crow era
is a distant memory to anyone who was alive at the time.
 
At this point, the barrier is one of individual motivation and
interest and nothing more.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Sep 30 01:38PM -0700

On 9/30/2019 12:38 PM, Richard wrote:
>> seen as not being inclusive enough.
 
> The programming community is as inclusive as it needs to be in that
> it doesn't exclude anyone.
 
The language itself is 100% innocent. Anybody can choose to learn it. The
language is fine with that.
 
 
 
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Sep 30 01:42PM -0700

On 9/30/2019 1:38 PM, Chris M. Thomasson wrote:
>> it doesn't exclude anyone.
 
> The language itself is 100% innocent. Anybody can choose to learn it. The
> language is fine with that.
 
Actually, every language is innocent. How many of them say to a person
thinking about using it: "Well, you must fill out the following forms
before you can program using me."? None.
 
 
 
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 06:04PM +0200

I just came across another issue:
With my primary compiler there's an internal function called __chkstk
that is called when I do alloca(). This function is usually called when
there is more than a page full of variables in the stack-frame. That's
because Windows recognizes the need for more stack space only through
touching the next unmapped page down the stack. So why wasn't MS clever
enough to design Windows so that it would also recognize noncontiguous
accesses to the stack pages? Are these pages handled as if they were
overcommitted?
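 
[Editorial sketch, not part of the original post: the situation described above can be reproduced with MSVC. A stack frame that needs more than one page, or an alloca()/_alloca call whose size is unknown at compile time, typically makes the compiler emit a __chkstk probe before moving the stack pointer, so that the guard page is only ever touched one page at a time. The 4 KiB page size and the default probe threshold are assumptions here.]
 
/* Sketch: locals larger than one 4 KiB page usually trigger a __chkstk
   probe in the prologue; so does alloca() with a run-time size.        */
void big_frame(void)
{
    char buf[8192];              /* two pages of locals          */
    buf[0] = 0;                  /* touch both ends of the frame */
    buf[sizeof buf - 1] = 0;
}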
Keith Thompson <kst-u@mib.org>: Sep 30 11:13AM -0700


> Anyway, the difference with alloca () would be in a safer
> management of possible collisions and running out of stack
> space ? (compared with the plain declarations) ?
 
No. Neither VLAs nor alloca() provide any protection against running
out of space. Both have undefined behavior if there isn't enough memory
for the allocation.
 
alloca() (which, again, is non-standard) is a function that returns a
pointer. It has no provision to return a null pointer on failure.
 
Defining a VLA, on the other hand, creates an array object to which you
can meaningfully apply sizeof.
 
Also, space allocated by alloca() is deallocated on return from the
enclosing function, not at the end of the enclosing block.
 
Since alloca() is a function, it can be invoked in contexts where a VLA
can't be created. If you call alloca() as part of a function argument,
it's likely to corrupt the stack.
 
VLAs were introduced in C99 and made optional in C11 and are not
supported, except perhaps as an extension, in C++.
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
void Void(void) { Void(); } /* The recursive call of the void */
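 
[Editorial sketch of the two differences Keith lists above, not part of his post. alloca() is non-standard; a glibc-style <alloca.h> is assumed.]
 
#include <alloca.h>   /* non-standard header, present on glibc */
#include <stdio.h>
 
static void demo(size_t n)
{
    for (size_t i = 0; i < 3; ++i) {
        char *p = alloca(n);   /* NOT released at the end of this block:   */
        (void)p;               /* all three allocations live until return  */
        int v[n];              /* a VLA: a real array object, block scope  */
        printf("sizeof v = %zu\n", sizeof v);   /* evaluated at run time   */
    }
}                              /* all alloca() storage is released here    */
 
int main(void)
{
    demo(16);
    return 0;
}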
Keith Thompson <kst-u@mib.org>: Sep 30 11:14AM -0700

Anton Shepelev <anton.txt@g{oogle}mail.com> writes:
[...]
> At least, it is absent from TCC, but the specific compilers
> are of no importance.
 
I just used alloca() with tcc on my system. It generates a call to the
implementation in the GNU C library. (With gcc, calls to alloca() are
inlined.)
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
void Void(void) { Void(); } /* The recursive call of the void */
William Ahern <william@25thandClement.com>: Sep 30 11:01AM -0700

> as to design Windows that it would also recognize noncontiguous
> accesses to the stack pages? Are these pages handled as if being
> overcommitted?
 
The compiler does this to ensure the alloca'd region doesn't extend past the
stack guard page. The kernel can handle noncontiguous stack access, but what
happens if your block extends *past* the stack into the dynamic heap region?
 
See https://blog.qualys.com/securitylabs/2017/06/19/the-stack-clash
and https://lwn.net/Articles/725832/ for more info
gazelle@shell.xmission.com (Kenny McCormack): Sep 30 04:26PM

In article <e2720f0b-ca77-4e65-be10-8179a49f9cbc@googlegroups.com>,
>> ming-language. But I use it for performance-reasons. Can anyone name
>> compilers that don't support alloca()?
 
>I'm curious, can you give an example of where you'd use this?
 
I think "man alloca" covers this well enough.
 
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/Infallibility
Paavo Helde <myfirstname@osa.pri.ee>: Sep 30 10:26PM +0300

On 30.09.2019 19:04, Bonita Montero wrote:
> the next unmapped page down the stack. So why hasn't MS been so clever
> as to design Windows that it would also recognize noncontiguous accesses
> to the stack pages?
 
I guess this would mean setting up e.g. 64 guard pages instead of one or
two, and most of those pages would remain untouched and unused
(many-many service threads use extremely little stack space).
 
Also, this would not avoid the need for checks at alloca(), it would
still need to check that the allocation fits in the total free space of
the current stack.
 
> Are these pages handled as if being overcommitted?
 
As a little experiment shows, in Windows the virtual address space for
the stack is reserved, but not committed. Strictly speaking, you cannot
over-commit something which is not committed. But I can see how this
might feel similar.
scott@slp53.sl.home (Scott Lurndal): Sep 30 07:38PM

>of declaration of the variable-size array.
 
>I can't figure out how the compiler would reference them in
>standard efficient ways (use of registers).
 
 
sub %rcx, %rsp # rcx is the number of bytes being allocated
mov %rsp, %rax # rax now has the starting address of the first alloca() region
 
....
 
sub %rdx, %rsp # rdx is the number of bytes being allocated
mov %rsp, %rbx # rbx now has the starting address of the second alloca() region
....
mov %rbp, %rsp # the epilogue restores the stack pointer from the frame pointer,
pop %rbp
ret # so all alloca storage is deallocated on return
 
(AT&T form)
 
It can always spill rax or rbx to the stack under register pressure.
scott@slp53.sl.home (Scott Lurndal): Sep 30 07:46PM

>> as to design Windows that it would also recognize noncontiguous
>> accesses to the stack pages?
 
>I guess this would mean setting up e.g. 64 guard pages instead of one or
 
A "guard page" is just a PTE with the valid bit reset. In most cases,
that PTE is there regardless of whether that page is actually being
used (or has been allocated).
 
The OS can look at the faulting address vis-a-vis the stack region allocated at
process creation and heuristically determine if the stack should be
extended to include that address.
 
>two, and most of those pages would remain untouched and unused
>(many-many service threads use extremely little stack space).
 
Virtual address space is a scarce commodity on 32-bit systems with a
split virtual address space - it's the virtual address space that is
consumed by guard pages. And since the stack often grows down towards
the heap, allocating a sufficiently large VLA or alloca() block may
corrupt the heap if the OS can't distinguish between the accesses
(which is likely).
Joe Pfeiffer <pfeiffer@cs.nmsu.edu>: Sep 30 02:28PM -0600

>> not specified.
 
> ah! So you are suggesting a sort of internal reordering on the stack
> (like: all known-size types first, then the var-sized ones)?
 
I'm saying there is nothing (I know of) prohibiting a compiler from
doing that; without thinking too deeply about it, I suspect that if I
were writing the compiler that would be my first approach (since, as you
suggest, that would permit the other variables to be accessed through
constant offsets).
aminer68@gmail.com: Sep 30 11:41AM -0700

Hello,
 
 
What about garbage collection?
 
Read what this serious specialist, Chris Lattner, said:
 
"One thing that I don't think is debatable is that the heap compaction
behavior of a GC (which is what provides the heap fragmentation win) is
incredibly hostile for cache (because it cycles the entire memory space
of the process) and performance predictability."
 
"Not relying on GC enables Swift to be used in domains that don't want
it - think boot loaders, kernels, real time systems like audio
processing, etc."
 
"GC also has several *huge* disadvantages that are usually glossed over:
while it is true that modern GC's can provide high performance, they can
only do that when they are granted *much* more memory than the process
is actually using. Generally, unless you give the GC 3-4x more memory
than is needed, you'll get thrashing and incredibly poor performance.
Additionally, since the sweep pass touches almost all RAM in the
process, they tend to be very power inefficient (leading to reduced
battery life)."
 
Read more here:
 
https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html
 
Here is Chris Lattner's Homepage:
 
http://nondot.org/sabre/
 
And here is Chris Lattner's resume:
 
http://nondot.org/sabre/Resume.html#Tesla
 
 
This is why I have invented the following scalable algorithm and its
implementation that makes Delphi and FreePascal more powerful:
 
My invention, my scalable reference counting with efficient support for weak references, version 1.37, is here..
 
Here I am again: I have just updated my scalable reference counting with efficient support for weak references to version 1.37. I have just added a TAMInterfacedPersistent that is a scalable reference-counted version, and now I think I have made it complete and powerful.
 
Because I have just read the following web page:
 
https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations
 
But I don't agree with what the author of the above web page wrote, because I think you have to understand the "spirit" of Delphi; here is why:
 
A component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means, in English, in "most" cases, and this is the most important thing to understand). In that scenario, reference counting is not used.
 
If you pass a component as an interface reference, it would be very unfortunate if it was destroyed when the method returns.
 
Therefore, reference counting in TComponent has been removed.
 
Also, I have just added TAMInterfacedPersistent to my invention.
 
To use scalable reference counting with Delphi and FreePascal, just replace TInterfacedObject with my TAMInterfacedObject, the scalable reference-counted version, and replace TInterfacedPersistent with my TAMInterfacedPersistent, the scalable reference-counted version. You will find both TAMInterfacedObject and TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To learn how to use weak references, please take a look at the included demo called example.dpr and at the tutorial about weak references inside my zip file. To learn how to use delegation, take a look at the included demo called test_delegation.pas and at the tutorial about delegation inside my zip file, which teaches you how to use delegation.
 
I think my scalable reference counting with efficient support for weak references is stable and fast. It works on both Windows and Linux, and it scales on multicore and NUMA systems. You will not find it in C++ or Rust, and I don't think you will find it anywhere else. This invention of mine solves the problem of dangling pointers and the problem of memory leaks, and it is "scalable".
 
And please read the readme file inside the zip file, which I have just extended to help you understand more.
 
You can download my new scalable reference counting with efficient support for weak references version 1.37 from:
 
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Sep 30 11:05AM -0700

Hello...
 
 
More about compile time and build time..
 
Look here about Java; it says:
 
 
"Java Build Time Benchmarks
 
I'm trying to get some benchmarks for builds and I'm coming up short via Google. Of course, build times will be super dependent on a million different things, but I'm having trouble finding anything comparable.
 
Right now: We've got ~2 million lines of code and it takes about 2 hours for this portion to build (this excludes unit tests).
 
What do your build times look like for similar sized projects and what did you do to make it that fast?"
 
 
Read here to notice it:
 
https://www.reddit.com/r/java/comments/4jxs17/java_build_time_benchmarks/
 
 
So 2 million lines of Java code take about 2 hours to build.
 
 
And how long do you think 2 million lines of code take to build
with Delphi?
 
Answer: Just about 20 seconds.
 
 
Here is the proof from Embarcadero; read it and watch the video to be convinced about Delphi:
 
https://community.idera.com/developer-tools/b/blog/posts/compiling-a-million-lines-of-code-with-delphi
 
C++ also takes "much" more time to compile than Delphi.
 
 
This is why I said previously the following:
 
 
I think Delphi is a single-pass compiler and is very fast at compile time, while C++, Java and C# are multi-pass compilers that are much slower than Delphi at compile time; but I think the executable code generated by Delphi is still fast, and faster than C#'s.
 
And what about the advantages and disadvantages of single-pass and multi-pass compilers?
 
And from automata theory we get that any Turing machine that makes 2 (or more) passes over the tape can be replaced with an equivalent one that makes only 1 pass, with a more complicated state machine. At the theoretical level, they are the same. At a practical level, all modern compilers make only one pass over the source code. It is typically translated into an internal representation that the different phases analyze and update. During flow analysis, basic blocks are identified. Common subexpressions are found, precomputed and their results reused. During loop analysis, invariant code is moved out of the loop. During code emission, registers are assigned and peephole analysis and code reduction are applied.
 
 
 
Thank you,
Amine Moulay Ramdane.

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

David Brown <david.brown@hesbynett.no>: Sep 30 04:56PM +0200

On 30/09/2019 16:33, Anton Shepelev wrote:
>> the program to function as required.
 
> Reliance on compiler features puts the same fetters on free
> and commerical code.
 
The great majority of C and C++ programs are dependent on features that
are limited to specific target processors, compilers, and/or OS's. Some
programs are written in a way to minimise these dependencies, or at
least to isolate them in a small number of files. For many other
programs, it really doesn't matter - the code will only be used with one
compiler or on one target.
 
And some features are non-standard but found on many compilers -
alloca() is one of these. If you are using a serious compiler, it will
support alloca(). Your alloca() code might not work on a tiny
microcontroller, or on a toy compiler like TCC, but it is highly likely
that a lot more of the code will have trouble there too.
 
 
> You misunderstood my usage of "no longer". I meant that
> once you start using alloca() your program is no longer
> standard C.
 
It is, I believe, extremely unlikely that use of alloca() will be the
deciding factor for which compilers can be used with the code.
Melzzzzz <Melzzzzz@zzzzz.com>: Sep 30 03:29PM

> `alloca(I)' worth it? Is not there a reasonably simple
> solution within standard C, such as an array, a variable-
> length array, or a pre-allocated memory block on the heap?
 
VLAs eliminate the need for alloca...
 
--
press any key to continue or any other to quit...
U ničemu ja ne uživam kao u svom statusu INVALIDA -- Zli Zec
Na divljem zapadu i nije bilo tako puno nasilja, upravo zato jer su svi
bili naoruzani. -- Mladen Gogala
Bo Persson <bo@bo-persson.se>: Sep 30 05:29PM +0200

On 2019-09-30 at 16:09, Bonita Montero wrote:
 
>> I'm curious, can you give an example of where you'd use this?
 
> At a point where I did know I needed only a small variable sized array
> of pointers.
 
If it is a *very* small number, you might of course always allocate the
upper limit (like 10), and just use the 5-7 needed by a specific invocation.
 
If the upper limit is significantly larger, the stack usage might make
alloca() less than 100% portable anyway.
 
 
 
Bo Persson
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 05:31PM +0200

Am 30.09.2019 um 17:29 schrieb Melzzzzz:
>> solution within standard C, such as an array, a variable-
>> length array, or a pre-allocated memory block on the heap?
 
> VLA elliminates need for alloca...
 
C++ doesn't have VLAs.
I want to discuss this issue in both groups because it is a
compiler-specific problem. A compiler that supports C++ usually also
compiles C code. And if it supports alloca(), it supports it in both
languages.
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 05:33PM +0200

> If it is a *very* small number, you might of course always allocate
> the upper limit (like 10), and just use the 5-7 needed by a specific
> invocation.
 
No, the limit is higher, at most about 100 pointers.
 
> If the upper limit is significantly larger, the stack usage might
> make alloca() less than 100% portable anyway.
 
The code I write is not for Arduinos.
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 30 06:48PM +0200

On 30/09/2019 14:02, Bonita Montero wrote:
> ming-language. But I use it for performance-reasons. Can
> anyone name
> compilers that don't support alloca()?
 
 
Sorry, mine is not an answer but a related question.
 
Since when (if ever) have variable-size automatic arrays been
supported?
 
I mean a temporary array local to a function whose size becomes known
only at runtime (it seems to me a generalization of variadic functions,
which, too, do not know at compile time how much stack space they will
consume).
 
If implemented, it seems equivalent to alloca() ... is there then some
limitation on the ORDER of automatic variable creation?
 
I mean, in my simple mind I'd guess such a variable-size array should
follow all the "sure" (known-size) variables, and that there should be
at most one single dynamically sized variable (or at least that would
avoid strange and heavy access overhead, beyond just _ESP/_EBP).
 
A single variable-size array on the stack, with no further variables
after it, does not seem that difficult to implement, or am I wrong?
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 06:51PM +0200

> Since when (if at any time) was variable size automatic array supported ?
 
Since C99.
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 30 06:55PM +0200

On 30/09/2019 18:51, Bonita Montero wrote:
>> Since when (if at any time) was variable size automatic
>> array supported ?
 
> Since C99.
 
Out of curiosity, does this feature have one or more of the
limitations I feared ... or none of them?
 
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 06:58PM +0200


>> Since C99.
 
> for curiosity, this feature has one or more of the limitations I feared
> ... or even none of them ?
 
Yes, you could run out of stack-space, maybe hitting a guard-page
in the best case so that you wouldn't corrupt any other data.
Philipp Klaus Krause <pkk@spth.de>: Sep 30 07:14PM +0200

Am 30.09.19 um 18:51 schrieb Bonita Montero:
>> Since when (if at any time) was variable size automatic array supported ?
 
> Since C99.
 
And C99 is the only version where support for them is mandatory.
 
As of C11, they are optional.
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 07:16PM +0200

>> Since C99.
 
> And C99 is the only version where support for them is mandatory.
 
> As of C11, they are optional.
 
And with C11 you can ask the compiler via the __STDC_NO_VLA__ macro
whether VLAs are unsupported.
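 
[Editorial sketch of such a check, not part of the original post: __STDC_NO_VLA__ is the C11 feature-test macro, defined by implementations that do not support VLAs.]
 
#include <stdio.h>
 
int main(void)
{
#ifdef __STDC_NO_VLA__
    puts("this implementation does not support VLAs");
#else
    int n = 4;
    int a[n];                              /* a VLA */
    printf("sizeof a = %zu\n", sizeof a);  /* run-time size: 4 * sizeof(int) */
#endif
    return 0;
}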
Joe Pfeiffer <pfeiffer@cs.nmsu.edu>: Sep 30 11:20AM -0600

> consume).
 
> If implemented, It seems equivalent to alloca () ... is it there some
> limitation then in ORDER of automatic variable creations ?
 
There is no such limitation on your C program. The VLA can be first,
last, or in the middle. And you can have more than one of them.
 
Where in the activation record a compiler chooses to put it is, AFAIK,
not specified.
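 
[Editorial sketch of the point above, not part of the original post: VLAs may be declared before, after, or between fixed-size locals, and there may be more than one of them. n is assumed to be greater than zero.]
 
#include <stddef.h>
 
void f(size_t n)
{
    double first[n];        /* a VLA first          */
    int    fixed = 42;      /* a fixed-size local   */
    char   second[n * 2];   /* another VLA after it */
 
    first[0]  = fixed;      /* both behave like ordinary arrays */
    second[0] = 'x';
}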
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 07:42PM +0200

>> limitation then in ORDER of automatic variable creations ?
 
> There is no such limitation on your C program. The VLA can be first,
> last, or in the middle. And you can have more than one of them.
 
alloca() can be called at almost any point in a function (though with
some compilers not within a function call's argument list).
scott@slp53.sl.home (Scott Lurndal): Sep 30 05:52PM

>> Since when (if at any time) was variable size automatic array supported ?
 
>Since C99.
 
Albeit available pre-C99 standardization in several compilers.
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 30 07:53PM +0200

On 30/09/2019 18:58, Bonita Montero wrote:
 
> Yes, you could run out of stack-space, maybe hitting a
> guard-page
> in the best case so that you wouldn't corrupt any other data.
 
Uhm, I didn't mean THAT kind of risk (which is unavoidable), but the
possible difficulties in defining variables beyond the point of
declaration of the variable-size array.
 
I can't figure out how the compiler would reference them in the
standard efficient ways (use of registers).
 
Anyway, would the difference with alloca() be a safer management of
possible collisions and of running out of stack space (compared with
the plain declarations)?
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 30 07:54PM +0200

On 30/09/2019 19:20, Joe Pfeiffer wrote:
> last, or in the middle. And you can have more than one of them.
 
> Where in the activation record a compiler chooses to put it is, AFAIK,
> not specified.
 
Ah! So you are suggesting a sort of internal reordering on the stack
(like: all known-size types first, then the var-sized ones)?
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 30 08:01PM +0200

On 30/09/2019 19:20, Joe Pfeiffer wrote:
> last, or in the middle. And you can have more than one of them.
 
> Where in the activation record a compiler chooses to put it is, AFAIK,
> not specified.
 
I have Qt, which does not support them, but if somebody would be so
kind as to post a minimal example of the asm code generated for
something like
 
int Funz (int A, int B)
{
    int C = 3;
    int Buf [A];
    int D = 2;
    Buf [A-1] = B;
    return (A + B * D - C);
}
 
int main (void)
{
    return Funz (4, 3);
}
 
I'd like to examine the generated assembler in order to figure out how
Buf (and D, which apparently lies beyond it) are addressed.
 
Ah, and if it's not too annoying, with most if not all optimizations
TURNED OFF (maybe the example is too minimal and would be sort of
resolved at compile time :\)
 
tnx in advance
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 30 08:02PM +0200

On 30/09/2019 19:42, Bonita Montero wrote:
 
> alloca() can be called at almost any point in a function
> (with some
> compilers not within a function-call).
 
It's not clear to me what differences there are between alloca() usage
and a plain declaration, then ...
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)

Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

queequeg@trust.no1 (Queequeg): Sep 30 12:02PM


>> Thanks! I found it online.
 
> It's funny how casually people just admit to their illegal piracy of
> intellectual property.
 
It's funny how quick people are to jump to wrong conclusions based on
limited input and their own experience.
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 30 05:32AM -0700

On Wednesday, September 25, 2019 at 2:16:43 PM UTC+1, Juha Nieminen wrote:
 
> It's funny how casually people just admit to their illegal piracy of
> intellectual property.
 
 
Is piracy theft? Is it stealing?
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 03:06PM +0200


>> Thanks! I found it online.
 
> It's funny how casually people just admit to their illegal piracy of
> intellectual property.
 
https://www.google.com/search?client=firefox-b-d&q=%22Effective+Modern+C%2B%2B%22+doctype%3Apdf
;-)
David Brown <david.brown@hesbynett.no>: Sep 30 03:30PM +0200

On 30/09/2019 14:32, Frederick Gotham wrote:
 
>> It's funny how casually people just admit to their illegal piracy of
>> intellectual property.
 
> Is piracy theft? Is it stealing?
 
Yes, piracy is theft (and therefore stealing). But copyright abuse is
not piracy, nor is any unlicensed use of intellectual property. (To be
"theft", you have to take something away from the rightful owner, so
making a copy is not theft.) However, accurate terms such as
"unlicensed copyright abuse" do not sound as dramatic as "piracy".
 
Not that there was any suggestion of copyright abuse here - Queequeg
(presumably) found a legitimate website selling the book.
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 30 07:14AM -0700

On Monday, September 30, 2019 at 2:31:01 PM UTC+1, David Brown wrote:
 
> "theft", you have to take something away from the rightful owner, so
> making a copy is not theft.) However, accurate terms such as
> "unlicensed copyright abuse" do not sound as dramatic as "piracy".
 
 
I can't keep track of the meaning in the paragraph.
queequeg@trust.no1 (Queequeg): Sep 30 02:47PM


>> It's funny how casually people just admit to their illegal piracy of
>> intellectual property.
 
> Is piracy theft? Is it stealing?
 
Fun fact: in my jurisdiction, downloading music, movies or ebooks is
completely legal. Only sharing (therefore, downloading with torrents
qualify) is not. For some reason it's different only with computer
programs and games.
 
--
https://www.youtube.com/watch?v=9lSzL1DqQn0
David Brown <david.brown@hesbynett.no>: Sep 30 04:48PM +0200

On 30/09/2019 16:14, Frederick Gotham wrote:
>> making a copy is not theft.) However, accurate terms such as
>> "unlicensed copyright abuse" do not sound as dramatic as "piracy".
 
> I can't keep track of the meaning in the paragraph.
 
Piracy is theft.
 
Making unlicensed copies, like copying e-books, games, music, etc., that
you are supposed to buy, is not theft. It is illegal, but it is not
theft. And it is certainly not piracy.
 
I hope that answers your original question.
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 02:02PM +0200

I know alloca() is not an official part of either the C or the C++
programming language. But I use it for performance reasons. Can anyone
name compilers that don't support alloca()?
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 30 07:02AM -0700

On Monday, September 30, 2019 at 1:02:16 PM UTC+1, Bonita Montero wrote:
> I know alloca() neither is an official part of the C nor C++ program-
> ming-language. But I use it for performance-reasons. Can anyone name
> compilers that don't support alloca()?
 
 
I'm curious, can you give an example of where you'd use this?
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 04:09PM +0200

>> ming-language. But I use it for performance-reasons. Can anyone name
>> compilers that don't support alloca()?
 
> I'm curious, can you give an example of where you'd use this?
 
At a point where I knew I needed only a small variable-sized array
of pointers.
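 
[Editorial sketch of that kind of use, not part of the original post. The struct item type and the process_items helper are hypothetical names, and a glibc-style <alloca.h> is assumed; alloca() itself is non-standard.]
 
#include <alloca.h>   /* non-standard */
#include <stddef.h>
 
struct item { struct item *next; int value; };
 
static void process_items(struct item **v, size_t n)   /* illustrative stub */
{
    (void)v;
    (void)n;
}
 
void collect_and_process(struct item *head, size_t n)
{
    /* a small array of pointers whose count is known only at run time;
       the storage is released automatically when this function returns */
    struct item **tmp = alloca(n * sizeof *tmp);
    for (size_t i = 0; i < n; ++i, head = head->next)
        tmp[i] = head;
    process_items(tmp, n);
}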
Anton Shepelev <anton.txt@g{oogle}mail.com>: Sep 30 05:12PM +0300

Bonita Montero:
 
>C++ programming-language. But I use it for performance-
>reasons. Can anyone name compilers that don't support
>alloca()?
 
At least, it is absent from TCC, but the specific compilers are of no
importance. The problem is that code with `alloca()' is no longer
standard C and depends on specific compilers, which harms the freedom
of the users of your code to compile it with *any* C compiler. Are the
benefits of `alloca()' worth it? Is there not a reasonably simple
solution within standard C, such as an array, a variable-length array,
or a pre-allocated memory block on the heap?
 
--
() ascii ribbon campaign - against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 04:16PM +0200

> `alloca()' is no longer starndard C and depends on specific
> compilers, which harms the freedom of the users of your code
> to compile it with *any* C compiler.
 
Given the set of compilers that support alloca(), this doesn't really
count. And usually you have dependencies on functions beyond those
of standard C/C++ anyway.
 
> Are the benefits of `alloca(I)' worth it?
 
Yes, alloca() is fast.
 
> Is not there a reasonably simple solution within standard C,
> such as an array, a variable-length array, or a pre-allocated
> memory block on the heap?
 
VLAs aren't part of C++, and heap allocations have an order of
magnitude higher cost.
scott@slp53.sl.home (Scott Lurndal): Sep 30 02:30PM

>`alloca()' is no longer standard C and depends on specific
>compilers, which harms the freedom of the users of your code
>to compile it with *any* C compiler.
 
Good grief. 90% of all C code isn't open source and the programmer
is free to use whatever features necessary for the program to
function as required.
 
And, FWIW, alloca() has never been standard C.
Anton Shepelev <anton.txt@g{oogle}mail.com>: Sep 30 05:30PM +0300

Bonita Montero:
 
>Depending on the set of compilers that support `alloca()'
>this doesn't count.
 
I disagree. It is a difference between standard code and
compiler-dependent code. You are forcing users to use
specific compilers, regardless of how large a subset it is.
 
>And usually you have dependencies on functions beyond those
>of standard-C/C++ anyway.
 
Those dependencies are available as C code or libraries,
whereas `alloca()' is not and cannot be.
 
>>Are the benefits of `alloca()' worth it?
 
>Yes, alloca() is fast.
 
I didn't say it wasn't, but suggested that you consider a
fast solution in standard C. By the way, have you made
certain that dynamic memory allocation would be the
bottleneck of your algorithm?
 
>>C, such as an array, a variable-length array, or a pre-
>>allocated memory block on the heap?
 
>VLAs aren't part of C++ ->
 
You have written to comp.lang.c too.
 
>-> and heap-allocations haver a magnitude higher cost.
 
I proposed to reuse the same heap block in many invocations
of your functions. Sometimes it is possible.
 
--
() ascii ribbon campaign - against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]
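 
[Editorial sketch of the reuse Anton describes, not part of his post: keep one heap block across invocations and grow it only when a call needs more. Single-threaded; the names are illustrative.]
 
#include <stdlib.h>
 
static void  *scratch;        /* reused across invocations       */
static size_t scratch_cap;    /* current capacity of the block   */
 
static void *get_scratch(size_t need)
{
    if (need > scratch_cap) {
        void *p = realloc(scratch, need);
        if (p == NULL)
            return NULL;      /* old block remains valid         */
        scratch = p;
        scratch_cap = need;
    }
    return scratch;           /* amortises the allocation cost   */
}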
Anton Shepelev <anton.txt@g{oogle}mail.com>: Sep 30 05:33PM +0300

Scott Lurndal to Anton Shepelev:
 
 
>Good grief. 90% of all C code isn't open source and the
>programmer is free to use whatever features necessary for
>the program to function as required.
 
Reliance on compiler features puts the same fetters on free
and commercial code.
 
>And, FWIW, alloca() has never been standard C.
 
You misunderstood my usage of "no longer". I meant that
once you start using alloca() your program is no longer
standard C.
 
--
() ascii ribbon campaign - against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 04:44PM +0200


> I disagree. It is a difference between standard code and
> compiler-dependent code. You are forcing users to use
> specific compilers, regardless of how large a subset it is.
 
Most programs use platform-specific means; that's rarely an issue.
 
>> those of standard-C/C++ anyway.
 
> Those dependencies are available as C code or libraries,
> whereas `alloca()' is not and cannot be.
 
I only wanted to give an example of the fact that you often depend on
more than just standard C or C++. And alloca() is only one such example.
 
> I didn't say it wasn't, but suggested that you consider a
> fast solution in standard C.
 
I need it in C++. But the general issue applies to both C and C++,
so I posted it in both newsgroups.
 
>> VLAs aren't part of C++ ->
 
> You have written to comp.lang.c too.
 
I don't want to discuss VLAs but alloca().
Ian Collins <ian-news@hotmail.com>: Sep 30 04:36PM +1300

On 30/09/2019 09:45, Vir Campestris wrote:
 
> Our coding standard says to use a sensible name - and then stick a
> random 8-digit hex value after it. It even tells you how to derive the 8
> digits if like most of us you are on Linux.
 
Are there any mainstream compilers that don't support #pragma once? It
makes life much simpler.
 
--
Ian.
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 30 01:11AM -0400

On 9/27/19 10:37 PM, Mark wrote:
> kFavoriteWine xx = XXX_YYY ;
 
> }
> Why does the upper-case enumerator collide with the preprocessor macro even though the macro is wrapped in a namespace? Can someone point me to some standard verbiage saying why that's wrong?
 
The key point is that preprocessing (including the expansion of macros)
occurs during translation phase 4 (5.2p4). Namespaces aren't recognized
as such until translation phase 8 (5.2p8), and therefore have no effect
on preprocessing.
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 30 01:34AM -0700

On Saturday, September 28, 2019 at 3:37:42 AM UTC+1, Mark wrote:
> kFavoriteWine xx = XXX_YYY ;
 
> }
> Why does the upper-case enumerator collide with the preprocessor macro even though the macro is wrapped in a namespace? Can someone point me to some standard verbiage saying why that's wrong?
 
Every source code file (.c, .cpp) in your project gets all of its header files added into it, and the preprocessor performs substitutions. What you're left with is called a translation unit.
 
source file (and its headers) ------> PREPROCESSOR -------------> translation unit
 
It is the translation units that are then fed into the compiler:
 
translation unit -----------------> COMPILER -------------------> object file
 
And then it is the object files that are fed into the linker:
 
object files -----------------> LINKER --------------------> executable file
Jorgen Grahn <grahn+nntp@snipabacken.se>: Sep 30 12:25PM

On Mon, 2019-09-30, Ian Collins wrote:
>> digits if like most of us you are on Linux.
 
> Are there any mainstream compilers that don't support #pragma once? It
> makes life much simpler.
 
Life is already simple; I think this thread exaggerates the problem.
In a header foo/bar.h in project someproj:
 
#ifndef SOMEPROJ_FOO_BAR_H
 
The include guard will be unique (as an include guard) and very
unlikely to clash with anything else.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Ben Bacarisse <ben.usenet@bsb.me.uk>: Sep 30 02:55PM +0100


> #ifndef SOMEPROJ_FOO_BAR_H
 
> The include guard will be unique (as an include guard) and very
> unlikely to clash with anything else.
 
A handy tip is to start with the H_:
 
#ifndef H_SOMEPROJ_FOO_BAR
 
because SOMEPROJ might begin with E and, at least in C, macro names
beginning with E are reserved for future library versions.
 
--
Ben.
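 
[Editorial sketch putting the two suggestions together, not part of the original posts, for a header foo/bar.h in project someproj. #pragma once, mentioned elsewhere in the thread, is a widely supported but non-standard alternative.]
 
/* foo/bar.h */
#ifndef H_SOMEPROJ_FOO_BAR
#define H_SOMEPROJ_FOO_BAR
 
/* declarations ... */
 
#endif /* H_SOMEPROJ_FOO_BAR */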
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 30 02:16AM -0700

On Saturday, September 28, 2019 at 12:55:24 PM UTC+1, Rick C. Hodgin wrote:
 
> When you do, you will find what I'm talking about, and not
> because I say so, but because it's all real.
 
 
Rick, are you doing anything in December from 20th til 30th?
rick.c.hodgin@gmail.com: Sep 30 02:59AM -0700

On Monday, September 30, 2019 at 5:17:06 AM UTC-4, Frederick Gotham wrote:
> > When you do, you will find what I'm talking about, and not
> > because I say so, but because it's all real.
 
> Rick, are you doing anything in December from 20th til 30th?
 
 
Private inquiries can be asked and addressed in email.
 
--
Rick C. Hodgin
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 30 03:31AM -0700


> > Rick, are you doing anything in December from 20th til 30th?
 
> Private inquiries can be asked and addressed in email.
 
I propose to you that we discuss this publicly in the midst of the disbelievers.
David Brown <david.brown@hesbynett.no>: Sep 30 03:22PM +0200

On 30/09/2019 12:31, Frederick Gotham wrote:
 
>> Private inquiries can be asked and addressed in email.
 
> I propose to you that we discuss this publicly in the midst of the disbelievers.
 
I suspect you'll find the disbelievers will be happier for you to
discuss private inquiries by email.

Digest for comp.programming.threads@googlegroups.com - 2 updates in 2 topics

aminer68@gmail.com: Sep 29 04:05PM -0700

Hello,
 
 
As you have noticed, I am also coding with Delphi and FreePascal in
the Delphi mode. Now here is a performance comparison of C# 2013, Delphi XE6, and Python 3.4:
 
"Delphi XE6 is considerably faster than C# 2013 and Python 3.4 in terms of response time. Python is stronger than the other two languages only in terms of code density. Delphi XE6 uses 50% less memory than C# 2013 and almost 54% less memory than Python. That is to say, the programs
coded in Delphi XE6 can run faster by using less memory."
 
Read more here:
 
https://pdfs.semanticscholar.org/a8e1/2ac9f4bdb3b47f79df26c7c27cb175afa139.pdf
 
 
I am not stupid to also use the Delphi and FreePascal compilers, and of course I am also using their modern Object Pascal (in the Delphi mode of FreePascal), and if you are not convinced that I am not stupid to use them, here is proof that Delphi is a serious compiler:
 
NASA is also using Delphi, read about it here:
 
https://community.embarcadero.com/blogs/entry/want-moreexploration-40857
 
 
The European Space Agency is also using Delphi, read about it here:
 
https://community.embarcadero.com/index.php/blogs/entry/delphi-s-involvement-with-the-esa-rosetta-comet-spacecraft-project-1
 
 
Read more here:
 
https://glooscapsoftware.blogspot.com/2017/05/software-made-with-delphi-how-do-you.html

 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Sep 29 02:19PM -0700

Hello...
 
 
About Monitors..
 
I have just taken a look at the following about an implementation of
Win32 Condition Variables and Monitors for Delphi:
 
https://blog.gurock.com/software/win32-condition-variables-and-monitors-for-delphi/
 
 
But I think that my inventions, my SemaMonitor and my SemaCondvar, are much more "powerful"; here they are:
 
https://sites.google.com/site/scalable68/semacondvar-semamonitor
 
Read about my SemaMonitor and my SemaCondvar and you will notice
that they are more capable inventions.
 
And now I think I will construct a higher abstraction, a Monitor, over my SemaCondvar; this Monitor will support the following:
 
 
My new Monitor will support the following:
 
"If you don't want the signal to be lost when no threads are waiting, just pass True to the state argument of the constructor; if you pass False to the state argument of the constructor, signals will be lost when no threads are waiting."
 
 
Also, I think I will add support for priorities to my inventions, my SemaMonitor and my SemaCondvar; they will support the following priorities:
 
You will be able to give the following priorities:
 
LOW_PRIORITY
 
NORMAL_PRIORITY
 
HIGH_PRIORITY
 
 
 
And more than that, I will add my following invention, my powerful Fast Mutex, to my SemaMonitor, to my SemaCondvar and to my new Monitor; read about it in my following thoughts:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
 
I understand this video, but I have to explain my work:
 
I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I will give you an example:
 
Read the following from the senior research scientist Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you can see, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much, much better; I think mine is the "best", and I think you will not find it anywhere else. My new Fast Mutex has the following characteristics:
 
1- Starvation-free
2- Good fairness
3- It keeps efficiently and very low the cache coherence traffic
4- Very good fast path performance (it has the same performance as the
scalable MCS lock when there is contention.)
5- And it has a decent preemption tolerance.
 
This is how I am an "inventor". I have also invented other scalable algorithms, such as a scalable reference counting with efficient support for weak references, a fully scalable threadpool, and a fully scalable FIFO queue, along with other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or
Google or Embarcadero or such software companies.
 
 
So, as you have noticed, I am an "inventor", and I think I will sell
some of my software inventions and their implementations to
Google or to Intel or to Microsoft or to Embarcadero or such
companies.
 
 
 
Thank you,
Amine Moulay Ramdane.