- "CppCon 2019: Herb Sutter ???De-fragmenting C++: Making Exceptions and RTTI More Affordable and Usable???" - 3 Updates
- alloca()-support - 10 Updates
- What about garbage collection? - 1 Update
- More about compile time and build time.. - 1 Update
legalize+jeeves@mail.xmission.com (Richard): Sep 30 07:38PM [Please do not mail me a copy of your followup] David Brown <david.brown@hesbynett.no> spake the secret code >The programming world has a big problem with its image, and often it is >seen as not being inclusive enough. The programming community is as inclusive as it needs to be in that it doesn't exclude anyone. You got the skills and the motivation and you're in. It's as simple as that. It has always been this way. Maybe in the deep south during the Jim Crow era, you would be refused a job as a programmer in a "data center", but everyone has access to computers and programming has never been easier and the Jim Crow era is a distant memory to anyone who was alive at the time. At this point, the barrier is one of individual motivation and interest and nothing more. -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Sep 30 01:38PM -0700 On 9/30/2019 12:38 PM, Richard wrote: >> seen as not being inclusive enough. > The programming community is as inclusive as it needs to be in that > it doesn't exclude anyone. It language itself is 100% innocent. Anybody can choose to learn it. The language is fine with that. |
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Sep 30 01:42PM -0700 On 9/30/2019 1:38 PM, Chris M. Thomasson wrote: >> it doesn't exclude anyone. > It language itself is 100% innocent. Anybody can choose to learn it. The > language is fine with that. Actually, every language is innocent. How many of them say to a person thinking about using it: "Well, you must fill out the following forms before you can program using myself.". None. |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 06:04PM +0200 I just came across another issue: with my primary compiler there's an internal function called __chkstk that is called when I do alloca. This function is usually called when there is more than a page full of variables in the stack frame. That's because Windows recognizes the need for more stack space only through touching the next unmapped page down the stack. So why wasn't MS clever enough to design Windows so that it would also recognize noncontiguous accesses to the stack pages? Are these pages handled as if they were overcommitted? |
Keith Thompson <kst-u@mib.org>: Sep 30 11:13AM -0700 > Anyway, the difference with alloca () would be in a safer > management of possible collisions and running out of stack > space ? (compared with the plain declarations) ? No. Neither VLAs nor alloca() provide any protection against running out of space. Both have undefined behavior if there isn't enough memory for the allocation. alloca() (which, again, is non-standard) is a function that returns a pointer. It has no provision to return a null pointer on failure. Defining a VLA, on the other hand, creates an array object to which you can meaningfully apply sizeof. Also, space allocated by alloca() is deallocated on return from the enclosing function, not at the end of the enclosing block. Since alloca() is a function, it can be invoked in contexts where a VLA can't be created. If you call alloca() as part of a function argument, it's likely to corrupt the stack. VLAs were introduced in C99 and made optional in C11 and are not supported, except perhaps as an extension, in C++. -- Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst> Will write code for food. void Void(void) { Void(); } /* The recursive call of the void */ |
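A minimal C sketch of the contrast Keith describes, assuming a glibc-style <alloca.h> (alloca() is non-standard, so the header and the exact behavior are platform assumptions):

    #include <stdio.h>
    #include <alloca.h>

    void demo(size_t n)
    {
        char *p = alloca(n);         /* sizeof p is the pointer size, not n */
        printf("sizeof p   = %zu\n", sizeof p);

        for (int i = 0; i < 2; i++) {
            char vla[n];             /* sizeof vla is n, evaluated at run time */
            printf("sizeof vla = %zu\n", sizeof vla);
        }   /* the VLA's storage may be released at the end of each iteration;
               the alloca() block lives until demo() returns */
    }

    int main(void)
    {
        demo(64);
        return 0;
    }

The corrupting case he mentions is a (hypothetical) call like f(alloca(n), n), where the allocation is interleaved with the argument-passing sequence. |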
Keith Thompson <kst-u@mib.org>: Sep 30 11:14AM -0700 Anton Shepelev <anton.txt@g{oogle}mail.com> writes: [...] > At least, it is absent from TCC, but the specific compilers > are of no importance. I just used alloca() with tcc on my system. It generates a call to the implementation in the GNU C library. (With gcc, calls to alloca() are inlined.) -- Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst> Will write code for food. void Void(void) { Void(); } /* The recursive call of the void */ |
William Ahern <william@25thandClement.com>: Sep 30 11:01AM -0700 > to design Windows so that it would also recognize noncontiguous > accesses to the stack pages? Are these pages handled as if they > were overcommitted? The compiler does this to ensure the alloca'd region doesn't extend past the stack guard page. The kernel can handle noncontiguous stack access, but what happens if your block extends *past* the stack into the dynamic heap region? See https://blog.qualys.com/securitylabs/2017/06/19/the-stack-clash and https://lwn.net/Articles/725832/ for more info. |
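For the curious, a conceptual C sketch of what a stack-probe routine such as MSVC's __chkstk does when a frame (or alloca() block) grows by more than one page: touch every intervening page in order, so the guard page is always the next page hit. Illustration only; the real routine is hand-written assembly, and the 4 KB page size is an assumption here:

    #define PAGE_SIZE 4096u

    static void probe_stack(volatile char *sp, unsigned long bytes)
    {
        /* Walk down one page at a time, touching each page, so the OS
           extends the stack contiguously instead of seeing one access
           far below the guard page. */
        while (bytes > PAGE_SIZE) {
            sp -= PAGE_SIZE;
            (void)*sp;              /* touch the page */
            bytes -= PAGE_SIZE;
        }
    } |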
gazelle@shell.xmission.com (Kenny McCormack): Sep 30 04:26PM In article <e2720f0b-ca77-4e65-be10-8179a49f9cbc@googlegroups.com>, >> programming-language. But I use it for performance reasons. Can anyone name >> compilers that don't support alloca()? >I'm curious, can you give an example of where you'd use this? I think "man alloca" covers this well enough. -- The randomly chosen signature file that would have appeared here is more than 4 lines long. As such, it violates one or more Usenet RFCs. In order to remain in compliance with said RFCs, the actual sig can be found at the following URL: http://user.xmission.com/~gazelle/Sigs/Infallibility |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 30 06:51PM +0200 > Since when (if at any time) were variable-size automatic arrays supported? Since C99. |
Paavo Helde <myfirstname@osa.pri.ee>: Sep 30 10:26PM +0300 On 30.09.2019 19:04, Bonita Montero wrote: > the next unmapped page down the stack. So why wasn't MS clever enough > to design Windows so that it would also recognize noncontiguous > accesses to the stack pages? I guess this would mean setting up e.g. 64 guard pages instead of one or two, and most of those pages would remain untouched and unused (very many service threads use extremely little stack space). Also, this would not avoid the need for checks at alloca(); it would still need to check that the allocation fits in the total free space of the current stack. > Are these pages handled as if they were overcommitted? As a little experiment shows, in Windows the virtual address space for the stack is reserved, but not committed. Strictly speaking, you cannot over-commit something which is not committed. But I can see how this might feel similar. |
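A sketch of the little experiment Paavo describes (Windows-only; the probe offset assumes the default 1 MB stack reservation, and error handling is omitted):

    #include <windows.h>
    #include <stdio.h>

    static void report(const void *addr, const char *label)
    {
        MEMORY_BASIC_INFORMATION mbi;
        VirtualQuery(addr, &mbi, sizeof mbi);
        printf("%s: %s\n", label,
               mbi.State == MEM_COMMIT  ? "MEM_COMMIT"  :
               mbi.State == MEM_RESERVE ? "MEM_RESERVE" : "MEM_FREE");
    }

    int main(void)
    {
        char probe = 0;
        report(&probe, "page holding a live stack variable");  /* MEM_COMMIT */
        report((const char *)&probe - 512 * 1024,
               "512 KB further down the stack");    /* typically MEM_RESERVE */
        return 0;
    } |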
scott@slp53.sl.home (Scott Lurndal): Sep 30 07:38PM >of declaration of the variable-size array. >I can't figure out how the compiler would reference them in >standard efficient ways (use of registers).

    sub %rcx, %rsp   # rcx is the number of bytes being allocated
    mov %rsp, %rax   # rax now has the starting address of the alloca() region
    ....
    sub %rdx, %rsp   # rdx is the number of bytes being allocated
    mov %rsp, %rbx   # rbx now has the starting address of the second alloca() region
    ....
    ret              # all alloca storage is deallocated

(AT&T form.) The compiler can always spill rax or rbx to the stack under register pressure. |
scott@slp53.sl.home (Scott Lurndal): Sep 30 07:46PM >> to design Windows so that it would also recognize noncontiguous >> accesses to the stack pages? >I guess this would mean setting up e.g. 64 guard pages instead of one or A "guard page" is just a PTE with the valid bit reset. In most cases, that PTE is there regardless of whether that page is actually being used (or has been allocated). The OS can look at the faulting address vis-a-vis the stack region allocated at process creation and heuristically determine whether the stack should be extended to include that address. >two, and most of those pages would remain untouched and unused >(very many service threads use extremely little stack space). Virtual memory is a rare commodity on 32-bit split-virtual-address-space systems - it's the virtual address space that is consumed by guard pages; and since the stack often grows down towards the heap, allocating a sufficiently large VLA or alloca() block may corrupt the heap if the OS can't distinguish between the accesses (which is likely). |
Joe Pfeiffer <pfeiffer@cs.nmsu.edu>: Sep 30 02:28PM -0600 >> not specified. > ah! So you are suggesting a sort of internal reordering on the stack > (like: all known types first, then the var-sized)? I'm saying there is nothing (that I know of) prohibiting a compiler from doing that; without thinking too deeply about it, I suspect that if I were writing the compiler that would be my first approach (since, as you suggest, it would permit the other variables to be accessed through constant offsets). |
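A small C sketch of that layout idea (conceptual; the actual frame layout is entirely up to the compiler):

    void g(int n)
    {
        int  counter = 0;   /* fixed size: reachable at a constant offset
                               from the frame base */
        char buf[64];       /* fixed size: likewise a constant offset */
        int  vla[n];        /* variably sized: placed below the fixed-size
                               locals and reached through a runtime pointer */

        buf[0] = (char)n;
        vla[0] = counter;
        (void)buf[0];
        (void)vla[0];
    } |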
aminer68@gmail.com: Sep 30 11:41AM -0700 Hello, What about garbage collection? Read what this serious specialist, Chris Lattner, has said: "One thing that I don't think is debatable is that the heap compaction behavior of a GC (which is what provides the heap fragmentation win) is incredibly hostile for cache (because it cycles the entire memory space of the process) and performance predictability." "Not relying on GC enables Swift to be used in domains that don't want it - think boot loaders, kernels, real time systems like audio processing, etc." "GC also has several *huge* disadvantages that are usually glossed over: while it is true that modern GC's can provide high performance, they can only do that when they are granted *much* more memory than the process is actually using. Generally, unless you give the GC 3-4x more memory than is needed, you'll get thrashing and incredibly poor performance. Additionally, since the sweep pass touches almost all RAM in the process, they tend to be very power inefficient (leading to reduced battery life)." Read more here: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html Here is Chris Lattner's homepage: http://nondot.org/sabre/ And here is Chris Lattner's resume: http://nondot.org/sabre/Resume.html#Tesla This is why I have invented the following scalable algorithm and its implementation, which make Delphi and FreePascal more powerful: my scalable reference counting with efficient support for weak references, now at version 1.37. I have just updated it to version 1.37 by adding TAMInterfacedPersistent, a scalable reference-counted version of TInterfacedPersistent, and now I think I have made it complete and powerful. I have also just read the following web page: https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations But I don't agree with its author, because I think you have to understand the "spirit" of Delphi. Here is why: a component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means: in "most" cases, which is the most important thing to understand). In that scenario, reference counting is not used. If you passed a component as an interface reference, it would be very unfortunate if it were destroyed when the method returns. Therefore, reference counting in TComponent has been removed. To use scalable reference counting with Delphi and FreePascal, just replace TInterfacedObject with my TAMInterfacedObject, the scalable reference-counted version, and replace TInterfacedPersistent with my TAMInterfacedPersistent, the scalable reference-counted version; you will find both inside the AMInterfacedObject.pas file. To learn how to use weak references, look at the included demo called example.dpr and at the tutorial about weak references inside my zip file; to learn how to use delegation, look at the included demo called test_delegation.pas and at the tutorial about delegation inside my zip file.
I think my scalable reference counting with efficient support for weak references is stable and fast. It works on both Windows and Linux, and it scales on multicore and NUMA systems. You will not find it in C++ or Rust, and I don't think you will find it anywhere else. This invention of mine solves the problem of dangling pointers and the problem of memory leaks, and it is "scalable". Please also read the readme file inside the zip file, which I have just extended to help you understand more. You can download my new scalable reference counting with efficient support for weak references, version 1.37, from: https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Sep 30 11:05AM -0700 Hello... More about compile time and build time.. Look at what this says about Java: "Java Build Time Benchmarks I'm trying to get some benchmarks for builds and I'm coming up short via Google. Of course, build times will be super dependent on a million different things, but I'm having trouble finding anything comparable. Right now: We've got ~2 million lines of code and it takes about 2 hours for this portion to build (this excludes unit tests). What do your build times look like for similar sized projects and what did you do to make it that fast?" Read here to see it: https://www.reddit.com/r/java/comments/4jxs17/java_build_time_benchmarks/ So 2 million lines of Java code take about 2 hours to build. And how long do you think 2 million lines of code take with Delphi? Answer: just about 20 seconds. Here is the proof from Embarcadero; read and watch the video to be convinced about Delphi: https://community.idera.com/developer-tools/b/blog/posts/compiling-a-million-lines-of-code-with-delphi C++ also takes "much" more time to compile than Delphi. This is why I said previously the following: I think Delphi has a single-pass compiler and is very fast at compile time, and I think C++ and Java and C# have multi-pass compilers that are much slower than Delphi at compile time, but I think the executable code generated by Delphi is still fast, and faster than C#'s. And what about the advantages and disadvantages of single-pass and multi-pass compilers? From automata theory we get that any Turing machine that makes 2 (or more) passes over the tape can be replaced with an equivalent one that makes only 1 pass, with a more complicated state machine. At the theoretical level, they are the same. At a practical level, all modern compilers make only one pass over the source code. It is typically translated into an internal representation that the different phases analyze and update. During flow analysis, basic blocks are identified. Common subexpressions are found, precomputed, and their results reused. During loop analysis, invariant code is moved out of the loop. During code emission, registers are assigned, and peephole analysis and code reduction are applied. Thank you, Amine Moulay Ramdane. |
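As a small illustration of the optimization phases mentioned at the end, in the following C fragment the subexpression x * y is loop-invariant, so the compiler's loop analysis can compute it once and keep the result in a register across iterations (sketch only):

    void scale(double *a, int n, double x, double y)
    {
        for (int i = 0; i < n; i++)
            a[i] = a[i] * (x * y);   /* x * y is hoisted out of the loop
                                        by invariant-code motion */
    } |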