Muttley@dastardlyhq.com: May 11 08:13AM

On Wed, 10 May 2023 18:19:12 +0200
>> be allocating huge amounts of memory on the fly.
...
>There's no perfect memory allocator. The three mentioned allocators
>currently make the best compromise between blend and performance.

Well, that's your opinion. You may be right for some systems, but I suspect
that on common OSes with good memory management, like Windows, Linux, MacOS
etc., realloc() works quite nicely. If it were useless it wouldn't exist in
the first place.
Bonita Montero <Bonita.Montero@gmail.com>: May 11 01:43PM +0200

> Well, that's your opinion. ...

It's not my opinion; all modern and performant memory allocators are built
in nearly the same way.
Muttley@dastardlyhq.com: May 11 01:37PM

On Thu, 11 May 2023 13:43:49 +0200
>> Well, that's your opinion. ...
>It's not my opinion, but all modern and performant memory allocators
>are built nearly in the same way.

So what? They're all lowest common denominator. If you think allocating a
large number of unneeded pages and then managing the memory themselves,
potentially causing page-swapping slowdowns, instead of simply extending
the current reserved segment is a good idea, then there's a bridge for sale
with your name on it.

As I said, the only time it's a good idea is if you know beforehand that
you're going to need that memory very soon anyway, such as in a DBMS.
scott@slp53.sl.home (Scott Lurndal): May 11 04:06PM

>your name on it.
>As I said, the only time its a good idea is if you know beforehand you're going
>to need that memory very soon anyway such as in a DBMS.

I spent a fair amount of time in the Oracle RDBMS internals. They never
used realloc. Or malloc, for that matter. The shared pools were allocated
with shmat (and later mmap), and as soon as large pages (4MB on 32-bit,
2MB/1GB on 64-bit) were available, they used mmap to allocate them (due to
alignment constraints in the MMU/TLB, the large pages were statically
reserved when the system booted to ensure contiguity).
Bonita Montero <Bonita.Montero@gmail.com>: May 11 06:33PM +0200

> So what? They're all lowest common denominator.

It is currently the best compromise between memory consumption and
performance. This approach isn't feasible on systems with extremely little
RAM, but the corresponding allocators are then much slower because there's
more synchronization.

> managing the memory themselves potentially causing page swapping
> slowdowns instead of simply extending the current reserved segment
> is a good idea then there's a bridge for sale with your name on it.

When you obtain pages from the kernel under Windows, no DRAM is assigned to
them directly; only a corresponding amount is subtracted from the swap
accounting, so that if the page has to be swapped out later, the swap space
is available. The memory for each page reserved with MEM_COMMIT is only
actually allocated when the page is first accessed; if necessary, swapping
is carried out at that point. Under Linux the whole thing goes even further:
no swap is deducted at all, and on first access to the page either the
application is terminated if no memory could be allocated, or another
application is terminated so that the memory can be allocated. This is
called overcommitting, and Windows only does this for stacks.
scott@slp53.sl.home (Scott Lurndal): May 11 05:18PM

>is terminated if no memory could be allocated, or another application is
>terminated so that the memory can be allocated. This is called
>overcommitting and Windows only does this for stacks.

Linux only overcommits if the administrator configures it to overcommit. It
is very simple to disable overcommit at boot if necessary for the desired
workload. The idea of overcommit originated in AIX, IIRC.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |