Wednesday, October 2, 2019

Digest for comp.lang.c++@googlegroups.com - 5 updates in 2 topics

Anton Shepelev <anton.txt@gmail.com>: Oct 03 01:47AM +0300

Chris M. Thomasson:
 
> Will have more time to get back to you later on tonight.
> However, take a deep look at reaps:
> https://people.cs.umass.edu/~emery/pubs/berger-oopsla2002.pdf
 
Their meaning of heap refers to the group's previous
research (Heap Layers) rather than to the general usage of
the term w.r.t. computer memory organisation. Heap layers
seem to comprise a sort of hierarchical and flexibly
composable memory manager where each layer allocates and
deallocates memory from the layer on top (!) of it,
implemented as a mix-in "superclass", but I don't
understand why mix-ins in particular and OOP in general
should be essential to this architecture. The rationale
that they provide in the two articles seems equally valid
for conventional dependency inversion. Remember at least
the parser-combinators thread here (comp.lang.c) to see
what level of composability is possible in C.
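
As I understand it, the composition is roughly the
following (a minimal sketch in the spirit of the paper,
not its actual code; all names are mine):

#include <cstddef>
#include <cstdlib>

// Bottom layer: takes memory straight from malloc().
struct MallocHeap
{   void *malloc (std::size_t sz) { return std::malloc(sz); }
    void  free   (void *p)        { std::free(p); }
};

// Mix-in layer: recycles freed blocks of one fixed size on
// a freelist, deferring to the SuperHeap it inherits from.
template <std::size_t Size, class SuperHeap>
struct FreelistHeap : SuperHeap
{   static_assert(Size >= sizeof(void*), "block must hold a link");

    void *malloc ()
    {   if (head)                       // reuse a recycled block
        {   Node *n = head;
            head    = n->next;
            return n;
        }
        return SuperHeap::malloc(Size); // else ask the layer "on top"
    }
    void free (void *p)                 // recycle instead of releasing
    {   Node *n = static_cast<Node*>(p);
        n->next = head;
        head    = n;
    }
private:
    struct Node { Node *next; };
    Node *head = nullptr;
};

using SmallHeap = FreelistHeap<64, MallocHeap>;  // the composition

Nothing here seems to require inheritance: passing the
lower layer as a constructor argument, or as a struct of
function pointers in C, would compose just as well, which
is the dependency-inversion point above.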
 
This 12-page article devotes only about a page (including
diagrams) to the explanation of reaps -- allegedly its
central subject and the only original contribution
described. When an object in the region part of the reap
dies, it goes to an associated heap, which is used for
subsequent allocations until exhausted, whereupon the reap
continues to grow. How exactly this works with objects of
variable size is unclear, and the only example does not
include the case of resurrections (allocations from the
heap part).
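
If I read it right, the allocation policy amounts to
something like this (a toy sketch with fixed-size blocks,
since the variable-size case is exactly what is unclear;
all names are mine):

#include <cstddef>
#include <cstdlib>

class ToyReap
{
    static const std::size_t BlockSize = 64;
    static const std::size_t ChunkSize = 4096;
    struct Node { Node *next; };

    char *bump  = nullptr;   // next free byte in the region part
    char *end   = nullptr;   // end of the current region chunk
    Node *freed = nullptr;   // the associated heap part

public:
    void *alloc ()
    {   if (freed)                      // "resurrection" from the heap part
        {   Node *n = freed;
            freed   = n->next;
            return n;
        }
        if (bump == nullptr
            || end - bump < static_cast<std::ptrdiff_t>(BlockSize))
        {   // heap part empty and region exhausted: grow the region
            char *c = static_cast<char*>(std::malloc(ChunkSize));
            if (c == nullptr) return nullptr;
            bump = c;
            end  = c + ChunkSize;
        }
        void *p = bump;
        bump   += BlockSize;
        return p;
    }
    void free (void *p)                 // a dead object feeds the heap part
    {   Node *n = static_cast<Node*>(p);
        n->next = freed;
        freed   = n;
    }
    // NB: earlier chunks are abandoned here; a real reap
    // frees its whole region at once, which this sketch omits.
};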
 
In fig. 3b, which fails to explain the meaning of arrows,
`sbrk' is connected only with RegionHeap. Does it mean that
LeaHeap is composed directly of the removed chunks in the
region area? If so, whence does it take the memory to store
its metadata, which can hardly be of constant or upper-
bounded size? In general, LeaHeap seems to operate on the
"holes" in the region in order to avoid growing the region
unless absolutely unavoidable. I wonder, therefore, whether
this allocator does not, after an initial period of region
growth, degrade to LeaHeap performance in the case of an
approximately balanced alloc/free sequence, because
ClearOptimizedHeap, by definition, will then work with
LeaHeap most of the time.
 
My personal and, as usual, naive thought was to manage a
collection of custom relocatable pointers:
 
struct relpointer
{ void* _; };
 
or indirect pointers (void**), and to defragment the region
from time to time. The user will use these pointers and
the memory allocator will make sure they survive the
rearrangement. In order to avoid double indirection at
every pointer access, a lock mechanism can be introduced to
ensure that a pointer is not relocated while the lock is
held. Locks should guard performance-critical sections
with intensive pointer access.
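
In code, the idea might look as follows (a sketch only;
all names are mine):

#include <cstddef>
#include <cstring>
#include <vector>

// The user holds an index into a pointer table instead of
// a raw address; the allocator may move a block whenever
// its entry is not locked.
struct RelPointer
{   void *addr;    // current location of the object
    int   locks;   // >0 inside a performance-critical section
};

class MovingRegion
{   std::vector<RelPointer> table;
public:
    std::size_t adopt (void *p)    // register an object, get a handle
    {   table.push_back(RelPointer{p, 0});
        return table.size() - 1;
    }
    void *lock   (std::size_t h) { ++table[h].locks; return table[h].addr; }
    void  unlock (std::size_t h) { --table[h].locks; }

    // called by the defragmenter; fails if the object is pinned
    bool relocate (std::size_t h, void *dst, std::size_t size)
    {   if (table[h].locks > 0) return false;
        std::memmove(dst, table[h].addr, size);
        table[h].addr = dst;
        return true;
    }
};

Outside a lock()/unlock() pair every access pays the double
indirection through the table; inside it, the raw address
is stable and can be cached in a local variable.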
 
In order to avoid serious slow-downs due to this occasional
tidying-up, one could do it no more than one "hole" at a
time using a fancy algorithm of step-wise defragmentation,
where each step is a fast operation that decreases
fragmentation in one of several ways (step 3 is sketched
below):
 
1. by moving a hole up the stack, so that collapsing it
requires the relocation of a smaller memory block from
the top of the region-stack,
 
2. by moving a hole up the stack to a place adjacent to
another hole,
 
3. by collapsing a hole (located sufficiently close to
the top of the stack) through shifting all the memory in
front of it to the left by the size of the hole,
 
4. by plugging a hole with a block or blocks of the same
total size somewhere up the stack,
 
5. by allocating the space of a hole for a new object of
the same size.
 
Existing algorithms for efficient real-time disk
defragmentation may serve as an inspiration.
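
Step 3, for instance, might look like this (a standalone
sketch, with the pointer fix-up left to the relocatable-
pointer table above; all names are mine):

#include <cstddef>
#include <cstring>

// Collapse a hole at offset `hole' of size `hole_size' by
// shifting the live bytes above it down; returns the new
// top of the region-stack.  Every relocatable pointer into
// the moved range must afterwards be decreased by
// `hole_size'.
static std::size_t
collapse_hole (char *region, std::size_t top,
               std::size_t hole, std::size_t hole_size)
{
    std::size_t live = top - (hole + hole_size);  // bytes above the hole
    std::memmove(region + hole, region + hole + hole_size, live);
    return top - hole_size;
}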
 
--
() ascii ribbon campaign -- against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]
scott@slp53.sl.home (Scott Lurndal): Oct 02 08:16PM

>it's generating code for. There might be an issue with the compiler not
>having that information soon enough, but it could certainly get the
>information if it's worthwhile.
 
Each ring (exception level) in ARM64 can be set to run big-
or little-endian. In ARMv7, the application itself can
change the endianness dynamically (SETEND instruction).
Linux supports reading the endianness from the ELF header
on ARM64 and will configure the process state accordingly.
 
So, absent some indication to the compiler on the command line
which endianness is desired, there doesn't seem to be a way for the
compiler to figure it out itself.
 
There is absolutely nothing wrong with using the
pre-processor and implementation-defined (or specified by
the programmer with -D) macros to determine which
endianness is being used.
 
We write a lot of code that is required to run in both big- and
little-endian environments, and one of the larger issues with
portability is related to bitfields. All of our structures with
bitfields have declarations similar to:
 
#include <stdint.h>
#include <endian.h>  /* glibc: __BYTE_ORDER, __BIG_ENDIAN */

struct INTCTLR_CMD_CLEAR_s {
#if __BYTE_ORDER == __BIG_ENDIAN
uint64_t dev_id : 32; /**< [ 63: 32] Interrupt device ID. */
uint64_t reserved_8_31 : 24; /**< [ 31: 8] Reserved. */
uint64_t cmd_type : 8; /**< [ 7: 0] Command type. Indicates GITS_CMD_TYPE_E::CMD_CLEAR. */
#else
uint64_t cmd_type : 8; /**< [ 7: 0] Command type. Indicates GITS_CMD_TYPE_E::CMD_CLEAR. */
uint64_t reserved_8_31 : 24; /**< [ 31: 8] Reserved. */
uint64_t dev_id : 32; /**< [ 63: 32] Interrupt device ID. */
#endif
};