Friday, December 16, 2016

Digest for comp.lang.c++@googlegroups.com - 8 updates in 5 topics

Juha Nieminen <nospam@thanks.invalid>: Dec 16 11:02AM

> By asserting that there is no such thing as negative zero and that
> division by zero is undefined? My assertions are correct mate.
 
So maybe show us how the asymptotic behavior of quicksort on linked
lists differs from that on random access arrays.
 
The question here is not what the asymptotic behavior of quicksort
is (because that's a bit complicated), but whether it's different
for linked lists.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Dec 16 11:00PM

On 16/12/2016 11:02, Juha Nieminen wrote:
 
> The question here is not what the asymptotic behavior of quicksort
> is (because that's a bit complicated), but whether it's different
> for linked lists.
 
Due to poor pivot choice (a list offers no cheap random or
median-of-three pivot) worst case performance will manifest more often,
and that worst case is quadratic complexity.
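
Something like this untested sketch (my own illustration, with the head
as pivot - the choice that goes quadratic on sorted input) shows why:

  struct Node {
      int value;
      Node* next;
  };

  // Quicksort for a singly linked list. With only sequential access
  // the head is the natural pivot, and that choice degrades to O(n^2)
  // comparisons when the input is already (nearly) sorted.
  Node* quicksort(Node* head) {
      if (!head || !head->next) return head;
      Node* pivot = head;
      Node* less = nullptr;
      Node* greater = nullptr;
      for (Node* p = head->next; p; ) {   // partition around the pivot
          Node* next = p->next;
          if (p->value < pivot->value) { p->next = less; less = p; }
          else { p->next = greater; greater = p; }
          p = next;
      }
      less = quicksort(less);
      greater = quicksort(greater);
      pivot->next = greater;
      if (!less) return pivot;
      Node* tail = less;                  // O(n) append, kept simple
      while (tail->next) tail = tail->next;
      tail->next = pivot;
      return less;
  }

The average case is still O(n log n); only the quality of the pivot
differs from the array version.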
 
/Flibble
"A. Bolmarcich" <aggedor@earl-grey.cloud9.net>: Dec 16 06:12PM

> of AVL trees in Knuth TAOCP volume 3, Sorting and Searching.
> Can you provide a reference for AVL trees that describes the
> insertion process in enough detail so I can compare them?
 
By usual description I mean one like the first one that I learned.
That implementation was in the book "Algorithms + Data Structures =
Programs" by Niklaus Wirth. Page 218 contains:
 
"The process of node insertion consists essentially of the following
three consecutive parts:
 
1. Follow the search path until it is verified that the key is not
already in the tree.
2. Insert the new node and determine the resulting balance factor.
3. Retreat along the search path and check the balance factor at each
node."
 
Wirth gives an implementation in Pascal using a recursive procedure.
One of the parameters of that procedure is a var parameter of type
boolean that indicates whether the height of the subtree along the
search path has been changed. That parameter is set to true at the
bottom of the recursion if a new node has been added to the tree.
 
As the recursion unwinds, if the value of that parameter is true,
then the balance field (height of left subtree minus height of right
subtree) in the node at that level of the recursion is modified.
Depending on the previous value of the balance field, the value of
that parameter may be set to false.
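
In C++ the scheme looks roughly like this (my untested transliteration
of the idea, not Wirth's Pascal; balance is height of left subtree
minus height of right subtree, as above):

  struct Node {
      int key;
      int balance;              // height(left) - height(right)
      Node* left;
      Node* right;
      explicit Node(int k)
          : key(k), balance(0), left(nullptr), right(nullptr) {}
  };

  // The left subtree of *p has become two levels taller: rotate.
  void rebalanceLeft(Node*& p) {
      Node* l = p->left;
      if (l->balance == 1) {             // LL case: single rotation
          p->left = l->right; l->right = p;
          p->balance = 0; l->balance = 0;
          p = l;
      } else {                           // LR case: double rotation
          Node* lr = l->right;
          l->right = lr->left; lr->left = l;
          p->left = lr->right; lr->right = p;
          p->balance = (lr->balance == 1) ? -1 : 0;
          l->balance = (lr->balance == -1) ? 1 : 0;
          lr->balance = 0;
          p = lr;
      }
  }

  // Mirror image, for a right subtree that has grown too tall.
  void rebalanceRight(Node*& p) {
      Node* r = p->right;
      if (r->balance == -1) {            // RR case: single rotation
          p->right = r->left; r->left = p;
          p->balance = 0; r->balance = 0;
          p = r;
      } else {                           // RL case: double rotation
          Node* rl = r->left;
          r->left = rl->right; rl->right = r;
          p->right = rl->left; rl->left = p;
          p->balance = (rl->balance == -1) ? 1 : 0;
          r->balance = (rl->balance == 1) ? -1 : 0;
          rl->balance = 0;
          p = rl;
      }
  }

  // 'grown' plays the role of Wirth's var boolean: set at the bottom
  // of the recursion, checked and possibly cleared while retreating.
  void insert(Node*& p, int key, bool& grown) {
      if (!p) {                          // parts 1 and 2: insert here
          p = new Node(key);
          grown = true;
      } else if (key < p->key) {
          insert(p->left, key, grown);
          if (grown) switch (p->balance) {   // part 3: retreating
              case -1: p->balance = 0; grown = false; break;
              case  0: p->balance = 1; break;
              case  1: rebalanceLeft(p); grown = false; break;
          }
      } else if (key > p->key) {
          insert(p->right, key, grown);
          if (grown) switch (p->balance) {
              case  1: p->balance = 0; grown = false; break;
              case  0: p->balance = -1; break;
              case -1: rebalanceRight(p); grown = false; break;
          }
      } else {
          grown = false;                 // key already in the tree
      }
  }

A call looks like: bool grown = false; insert(root, 42, grown);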
 
I am not familiar with the implementation by Knuth that does not
adjust the balance while retreating along the search path.
Tim Rentsch <txr@alumni.caltech.edu>: Dec 16 12:00PM -0800

> subtree) in the node at that level of the recursion is modified.
> Depending on the previous value of the balance field, the value of
> that parameter may be set to false.
 
I don't have a copy of the Wirth book handy just now, but based
on your description this algorithm should give the same tree
structure as the Knuth algorithm. The Knuth algorithm is iterative
rather than recursive, but the two end up making the same changes to
the same nodes of the tree.
David Brown <david.brown@hesbynett.no>: Dec 16 10:16AM +0100

On 15/12/16 22:29, bitrex wrote:
> with no problems if they're of type MyObject. But I don't think using
> "raw" new and delete as with my_object1 would be a good idea over and
> over again from a fragmentation standpoint...
 
The issue here is memory fragmentation, /not/ whether a chip happens to
have an MMU or not. An MMU can help combat memory fragmentation if you
have a lot of memory - that's why fragmentation is not a big issue
on PCs. But an MMU alone will not fix anything.
 
You combat memory fragmentation in two ways. One is
allocation/deallocation patterns. On many small embedded systems, you
only ever allocate - you never deallocate memory (typically running
until someone pulls out the plug). Alternatively, you try to
de-allocate memory in at least roughly the reverse order of allocation.
The other method is smarter allocators, such as pool allocators.
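
A pool allocator can be as small as this untested sketch (names are
mine): all blocks share one size, so freeing and re-allocating in any
order cannot fragment the arena:

  #include <cstddef>

  template <std::size_t BlockSize, std::size_t BlockCount>
  class Pool {
      union alignas(std::max_align_t) Block {
          Block* next;                       // while on the free list
          unsigned char storage[BlockSize];  // while handed out
      };
      Block blocks_[BlockCount];
      Block* free_ = nullptr;
  public:
      Pool() {
          for (std::size_t i = 0; i < BlockCount; ++i) {
              blocks_[i].next = free_;       // thread every block onto
              free_ = &blocks_[i];           // the free list up front
          }
      }
      void* allocate() {
          if (!free_) return nullptr;        // exhausted - no heap fallback
          Block* b = free_;
          free_ = b->next;
          return b->storage;
      }
      void deallocate(void* p) {
          Block* b = static_cast<Block*>(p);
          b->next = free_;
          free_ = b;
      }
  };

An object like the MyObject mentioned above would then be constructed
with placement new into allocate()'s result and destroyed explicitly
before deallocate() - no general-purpose heap traffic at all.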
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Dec 16 01:57AM +0100

On 16.12.2016 00:49, Stefan Ram wrote:
> between the calls somewhere to recognize that it now has
> done all possible permutations and the next permutation
> would return to the initial state before the first call?
 
a and b are iterators; as a pair they identify a range of items.
 
You can view each permutation as a number specification.
 
When you pass it the maximum number, e.g. think of "9876", the function
returns false, and produces the permutation that corresponds to the
minimum number for these items, "6789".
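
For example (untested):

  #include <algorithm>
  #include <iostream>
  #include <string>

  int main() {
      std::string s = "9876";    // the maximal arrangement
      bool more = std::next_permutation( s.begin(), s.end() );
      std::cout << std::boolalpha << more << ' ' << s << '\n';
      // prints "false 6789": wrapped around to the minimal arrangement
  }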
 
 
Cheers & hth.,
 
- Alf
Gareth Owen <gwowen@gmail.com>: Dec 16 06:58AM

> between the calls somewhere to recognize that it now has
> done all possible permutations and the next permutation
> would return to the initial state before the first call?
 
It doesn't return false on coming back to the permutation you started
with, but after the lexicographically last (greatest) permutation. So
if you want all the permutations, it's best to start with the items
sorted.
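
That is, the usual idiom for visiting every permutation exactly once
is something like:

  #include <algorithm>
  #include <iostream>
  #include <string>

  int main() {
      std::string s = "bca";
      std::sort( s.begin(), s.end() );   // begin at the least permutation
      do {
          std::cout << s << '\n';        // all 3! = 6 arrangements
      } while ( std::next_permutation( s.begin(), s.end() ) );
  }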
ram@zedat.fu-berlin.de (Stefan Ram): Dec 15 11:49PM

One can repeatedly call
 
::std::next_permutation( a, b )
 
and eventually it will return »false« when there is no more
permutation possible.
 
But how can it know this? Doesn't it have to store a state
between the calls somewhere to recognize that it now has
done all possible permutations and the next permutation
would return to the initial state before the first call?
