Friday, March 13, 2015

Digest for comp.programming.threads@googlegroups.com - 12 updates in 6 topics

bleachbot <bleachbot@httrack.com>: Mar 12 06:19PM +0100

bleachbot <bleachbot@httrack.com>: Mar 12 06:44PM +0100

bleachbot <bleachbot@httrack.com>: Mar 12 06:53PM +0100

bleachbot <bleachbot@httrack.com>: Mar 12 06:55PM +0100

bleachbot <bleachbot@httrack.com>: Mar 12 10:38PM +0100

bleachbot <bleachbot@httrack.com>: Mar 12 11:05PM +0100

Ramine <ramine@1.1>: Mar 12 06:06AM -0700

Hello,
 
 
In this post I will speak about an interesting subject..
 
 
It's about the trans-humanist agenda...
 
I am a trans-humanist who believes that multi-core and many-core
computers and artificial intelligence will, in some decades from now,
be able to improve humans and life by much... and since I am a
trans-humanist, and when you are a trans-humanist you must think
"differently", you have to think in a more optimal way; that means you
have to make your software good quality software that scores high on
different criteria, especially on the criterion of "scalability"...
this was my goal in my previous post titled "About concurrent database
systems": I tried in that post to show that I have implemented my
StringTree with "scalability" in mind, and since I designed and
implemented it with good quality on the criterion of scalability, that
has permitted me to show you that it can serve as a base to implement
a full concurrent database system... I have also shown you that my new
scalable SeqlockX scores well on the criterion of scalability on
multi-cores; this is how I have designed my other projects... and this
is how a trans-humanist must fulfil the trans-humanist agenda: by
thinking in a more optimal manner, taking into account one of the most
important criteria, which is "scalability".
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Mar 12 05:39AM -0700

Hello,
 
 
I was thinking more about concurrency and database systems.
As you know, B+Trees can be used to implement a disk-based database,
and AVL trees can be used to implement a concurrent in-memory database,
but I have thought more about this, and I think that my StringTree, my
Parallel archiver and a concurrent AVL tree can be used together to
implement a powerful concurrent database system that supports SQL
statements, that is fast, and that keeps the indexes in memory and all
the data in a disk-based file using my Parallel archiver, so I think
this design is cool and fast...
 
What is my proof of concept? If you have a database table with some
entities, you can store the table as a directory name inside my
StringTree and the keys and entities as file names inside StringTree,
and you can store the types of the entities in another directory named
after the table. After that you can implement the SQL statements
insert(), update(), delete() and select() easily... the select()
statement of SQL will be implemented using an object and its methods
like "where()", and after that you can store the data inside my
Parallel archiver, so every time you call the insert() method, it will
write the key as an id inside the StringTree directory that represents
the table. It is the same for the other SQL statements, and when you
want to create indexes with an SQL statement, the indexes will be
created in memory using a concurrent AVL tree... and for fault
tolerance you will use my Parallel archiver, because my Parallel
archiver supports fault tolerance... my Parallel archiver also supports
compression and encryption, so your database system will support
encryption and compression. All in all, you can really use my
StringTree, my Parallel archiver and a concurrent AVL tree to implement
a powerful concurrent database system that supports SQL statements!
And making your database system client-server using threads is the
easy part...
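 
To make this layout more concrete, here is a minimal sketch in Python
(not my actual StringTree or Parallel archiver code): plain dicts stand
in for the StringTree directories, the on-disk archive and the
in-memory concurrent AVL-tree index, and the Table and Query names and
the where() chaining are only illustrative.
 
class Table:
    def __init__(self, name, columns):
        self.name = name        # would be a directory name inside StringTree
        self.columns = columns  # entity types, kept in another directory named after the table
        self.index = {}         # stands in for the in-memory concurrent AVL-tree index
        self.rows = {}          # stands in for the rows archived in the disk-based file
 
    def insert(self, key, row):
        # insert() writes the key as an id under the table's directory
        # and archives the row data; here both are plain dict stores.
        self.rows[key] = dict(row)
        self.index[key] = key
 
    def delete(self, key):
        self.rows.pop(key, None)
        self.index.pop(key, None)
 
    def select(self):
        # select() returns an object whose where() method filters the rows.
        return Query(self)
 
class Query:
    def __init__(self, table):
        self.table = table
        self.predicates = []
 
    def where(self, predicate):
        self.predicates.append(predicate)
        return self
 
    def run(self):
        return [row for row in self.table.rows.values()
                if all(p(row) for p in self.predicates)]
 
For example, persons = Table("persons", {"name": "string", "age":
"integer"}), then persons.insert(1, {"name": "Alice", "age": 30}), then
persons.select().where(lambda r: r["age"] > 20).run() gives the flavour
of the insert() and select()/where() calls described above; the real
system would do the writes through StringTree and the Parallel archiver
and keep the index in a concurrent AVL tree.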
 
 
You can download my StringTree and my parallel archiver from:
 
https://sites.google.com/site/aminer68/
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Mar 12 01:56AM -0700

Hello,
 
 
My Iterator is not actually using a Binary search, it is using
something that looks like a Binary search, that gives the same result
as a binary search and that works on concurrent AVL trees and
concurrent Red-Black trees, and my Iterator is better than the Fail
Safe Iterator.
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Mar 12 01:45AM -0700

Hello,
 
I was thinking about Iterators and concurrency, and about the
following kinds of Iterators:
 
Fail Safe Iterators
Fail Fast Iterators
 
I think that Fail Safe takes too much memory, and Fail Fast is not
good, so what I was thinking is this: since we are using concurrency,
the iterator can use a "Binary Search" and return the position of the
element, and if it doesn't find the element it finds the position of
the element that is lesser than it. If that lesser element is not the
first element and not the last element of the datastructure, the
iterator returns the next element after it, that is, the successor of
the element we didn't find. If the lesser element is the first element,
it tests it: if it is lesser it returns the next element, and if it is
greater it returns that element itself. And if the element that would
be returned is past the last element of the datastructure, it returns
"no element found"... This way, in the presence of concurrency, the
iterator uses a Binary search that returns a view of the datastructure
that looks like what a Fail Safe Iterator gives, but my way uses less
memory than Fail Safe Iterators.
 
 
This is how I will implement my Iterators for my concurrent AVL tree
and my concurrent Red-Black Tree.. a small sketch of the idea follows
below.
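 
To make the idea concrete, here is a minimal Python sketch under these
assumptions: the standard bisect module and a plain sorted list stand
in for a concurrent AVL or Red-Black tree, and the SuccessorIterator
name is only illustrative. Instead of copying the datastructure like a
Fail Safe Iterator, it re-locates its position on every step with a
binary search for the successor of the last key it returned, so
concurrent inserts and deletes simply show up in the view and no
snapshot is kept.
 
import bisect
 
class SuccessorIterator:
    # Iterates a sorted key sequence by binary-searching, on each step,
    # for the first key strictly greater than the last key returned.
    def __init__(self, sorted_keys):
        self.keys = sorted_keys   # stand-in for a concurrent AVL / Red-Black tree
        self.last = None
 
    def next(self):
        if self.last is None:
            pos = 0
        else:
            # Binary search for the successor of the last key seen; keys
            # inserted or deleted in the meantime are reflected in the
            # view, as with a Fail Safe Iterator, but without a copy.
            pos = bisect.bisect_right(self.keys, self.last)
        if pos >= len(self.keys):
            return None           # "no element found"
        self.last = self.keys[pos]
        return self.last
 
For example, if the keys are [1, 3, 5] and 3 is removed by another
operation after next() has returned 1, the following next() call
binary-searches the current state and returns 5, which is the kind of
view a Fail Safe Iterator would give, but without keeping an extra copy
of the datastructure.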
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Mar 12 01:20AM -0700

Hello,
 
 
I want to speak today about a datastructure called the Skiplist.
I took a look at its time complexity, and what I noticed is that it
uses a randomLevel() function that returns how many levels of pointers
an element will have, but look with me at this function, here written
in Python:
 
 
import random
 
def randomLevel(p, MaxLevel):
    lvl = 1
    # random.random() returns a random value in [0...1)
    while random.random() < p and lvl < MaxLevel:
        lvl = lvl + 1
    return lvl
 
 
As you have noticed, the probability of reaching a level gets smaller
and smaller as the levels get higher, but the levels are still chosen
randomly, so even at a probability of 1/16 the datastructure may
degenerate towards its worst-case performance, far from the log(n)
time complexity, and if it degenerates while you are constructing a
bigger in-memory database with your skiplist, your database will get
too slow, and this is not good... so this is why I don't like
skiplists.
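 
For reference, a quick way to see how the levels come out (standard
skiplist analysis, not tied to any particular implementation): the
probability that an element reaches level k or more is p^(k-1), so the
expected level of an element is 1/(1-p), for example about 2 when
p = 1/2 and about 1.07 when p = 1/16. A small check of the Python
version of randomLevel() above:
 
import statistics
 
# Sample the level distribution; the mean should be close to 1/(1-p)
# (truncated slightly by MaxLevel).
levels = [randomLevel(0.5, 16) for _ in range(100000)]
print(statistics.mean(levels))   # roughly 2.0 for p = 0.5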
 
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
