Tuesday, August 4, 2015

Digest for comp.programming.threads@googlegroups.com - 6 updates in 4 topics

Ramine <ramine@1.1>: Aug 03 05:24PM -0700

Read again, I have corrected it...
 
 
Hello,
 
 
We have to be smart...
 
 
I have implemented three parallel computing projects; here they are:
 
1- Scalable Parallel HashList
 
Read more about it here:
 
https://sites.google.com/site/aminer68/scalable-parallel-hashlist
 
 
2- Scalable Parallel Varfiler
 
Read more about it here:
 
https://sites.google.com/site/aminer68/scalable-parallel-varfiler
 
 
3- Concurrent SkipList
 
Read more about it here:
 
https://sites.google.com/site/aminer68/concurrent-skiplist
 
 
 
The first two are scalable, but the Concurrent SkipList uses
a Scalable Distributed Reader-Writer Mutex, so it is not as scalable
as the first two. So what can we do about it?
 
Here is a solution:
 
When the keys are strings, use my Scalable Parallel HashList
or my Scalable Parallel Varfiler in combination with my Concurrent
SkipList: when you insert, insert into both of them; when you delete,
delete from both of them; when you search for a key, search for it in
my Scalable Parallel HashList or my Scalable Parallel Varfiler; and
when you want the sorted list, you can get it easily from the Concurrent
SkipList above. This way you get both the characteristics of my scalable
HashList and the characteristics of my Concurrent SkipList.
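 
Here is a rough sketch of this combination in Java, using the standard
ConcurrentHashMap and ConcurrentSkipListMap only as illustrative stand-ins
for the Scalable Parallel HashList/Varfiler and the Concurrent SkipList
(the CombinedStringIndex class and its method names are hypothetical, and
the two updates are not made atomic here):
 
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;
 
public class CombinedStringIndex<V> {
    // point lookups go to the hash structure
    private final ConcurrentHashMap<String, V> hash = new ConcurrentHashMap<>();
    // sorted traversal comes from the skip list
    private final ConcurrentSkipListMap<String, V> sorted = new ConcurrentSkipListMap<>();
 
    public void insert(String key, V value) {
        hash.put(key, value);    // insert into both of them
        sorted.put(key, value);
    }
 
    public void delete(String key) {
        hash.remove(key);        // delete from both of them
        sorted.remove(key);
    }
 
    public V search(String key) {
        return hash.get(key);    // search only the hash structure
    }
 
    public NavigableMap<String, V> sortedView() {
        return sorted;           // the sorted list comes from the skip list
    }
}
 
So a search touches only the hash structure, and a sorted traversal
touches only the skip list.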
 
But when the keys are numbers, you can simply use my Concurrent SkipList
above and not worry about scalability, because the compare function for
two numbers in a Parallel Hashtable or a Parallel SkipList takes very few
CPU cycles, so it will not get good scalability anyway.
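 
For illustration, a minimal sketch of the numeric case, again with Java's
ConcurrentSkipListMap standing in for the Concurrent SkipList (the
NumericIndexDemo class is hypothetical):
 
import java.util.concurrent.ConcurrentSkipListMap;
 
public class NumericIndexDemo {
    public static void main(String[] args) {
        // the skip list alone handles both point lookups and sorted access
        ConcurrentSkipListMap<Long, String> byNumber = new ConcurrentSkipListMap<>();
        byNumber.put(42L, "forty-two");
        System.out.println(byNumber.get(42L));   // point lookup
        System.out.println(byNumber.firstKey()); // smallest key, i.e. sorted access
    }
}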
 
 
Thank you,
Amine Moulay Ramdane.
bleachbot <bleachbot@httrack.com>: Aug 03 09:52PM +0200

bleachbot <bleachbot@httrack.com>: Aug 03 11:22PM +0200

bleachbot <bleachbot@httrack.com>: Aug 03 11:24PM +0200

Ramine <ramine@1.1>: Aug 03 05:22PM -0700

Ramine <ramine@1.1>: Aug 03 03:53PM -0700

Hello,
 
 
User-Level Implementations of Read-Copy Update
 
Read more here:
 
https://www.efficios.com/pub/rcu/urcu-main.pdf
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
