Saturday, March 21, 2015

Digest for comp.programming.threads@googlegroups.com - 5 updates in 3 topics

bleachbot <bleachbot@httrack.com>: Mar 20 06:53PM +0100

bleachbot <bleachbot@httrack.com>: Mar 17 10:03PM +0100

jgdakngupta@gmail.com: Mar 20 01:23PM -0700

HI Friends,
 
Below is the JOB Description :
 
Role: Core Java Multithreading Developer (with Capital Markets exp.)
(Needs Locals Only)
Location: NYC, NY
 
JD:
Strong Core Java (worked with the concurrency package, multithreading, the collections framework; strong OOP concepts)
Qualifications
#CoreJava #Multithreading #Capital #Market #FTE #Banking
Very strong in Spring, including IoC and AOP; has worked with Java transactions
Unix: Should have a good handle on Unix commands (e.g., commands to monitor processes)
Database: Should be able to write complex queries, analyse query plans, tune SQL, handle indexing, etc.
Domain experience: Should have capital markets experience (preferably futures). Please also share profiles with the above technical skills but without IB experience, as we have multiple requirements and some of them do not require IB experience.
 
 
Thanks,
 
 
Regards,
 
Akanksha Gupta| Direct - (727) 266-3743| Plaxonic Inc.
akanksha.gupta@plaxonic.com | www.plaxonic.com

 
Note: Please allow me to reiterate that I chose to contact you either because your resume had been posted to one of the internet job sites to which we subscribe, or because you had previously submitted your resume to Plaxonic Inc. I assumed that you are either looking for a new employment opportunity or interested in investigating the current job market.
If you are not currently seeking employment, or if you would prefer I contact you at some later date, please indicate your date of availability so that I may honor your request. In any event, I respectfully recommend you continue to avail yourself of the employment options and job market information we provide with our e-mail notices.
Thanks again.
Kaz Kylheku <kaz@kylheku.com>: Mar 20 08:28PM

> HI Friends,
 
Yo!
 
> Below is the JOB Description :
 
Gee, *NOW* you tell me! I read "HI Friends" and then spent 45 minutes looking
for the description ABOVE that line, including in the headers.
 
Sheesh ...
Ramine <ramine@1.1>: Mar 20 02:01PM -0700

Hello,
 
 
I have corrected some typos, please read again...
 
 
As you have seen me explaining in my previous post, I told you a very important thing, but we have to be smarter than that to be able to see the overall picture clearly. As you have noticed, researchers are inventing transactional memory, but transactional memory and my SeqlockX are optimistic mechanisms. This means that you cannot always use transactional memory in a high-level way, for example with AVL trees, red-black trees, and skiplists: the writers can modify the pointers, and this can raise exceptions inside the readers and inside the writers, so you cannot apply it from a high level around insert(), search(), and delete(), because you have to respect the logic of the sequential algorithms. It's the same with my SeqlockX: in this situation you have to use them in a finer-grained manner from inside the insert(), delete(), and search() of the algorithms. This is the problem with optimistic mechanisms; transactional memory, my SeqlockX, SMR, and RCU all have the same problem. But with scalable reader-writer locks you can reason in a high-level manner and put RLock()/RUnlock() and WLock()/WUnlock() in a straightforward manner around the insert(), search(), and delete() of the AVL tree, red-black tree, or skiplist. That's the advantage of scalable reader-writer locks.
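 
(A minimal illustration of this high-level use of a reader-writer lock; this is not Amine's SeqlockX or his scalable lock, just a sketch using C++'s std::shared_mutex, with std::map standing in for a red-black tree.)
 
#include <map>
#include <optional>
#include <shared_mutex>
 
// Minimal sketch: a std::map (typically a red-black tree) made thread-safe
// by wrapping a reader-writer lock around insert()/search()/delete(),
// without touching the sequential algorithm's internals.
class ConcurrentTree {
public:
    void insert(int key, int value) {
        std::unique_lock lock(rw_);      // WLock() ... WUnlock()
        tree_[key] = value;
    }
    void erase(int key) {
        std::unique_lock lock(rw_);      // writers are exclusive
        tree_.erase(key);
    }
    std::optional<int> search(int key) const {
        std::shared_lock lock(rw_);      // RLock() ... RUnlock(): readers share
        auto it = tree_.find(key);
        if (it == tree_.end()) return std::nullopt;
        return it->second;
    }
private:
    mutable std::shared_mutex rw_;
    std::map<int, int> tree_;
};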
 
I have thought more about concurrent data structures, and I think they will scale well on a NUMA architecture, because with concurrent AVL trees, concurrent red-black trees, and concurrent skiplists the accesses to the nodes allocated on different NUMA nodes will be random, and I have thought about it and this will give you a good result on a NUMA architecture. What is my proof? Imagine that you have 32 cores and one NUMA node for each 4 cores, which means 8 NUMA nodes in total. You will allocate your nodes on the different NUMA nodes, so when 32 threads on 32 cores access those concurrent data structures, they will do it in a probabilistic way: each access lands on a given NUMA node with probability 1/8 (1 over 8 NUMA nodes). So on average I think you will have contention on a given NUMA node for every 4 threads, and from Amdahl's law this will scale on average to 8X on 8 NUMA nodes, and that's really good! My reasoning also holds for more NUMA nodes, which means it will scale on more NUMA nodes, so we are safe!
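 
(As a rough check of the uniform-access assumption above — the 32 threads and 8 NUMA nodes are Amine's figures, the uniform random access model is an assumption — here is a small C++ simulation of how the threads spread over the nodes.)
 
#include <algorithm>
#include <array>
#include <cstdio>
#include <random>
 
// Quick sketch of the uniform-access assumption: 32 threads each touch one of
// 8 NUMA nodes chosen uniformly at random (probability 1/8), and we measure the
// worst-case number of threads that collide on the same node, averaged over trials.
int main() {
    constexpr int kThreads = 32, kNumaNodes = 8, kTrials = 100000;
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, kNumaNodes - 1);
 
    double avg_worst_load = 0.0;
    for (int t = 0; t < kTrials; ++t) {
        std::array<int, kNumaNodes> load{};            // threads per NUMA node
        for (int i = 0; i < kThreads; ++i) ++load[pick(rng)];
        int worst = 0;
        for (int l : load) worst = std::max(worst, l);
        avg_worst_load += worst;
    }
    // Expected load per node is 32/8 = 4; the worst-loaded node is somewhat higher.
    std::printf("average load per node: %d, average worst-case node load: %.2f\n",
                kThreads / kNumaNodes, avg_worst_load / kTrials);
    return 0;
}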
 
Other than that, I have done some scalability prediction for the following
distributed reader-writer mutex:
 
https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex
 
As you will notice, I am using an atomic "lock add" assembler instruction that is executed only by the threads that belong to the same core, so this renders it less expensive. I have benchmarked it and noticed that it takes 20 CPU cycles on x86, so that's not so expensive. I have also done a scalability prediction using this distributed reader-writer mutex with a concurrent AVL tree and a concurrent red-black tree, and it gives 50X scalability on a NUMA architecture when used in a client-server way, because the "lock add" assembler instruction that is executed only by the threads belonging to the same core takes only 20 CPU cycles on x86.
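 
(Amine's actual implementation is at the link above; the following is only a rough C++ sketch of the general idea — per-slot, cache-line-padded reader counters so that a reader's atomic increment, the "lock add", rarely contends across cores, while a writer waits for every slot to drain. The slot count, the thread-to-slot hash, and the names are assumptions, not his design.)
 
#include <atomic>
#include <functional>
#include <thread>
 
// Rough sketch of a distributed reader-writer mutex: each reader slot keeps its
// own cache-line-padded counter, so a reader's atomic increment mostly contends
// only with readers mapped to the same slot. A writer must wait for every slot
// to drain before proceeding.
class DistributedRWMutex {
    static constexpr int kSlots = 64;
    struct alignas(64) Slot { std::atomic<int> readers{0}; };
 
    Slot slots_[kSlots];
    std::atomic<bool> writer_{false};
 
    static int my_slot() {
        return static_cast<int>(
            std::hash<std::thread::id>{}(std::this_thread::get_id()) % kSlots);
    }
public:
    void rlock() {
        for (;;) {
            Slot& s = slots_[my_slot()];
            s.readers.fetch_add(1, std::memory_order_acquire);     // the "lock add"
            if (!writer_.load(std::memory_order_acquire)) return;  // fast path
            s.readers.fetch_sub(1, std::memory_order_release);     // back off for writer
            while (writer_.load(std::memory_order_relaxed)) std::this_thread::yield();
        }
    }
    void runlock() {
        slots_[my_slot()].readers.fetch_sub(1, std::memory_order_release);
    }
    void wlock() {
        bool expected = false;
        while (!writer_.compare_exchange_weak(expected, true,
                                              std::memory_order_acquire)) {
            expected = false;
            std::this_thread::yield();
        }
        for (Slot& s : slots_)                       // wait for all readers to drain
            while (s.readers.load(std::memory_order_acquire) != 0)
                std::this_thread::yield();
    }
    void wunlock() { writer_.store(false, std::memory_order_release); }
};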
 
I have finished porting a beautiful skiplist algorithm to FreePascal and Delphi, and I am turning it into a concurrent skiplist using the distributed reader-writer mutex that I talked to you about before. I have noticed from my benchmarks, and from doing some calculations with Amdahl's law, that this concurrent skiplist that I am implementing will scale to 100X on read-mostly scenarios and on a NUMA architecture when it is used in a client-server manner using threads, and that's good.
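 
(Not Amine's FreePascal/Delphi code — just a hypothetical read-mostly usage of the DistributedRWMutex sketch above, with std::set standing in for the concurrent skiplist.)
 
#include <set>
 
// Hypothetical read-mostly usage of the DistributedRWMutex sketch above,
// with std::set standing in for the skiplist.
DistributedRWMutex skiplist_lock;
std::set<int> skiplist_standin;
 
bool contains(int key) {
    skiplist_lock.rlock();               // readers take the cheap per-slot path
    bool found = skiplist_standin.count(key) != 0;
    skiplist_lock.runlock();
    return found;
}
 
void insert(int key) {
    skiplist_lock.wlock();               // rare writers take the whole lock
    skiplist_standin.insert(key);
    skiplist_lock.wunlock();
}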
 
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
