comp.programming.threads
http://groups.google.com/group/comp.programming.threads?hl=en
comp.programming.threads@googlegroups.com
Today's topics:
* There is still a problem... - 2 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/6da6f103104a8a8e?hl=en
* That's sad... - 1 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/e33611aba6d1df10?hl=en
* But Amine, why do you want to satisfy many requirements - 1 messages, 1
author
http://groups.google.com/group/comp.programming.threads/t/c1fdb02d2419e0aa?hl=en
* There is a solution to satisfy the energy efficiency requirement - 3
messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/2a12c9e711904820?hl=en
* About FIFO queues - 1 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/74e8365bbd4a8cdd?hl=en
* Concurrent FIFO queue... - 1 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/0a68fb555b8fc66e?hl=en
* How do I look inside an .exe file to view the programming - 1 messages, 1
author
http://groups.google.com/group/comp.programming.threads/t/70ea90ac4a00188c?hl=en
* Concurrent FIFO Queue version 1.0 - 1 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/7a3901f20c6b0023?hl=en
* Lock and Lockfree... - 2 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/4fc87d01927bc106?hl=en
* Spinlock and lockfree and waitfree... - 3 messages, 2 authors
http://groups.google.com/group/comp.programming.threads/t/624873836709bf90?hl=en
* Threadpool and Threadpool with priorities were updated - 2 messages, 1
author
http://groups.google.com/group/comp.programming.threads/t/6b3b2dc79e93d1f5?hl=en
* Here is again my Proof - 1 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/1b195ecfbb07dead?hl=en
* ParallelVarFiler version 1.17 - 3 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/a5a073dcd9c9e9d4?hl=en
* ParallelVarFiler version 1.22 - 3 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/10ed747cbfc667d1?hl=en
* About my parallel libraries... - 1 messages, 1 author
http://groups.google.com/group/comp.programming.threads/t/e71cb05470c8cd13?hl=en
==============================================================================
TOPIC: There is still a problem...
http://groups.google.com/group/comp.programming.threads/t/6da6f103104a8a8e?hl=en
==============================================================================
== 1 of 2 ==
Date: Fri, Oct 11 2013 4:36 pm
From: aminer
Sorry, a correction...
Even if the number of threads is greater than the number of cores,
this will not take too many CPU resources, because we are spinning
with sleep(0), for example, so it's not a big problem for wait-free
FIFO queue algorithms.
Thank you,
Amine Moulay Ramdane.
On 10/11/2013 7:30 PM, aminer wrote:
>
> Hello,
>
> I think if the number of threads is not greater than the number of
> cores, the spin-wait for an item in the queue will not be a big
> problem; it will not take too many CPU resources on each core, so I
> think wait-free algorithms are still useful.
>
> But lock-free algorithms do not minimize the cache-coherence traffic
> efficiently; they generate too much cache-coherence traffic and they
> are not FIFO fair, so they are not starvation-free, hence I think
> they are bad.
>
> Thank you,
> Amine Moulay Ramdane.
>
> On 10/11/2013 6:52 PM, aminer wrote:
>>
>> Hello,
>>
>> We have to be smart. I have invented a wait-free bounded FIFO queue,
>> and I was writing the algorithm on paper and thinking at the same
>> time. It's true that my new algorithm satisfies the FIFO fairness
>> requirement, and it also minimizes the cache-coherence traffic
>> efficiently... but there is still a problem: this wait-free FIFO
>> queue doesn't satisfy one requirement. Which one? When there are no
>> items in the queue the threads must not spin-wait, because it's CPU
>> inefficient, I think; those lock-free and wait-free algorithms must
>> use a spin-wait mechanism when there are no items in the queue and
>> they want to wait for items, and my new wait-free algorithm must
>> also use a spin-wait mechanism in this case, so they will take too
>> many CPU resources if there are no items in the queue. To satisfy
>> this requirement we must use my SemaCondvar or a FIFO fair
>> semaphore, but this will slow the FIFO queue too much... so, as you
>> are noticing with me, it's not a silver bullet.
>>
>> Hope you have understood my ideas...
>>
>> Thank you,
>> Amine Moulay Ramdane.
== 2 of 2 ==
Date: Fri, Oct 11 2013 4:46 pm
From: aminer
Hello,
Sorry, there is still a problem: spin-waiting in lock-free and
wait-free FIFO queues, even with a sleep(0) (when, for example, there
are no items in the queue), does not satisfy the energy efficiency
requirement. That's sad, I think, because we have to use my FIFO fair
SemaCondvar or FIFO fair semaphores to satisfy the energy efficiency
requirement, but SemaCondvar and semaphores are slow; that's sad.
Thank you,
Amine Moulay Ramdane.
On 10/11/2013 7:36 PM, aminer wrote:
> [earlier messages quoted in full; snipped]
==============================================================================
TOPIC: That's sad...
http://groups.google.com/group/comp.programming.threads/t/e33611aba6d1df10?hl=en
==============================================================================
== 1 of 1 ==
Date: Fri, Oct 11 2013 4:47 pm
From: aminer
Hello,
Sorry, there is still a problem: spin-waiting in lock-free and
wait-free FIFO queues, even with a sleep(0) (when, for example, there
are no items in the queue), does not satisfy the energy efficiency
requirement. That's sad, I think, because we have to use my FIFO fair
SemaCondvar or FIFO fair semaphores to satisfy the energy efficiency
requirement, but SemaCondvar and semaphores are slow; that's sad.
Thank you,
Amine Moulay Ramdane.
==============================================================================
TOPIC: But Amine, why do you want to satisfy many requirements
http://groups.google.com/group/comp.programming.threads/t/c1fdb02d2419e0aa?hl=en
==============================================================================
== 1 of 1 ==
Date: Fri, Oct 11 2013 5:04 pm
From: aminer
Hello,
Question:
But Amine, why do you want to satisfy so many requirements for your
FIFO queue, such as minimizing the cache-coherence traffic
efficiently, FIFO fairness, and also energy efficiency? Why energy
efficiency?
Answer:
Energy efficiency is also important, because we must not think just of
today; we must think of the future, when there will be many more cores
and the energy efficiency requirement will become much more important.
So, as I told you, it's sad that those wait-free and lock-free FIFO
queue algorithms must spin-wait when there are no items in the queue
and the threads must wait for new items. As I told you, to satisfy the
energy efficiency requirement we must use my FIFO fair SemaCondvar or
FIFO fair semaphores inside the FIFO queue, but since my SemaCondvar
and semaphores are slow, this will slow the FIFO queue, and that's sad.
Thank you,
Amine Moulay Ramdane.
==============================================================================
TOPIC: There is a solution to satisfy the energy efficiency requirement
http://groups.google.com/group/comp.programming.threads/t/2a12c9e711904820?hl=en
==============================================================================
== 1 of 3 ==
Date: Sat, Oct 12 2013 3:55 pm
From: aminer
Hello,
I wrote:
"But Amine, why do you want to satisfy many requirements for your FIFO
queue [...] to satisfy the requirement of energy efficiency we must
use my FIFO fair SemaCondvar or FIFO fair Semaphores inside the FIFO
queue, but since my SemaCondvar and Semaphores are slow, this will
slow the FIFO queue, and that's sad."
I think there is a solution to satisfy the energy efficiency
requirement: you have to use the manual-reset event object that is
supported by FreePascal and Delphi and that is portable. After you
push an item onto the queue you call SetEvent() on the manual event
object, and on the pop() side you use something like this:
--
if self.count <> 0
then continue;
if self.count=0
then
begin
  self.event.waitfor(INFINITE);
  self.event.resetevent;
end;
end;
---
This method is more energy efficient: when there are no items in the
queue the pop() method will wait on the manual event object, so it
will not use CPU resources as the spin-wait mechanism does, and I
think you have to avoid semaphores because semaphores are slow.
Here is the complete source code:
===
unit Lockfree_MPMC;
{$IFDEF FPC}
{$ASMMODE intel}
{$ENDIF}
interface
uses
syncobjs,
{$IFDEF Delphi}expbackoff,sysutils;{$ENDIF}
{$IFDEF FPC}expbackoff,sysutils;{$ENDIF}
{$I defines.inc}
const margin=1000; // limited to 1000 threads...
{$IFDEF CPU64}
INFINITE = qword($FFFFFFFFFFFFFFFF);
{$ENDIF CPU64}
{$IFDEF CPU32}
INFINITE = longword($FFFFFFFF);
{$ENDIF CPU32}
type
{$IFDEF CPU64}
long = qword;
{$ENDIF CPU64}
{$IFDEF CPU32}
long = longword;
{$ENDIF CPU32}
tNodeQueue = tObject;
typecache1 = array[0..15] of longword;
// TLockfree_MPMC = class(TFreelist)
TLockfree_MPMC = class
private
tail:long;
tmp1:typecache1;
head: long;
fMask : long;
fSize : long;
temp:long;
backoff1,backoff2:texpbackoff;
event:TSimpleEvent;
tab : array of tNodeQueue;
procedure setobject(lp : long;const aobject : tNodeQueue);
function getLength:long;
function getSize:long;
function getObject(lp : long):tNodeQueue;
public
{$IFDEF CPU64}
constructor create(aPower : int64 =20); {allocate tab with size
equal 2^aPower, for 20 size is equal 1048576}
{$ENDIF CPU64}
{$IFDEF CPU32}
constructor create(aPower : integer =20); {allocate tab with size
equal 2^aPower, for 20 size is equal 1048576}
{$ENDIF CPU32}
destructor Destroy; override;
function push(tm : tNodeQueue):boolean;
function pop(var obj:tNodeQueue;wait:boolean=false):boolean;
property length : long read getLength;
property count: long read getLength;
property size : long read getSize;
end;
implementation
function LockedIncLong(var Target: long): long;
asm
{$IFDEF CPU32}
// --> EAX Target
// <-- EAX Result
MOV ECX, EAX
MOV EAX, 1
//sfence
LOCK XADD [ECX], EAX
inc eax
{$ENDIF CPU32}
{$IFDEF CPU64}
// --> RCX Target
// <-- EAX Result
MOV rax, 1
//sfence
LOCK XADD [rcx], rax
INC rax
{$ENDIF CPU64}
end;
function CAS(var Target:long;Comp ,Exch : long): boolean;assembler;stdcall;
asm
{$IFDEF CPU64}
mov rax, comp
lock cmpxchg [Target], Exch
setz al
{$ENDIF CPU64}
{$IFDEF CPU32}
mov eax, comp
mov ecx,Target
mov edx,exch
lock cmpxchg [ecx], edx
setz al
{$ENDIF CPU32}
end; { CAS }
{function CAS(var Target: long; Comperand: long;NewValue: long ):
boolean; assembler;stdcall;
asm
mov ecx,Target
mov edx,NewValue
mov eax,Comperand
//sfence
lock cmpxchg [ecx],edx
JNZ @@2
MOV AL,01
JMP @@Exit
@@2:
XOR AL,AL
@@Exit:
end;}
{$IFDEF CPU64}
constructor TLockfree_MPMC.create(aPower : int64 );
{$ENDIF CPU64}
{$IFDEF CPU32}
constructor TLockfree_MPMC.create(aPower : integer );
{$ENDIF CPU32}
begin
if aPower < 10
then
begin
writeln('Constructor argument must be greater or equal to 10');
halt;
end;
if (aPower < 0) or (aPower > high(integer))
then
begin
writeln('Constructor argument is incorrect');
halt;
end;
backoff1:=texpbackoff.create(1,2);
backoff2:=texpbackoff.create(1,2);
{$IFDEF CPU64}
fMask:=not($FFFFFFFFFFFFFFFF shl aPower);{$ENDIF CPU64}
{$IFDEF CPU32}
fMask:=not($FFFFFFFF shl aPower);
{$ENDIF CPU32}
fSize:=(1 shl aPower) - margin;
setLength(tab,1 shl aPower);
tail:=0;
head:=0;
temp:=0;
Event := TSimpleEvent.Create;
end;
destructor TLockfree_MPMC.Destroy;
begin
event.free;
backoff1.free;
backoff2.free;
setLength(tab,0);
inherited Destroy;
end;
procedure TLockfree_MPMC.setObject(lp : long;const aobject : tNodeQueue);
begin
tab[lp and fMask]:=aObject;
end;
function TLockfree_MPMC.getObject(lp : long):tNodeQueue;
begin
result:=tab[lp and fMask];
end;
function TLockfree_MPMC.push(tm : tNodeQueue):boolean;
var lasttail,newtemp:long;
i,j:integer;
begin
if getlength >= fsize
then
begin
result:=false;
exit;
end;
result:=true;
newTemp:=LockedIncLong(temp);
lastTail:=newTemp-1;
//asm mfence end;
setObject(lastTail,tm);
repeat
if CAS(tail,lasttail,newtemp)
then
begin
event.setevent;
exit;
end;
sleep(0);
until false;
end;
function TLockfree_MPMC.pop(var obj:tNodeQueue;wait:boolean=false):boolean;
var lastHead : long;
begin
repeat
lastHead:=head;
if tail<>head
then
begin
obj:=getObject(lastHead);
if CAS(head,lasthead,lasthead+1)
then
begin
result:=true;
exit;
end
else sleep(0); //backoff2.delay;
end
else
begin
if wait=false
then
begin
result:=false;
exit;
end;
if self.count <> 0
then continue;
if self.count=0
then
begin
self.event.waitfor(INFINITE);
self.event.resetevent;
end;
end;
until false;
end;
function TLockfree_MPMC.getLength:long;
var head1,tail1:long;
begin
head1:=head;
tail1:=tail;
if tail1 < head1
then result:= (High(long)-head1)+(1+tail1)
else result:=(tail1-head1);
end;
function TLockfree_MPMC.getSize:long;
begin
result:=fSize;
end;
end.
===
Thank you,
Amine Moulay Ramdane.
== 2 of 3 ==
Date: Sat, Oct 12 2013 3:57 pm
From: aminer
Hello,
So as you have noticed, the method I am using is more efficient,
because you will not wait on the manual event object unless
self.count = 0.
Thank you,
Amine Moulay Ramdane.
On 10/12/2013 6:55 PM, aminer wrote:
> [quoted message and full source code snipped]
== 3 of 3 ==
Date: Sat, Oct 12 2013 4:28 pm
From: aminer
On 10/12/2013 6:55 PM, aminer wrote:
> if self.count=0
> then
> begin
> self.event.waitfor(INFINITE);
> self.event.resetevent;
> end;
> end;
This method doesn't work: if thread1 was waiting on
"self.event.waitfor(INFINITE)" and another thread2 had crossed the
"if self.count=0" test but had not yet arrived at
"self.event.waitfor(INFINITE)", and thread1 received two SetEvent()
calls from the push() method and then called "self.event.resetevent",
then thread2 will block and wait on "self.event.waitfor(INFINITE)"
even though there are items in the queue, so this is a problem.
I will keep thinking about a method that satisfies the energy
efficiency requirement without using SemaCondvar or a semaphore,
because they are slow.
Thank you,
Amine Moulay Ramdane.
==============================================================================
TOPIC: About FIFO queues
http://groups.google.com/group/comp.programming.threads/t/74e8365bbd4a8cdd?hl=en
==============================================================================
== 1 of 1 ==
Date: Sat, Oct 12 2013 4:41 pm
From: aminer
Hello,
I think SemaCondvar and semaphores are also a good alternative for
building a concurrent FIFO queue: if there are items in the queue, my
SemaCondvar and semaphores will not context-switch to kernel mode, so
they will not be so expensive, I think. The threads will switch to
kernel mode only when they want to pop() and there are no items in the
queue. So my SemaCondvar and semaphores are a good alternative if you
want to satisfy the FIFO fairness requirement, minimize the
cache-coherence traffic, and also satisfy the energy efficiency
requirement.
But beware: the Windows semaphore is not FIFO fair, but my SemaCondvar
is FIFO fair.
Thank you,
Amine Moulay Ramdane.
==============================================================================
TOPIC: Concurrent FIFO queue...
http://groups.google.com/group/comp.programming.threads/t/0a68fb555b8fc66e?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Oct 13 2013 1:26 pm
From: aminer
Hello,
I have thought more about it, and I have come up with a solution for a
concurrent FIFO queue that satisfies many requirements: it minimizes
the cache-coherence traffic efficiently, it is FIFO fair (it uses FIFO
fair locks), and it is energy efficient on the pop() side: it will not
spin-wait when there are no items in the queue, but will instead wait
on a manual-reset event object, which is energy efficient.
I have benchmarked it and it gives 1.65 million pop() operations per
second and 0.8 million push() operations per second.
Here is the FIFO queue; I will soon put it on my website.
{****************************************************************************
 *
 * FIFO MPMC Queue
 *
 * Language: FPC Pascal v2.2.0+ / Delphi 5+
 *
 * Required switches: none
 *
 * Authors: Amine Moulay Ramdane
 *
 * Version: 1.0
 *
 * Send bug reports and feedback to aminer @@ videotron @@ ca
 * You can always get the latest version/revision of this package from
 *
 * http://pages.videotron.com/aminer/
 *
 * Description: Algorithm to handle a FIFO MPMC queue.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 *****************************************************************************
 * BEGIN LICENSE BLOCK
 *
 changelog
 v.1.0
}
unit FIFOQUEUE_MPMC;
interface
{$IFDEF FPC}
{$ASMMODE intel}
{$ENDIF}
uses
LW_ALOCK,SpinLock,SemaCondvar,msync,syncobjs,sysutils;
{$I defines.inc}
type
{$IFDEF CPU64}
long = qword;
{$ENDIF CPU64}
{$IFDEF CPU32}
long = longword;
{$ENDIF CPU32}
tNodeQueue = tObject;
typecache1 = array[0..15] of longword;
// TLockfree_MPMC = class(TFreelist)
TFIFOQUEUE_MPMC = class
private
tail:long; // same width as head; longword here would truncate on CPU64
tmp1:typecache1;
head: long;
fMask : long;
fSize : long;
tab : array of tNodeQueue;
lock1,lock2:TALOCK;
event:TSimpleEvent;
lock3:TSpinlock;
count1:long;
procedure setobject(lp : long;const aobject : tNodeQueue);
function getLength:long;
function getSize:long;
function getObject(lp : long):tNodeQueue;
public
constructor create(aPower : long =20); {allocate tab with size
equal 2^aPower, for 20 size is equal 1048576}
destructor Destroy; override;
function push(tm : tNodeQueue):boolean;
function pop(var obj:tNodeQueue):boolean;
property length : long read getLength;
property count: long read getLength;
property size : long read getSize;
end;
implementation
{$IF defined(CPU64) }
function LockedCompareExchange(CompareVal, NewVal: long; var Target:
long): long; overload;
asm
mov rax, rcx
lock cmpxchg [r8], rdx
end;
{$IFEND}
{$IF defined(CPU32) }
function LockedCompareExchange(CompareVal, NewVal: long; var
Target:long): long; overload;
asm
lock cmpxchg [ecx], edx
end;
{$IFEND}
function CAS(var Target:long;Comp ,Exch : long): boolean;
var ret:long;
begin
ret:=LockedCompareExchange(Comp,Exch,Target);
if ret=comp
then result:=true
else result:=false;
end; { CAS }
function LockedIncLong(var Target: long): long;
asm
{$IFDEF CPU32}
// --> EAX Target
// <-- EAX Result
MOV ECX, EAX
MOV EAX, 1
//sfence
LOCK XADD [ECX], EAX
inc eax
{$ENDIF CPU32}
{$IFDEF CPU64}
// --> RCX Target
// <-- EAX Result
MOV rax, 1
//sfence
LOCK XADD [rcx], rax
INC rax
{$ENDIF CPU64}
end;
function LockedDecLong(var Target: long): long;
asm
{$IFDEF CPU32}
// --> EAX Target
// <-- EAX Result
MOV ECX, EAX
MOV EAX, -1
//sfence
LOCK XADD [ECX], EAX
dec eax
{$ENDIF CPU32}
{$IFDEF CPU64}
// --> RCX Target
// <-- EAX Result
MOV rax, -1
//sfence
LOCK XADD [rcx], rax
dec rax
{$ENDIF CPU64}
end;
constructor TFIFOQUEUE_MPMC.create(aPower : long );
begin
if (aPower < 0) or (aPower > high(long))
then
begin
writeln('Constructor''s argument incorrect');
halt;
end;
{$IFDEF CPU64}
fMask:=not($FFFFFFFFFFFFFFFF shl aPower);
{$ENDIF CPU64}
{$IFDEF CPU32}
fMask:=not($FFFFFFFF shl aPower);
{$ENDIF CPU32}
fSize:=(1 shl aPower);
setLength(tab,1 shl aPower);
tail:=0;
head:=0;
lock1:=TALOCK.create(100);
lock2:=TALOCK.create(100);
lock3:=TSpinlock.create;
event:=TSimpleEvent.create;
count1:=0;
end;
destructor TFIFOQUEUE_MPMC.Destroy;
begin
lock1.free;
lock2.free;
lock3.free;
event.free;
setLength(tab,0);
inherited Destroy;
end;
procedure TFIFOQUEUE_MPMC.setObject(lp : long;const aobject : tNodeQueue);
begin
tab[lp and fMask]:=aObject;
end;
function TFIFOQUEUE_MPMC.getObject(lp : long):tNodeQueue;
begin
result:=tab[lp and fMask];
end;
function TFIFOQUEUE_MPMC.push(tm : tNodeQueue):boolean;//stdcall;
begin
lock1.enter;
result:=true;
if getlength >= fsize
then
begin
result:=false;
lock1.leave;
exit;
end;
setObject(tail,tm);
tail:=(tail+1);
lock3.enter;
inc(count1);
event.setevent;
lock3.leave;
lock1.leave;
end;
function TFIFOQUEUE_MPMC.pop(var obj:tNodeQueue):boolean;
var b:long;
begin
if self.count=0
then
begin
event.waitfor(INFINITE);
lock3.enter;
if count1=0 then event.resetevent;
lock3.leave;
end;
lock2.enter;
if tail<>head
then
begin
obj:=getObject(head);
head:=(head+1);
result:=true;
lock3.enter;
dec(count1);
lock3.leave;
lock2.leave;
exit;
end
else
begin
result:=false;
lock2.leave;
end;
end;
function TFIFOQUEUE_MPMC.getLength:long;
var head1,tail1:long;
begin
head1:=head;
tail1:=tail;
if tail1 < head1
then result:= (High(long)-head1)+(1+tail1)
else result:=(tail1-head1);
end;
function TFIFOQUEUE_MPMC.getSize:long;
begin
result:=fSize;
end;
end.
==============================================================================
TOPIC: How do I look inside an .exe file to view the programming
http://groups.google.com/group/comp.programming.threads/t/70ea90ac4a00188c?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Oct 15 2013 8:32 am
From: rush8y@googlemail.com
As this is the top link returned when searching for something similar to the OP, you will find this tool very useful:
http://www.jetbrains.com/decompiler/features/
==============================================================================
TOPIC: Concurrent FIFO Queue version 1.0
http://groups.google.com/group/comp.programming.threads/t/7a3901f20c6b0023?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Oct 15 2013 2:17 pm
From: aminer
Hello,
I have implemented a concurrent FIFO queue that satisfies
several requirements: it is FIFO fair, it efficiently minimizes the
cache-coherence traffic, and it is energy efficient on the pop() side:
when there are no items in the queue it does not spin-wait, but instead
waits on a portable manual event object. It is not as fast as lockfree
algorithms, because if you want FIFO fairness, minimal cache-coherence
traffic and energy efficiency, you have to pay a price: this lowers the
speed of the concurrent FIFO queue. But the speed of my concurrent FIFO
queue is good, I think: I benchmarked it on a quad-core processor with
each core running at 1.6 GHz, with 4 threads pushing and 4 threads
popping, and it gives 1 million transactions (push and pop) per second,
which is also good.
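The empty-queue behavior described above can be sketched with a mutex and
a condition variable, the portable analogue of a manual event object. This
is an illustrative sketch, not the library's actual code; all names here
are made up.

```c
#include <pthread.h>

#define CAP 1024

/* A bounded FIFO ring whose pop() sleeps on a condition variable when
   the queue is empty instead of spin-waiting, which is what makes the
   consumer side energy efficient. */
typedef struct {
    int buf[CAP];
    int head, tail;                 /* pop at head, push at tail */
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
} queue_t;

void q_init(queue_t *q) {
    q->head = q->tail = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

int q_push(queue_t *q, int v) {
    pthread_mutex_lock(&q->lock);
    if ((q->tail + 1) % CAP == q->head) {   /* full: refuse */
        pthread_mutex_unlock(&q->lock);
        return 0;
    }
    q->buf[q->tail] = v;
    q->tail = (q->tail + 1) % CAP;
    pthread_cond_signal(&q->not_empty);     /* wake a sleeping consumer */
    pthread_mutex_unlock(&q->lock);
    return 1;
}

int q_pop(queue_t *q) {
    pthread_mutex_lock(&q->lock);
    while (q->head == q->tail)              /* empty: block, don't spin */
        pthread_cond_wait(&q->not_empty, &q->lock);
    int v = q->buf[q->head];
    q->head = (q->head + 1) % CAP;
    pthread_mutex_unlock(&q->lock);
    return v;
}
```

A blocked consumer burns no CPU until a producer signals, at the cost of a
heavier wakeup path than spinning.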
You can download my concurrent FIFO Queue from:
http://pages.videotron.com/aminer/
Thank you,
Amine Moulay Ramdane.
==============================================================================
TOPIC: Lock and Lockfree...
http://groups.google.com/group/comp.programming.threads/t/4fc87d01927bc106?hl=en
==============================================================================
== 1 of 2 ==
Date: Tues, Oct 15 2013 4:30 pm
From: aminer
Hello,
I have come to an interesting subject: I have compared
a FIFO queue that uses two spinlocks with an exponential backoff
against a lockfree FIFO queue that does not use an exponential backoff,
and I have come to the conclusion that the spinlock with a backoff is
6.5X faster than the lockfree version without a backoff. You have to be
smart and understand why, so I will explain it: if you take a look at a
spinlock with an exponential backoff, you will read this in the
Enter() method:
---
procedure TSpinLock.Enter;
var t:integer;
begin
  backoff.reset;
  repeat
    if CAS(@FCount2^.FCount2,0,1) then break; // acquired the lock
    t:=backoff.delay;  // delay grows exponentially on each failure
    sleep(t);
  until false;
end;
---
As you have noticed, if for example 4 threads execute the CAS, one of
them succeeds and 3 of them back off. What is very important to know is
that the thread that succeeded has the opportunity to reenter the locked
region many times, modifying FCount2^.FCount2 in its local cache, and
this makes it very fast: in fact 6.5X faster than the lockfree version
without backoff, because in lockfree algorithms every thread takes a
cache miss and reloads the cache line from memory, which is slower than
a spinlock with a backoff. This is why the spinlock with a backoff is
very much faster, and it also minimizes the cache-coherence traffic under
contention. So what is the solution? I think that lockfree FIFO queue
algorithms must use an exponential backoff mechanism to be much faster
and to be able to minimize the cache-coherence traffic.
I hope you have understood what I mean in this post..
Thank you,
Amine Moulay Ramdane.
== 2 of 2 ==
Date: Tues, Oct 15 2013 4:38 pm
From: aminer
I mean 6.5X faster under contention, of course.
Thank you,
Amine Moulay Ramdane.
On 10/15/2013 7:30 PM, aminer wrote:
> [...]
==============================================================================
TOPIC: Spinlock and lockfree and waitfree...
http://groups.google.com/group/comp.programming.threads/t/624873836709bf90?hl=en
==============================================================================
== 1 of 3 ==
Date: Tues, Oct 15 2013 5:03 pm
From: aminer
Hello,
I have come to an interesting subject, and you have
to be smart to understand it...
Take a look at the following FIFO queue, which is waitfree on the push()
side and lockfree on the pop() side:
http://pastebin.com/f72cc3cc1
As I said before, there are weaknesses in this FIFO queue. First, the
lockfree pop() side is not FIFO fair, so it is not starvation-free on the
pop() side. Second, it does not minimize the cache-coherence traffic. And
there is a third weakness: since you cannot use an exponential backoff on
the waitfree push() side, you cannot make it 4 to 6 times faster under
contention on the push() side; the spinlock with an exponential backoff
mechanism on both the pop() and the push() is 4 to 6 times faster than
the lockfree and waitfree versions without a backoff mechanism, as I have
just explained. But you can add an exponential backoff on the lockfree
pop() side of this FIFO queue, and this will make it 6 times faster under
contention on the pop() side.
So in my opinion, since the spinlock with an exponential backoff is 6
times faster under contention, the risk of starvation will be lowered,
and since it minimizes the cache-coherence traffic, this makes the
spinlock with an exponential backoff a better alternative if you want
better speed and minimal cache-coherence traffic.
Thank you,
Amine Moulay Ramdane.
== 2 of 3 ==
Date: Tues, Oct 15 2013 6:35 pm
From: aminer
Hello,
This 6X speedup is just for the pop() method; for the push(), the
spinlock with backoff gives the same performance as the lockfree version,
so on the push() side they both have the same performance under
contention.
Thank you,
Amine Moulay Ramdane.
== 3 of 3 ==
Date: Thurs, Oct 17 2013 5:10 pm
From: "Chris M. Thomasson"
"aminer" wrote in message news:l3kl4m$ioi$1@news.albasani.net...
[...]
> So in my opinion , since the Spinlock with an exponential backoff is
> 6 times faster under contention , so the risk of a stavartion will
> be lowered, and since it minimizes the cache-coherence traffic, so this
> will make the Spinlock with an exponential backoff
> a better alternative if you want better speed and to minimize
> cache-coherence traffic.
One tiny point:
A lockfree algorithm can make progress on every failure of the condition.
A spinlock cannot.
;^/
==============================================================================
TOPIC: Threadpool and Threadpool with priorities were updated
http://groups.google.com/group/comp.programming.threads/t/6b3b2dc79e93d1f5?hl=en
==============================================================================
== 1 of 2 ==
Date: Fri, Oct 18 2013 10:50 am
From: aminer
Hello,
The Threadpool and the Threadpool with priorities were
updated: I have used my new concurrent FIFO queue inside them to minimize
the cache-coherence traffic efficiently and to make them FIFO fair, and I
have also updated all my other libraries, such as my Parallel archiver
etc.
Please download them again if you need them.
You can download my updated libraries from:
http://pages.videotron.com/aminer/
Thank you,
Amine Moulay Ramdane.
== 2 of 2 ==
Date: Fri, Oct 18 2013 10:57 am
From: aminer
Hello,
All my libraries and programs are stable now, so if you need one of my
programs or libraries you can download them from my website:
http://pages.videotron.com/aminer/
Thank you,
Amine Moulay Ramdane.
==============================================================================
TOPIC: Here is again my Proof
http://groups.google.com/group/comp.programming.threads/t/1b195ecfbb07dead?hl=en
==============================================================================
== 1 of 1 ==
Date: Fri, Oct 18 2013 11:58 am
From: blmblm@myrealbox.com
In article <l36kri$3t3$1@news.albasani.net>, aminer <aminer@toto.net> wrote:
>
> I wrote:
> > Hello,
> >
> > I think i know what is my mistake: the serial part in
> > the Amdahl's law is not the critical section, you must not
> > confuse the two.
>
Exactly. The serial part is the part that must be done exactly once
and cannot be somehow split up among multiple threads or processes.
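That distinction can be made concrete with Amdahl's formula: for serial
fraction s (the work done exactly once) and n processors, the speedup
bound is 1/(s + (1-s)/n). A minimal illustration:

```c
/* Amdahl's law: s is the serial fraction, the part that must run on
   one thread no matter how many processors are available. */
double amdahl(double s, int n) {
    return 1.0 / (s + (1.0 - s) / n);
}
```

With s = 0.10, four processors give at most about 3.08x, and no number of
processors can ever exceed 1/s = 10x.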
>
> But, Amine, how can you say that the serial part of the Amdahl's law is
> not
> the critical section ?
Having got it right once (above), why now go off the rails again
(below) ....
(And why start a new thread for each new thought, rather than keeping
the whole discussion in one thread?)
> In my example if you don't have context switches , and you have the same
> number of threads than the number of cores, and the time inside the
> critical
> section is constant and the time inside the parallel part is constant,
> so the Amdahl's law can predict the worst case contention scenario ,
> that means when all the threads are contending for the same critical
> section, hence you can predict the worst case contention scenario by
> just calculating the time inside the critical section and the time
> inside the parallel part and doing the Amdahl calculation, this will
> give you the
> exact result for the worst case contention scenario, so as you have
> noticed the serial part of the Amdahl equation is not only the critical
> section, it's the critical section with a context, so you have to take
> into consideration also the context, that means the context of the worst
> case contention scenario.
>
>
> Thank you,
> Amine Moulay Ramdane.
>
>
>
[ snip ]
--
B. L. Massingill
ObDisclaimer: I don't speak for my employers; they return the favor.
==============================================================================
TOPIC: ParallelVarFiler version 1.17
http://groups.google.com/group/comp.programming.threads/t/a5a073dcd9c9e9d4?hl=en
==============================================================================
== 1 of 3 ==
Date: Sat, Oct 19 2013 8:27 am
From: aminer
Hello,
ParallelVarFiler was updated to version 1.17; from now on you don't need
to call LoadData().
Authors: Amine Moulay Ramdane and Peter Johansson
ParallelVarFiler is a parallel variable filer and streamer for Delphi and
FreePascal that can also be used as a parallel hashtable. It uses
ParallelHashList (a parallel hashtable) with O(1) best-case and O(log(n))
worst-case access, which uses lock striping and lightweight MREWs
(multiple-readers-exclusive-writer locks); this allows multiple threads
to write and read concurrently. ParallelHashList also maintains an
independent counter of the number of entries for each segment of the
hashtable, with a lock for each counter, again for better scalability.
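Lock striping as described above can be sketched like this, using plain
mutexes in place of the MREW reader-writer locks; the segment count,
names and per-segment counter are illustrative, not ParallelHashList's
actual layout.

```c
#include <pthread.h>

#define SEGMENTS 16

/* Each segment has its own lock, so threads whose keys hash to
   different segments never contend; each segment also keeps an
   independent entry counter guarded by that segment's lock. */
typedef struct {
    pthread_mutex_t lock;
    int count;                         /* entries in this segment */
} segment_t;

segment_t table[SEGMENTS];

void table_init(void) {
    for (int i = 0; i < SEGMENTS; i++) {
        pthread_mutex_init(&table[i].lock, NULL);
        table[i].count = 0;
    }
}

unsigned segment_of(unsigned hash) {
    return hash % SEGMENTS;
}

void insert_key(unsigned hash) {
    segment_t *s = &table[segment_of(hash)];
    pthread_mutex_lock(&s->lock);      /* only this stripe is locked */
    /* ... the entry would be placed in this segment's buckets ... */
    s->count++;
    pthread_mutex_unlock(&s->lock);
}
```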
Description:
Collect different types of variables and save them into one file or
stream. TParallelVarFiler reads and writes on files, streams and
Strings. You can collect Strings, Integers and Dates, in fact everything
that can be stored in a variant. In addition you can collect Streams.
You can use also ParallelVarFiler to send parameters over network. It
can be used for example to share data by any IPC mechanism.
Please look at test.pas and test1.pas, two ParallelVarFiler examples:
compile and execute them...
Now you can use ParallelVarFiler as a small to medium database. I have
added Clean() and DeletedItems() methods and changed the constructor: you
can now pass a file name to the constructor, and the data will be
written, updated and deleted there; if you pass an empty file name, the
data is written, updated and deleted only in memory. When you use a file
name in the constructor, many readers and one writer can proceed
concurrently; but when you pass an empty file name to the constructor,
many writers and many readers can proceed concurrently in memory. When
you read a data item, only the parallel hashtable in memory is used; the
hard disk is not touched.
When you pass a file name to the constructor and delete or update items,
those items are marked deleted in the file; you can use DeletedItems() to
see how many data items are marked deleted in the file, and then use the
Clean() method to remove those data items from the file completely.
How can you store multiple data inside a database row and map it to a key?
You can simply use Parallel Variable Filer for that: stream your multiple
data variants and map them to a key...
As you know, ParallelVarFiler uses ParallelHashList (a parallel hashtable
that scales on multicores), so when you use the GetKeys() method and want
to get the data for those keys, don't forget to test that the variant is
not empty by using the VarIsEmpty() function; and when you use
GetStream() you can test whether the data exists by testing the boolean
return value of GetStream().
ParallelVarFiler is easy to learn; I have documented all the methods, so
please read about them inside ParallelVarFiler.pas.
ParallelVarFiler is now fault tolerant to power failures etc. I have run
a simulation of power failures and data file damage, and ParallelVarFiler
recovers from power failures and damage to the data file... When there
has been a power failure or the program has crashed and you want to
recover from damage to your data file, you only have to call the
LoadData() method.
As you know, ParallelVarFiler uses ParallelHashList (a parallel
hashtable), but ParallelHashList is cache unfriendly, since it uses a
hashing mechanism, and both ParallelVarFiler and ParallelHashList are
memory bound. Since ParallelHashList is cache unfriendly and memory
bound, I don't think ParallelVarFiler or ParallelHashList will scale very
well. To avoid this problem you have to use something like replication
across computers over TCP/IP, with a database on each computer and load
balancing between computers; but even with replication and load
balancing, which can make the memory and hard disk truly parallel, you
will not avoid the network bandwidth limitation. Still, ParallelVarFiler
and ParallelHashList scale to a certain point, they are fast, and they
are still useful.
Please see the test.pas example inside the zip file to see how i am
using it...
But please read the following:
This software is provided on an "as-is" basis, with no warranties,
express or implied. The entire risk and liability of using it is yours.
Any damages resulting from the use or misuse of this software will be
the responsibility of the user.
please look at the test.pas and test1.pas example inside the zip file to
see how to use ParallelVarFiler...
You can download ParallelVarFiler from:
http://pages.videotron.com/aminer/
Here are the methods that have been implemented:
PUBLIC METHODS:
constructor Create(file1:string; size,mrews:integer; casesensitive:boolean);
- Creates a new VarFiler ready to use, with the given hashtable size,
mrews MREWs (multiple-readers-exclusive-writer locks) and casesensitive
for case-sensitive keys. The number of MREWs must be less than or equal
to the hashtable size. file1 is the file to write to; if file1 is an
empty string, the data is written only to memory, and you can use
SaveToFile() to write it to a file.
destructor Destroy;
- Destroys the ParallelVarFiler object and cleans up.
procedure Clear;
- Deletes all contents.
function Clean:boolean
- Cleans the deleted items from the file.
function DeletedItems:integer
- Returns the number of items marked deleted.
function LoadData:boolean
- Loads the data from the file passed to the constructor.
function Delete(Name : String):boolean;
- Deletes the variable with Name.
function Exists(Name : String) : Boolean;
- Returns True if a variable with Name exists.
procedure GetKeys(Strings : TStringList);
- Fills up a TStringList descendant with all the keys.
function Count : Integer;
- Returns the number of variables.
function Add(Name : String; Value : Variant):boolean;
- Adds a new variable, given the Name.
function AddStream(Name : String; Stream : TStream):boolean;
- Adds a new stream, given the Name.
function Update(Name : String; Value : Variant):boolean;
- Updates the variable, given the Name.
function UpdateStream(Name : String; Stream : TStream):boolean;
- Updates a stream, given the Name.
function GetStream(Name : String; Stream : TStream):boolean;
- Fills up the stream with data.
procedure SaveToStream(Stream : TStream);
- Saves all variables, streams and Graphics to a stream.
procedure SaveToFile(FileName : String);
- Saves all variables, streams and Graphics to a file.
procedure SaveToString(out S : String);
- Saves all variables, streams and Graphics to a string.
function LoadFromStream(Stream : TStream):boolean;
- Loads all variables, streams and Graphics from a stream.
function LoadFromFile(FileName : String):boolean;
- Loads all variables, streams and Graphics from a File.
function LoadFromString(S : String):boolean;
- Loads all variables, streams and Graphics from a string.
function Stream2String(Stream1: TStream;var Mystring:string): boolean;
procedure String2Stream(var String2BeSaved:string; Stream1: TStream);
procedure Variant2Stream(var VR:variant; Stream1: TStream);
function Stream2Variant(Stream1: TStream;var VR:variant): boolean;
PUBLIC PROPERTIES:
Items : Variant
- Gets the value (indexed).
Language: FPC Pascal v2.2.0+ and Lazarus / Delphi 7 to 2007:
http://www.freepascal.org/
Operating Systems: Win , Linux and Mac (x86).
Required FPC switches: -O3 -Sd -dFPC -dWin32 -dFreePascal
-Sd for delphi mode....
Required Delphi switches: -DMSWINDOWS -$H+
For Delphi use -DDelphi
Thank you,
Amine Moulay Ramdane.
== 2 of 3 ==
Date: Sat, Oct 19 2013 8:36 am
From: aminer
Hello,
You have to understand ParallelVarFiler:
it's a parallel hashtable that can be saved automatically
or manually to a file, a stream or a string, and it's
fault tolerant to power failures etc.
Thank you,
Amine Moulay Ramdane.
== 3 of 3 ==
Date: Sat, Oct 19 2013 8:40 am
From: aminer
I will add..
You have to understand ParallelVarFiler:
it's a parallel hashtable that can be saved automatically or manually to
a file, a stream or a string, and it can be restored from a file, a
stream or a string back to the hashtable in memory, and it's fault
tolerant to power failures etc.
Hope you have understood the idea.
Thank you,
Amine Moulay Ramdane.
==============================================================================
TOPIC: ParallelVarFiler version 1.22
http://groups.google.com/group/comp.programming.threads/t/10ed747cbfc667d1?hl=en
==============================================================================
== 1 of 3 ==
Date: Mon, Oct 21 2013 8:24 am
From: aminer
Hello,
I have updated ParallelVarFiler to version 1.22,
now SaveToStream() , SaveToString() and SaveToFile() are
thread safe, but you must pass a filename to the constructor
before using them.
You can download ParallelVarFiler version 1.22 from:
http://pages.videotron.com/aminer/
Thank you,
Amine Moulay Ramdane
== 2 of 3 ==
Date: Mon, Oct 21 2013 8:39 am
From: aminer
Hello all,
I wrote ParallelVarFiler before writing the scalable Anderson array-based
lock, but you have to understand that under Windows the Windows critical
section is not FIFO fair, so it is not starvation-free; so in the next
version 1.23 of ParallelVarFiler I will replace this critical section
with the scalable Anderson array-based lock.
Thank you,
Amine Moulay Ramdane.
== 3 of 3 ==
Date: Mon, Oct 21 2013 8:54 am
From: aminer
Hello,
ParallelVarFiler was updated to version 1.23; it now uses the scalable
Anderson array-based lock, so the lock is now starvation-free.
You can download ParallelVarFiler 1.23 from:
http://pages.videotron.com/aminer/
Thank you,
Amine Moulay Ramdane.
On 10/21/2013 11:24 AM, aminer wrote:
> [...]
==============================================================================
TOPIC: About my parallel libraries...
http://groups.google.com/group/comp.programming.threads/t/e71cb05470c8cd13?hl=en
==============================================================================
== 1 of 1 ==
Date: Mon, Oct 21 2013 9:18 am
From: aminer
Hello,
You have to understand me: I designed the Parallel archiver, the Parallel
compression library, ParallelHashList, ParallelVarFiler and the scalable
locks and scalable RWLocks because I wanted to contribute to the
FreePascal and Lazarus community. Ludob criticized ParallelVarFiler and
said that there were still some bugs in it, but I think I have corrected
those bugs, and I hope you will enjoy my parallel libraries. If you ask
me what I really love in my Parallel archiver: it has O(1) access in
Add(), Delete(), Extract() etc., you can also use it as a hash table,
with both compression and AES encryption, and I think it is stable now;
this is why I love it. If you ask me why I designed those scalable
RWLocks and scalable locks: the RWLock I designed is scalable, and the
scalable locks I implemented are scalable and FIFO fair, so they are also
starvation-free; that is what I love in them.
So I hope you will enjoy my parallel libraries.
You can download all my parallel libraries and programs from:
http://pages.videotron.com/aminer/
Thank you,
Amine Moulay Ramdane.
==============================================================================
You received this message because you are subscribed to the Google Groups "comp.programming.threads"
group.
To post to this group, visit http://groups.google.com/group/comp.programming.threads?hl=en
To unsubscribe from this group, send email to comp.programming.threads+unsubscribe@googlegroups.com
To change the way you get mail from this group, visit:
http://groups.google.com/group/comp.programming.threads/subscribe?hl=en
To report abuse, send email explaining the problem to abuse@googlegroups.com
==============================================================================
Google Groups: http://groups.google.com/?hl=en