Date:      Tue, 31 Jul 2007 15:25:43 +0200
From:      Fluffles <etc@fluffles.net>
To:        Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc:        geom@FreeBSD.org, Dominic Bishop <dom@bishnet.net>
Subject:   Re: Increasing GELI performance
Message-ID:  <46AF3857.30600@fluffles.net>
In-Reply-To: <20070731114555.GQ1092@garage.freebsd.pl>
References:  <46AA50B4.9080901@fluffles.net> <20070727210032.0140413C457@mx1.freebsd.org> <20070730192654.GO1092@garage.freebsd.pl> <46AE4B94.8010107@fluffles.net> <20070731114555.GQ1092@garage.freebsd.pl>

Pawel Jakub Dawidek wrote:
> On Mon, Jul 30, 2007 at 10:35:32PM +0200, Fluffles wrote:
>   
>> Pawel Jakub Dawidek wrote:
>>     
>>> No matter how many cores/cpus you have if you run single-threaded
>>> application. What you do exactly is:
>>> 1. Send read of 128kB.
>>> 2. One of geli threads picks it up, decrypts and sends it back.
>>> 3. Send next read of 128kB.
>>> 4. One of geli threads picks it up, decrypts and sends it back.
>>> ...
>>>
>>> All threads will be used when there are more threads accessing provider.
>>>  
>>>       
>> But isn't it true that the UFS filesystem utilizes read-ahead and with 
>> that a multiple I/O queue depth (somewhere between 7 to 9 queued I/O's) 
>> - even when using something like dd to sequentially read a file on a 
>> mounted filesystem ? Then this read-ahead will cause multiple I/O 
>> request coming in and geom_eli can use multiple threads to maximize I/O 
>> throughput. Maybe Dominic can try playing with the "vfs.read_max" sysctl 
>> variable.
>>     
>
> You are right in general, but if you reread e-mail I was answering to,
> you will see that the author was reading from/writing to GEOM provider,
> not file system.
>   

Ah yes, you're right. Though he might also have tested on a mounted 
filesystem; his e-mail does not explicitly say so. So he should 
re-run his experiment:

geli onetime /dev/da0
newfs /dev/da0.eli
mkdir /test
mount /dev/da0.eli /test
dd if=/dev/zero of=/test/zerofile.000 bs=1m count=2000
(write score)
dd if=/test/zerofile.000 of=/dev/null bs=1m
(read score)

Note that newfs and mount have to use the .eli device; running them on 
the plain /dev/da0 would bypass the encryption layer entirely.
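If the filesystem read-ahead depth then turns out to be the limiting 
factor, he can also play with the vfs.read_max sysctl mentioned above. 
A minimal example; the value 32 is just something to experiment with, 
not a tested recommendation:

sysctl vfs.read_max
sysctl vfs.read_max=32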

Re-running it through the filesystem *should* give him higher 
performance. Also, Dominic might try increasing the filesystem block 
size, e.g. "newfs -b 32768 /dev/da0.eli". Without that, throughput 
seems to hit a ceiling at roughly 130MB/s. I once wrote about this on 
the mailing list, where Bruce Evans questioned the usefulness of a 
block size larger than 16KB. I still have to investigate this further; 
it's on my to-do list. Deeplink: 
http://lists.freebsd.org/pipermail/freebsd-fs/2006-October/002298.html
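
A minimal sketch of that variant, again assuming the filesystem lives 
on the .eli device (the -f value is my addition; UFS conventionally 
keeps the fragment size at one eighth of the block size):

newfs -b 32768 -f 4096 /dev/da0.eli
mount /dev/da0.eli /test
dd if=/dev/zero of=/test/zerofile.000 bs=1m count=2000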

I tried to recreate a test scenario myself, using four disks in a 
striping configuration (RAID0), first reading and writing on the raw 
.eli device, then on a mounted filesystem (a sketch of the setup 
commands follows below the numbers):

** raw device
# dd if=/dev/stripe/data.eli of=/dev/null bs=1m count=2000
2097152000 bytes transferred in 57.949793 secs (36189120 bytes/sec)
# dd if=/dev/zero of=/dev/stripe/data.eli bs=1m count=2000
1239416832 bytes transferred in 35.168374 secs (35242370 bytes/sec)

** mounted default newfs
# dd if=/dev/zero of=/test/zerofile.000 bs=1m count=2000
2097152000 bytes transferred in 47.843614 secs (43833478 bytes/sec)
# dd if=/test/zerofile.000 of=/dev/null bs=1m count=2000
2097152000 bytes transferred in 50.328749 secs (41669067 bytes/sec)

This was on a simple single-core Sempron K8 CPU, but even there the 
deeper I/O queue that VFS/UFS provides already makes a difference.
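
For anyone who wants to reproduce a similar setup, it can be put 
together roughly like this; the disk names, stripe size and GELI 
sector size are assumptions on my part, not necessarily what I used:

gstripe label -s 131072 data /dev/ad4 /dev/ad6 /dev/ad8 /dev/ad10
geli onetime -s 4096 /dev/stripe/data
newfs /dev/stripe/data.eli
mount /dev/stripe/data.eli /test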

Good luck, Dominic, and be sure to post again when you have new scores! 
I'm interested to see how far you can push GELI with a quad-core. :)

- Veronica


