Date:      Mon, 30 Jul 2007 22:35:32 +0200
From:      Fluffles <etc@fluffles.net>
To:        Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc:        geom@FreeBSD.org, Dominic Bishop <dom@bishnet.net>
Subject:   Re: Increasing GELI performance
Message-ID:  <46AE4B94.8010107@fluffles.net>
In-Reply-To: <20070730192654.GO1092@garage.freebsd.pl>
References:  <46AA50B4.9080901@fluffles.net>	<20070727210032.0140413C457@mx1.freebsd.org> <20070730192654.GO1092@garage.freebsd.pl>


Pawel Jakub Dawidek wrote:
> On Fri, Jul 27, 2007 at 10:00:35PM +0100, Dominic Bishop wrote:
>   
>> I just tried your suggestion of geli on the raw device and it is no better
>> at all:
>>
>> dd if=/dev/da0.eli of=/dev/null bs=1m count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes transferred in 29.739186 secs (35259069 bytes/sec)
>>
>> dd if=/dev/zero of=/dev/da0.eli bs=1m count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes transferred in 23.501061 secs (44618241 bytes/sec)
>>
>> Using top -S with a 1s refresh to list the geli processes while doing this,
>> it seems only one of them is doing anything at any given time; the others are
>> sitting in a state of "geli:w". I assume that is a truncation of something,
>> maybe geli:wait at a guess.
>>     
>
> It doesn't matter how many cores/CPUs you have if you run a single-threaded
> application. What you do, exactly, is:
> 1. Send read of 128kB.
> 2. One of geli threads picks it up, decrypts and sends it back.
> 3. Send next read of 128kB.
> 4. One of geli threads picks it up, decrypts and sends it back.
> ...
>
> All threads will be used when there are more threads accessing the provider.
>   
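
(As an aside: one way to get several requests in flight on the raw .eli device, 
without a filesystem in between, would be to start a few readers at once. A 
rough sketch only, reusing the device name from Dominic's test and with 
arbitrary non-overlapping offsets:

dd if=/dev/da0.eli of=/dev/null bs=1m count=1000 &
dd if=/dev/da0.eli of=/dev/null bs=1m count=1000 skip=1000 &
dd if=/dev/da0.eli of=/dev/null bs=1m count=1000 skip=2000 &
wait

With three readers outstanding, more than one geli thread should show up as 
busy in top -S.)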

But isn't it true that the UFS filesystem uses read-ahead, and with that a 
queue depth of multiple I/Os (somewhere between 7 and 9 queued I/Os), even 
when using something like dd to sequentially read a file on a mounted 
filesystem? That read-ahead would cause multiple I/O requests to come in at 
once, so geom_eli could use multiple threads to maximize I/O throughput. 
Maybe Dominic can try playing with the "vfs.read_max" sysctl variable, for 
example as sketched below.
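
Something along these lines might be worth a try (illustrative value only, I 
don't know what the default is on Dominic's box):

sysctl vfs.read_max
sysctl vfs.read_max=32

and then repeating the dd run against a file on a filesystem mounted on the 
.eli provider, rather than against the raw provider, should show whether the 
read-ahead depth is the limiting factor.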

- Veronica


