From owner-freebsd-geom@FreeBSD.ORG Mon Jul 30 20:35:41 2007
Message-ID: <46AE4B94.8010107@fluffles.net>
Date: Mon, 30 Jul 2007 22:35:32 +0200
From: Fluffles <etc@fluffles.net>
To: Pawel Jakub Dawidek
Cc: geom@FreeBSD.org, Dominic Bishop
Subject: Re: Increasing GELI performance
List-Id: GEOM-specific discussions and implementations

Pawel Jakub Dawidek wrote:
> On Fri, Jul 27, 2007 at 10:00:35PM +0100, Dominic Bishop wrote:
>
>> I just tried your suggestion of geli on the raw device and it is no
>> better at all:
>>
>> dd if=/dev/da0.eli of=/dev/null bs=1m count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes transferred in 29.739186 secs (35259069 bytes/sec)
>>
>> dd if=/dev/zero of=/dev/da0.eli bs=1m count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000
>> bytes transferred in 23.501061 secs (44618241 bytes/sec)
>>
>> Using top -S with a 1-second refresh to list the geli processes while
>> doing this, it seems only one of them is doing anything at any given
>> time; the others are sitting in a state of "geli:w". I assume that is
>> a truncation of something, maybe "geli:wait" at a guess.
>
> It doesn't matter how many cores/CPUs you have if you run a
> single-threaded application. What happens, exactly, is:
>
> 1. Send a read of 128 kB.
> 2. One of the geli threads picks it up, decrypts it and sends it back.
> 3. Send the next read of 128 kB.
> 4. One of the geli threads picks it up, decrypts it and sends it back.
> ...
>
> All threads will be used only when there are multiple threads accessing
> the provider.

But isn't it true that the UFS filesystem uses read-ahead, and with it a
queue depth of multiple I/Os (somewhere between 7 and 9 queued I/Os), even
when using something like dd to sequentially read a file on a mounted
filesystem? Then this read-ahead would cause multiple I/O requests to come
in, and geom_eli could use multiple threads to maximize I/O throughput.

Maybe Dominic can try playing with the "vfs.read_max" sysctl variable.

- Veronica
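The single-stream bottleneck Pawel describes can be seen by comparing one
sequential dd reader against several concurrent readers: with one stream
there is at most one outstanding request for the geli workers, while several
streams at different offsets keep several workers busy at once. A minimal
sketch, using a temporary file as a stand-in for /dev/da0.eli so the
commands can be run anywhere (bs=1048576 is the portable spelling of
FreeBSD's bs=1m; the offsets and sizes are illustrative, not tuned):

```shell
# Minimal sketch: single-stream vs. multi-stream sequential reads.
# A temporary file stands in for /dev/da0.eli; substitute the real
# .eli provider to exercise the geli worker threads themselves.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1048576 count=64 2>/dev/null   # 64 MB test file

# One reader: requests are issued one at a time, so at most one geli
# thread would be decrypting at any moment.
dd if="$f" of=/dev/null bs=1048576 2>/dev/null

# Four readers at different 16 MB offsets: independent outstanding
# requests that separate geli threads could service in parallel.
for off in 0 16 32 48; do
    dd if="$f" of=/dev/null bs=1048576 count=16 skip=$off 2>/dev/null &
done
wait
echo "all readers finished"
rm -f "$f"
```

On a real .eli device, the multi-stream case is where top -S should show
more than one geli kernel thread leaving the "geli:w" state. Raising
vfs.read_max (e.g. sysctl vfs.read_max=32) increases UFS read-ahead, which
is another way to get more requests in flight; 32 is only an example value,
not a recommendation from this thread.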