From owner-freebsd-fs@FreeBSD.ORG Thu Mar 17 07:24:01 2011
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 60A931065677
	for ; Thu, 17 Mar 2011 07:24:01 +0000 (UTC)
	(envelope-from marcus@blazingdot.com)
Received: from marklar.blazingdot.com (marklar.blazingdot.com [207.154.84.83])
	by mx1.freebsd.org (Postfix) with SMTP id 315C48FC1B
	for ; Thu, 17 Mar 2011 07:24:01 +0000 (UTC)
Received: (qmail 52327 invoked by uid 503); 17 Mar 2011 06:57:20 -0000
Date: Wed, 16 Mar 2011 22:57:20 -0800
From: Marcus Reid
To: Lorenzo Perone
Message-ID: <20110317065720.GA49199@blazingdot.com>
References: <4D7F7E33.7050103@yellowspace.net> <4D80BFB3.20706@yellowspace.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4D80BFB3.20706@yellowspace.net>
X-Coffee-Level: nearly-fatal
User-Agent: Mutt/1.5.6i
Cc: freebsd-fs@freebsd.org, Ivan Voras
Subject: Re: gmirror performance
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
X-List-Received-Date: Thu, 17 Mar 2011 07:24:01 -0000

On Wed, Mar 16, 2011 at 02:48:35PM +0100, Lorenzo Perone wrote:
> On 16.03.11 13:00, Ivan Voras wrote:
> 
> >On 15/03/2011 15:56, Lorenzo Perone wrote:
> ...
> >>I'd expect read performance to be noticeably higher than write
> >>performance. Why is it not the case? Wrong expectation? :/
> 
> >Maybe. You can't expect that RAID-1 will have as good performance as
> >RAID-0 but you might achieve better performance for sequential reads
> >with long buffers. Try setting the vfs.read_max sysctl to 128 and see if
> >it helps you.
> 
> It *does* help!
> 
> Thanx a great lot!
> I knew it was a PEBKAC :)
> 
> sysctl vfs.read_max=128
> gmirror configure -b load mirr0
> 
> just gave me 70MB/s more when reading (256640376 bytes/sec) :)
> 
> >(you might want to leave the gmirror algorithm to the
> >default "load" and increase the stripe size to something sane, like 16k).
> 
> If You meant gmirror configure -s 16384 mirr0: this didn't change
> anything for -b load, as expected, but it did change a little for -b split.
> 
> To sum up some results, fwimc:
> 
> test case:
> 
> umount /mnt && mount /dev/mirror/mirr0p4 /mnt && \
>   dd if=/mnt/2gigfile.dat bs=1m of=/dev/null
> 
> * with default vfs.read_max=8
> 
> -b split -s 2048:  173875942 bytes/sec
> -b load:           195143412 bytes/sec
> 
> * with vfs.read_max=128
> 
> -b split -s 2048:  191024137 bytes/sec
> -b load:           258329216 bytes/sec

Wow, that's great.  I just almost doubled big sequential read
performance on one of my machines with this too.  The question now is
why the defaults are the way they are...  Does a big vfs.read_max
(described as "Cluster read-ahead max block count") pessimize
performance in some other way?

Marcus
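
P.S. A note on persistence, for anyone applying this tuning later: the
balance algorithm set with "gmirror configure -b load mirr0" is written
to the mirror's on-disk metadata and so survives reboots, but a value
set with sysctl(8) at the command line reverts to the default on the
next boot. Assuming a stock FreeBSD setup, the read-ahead setting can
be carried across reboots with an entry in /etc/sysctl.conf, e.g.:

    # /etc/sysctl.conf -- keep the larger cluster read-ahead
    # (the value 128 is the one that helped in this thread; tune to taste)
    vfs.read_max=128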