From: "Benjeman J. Meekhof" <bmeekhof@umich.edu>
Date: Wed, 26 Mar 2008 02:00:00 -0400
To: Ivan Voras
Cc: freebsd-performance@freebsd.org
Subject: Re: performance tuning on perc6 (LSI) controller

Hi Ivan,

Thanks for the response. Your response quotes my initial uneven results,
but are you also implying that I most likely cannot do better than the
later results, which use a larger filesystem blocksize?

gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
#newfs -U -b 65536 /dev/stripe/test
#write: 19.240875 secs (558052492 bytes/sec)
#read: 20.000606 secs (536854644 bytes/sec)

(iozone showed reasonably similar results - depending on recordsize it
would mostly be writing/reading around 500MB/s, though lows of 300MB/s
were recorded in some read situations.)
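For anyone wanting to reproduce this, the full sequence was roughly as
follows. This is a sketch, not a verbatim transcript: the mountpoint
/mnt/test and the dd file name/count are my assumptions here, and note
that gstripe and newfs destroy any data on the target devices.

```shell
# Stripe the two PERC6 logical disks with a 128k stripe size
# (destructive: wipes existing metadata on both devices).
gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2

# newfs with the largest blocksize UFS2 accepts (64k; 128k is rejected),
# soft updates enabled.
newfs -U -b 65536 /dev/stripe/test
mount /dev/stripe/test /mnt/test

# Sequential write, then read back, of a 10 GB file (sizes assumed here).
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=10240
dd if=/mnt/test/bigfile of=/dev/null bs=1M count=10240
```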
I suppose my real question is whether there is some inherent limit in
UFS2, FreeBSD, or geom that would prevent going higher than this. Maybe
that's really not possible to answer, but certainly I plan to explore a
few more configurations. Most of my tuning so far has been trial and
error to get to this point, and all I ended up doing to finally get good
results was changing the filesystem blocksize to the maximum possible (I
wanted to go to 128k, but newfs doesn't let you do that). Apparently
UFS2 and/or geom interact with the controller differently than Linux/XFS
does. This is no great surprise.

thanks,
Ben

Ivan Voras wrote:
> Benjeman J. Meekhof wrote:
>
>> My baseline was this - on linux 2.6.20 we're doing 800MB/s write and
>> greater read with this configuration: 2 raid6 volumes striped into a
>> raid0 volume using linux software raid, XFS filesystem. Each raid6 is
>> a volume on one controller using 30 PD. We've spent time tuning this,
>> more than I have with FreeBSD so far.
>
>> time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
>> 10737418240 bytes transferred in 26.473629 secs (405589209 bytes/sec)
>> time dd if=/test/deletafile of=/dev/null bs=1M count=10240
>> 10737418240 bytes transferred in 157.700367 secs (68087465 bytes/sec)
>
> I had a similar ratio of results when comparing FreeBSD+UFS to most
> high-performance Linux file systems (XFS is really great!), so I'd
> guess it's about as fast as you can get with this combination.
>
>> Any other suggestions to get best throughput? There is also HW RAID
>> stripe size to adjust larger or smaller. ZFS is also on the list for
>> testing. Should I perhaps be running -CURRENT or -STABLE to get the
>> best results with ZFS?
>
> ZFS will be up to 50% faster on tests such as yours, so you should
> definitely try it. Unfortunately it's not stable and you probably
> don't want to use it in production. AFAIK there are no significant
> differences between ZFS in -current and -stable.
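Incidentally, dd's "bytes/sec" figure is just bytes divided by elapsed
time. A small helper (my own convenience function, not from either
system under test) converts the figures quoted in this thread to MB/s
for easier comparison:

```shell
# bytes / seconds / 10^6 = MB/s (decimal megabytes, as dd reports), rounded
rate() { awk -v b="$1" -v s="$2" 'BEGIN { printf "%.0f\n", b / s / 1e6 }'; }

rate 10737418240 26.473629    # Linux/XFS write
rate 10737418240 157.700367   # Linux/XFS read
rate 10737418240 19.240875    # FreeBSD gstripe+UFS2 write
```

The first two calls print 406 and 68, matching the ~6:1 write/read gap
in the quoted Linux numbers; the third prints 558 for the tuned
FreeBSD run.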
--
Benjeman Meekhof - UM ATLAS/AGLT2 Computing
office: 734-764-3450  cell: 734-417-6312