Date: Fri, 18 Feb 2011 15:13:06 -0800
From: Jeremy Chadwick
To: Dmitry Morozovsky
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org, "Kenneth D. Merry"
Subject: Re: mps(4) driver (LSI 6Gb SAS) committed to stable/8
Message-ID: <20110218231306.GA69028@icarus.home.lan>

On Sat, Feb 19, 2011 at 02:05:33AM +0300, Dmitry Morozovsky wrote:
> On Fri, 18 Feb 2011, Kenneth D. Merry wrote:
>
> KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of you
> KDM> > KDM> with LSI 6Gb SAS hardware.
> KDM> >
> KDM> > [snip]
> KDM> >
> KDM> > Again, thank you very much Ken.  I'm planning to stress test this
> KDM> > on an 846 case filled with 12 WD RE4 disks (so far) organized as
> KDM> > raidz2, and will post the results.
> KDM> >
> KDM> > Any hints on particularly stressful I/O patterns?  Off the top of
> KDM> > my head, I'm planning multiple parallel -j'ed builds, parallel
> KDM> > tars, *SQL benchmarks -- what else would you suggest?
> KDM>
> KDM> The best stress test I have found has been to just do a single
> KDM> sequential write stream with ZFS, i.e.:
> KDM>
> KDM> cd /path/to/zfs/pool
> KDM> dd if=/dev/zero of=foo bs=1M
> KDM>
> KDM> Just let it run for a long period of time and see what happens.
>
> Well, given that I'm planning to have ZFSv28 in place, wouldn't
> /dev/random be more appropriate?

No -- /dev/urandom maybe, but not /dev/random.  /dev/urandom will also
induce significantly higher CPU load than /dev/zero will.  Don't forget
that ZFS is a processor-centric (read: no offloading) system.

I tend to try different block sizes (starting at bs=8k and working up
to bs=256k) for sequential benchmarks.  The "sweet spot" I've found on
most disks is 64k.

Otherwise, use benchmarks/bonnie++.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.               PGP 4BD6C0CB |
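
To make the block-size sweep described above concrete, here is a minimal
sh sketch.  The pool path /pool, the 4 GB per-pass size, and the ddtest
filename are assumptions; adjust them for the pool under test.

    #!/bin/sh
    # Minimal sketch of the bs=8k..256k sequential-write sweep.
    # /pool and the 4 GB pass size are assumptions -- adjust for the
    # machine under test.  dd(1) prints throughput when each pass ends.
    total=$((4 * 1024 * 1024 * 1024))            # bytes written per pass
    for bs in 8192 16384 32768 65536 131072 262144; do
        echo "=== bs=${bs} ==="
        # Swapping /dev/zero for /dev/urandom here demonstrates the
        # extra PRNG-induced CPU load mentioned above.
        dd if=/dev/zero of=/pool/ddtest bs=${bs} count=$((total / bs))
        rm -f /pool/ddtest
    done

Fixing the total bytes per pass keeps the runs comparable across block
sizes, instead of letting larger block sizes write more data.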
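
For the bonnie++ route, a hedged example invocation follows; the test
directory, size, and user are assumptions, and -s should comfortably
exceed physical RAM so the run is not served out of the ARC.

    # Install from ports, then point it at a directory on the pool.
    cd /usr/ports/benchmarks/bonnie++ && make install clean
    # -d: test directory, -s: file size in MB, -u: user to run as.
    # 32768 MB here assumes a machine with 16 GB of RAM or less.
    bonnie++ -d /pool/bench -s 32768 -u nobody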