From owner-freebsd-stable@FreeBSD.ORG Sat Feb 19 00:05:22 2011
To: Jeremy Chadwick
In-reply-to: Your message of "Fri, 18 Feb 2011 15:13:06 PST." <20110218231306.GA69028@icarus.home.lan>
Date: Fri, 18 Feb 2011 16:05:21 -0800
From: "Kevin Oberman" <oberman@es.net>
Message-Id: <20110219000521.9918B1CC29@ptavv.es.net>
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org, "Kenneth D. Merry", Dmitry Morozovsky
Subject: Re: mps(4) driver (LSI 6Gb SAS) committed to stable/8

> Date: Fri, 18 Feb 2011 15:13:06 -0800
> From: Jeremy Chadwick
> Sender: owner-freebsd-stable@freebsd.org
>
> On Sat, Feb 19, 2011 at 02:05:33AM +0300, Dmitry Morozovsky wrote:
> > On Fri, 18 Feb 2011, Kenneth D. Merry wrote:
> >
> > KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of you with LSI 6Gb
> > KDM> > KDM> SAS hardware.
> > KDM> >
> > KDM> > [snip]
> > KDM> >
> > KDM> > Again, thank you very much, Ken. I'm planning to stress test this on an 846 case
> > KDM> > filled (so far) with 12 WD RE4 disks organized as raidz2, and will post the
> > KDM> > results.
> > KDM> >
> > KDM> > Any hints on particularly stressful I/O patterns? Off the top of my head, I'm planning
> > KDM> > multiple parallel -j'ed builds, parallel tars, *SQL benchmarks -- what else
> > KDM> > would you suggest?
> > KDM>
> > KDM> The best stress test I have found has been to just do a single sequential
> > KDM> write stream with ZFS, i.e.:
> > KDM>
> > KDM> cd /path/to/zfs/pool
> > KDM> dd if=/dev/zero of=foo bs=1M
> > KDM>
> > KDM> Just let it run for a long period of time and see what happens.
> >
> > Well, given that I'm planning to have ZFSv28 in place, wouldn't
> > /dev/random be more appropriate?
>
> No -- /dev/urandom maybe, but not /dev/random. /dev/urandom will also
> induce significantly higher CPU load than /dev/zero will. Don't forget
> that ZFS is a processor-centric (read: no offloading) system.
>
> I tend to try different block sizes (starting at bs=8k and working up to
> bs=256k) for sequential benchmarks. The "sweet spot" on most disks I've
> found is 64k. Otherwise use benchmarks/bonnie++.

When FreeBSD updated its random number engine a couple of years ago,
random and urandom became the same thing. Unless I am missing something,
switching between them should make no difference.
--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net                  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
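
For anyone wanting to reproduce the sequential-write sweep Jeremy describes,
here is a rough sketch, assuming a pool mounted at /pool; the mount point,
file names, and 8 GB pass size are placeholders, not details from this thread:

    #!/bin/sh
    # Sequential write sweep across the block sizes mentioned above,
    # writing roughly 8 GB per pass.  /dev/zero keeps CPU overhead low,
    # but with ZFS compression enabled the all-zero blocks compress away;
    # substitute /dev/urandom (at a CPU cost) for incompressible data.
    cd /pool || exit 1
    total=$((8 * 1024 * 1024 * 1024))        # bytes written per block size
    for bs in 8192 16384 32768 65536 131072 262144; do
        count=$((total / bs))
        echo "bs=${bs} count=${count}"
        dd if=/dev/zero of=ddtest.${bs} bs=${bs} count=${count}
        rm ddtest.${bs}
    done

dd on FreeBSD reports bytes transferred and throughput when each pass
finishes, which makes it easy to spot the "sweet spot" block size Jeremy
mentions.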
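
On the random/urandom point, a quick way to see what Kevin means on a
FreeBSD box of this era (the command is illustrative; run it on your own
system to confirm):

    # /dev/urandom has been a symbolic link to /dev/random since the
    # random(4) device was reworked, so both names reach the same
    # generator, and feeding dd from either should behave identically.
    ls -l /dev/random /dev/urandom

The practical difference for the stress test is therefore only the one
Jeremy raises: reading any random device costs far more CPU than
/dev/zero, and CPU is exactly what ZFS itself is competing for.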