From owner-freebsd-stable@FreeBSD.ORG Fri Feb 18 23:05:34 2011
Date: Sat, 19 Feb 2011 02:05:33 +0300 (MSK)
From: Dmitry Morozovsky <marck@rinet.ru>
To: "Kenneth D. Merry"
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org
In-Reply-To: <20110218225204.GA84087@nargothrond.kdm.org>
References: <20110218164209.GA77903@nargothrond.kdm.org> <20110218225204.GA84087@nargothrond.kdm.org>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Subject: Re: mps(4) driver (LSI 6Gb SAS) committed to stable/8

On Fri, 18 Feb 2011, Kenneth D. Merry wrote:

KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of you with
KDM> > KDM> LSI 6Gb SAS hardware.
KDM> >
KDM> > [snip]
KDM> >
KDM> > Again, thank you very much, Ken. I'm planning to stress test this on an
KDM> > 846 case filled with 12 (so far) WD RE4 disks organized as raidz2, and
KDM> > will post the results.
KDM> > Any hints on particularly stressful I/O patterns? Off the top of my
KDM> > head, I'm planning multiple parallel -j'ed builds, parallel tars, and
KDM> > *SQL benchmarks -- what else would you suggest?
KDM>
KDM> The best stress test I have found has been to just do a single sequential
KDM> write stream with ZFS, i.e.:
KDM>
KDM> cd /path/to/zfs/pool
KDM> dd if=/dev/zero of=foo bs=1M
KDM>
KDM> Just let it run for a long period of time and see what happens.

Well, given that I'm planning to have ZFSv28 in place, wouldn't /dev/random be
more appropriate?

--
Sincerely, D.Marck                       [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
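P.S. One middle-ground sketch: with compression enabled on the pool, a
/dev/zero stream can be largely compressed away, while reading /dev/random
for every byte is slow. Generating one random chunk up front and appending
copies of it keeps the stream incompressible at near-sequential speed. The
mountpoint, chunk size, and loop count below are illustrative assumptions,
not something from this thread:

```shell
#!/bin/sh
# Sketch only, not a tuned benchmark. POOL is an assumption --
# point it at the real ZFS mountpoint (e.g. /tank) and raise the
# loop count for an actual soak test.
POOL=${POOL:-/tmp}
CHUNK="$POOL/.chunk"

# One 4 MB random chunk up front (/dev/urandom so the read never
# blocks; on FreeBSD it is the same device as /dev/random).
dd if=/dev/urandom of="$CHUNK" bs=1048576 count=4 2>/dev/null

# Append copies of the chunk: incompressible data, sequential writes.
i=0
while [ "$i" -lt 4 ]; do
    cat "$CHUNK" >> "$POOL/foo"
    i=$((i + 1))
done
```

Note the data still repeats every chunk, so dedup (if enabled) would collapse
it; for a dedup test the chunk would need to be regenerated per iteration.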