Date: Sat, 19 Feb 2011 02:05:33 +0300 (MSK)
From: Dmitry Morozovsky <marck@rinet.ru>
To: "Kenneth D. Merry" <ken@freebsd.org>
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org
Subject: Re: mps(4) driver (LSI 6Gb SAS) commited to stable/8
Message-ID: <alpine.BSF.2.00.1102190203470.14809@woozle.rinet.ru>
In-Reply-To: <20110218225204.GA84087@nargothrond.kdm.org>
References: <20110218164209.GA77903@nargothrond.kdm.org> <alpine.BSF.2.00.1102190104280.14809@woozle.rinet.ru> <20110218225204.GA84087@nargothrond.kdm.org>
On Fri, 18 Feb 2011, Kenneth D. Merry wrote:

KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of you with
KDM> > KDM> LSI 6Gb SAS hardware.
KDM> >
KDM> > [snip]
KDM> >
KDM> > Again, thank you very much Ken. I'm planning to stress test this on an
KDM> > 846 case filled with 12 (so far) WD RE4 disks organized as raidz2, and
KDM> > will post the results.
KDM> >
KDM> > Any hints on particular I/O stressing patterns? Off the top of my head,
KDM> > I'm planning multiple parallel -j'ed builds, parallel tars, *SQL
KDM> > benchmarks -- what else would you suggest?
KDM>
KDM> The best stress test I have found has been to just do a single sequential
KDM> write stream with ZFS, i.e.:
KDM>
KDM> cd /path/to/zfs/pool
KDM> dd if=/dev/zero of=foo bs=1M
KDM>
KDM> Just let it run for a long period of time and see what happens.

Well, given that I'm planning to have ZFSv28 in place, wouldn't /dev/random
be more appropriate?

-- 
Sincerely,
D.Marck                                     [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer:                                marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
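P.S. One possible twist on the sequential write test, if ZFSv28 compression
or dedup ends up enabled on the pool: a stream of zeroes may be collapsed
before it ever reaches the disks, while pulling everything straight from
/dev/random can leave the RNG as the bottleneck instead of the HBA. A rough
sketch (the chunk file name and size below are only placeholders, and the
pool path/output file are the same ones Ken used) would be to pre-generate
an incompressible chunk once and replay it:

    cd /path/to/zfs/pool
    # generate ~1 GB of incompressible data once
    dd if=/dev/random of=chunk.bin bs=1M count=1024
    # then stream copies of it into the pool for as long as you like
    while true; do cat chunk.bin; done | dd of=foo bs=1M

That keeps the writes incompressible without making the random device the
limiting factor.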