Date: Fri, 18 Feb 2011 16:34:47 -0800
From: Jeremy Chadwick <freebsd@jdc.parodius.com>
To: Kevin Oberman <oberman@es.net>
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org, "Kenneth D. Merry" <ken@freebsd.org>, Dmitry Morozovsky <marck@rinet.ru>
Subject: Re: mps(4) driver (LSI 6Gb SAS) commited to stable/8
Message-ID: <20110219003447.GA70019@icarus.home.lan>
In-Reply-To: <20110219000521.9918B1CC29@ptavv.es.net>
References: <20110218231306.GA69028@icarus.home.lan> <20110219000521.9918B1CC29@ptavv.es.net>
On Fri, Feb 18, 2011 at 04:05:21PM -0800, Kevin Oberman wrote:
> > Date: Fri, 18 Feb 2011 15:13:06 -0800
> > From: Jeremy Chadwick <freebsd@jdc.parodius.com>
> > Sender: owner-freebsd-stable@freebsd.org
> >
> > On Sat, Feb 19, 2011 at 02:05:33AM +0300, Dmitry Morozovsky wrote:
> > > On Fri, 18 Feb 2011, Kenneth D. Merry wrote:
> > >
> > > KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of
> > > KDM> > KDM> you with LSI 6Gb SAS hardware.
> > > KDM> >
> > > KDM> > [snip]
> > > KDM> >
> > > KDM> > Again, thank you very much, Ken. I'm planning to stress test
> > > KDM> > this on an 846 case filled (so far) with 12 WD RE4 disks
> > > KDM> > organized as raidz2, and will post the results.
> > > KDM> >
> > > KDM> > Any hints on particular I/O stressing patterns? Off the top of
> > > KDM> > my head, I'm planning multiple parallel -j'ed builds, parallel
> > > KDM> > tars, *SQL benchmarks -- what else could you suggest?
> > > KDM>
> > > KDM> The best stress test I have found has been to just do a single
> > > KDM> sequential write stream with ZFS, i.e.:
> > > KDM>
> > > KDM> cd /path/to/zfs/pool
> > > KDM> dd if=/dev/zero of=foo bs=1M
> > > KDM>
> > > KDM> Just let it run for a long period of time and see what happens.
> > >
> > > Well, given that I'm planning to have ZFSv28 in place, wouldn't
> > > /dev/random be more appropriate?
> >
> > No -- /dev/urandom maybe, but not /dev/random. /dev/urandom will also
> > induce significantly higher CPU load than /dev/zero will. Don't forget
> > that ZFS is a processor-centric (read: no offloading) system.
> >
> > I tend to try different block sizes (starting at bs=8k and working up to
> > bs=256k) for sequential benchmarks. The "sweet spot" on most disks I've
> > found is 64k. Otherwise, use benchmarks/bonnie++.
>
> When FreeBSD updated its random number engine a couple of years ago,
> random and urandom became the same thing. Unless I am missing something,
> a switch should make no difference.

Your and Adam's comments are both valid. I tend to work on a multitude of
OSes (specifically Solaris, Linux, and FreeBSD), so I tend to use what
behaves the same universally (/dev/urandom in this case).

Sorry for the mix-up/noise.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.               PGP 4BD6C0CB |
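For reference, the block-size sweep described above can be scripted; the
following is a minimal sketch assuming a ZFS pool mounted at a hypothetical
/tank, writing roughly 8 GiB per pass and keeping only dd's throughput
summary line:

    #!/bin/sh
    # Sequential-write sweep over the block sizes discussed above (8k..256k).
    # /tank/ddtest is a hypothetical target on the pool under test; adjust it.
    total=$((8 * 1024 * 1024 * 1024))              # ~8 GiB written per pass
    for bs in 8192 16384 32768 65536 131072 262144; do
        echo "bs=${bs}"
        # dd prints its bytes/sec summary on stderr; keep only the last line.
        dd if=/dev/zero of=/tank/ddtest bs=${bs} count=$((total / bs)) 2>&1 | tail -1
        rm -f /tank/ddtest
    done

Substituting if=/dev/urandom exercises the case discussed above where
all-zero data would be trivially compressible, at the cost of the extra
CPU load also mentioned in the thread.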