From owner-freebsd-scsi@FreeBSD.ORG  Sat Feb 19 00:48:02 2011
Return-Path: <owner-freebsd-scsi@FreeBSD.ORG>
Delivered-To: freebsd-scsi@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 90351106566B
	for <freebsd-scsi@freebsd.org>; Sat, 19 Feb 2011 00:48:02 +0000 (UTC)
	(envelope-from jdc@koitsu.dyndns.org)
Received: from qmta15.emeryville.ca.mail.comcast.net
	(qmta15.emeryville.ca.mail.comcast.net [76.96.27.228])
	by mx1.freebsd.org (Postfix) with ESMTP id 705AC8FC16
	for <freebsd-scsi@freebsd.org>; Sat, 19 Feb 2011 00:48:02 +0000 (UTC)
Received: from omta20.emeryville.ca.mail.comcast.net ([76.96.30.87])
	by qmta15.emeryville.ca.mail.comcast.net with comcast
	id 9cMt1g0011smiN4AFcapTs; Sat, 19 Feb 2011 00:34:49 +0000
Received: from koitsu.dyndns.org ([98.248.33.18])
	by omta20.emeryville.ca.mail.comcast.net with comcast
	id 9can1g00p0PUQVN8gcan5l; Sat, 19 Feb 2011 00:34:48 +0000
Received: by icarus.home.lan (Postfix, from userid 1000)
	id 43FB59B422; Fri, 18 Feb 2011 16:34:47 -0800 (PST)
Date: Fri, 18 Feb 2011 16:34:47 -0800
From: Jeremy Chadwick <freebsd@jdc.parodius.com>
To: Kevin Oberman <oberman@es.net>
Message-ID: <20110219003447.GA70019@icarus.home.lan>
References: <20110218231306.GA69028@icarus.home.lan>
	<20110219000521.9918B1CC29@ptavv.es.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20110219000521.9918B1CC29@ptavv.es.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org,
	"Kenneth D. Merry" <ken@freebsd.org>, Dmitry Morozovsky <marck@rinet.ru>
Subject: Re: mps(4) driver (LSI 6Gb SAS) commited to stable/8
X-BeenThere: freebsd-scsi@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: SCSI subsystem <freebsd-scsi.freebsd.org>
List-Unsubscribe: <http://lists.freebsd.org/mailman/listinfo/freebsd-scsi>,
	<mailto:freebsd-scsi-request@freebsd.org?subject=unsubscribe>
List-Archive: <http://lists.freebsd.org/pipermail/freebsd-scsi>
List-Post: <mailto:freebsd-scsi@freebsd.org>
List-Help: <mailto:freebsd-scsi-request@freebsd.org?subject=help>
List-Subscribe: <http://lists.freebsd.org/mailman/listinfo/freebsd-scsi>,
	<mailto:freebsd-scsi-request@freebsd.org?subject=subscribe>
X-List-Received-Date: Sat, 19 Feb 2011 00:48:02 -0000

On Fri, Feb 18, 2011 at 04:05:21PM -0800, Kevin Oberman wrote:
> > Date: Fri, 18 Feb 2011 15:13:06 -0800
> > From: Jeremy Chadwick <freebsd@jdc.parodius.com>
> > Sender: owner-freebsd-stable@freebsd.org
> > 
> > On Sat, Feb 19, 2011 at 02:05:33AM +0300, Dmitry Morozovsky wrote:
> > > On Fri, 18 Feb 2011, Kenneth D. Merry wrote:
> > > 
> > > KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of you with LSI 6Gb
> > > KDM> > KDM> SAS hardware.
> > > KDM> > 
> > > KDM> > [snip]
> > > KDM> > 
> > > KDM> > Again, thank you very much Ken.  I'm planning to stress-test this on an 846 case 
> > > KDM> > filled (so far) with 12 WD RE4 disks organized as raidz2, and will post the 
> > > KDM> > results.
> > > KDM> > 
> > > KDM> > Any hints on particularly I/O-stressing patterns?  Off the top of my head, I'm 
> > > KDM> > planning multiple parallel -j'ed builds, parallel tars, and *SQL benchmarks -- 
> > > KDM> > what else would you suggest?
> > > KDM> 
> > > KDM> The best stress test I have found has been to just do a single sequential
> > > KDM> write stream with ZFS.  i.e.:
> > > KDM> 
> > > KDM> cd /path/to/zfs/pool
> > > KDM> dd if=/dev/zero of=foo bs=1M
> > > KDM> 
> > > KDM> Just let it run for a long period of time and see what happens.
> > > 
> > > Well, provided that I'm planning to have ZFSv28 in place, wouldn't 
> > > /dev/random be more appropriate?
> > 
> > No -- /dev/urandom maybe, but not /dev/random.  /dev/urandom will also
> > induce significantly higher CPU load than /dev/zero will.  Don't forget
> > that ZFS is a processor-centric (read: no offloading) system.
> > 
> > I tend to try different block sizes (starting at bs=8k and working up to
> > bs=256k) for sequential benchmarks.  The "sweet spot" on most disks I've
> > found is 64k.  Otherwise use benchmarks/bonnie++.
> 
> When FreeBSD updated its random number engine a couple of years ago,
> random and urandom became the same thing. Unless I am missing something,
> a switch should make no difference.

Your comments and Adam's are both valid.  I work on a multitude of OSes
(specifically Solaris, Linux, and FreeBSD), so I tend to use whatever
behaves the same universally (/dev/urandom in this case).  Sorry for
the mix-up/noise.
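For what it's worth, the block-size sweep I described (bs=8k up through
bs=256k) can be scripted as a small /bin/sh loop.  This is just a sketch,
not something from the driver work itself; the TARGET path and COUNT_MB
total are placeholders -- point TARGET at a file inside the pool under
test and raise COUNT_MB well past the host's RAM for a meaningful run:

```shell
#!/bin/sh
# Sweep dd block sizes over a fixed data volume and report the
# throughput line dd prints on stderr at completion.
# TARGET and COUNT_MB are assumptions; adjust for the pool under test.
TARGET=${TARGET:-/tmp/dd-sweep.bin}
COUNT_MB=${COUNT_MB:-32}        # total data per pass, in MiB

for bs in 8k 16k 32k 64k 128k 256k; do
    # Strip the trailing "k" to get the block size in KiB, then
    # compute a count that keeps the total volume constant per pass.
    bytes=$(( ${bs%k} * 1024 ))
    count=$(( COUNT_MB * 1024 * 1024 / bytes ))
    echo "=== bs=${bs} ==="
    dd if=/dev/zero of="$TARGET" bs="$bs" count="$count" 2>&1 | tail -1
    rm -f "$TARGET"
done
```

Swap /dev/zero for /dev/urandom to defeat ZFS compression, at the cost
of the extra CPU load mentioned above.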

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.               PGP 4BD6C0CB |