From owner-freebsd-scsi@FreeBSD.ORG Fri Feb 18 22:52:05 2011
Date: Fri, 18 Feb 2011 15:52:04 -0700
From: "Kenneth D. Merry" <ken@kdm.org>
To: Dmitry Morozovsky
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org
Subject: Re: mps(4) driver (LSI 6Gb SAS) committed to stable/8
Message-ID: <20110218225204.GA84087@nargothrond.kdm.org>
References: <20110218164209.GA77903@nargothrond.kdm.org>

On Sat, Feb 19, 2011 at 01:08:41 +0300, Dmitry Morozovsky wrote:
> On Fri, 18 Feb 2011, Kenneth D. Merry wrote:
>
> KDM> I just merged the mps(4) driver to stable/8, for those of you with
> KDM> LSI 6Gb SAS hardware.
>
> [snip]
>
> Again, thank you very much, Ken.  I'm planning to stress test this on an
> 846 case filled with 12 (so far) WD RE4 disks organized as raidz2, and
> will post the results.
>
> Any hints on particular I/O stressing patterns?
> Off the top of my head, I'm planning multiple parallel -j'ed builds,
> parallel tars, and *SQL benchmarks -- what else would you suggest?

The best stress test I have found has been to just do a single sequential
write stream with ZFS, i.e.:

	cd /path/to/zfs/pool
	dd if=/dev/zero of=foo bs=1M

Just let it run for a long period of time and see what happens.

What model controller do you have, and what firmware do you have on it?

I have run into some bugs with the LSI 2.0 firmware, notably that you'll
get IOC Busy errors as well as some bogus invalid-LBA errors with SATA
disks.  (I'm guessing it wouldn't happen with SAS disks.)  The 8.0 firmware
is better, but the version for the 9211-8i is not able to recognize large
numbers of drives (more than 20).  (I reported that to LSI and they
supplied a fix.)

Ken
--
Kenneth Merry
ken@FreeBSD.ORG
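The sequential-write test above can be wrapped in a small sh loop so it runs for a fixed period and reports progress.  This is only a sketch, not anything from Ken's mail: the POOL_DIR and RUNTIME variables and the stress file name are assumptions for illustration; point POOL_DIR at your ZFS pool and raise RUNTIME to hours for a real burn-in.

```shell
#!/bin/sh
# Sketch of the suggested stress test: a single sequential write stream
# into the pool, repeated until RUNTIME seconds have elapsed.
POOL_DIR="${POOL_DIR:-/tmp}"   # assumption: e.g. /path/to/zfs/pool
RUNTIME="${RUNTIME:-2}"        # assumption: seconds; use hours for real testing

end=$(( $(date +%s) + RUNTIME ))
pass=0
while [ "$(date +%s)" -lt "$end" ]; do
    pass=$((pass + 1))
    # bs=1M matches the suggested transfer size; count bounds each pass
    # so the loop can report progress instead of filling the pool.
    dd if=/dev/zero of="$POOL_DIR/stress.$$" bs=1M count=8 2>/dev/null
done
rm -f "$POOL_DIR/stress.$$"
echo "completed $pass sequential-write passes"
```

While it runs, watch the console (dmesg) for the IOC Busy and invalid-LBA errors described in the mail; those are the failure modes this workload has been shaking out.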