From owner-freebsd-fs@FreeBSD.ORG Sun Oct 21 15:54:14 2012
Subject: Re: ZFS HBAs + LSI chip sets (Was: ZFS hang (system #2))
From: Dennis Glatting <freebsd@penx.com>
To: Freddie Cash
Cc: freebsd-fs@freebsd.org
Date: Sun, 21 Oct 2012 08:54:08 -0700
Message-ID: <1350834848.88577.33.camel@btw.pki2.com>

On Sat, 2012-10-20 at 23:52 -0700, Freddie Cash wrote:
> On Oct 20, 2012 5:11 PM, "Dennis Glatting" wrote:
> >
> > I chose the LSI2008 chip set because the code was donated by LSI,
> > and they therefore demonstrated interest in supporting their
> > products under FreeBSD, and
> > that chip set is found in a lot of places, notably Supermicro
> > boards. Additionally, there were stories of success on the lists
> > for several boards. That said, I have received private email from
> > others expressing frustration with ZFS and the "hang" problems,
> > which I believe also involve the LSI chips.
> >
> > I have two questions for the broader list:
> >
> > 1) What HBAs are you using for ZFS and what is your level
> > of success/stability? Also, what is your load?
>
> SuperMicro AOC-USAS-8i using the mpt(4) driver on FreeBSD 9-STABLE
> in one server (alpha).
>
> SuperMicro AOC-USAS2-8i using the mps(4) driver on FreeBSD 9-STABLE
> in 2 servers (beta and omega).
>
> I think they were updated on Oct 10ish.
>
> The alpha box runs 12 parallel rsync processes to back up 50-odd
> Linux servers across multiple data centres.
>
> The beta box runs 12 parallel rsync processes to back up 100-odd
> Linux and FreeBSD servers across 50-odd buildings.
>
> Both boxes use zfs send to replicate the data to omega (each box
> saturates a 1 Gbps link during the zfs send).
>
> Alpha and omega have 24 SATA 3 Gbps hard drives, configured as 3x
> 8-drive raidz2 vdevs, with a 32 GB SSD split between OS, log vdev,
> and cache vdev.
>
> Beta has 16 SATA 6 Gbps hard drives, configured into 3x 5-drive
> raidz2 vdevs, with a cold spare, and a 32 GB SSD split between OS,
> log vdev, and cache vdev.
>
> All three have been patched to support feature flags. All three
> have dedupe enabled, compression enabled, and HPN SSH patches with
> the NONE cipher enabled.
>
> All three run without any serious issues. The only issues we've had
> are 3, maybe 4, situations where I've tried to destroy multi-TB
> filesystems without enough RAM in the machine. We're now running a
> minimum of 32 GB of RAM, with 64 GB in one box.
>
> > 2) How well are the LSI chip sets supported under FreeBSD?
>
> I have no complaints.
> And we're ordering a bunch of LSI 9200-series controllers for new
> servers (PCI brackets instead of UIO).

Perhaps I am doing something fundamentally wrong with my SSDs.
Currently I simply add them to a pool after ashift-aligning them via
gnop (e.g., -S 4096, depending on page size). I remember reading
somewhere about offsets to ensure data is page aligned but, IIRC,
that was strictly a performance issue. Are you doing something
different?
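For reference, the gnop sequence I use looks roughly like the
following. Device names (da0/da1), the mirror layout, and the pool
name "tank" are placeholders; adjust for your own hardware:

```sh
# Create 4K-sector nop providers on top of the raw disks so that
# zpool sees a 4096-byte sector size and selects ashift=12.
gnop create -S 4096 /dev/da0
gnop create -S 4096 /dev/da1

# Create the pool (or "zpool add" to an existing one) against the
# .nop devices so the larger sector size is recorded in the labels.
zpool create tank mirror /dev/da0.nop /dev/da1.nop

# The ashift is fixed at vdev creation, so the nop providers can be
# removed afterwards; a re-import finds the real devices.
zpool export tank
gnop destroy /dev/da0.nop /dev/da1.nop
zpool import tank

# Sanity check -- "ashift: 12" should appear in the cached config.
zdb -C tank | grep ashift
```

As I understand it, this only sets the vdev's ashift; any partition
offset alignment would be a separate (performance-only) concern, per
the above.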