Date:      Sun, 21 Oct 2012 19:42:11 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        Dennis Glatting <freebsd@penx.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS HBAs + LSI chip sets (Was: ZFS hang (system #2))
Message-ID:  <CAOjFWZ6Cf0N1GdPu4VCU9tRM0ny_CWd5JOQ7vAY5qDESEFX5Vw@mail.gmail.com>
In-Reply-To: <1350834848.88577.33.camel@btw.pki2.com>
References:  <1350698905.86715.33.camel@btw.pki2.com> <1350711509.86715.59.camel@btw.pki2.com> <50825598.3070505@FreeBSD.org> <1350744349.88577.10.camel@btw.pki2.com> <1350765093.86715.69.camel@btw.pki2.com> <508322EC.4080700@FreeBSD.org> <1350778257.86715.106.camel@btw.pki2.com> <CAOjFWZ7G%2BaLPiPQTaUOE5oJY3So0cWYKvU86y4BZ2MQL%2BbqGMA@mail.gmail.com> <1350834848.88577.33.camel@btw.pki2.com>

On Oct 21, 2012 8:54 AM, "Dennis Glatting" <freebsd@penx.com> wrote:
>
> On Sat, 2012-10-20 at 23:52 -0700, Freddie Cash wrote:
> > On Oct 20, 2012 5:11 PM, "Dennis Glatting" <freebsd@pki2.com> wrote:
> > >
> > >
> > > I chose the LSI 2008 chip set because the code was donated by LSI,
> > > which demonstrated their interest in supporting their products under
> > > FreeBSD, and because that chip set is found in a lot of places, notably
> > > Supermicro boards. Additionally, there were stories of success on the
> > > lists for several boards. That said, I have received private email from
> > > others expressing frustration with ZFS and the "hang" problems, which I
> > > believe also involve the LSI chips.
> > >
> > > I have two questions for the broader list:
> > >
> > >  1) What HBAs are you using for ZFS and what is your level
> > >     of success/stability? Also, what is your load?
> >
> > SuperMicro AOC-USAS-8i using the mpt(4) driver on FreeBSD 9-STABLE in
> > one server (alpha).
> >
> > SuperMicro AOC-USAS2-8i using the mps(4) driver on FreeBSD 9-STABLE in 2
> > servers (beta and omega).
> >
> > I think they were updated on Oct 10ish.
> >
> > The alpha box runs 12 parallel rsync processes to back up 50-odd Linux
> > servers across multiple data centres.
> >
> > The beta box runs 12 parallel rsync processes to back up 100-odd Linux
> > and FreeBSD servers across 50-odd buildings.
> >
> > Both boxes use zfs send to replicate the data to omega (each box
> > saturates a 1 Gbps link during the zfs send).
> >
> > Alpha and omega have 24 SATA 3 Gbps harddrives, configured as 3x 8-drive
> > raidz2 vdevs, with a 32 GB SSD split between OS, log vdev, and cache
> > vdev.
> >
> > Beta has 16 SATA 6 Gbps harddrives, configured into 3x 5-drive raidz2
> > vdevs, with a cold-spare, and a 32 GB SSD split between OS, log vdev,
> > and cache vdev.
> >
> > All three have been patched to support feature flags.  All three have
> > dedupe enabled, compression enabled, and HPN SSH patches with the NONE
> > cipher enabled.
> >
> > All three run without any serious issues. The only issues we've had are
> > 3, maybe 4, situations where I've tried to destroy multi-TB filesystems
> > without enough RAM in the machine. We're now running a minimum of 32 GB
> > of RAM with 64 GB in one box.
> >
> > >  2) How well are the LSI chip sets supported under FreeBSD?
> >
> > I have no complaints. And we're ordering a bunch of LSI 9200-series
> > controllers for new servers (PCI brackets instead of UIO).
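
On the zfs send replication mentioned above: the pipeline on alpha and
beta is roughly of this shape (snapshot name, dataset names, and the ssh
options are illustrative, not our exact commands; the None* options only
exist with the HPN patches):

# zfs snapshot -r storage@2012-10-21
# zfs send -R storage@2012-10-21 | \
    ssh -o NoneEnabled=yes -o NoneSwitch=yes omega zfs recv -duv storage
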
>
>
> Perhaps I am doing something fundamentally wrong with my SSDs. Currently
> I simply add them to a pool after they have been ashift-aligned via gnop
> (e.g., -S 4096, depending on page size).
>
> I remember reading somewhere about offsets to ensure data is page
> aligned but, IIRC, this was strictly a performance issue. Are you doing
> something different?
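
For anyone following the thread, the gnop approach described above is
usually along these lines (device and pool names are placeholders, and
the .nop providers are only needed at pool-creation time):

# gnop create -S 4096 da0 da1 da2 da3
# zpool create tank raidz2 da0.nop da1.nop da2.nop da3.nop
# zpool export tank
# gnop destroy da0.nop da1.nop da2.nop da3.nop
# zpool import tank

The ashift is recorded in the vdev at creation time, so it survives the
gnop destroy and the re-import.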

All my harddisks are partitioned the same:
# gpart create -s gpt daX
# gpart add -b 2048 -t freebsd-zfs -l some-label daX

For the SSDs, the above are followed by multiple partitions that are on MB
boundaries.
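
As a rough example of what that looks like for a 32 GB SSD (device name,
sizes, and labels are made up for illustration, not our exact layout):

# gpart create -s gpt ada6
# gpart add -b 2048 -s 16g -t freebsd-zfs -l ssd-os ada6
# gpart add -a 1m -s 8g -t freebsd-zfs -l ssd-log ada6
# gpart add -a 1m -t freebsd-zfs -l ssd-cache ada6

Starting at sector 2048 (1 MB) and using -a 1m keeps each partition on a
1 MB boundary, which is also aligned for SSDs with 4 KB pages.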


