Date:      Fri, 20 Jan 2012 09:50:34 +0100
From:      Peter Maloney <peter.maloney@brockmann-consult.de>
To:        freebsd-fs@freebsd.org
Subject:   Re: sanity check:  is 9211-8i, on 8.3, with IT firmware still "the one"
Message-ID:  <4F192ADA.5020903@brockmann-consult.de>
In-Reply-To: <alpine.BSF.2.00.1201191604510.19710@kozubik.com>
References:  <alpine.BSF.2.00.1201191604510.19710@kozubik.com>

John,

Various people have problems with mps and ZFS.

I am using 8-STABLE from October 2011, and on the 9211-8i HBA I am
using version 9 IT firmware. In my case, it was the firmware on an SSD
that caused problems:
Crucial M4-CT256M4SSD2 firmware 0001
Randomly it would fail. Trying to reproduce the failure with heavy I/O
didn't work, but hot pulling does: pulling the disk a few times while it
is mounted leaves it unresponsive (causing SCSI timeouts) until the
system is rebooted. Running "gpart recover da##" or "camcontrol reset
..." on the disk after it has been removed panics the kernel. The mpslsi
driver does not solve the problem with the CT256M4SSD2 on firmware 0001,
but firmware 0009 seems to work. The 'lost' disk works fine when moved
to another machine, but FreeBSD needs to be rebooted, presumably so that
some part of the hardware resets and forgets about the disk.
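
(For reference, the commands look roughly like this; "da4" and "0:4:0"
are only stand-ins for whatever device number the pulled disk had, and
"camcontrol devlist" is just there to show whether the kernel still
lists the disk at all:)
    camcontrol devlist      # does the kernel still list the disk?
    gpart recover da4       # panics the kernel for me
    camcontrol reset 0:4:0  # also panics the kernel for me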

Sebulon reported a similar problem with Samsung Spinpoint disks in this
thread:
    http://forums.freebsd.org/showthread.php?t=27128
And Beeblebrox, with different Samsung Spinpoint disks:
    http://forums.freebsd.org/showthread.php?p=162201#post162201

And Jason Wolfe, with Seagate ST91000640SS disks (with mps):
    http://osdir.com/ml/freebsd-scsi/2011-11/msg00006.html (freebsd-fs
list, original post at 11/01/2011 07:13 PM CET)
He says the problems go away with mpslsi. I tried reproducing his
problem on my system (on my M4-CT256M4SSD2 0001 and my HDS5C3030ALA630)
and was able to get a timeout similar to his with mpslsi (one time out
of many tries), and it recovered gracefully, as he says his does. Based
on that, I would say mpslsi is the safest choice. Perhaps the same
problem on mps would cause a crash on any system with any disk, not just
ST91000640SS disks.

I am using the following disks with no known problems:
    Hitachi HUA723030ALA640 firmware MKAOA580 (tested with mps and
mpslsi, didn't test hot pull)
    Seagate ST33000650NS firmware 0002 (tested with mps and mpslsi,
didn't test hot pull)
    Hitachi HDS5C3030ALA630 firmware MEAOA580 (tested mostly with
mpslsi, and tested hot pull)
    Crucial M4-CT256M4SSD2 firmware 0009 (tested only with mpslsi; not
fully tested yet, but passes the hot pull test; has an unrecoverable
read error (URE) which it didn't have with firmware 0001)


The "hot pull test":
--------------
dd if=/dev/random of=/somewhere/on/the/disk bs=128k
pull disk
wait 1 second
put disk back in
wait 1 second
pull disk
wait 1 second
put disk back in
wait 1 second
hit ctrl+c on the dd command
wait for messages to stop on tty1 / syslog.
gpart show
zpool status
zpool online <pool> <disk>
zpool status

If gpart show does not seg fault, and zpool online causes the disk to
resilver, then it is all good.

(The bad SSD passes the test about 40% of the time if pulled only once,
and so far 0% of the time if pulled twice. In one test out of all of
them, the red lights blinked on all disks on the controller when the
bad disk was pulled.)
--------------
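
Here is roughly the same test as an sh script, in case that is easier to
follow. The pool name, device name, and file path are placeholders you
would have to change for your own setup, and the disk pulls themselves
are of course still done by hand:
--------------
#!/bin/sh
# Sketch of the hot pull test; pool/disk/path below are placeholders.
pool="tank"
disk="da4"
testfile="/tank/hotpull-testfile"

# write load in the background
dd if=/dev/random of="$testfile" bs=128k &
ddpid=$!

for pass in 1 2; do
    echo "pass $pass: pull the disk, wait 1 second, press enter"
    read line
    echo "pass $pass: put the disk back in, wait 1 second, press enter"
    read line
done

kill "$ddpid"        # same as hitting ctrl+c on the dd
echo "wait for messages on tty1 / syslog to stop, then press enter"
read line

gpart show "$disk"               # must not seg fault
zpool status "$pool"
zpool online "$pool" "$disk"     # should start a resilver
zpool status "$pool"
--------------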


So, I would say that with the right combination of hardware you have a
fine system. Just test your disks however you think works best. If you
want to use mps, run the "smartctl -a" loop test to make sure the driver
handles it. If you get no timeouts during the test, I would call the
result indeterminate. A pass looks like what Jason Wolfe posted in the
mailing list (linked above): "SMID ... finished recovery after aborting
TaskMID ...".
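
If it helps, the loop I mean is nothing fancier than something like
this (device names are placeholders for your own disks; SATA disks
behind the HBA may need "-d sat"):
--------------
#!/bin/sh
# Poll SMART data on every disk in a loop to generate extra command
# traffic on the controller while the pool is under load.
while true; do
    for d in da0 da1 da2 da3; do
        smartctl -a /dev/$d > /dev/null
    done
    sleep 1
done
--------------
Run it while the pool is busy and watch the console / syslog for mps
timeout messages.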


Peter


On 01/20/2012 01:08 AM, John Kozubik wrote:
>
> We're about to invest heavily in a new ZFS infrastructure, and our
> plans are to:
>
>
> - wait for 8.3, with the updated 6gbps mps driver
>
> - Install and use LSI 9211-8i cards with newest "IT" firmware
>
>
> This appears to be the de facto standard for ZFS HBAs ...
>
> Is there any reason to consider other cards/vendors ?
>
> Are these indeed considered solid (provided I use the new mps in 8.3) ?
>
> Thanks.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"


-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------



