Date:      Thu, 04 Apr 2013 07:46:32 -0500
From:      "Mark Felder" <feld@feld.me>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS in production environments
Message-ID:  <op.wu0ofum934t2sn@tech304.office.supranet.net>
In-Reply-To: <CAEW%2BogaUkiiTw%2BVgZ0J6ey9MMD6EOv7sZD6FcqBq4=wU6z6w7w@mail.gmail.com>
References:  <CAEW%2BogaUkiiTw%2BVgZ0J6ey9MMD6EOv7sZD6FcqBq4=wU6z6w7w@mail.gmail.com>

Our setup:

* FreeBSD 9-STABLE (from before 9.0-RELEASE)
* HP DL360 servers acting as "head units"
* LSI SAS 9201-16e controllers
* Intel NICs
* DataOn Storage DNS-1630 JBODs with dual controllers (LSI based)
* 2TB 7200RPM Hitachi SATA HDs with SAS interposers (LSISS9252)
* Intel SSDs for cache/log devices
* gmultipath handling the active/active data paths to the drives; e.g.  
ZFS uses multipath/disk01 in the pool
* istgt serving iSCSI to Xen and ESXi from zvols
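A minimal sketch of how the multipath and zvol pieces above typically fit together on FreeBSD (the device names, pool name, and zvol path here are hypothetical examples, not from the original setup):

```shell
# Label both SAS paths to the same physical disk so GEOM_MULTIPATH
# presents a single device node; -A requests Active/Active mode.
# da10 and da42 stand in for the two controller paths to one drive.
gmultipath label -A disk01 /dev/da10 /dev/da42
gmultipath label -A disk02 /dev/da11 /dev/da43

# Build the pool on the multipath nodes, never on the raw daX paths.
zpool create tank mirror multipath/disk01 multipath/disk02

# Carve out a zvol to export over iSCSI.
zfs create -V 500G tank/vm-datastore
```

istgt then exports the zvol by pointing a LUN at its device node in istgt.conf, along the lines of `LUN0 Storage /dev/zvol/tank/vm-datastore Auto`.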

Built these just before hard drive prices spiked from the floods. I  
need to jam more RAM in there, and it would be nice to be running FreeBSD  
10 with some of the newer ZFS code and TRIM support. Uptime on  
these servers is over a year.


