Date: Fri, 5 Apr 2013 13:00:42 +0200
From: dennis berger <db@nipsi.de>
To: Mark Felder <feld@feld.me>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS in production environments
Message-ID: <4BC15B7B-4893-4167-ACF0-1CB066DE4EE3@nipsi.de>
In-Reply-To: <op.wu0ofum934t2sn@tech304.office.supranet.net>
References: <CAEW+ogaUkiiTw+VgZ0J6ey9MMD6EOv7sZD6FcqBq4=wU6z6w7w@mail.gmail.com> <op.wu0ofum934t2sn@tech304.office.supranet.net>
Thanks for the setup information. If you have time, could you describe
your head units a little? How do you configure istgt + ZFS for iSCSI
volumes? On our system I often see a lot of IOPS when I write to an
exported zvol. Maybe this is due to a wrong blocksize configuration in
istgt; I don't see those high IOPS on NFS-exported volumes, for example.

Best,
-dennis

On 04.04.2013 at 14:46, Mark Felder wrote:

> Our setup:
>
> * FreeBSD 9-STABLE (from before 9.0-RELEASE)
> * HP DL360 servers acting as "head units"
> * LSI SAS 9201-16e controllers
> * Intel NICs
> * DataOn Storage DNS-1630 JBODs with dual controllers (LSI-based)
> * 2TB 7200RPM Hitachi SATA HDs with SAS interposers (LSISS9252)
> * Intel SSDs for cache/log devices
> * gmultipath is handling the active/active data paths to the drives,
>   e.g. ZFS uses multipath/disk01 in the pool
> * istgt serving iSCSI to Xen and ESXi from zvols
>
> Built these just before the hard drive prices spiked from the floods.
> I need to jam more RAM in there, and it would be nice to be running
> FreeBSD 10 with some of the newer ZFS code and to have access to TRIM.
> Uptime on these servers is over a year.
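
The high IOPS dennis describes is what a volblocksize mismatch looks
like: a zvol's block size is fixed at creation (unlike a dataset's
recordsize), so every initiator write smaller than volblocksize turns
into a read-modify-write of a full block. A minimal sketch of checking
and matching the two; the pool/zvol name and istgt group names are made
up for illustration, and the LU section follows the layout of istgt's
sample config:

    # volblocksize is fixed when the zvol is created; pick it to match
    # the initiator's typical write size (e.g. 4k guest filesystem
    # blocks). "tank/iscsi01" is a hypothetical pool/zvol.
    zfs create -V 500G -o volblocksize=4k tank/iscsi01

    # Verify what an existing zvol uses; a mismatch with the
    # initiator's write size shows up as extra IOPS on the pool:
    zfs get volblocksize tank/iscsi01

    # istgt.conf fragment, after the sample config shipped with the
    # port (portal/initiator group names are placeholders):
    [LogicalUnit1]
      TargetName iscsi01
      Mapping PortalGroup1 InitiatorGroup1
      UnitType Disk
      LUN0 Storage /dev/zvol/tank/iscsi01 Auto

NFS doesn't show the same pattern because small file writes land in
variable-sized dataset records, while a zvol always does fixed
block-sized I/O.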
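
Mark's multipath arrangement can be sketched as follows; the device
names and vdev layout are invented, and the -A (active/active) flag
assumes the gmultipath found in FreeBSD 9 and later:

    # Each JBOD disk appears twice, once per SAS path. Labeling it
    # through one path writes gmultipath metadata to the disk's last
    # sector; GEOM tastes the other path and attaches it automatically.
    gmultipath label -A disk01 /dev/da0
    gmultipath label -A disk02 /dev/da1

    # ZFS then consumes the multipath providers instead of the raw
    # da devices, as in "ZFS uses multipath/disk01 in the pool":
    zpool create tank mirror multipath/disk01 multipath/disk02

Labeling before building the pool also means ZFS never sees the same
drive twice through its two paths.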