Date:      Fri, 28 Sep 2007 20:36:26 +0200
From:      Peter Schuller <peter.schuller@infidyne.com>
To:        Randy Bush <randy@psg.com>
Cc:        freebsd-fs@FreeBSD.ORG
Subject:   Re: zfs in production?
Message-ID:  <20070928183625.GA8655@hyperion.scode.org>
In-Reply-To: <46F7EDD7.6060904@psg.com>
References:  <46F7EDD7.6060904@psg.com>


> but we would like to hear from folk using zfs in production for any
> length of time, as we do not really have the resources to be pioneers.

I'm using it in production on at least three machines (not counting
e.g. my workstation). By production I mean for real data and/or
services that are important, but not necessarily stressing the system
in terms of performance/load or edge cases.

Some minor issues exist (memory problems on 32-bit, wanting to disable
prefetch, swap on ZFS not working, etc.), but I have never hit a
showstopper. And ZFS has, by the way, already saved me from silent
(until some time later) data corruption - sort of: I tried hot-swapping
SATA devices in a situation where I did not know whether it was
supposed to be supported. In all fairness I would never have tried it
in the first place had I not been running ZFS, but if I had tried it
anyway, the corruption would have been silent.
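For what it's worth, the workarounds of that era for the 32-bit memory
and prefetch issues were boot-time tunables. A minimal sketch of a
/boot/loader.conf fragment - the tunable names are the stock FreeBSD
ones, but the values are illustrative only and depend on your RAM:

```shell
# /boot/loader.conf -- illustrative values, tune for your machine.

# Mitigate kmem exhaustion on 32-bit by enlarging the kernel map:
vm.kmem_size="512M"
vm.kmem_size_max="512M"

# Disable ZFS file-level prefetch if it causes trouble:
vfs.zfs.prefetch_disable="1"
```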

My personal gut feeling is that I am not too worried about data loss,
but I would be more hesitant to deploy without proper testing in cases
where performance, latency, or soft real-time behavior is a concern.

The biggest actual problem so far has been hardware rather than
software. A huge joy of ZFS is that it actually sends cache flush
commands to the constituent drives. However, I have recently found out
that the PERC 5/i controller will not pass these through to the
underlying drives (at least not with SATA). So suddenly my crappy
cheap-o home server is more reliable in case of power failure than a
more expensive server with a real RAID controller running without a
BBU; I can only hope that with the BBU enabled, the controller
actually flushes the SATA drive caches before evicting contents from
its own cache.
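If you want to check what a given drive reports, FreeBSD's camcontrol
can show the ATA identify data, including the write-cache state. A
quick sketch - the device name (ada0) is an example, and a drive hidden
behind a RAID controller may not expose identify data at all, which is
exactly the pass-through problem above:

```shell
# Show ATA identify data for a directly attached drive; look for the
# "write cache" line to see whether the volatile cache is enabled.
# Device name is an example -- substitute your own.
camcontrol identify ada0 | grep -i 'write cache'
```

(There is also a ZFS-side knob in later FreeBSD versions to disable
cache flushes entirely, but that is only sane when the entire write
path is battery-backed.)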

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller@infidyne.com>'
Key retrieval: Send an E-Mail to getpgpkey@scode.org
E-Mail: peter.schuller@infidyne.com Web: http://www.scode.org



