Date: Fri, 28 Sep 2007 20:36:26 +0200
From: Peter Schuller
To: Randy Bush
Cc: freebsd-fs@FreeBSD.ORG
Subject: Re: zfs in production?
Message-ID: <20070928183625.GA8655@hyperion.scode.org>
In-Reply-To: <46F7EDD7.6060904@psg.com>

> but we would like to hear from folk using zfs in production for any
> length of time, as we do not really have the resources to be pioneers.

I'm using it in production on at least three machines (not counting,
e.g., my workstation). By "production" I mean real data and/or services
that are important, though not necessarily stressing the system in
terms of performance/load or edge cases.

Some minor issues exist (memory tuning needed on 32-bit, wanting to
disable prefetch, swap on ZFS not working; see the loader.conf sketch
at the end of this mail), but I have never had any showstoppers. And
ZFS has, by the way, already saved me from data corruption that would
otherwise have stayed silent until some time later: I tried hot
swapping SATA devices in a situation where I did not know whether it
was supposed to be supported. In all fairness I would never have tried
it to begin with had I not been running ZFS, but if I had, the
corruption would have gone unnoticed.

My personal gut feeling is that I am not too worried about data loss,
but I would be more hesitant to deploy without proper testing in cases
where performance/latency/soft real-time behavior is a concern.

The biggest problem so far has been hardware rather than software. A
huge joy of ZFS is that it actually sends cache flush commands to the
constituent drives. I have, however, recently found out that the Perc
5/i controller will not pass these through to the underlying drives
(at least not with SATA). So suddenly my crappy cheap-o home server is
more reliable in case of power failure than a more expensive server
with a real RAID controller (when running without a BBU; I can only
hope that with the BBU enabled the controller actually flushes the
SATA drive caches before evicting contents from its own cache).
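For the curious, the 32-bit and prefetch workarounds I mentioned above
amount to roughly the following in /boot/loader.conf. The sizes below
are only illustrative for a box with a modest amount of RAM; treat them
as a starting point to tune for your own machine, not as recommended
values:

  # Enlarge the kernel address space so the ARC has room (32-bit only)
  vm.kmem_size="512M"
  vm.kmem_size_max="512M"

  # Cap the ARC so it leaves memory for everything else
  vfs.zfs.arc_max="160M"

  # Disable ZFS file-level prefetch
  vfs.zfs.prefetch_disable="1"

These are loader tunables, so a reboot is needed for them to take
effect; afterwards the prefetch setting can be checked with
"sysctl vfs.zfs.prefetch_disable".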
-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller '
Key retrieval: Send an E-Mail to getpgpkey@scode.org
E-Mail: peter.schuller@infidyne.com Web: http://www.scode.org