From: 韓家標 Bill Hacker <askbill@conducive.net>
Date: Sun, 06 Jan 2008 22:20:13 +0000
To: freebsd-current@freebsd.org
Subject: Re: ZFS honesty
In-Reply-To: <47814160.4050401@samsco.org>

Scott Long wrote:
> Kris Kennaway wrote:
>> Ivan Voras wrote:
>>> Kris Kennaway wrote:
>>>> Ivan Voras wrote:
>>>>> Robert Watson wrote:
>>>>>
>>>>>> I'm not sure if anyone has mentioned this yet in the thread, but
>>>>>> another thing worth taking into account in considering the
>>>>>> stability of ZFS is whether or not Sun considers it a production
>>>>>> feature in Solaris.  Last I heard, it was still considered an
>>>>>> experimental feature there as well.
>>>>>
>>>>> Last I heard, rsync didn't crash Solaris on ZFS :)
>>>>
>>>> [Citation needed]
>>>
>>> I can't provide a citation about a thing that doesn't happen - you
>>> don't hear things like "oh and yesterday I ran rsync on my Solaris
>>> with ZFS and *it didn't crash*!" often.
>>>
>>> But, with some grains of salt taken, consider these Google results:
>>>
>>> * searching for "rsync crash solaris zfs": 790 results, most of them
>>>   obviously irrelevant
>>> * searching for "rsync crash freebsd zfs": 10,800 results; a small
>>>   number of the results is from this thread, some are duplicates, but
>>>   it's a large number in any case.
>>>
>>> I feel that the number of Solaris+ZFS installations worldwide is
>>> larger than that of FreeBSD+ZFS, and they've had ZFS longer.
>>
>> Almost all Solaris systems are 64 bit.
>>
>> Kris
>
> So, let's be honest here.  ZFS is simply unreliable on FreeBSD/i386.
> There are things that you can do to mitigate the problems, and in
> certain well-controlled environments you might be able to make it work
> well enough for your needs.  But as a general rule, don't expect it to
> work reliably, period.  This is backed up by Sun's own recommendation
> not to run it on 32-bit Solaris.

JFWIW - last night's trial OpenSolaris 'Indiana' devel ISO, installed on
a Core 2 Duo with 2 GB of RAM, created something it reported as 'Z-lite'
(IIRC - it wasn't worth wasting HDD space on...).
Anyone know if this is 'different' on Solaris for i386 vs. 64-bit? i.e. -
does Sun use a 'lite' and a 'full' version? And, if so, [is there |
should there be] an equivalent in the FreeBSD world? Or is that just up
to optioning in our case?

>
> But let's also be honest about ZFS in the 64-bit world.  There is ample
> evidence that ZFS basically wants to grow unbounded in proportion to
> the workload that you give it.  Indeed, even Sun recommends basically
> throwing more RAM at most problems.  Again, tuning is often needed, and
> I think it's fair to say that it can't be expected to work on arbitrary
> workloads out of the box.

++

>
> Now, what about the other problems that have been reported in this
> thread by Ivan and others?  I don't think that it can be said that the
> only problem that ZFS has is with memory.

+++

> Unfortunately, it looks like
> these "other" problems aren't well quantified, so I think that they are
> being unfairly dismissed.  But at the same time, maybe these other
> problems are rare and unique enough that they represent very special
> cases that won't be encountered by most people.  But it also tells me
> that ZFS is still immature, at least in FreeBSD.
>

Clearly so. So much so that, IMNSHO, discussion of most *remaining* ZFS
issues more properly belongs on the ZFS-specific mailing list.

I don't see much - if any - remaining evidence that there are things
either 'wrong' or even sub-optimal with FreeBSD *itself* that only ZFS
exposes. Au contraire - FreeBSD seems to be as accommodating to ZFS's
needs as it can be.

The rest seems to be up to the ZFS code: 'sensing' of resources & load,
manual & auto-config, dynamic adjustment - more graceful degradation &
recovery. Whatever. ZFS-specific, not BSD-in-general.

> The universal need for tuning combined with the poorly understood
> problem reports tells me that administrators considering ZFS should
> expect to spend a fair amount of time testing and tuning.  Don't
> expect it to work out of the box for your situation.  That's not to
> say that it's useless; there are certainly many people who can attest
> to it working well for them.  Just be prepared to spend time and
> possibly money making it work, and be willing to provide good problem
> reports for any non-memory-related problems that you encounter.
>
> Scott

JM2CW, but the level of 'traffic' on this list re: still-experimental-at-best
ZFS is distracting attention from issues that are more universal, critical
to more users and uses - and more in need of scarce attention 'Real Soon
Now'. It almost begs for ZFS posts to be redirected to the bespoke list
out of hand.

ZFS is still eminently 'avoidable' for now. Reports of I/O problems and
drivers that can corrupt data on *UFS* are a whole 'nuther matter...

Bill
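For concreteness, the memory tuning Scott describes usually meant capping
how much kernel memory ZFS could consume. A minimal sketch of what that
looked like on a FreeBSD 7.x/i386 box follows - the tunable names are the
standard loader tunables of that era, but the values below are illustrative
assumptions only, not recommendations made anywhere in this thread - as a
few lines in /boot/loader.conf:

    # Illustrative ZFS memory tuning for FreeBSD 7.x on i386.
    # Values are examples only; size them to the machine and workload.
    vm.kmem_size="512M"            # enlarge the kernel memory arena ZFS allocates from
    vm.kmem_size_max="512M"
    vfs.zfs.arc_max="96M"          # cap the ZFS ARC so it cannot grow unbounded
    vfs.zfs.prefetch_disable="1"   # reduce memory pressure from file-level prefetch

On i386 this was typically paired with a larger kernel virtual address
space (e.g. 'options KVA_PAGES=512' in the kernel config) so that the
enlarged kmem arena actually fits.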