From: Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
To: Matthias Gamsjager
Cc: FreeBSD Questions <freebsd-questions@freebsd.org>
Date: Thu, 21 Jun 2012 17:15:07 +0200 (CEST)
Subject: Re: Is ZFS production ready?

> I do understand your setup but I don't have to agree that it is a good

So I will repeat my question. Assume you have 48 disks in a mirrored
configuration (24 mirrors) and 480 users with their data on them.

Your solution with ZFS - ZFS crashes, or you get a double disk failure.
Assuming the latter, on average one file in 24 (randomly chosen) is
destroyed, which - in practice, with limited time - means everything is
destroyed. Actually more than one in 24, since large files can be spread
over several mirrors.

Your solution with UFS - better, because fsck slowly but successfully
repairs the problem. With a double disk failure - the same! You restore
everything from backup (I assume you have one). That takes a day or
more: one or two complete work days lost, and in practice every user
loses everything since the last backup.

My solution with UFS - after a failure, fsck runs in parallel on 24
disks, so it does not take that long. A double disk failure means losing
the data of 1/24 of the users. That 1/24 of the users cannot work; the
others keep working, and I can, without any stress, restore that 1/24 of
the users' data from backup after installing replacement disks. 1/24 of
the users lose data since the last backup, plus some hours of time.

Even assuming ZFS is perfect, we both have problems equally often, but
my problems are 1/24 as severe as yours. Just don't ask me for help when
unhappy users want to cut off your head.

>> And you've never seen me, yet I still exist.
>
> Really? That's your answer to my question. The most childish answer I could

A stupid answer to a stupid question. You have never seen them - but
they do happen.
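For what it's worth, the 1/24 arithmetic above can be sketched in a few lines. This is a toy model under the assumptions stated in this mail (48 disks as 24 mirrors, 480 users spread evenly); the variable names are mine, not anything from ZFS or UFS tooling:

```python
# Back-of-the-envelope blast radius after a double disk failure
# (both disks of one mirror), under the setup described in the mail.

MIRRORS = 24   # 48 disks arranged as 24 two-way mirrors
USERS = 480    # users spread evenly across the storage

# One big ZFS pool striped over all 24 mirrors: losing any one whole
# mirror takes down the pool, so every user is affected.
zfs_pool_users_hit = USERS

# One independent UFS filesystem per mirror: the same failure destroys
# only the data that lived on that one mirror.
ufs_per_disk_users_hit = USERS // MIRRORS  # 480 / 24 = 20 users

severity_ratio = zfs_pool_users_hit / ufs_per_disk_users_hit

print(zfs_pool_users_hit)       # 480 users lose data
print(ufs_per_disk_users_hit)   # 20 users lose data
print(severity_ratio)           # 24.0 - "my problems are 1/24 as severe"
```

The model deliberately ignores ZFS self-healing and assumes the failure scenario the author describes actually destroys the pool; it only illustrates why splitting storage into independent filesystems shrinks the share of users hit by any single failure.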