From owner-freebsd-questions@FreeBSD.ORG Mon Feb 4 12:41:43 2008
Date: Mon, 4 Feb 2008 13:39:52 +0100 (CET)
From: Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
To: Christian Baer
Cc: freebsd-questions@freebsd.org
Subject: Re: Looking for a Text on ZFS
Message-ID: <20080204133351.P7781@wojtek.tensor.gdynia.pl>
References: <200802022111.21862.fbsd.questions@rachie.is-a-geek.net> <20080203173245.U1631@wojtek.tensor.gdynia.pl>
List-Id: User questions <freebsd-questions@freebsd.org>

> /usr to spread the load while making worlds and I mount /usr/obj
> asynchronously to increase write speed. With several filesystems I can
> spread the load the way I want it and decide where the data goes. And one
> broken fs doesn't screw up the others in the process.
Did you ever get your UFS filesystem broken by anything other than a drive failure? I didn't. UFS is not FAT; it doesn't just break up.

> I do know the drawbacks of this: Storage is pretty static. Correcting
> wrong estimates about the needed fs-sizes is a big problem. That is why I

You CAN'T estimate well how much space you will need in the long term. In practice, partitioning like yours means at least 100% more disk space is required. Of course, these days the whole system often needs only a few gigs while the smallest new drive is 80GB, so it will work. Still, putting everything in / is much easier and works fine. Putting everything in / and /lessused, where / is on the first part of the disk and /lessused on the second, gives a big performance improvement (shorter seeks!).

>> 2) it takes many drives into the pool and you may then add new drives.
>> same as gconcat+growfs.

> I read about this. However, I didn't find anything conclusive as to how
> well the drives can still live on their own if they are ever separated.
> Now I don't think they will be addressed as a RAID0 with all the risks of
> that. But what happens if one of four drives breaks down? Does it make a
> difference if the broken drive is the first one, the last one or a middle
> one?

If it's just a concat, you will lose lots of data, just as with any other filesystem. With concat+mirror you replace the single drive that failed and rebuild the mirror. That's all.

After reading your answer to the third question I will end the topic, because you treat quotas as a workaround for the problem of creating 1000 partitions. Or, simply, it looks like you don't understand them at all: quotas are not a workaround, they are an excellent tool.
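For reference, the gconcat+growfs growth path mentioned above looks roughly like this. This is only a sketch: the device names ad1-ad3 are hypothetical, the commands need root on FreeBSD, and you should back up before growing a live filesystem.

    # hypothetical disks ad1, ad2 -- adjust for your hardware
    gconcat label data /dev/ad1 /dev/ad2
    newfs /dev/concat/data
    mount /dev/concat/data /mnt

    # later, append a third disk and grow the filesystem (unmounted):
    umount /mnt
    gconcat stop data
    gconcat label data /dev/ad1 /dev/ad2 /dev/ad3
    growfs /dev/concat/data
    mount /dev/concat/data /mnt

Re-labelling with the extra provider appended keeps the existing data; growfs then extends the UFS structures onto the new space.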
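Replacing a failed disk in a mirror, as described above, goes roughly like this with gmirror (a sketch; the mirror name gm0 and device names are hypothetical):

    gmirror label gm0 /dev/ad1 /dev/ad2   # create the mirror (done once)
    # after ad2 dies and is physically replaced:
    gmirror forget gm0                    # drop the dead component
    gmirror insert gm0 /dev/ad2           # add the new disk; resync starts
    gmirror status gm0                    # watch the rebuild progress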
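And for completeness, enabling UFS quotas instead of carving out per-user partitions is roughly this (a sketch; the mount point /home and user name joe are hypothetical):

    # /etc/fstab entry:  /dev/ad0s1f  /home  ufs  rw,userquota  2  2
    # /etc/rc.conf:      enable_quotas="YES"  check_quotas="YES"
    quotacheck -a        # build the quota files
    quotaon -a           # start enforcement
    edquota -u joe       # edit per-user block/inode limits
    repquota -a          # report current usage per user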