From: Damien Fleuriot <ml@my.gd>
Date: Thu, 06 Jan 2011 14:11:45 +0100
To: Chris Forgeron
Cc: "freebsd-stable@freebsd.org", Artem Belevich
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
Message-ID: <4D25BF91.7070304@my.gd>

I see, so no dedicated ZIL device in the end?

I could make a 15 GB slice for the OS running UFS (I don't want to risk
losing the OS when manipulating ZFS, such as during upgrades), and a
25 GB+ one for L2ARC, depending on the disk.

I can't afford a *dedicated* drive for the cache though, not enough room
in the machine.

On 1/6/11 12:26 PM, Chris Forgeron wrote:
> You know, these days I'm not as happy with SSDs for ZIL. I may blog about
> some of the speed results I've been getting over the last 6 to 12 months
> that I've been running them with ZFS. I think people should be using
> hardware RAM drives. You can get old Gigabyte i-RAM drives with 4 GB of
> memory for the cost of a 60 GB SSD, and they will trounce the SSD for
> speed.
>
> I'd use your SSD for L2ARC (cache).
>
>
> -----Original Message-----
> From: Damien Fleuriot [mailto:ml@my.gd]
> Sent: Thursday, January 06, 2011 5:20 AM
> To: Artem Belevich
> Cc: Chris Forgeron; freebsd-stable@freebsd.org
> Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
>
> You both make good points, thanks for the feedback :)
>
> I am more concerned about data protection than performance, so I suppose
> raidz2 is the best choice I have with such a small-scale setup.
>
> Now the question that remains is whether or not to use parts of the OS's
> SSD for ZIL, cache, or both?
>
> ---
> Fleuriot Damien
>
> On 5 Jan 2011, at 23:12, Artem Belevich wrote:
>
>> On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot wrote:
>>> Well actually...
>>>
>>> raidz2:
>>> - 7x 1.5 TB = 10.5 TB
>>> - 2 parity drives
>>>
>>> raidz1:
>>> - 3x 1.5 TB = 4.5 TB
>>> - 4x 1.5 TB = 6 TB, total 10.5 TB
>>> - 2 parity drives, split across the two separate raidz1 arrays
>>>
>>> So really, in both cases there are 2 parity drives and the same raw
>>> storage...
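For illustration, the two layouts compared above could be created roughly
like this (the pool name "tank" and the da0-da6 device names are only
placeholders, not the actual disks in this box):

    # one raidz2 vdev over all 7 disks: any two disks may fail
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6

    # two raidz1 vdevs (3 + 4 disks) striped in one pool:
    # only one disk may fail per vdev
    zpool create tank raidz1 da0 da1 da2 raidz1 da3 da4 da5 da6

In the second form the pool stripes writes across the two raidz1 vdevs,
which is where both the performance gain and the weaker failure guarantee
discussed below come from.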
>>
>> In the second case you get better performance, but lose some data
>> protection. It's still raidz1, and it can't survive every combination
>> of two failed drives: if two drives fail in the same vdev, your entire
>> pool is gone. Granted, it's better than a single-vdev raidz1, but it's
>> *not* as good as raidz2.
>>
>> --Artem
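As a rough sketch of the slice-the-OS-SSD idea from the top of the thread,
assuming the SSD shows up as ada0 and the pool is called tank (the names
and the sizes are only placeholders), and leaving out bootcode and the UFS
install itself:

    # GPT-partition the SSD: ~15 GB UFS slice for the OS, ~25 GB for L2ARC
    gpart create -s gpt ada0
    gpart add -t freebsd-ufs -s 15G -l os ada0
    gpart add -t freebsd-zfs -s 25G -l cache0 ada0

    # attach the second slice as an L2ARC (cache) device; a cache device
    # can later be dropped again with "zpool remove tank gpt/cache0"
    zpool add tank cache gpt/cache0

Unlike a dedicated log (ZIL) device, a cache device holds nothing the pool
cannot live without, so losing that slice costs read performance, not data.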