From: Peter Ankerstål <peter@pean.org>
Date: Sat, 30 Oct 2010 20:38:04 +0200
To: Sean
Cc: freebsd-fs@freebsd.org
Subject: Re: Raid + zfs performance.
List-Id: Filesystems <freebsd-fs@freebsd.org>

On 30 Oct 2010, at 20:09, Sean wrote:

>> I thought maybe because the existing pool is kind of r/w saturated
>> it would be better to create a new independent pool for the new
>> drives. That way the heavy activity would not "spread" to the
>> new drives.
>
> You're trying to be smarter than ZFS.
> It's a common syndrome, usually
> brought about by years of experience dealing with "dumb"
> filesystems. If you create a new independent pool, then you are
> guaranteeing that your current r/w saturated pool will stay that way,
> unless you manually migrate data off of that pool. If you add storage
> to that pool, then you are providing that pool additional resources
> that ZFS can then manage.
>
>> Now you presented me with a third option. So you think I should skip creating
>> a new hardware-raid mirror and instead use two single drives and add these as
>> a mirror to the existing pool?
>
> If you're going to keep the hardware raid, then setting up a new
> hardware raid of two drives, and then striping da1 with da0 via zfs is
> a viable option. It's just another spin on the RAID 10 idea.

Ok. I think I'll go with this option for this machine. In the future I would
probably use a small SSD for booting and then use zfs for all raid solutions.

>
>> How will zfs handle hotswap of these drives?
>
> ZFS doesn't know about your drives, because you hardware raid them. If
> you set up the second hardware raid mirror as a striped drive in the
> pool, and you then lose both drives within a single hardware raid
> mirror set, you'll be in the drink. But that's the case with any RAID
> 10 scenario.
>
>> I've seen a few crashes due to ata-detach in other systems.
>
> That's not a ZFS issue; that's a driver/support issue with the controller.
>
> -Sean
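For the archive: the option Sean describes (adding the new hardware-raid device as a second top-level vdev, so ZFS stripes across it and da0) would look roughly like the sketch below. This is illustrative only; the pool name "tank" and the device names are assumptions, not taken from the thread.

```shell
# Sketch, assuming the existing pool is called "tank", da0 is the
# current hardware-raid device, and da1 is the new two-disk
# hardware-raid mirror exported to the OS as a single device.

# Check the current pool layout first.
zpool status tank

# Add da1 as a second top-level vdev; ZFS will then stripe new
# writes across da0 and da1 -- the "RAID 10" spin from the thread.
zpool add tank da1

# Alternative without the second hardware raid: give ZFS the two raw
# disks as a mirror vdev, so ZFS itself handles redundancy and can
# resilver after a hotswap.
# zpool add tank mirror da1 da2

# Verify the new vdev shows up.
zpool status tank
```

Note that `zpool add` is effectively permanent: on the ZFS versions shipping with FreeBSD at the time, a top-level vdev cannot be removed from a pool, so it is worth double-checking the device name before running it.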