From: Paul Kraus <paul@kraus-haus.org>
To: freebsd-questions@freebsd.org
Subject: Re: Gmirror/graid or hardware raid?
Date: Thu, 9 Jul 2015 13:04:27 -0400
In-Reply-To: <20150709163926.GA83027@neutralgood.org>

On Jul 9, 2015, at 12:39, kpneal@pobox.com wrote:

> On Thu, Jul 09, 2015 at 10:32:45AM -0400, Paul Kraus wrote:
>> I do NOT use RaidZ for anything except bulk backup data where capacity is all that matters and performance is limited by lots of other factors.
>
> A 4-drive raidz2 is more reliable than a pair of two drive mirrors, striped.
> But the pair of mirrors will perform much better.

Agreed. In terms of MTTDL (Mean Time To Data Loss), which Richard Elling did lots of work researching, from best to worst:

4-way mirror
RAIDz3
3-way mirror
RAIDz2
2-way mirror
RAIDz1
Stripe (no redundancy)

But … the MTTDL for a 2-way mirror and a 2-drive RAIDz1 is the same. The same can be said of a 3-way mirror and a 3-drive RAIDz2. A 4-way mirror and a 4-drive RAIDz3 also have the same MTTDL.
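(To make that concrete, here is roughly what those equal-MTTDL pairs look like at the command line; "tank" and the da* device names are only placeholders, and each pair shows two alternative layouts for the same disks, not commands to run back to back.)

    # 2-way mirror vs. 2-drive RAIDz1: either one survives the loss of any
    # single drive, so the MTTDL works out the same.
    zpool create tank mirror da0 da1
    zpool create tank raidz1 da0 da1

    # Likewise a 3-way mirror vs. a 3-drive RAIDz2: either survives any two drives.
    zpool create tank mirror da0 da1 da2
    zpool create tank raidz2 da0 da1 da2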
In reality, no one configures a RAIDz1 of 2 drives, a RAIDz2 of 3 drives, or a RAIDz3 of 4 drives. Take a look at Richard's blog post on this topic here: http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html

> It's all a balancing act of performance vs reliability. *shrug*

Don't forget cost :-) Fast - Cheap - Reliable … maybe you can have two :-)

> My main server has a three-way mirror and that's it. Three because there
> are only three brands of server-grade SAS drives.

My home server has 3 stripes of 3-way mirrors. And yes, each vdev is made up of three different drives (in some cases the same manufacturer, but different models and production dates).

>
>> I also create a "do-not-remove" dataset in every zpool with 1 GB reserved and quota. ZFS behaves very, very badly when FULL. This gives me a cushion when things go badly so I can delete whatever used up all the space … Yes, ZFS cannot delete files if the FS is completely FULL. I leave the "do-not-remove" dataset unmounted so that it cannot be used.
>
> Isn't this fixed in FreeBSD 10.2? Or was it 11? I can't remember because
> I haven't upgraded to that point yet. I do remember complaints from people
> who did upgrade and then saw they didn't have as much space free as they
> did before the upgrade.

I was not aware this had been accepted as a bug to fix :-) It has been a detail to note for ZFS from the very beginning. Do you know if this is a FBSD-specific fix or coming down from OpenZFS?

--
Paul Kraus
paul@kraus-haus.org
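P.S. For anyone curious what that "do-not-remove" cushion looks like in practice, it is something along these lines; the pool name "tank" is only a placeholder, and canmount=off is just one way of keeping the dataset unmounted:

    # Reserve 1 GB of pool space, cap the dataset at the same 1 GB, and
    # keep it unmounted so nothing can ever be written into it.
    zfs create -o reservation=1G -o quota=1G -o canmount=off tank/do-not-remove

    # If the pool ever fills completely, drop the reservation to free up
    # enough space for deletes to succeed, clean up, then restore the cushion.
    zfs set reservation=none tank/do-not-remove
    # ... delete whatever filled the pool ...
    zfs set reservation=1G tank/do-not-remove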