From owner-freebsd-fs@FreeBSD.ORG Thu Sep 6 23:41:08 2012
Date: Thu, 6 Sep 2012 19:41:07 -0400
From: Andy Young <ayoung@mosaicarchive.com>
To: freebsd-fs@freebsd.org
Subject: Question on ZFS and redundancy

In the past I've used multiple RAID6 volumes under Linux. The thing I
disliked about this was that my code had to worry about splitting my data
across the volumes. I had to track which ones were full and manage all of
that complexity myself. However, the volumes were independent. If I lost
three drives in a volume, then yes, I would lose that volume, but it
wouldn't affect any of the other volumes.

Now with ZFS and raidz2, I love the fact that I can use a single pool to
spread data across multiple vdevs. However, the vdevs aren't independent
anymore, right? Because ZFS stripes data across the vdevs, if I lose three
drives in a single vdev, doesn't that put the entire pool at risk?

Andy
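
P.S. For concreteness, here's the kind of layout I'm describing. The pool
name "tank" and the da* device names are just made-up examples:

    # one pool striped across two six-disk raidz2 vdevs
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11

    # each raidz2 vdev tolerates two simultaneous disk failures;
    # my worry is that a third failure inside either vdev faults
    # that vdev and takes the whole striped pool down with it
    zpool status tank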