From owner-freebsd-fs@FreeBSD.ORG Sat Jul 7 16:34:51 2012
Message-ID: <4FF8651F.8090102@o2.pl>
Date: Sat, 07 Jul 2012 18:34:39 +0200
From: Radio młodych bandytów <radiomlodychbandytow@o2.pl>
To: freebsd-fs@freebsd.org
In-Reply-To: <20120706120031.A47A41065733@hub.freebsd.org>
Subject: Re: freebsd-fs Digest, Vol 472, Issue 5
List-Id: Filesystems

On 2012-07-06 14:00, freebsd-fs-request@freebsd.org wrote:
> It's easy to find the failure math for raidz2 and raidz3.
>
> But what if you create a pool with 3 different raidz3 vdevs inside of it?
>
> For instance, 3 12-drive raidz3 vdevs in one big pool.
>
> For each individual vdev the failure probability is now higher, since not
> only will it fail when 4 drives in the vdev fail, but it will also fail if
> four drives in any of the other two vdevs fail.
>
> So each raidz3 vdev now has a failure rate higher than vanilla raidz3 ...
> but what is that new failure rate? Is it still higher than vanilla raidz2?

Skip these calculations. They all assume that drive failures are 
independent, which is not the case in the real world. There was a good 
study on the topic of drive failures several years ago:

http://static.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html

Among other findings, the authors say: "We also present strong evidence 
for the existence of correlations between disk replacement interarrivals."

-- 
Twoje radio
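P.S. For anyone who still wants the naive numbers the question asks about: under the independence assumption (which the FAST'07 study linked above argues against), the math can be sketched as below. The per-drive failure probability p is a made-up illustration value, not measured data, and the drive counts match the 3 x 12-drive raidz3 example from the question.

```python
# Naive pool-failure math under the (unrealistic) assumption that
# drive failures are independent. p is a hypothetical per-drive
# failure probability over some fixed time window.
from math import comb

def vdev_failure_prob(n_drives, parity, p):
    """P(vdev lost) = P(more than `parity` of its drives fail),
    treating each drive as an independent Bernoulli event."""
    return sum(comb(n_drives, k) * p**k * (1 - p)**(n_drives - k)
               for k in range(parity + 1, n_drives + 1))

def pool_failure_prob(n_vdevs, n_drives, parity, p):
    """The pool is lost if ANY of its vdevs is lost -- which is why
    each vdev effectively inherits the risk of its siblings."""
    return 1 - (1 - vdev_failure_prob(n_drives, parity, p)) ** n_vdevs

p = 0.05  # hypothetical per-drive failure probability
single = vdev_failure_prob(12, 3, p)   # one 12-drive raidz3 vdev
pool = pool_failure_prob(3, 12, 3, p)  # pool of three such vdevs
print(f"single vdev: {single:.6f}, whole pool: {pool:.6f}")
```

For small per-vdev risk, the pool risk comes out to roughly n_vdevs times the single-vdev risk. But again: with correlated failures these numbers are optimistic, which is the whole point of the study.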