Date: Mon, 6 Feb 2012 11:39:16 -0600 (CST)
From: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To: Michael Fuckner <michael@fuckner.net>
Cc: freebsd-fs@freebsd.org
Subject: Re: HPC and zfs.
Message-ID: <alpine.GSO.2.01.1202061131560.20831@freddy.simplesystems.org>
In-Reply-To: <4F300CEA.5000901@fuckner.net>
References: <4F2FF72B.6000509@pean.org> <20120206162206.GA541@icarus.home.lan> <CAOjFWZ44nP5MVPgvux=Y-x+T+By-WWGVyuAegJYrv6mLmmaN-w@mail.gmail.com> <4F300CEA.5000901@fuckner.net>
On Mon, 6 Feb 2012, Michael Fuckner wrote:

> Another thing to think about is CPU: you probably need weeks for a
> rebuild of a single disk in a Petabyte Filesystem- I haven't tried
> this with ZFS yet, but I'm really interested if anyone already did
> this.

Why would a disk rebuild take longer for a petabyte filesystem than for a tens-of-gigabytes filesystem? The time to rebuild the disk depends primarily on the RAID type used for the zfs vdev (mirrors, raidz1, raidz2, raidz3), how many disks there are in the vdev, the degree of fragmentation, the amount of data stored on that disk, and the disk seek times.

In a huge system, it makes sense to be more conservative about the zfs vdev design and use more vdevs with fewer disks per vdev. Using anything less than raidz2 would be an error.

Bob
-- 
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
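[A back-of-the-envelope sketch of the point above: resilver time is bounded by the data actually stored on the failed disk divided by the effective rewrite throughput, not by the total pool size. The numbers here (disk fill, throughput) are illustrative assumptions, not figures from the thread.]

```python
def resilver_hours(data_on_disk_tb, effective_mb_per_s):
    """Rough lower bound on rebuild time: data to reconstruct on the
    replacement disk divided by the effective sequential throughput.
    Fragmentation and seek-bound workloads push the real figure higher."""
    seconds = (data_on_disk_tb * 1e12) / (effective_mb_per_s * 1e6)
    return seconds / 3600.0

# A disk holding 2 TB of data, resilvering at an assumed 50 MB/s
# effective rate (fragmented pool), takes on the order of 11 hours
# regardless of whether the pool totals 10 TB or a petabyte:
print(round(resilver_hours(2, 50), 1))  # about 11.1
```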
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?alpine.GSO.2.01.1202061131560.20831>