Date: Mon, 6 Feb 2012 09:38:30 -0800
From: Freddie Cash <fjwcash@gmail.com>
To: Daniel Kalchev <daniel@digsys.bg>
Cc: freebsd-fs@freebsd.org
Subject: Re: HPC and zfs.
Message-ID: <CAOjFWZ5cVfvaFR%2BecPeuj-cByn=7R87BCSZD_rBsLW5-VDC_gA@mail.gmail.com>
In-Reply-To: <413B1A6F-B076-4F50-90EA-7E17CF4B6E36@digsys.bg>
References: <4F2FF72B.6000509@pean.org> <20120206162206.GA541@icarus.home.lan> <CAOjFWZ44nP5MVPgvux=Y-x%2BT%2BBy-WWGVyuAegJYrv6mLmmaN-w@mail.gmail.com> <4F300CEA.5000901@fuckner.net> <413B1A6F-B076-4F50-90EA-7E17CF4B6E36@digsys.bg>
On Mon, Feb 6, 2012 at 9:34 AM, Daniel Kalchev <daniel@digsys.bg> wrote:
> On Feb 6, 2012, at 7:24 PM, Michael Fuckner wrote:
>> Another thing to think about is CPU: you probably need weeks for a rebuild of a single disk in a Petabyte filesystem. I haven't tried this with ZFS yet, but I'm really interested if anyone already did this.
>
> This is where ZFS will shine. Depending on how you stripe disks, you can get anything from super fast resilver (if you go for a stripe of mirrors), to fast (if you go for raidz vdevs with a small number of disks), to reasonable (if you go for raidz vdevs with a large number of disks). If you need high TPS you will want to go with mirrors anyway.
>
> The thing is doable with commodity hardware, but I wonder how one ever backs up such a setup?

With a second box configured similarly. :)

Although, trying to find "downtime" to do the backups ...

-- 
Freddie Cash
fjwcash@gmail.com
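
[For reference, the two layouts Daniel contrasts might look roughly like the sketch below; the pool name "tank" and the da0..da7 device names are hypothetical, not taken from the thread.]

    # Stripe of mirrors: several 2-disk mirror vdevs. A resilver only has to
    # read the surviving half of one mirror -- fastest rebuild, best TPS.
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

    # Wide raidz2: one 8-disk vdev. More usable space, but a resilver has to
    # read every remaining disk in the vdev -- slowest rebuild.
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7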