Date: Thu, 3 Dec 2009 09:27:38 +0000
From: krad <kraduk@googlemail.com>
To: Rolf Nielsen <listreader@lazlarlyricon.com>
Cc: Dan Nelson <dnelson@allantgroup.com>, freebsd-questions@freebsd.org
Subject: Re: ZFS pools consisting of several mirrors
Message-ID: <d36406630912030127y7e10dac7tc6a6854e64594a39@mail.gmail.com>
In-Reply-To: <4B156170.5000003@lazlarlyricon.com>
References: <4B155562.30109@lazlarlyricon.com> <20091201175500.GN89004@dan.emsphone.com> <4B156170.5000003@lazlarlyricon.com>
2009/12/1 Rolf Nielsen <listreader@lazlarlyricon.com>

> Dan Nelson wrote:
>
>> In the last episode (Dec 01), Rolf Nielsen said:
>>
>>> In experimenting a bit with ZFS, I, among other things, tried
>>> something like this:
>>>
>>> zpool create -R /test test mirror file[01]0 mirror file[01]1 mirror
>>> file[01]2 mirror file[01]3 mirror file[01]4 mirror file[01]5
>>>
>>> This, according to zpool status, gives me a (file-backed) pool
>>> consisting of six mirrors, each mirror consisting of two files. Now
>>> for my question. Exactly how is the pool built? Is it...
>>>
>>> 1. A RAID0 of the six mirrors?
>>>
>>> 2. A mirror of two RAID0 arrays, each array consisting of the six
>>> files file0[0-5] and file1[0-5] respectively?
>>>
>>> 3 and 4. Like 1 and 2 above, but with JBOD instead of RAID0?
>>>
>>> 5. Some other way I haven't thought about?
>>>
>>> I guess it's 1 or 3, as the zpool status output shows me six
>>> mirrors, but which is it? And, provided my guess is correct, is
>>> there a way to implement 2 or 4 without involving geom_stripe or
>>> geom_concat?
>>
>> It's 1/3/5. Each mirror is independent, and writes are balanced
>> across the mirrors based on space usage. If you add another mirror
>> to grow the pool, it will get most of the writes until the usage
>> balances out.
>>
>> You usually don't want to build an array with options 2 or 4, since
>> a single drive failure will degrade the entire mirror half. Consider
>> if you have
>>
>> concat00 -> file01 file02 file03 file04 file05
>> concat01 -> file11 file12 file13 file14 file15
>> mirror0  -> concat00 concat01
>>
>> If file01 fails, concat00 fails, causing mirror0 to become degraded.
>> When you replace file01, mirror0 will have to resync all of concat00
>> from concat01, since it doesn't know about the subdevices. If you
>> don't replace file01, and then file15 fails, you have lost your
>> entire volume (unless you do some hackery to swap file05 and file15
>> to create a functioning concat01).
>
> Good point. Thanks for the reply.

It's RAID 1+0, or RAID 10. ZFS stripes across all vdevs that are added
to a pool (as opposed to devices attached to an existing vdev, which
become mirrors of it). In your case your vdevs were RAID 1 mirrors, but
they could have been raidz, raidz2 or raidz3 vdevs, giving you roughly
RAID 50, 60, or "70" (if there is such a name for striped triple
parity). Or they could have been individual single-disk vdevs, giving
you pure RAID 0.
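
To make that concrete, here is a rough, untested sketch against the
file-backed pool from the original command (the extra backing files
file06 and file16 and the replacement file00.new are invented for the
example, and file vdevs may need to be given as absolute paths). Adding
another mirror vdev grows the stripe, and replacing one side of a
single mirror only resilvers that mirror:

  # grow the pool with a seventh mirror vdev; ZFS favours it for new
  # writes until space usage evens out across all the vdevs
  zpool add test mirror file06 file16

  # replace one side of one mirror; only that mirror resilvers,
  # not a whole half of the pool
  zpool replace test file00 file00.new
  zpool status test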
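
And similarly hedged, untested sketches of the other vdev types
mentioned above (again with invented file names and pool names):

  # two raidz vdevs striped together -- roughly RAID 50
  zpool create tank raidz file00 file01 file02 raidz file10 file11 file12

  # the same idea with raidz2 -- roughly RAID 60
  zpool create tank raidz2 file00 file01 file02 file03 \
      raidz2 file10 file11 file12 file13

  # plain single-file vdevs, no redundancy -- pure RAID 0
  zpool create tank file00 file10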