Date:      Wed, 25 Jul 2007 13:53:46 +0100
From:      Doug Rabson <dfr@rabson.org>
To:        Mark Powell <M.S.Powell@salford.ac.uk>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS & GEOM with many odd drive sizes
Message-ID:  <6FF8729F-B449-4EFA-B3C6-8B9A9E6F6C4F@rabson.org>
In-Reply-To: <20070725120913.A57231@rust.salford.ac.uk>
References:  <20070719102302.R1534@rust.salford.ac.uk> <20070719135510.GE1194@garage.freebsd.pl> <20070719181313.G4923@rust.salford.ac.uk> <20070721065204.GA2044@garage.freebsd.pl> <20070725095723.T57231@rust.salford.ac.uk> <1185355848.3698.7.camel@herring.rabson.org> <20070725103746.N57231@rust.salford.ac.uk> <3A5D89E1-A7B1-4B10-ADB8-F58332306691@rabson.org> <20070725120913.A57231@rust.salford.ac.uk>


On 25 Jul 2007, at 12:17, Mark Powell wrote:

> On Wed, 25 Jul 2007, Doug Rabson wrote:
>
>>> gmirror is only going to be used for the ufs /boot partition and  
>>> block device swap. (I'll ignore the smallish space used by that  
>>> below.)
>>
>> Just to muddy the waters a little - I'm working on ZFS native boot  
>> code at the moment. It probably won't ship with 7.0 but should be  
>> available shortly after.
>
> Great work. That will be zfs mirror only right?

The code is close to being able to support collections of mirrors. No  
raidz or raidz2 for now though.

>
>>>  I believe my reasoning is correct here? Let me know if your  
>>> experience would suggest otherwise.
>>
>> Your reasoning sounds fine now that I have the bigger picture in  
>> my head. I don't have a lot of experience here - for my ZFS  
>> testing, I just bought a couple of cheap 300GB drives which I'm  
>> using as a simple mirror. From what I have read, mirrors and  
>> raidz2 are roughly equivalent in 'mean time to data loss' terms  
>> with raidz1 quite a bit less safe due to the extra vulnerability  
>> window between a drive failure and replacement.
>
> So back to my original question :)
>   If one drive in a gconcat gc1 (ad2s2+ad3s2), say ad3 fails, and  
> the broken gconcat is completely replaced with a new 500GB drive  
> ad2, is fixing that as simple as:
>
> zpool replace tank gc1 ad2

That sounds right.
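For reference, the full replacement sequence might look something like the sketch below. This is only a rough outline based on the example in this thread: the pool name (tank), the broken concat (gc1), and the replacement device name (ad2) are taken from Mark's message and will differ on a real system. It assumes the new 500GB drive is used whole rather than as another gconcat, and it is untested here since it has to run against real disks.

```shell
# Tear down the broken concat first, if GEOM still has it configured.
# (With ad3 dead, gc1 cannot be assembled anyway.)
gconcat stop gc1

# Tell ZFS to resilver the vdev formerly backed by gc1 onto the bare
# replacement drive:
zpool replace tank gc1 ad2

# Watch the resilver progress until the pool is healthy again:
zpool status tank
```

Once `zpool status` reports the resilver complete, the old gconcat metadata on any surviving member (e.g. `gconcat clear ad2s2` before reusing the slice) may also need cleaning up so GEOM doesn't try to re-taste it on the next boot.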



