Date:      Wed, 23 Dec 2009 16:29:02 -0800
From:      Steven Schlansker <stevenschlansker@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)
Message-ID:  <ED4451CA-E72F-4FA2-B346-77C44018AC3E@gmail.com>
In-Reply-To: <4B315320.5050504@quip.cz>
References:  <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz>


On Dec 22, 2009, at 3:15 PM, Miroslav Lachman wrote:

> Steven Schlansker wrote:
>> As a corollary, you may notice some funky concat business going on.
>> This is because I have drives which are very slightly different in =
size (<  1MB)
>> and whenever one of them goes down and I bring the pool up, it =
helpfully (?)
>> expands the pool by a whole megabyte then won't let the drive back =
in.
>> This is extremely frustrating... is there any way to fix that?  I'm
>> eventually going to keep expanding each of my drives one megabyte at =
a time
>> using gconcat and space on another drive!  Very frustrating...
>=20
> You can avoid it by partitioning the drives to the well known =
'minimal' size (size of smallest disk) and use the partition instead of =
raw disk.
> For example ad12s1 instead of ad12 (if you creat slices by fdisk)
> of ad12p1 (if you creat partitions by gpart)
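A minimal sketch of the partitioning approach described above, assuming a GPT scheme via gpart. The device names (ada1, da4), pool name (tank), and the 931G partition size are illustrative assumptions, not values from this thread; pick a size safely at or below your smallest disk:

```shell
# Hypothetical example: give every disk an identically sized partition
# so ZFS sees uniform vdevs and never auto-grows past the smallest drive.
# Device names, pool name, and size below are assumptions for illustration.

# Create a GPT partition scheme on the raw disk.
gpart create -s gpt ada1

# Add one freebsd-zfs partition of an explicit, fixed size -- slightly
# under the nominal capacity (here ~931 GiB for a "1 TB" drive).
gpart add -t freebsd-zfs -s 931G ada1

# Use the partition, not the whole disk, when building or repairing the pool:
zpool replace tank da4 ada1p1
```

Because the partition size is stated explicitly rather than inherited from the disk, a replacement drive that is a megabyte smaller can still receive an identical partition and rejoin the pool.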


Yes, this makes sense.  Unfortunately, I didn't do this when I first made the array,
as the documentation says you should use whole disks so that ZFS can enable the
write cache, which I took to mean you shouldn't use a partition table.  And now
there's no way to fix it after the fact, as you can't shrink a zpool even by a
single MB :(