Date:      Wed, 23 Dec 2009 17:02:31 -0800
From:      Steven Schlansker <stevenschlansker@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)
Message-ID:  <36133DA6-C26B-4B1B-B3E1-DBB714232F59@gmail.com>
In-Reply-To: <5da0588e0912231632v14b5dfcdrc913a9deeac9e38a@mail.gmail.com>
References:  <048AF210-8B9A-40EF-B970-E8794EC66B2F@gmail.com> <4B315320.5050504@quip.cz> <ED4451CA-E72F-4FA2-B346-77C44018AC3E@gmail.com> <5da0588e0912231632v14b5dfcdrc913a9deeac9e38a@mail.gmail.com>



On Dec 23, 2009, at 4:32 PM, Rich wrote:

> That's fascinating - I'd swear it used to be the case (in
> Solaris-land, at least) that resilvering with a smaller vdev resulted
> in it shrinking the available space on other vdevs as though they were
> all as large as the smallest vdev available.

Pretty sure that this doesn't exist for raidz.  I haven't tried, though,
and Sun's bug database's search blows chunks.  I remember seeing
a bug filed on it before, but I can't for the life of me find it.

>
> In particular, I'd swear I've done this with some disk arrays I have
> laying around with 7x removable SCA drives, which I have in 2, 4.5, 9,
> and 18 GB varieties...
>
> But maybe I'm just hallucinating, or this went away a long time ago.
> (This was circa b70 in Solaris.)

Shrinking of mirrored drives seems like it might be working.
Again Sun's bug database isn't clear at all about what can /
can't be shrunk - maybe I should get a Solaris bootdisk and see
if I can shrink it from there...
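
For what it's worth, if I were building the array over again I'd take the
partitioning advice from earlier in this thread.  Something along these
lines is what I have in mind -- untested on my end, and the device names
and the fixed partition size are just placeholders (pick a size safely
below your smallest disk):

```sh
# Lay down a GPT scheme on each disk
gpart create -s GPT ada1
gpart create -s GPT ada2
gpart create -s GPT ada3
gpart create -s GPT ada4

# Create a fixed-size freebsd-zfs partition instead of handing ZFS the
# whole disk, so a replacement that's a hair smaller still fits
gpart add -t freebsd-zfs -s 930G -l disk1 ada1
gpart add -t freebsd-zfs -s 930G -l disk2 ada2
gpart add -t freebsd-zfs -s 930G -l disk3 ada3
gpart add -t freebsd-zfs -s 930G -l disk4 ada4

# Build the pool out of the labeled partitions; the GPT labels also keep
# the vdev names stable if the disks get renumbered across reboots
zpool create tank raidz2 gpt/disk1 gpt/disk2 gpt/disk3 gpt/disk4
```

Doesn't help me now, of course, since the pool is already built on the
raw disks and can't be shrunk to make room for a partition table.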

>
> I know you can't do this in FreeBSD; I've also run into the
> "insufficient space" problem when trying to replace with a smaller
> vdev.
>=20
> - Rich
>
> On Wed, Dec 23, 2009 at 7:29 PM, Steven Schlansker
> <stevenschlansker@gmail.com> wrote:
>>
>> On Dec 22, 2009, at 3:15 PM, Miroslav Lachman wrote:
>>
>>> Steven Schlansker wrote:
>>>> As a corollary, you may notice some funky concat business going on.
>>>> This is because I have drives which are very slightly different in size (< 1MB)
>>>> and whenever one of them goes down and I bring the pool up, it helpfully (?)
>>>> expands the pool by a whole megabyte then won't let the drive back in.
>>>> This is extremely frustrating... is there any way to fix that?  I'm
>>>> eventually going to keep expanding each of my drives one megabyte at a time
>>>> using gconcat and space on another drive!  Very frustrating...
>>>
>>> You can avoid it by partitioning the drives to a well-known 'minimal' size
>>> (the size of the smallest disk) and using the partition instead of the raw disk.
>>> For example ad12s1 instead of ad12 (if you create slices with fdisk),
>>> or ad12p1 (if you create partitions with gpart).
>>
>>
>> Yes, this makes sense.  Unfortunately, I didn't do this when I first made the array,
>> as the documentation says you should use whole disks so that it can enable the write
>> cache, which I took to mean you shouldn't use a partition table.  And now there's no
>> way to fix it after the fact, as you can't shrink a zpool even by a single MB :(
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
>
> --
>
> [We] use bad software and bad machines for the wrong things. -- R. W. Hamming



