Date:      Thu, 14 May 2015 16:59:20 +0300
From:      Daniel Kalchev <daniel@digsys.bg>
To:        Gabor Radnai <gabor.radnai@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS RAID 10 capacity expansion and uneven data distribution
Message-ID:  <C46F686C-4765-4B0F-8A7D-F5670936FC62@digsys.bg>
In-Reply-To: <CABnVG=cc_7UNMO=XUFq4esPDZyZO8wDXhfXnA4tXSu77raK42Q@mail.gmail.com>
References:  <CABnVG=cc_7UNMO=XUFq4esPDZyZO8wDXhfXnA4tXSu77raK42Q@mail.gmail.com>

Not total BS, but it could be made simpler/safer.

skip 2,3,4 and 5
7a. zfs snapshot -r zpool.old@send
7b. zfs send -R zpool.old@send | zfs receive -F zpool
do not skip 8 :)
11. zpool attach zpool da1 da2 && zpool attach zpool da3 da4

Everywhere in the instructions where it says daX, replace it with
gpt/zpool-daX as in the original config (the full sequence is sketched
below).
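
Put together, and with my guess at the daX -> gpt/zpool-daX mapping from
the original config, the whole sequence would look roughly like this. A
sketch only, assuming zpool split does the default thing and detaches
the second disk of each mirror (gpt/zpool-da3 and gpt/zpool-da5) into
zpool.old; try it on a scratch pool first:

    zpool split zpool zpool.old     # zpool keeps gpt/zpool-da2, gpt/zpool-da4
    zpool import zpool.old          # split-off half: gpt/zpool-da3, gpt/zpool-da5
    zfs snapshot -r zpool.old@send
    zfs send -R zpool.old@send | zfs receive -F zpool
    # point of no return from here on
    zpool destroy zpool.old
    zpool labelclear gpt/zpool-da3 && zpool labelclear gpt/zpool-da5
    zpool attach zpool gpt/zpool-da2 gpt/zpool-da3
    zpool attach zpool gpt/zpool-da4 gpt/zpool-da5
    zpool status zpool              # wait for the resilver to finish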

After this operation, you should have the exact same zpool, with evenly
redistributed data. You could use the chance to change ashift etc.
Sadly, this works only for mirrors.
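
To see whether the data really ended up spread evenly, the per-vdev
allocation can be checked with e.g.

    zpool iostat -v zpool    # alloc should be roughly equal for mirror-0 and mirror-1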

Important to understand that from the first step on you have a
non-redundant pool. It's very reasonable to do a scrub before starting
this process and of course to have a usable backup.
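
E.g. (nothing here is specific to this pool):

    zpool scrub zpool
    zpool status zpool    # wait until the scrub finishes with 0 errors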

Daniel

> On 14.05.2015, at 16:42, Gabor Radnai <gabor.radnai@gmail.com> wrote:
>
> Hi Kai,
>
> As others pointed out, the cleanest way is to destroy / recreate your
> pool from backup.
>
> Though if you have no backup, a hackish, in-place recreation process
> can be the following.
> But please be *WARNED*: it is your data, the recommended solution is
> to use a backup.
> If you follow the process below it is your call - it may work but I
> cannot guarantee it. You can have a power outage, a disk outage, the
> sky falling down, whatever, and you may lose your data.
> And this may not even work - more skilled readers could hit me on the
> head for how stupid this is.
>
> So, again, be warned.
>
> If you are still interested:
>
>> On one server I am currently using a four disk RAID 10 zpool:
>>
>> 	zpool              ONLINE       0     0     0
>> 	  mirror-0         ONLINE       0     0     0
>> 	    gpt/zpool-da2  ONLINE       0     0     0
>> 	    gpt/zpool-da3  ONLINE       0     0     0
>> 	  mirror-1         ONLINE       0     0     0
>> 	    gpt/zpool-da4  ONLINE       0     0     0
>> 	    gpt/zpool-da5  ONLINE       0     0     0
>
>=20
> 1. zpool split zpool zpool.old
> this will leave your current zpool composed of the slices on da2 and
> da4, and create a new pool from da3 and da5.
> 2. zpool destroy zpool
> 3. truncate -s <proper size> /tmp/dummy.1 && truncate -s <proper size>
> /tmp/dummy.2
> (see the note on sizing the dummy files after this list)
> 4. zpool create <flags> zpool mirror da2 /tmp/dummy.1 mirror da4
> /tmp/dummy.2
> 5. zpool offline zpool /tmp/dummy.1 && zpool offline zpool /tmp/dummy.2
> 6. zpool import zpool.old
> 7. (zfs create ... on zpool as needed) copy your stuff from zpool.old
> to zpool
> 8. cross your fingers, *no* return from here !!
> 9. zpool destroy zpool.old
> 10. zpool labelclear da3 && zpool labelclear da5 # just to be on the
> safe side
> 11. zpool replace zpool /tmp/dummy.1 da3 && zpool replace zpool
> /tmp/dummy.2 da5
> 12. wait for the resilver ...
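>
> A note on <proper size> in step 3: a mirror vdev is only as big as its
> smallest member, so the dummy files should be at least as large as the
> real gpt/zpool-daX partitions, otherwise the new pool comes up smaller
> than the old one. A minimal sketch, assuming diskinfo's default output
> where the third field is the media size in bytes:
>
>     # size of one of the real partitions, in bytes
>     size=$(diskinfo /dev/gpt/zpool-da3 | awk '{print $3}')
>     truncate -s "$size" /tmp/dummy.1
>     truncate -s "$size" /tmp/dummy.2
>
> The files are sparse, so they take almost no space in /tmp until the
> pool labels are written to them.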
>
> If this is total sh*t please ignore; I tried it in a VM and it seemed
> to work.
>
> Thanks.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



