Date: Mon, 18 May 2015 08:41:30 +0100
From: krad <kraduk@gmail.com>
To: Daniel Kalchev <daniel@digsys.bg>
Cc: Gabor Radnai <gabor.radnai@gmail.com>, FreeBSD FS <freebsd-fs@freebsd.org>
Subject: Re: ZFS RAID 10 capacity expansion and uneven data distribution
Message-ID: <CALfReyfY0rVz-=iFRs1sgB=4oD4y4Syq3Stax_Xv7ZubmHy%2BYw@mail.gmail.com>
In-Reply-To: <C46F686C-4765-4B0F-8A7D-F5670936FC62@digsys.bg>
References: <CABnVG=cc_7UNMO=XUFq4esPDZyZO8wDXhfXnA4tXSu77raK42Q@mail.gmail.com> <C46F686C-4765-4B0F-8A7D-F5670936FC62@digsys.bg>
Depending on your dataset, you could also break it down to the file level rather than mess around with zfs send etc., e.g.:

cp some_file some_file.new
rm some_file
mv some_file.new some_file

Just be careful with permissions etc. (you might need a flag or two extra).

On 14 May 2015 at 14:59, Daniel Kalchev <daniel@digsys.bg> wrote:

> Not total bs, but.. it could be made simpler/safer.
>
> skip 2, 3, 4 and 5
> 7a. zfs snapshot -r zpool.old@send
> 7b. zfs send -R zpool.old@send | zfs receive -F zpool
> do not skip 8 :)
> 11. zpool attach zpool da2 da3 && zpool attach zpool da4 da5
>
> Everywhere in the instructions where it says daX, replace with gpt/zpool-daX
> as in the original config.
>
> After this operation, you should have the exact same zpool, with evenly
> redistributed data. You could use the chance to change ashift etc. Sadly,
> this works only for mirrors.
>
> It is important to understand that from the first step on you have a
> non-redundant pool. It's very reasonable to do a scrub before starting this
> process and of course to have a usable backup.
>
> Daniel
>
> > On 14.05.2015, at 16:42, Gabor Radnai <gabor.radnai@gmail.com> wrote:
> >
> > Hi Kai,
> >
> > As others pointed out, the cleanest way is to destroy / recreate your pool
> > from backup.
> >
> > Though if you have no backup, a hackish, in-place recreation process can
> > be the following.
> > But please be *WARNED*: it is your data. The recommended solution is to
> > use backup; if you follow the process below it is your call - it may work
> > but I cannot guarantee it. You can have a power outage, disk outage, sky
> > falling down, whatever, and you may lose your data.
> > And this may not even work - more skilled readers could beat me over the
> > head for how stupid this is.
> >
> > So, again, be warned.
> >
> > If you are still interested:
> >
> >> On one server I am currently using a four disk RAID 10 zpool:
> >>
> >> zpool               ONLINE 0 0 0
> >>   mirror-0          ONLINE 0 0 0
> >>     gpt/zpool-da2   ONLINE 0 0 0
> >>     gpt/zpool-da3   ONLINE 0 0 0
> >>   mirror-1          ONLINE 0 0 0
> >>     gpt/zpool-da4   ONLINE 0 0 0
> >>     gpt/zpool-da5   ONLINE 0 0 0
> >
> > 1. zpool split zpool zpool.old
> >    This will leave your current zpool composed of the da2 and da4 halves,
> >    and create a new pool from da3 and da5.
> > 2. zpool destroy zpool
> > 3. truncate -s <proper size> /tmp/dummy.1 && truncate -s <proper size> /tmp/dummy.2
> > 4. zpool create <flags> zpool mirror da2 /tmp/dummy.1 mirror da4 /tmp/dummy.2
> > 5. zpool offline zpool /tmp/dummy.1 && zpool offline zpool /tmp/dummy.2
> > 6. zpool import zpool.old
> > 7. (zfs create ... on zpool as needed) copy your stuff from zpool.old to zpool
> > 8. cross your fingers, *no* return from here !!
> > 9. zpool destroy zpool.old
> > 10. zpool labelclear da3 && zpool labelclear da5  # just to be on the safe side
> > 11. zpool replace zpool /tmp/dummy.1 da3 && zpool replace zpool /tmp/dummy.2 da5
> > 12. wait for the resilver ...
> >
> > If this is total sh*t please ignore; I tried it in a VM and it seemed to work.
> >
> > Thanks.
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
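The sparse-file placeholder in steps 3-5 is the trick that makes the rebuild work: ZFS accepts file-backed vdevs, and offlining them immediately means nothing is ever written to them. A minimal sketch of just that part, under stated assumptions: the 1G size and /tmp paths are placeholders (on a real system the files must be at least as large as da3/da5, e.g. as reported by diskinfo(8)), and the zpool commands themselves are left as comments because they are destructive and need the real devices:

```shell
#!/bin/sh
set -e

# Placeholder size for this sketch only. In reality it must be at least
# the size of the disks being re-added later (da3/da5).
SIZE=1G

# Step 3: create sparse files. They appear instantly and consume no real
# disk space until something writes to them.
truncate -s "$SIZE" /tmp/dummy.1
truncate -s "$SIZE" /tmp/dummy.2

# Step 4 (destructive, illustration only -- needs the real devices):
# zpool create zpool mirror gpt/zpool-da2 /tmp/dummy.1 \
#                    mirror gpt/zpool-da4 /tmp/dummy.2

# Step 5: offline both file vdevs right away so ZFS never writes to them
# and /tmp cannot fill up while the data is copied over:
# zpool offline zpool /tmp/dummy.1
# zpool offline zpool /tmp/dummy.2

# The files report the full logical size but occupy (almost) no blocks:
ls -l /tmp/dummy.1 /tmp/dummy.2
```

Because the dummy halves stay offline, both mirrors run degraded until step 11 swaps the real disks back in, which is another reason the pool is effectively non-redundant for the whole procedure.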
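The file-level copy/remove/rename alternative suggested at the top of the thread can be sketched as a small helper, with the caveats made explicit. This is a hedged sketch, not the poster's exact method: the -p flag (preserving mode, ownership and timestamps) is one reading of the "flag or two extra" caveat, and things like hard links, ZFS ACLs and sparse files would still need extra handling:

```shell
#!/bin/sh
set -e

# Rewrite a file in place so its blocks are reallocated, which on a
# freshly expanded pool lands them on the new, emptier vdev.
rebalance_file() {
    f=$1
    cp -p "$f" "$f.new"   # -p preserves mode/ownership/timestamps
    rm "$f"
    mv "$f.new" "$f"
}

# Demo on a throwaway file. On the real pool you would walk the dataset
# instead, e.g. with find(1), and only while nothing else has the files open.
echo "hello" > /tmp/rebalance-demo
rebalance_file /tmp/rebalance-demo
cat /tmp/rebalance-demo
```

Note the rm-then-mv window during which the file briefly does not exist; anything reading the file concurrently would need a safer rename-over-top sequence instead.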