From owner-freebsd-fs@FreeBSD.ORG Mon May 18 07:41:32 2015
Date: Mon, 18 May 2015 08:41:30 +0100
From: krad
To: Daniel Kalchev
Cc: Gabor Radnai, FreeBSD FS
Subject: Re: ZFS RAID 10 capacity expansion and uneven data distribution

Depending on your dataset, you could also break this down to the file level
rather than mess around with zfs send etc., e.g.

cp some_file some_file.new
rm some_file
mv some_file.new some_file

Just be careful with permissions etc. (you might need an extra flag or two).
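For example, a minimal sketch of that per-file rewrite applied to a whole
dataset; the /tank/data path is only illustrative, and it assumes cp -p is
enough to carry over mode, ownership and timestamps (ACLs and other extended
attributes may need extra care):

    # Rewrite each regular file so its blocks are reallocated across the
    # current vdev layout; -p preserves mode, ownership and timestamps.
    find /tank/data -type f | while IFS= read -r f; do
        cp -p "$f" "$f.new" && mv "$f.new" "$f"
    done

The mv replaces the original in one step, so the separate rm above is not
needed; note that copying breaks hard links and does nothing sensible for
files open for writing, so this is best done on quiescent data.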
On 14 May 2015 at 14:59, Daniel Kalchev wrote:

> Not total bs, but it could be made simpler/safer.
>
> Skip 2, 3, 4 and 5.
> 7a. zfs snapshot -r zpool.old@send
> 7b. zfs send -R zpool.old@send | zfs receive -F zpool
> Do not skip 8 :)
> 11. zpool attach zpool da2 da3 && zpool attach zpool da4 da5
>
> Everywhere the instructions say daX, replace it with gpt/zpool-daX as in
> the original config.
>
> After this operation you should have the exact same zpool, with evenly
> redistributed data. You could use the chance to change ashift etc. Sadly,
> this works only for mirrors.
>
> It is important to understand that from the first step onwards you have a
> non-redundant pool. It is very reasonable to do a scrub before starting
> this process, and of course to have a usable backup.
>
> Daniel
>
> > On 14.05.2015 г., at 16:42, Gabor Radnai wrote:
> >
> > Hi Kai,
> >
> > As others pointed out, the cleanest way is to destroy / recreate your
> > pool from backup.
> >
> > Though if you have no backup, a hackish, in-place recreation process
> > can be the following.
> >
> > But please be *WARNED*: it is your data, and the recommended solution
> > is to use a backup. If you follow the process below it is your call; it
> > may work, but I cannot guarantee it. You could have a power outage, a
> > disk failure, the sky falling down, whatever, and you may lose your
> > data. And this may not even work; more skilled readers may well beat me
> > over the head for how stupid this is.
> >
> > So, again, be warned.
> >
> > If you are still interested:
> >
> >> On one server I am currently using a four disk RAID 10 zpool:
> >>
> >>   zpool              ONLINE       0     0     0
> >>     mirror-0         ONLINE       0     0     0
> >>       gpt/zpool-da2  ONLINE       0     0     0
> >>       gpt/zpool-da3  ONLINE       0     0     0
> >>     mirror-1         ONLINE       0     0     0
> >>       gpt/zpool-da4  ONLINE       0     0     0
> >>       gpt/zpool-da5  ONLINE       0     0     0
> >
> > 1. zpool split zpool zpool.old
> >    This will leave your current zpool composed of da2 and da4, and
> >    create a new pool from da3 and da5.
> > 2. zpool destroy zpool
> > 3. truncate -s <size of da2> /tmp/dummy.1 && truncate -s <size of da4> /tmp/dummy.2
> > 4. zpool create zpool mirror da2 /tmp/dummy.1 mirror da4 /tmp/dummy.2
> > 5. zpool offline zpool /tmp/dummy.1 && zpool offline zpool /tmp/dummy.2
> > 6. zpool import zpool.old
> > 7. (zfs create ... on zpool as needed) copy your stuff from zpool.old
> >    to zpool
> > 8. Cross your fingers: *no* return from here!
> > 9. zpool destroy zpool.old
> > 10. zpool labelclear da3 && zpool labelclear da5   # just to be on the safe side
> > 11. zpool replace zpool /tmp/dummy.1 da3 && zpool replace zpool /tmp/dummy.2 da5
> > 12. Wait for the resilver...
> >
> > If this is total sh*t please ignore; I tried it in a VM and it seemed
> > to work.
> >
> > Thanks.
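Pulling the thread together, a minimal sketch of Daniel's simplified variant
with the gpt/zpool-daX labels from the original config filled in; this is
only an illustration of the steps quoted above, not a tested recipe, and it
assumes a verified backup and a clean scrub before starting:

    # Step 1: split each mirror. zpool keeps gpt/zpool-da2 and gpt/zpool-da4;
    # the new zpool.old gets gpt/zpool-da3 and gpt/zpool-da5. Both pools are
    # now non-redundant stripes.
    zpool split zpool zpool.old
    zpool import zpool.old

    # Steps 7a/7b: snapshot the detached copy and replay it onto the retained
    # pool, so every block is rewritten and spread across both vdevs.
    zfs snapshot -r zpool.old@send
    zfs send -R zpool.old@send | zfs receive -F zpool

    # Steps 8-10: point of no return. Drop the old copy and clear its labels
    # (add -f to labelclear if it complains about the destroyed pool).
    zpool destroy zpool.old
    zpool labelclear gpt/zpool-da3
    zpool labelclear gpt/zpool-da5

    # Step 11: re-attach the freed disks to turn the stripes back into
    # mirrors, then watch zpool status until the resilver completes.
    zpool attach zpool gpt/zpool-da2 gpt/zpool-da3
    zpool attach zpool gpt/zpool-da4 gpt/zpool-da5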