From owner-freebsd-fs@FreeBSD.ORG Thu May 14 14:10:30 2015
From: Daniel Kalchev <daniel@digsys.bg>
To: Gabor Radnai
Cc: freebsd-fs@freebsd.org
Date: Thu, 14 May 2015 16:59:20 +0300
Subject: Re: ZFS RAID 10 capacity expansion and uneven data distribution

Not total BS, but it could be made simpler/safer.

Skip 2, 3, 4 and 5.

7a. zfs snapshot -r zpool.old@send
7b. zfs send -R zpool.old@send | zfs receive -F zpool

Do not skip 8 :)

11. zpool attach zpool da2 da3 && zpool attach zpool da4 da5

Everywhere in the instructions where it says daX, replace it with gpt/zpool-daX as in the original config.

After this operation you should have the exact same zpool, with the data evenly redistributed. You could use the chance to change ashift etc. Sadly, this works only for mirrors.

It is important to understand that from the first step on you have a non-redundant pool. It is very reasonable to do a scrub before starting this process, and of course to have a usable backup.

Daniel
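
For reference, Daniel's substitutions folded into Gabor's numbering give roughly the sequence below. This is only a sketch, assuming the gpt/zpool-da2..da5 labels from the quoted config; apart from the comments and the final status check, every command comes straight from the two mails.

  # scrub and back up first, as noted above
  # step 1: split the mirrors; zpool keeps gpt/zpool-da2 and gpt/zpool-da4,
  #         the new zpool.old gets gpt/zpool-da3 and gpt/zpool-da5
  zpool split zpool zpool.old
  # step 6: the split-off half is left exported, so bring it back online
  zpool import zpool.old
  # steps 7a/7b: rewrite everything onto zpool, spreading it evenly
  #              across both remaining top-level vdevs
  zfs snapshot -r zpool.old@send
  zfs send -R zpool.old@send | zfs receive -F zpool
  # step 8: cross your fingers, no return from here
  # steps 9/10: free the disks that held the old copy
  zpool destroy zpool.old
  zpool labelclear gpt/zpool-da3
  zpool labelclear gpt/zpool-da5
  # step 11: restore redundancy by attaching them back as mirror halves
  zpool attach zpool gpt/zpool-da2 gpt/zpool-da3
  zpool attach zpool gpt/zpool-da4 gpt/zpool-da5
  # step 12: wait for the resilver to finish
  zpool status zpool

Because all data is rewritten through zfs send/receive onto a pool that already has both top-level vdevs, the new writes land evenly across them, which is the point of the exercise.
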
> On 14.05.2015, at 16:42, Gabor Radnai wrote:
>
> Hi Kai,
>
> As others pointed out, the cleanest way is to destroy / recreate your pool
> from backup.
>
> Though if you have no backup, a hackish, in-place recreation process can be
> the following. But please be *WARNED*: it is your data, and the recommended
> solution is to use a backup. If you follow the process below it is your call -
> it may work, but I cannot guarantee it. You can have a power outage, a disk
> outage, the sky falling down, whatever, and you may lose your data. And this
> may not even work - more skilled readers could hit me on the head for how
> stupid this is.
>
> So, again, be warned.
>
> If you are still interested:
>
>> On one server I am currently using a four disk RAID 10 zpool:
>>
>> zpool                ONLINE       0     0     0
>>   mirror-0           ONLINE       0     0     0
>>     gpt/zpool-da2    ONLINE       0     0     0
>>     gpt/zpool-da3    ONLINE       0     0     0
>>   mirror-1           ONLINE       0     0     0
>>     gpt/zpool-da4    ONLINE       0     0     0
>>     gpt/zpool-da5    ONLINE       0     0     0
>
> 1. zpool split zpool zpool.old
>    This will leave your current zpool composed of the da2 and da4 slices,
>    and create a new pool from da3 and da5.
> 2. zpool destroy zpool
> 3. truncate -s <size> /tmp/dummy.1 && truncate -s <size> /tmp/dummy.2
> 4. zpool create zpool mirror da2 /tmp/dummy.1 mirror da4 /tmp/dummy.2
> 5. zpool offline zpool /tmp/dummy.1 && zpool offline zpool /tmp/dummy.2
> 6. zpool import zpool.old
> 7. (zfs create ... on zpool as needed) copy your stuff from zpool.old to zpool
> 8. cross your fingers, *no* return from here !!
> 9. zpool destroy zpool.old
> 10. zpool labelclear da3 && zpool labelclear da5   # just to be on the safe side
> 11. zpool replace zpool /tmp/dummy.1 da3 && zpool replace zpool /tmp/dummy.2 da5
> 12. wait for resilver ...
>
> If this is total sh*t please ignore; I tried it in a VM and it seemed to work.
>
> Thanks.
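
For anyone following Gabor's original dummy-file recipe instead, steps 3 to 5 hinge on the sparse files being at least as large as the real providers, because a mirror vdev is sized by its smallest member at creation time. A minimal sketch of those steps; diskinfo, the 2000398934016-byte figure and the /tmp paths are illustrative placeholders, not values from the original mails:

  # find the exact size of one of the real providers
  diskinfo -v /dev/gpt/zpool-da2 | grep 'mediasize in bytes'
  # create sparse dummies of at least that size; they allocate no real space
  truncate -s 2000398934016 /tmp/dummy.1
  truncate -s 2000398934016 /tmp/dummy.2
  # recreate the pool with one real disk and one dummy per mirror
  # (zpool create may need -f, e.g. because a mirror mixes a disk and a file)
  zpool create zpool mirror gpt/zpool-da2 /tmp/dummy.1 mirror gpt/zpool-da4 /tmp/dummy.2
  # offline the dummies immediately so nothing is ever written to them
  zpool offline zpool /tmp/dummy.1
  zpool offline zpool /tmp/dummy.2
  zpool status zpool   # both mirrors now show DEGRADED, which is expected here

From there, step 11's zpool replace swaps each dummy for the matching freed disk (da3 and da5), and the pool is fully mirrored again once the resilver in step 12 completes.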