From owner-freebsd-fs@FreeBSD.ORG Thu May 14 13:42:28 2015
From: Gabor Radnai
Date: Thu, 14 May 2015 15:42:07 +0200
Subject: Re: ZFS RAID 10 capacity expansion and uneven data distribution
To: freebsd-fs@freebsd.org

Hi Kai,

As others pointed out, the cleanest way is to destroy and recreate your
pool from backup. If you have no backup, though, a hackish in-place
recreation process is possible; it follows below.

But please be *WARNED*: it is your data, and the recommended solution is
to restore from backup. If you follow the process below it is your call -
it may work, but I cannot guarantee it. You can have a power outage, a
disk failure, the sky falling down, whatever, and you may lose your data.
It may not even work at all - more skilled readers may well hit me on the
head for how stupid this is. So, again: be warned.

If you are still interested:

> On one server I am currently using a four disk RAID 10 zpool:
>
>   zpool              ONLINE       0     0     0
>     mirror-0         ONLINE       0     0     0
>       gpt/zpool-da2  ONLINE       0     0     0
>       gpt/zpool-da3  ONLINE       0     0     0
>     mirror-1         ONLINE       0     0     0
>       gpt/zpool-da4  ONLINE       0     0     0
>       gpt/zpool-da5  ONLINE       0     0     0

1. zpool split zpool zpool.old

   This leaves your current pool "zpool" composed of da2 and da4, and
   creates a new pool "zpool.old" from da3 and da5 (zpool split detaches
   the last device of each mirror into the new pool, so zpool.old gets a
   full copy of your data).

2. zpool destroy zpool

3. truncate -s 2T /tmp/dummy.1 && truncate -s 2T /tmp/dummy.2

   (truncate needs a size argument; 2T is only a placeholder - the sparse
   files must be at least as large as your real disks, and being sparse
   they take no actual space in /tmp.)

4. zpool create zpool mirror da2 /tmp/dummy.1 mirror da4 /tmp/dummy.2

   (I am using bare device names here for brevity; substitute your
   gpt/zpool-* labels if you want to keep the existing GPT layout.)

5. zpool offline zpool /tmp/dummy.1 && zpool offline zpool /tmp/dummy.2

   Both mirrors are now degraded but usable: each has one real disk
   online and one offlined dummy file.

6. zpool import zpool.old

7. (zfs create ... on zpool as needed) Copy your stuff from zpool.old to
   zpool - one way to do the copy is sketched after this list.

8. Cross your fingers - there is *no* return from here!!

9. zpool destroy zpool.old

10. zpool labelclear da3 && zpool labelclear da5   # just to be safe

11. zpool replace zpool /tmp/dummy.1 da3 && zpool replace zpool /tmp/dummy.2 da5

12. Wait for the resilver to finish ...
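For step 7, a minimal sketch of the copy using a replication stream. The
snapshot name @migrate is arbitrary, and this assumes you want the whole
dataset hierarchy moved over with snapshots and properties intact (a
plain rsync per dataset works too, but loses both):

   # take a recursive snapshot of everything on the old pool
   zfs snapshot -r zpool.old@migrate
   # replicate the full hierarchy into the new, degraded pool:
   # -F forces a rollback on the target, -d maps the sent dataset
   # names under "zpool", -u skips mounting the received datasets
   zfs send -R zpool.old@migrate | zfs receive -Fdu zpool

Before the point of no return in step 8, compare "zfs list -r zpool"
against "zfs list -r zpool.old" to make sure everything actually arrived.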
If this is total sh*t please ignore - I tried it in a VM and it seemed
to work.

Thanks.
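PS: once the resilver in step 12 completes, a cheap sanity check before
trusting the pool again - both are stock zpool(8) commands:

   zpool status zpool   # both mirrors should be ONLINE again, no errors
   zpool scrub zpool    # optional, verifies every block end to end

Since everything was rewritten in step 7, the data should now be spread
evenly across both mirrors - which was the point of the whole exercise.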