Date: Sun, 27 Jan 2013 20:08:06 +0100
From: Ulrich Spörlein <uqs@FreeBSD.org>
To: Fabian Keil
Cc: current@FreeBSD.org, fs@FreeBSD.org
Subject: Re: Zpool surgery
Message-ID: <20130127190806.GQ35868@acme.spoerlein.net>
In-Reply-To: <20130127145601.7f650d3c@fabiankeil.de>

On Sun, 2013-01-27 at 14:56:01 +0100, Fabian Keil wrote:
> Ulrich Spörlein wrote:
>
> > I have a slight problem with transplanting a zpool; maybe this is not
> > possible the way I'd like to do it, maybe I need to fuzz some
> > identifiers...
> >
> > I want to transplant my old zpool tank from a 1TB drive to a new 2TB
> > drive, but *not* use dd(1) or any other cloning mechanism, as the pool
> > was very full very often and is surely severely fragmented.
> >
> > So, I have tank (the old one), the new one, let's call it tank', and
> > then there's the archive pool where snapshots from tank are sent to,
> > and these should now come from tank' in the future.
> >
> > I have:
> > tank  -> sending snapshots to archive
> >
> > I want:
> > tank' -> sending snapshots to archive
> >
> > Ideally I would want archive to not even know that tank and tank' are
> > different, so as to not have to send a full snapshot again, but
> > continue the incremental snapshots.
> >
> > So I did zfs send -R tank | ssh otherhost "zfs recv -d tank" and that
> > worked well; this contained a snapshot A that was also already on
> > archive. Then I made a final snapshot B on tank, before turning down
> > that pool, and sent it to tank' as well.
> >
> > Now I have snapshot A on tank, tank' and archive, and they are
> > virtually identical. I have snapshot B on tank and tank' and would
> > like to send this from tank' to archive, but it complains:
> >
> >   cannot receive incremental stream: most recent snapshot of archive
> >   does not match incremental source
>
> In general this should work, so I'd suggest that you double check
> that you are indeed sending the correct incremental.
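Just to spell the steps out, this is roughly what happened (host names
are placeholders, and A and B stand in for the real snapshot names):

    # full replication of the old pool to the new 2TB disk
    zfs send -R tank@A | ssh otherhost "zfs recv -d tank"
    # final snapshot on the old pool, replicated to tank' as well
    zfs snapshot -r tank@B
    zfs send -R -i tank@A tank@B | ssh otherhost "zfs recv -d tank"
    # the step that now fails: continuing the incrementals from tank' to archive
    zfs send -i tank@A tank@B | ssh archivehost "zfs recv -d archive"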
> > Is there a way to tweak the identity of tank' to be *really* the same
> > as tank, so that archive can accept that incremental stream? Or should
> > I use dd(1) after all to transplant tank to tank'? My other option
> > would be to turn on dedup on archive and send another full stream of
> > tank', 99.9% of which would hopefully be deduped and not consume
> > precious space on archive.
>
> The pools don't have to be the same.
>
> I wouldn't consider dedup, as you'll have to recreate the pool if it
> turns out that the dedup performance is pathetic. On a system that
> hasn't been created with dedup in mind that seems rather likely.
>
> > Any ideas?
>
> Your whole procedure seems a bit complicated to me.
>
> Why don't you use "zpool replace"?

Ehhh... "zpool replace", eh? I have to say I didn't know that option
was available. But since the new drive sits in a newer machine, I needed
some way to do this over the network, so a direct zpool replace was not
that easy.

I dug out an old ATA-to-USB case and will use that to attach the old
tank to the new machine and then have a try at this zpool replace thing.

How will that affect the fragmentation level of the new pool? Will the
resilver do something sensible wrt. keeping files together for better
read-ahead performance?

Cheers,
Uli
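P.S. For the archives, my reading of the zpool replace route, once both
disks hang off the new machine, is roughly this (device names are made
up, substitute the real old and new disks):

    # swap the old 1TB provider for the new 2TB disk; ZFS resilvers onto it
    zpool replace tank da0 ada1
    # the old device is detached automatically once the resilver completes
    zpool status tank
    # afterwards, let the vdev grow to the disk's full 2TB
    zpool online -e tank ada1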