From: Dave Cottlehuber <dch@skunkwerks.at>
To: freebsd-questions@freebsd.org, bennett@sdf.org
Subject: Re: ZFS error message questions
Date: Wed, 31 Jan 2018 22:59:12 +0100
In-Reply-To: <201801100811.w0A8B62K027111@sdf.org>

On Wed, 10 Jan 2018, at 09:11, Scott Bennett wrote:

Hopefully somebody replied off-line to get you out of your dilemma, Scott.
If not, at least I hope this is useful to your understanding.

> After running 11.1-STABLE with root on ZFS for a week and a half or so,
> I apparently screwed something up such that the system became unusable
> while trying to enter multi-user mode. It looked like the easiest way to
> correct the mess would be to wipe it out and re-install.
I'm a huge fan of using Boot Environments via sysutils/beadm-devel for
this. Basically your / filesystem lives in zroot/ROOT/a_meaningful_name,
except you can:

- have multiple copies / versions
- select one of those at boot time
- keep "anything useful" on a dataset that lives outside the boot
  environment (zroot/usr/home for example) so that it persists across
  environments
- build and install from source into a new boot environment, or try out
  the next release safely inside there
- have super powers for sysadmins

There's a rough sketch of the beadm commands further down in this mail.

> However, I wanted to save the home directory tree because there had been
> important changes in it since it had been imported from the 10.3-STABLE
> system with root, /usr/home, and so on on UFS2, so I ran
>
> zfs send -ReL -I @15dec2017.freshlyinstalled-11.1S-r236620 zrootx/usr/home@28dec2017 > /mnt/bootdisks/usrhome.28dec2017.zfs-send
>
> which is supposed, as I understand it, to have saved a full replication
> stream, including four snapshots. Now I wish to receive this into the
> newly re-installed system, but here is what happens.
>
> Script started on Sun Jan 7 00:54:39 2018
> # zfs recv -dv zrootx
> receiving incremental stream of zrootx/usr/home@16dec2017.post-import-from-10.3S into zrootx/usr/home@16dec2017.post-import-from-10.3S
> cannot receive incremental stream: most recent snapshot of zrootx/usr/home does not match incremental source
> # exit

Almost - you're not starting from the beginning of the zfs dataset, but
from a snapshot partway through, 15dec2017.freshlyinstalled-11.1S-r236620.
As an example, if we squint hard and pretend the dataset is just a simple
array, e.g. [0,1,2,3,4] is the entire dataset and all its snapshots, then
you've sent [2,3,4] along, but you still need [0,1] to have all the data.
To create a valid dataset you also need those first few chunks: anything
that hasn't changed in [2,3,4] only exists in the early blocks in [0,1].

If you don't need the earlier snapshots, it's sufficient to
`zfs send -Lev zrootx/usr/home@28dec2017 > zrootx_usr_home_28dec2017.zfs`.
Just remember that's not recursive, so you'll need to do that for each
dataset separately. This is likely to be a lot smaller than the full set
with all intermediary snapshots.

> Script done on Sun Jan 7 00:55:59 2018
>
> I have two questions at this point.
> 1) What in hell does this apparently undocumented error message mean?

The zfs recv is normally (aka I do this and I hope it's common practice)
done with `zfs recv -Fuvs zroot/usr/home`, which will try to roll back
(the -F) any later snapshots on the receiving dataset to find the last
preceding snapshot that your [2,3,4] stream needs to link onto - the [1]
snapshot is that predecessor. So the error is simply saying "I can't find
anything to continue on from": your destination dataset has no blocks in
common at all with the incoming replication stream, because [0,1] are
missing.

> 2) Is there still any way to recover what I need from the saved
> stream? It's a couple of hundred gigabytes in length, so if it's just
> junk now, I'd like to free up the space.

If the original [0,1] snapshots are no longer around, then this is
difficult. I am sure there are people who can extract the changes from the
incomplete zfs stream, but to my knowledge there's no magic command-line
tool to do so. I'd love to be proved wrong, and have something like a .zip
or .tar recovery tool that can work with incomplete streams.
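For next time, here's a rough sketch of saving and restoring the whole
dataset with a full replication stream, so the receiving side gets [0,1]
as well. The snapshot and file names are the ones from your mail; the
destination dataset name is just something I made up, and I haven't run
these exact lines, so treat it as a sketch rather than gospel:

    # on the old system: a full replication stream, which starts at the
    # very first block of the dataset and includes every snapshot up to
    # @28dec2017 (no -i/-I, so nothing before it is assumed to exist)
    zfs send -RLev zrootx/usr/home@28dec2017 \
        > /mnt/bootdisks/usrhome.28dec2017.full.zfs

    # on the reinstalled system: receive it into a new dataset name so
    # nothing existing has to be rolled back, and keep it unmounted (-u)
    # until you've had a look at it
    zfs recv -uv zroot/usr/home_recovered \
        < /mnt/bootdisks/usrhome.28dec2017.full.zfs

Once that's in, you can mount it read-only, copy out the few files you
care about, and either `zfs rename` it into place or destroy it.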
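And since I mentioned boot environments above, here's the rough beadm
shape of it. The environment name is just an example and I'm writing this
from memory, so check beadm(1) before relying on it:

    # clone the currently running environment under a new name
    beadm create next-release

    # point the loader at it for the next boot; the current environment
    # is untouched, so you can switch back if the new one misbehaves
    beadm activate next-release

    # after rebooting, check which environment you actually booted into
    beadm list

Because zroot/usr/home lives outside zroot/ROOT, it's the same dataset
whichever environment you boot, which is what makes the "wipe and
re-install" step unnecessary.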
> I think I only need three or four files that have changed since I
> imported /usr/home into 11.1-STABLE from 10.3-STABLE by
>
> dump 0f - /usr/home>/mnt/bootdisks/usrhome
>
> on the 10.3-STABLE system, followed by
>
> cd /usr/home
> restore rDf /mnt/bootdisks/usr/home

Some references for future BE (boot environment) fun are here:

http://www.infracaninophile.co.uk/articles/zfs-update-management/
http://www.callfortesting.org/bhyve-boot-environments/
https://dan.langille.org/2017/09/30/upgrading-from-freebsd-10-3-to-11-1-via-freebsd-update-and-beadm/

A+
Dave