Date: Tue, 16 May 2017 08:31:21 +0200 (CEST)
From: Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
To: Nikos Vassiliadis <nvass@gmx.com>
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: zpool imported twice with different names (was Re: Fwd: ZFS)
Message-ID: <alpine.BSF.2.21.1705160825130.40966@mail.fig.ol.no>
In-Reply-To: <ca7b47a7-7512-3cbb-d47b-6ef546dffd74@gmx.com>
References: <7c059678-4af4-f0c9-ff3b-c6266e02fb7a@gmx.com>
 <adf4ab9f-72f1-ed0f-fee2-82caba3af4a4@gmx.com>
 <ca7b47a7-7512-3cbb-d47b-6ef546dffd74@gmx.com>
On Mon, 15 May 2017 20:11 +0200, Nikos Vassiliadis wrote:

> Fix the e-mail subject
>
> On 05/15/2017 08:09 PM, Nikos Vassiliadis wrote:
> > Hi everybody,
> >
> > While trying to rename a zpool from zroot to vega,
> > I ended up in this strange situation:
> >
> > nik@vega:~ % zfs list -t all
> > NAME                 USED  AVAIL  REFER  MOUNTPOINT
> > vega                1.83G  34.7G    96K  /zroot
> > vega/ROOT           1.24G  34.7G    96K  none
> > vega/ROOT/default   1.24G  34.7G  1.24G  /
> > vega/tmp             120K  34.7G   120K  /tmp
> > vega/usr             608M  34.7G    96K  /usr
> > vega/usr/home        136K  34.7G   136K  /usr/home
> > vega/usr/ports        96K  34.7G    96K  /usr/ports
> > vega/usr/src         607M  34.7G   607M  /usr/src
> > vega/var             720K  34.7G    96K  /var
> > vega/var/audit        96K  34.7G    96K  /var/audit
> > vega/var/crash        96K  34.7G    96K  /var/crash
> > vega/var/log         236K  34.7G   236K  /var/log
> > vega/var/mail        100K  34.7G   100K  /var/mail
> > vega/var/tmp          96K  34.7G    96K  /var/tmp
> > zroot               1.83G  34.7G    96K  /zroot
> > zroot/ROOT          1.24G  34.7G    96K  none
> > zroot/ROOT/default  1.24G  34.7G  1.24G  /
> > zroot/tmp            120K  34.7G   120K  /tmp
> > zroot/usr            608M  34.7G    96K  /usr
> > zroot/usr/home       136K  34.7G   136K  /usr/home
> > zroot/usr/ports       96K  34.7G    96K  /usr/ports
> > zroot/usr/src        607M  34.7G   607M  /usr/src
> > zroot/var            724K  34.7G    96K  /var
> > zroot/var/audit       96K  34.7G    96K  /var/audit
> > zroot/var/crash       96K  34.7G    96K  /var/crash
> > zroot/var/log        240K  34.7G   240K  /var/log
> > zroot/var/mail       100K  34.7G   100K  /var/mail
> > zroot/var/tmp         96K  34.7G    96K  /var/tmp
> >
> > nik@vega:~ % zpool status
> >   pool: vega
> >  state: ONLINE
> >   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> > config:
> >
> >         NAME        STATE     READ WRITE CKSUM
> >         vega        ONLINE       0     0     0
> >           vtbd0p3   ONLINE       0     0     0
> >
> > errors: No known data errors
> >
> >   pool: zroot
> >  state: ONLINE
> >   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> > config:
> >
> >         NAME        STATE     READ WRITE CKSUM
> >         zroot       ONLINE       0     0     0
> >           vtbd0p3   ONLINE       0     0     0
> >
> > errors: No known data errors
> > nik@vega:~ %
> > -------------------------------------------
> >
> > It seems like there are two pools, sharing the same vdev...
> >
> > After running a few commands in this state, like doing a scrub,
> > the pool was (most probably) destroyed. It couldn't boot anymore
> > and I didn't research further. Is this a known bug?

I guess you had a /boot/zfs/zpool.cache file referring to the original
zroot pool. Next, the kernel found the vega pool and didn't realise
these two pools are the very same.

> > Steps to reproduce:
> > install FreeBSD-11.0 in a pool named zroot
> > reboot into a live-CD

Redo the above steps.

> > zpool import -f zroot vega

Do these four commands instead of a regular import:

mkdir /tmp/vega
zpool import -N -f -o cachefile=/tmp/zpool.cache vega
mount -t zfs vega/ROOT/default /tmp/vega
cp -p /tmp/zpool.cache /tmp/vega/boot/zfs/zpool.cache

> > reboot again

Reboot again.

> >
> > Thanks,
> > Nikos
> >
> > PS:
> > Sorry for the cross-posting, I am doing this to share to more people
> > because it is a rather easy way to destroy a ZFS pool.

-- 
+-------------------------------+------------------------------------+
| Vennlig hilsen,               | Best regards,                      |
| Trond Endrestøl,              | Trond Endrestøl,                   |
| IT-ansvarlig,                 | System administrator,              |
| Fagskolen Innlandet,          | Gjøvik Technical College, Norway,  |
| tlf. mob. 952 62 567,         | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.      | Switchboard: +47 61 14 54 00.      |
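
For anyone who wants to verify that a stale cachefile really is what
confuses the kernel, a quick check is to compare the pool GUID recorded
in the cachefile, the GUID of the pool as imported, and the GUID stored
in the vdev label. This is only a sketch, assuming the default FreeBSD
cachefile path and the vega/vtbd0p3 names from the output above; the
exact zdb output format differs between releases:

    # pool name(s) and GUID(s) recorded in the boot-time cachefile
    zdb -C -U /boot/zfs/zpool.cache | grep -E 'name|pool_guid'

    # GUID of the pool as it is currently imported
    zpool get guid vega

    # name and pool_guid stored in the on-disk label of the vdev
    zdb -l /dev/vtbd0p3 | grep -E 'name|pool_guid'

If all three report the same pool_guid, there is only one pool on disk
and the duplicate entry comes from the stale name in the cachefile. Note
also that if the pool on disk is still named zroot at the time of the
import, the renaming form of the import takes both the old and the new
name, i.e. zpool import -N -f -o cachefile=/tmp/zpool.cache zroot vega.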
+-------------------------------+------------------------------------+

From owner-freebsd-stable@freebsd.org Tue May 16 14:26:47 2017

From: "Eric A. Borisch" <eborisch@gmail.com>
Date: Tue, 16 May 2017 09:26:46 -0500
To: Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
Cc: Nikos Vassiliadis <nvass@gmx.com>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>,
 FreeBSD Stable <freebsd-stable@freebsd.org>
Subject: Re: zpool imported twice with different names (was Re: Fwd: ZFS)
Message-ID: <CAASnNnrwTuJkMG2p+cXoKVfRQKnA5NccRGodwNUdCyG5j-L_LA@mail.gmail.com>
In-Reply-To: <alpine.BSF.2.21.1705160825130.40966@mail.fig.ol.no>
References: <7c059678-4af4-f0c9-ff3b-c6266e02fb7a@gmx.com>
 <adf4ab9f-72f1-ed0f-fee2-82caba3af4a4@gmx.com>
 <ca7b47a7-7512-3cbb-d47b-6ef546dffd74@gmx.com>
 <alpine.BSF.2.21.1705160825130.40966@mail.fig.ol.no>

On Tue, May 16, 2017 at 1:31 AM, Trond Endrestøl
<Trond.Endrestol@fagskolen.gjovik.no> wrote:

> I guess you had a /boot/zfs/zpool.cache file referring to the original
> zroot pool. Next, the kernel found the vega pool and didn't realise
> these two pools are the very same.

Assuming this is the case, shouldn't it be fixed? A check while importing,
verifying that the GUID of the pool targeted for import is not in the set
of currently active GUIDs, would be worthwhile, but apparently no such
check exists (assuming this is indeed reproducible).

 - Eric