From: "Eugene M. Zheganin" <emz@norma.perm.ru>
To: freebsd-stable@freebsd.org
Subject: cannot detach vdev from zfs pool
Date: Thu, 22 Dec 2016 14:11:37 +0500
Message-ID: <585B98C9.4070607@norma.perm.ru>

Hi,

Recently I decided to remove the bogus zfs-inside-geli-inside-zvol pool, since it's now officially unsupported. To do that I needed to reslice my disk, and hence to detach one of the disks from a mirrored pool. I issued 'zpool detach zroot gpt/zroot1' and my system livelocked almost immediately, so I pressed reset. Now I get this:

# zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 687G in 5h26m with 0 errors on Sat Oct 17 19:41:49 2015
config:

        NAME                     STATE     READ WRITE CKSUM
        zroot                    DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            gpt/zroot0           ONLINE       0     0     0
            1151243332124505229  OFFLINE     0     0     0  was /dev/gpt/zroot1

errors: No known data errors

This isn't a big deal by itself, since I was able to create a second zfs pool and I'm now relocating my data to it. Still, I should say this is a very disturbing sequence of events, because I'm now unable even to delete the UNAVAIL vdev from the pool. I tried to boot from a FreeBSD USB stick and detach it there, but all I discovered was that the zfs subsystem locks up on the command 'zpool detach zroot 1151243332124505229'.
I waited for several minutes but nothing happened; furthermore, subsequent zpool/zfs commands hang as well. Is this worth submitting a PR, or does it need additional investigation first? In general I intend to destroy this pool after relocating the data, but I'm afraid someone (or even myself again) could step on this later. Both disks are healthy, and I don't see any complaints in dmesg. I'm running FreeBSD 11.0-RELEASE-p5 here. The pool was initially created somewhere under 9.0, I guess.

Thanks.
Eugene.
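For the record, the sequence I was attempting from the live USB environment is roughly the following (a sketch only, since the detach currently hangs; the GUID is the one 'zpool status' reports for the stale mirror member):

```shell
# Boot the FreeBSD 11.0 memstick, then import the pool without mounting
# its datasets, under an alternate root so nothing touches the system.
zpool import -N -f -R /mnt zroot

# Try to drop the stale mirror member by its GUID, as shown by
# 'zpool status' -- this is the command that hangs for me.
zpool detach zroot 1151243332124505229

# If the detach ever completes, clear the old ZFS label from the freed
# partition before reslicing, so nothing trips over the stale metadata.
zpool labelclear -f /dev/gpt/zroot1
```

These are standard zpool subcommands; whether labelclear is even reachable here obviously depends on the detach not hanging first.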