From owner-freebsd-stable@freebsd.org Thu Dec 22 16:26:39 2016
From: Alan Somers
Date: Thu, 22 Dec 2016 09:26:37 -0700
Subject: Re: cannot detach vdev from zfs pool
To: "Eugene M. Zheganin"
Cc: freebsd-stable
List-Id: Production branch of FreeBSD source code

On Thu, Dec 22, 2016 at 2:11 AM, Eugene M. Zheganin wrote:
> Hi,
>
> Recently I decided to remove the bogus zfs-inside-geli-inside-zvol pool,
> since it's now officially unsupported. So, I needed to reslice my disk,
> hence to detach one of the disks from a mirrored pool. I issued 'zpool
> detach zroot gpt/zroot1' and my system livelocked almost immediately, so
> I pressed reset. Now I get this:
>
> # zpool status zroot
>   pool: zroot
>  state: DEGRADED
> status: One or more devices has been taken offline by the administrator.
>         Sufficient replicas exist for the pool to continue functioning in a
>         degraded state.
> action: Online the device using 'zpool online' or replace the device with
>         'zpool replace'.
>   scan: resilvered 687G in 5h26m with 0 errors on Sat Oct 17 19:41:49 2015
> config:
>
>         NAME                     STATE     READ WRITE CKSUM
>         zroot                    DEGRADED     0     0     0
>           mirror-0               DEGRADED     0     0     0
>             gpt/zroot0           ONLINE       0     0     0
>             1151243332124505229  OFFLINE      0     0     0  was /dev/gpt/zroot1
>
> errors: No known data errors
>
> This isn't a big deal by itself, since I was able to create a second zfs
> pool and I'm now relocating my data to it, although I should say that
> this is a very disturbing sequence of events, because I'm now unable to
> even delete the UNAVAIL vdev from the pool. I tried to boot from a
> FreeBSD USB stick and detach it there, but all I discovered was that
> the zfs subsystem locks up on the command 'zpool detach zroot
> 1151243332124505229'. I waited for several minutes but nothing happened,
> and furthermore subsequent zpool/zfs commands hang as well.
>
> Is this worth submitting a pr, or does it maybe need additional
> investigation first? In general I intend to destroy this pool after
> relocating the data, but I'm afraid someone (or even myself again) could
> step on this later. Both disks are healthy, and I don't see any complaints
> in dmesg. I'm running FreeBSD 11.0-RELEASE-p5 here. The pool was initially
> created somewhere around 9.0, I guess.
>
> Thanks.
> Eugene.

I'm not surprised to see this kind of error in a ZFS-on-GELI-on-zvol pool.
ZFS on zvols has known deadlocks, even without involving GELI. GELI only
makes it worse, because it foils the recursion detection in zvol_open. I
wouldn't bother opening a PR if I were you, because it probably wouldn't
add any new information.

Sorry it didn't meet your expectations,
-Alan
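[For readers hitting the same situation, a sketch of the recovery steps discussed above. This is an assumption-laden outline, not advice from the thread: it is written as a dry run that only prints the commands it would execute (the `run` helper echoes instead of running), so you can review them before removing the `echo`. The pool name, vdev GUID, and device path are taken from Eugene's `zpool status` output.]

```shell
#!/bin/sh
# Dry-run sketch: detach the stale OFFLINE vdev by GUID, then clear the
# old ZFS label from the disk before reslicing it. Nothing is executed;
# each step is only printed with a "+ " prefix for review.

POOL=zroot
STALE_GUID=1151243332124505229     # GUID shown in 'zpool status' for the OFFLINE vdev
OLD_DEV=/dev/gpt/zroot1            # the partition being removed from the mirror

run() { echo "+ $*"; }             # replace 'echo "+ $*"' with "$@" to actually run

# 1. Confirm the pool layout and the stale vdev's GUID.
run zpool status "$POOL"

# 2. Detach by GUID rather than by device path, since the path is gone.
run zpool detach "$POOL" "$STALE_GUID"

# 3. Before reusing the disk, wipe its stale ZFS label so a later import
#    doesn't find the old pool metadata.
run zpool labelclear -f "$OLD_DEV"
```

Note that in the case described in the thread the `zpool detach` itself hangs, so step 2 may simply reproduce the lockup; the dry-run form at least makes it easy to verify the command line before committing to it.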