From: Holger Freyther
To: freebsd-fs@freebsd.org
Subject: Deadlock in zpool import with degraded pool
Date: Sun, 26 Jun 2016 16:52:30 +0000 (UTC)

Hi,

Every time I run zpool import tank, I can see disk activity with gstat; with top I see the ARC growing and free memory going down, and then the system seems to lock up. Adding a swap partition doesn't help. I have tried the zpool import with 10.1, 10.3 and 11.0-CURRENT.

My pool consists of two disks, and on the last start the second GELI partition was not attached. When I manually did geli attach + zpool online, it started to resilver. The system still had a broken disk inside, so I took this opportunity to halt and remove it. On reboot the disk changed from ada2 to ada1. With that the pool was degraded, and the only remedy I found was zpool replace to replace the physical disk with itself. As it was resilvering before, this didn't look like a big loss in terms of time.

GELI is not the fastest and I didn't want to wait too long, so I started a zfs destroy of some volumes I will not use anymore. This should have freed 300GB out of 2TB used on a ~3TB pool. Before the zfs destroy completed, the system locked up. On reboot the system unlocked both GELI partitions but got stuck on mounting the rootfs. Booting the memstick version of FreeBSD 10.3 and 11.0-CURRENT, I see the same deadlock on import (top not updating the time, no response to VTY switches, though USB keyboard plug/unplug still generates messages).

The system is an AMD64 with 12GB RAM running FreeBSD 10.1. It has two 3TB disks, a boot pool, and the main pool on top of GELI. The main pool runs with deduplication enabled.

Does this sound familiar? Is there anything I can try before saying goodbye to my backed-up data? If the pool itself cannot be repaired, is there a way to access the volumes with the data?

thanks for your help.
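
The command sequence described above might look roughly like the following sketch. The pool name "tank" is from the message, but the device names and partition suffix (ada1p3) are assumptions for illustration, not taken from the original report:

```shell
# Re-attach the missing GELI provider (prompts for the passphrase),
# then bring the vdev back online, which triggers a resilver:
geli attach /dev/ada1p3
zpool online tank ada1p3.eli

# After the ada2 -> ada1 renumbering, replace the disk "with itself"
# (zpool replace with a single device argument reuses the same device):
zpool replace tank ada1p3.eli

# One commonly suggested avenue for data recovery on a pool that
# deadlocks on import: a read-only import, which avoids resuming
# pending operations such as an interrupted zfs destroy.
zpool import -o readonly=on tank
```

These are destructive-adjacent administration commands against a live pool, so they are shown only as a sketch of the steps narrated in the message, not as tested recovery advice.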