Subject: Re: ZFS stalled after some mirror disks were lost
From: Andriy Gapon <avg@FreeBSD.org>
To: Ben RUBSON, Freebsd fs
Date: Tue, 3 Oct 2017 00:06:25 +0300

On 02/10/2017 23:55, Andriy Gapon wrote:
> Maybe that caused a domino effect in ZFS code.  I see a lot of threads waiting
> either for spa_namespace_lock or a spa config lock (a highly specialized ZFS
> lock).  But it is hard to untangle their inter-dependencies.

Forgot to add: it would be nice to determine the owner of spa_namespace_lock.
If you have debug symbols, this can easily be done in kgdb on the live system:

(kgdb) p spa_namespace_lock

-- 
Andriy Gapon
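
P.S. In case it helps, a sketch of how the lookup could continue from there.
This assumes the stock FreeBSD opensolaris compat shim, where kmutex_t is a
plain sx lock; the 0x1f flag mask below stands in for SX_LOCK_FLAGMASK and is
an assumption that may need adjusting for the exact kernel version.  When the
sx lock is held exclusively, the low bits of the sx_lock word are flag bits
and the rest is the owning thread pointer, so something like:

(kgdb) p/x spa_namespace_lock.sx_lock
(kgdb) p *(struct thread *)(spa_namespace_lock.sx_lock & ~0x1f)

The td_tid of the thread printed this way can then be matched against
"thread apply all bt" in the same kgdb session, or against procstat -kk
output, to see what the owner itself is blocked on.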