Subject: Re: ZFS hang in zfs_freebsd_rename
From: Steven Hartland <killing@multiplay.co.uk>
To: freebsd-fs@freebsd.org
Date: Tue, 15 Dec 2015 14:26:03 +0000
Message-ID: <567022FB.1010508@multiplay.co.uk>

Not a surprise on 9.x, unfortunately; try upgrading to 10.x.

On 15/12/2015 12:51, Bengt Ahlgren wrote:
> We have a server running 9.3-RELEASE which currently has two quite
> large zfs pools:
>
> NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> p1    18.1T  10.7T  7.38T    59%  1.00x  ONLINE  -
> p2    43.5T  29.1T  14.4T    66%  1.00x  ONLINE  -
>
> It has been running without any issues for some time now. Just now,
> however, processes started getting stuck, impossible to kill, when
> accessing a particular directory in the p2 pool. That pool is a
> 2x6-disk raidz2.
>
> One process is stuck in zfs_freebsd_rename, and other processes
> accessing that particular directory also get stuck. The system is now
> almost completely idle.
>
> Output from kgdb on the running system for that first process:
>
> Thread 651 (Thread 102157):
> #0  sched_switch (td=0xfffffe0b14059920, newtd=0xfffffe001633e920, flags=<value optimized out>)
>     at /usr/src/sys/kern/sched_ule.c:1904
> #1  0xffffffff808f4604 in mi_switch (flags=260, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:485
> #2  0xffffffff809308e2 in sleepq_wait (wchan=0xfffffe0135b60488, pri=96) at /usr/src/sys/kern/subr_sleepqueue.c:618
> #3  0xffffffff808cf922 in __lockmgr_args (lk=0xfffffe0135b60488, flags=524544, ilk=0xfffffe0135b604b8,
>     wmesg=<value optimized out>, pri=<value optimized out>, timo=<value optimized out>,
>     file=0xffffffff80f0d782 "/usr/src/sys/kern/vfs_subr.c", line=2337) at /usr/src/sys/kern/kern_lock.c:221
> #4  0xffffffff80977369 in vop_stdlock (ap=<value optimized out>) at lockmgr.h:97
> #5  0xffffffff80dd4a04 in VOP_LOCK1_APV (vop=0xffffffff813e8160, a=0xffffffa07f935520) at vnode_if.c:2052
> #6  0xffffffff80998c17 in _vn_lock (vp=0xfffffe0135b603f0, flags=524288,
>     file=0xffffffff80f0d782 "/usr/src/sys/kern/vfs_subr.c", line=2337) at vnode_if.h:859
> #7  0xffffffff8098b621 in vputx (vp=0xfffffe0135b603f0, func=1) at /usr/src/sys/kern/vfs_subr.c:2337
> #8  0xffffffff81ac7955 in zfs_rename_unlock (zlpp=0xffffffa07f9356b8)
>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:3609
> #9  0xffffffff81ac8c72 in zfs_freebsd_rename (ap=<value optimized out>)
>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:4039
> #10 0xffffffff80dd4f04 in VOP_RENAME_APV (vop=0xffffffff81b47d40, a=0xffffffa07f9358e0) at vnode_if.c:1522
> #11 0xffffffff80996bbd in kern_renameat (td=<value optimized out>, oldfd=<value optimized out>,
>     old=<value optimized out>, newfd=-100, new=0x1826a9af00 ,
>     pathseg=<value optimized out>) at vnode_if.h:636
> #12 0xffffffff80cd228a in amd64_syscall (td=0xfffffe0b14059920, traced=0) at subr_syscall.c:135
> #13 0xffffffff80cbc907 in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396
> ---Type <return> to continue, or q to quit---
> #14 0x0000000800cc1acc in ?? ()
> Previous frame inner to this frame (corrupt stack?)
>
> Full procstat -kk -a and kgdb "thread apply all bt" output can be
> found here:
>
> https://www.sics.se/~bengta/ZFS-hang/
>
> I don't know how to produce the "alltrace in ddb" that the
> instructions in the wiki ask for. The machine runs the GENERIC kernel,
> so perhaps it isn't possible?
>
> I checked "camcontrol tags" for all the disks in the pool - all have
> zeroes for dev_active, devq_queued and held.
>
> Is there anything else I can check while the machine is up? I will,
> however, need to restart it pretty soon.
>
> Bengt
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
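
The backtrace above is consistent with a vnode lock-ordering problem in
the 9.x ZFS rename path: the rename thread sleeps in __lockmgr_args
while zfs_rename_unlock drops and re-takes vnode locks, and every other
thread touching the same directory queues up behind it. As a rough
illustration of why a system wedges this way, here is a minimal
userland C sketch; it is illustrative only, not the actual ZFS code,
and the lock and function names are invented stand-ins:

    /*
     * Two threads take the same pair of locks in opposite order, the
     * classic lock-order reversal.  Compile: cc deadlock.c -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stand-ins for the vnode locks of two directories involved in
     * concurrent renames. */
    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *
    rename_a_to_b(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock_a);    /* holds A... */
            sleep(1);                       /* widen the race window */
            pthread_mutex_lock(&lock_b);    /* ...waits forever for B */
            pthread_mutex_unlock(&lock_b);
            pthread_mutex_unlock(&lock_a);
            return (NULL);
    }

    static void *
    rename_b_to_a(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock_b);    /* holds B... */
            sleep(1);
            pthread_mutex_lock(&lock_a);    /* ...waits forever for A */
            pthread_mutex_unlock(&lock_a);
            pthread_mutex_unlock(&lock_b);
            return (NULL);
    }

    int
    main(void)
    {
            pthread_t t1, t2;

            pthread_create(&t1, NULL, rename_a_to_b, NULL);
            pthread_create(&t2, NULL, rename_b_to_a, NULL);
            printf("both renames started; the joins never return\n");
            pthread_join(t1, NULL);         /* deadlocked */
            pthread_join(t2, NULL);
            return (0);
    }

Both threads end up sleeping forever in the equivalent of sleepq_wait,
and any later thread that wants either lock piles up behind them, which
matches the unkillable processes and almost completely idle system
described above. The 10.x rename path was reworked to avoid this, hence
the upgrade advice.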