From owner-freebsd-fs@FreeBSD.ORG  Sun Dec 14 21:08:14 2014
From: bugzilla-noreply@FreeBSD.org
To: freebsd-fs@FreeBSD.org
Date: Sun, 14 Dec 2014 21:08:13 +0000
Subject: Problem reports for freebsd-fs@FreeBSD.org that need special attention

To view an individual PR, use:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).

The following is a listing of current problems submitted by FreeBSD users,
which need special attention. These represent problem reports covering all
versions, including experimental development code and obsolete releases.

Status      |    Bug Id | Description
------------+-----------+---------------------------------------------------
Open        |    136470 | [nfs] Cannot mount / in read-only, over NFS
Open        |    139651 | [nfs] mount(8): read-only remount of NFS volume d
Open        |    144447 | [zfs] sharenfs fsunshare() & fsshare_main() non f

3 problems total for which you should take action.

From owner-freebsd-fs@FreeBSD.ORG  Mon Dec 15 09:07:45 2014
From: "Loïc Blot"
To: "Rick Macklem"
Cc: freebsd-fs@freebsd.org
Date: Mon, 15 Dec 2014 09:07:32 +0000
Subject: Re: High Kernel Load with nfsv4

Hi Rick,
after talking with my N+1, NFSv4 is required on our infrastructure. I tried
to upgrade the NFSv4+ZFS server from 9.3 to 10.1; I hope this will resolve
some issues...

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

10 December 2014 15:36, "Loïc Blot" wrote:
> Hi Rick,
> thanks for your suggestion.
> For my locking bug, rpc.lockd is stuck in the rpcrecv state on the server.
> kill -9 doesn't affect the process, it's blocked.... (State: Ds)
>
> For the performance:
>
> NFSv3: 60Mbps
> NFSv4: 45Mbps
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> 10 December 2014 13:56, "Rick Macklem" wrote:
>
>> Loic Blot wrote:
>>
>>> Hi Rick,
>>> I'm trying NFSv3.
>>> Some jails are starting very well, but now I have an issue with lockd
>>> after some minutes:
>>>
>>> nfs server 10.10.X.8:/jails: lockd not responding
>>> nfs server 10.10.X.8:/jails lockd is alive again
>>>
>>> I looked at mbuf, but it seems there is no problem.
>>
>> Well, if you need locks to be visible across multiple clients, then
>> I'm afraid you are stuck with using NFSv4 and the performance you get
>> from it. (There is no way to do file handle affinity for NFSv4 because
>> the read and write ops are buried in the compound RPC and not easily
>> recognized.)
>>
>> If the locks don't need to be visible across multiple clients, I'd
>> suggest trying the "nolockd" option with nfsv3.
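
A rough sketch of what the "nolockd" suggestion above could look like on a
client, assuming the same 10.10.X.8:/jails export; the /jails mount point
and the 32k rsize/wsize values are illustrative, not taken from the poster's
actual configuration:

  # NFSv3 with local-only locking and 32k transfers (placeholder mount point)
  mount -t nfs -o nfsv3,tcp,nolockd,rsize=32768,wsize=32768 10.10.X.8:/jails /jails

  # or, equivalently, in /etc/fstab:
  # 10.10.X.8:/jails  /jails  nfs  rw,nfsv3,tcp,nolockd,rsize=32768,wsize=32768  0  0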
>>
>>> Here is my rc.conf on the server:
>>>
>>> nfs_server_enable="YES"
>>> nfsv4_server_enable="YES"
>>> nfsuserd_enable="YES"
>>> nfsd_server_flags="-u -t -n 256"
>>> mountd_enable="YES"
>>> mountd_flags="-r"
>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>> rpcbind_enable="YES"
>>> rpc_lockd_enable="YES"
>>> rpc_statd_enable="YES"
>>>
>>> Here is the client:
>>>
>>> nfsuserd_enable="YES"
>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>> nfscbd_enable="YES"
>>> rpc_lockd_enable="YES"
>>> rpc_statd_enable="YES"
>>>
>>> Have you got an idea?
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> 9 December 2014 04:31, "Rick Macklem" <rmacklem@uoguelph.ca> wrote:
>>>> Loic Blot wrote:
>>>>
>>>>> Hi Rick,
>>>>>
>>>>> I waited 3 hours (no lag at jail launch) and now I do: sysrc
>>>>> memcached_flags="-v -m 512"
>>>>> The command was very, very slow...
>>>>>
>>>>> Here is a dd over NFS:
>>>>>
>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
>>>>
>>>> Can you try the same read using an NFSv3 mount?
>>>> (If it runs much faster, you have probably been bitten by the ZFS
>>>> "sequential vs random" read heuristic, which I've been told thinks
>>>> NFS is doing "random" reads without file handle affinity. File
>>>> handle affinity is very hard to do for NFSv4, so it isn't done.)
>>
>> I was actually suggesting that you try the "dd" over nfsv3 to see how
>> the performance compares with nfsv4. If you do that, please post the
>> comparable results.
>>
>> Someday I would like to try and get ZFS's sequential vs random read
>> heuristic modified, and any info on what difference in performance that
>> might make for NFS would be useful.
>>
>> rick
>>
>>>> rick
>>>>
>>>>> This is quite slow...
>>>>>
>>>>> You can find some nfsstat below (the command isn't finished yet).
>>>>>
>>>>> nfsstat -c -w 1
>>>>>
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 0 0 0 0 0 16 0
>>>>> 2 0 0 0 0 0 17 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 4 0 0 0 0 4 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 0 0 0 0 0 3 0
>>>>> 0 0 0 0 0 0 3 0
>>>>> 37 10 0 8 0 0 14 1
>>>>> 18 16 0 4 1 2 4 0
>>>>> 78 91 0 82 6 12 30 0
>>>>> 19 18 0 2 2 4 2 0
>>>>> 0 0 0 0 2 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 1 0 0 0 0 1 0
>>>>> 4 6 0 0 6 0 3 0
>>>>> 2 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 1 0 0 0 0 0 0 0
>>>>> 0 0 0 0 1 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 6 108 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 98 54 0 86 11 0 25 0
>>>>> 36 24 0 39 25 0 10 1
>>>>> 67 8 0 63 63 0 41 0
>>>>> 34 0 0 35 34 0 0 0
>>>>> 75 0 0 75 77 0 0 0
>>>>> 34 0 0 35 35 0 0 0
>>>>> 75 0 0 74 76 0 0 0
>>>>> 33 0 0 34 33 0 0 0
>>>>> 0 0 0 0 5 0 0 0
>>>>> 0 0 0 0 0 0 6 0
>>>>> 11 0 0 0 0 0 11 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 17 0 0 0 0 1 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 4 5 0 0 0 0 12 0
>>>>> 2 0 0 0 0 0 26 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 4 0 0 0 0 4 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 0 0 0 0 0 2 0
>>>>> 2 0 0 0 0 0 24 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 0 0 0 0 0 7 0
>>>>> 2 1 0 0 0 0 1 0
>>>>> 0 0 0 0 2 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 6 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 6 0 0 0 0 3 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 2 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 71 0 0 0 0 0 0
>>>>> 0 1 0 0 0 0 0 0
>>>>> 2 36 0 0 0 0 1 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 1 0 0 0 0 0 1 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 79 6 0 79 79 0 2 0
>>>>> 25 0 0 25 26 0 6 0
>>>>> 43 18 0 39 46 0 23 0
>>>>> 36 0 0 36 36 0 31 0
>>>>> 68 1 0 66 68 0 0 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 36 0 0 36 36 0 0 0
>>>>> 48 0 0 48 49 0 0 0
>>>>> 20 0 0 20 20 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 3 14 0 1 0 0 11 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 4 0 0 0 0 4 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 22 0 0 0 0 16 0
>>>>> 2 0 0 0 0 0 23 0
>>>>>
>>>>> Regards,
>>>>>
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>>
>>>>> 8 December 2014 09:36, "Loïc Blot" wrote:
>>>>>> Hi Rick,
>>>>>> I stopped the jails this week-end and started them this morning;
>>>>>> I'll give you some stats this week.
>>>>>>
>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
>>>>>>
>>>>>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,
>>>>>> acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,
>>>>>> readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,
>>>>>> retrans=2147483647
>>>>>>
>>>>>> On the server side my disks are on a RAID controller which shows a
>>>>>> 512b volume, and write performance is very honest
>>>>>> (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps).
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Loïc Blot,
>>>>>> UNIX Systems, Network and Security Engineer
>>>>>> http://www.unix-experience.fr
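
A minimal sketch of the NFSv3-versus-NFSv4 read comparison requested earlier
in the thread; the mount points are placeholders, and /jails/test.dd is simply
the file created by the dd write test quoted above:

  # mount the same export once per protocol version (placeholder mount points)
  mount -t nfs -o nfsv3,tcp 10.10.X.8:/jails /mnt/v3
  mount -t nfs -o nfsv4,tcp 10.10.X.8:/jails /mnt/v4

  # read the same file back through each mount and compare the reported rates
  dd if=/mnt/v3/test.dd of=/dev/null bs=1m
  dd if=/mnt/v4/test.dd of=/dev/null bs=1m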
>>>>>>
>>>>>> 5 December 2014 15:14, "Rick Macklem" wrote:
>>>>>>
>>>>>>> Loic Blot wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>> I'm trying to create a virtualisation environment based on jails.
>>>>>>>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 which
>>>>>>>> exports an NFSv4 volume. This NFSv4 volume was mounted on a big
>>>>>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports, but only 1 was
>>>>>>>> used at this time).
>>>>>>>>
>>>>>>>> The problem is simple: my hypervisor runs 6 jails (using roughly 1%
>>>>>>>> CPU, 10GB RAM and less than 1MB bandwidth) and works fine at start,
>>>>>>>> but the system slows down and after 2-3 days becomes unusable. When
>>>>>>>> I look at top I see 80-100% on system, and commands are very, very
>>>>>>>> slow. Many processes are tagged with nfs_cl*.
>>>>>>>
>>>>>>> To be honest, I would expect the slowness to be because of slow
>>>>>>> response from the NFSv4 server, but if you do:
>>>>>>> # ps axHl
>>>>>>> on a client when it is slow and post that, it would give us some
>>>>>>> more information on where the client side processes are sitting.
>>>>>>> If you also do something like:
>>>>>>> # nfsstat -c -w 1
>>>>>>> and let it run for a while, that should show you how many RPCs are
>>>>>>> being done and which ones.
>>>>>>>
>>>>>>> # nfsstat -m
>>>>>>> will show you what your mount is actually using.
>>>>>>> The only mount option I can suggest trying is
>>>>>>> "rsize=32768,wsize=32768", since some network environments have
>>>>>>> difficulties with 64K.
>>>>>>>
>>>>>>> There are a few things you can try on the NFSv4 server side, if it
>>>>>>> appears that the clients are generating a large RPC load.
>>>>>>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
>>>>>>> - if the server is seeing a large write RPC load, then
>>>>>>>   "sync=disabled" might help, although it does run a risk of data
>>>>>>>   loss when the server crashes.
>>>>>>> Then there are a couple of other ZFS related things (I'm not a ZFS
>>>>>>> guy, but these have shown up on the mailing lists).
>>>>>>> - make sure your volumes are 4K aligned and ashift=12 (in case a
>>>>>>>   drive that uses 4K sectors is pretending to be 512byte sectored)
>>>>>>> - never run over 70-80% full if write performance is an issue
>>>>>>> - use a zil on an SSD with good write performance
>>>>>>>
>>>>>>> The only NFSv4 thing I can tell you is that it is known that ZFS's
>>>>>>> algorithm for determining sequential vs random I/O fails for NFSv4
>>>>>>> during writing, and this can be a performance hit. The only
>>>>>>> workaround is to use NFSv3 mounts, since file handle affinity
>>>>>>> apparently fixes the problem and this is only done for NFSv3.
>>>>>>>
>>>>>>> rick
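
As a sketch, the server-side knobs suggested above would be set along these
lines; "tank/jails" and "ada4" are placeholder pool, dataset and device names,
not the poster's actual ones:

  # disable the DRC cache for TCP
  sysctl vfs.nfsd.cachetcp=0

  # accept the data-loss risk described above in exchange for write speed
  zfs set sync=disabled tank/jails

  # check the pool's ashift, and add an SSD log device (ZIL) if appropriate
  zdb -C tank | grep ashift
  zpool add tank log ada4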
>>>>>>>
>>>>>>>> I saw that there are TSO issues with igb, so I'm trying to disable
>>>>>>>> it with sysctl, but the situation wasn't solved.
>>>>>>>>
>>>>>>>> Has someone got ideas? I can give you more information if you need.
>>>>>>>>
>>>>>>>> Thanks in advance.
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Loïc Blot,
>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>> http://www.unix-experience.fr
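
The igb/TSO workaround mentioned at the end of the message above is not
spelled out in the thread; a plausible sketch, with "igb0" as a placeholder
interface name, would be:

  # disable TSO on the suspect interface...
  ifconfig igb0 -tso
  # ...or globally for TCP
  sysctl net.inet.tcp.tso=0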

From owner-freebsd-fs@FreeBSD.ORG  Mon Dec 15 12:29:34 2014
From: "Loïc Blot"
To: "Rick Macklem"
Cc: freebsd-fs@freebsd.org
Date: Mon, 15 Dec 2014 12:29:27 +0000
Subject: Re: High Kernel Load with nfsv4

Hmmm...
now I'm experiencing a deadlock.

 0  918  915  0  21  0  12352  3372 zfs  D  -  1:48.64 nfsd: server (nfsd)

The only way out was to reboot the server, but after rebooting the deadlock
arrives a second time when I start my jails over NFS.

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr
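
For reference, a state snapshot like the ps line above, plus the per-thread
kernel stacks shown in the next message, can be collected with something
along these lines (a sketch, not necessarily the exact commands used here):

  # wait channel ("zfs") and state ("D") of the nfsd processes
  ps axl | grep nfsd

  # kernel stack of every nfsd thread
  procstat -kk $(pgrep nfsd)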
15 December 2014 10:07, "Loïc Blot" wrote:

> Hi Rick,
> after talking with my N+1, NFSv4 is required on our infrastructure. I tried
> to upgrade the NFSv4+ZFS server from 9.3 to 10.1; I hope this will resolve
> some issues...

From owner-freebsd-fs@FreeBSD.ORG  Mon Dec 15 12:34:40 2014
From: "Loïc Blot"
To: "Rick Macklem"
Cc: freebsd-fs@freebsd.org
Date: Mon, 15 Dec 2014 12:34:32 +0000
Subject: Re: High Kernel Load with nfsv4

For more information, here is procstat -kk on nfsd; if
you need more hot data, tell me.

Regards,

procstat -kk, PID 918 (nfsd), 96 threads; identical kernel stacks grouped:

nfsd: master, TID 100529 (1 thread) -- waiting for the vnode lock in zfs_fhtovp():
  mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
  vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
  nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554
  svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107
  sys_nfssvc+0x9c amd64_syscall+0x351

nfsd: service, TIDs 100564-100570 (7 threads) -- idle in the RPC thread pool:
  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
  svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe

nfsd: service, TIDs 100574-100607, 100609, 100619-100620, 100622, 100624-100658
(73 threads) -- waiting for the vnode lock in zfs_fhtovp():
  mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
  vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
  nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554
  svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe

nfsd: service, TIDs 100571, 100573, 100611-100618, 100621, 100623 (12 threads)
-- waiting in nfsv4_lock() before processing an RPC:
  mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
  nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
  svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe

nfsd: service, TID 100572 (1 thread) -- waiting in nfsv4_lock() inside SetClientID:
  mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
  nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
  nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
  fork_exit+0x9a fork_trampoline+0xe

nfsd: service, TID 100608 (1 thread) -- waiting in nfsv4_lock() inside a Lock op:
  mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
  nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
  nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
  svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe

nfsd: service, TID 100610 (1 thread) -- waiting for the vnode lock in
nfsvno_advlock() inside a LockU op:
  mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
  vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 nfsvno_advlock+0x119
  nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6
  nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
  fork_exit+0x9a fork_trampoline+0xe

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

15 December 2014 13:29, "Loïc Blot" wrote:

> Hmmm...
> now I'm experiencing a deadlock.
>
> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)
>
> The only way out was to reboot the server, but after rebooting the deadlock
> arrives a second time when I start my jails over NFS.
>> (There is no way to do file handle affinity for NFSv4 because
>> the read and write ops are buried in the compound RPC and not easily
>> recognized.)
>>
>> If the locks don't need to be visible across multiple clients, I'd
>> suggest trying the "nolockd" option with nfsv3.
>>
>>> Here is my rc.conf on server:
>>>
>>> nfs_server_enable="YES"
>>> nfsv4_server_enable="YES"
>>> nfsuserd_enable="YES"
>>> nfsd_server_flags="-u -t -n 256"
>>> mountd_enable="YES"
>>> mountd_flags="-r"
>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>> rpcbind_enable="YES"
>>> rpc_lockd_enable="YES"
>>> rpc_statd_enable="YES"
>>>
>>> Here is the client:
>>>
>>> nfsuserd_enable="YES"
>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>> nfscbd_enable="YES"
>>> rpc_lockd_enable="YES"
>>> rpc_statd_enable="YES"
>>>
>>> Have you got an idea ?
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> 9 December 2014 04:31, "Rick Macklem" wrote:
>>>> Loic Blot wrote:
>>>>
>>>>> Hi rick,
>>>>>
>>>>> I waited 3 hours (no lag at jail launch) and now I do: sysrc
>>>>> memcached_flags="-v -m 512"
>>>>> Command was very very slow...
>>>>>
>>>>> Here is a dd over NFS:
>>>>>
>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
>>>>
>>>> Can you try the same read using an NFSv3 mount?
>>>> (If it runs much faster, you have probably been bitten by the ZFS
>>>> "sequential vs random" read heuristic which I've been told things
>>>> NFS is doing "random" reads without file handle affinity. File
>>>> handle affinity is very hard to do for NFSv4, so it isn't done.)
>>
>> I was actually suggesting that you try the "dd" over nfsv3 to see how
>> the performance compared with nfsv4. If you do that, please post the
>> comparable results.
>>
>> Someday I would like to try and get ZFS's sequential vs random read
>> heuristic modified and any info on what difference in performance that
>> might make for NFS would be useful.
>>
>> rick
>>
>>>> rick
>>>>
>>>>> This is quite slow...
>>>>>
>>>>> You can found some nfsstat below (command isn't finished yet)
>>>>>
>>>>> nfsstat -c -w 1
>>>>>
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> [...]
>>>>>
>>>>> Regards,
>>>>>
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>>
>>>>> 8 December 2014 09:36, "Loïc Blot" wrote:
>>>>>> Hi Rick,
>>>>>> I stopped the jails this week-end and started it this morning,
>>>>>> i'll give you some stats this week.
>>>>>>
>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks)
>
> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
>
> On server side my disks are on a raid controller which show a 512b
> volume and write performances
> are very honest (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps)
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> 5 December 2014 15:14, "Rick Macklem" wrote:
>
>> Loic Blot wrote:
>>
>>> Hi,
>>> i'm trying to create a virtualisation environment based on jails.
>>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 which
>>> export a NFSv4 volume. This NFSv4 volume was mounted on a big
>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but only 1 was
>>> used at this time).
>>>
>>> The problem is simple, my hypervisors runs 6 jails (used 1% cpu and
>>> 10GB RAM approximatively and less than 1MB bandwidth) and works
>>> fine at start but the system slows down and after 2-3 days become
>>> unusable. When i look at top command i see 80-100% on system and
>>> commands are very very slow. Many process are tagged with nfs_cl*.
>>
>> To be honest, I would expect the slowness to be because of slow response
>> from the NFSv4 server, but if you do:
>> # ps axHl
>> on a client when it is slow and post that, it would give us some more
>> information on where the client side processes are sitting.
>> If you also do something like:
>> # nfsstat -c -w 1
>> and let it run for a while, that should show you how many RPCs are
>> being done and which ones.
>>
>> # nfsstat -m
>> will show you what your mount is actually using.
>> The only mount option I can suggest trying is "rsize=32768,wsize=32768",
>> since some network environments have difficulties with 64K.
>>
>> There are a few things you can try on the NFSv4 server side, if it
>> appears that the clients are generating a large RPC load.
>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
>> - If the server is seeing a large write RPC load, then "sync=disabled"
>> might help, although it does run a risk of data loss when the server
>> crashes.
>> Then there are a couple of other ZFS related things (I'm not a ZFS guy,
>> but these have shown up on the mailing lists).
>> - make sure your volumes are 4K aligned and ashift=12 (in case a drive
>> that uses 4K sectors is pretending to be 512byte sectored)
>> - never run over 70-80% full if write performance is an issue
>> - use a zil on an SSD with good write performance
>>
>> The only NFSv4 thing I can tell you is that it is known that ZFS's
>> algorithm for determining sequential vs random I/O fails for NFSv4
>> during writing and this can be a performance hit. The only workaround
>> is to use NFSv3 mounts, since file handle affinity apparently fixes
>> the problem and this is only done for NFSv3.
>>
>> rick
>>
>>> I saw that there are TSO issues with igb then i'm trying to disable
>>> it with sysctl but the situation wasn't solved.
>>>
>>> Someone has got ideas ?
>>> I can give you more informations if you need.
>>>
>>> Thanks in advance.
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to
>>> "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 14:17:58 2014
Date: Mon, 15 Dec 2014 09:17:53 -0500 (EST)
From: Rick Macklem
To: Loïc Blot
Cc: freebsd-fs@freebsd.org
Message-ID: <1215617347.12668398.1418653073454.JavaMail.root@uoguelph.ca>
Subject: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
List-Id: Filesystems

Loic Blot wrote:
> For more informations, here is procstat -kk on nfsd, if you need more
> hot datas, tell me.
>
>
> Regards, PID    TID COMM             TDNAME           KSTACK
> 918 100529 nfsd             nfsd: master     mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c amd64_syscall+0x351
Well, most of the threads are stuck like this one, waiting for a vnode
lock in ZFS. All of them appear to be in zfs_fhtovp(). I'm not a ZFS guy,
so I can't help much. I'll try changing the subject line to include ZFS
vnode lock, so maybe the ZFS guys will take a look.
The only thing I've seen suggested is trying:
sysctl vfs.lookup_shared=0
to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't obey the
vnode locking rules for lookup and rename, according to the posting I saw.
I've added a couple of comments about the other threads below, but they
are all either waiting for an RPC request or waiting for the threads
stuck on the ZFS vnode lock to complete.
rick
> 918 100564 nfsd             nfsd: service    mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
Fyi, this thread is just waiting for an RPC to arrive. (Normal)
> 918 100565 nfsd             nfsd: service    mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100566 nfsd             nfsd: service    mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100567 nfsd             nfsd: service    mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100568 nfsd             nfsd: service    mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100569 nfsd             nfsd: service    mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100570 nfsd             nfsd: service    mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100571 nfsd             nfsd: service    mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100572 nfsd             nfsd: service    mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
This one (and a few others) are waiting for the nfsv4_lock. This happens
because other threads are stuck with RPCs in progress. (ie. The ones
waiting on the vnode lock in zfs_fhtovp().)
For these, the RPC needs to lock out other threads to do the operation,
so it waits for the nfsv4_lock() which can exclusively lock the NFSv4
data structures once all other nfsd threads complete their RPCs in
progress.
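A minimal sketch of how the suggestion above could be applied and checked on
the server (the sysctl name is the one mentioned in the message; the nfsd PID
918 comes from the listing and will differ on another system):

  # sysctl vfs.lookup_shared                          # show the current value
  # sysctl vfs.lookup_shared=0                        # disable shared vop_lookup()s on the running kernel
  # echo 'vfs.lookup_shared=0' >> /etc/sysctl.conf    # keep the setting across reboots
  # procstat -kk 918 | grep -c zfs_fhtovp             # count nfsd threads still sleeping in zfs_fhtovp()

The procstat line is only a convenience for watching whether the number of
blocked threads drops after the change.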
> 918 100573 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe Same as above. > 918 100574 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100575 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100576 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100577 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100578 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100579 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100580 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100581 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100582 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100583 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a 
fork_trampoline+0xe > 918 100584 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100585 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100586 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100587 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100588 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100589 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100590 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100591 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100592 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100593 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100594 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 
nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100595 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100596 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100597 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100598 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100599 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100600 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100601 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100602 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100603 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100604 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100605 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c 
VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100606 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100607 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe Lots more waiting for the ZFS vnode lock in zfs_fhtovp(). > 918 100608 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100609 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100610 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100611 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100612 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100613 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100614 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100615 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100616 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100617 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 
918 100618 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100619 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100620 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100621 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100622 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100623 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100624 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100625 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100626 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100627 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100628 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100629 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 
nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100630 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100631 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100632 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100633 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100634 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100635 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100636 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100637 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100638 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100639 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100640 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c 
VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100641 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100642 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100643 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100644 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100645 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100646 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100647 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100648 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100649 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100650 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100651 nfsd nfsd: service 
mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100652 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100653 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100654 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100655 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100656 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100657 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100658 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 15 d=C3=A9cembre 2014 13:29 "Lo=C3=AFc Blot" a > =C3=A9crit: > > Hmmm... > > now i'm experiencing a deadlock. > >=20 > > 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd) > >=20 > > the only issue was to reboot the server, but after rebooting > > deadlock arrives a second time when i > > start my jails over NFS. > >=20 > > Regards, > >=20 > > Lo=C3=AFc Blot, > > UNIX Systems, Network and Security Engineer > > http://www.unix-experience.fr > >=20 > > 15 d=C3=A9cembre 2014 10:07 "Lo=C3=AFc Blot" a > > =C3=A9crit: > >=20 > > Hi Rick, > > after talking with my N+1, NFSv4 is required on our infrastructure. > > I tried to upgrade NFSv4+ZFS > > server from 9.3 to 10.1, i hope this will resolve some issues... 
> >=20 > > Regards, > >=20 > > Lo=C3=AFc Blot, > > UNIX Systems, Network and Security Engineer > > http://www.unix-experience.fr > >=20 > > 10 d=C3=A9cembre 2014 15:36 "Lo=C3=AFc Blot" a > > =C3=A9crit: > >=20 > > Hi Rick, > > thanks for your suggestion. > > For my locking bug, rpc.lockd is stucked in rpcrecv state on the > > server. kill -9 doesn't affect the > > process, it's blocked.... (State: Ds) > >=20 > > for the performances > >=20 > > NFSv3: 60Mbps > > NFSv4: 45Mbps > > Regards, > >=20 > > Lo=C3=AFc Blot, > > UNIX Systems, Network and Security Engineer > > http://www.unix-experience.fr > >=20 > > 10 d=C3=A9cembre 2014 13:56 "Rick Macklem" a > > =C3=A9crit: > >=20 > >> Loic Blot wrote: > >>=20 > >>> Hi Rick, > >>> I'm trying NFSv3. > >>> Some jails are starting very well but now i have an issue with > >>> lockd > >>> after some minutes: > >>>=20 > >>> nfs server 10.10.X.8:/jails: lockd not responding > >>> nfs server 10.10.X.8:/jails lockd is alive again > >>>=20 > >>> I look at mbuf, but i seems there is no problem. > >>=20 > >> Well, if you need locks to be visible across multiple clients, > >> then > >> I'm afraid you are stuck with using NFSv4 and the performance you > >> get > >> from it. (There is no way to do file handle affinity for NFSv4 > >> because > >> the read and write ops are buried in the compound RPC and not > >> easily > >> recognized.) > >>=20 > >> If the locks don't need to be visible across multiple clients, I'd > >> suggest trying the "nolockd" option with nfsv3. > >>=20 > >>> Here is my rc.conf on server: > >>>=20 > >>> nfs_server_enable=3D"YES" > >>> nfsv4_server_enable=3D"YES" > >>> nfsuserd_enable=3D"YES" > >>> nfsd_server_flags=3D"-u -t -n 256" > >>> mountd_enable=3D"YES" > >>> mountd_flags=3D"-r" > >>> nfsuserd_flags=3D"-usertimeout 0 -force 20" > >>> rpcbind_enable=3D"YES" > >>> rpc_lockd_enable=3D"YES" > >>> rpc_statd_enable=3D"YES" > >>>=20 > >>> Here is the client: > >>>=20 > >>> nfsuserd_enable=3D"YES" > >>> nfsuserd_flags=3D"-usertimeout 0 -force 20" > >>> nfscbd_enable=3D"YES" > >>> rpc_lockd_enable=3D"YES" > >>> rpc_statd_enable=3D"YES" > >>>=20 > >>> Have you got an idea ? > >>>=20 > >>> Regards, > >>>=20 > >>> Lo=C3=AFc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>>=20 > >>> 9 d=C3=A9cembre 2014 04:31 "Rick Macklem" a > >>> =C3=A9crit: > >>>> Loic Blot wrote: > >>>>=20 > >>>>> Hi rick, > >>>>>=20 > >>>>> I waited 3 hours (no lag at jail launch) and now I do: sysrc > >>>>> memcached_flags=3D"-v -m 512" > >>>>> Command was very very slow... > >>>>>=20 > >>>>> Here is a dd over NFS: > >>>>>=20 > >>>>> 601062912 bytes transferred in 21.060679 secs (28539579 > >>>>> bytes/sec) > >>>>=20 > >>>> Can you try the same read using an NFSv3 mount? > >>>> (If it runs much faster, you have probably been bitten by the > >>>> ZFS > >>>> "sequential vs random" read heuristic which I've been told > >>>> things > >>>> NFS is doing "random" reads without file handle affinity. File > >>>> handle affinity is very hard to do for NFSv4, so it isn't done.) > >>=20 > >> I was actually suggesting that you try the "dd" over nfsv3 to see > >> how > >> the performance compared with nfsv4. If you do that, please post > >> the > >> comparable results. > >>=20 > >> Someday I would like to try and get ZFS's sequential vs random > >> read > >> heuristic modified and any info on what difference in performance > >> that > >> might make for NFS would be useful. 
> >>=20 > >> rick > >>=20 > >>>> rick > >>>>=20 > >>>>> This is quite slow... > >>>>>=20 > >>>>> You can found some nfsstat below (command isn't finished yet) > >>>>>=20 > >>>>> nfsstat -c -w 1 > >>>>>=20 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 16 0 > >>>>> 2 0 0 0 0 0 17 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 4 0 0 0 0 4 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 3 0 > >>>>> 0 0 0 0 0 0 3 0 > >>>>> 37 10 0 8 0 0 14 1 > >>>>> 18 16 0 4 1 2 4 0 > >>>>> 78 91 0 82 6 12 30 0 > >>>>> 19 18 0 2 2 4 2 0 > >>>>> 0 0 0 0 2 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 1 0 0 0 0 1 0 > >>>>> 4 6 0 0 6 0 3 0 > >>>>> 2 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 1 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 1 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 6 108 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 98 54 0 86 11 0 25 0 > >>>>> 36 24 0 39 25 0 10 1 > >>>>> 67 8 0 63 63 0 41 0 > >>>>> 34 0 0 35 34 0 0 0 > >>>>> 75 0 0 75 77 0 0 0 > >>>>> 34 0 0 35 35 0 0 0 > >>>>> 75 0 0 74 76 0 0 0 > >>>>> 33 0 0 34 33 0 0 0 > >>>>> 0 0 0 0 5 0 0 0 > >>>>> 0 0 0 0 0 0 6 0 > >>>>> 11 0 0 0 0 0 11 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 17 0 0 0 0 1 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 4 5 0 0 0 0 12 0 > >>>>> 2 0 0 0 0 0 26 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 4 0 0 0 0 4 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 2 0 > >>>>> 2 0 0 0 0 0 24 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 7 0 > >>>>> 2 1 0 0 0 0 1 0 > >>>>> 0 0 0 0 2 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 6 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 6 0 0 0 0 3 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 2 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 71 0 0 0 0 0 0 > >>>>> 0 1 0 0 0 0 0 0 > >>>>> 2 36 0 0 0 0 1 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 1 0 0 0 0 0 1 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 79 6 0 79 79 0 2 0 > >>>>> 25 0 0 25 26 0 6 0 > >>>>> 43 18 0 39 46 0 23 0 > >>>>> 36 0 0 36 36 0 31 0 > >>>>> 68 1 0 66 68 0 0 0 > >>>>> GtAttr Lookup Rdlink Read 
Write Rename Access Rddir > >>>>> 36 0 0 36 36 0 0 0 > >>>>> 48 0 0 48 49 0 0 0 > >>>>> 20 0 0 20 20 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 3 14 0 1 0 0 11 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 4 0 0 0 0 4 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 22 0 0 0 0 16 0 > >>>>> 2 0 0 0 0 0 23 0 > >>>>> > >>>>> Regards, > >>>>> > >>>>> Loïc Blot, > >>>>> UNIX Systems, Network and Security Engineer > >>>>> http://www.unix-experience.fr > >>>>> > >>>>> 8 December 2014 09:36 "Loïc Blot" > >>>>> wrote: > >>>>>> Hi Rick, > >>>>>> I stopped the jails this week-end and started them this morning; > >>>>>> I'll > >>>>>> give you some stats this week. > >>>>>> > >>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks) > > > > nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647 > > > > On the server side my disks are on a raid controller which shows a > > 512b > > volume, and write performance > > is very honest (dd if=/dev/zero of=/jails/test.dd bs=4096 > > count=100000000 => 450MBps) > > > > Regards, > > > > Loïc Blot, > > UNIX Systems, Network and Security Engineer > > http://www.unix-experience.fr > > > > 5 December 2014 15:14 "Rick Macklem" > > wrote: > > > >> Loic Blot wrote: > >> > >>> Hi, > >>> I'm trying to create a virtualisation environment based on > >>> jails. > >>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 > >>> which > >>> exports an NFSv4 volume. This NFSv4 volume was mounted on a big > >>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports, but only 1 > >>> was > >>> used at this time). > >>> > >>> The problem is simple: my hypervisor runs 6 jails (using approximately 1% cpu > >>> and > >>> 10GB RAM and less than 1MB bandwidth) and works > >>> fine at start, but the system slows down and after 2-3 days > >>> becomes > >>> unusable. When I look at the top command I see 80-100% on system > >>> and > >>> commands are very very slow. Many processes are tagged with > >>> nfs_cl*. > >> > >> To be honest, I would expect the slowness to be because of slow > >> response > >> from the NFSv4 server, but if you do: > >> # ps axHl > >> on a client when it is slow and post that, it would give us some > >> more > >> information on where the client side processes are sitting. > >> If you also do something like: > >> # nfsstat -c -w 1 > >> and let it run for a while, that should show you how many RPCs > >> are > >> being done and which ones. > >> > >> # nfsstat -m > >> will show you what your mount is actually using. > >> The only mount option I can suggest trying is > >> "rsize=32768,wsize=32768", > >> since some network environments have difficulties with 64K. > >> > >> There are a few things you can try on the NFSv4 server side, if > >> it > >> appears > >> that the clients are generating a large RPC load. > >> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0 > >> - If the server is seeing a large write RPC load, then > >> "sync=disabled" > >> might help, although it does run a risk of data loss when the > >> server > >> crashes. > >> Then there are a couple of other ZFS related things (I'm not a > >> ZFS > >> guy, > >> but these have shown up on the mailing lists).
> >> - make sure your volumes are 4K aligned and ashift=12 (in case a > >> drive > >> that uses 4K sectors is pretending to be 512byte sectored) > >> - never run over 70-80% full if write performance is an issue > >> - use a zil on an SSD with good write performance > >> > >> The only NFSv4 thing I can tell you is that it is known that > >> ZFS's > >> algorithm for determining sequential vs random I/O fails for > >> NFSv4 > >> during writing and this can be a performance hit. The only > >> workaround > >> is to use NFSv3 mounts, since file handle affinity apparently > >> fixes > >> the problem and this is only done for NFSv3. > >> > >> rick > >> > >>> I saw that there are TSO issues with igb, so I'm trying to > >>> disable > >>> it with sysctl, but the situation wasn't solved. > >>> > >>> Someone has got ideas? I can give you more information if you > >>> need. > >>> > >>> Thanks in advance. > >>> Regards, > >>> > >>> Loïc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" > > > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to > > "freebsd-fs-unsubscribe@freebsd.org" > > > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to > > "freebsd-fs-unsubscribe@freebsd.org" > > > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to > > "freebsd-fs-unsubscribe@freebsd.org" > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to > > "freebsd-fs-unsubscribe@freebsd.org" > > From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 20:48:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D8188223 for ; Mon, 15 Dec 2014 20:48:47 +0000 (UTC) Received: from thebighonker.lerctr.org (thebighonker.lerctr.org [IPv6:2001:470:1f0f:3ad:223:7dff:fe9e:6e8a]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "thebighonker.lerctr.org", Issuer "COMODO RSA Domain Validation Secure Server CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id AAC31867 for ; Mon, 15 Dec 2014 20:48:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=Content-Type:MIME-Version:Message-ID:Subject:To:From:Date; bh=vEERstw92f4OUoD1vhB5duARxfnlrU4OiwSHvbxroUc=; b=EEpwQLXoflp5KAlhHbeVHbUBYaIv4r2AUio0X3lBirDHFneyVCa2agjsM5zMmWaInn0Qq6iypMnh15ZNtrfaQY03zJ3rD9FGu1+KWhA8eVXtloDBRXm6D57WrCHXdNNhbpFXkvzgCrcTciCcDlXdtFaZFloI1eYYyMCGvwAE9Yk=; Received: from 104-54-221-134.lightspeed.austtx.sbcglobal.net ([104.54.221.134]:21236 helo=borg.lerctr.org) by thebighonker.lerctr.org with esmtpsa (TLSv1.2:DHE-RSA-AES256-GCM-SHA384:256) (Exim 4.84
(FreeBSD)) (envelope-from ) id 1Y0cZR-0008wa-Ex for freebsd-fs@freebsd.org; Mon, 15 Dec 2014 14:48:46 -0600 Date: Mon, 15 Dec 2014 14:48:34 -0600 From: Larry Rosenman To: freebsd-fs@freebsd.org Subject: zfs diff without allow as user gets coredump? Message-ID: <20141215204833.GA2858@borg.lerctr.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Score: -2.9 (--) X-LERCTR-Spam-Score: -2.9 (--) X-Spam-Report: SpamScore (-2.9/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, TVD_RCVD_IP=0.001 X-LERCTR-Spam-Report: SpamScore (-2.9/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, TVD_RCVD_IP=0.001 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2014 20:48:47 -0000 Should we get a better message if you try to do a zfs diff as a normal user, and diff hasn't been allowed? You currently get: borg.lerctr.org /home/ler $ zfs diff zroot/home/ler@zfs-auto-snap_hourly-2014-12-15-12h00 internal error: Invalid argument Abort trap (core dumped) borg.lerctr.org /home/ler $ Just curious. -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 E-Mail: ler@lerctr.org US Mail: 108 Turvey Cove, Hutto, TX 78634-5688 From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 20:59:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E0360413 for ; Mon, 15 Dec 2014 20:59:47 +0000 (UTC) Received: from thebighonker.lerctr.org (thebighonker.lerctr.org [IPv6:2001:470:1f0f:3ad:223:7dff:fe9e:6e8a]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "thebighonker.lerctr.org", Issuer "COMODO RSA Domain Validation Secure Server CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B52B1971 for ; Mon, 15 Dec 2014 20:59:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID:Subject:To:From:Date; bh=WKW/gbJ+kx4U87hCaXl1mDAdlvFVQ2uSMEPfa4QGcAA=; b=QOseVEN5ZhjavUgAXjspREbFvVj2ZN5JMQ+OmythROWg1wut9/RPvFROBxZYUhcum+hk4nLS7hw1LbISwS6QyiYgAa28za0+ui2CRvdpvvvDHDyJlIZjz8THbcN+eTGAbX8qWafmWsbTLmhXwQdcdKJfc+jofMQ8s6ilQoyB7EA=; Received: from 104-54-221-134.lightspeed.austtx.sbcglobal.net ([104.54.221.134]:61968 helo=borg.lerctr.org) by thebighonker.lerctr.org with esmtpsa (TLSv1.2:DHE-RSA-AES256-GCM-SHA384:256) (Exim 4.84 (FreeBSD)) (envelope-from ) id 1Y0ck5-00095T-ND for freebsd-fs@freebsd.org; Mon, 15 Dec 2014 14:59:47 -0600 Date: Mon, 15 Dec 2014 14:59:34 -0600 From: Larry Rosenman To: freebsd-fs@freebsd.org Subject: Re: zfs diff without allow as user gets coredump? 
Message-ID: <20141215205934.GA3005@borg.lerctr.org> References: <20141215204833.GA2858@borg.lerctr.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20141215204833.GA2858@borg.lerctr.org> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Score: -2.9 (--) X-LERCTR-Spam-Score: -2.9 (--) X-Spam-Report: SpamScore (-2.9/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, TVD_RCVD_IP=0.001 X-LERCTR-Spam-Report: SpamScore (-2.9/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, TVD_RCVD_IP=0.001 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2014 20:59:48 -0000 On Mon, Dec 15, 2014 at 02:48:34PM -0600, Larry Rosenman wrote: > Should we get a better message if you try to do a zfs diff as a normal > user, and diff hasn't been allowed? > > You currently get: > borg.lerctr.org /home/ler $ zfs diff zroot/home/ler@zfs-auto-snap_hourly-2014-12-15-12h00 > internal error: Invalid argument > Abort trap (core dumped) > borg.lerctr.org /home/ler $ > > Just curious. borg.lerctr.org /home/ler $ uname -a FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #26 r275811: Mon Dec 15 12:36:41 CST 2014 root@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER amd64 borg.lerctr.org /home/ler $ svn info /usr/src Path: /usr/src Working Copy Root Path: /usr/src URL: svn://svn.freebsd.org/base/head Relative URL: ^/head Repository Root: svn://svn.freebsd.org/base Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f Revision: 275812 Node Kind: directory Schedule: normal Last Changed Author: delphij Last Changed Rev: 275812 Last Changed Date: 2014-12-15 12:28:22 -0600 (Mon, 15 Dec 2014) -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 E-Mail: ler@lerctr.org US Mail: 108 Turvey Cove, Hutto, TX 78634-5688 From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 21:27:54 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 88611E74 for ; Mon, 15 Dec 2014 21:27:54 +0000 (UTC) Received: from anubis.delphij.net (anubis.delphij.net [64.62.153.212]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "anubis.delphij.net", Issuer "StartCom Class 1 Primary Intermediate Server CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 6ECEACC5 for ; Mon, 15 Dec 2014 21:27:54 +0000 (UTC) Received: from zeta.ixsystems.com (unknown [12.229.62.2]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by anubis.delphij.net (Postfix) with ESMTPSA id 085591A29D; Mon, 15 Dec 2014 13:27:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphij.net; s=anubis; t=1418678868; x=1418693268; bh=OlqlHo3YMoPvQDsGZCLP+sAcjTaFOm8A+dyLzQyiy3I=; h=Date:From:Reply-To:To:Subject:References:In-Reply-To; b=ENzpvbF6mK3vJT/W6mGjp54GaQQvxy0ma0qpR/DoJ7sjn9Yfd4jDh/SmCtt1Xyekt Q3Yrq+4K9pBg9PDX/9ybjQydsmt90pSzh4resxzIPU06wCyEPwtq4Oji/UW8FZNBWe ke39N5KZ+W3g3nS9EhSADoI6YcnoIVJ6rhrQRsH8= Message-ID: <548F5253.2090603@delphij.net> Date: Mon, 15 Dec 2014 13:27:47 -0800 From: Xin Li Reply-To: d@delphij.net Organization: The FreeBSD Project MIME-Version: 1.0 To: Larry Rosenman , freebsd-fs@freebsd.org Subject: Re: zfs diff without allow as user gets coredump? 
References: <20141215204833.GA2858@borg.lerctr.org> In-Reply-To: <20141215204833.GA2858@borg.lerctr.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2014 21:27:54 -0000 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 12/15/14 12:48, Larry Rosenman wrote: > Should we get a better message if you try to do a zfs diff as a > normal user, and diff hasn't been allowed? > > You currently get: borg.lerctr.org /home/ler $ zfs diff > zroot/home/ler@zfs-auto-snap_hourly-2014-12-15-12h00 internal > error: Invalid argument Abort trap (core dumped) borg.lerctr.org > /home/ler $ It would be useful if you have a backtrace from the core file as I can't reproduce on -CURRENT. BTW I wasn't able to reproduce the abort trap (something returned EINVAL? how?) but found a different bug where an extra \n is sneaked in. Index: cddl/contrib/opensolaris/lib/libzfs/common/libzfs_diff.c =================================================================== - --- cddl/contrib/opensolaris/lib/libzfs/common/libzfs_diff.c (revision 275812) +++ cddl/contrib/opensolaris/lib/libzfs/common/libzfs_diff.c (working copy) @@ -524,7 +524,7 @@ (void) snprintf(di->errbuf, sizeof (di->errbuf), dgettext(TEXT_DOMAIN, "The diff delegated " "permission is needed in order\nto create a " - - "just-in-time snapshot for diffing\n")); + "just-in-time snapshot for diffing")); return (zfs_error(hdl, EZFS_DIFF, di->errbuf)); } else { (void) snprintf(di->errbuf, sizeof (di->errbuf), But no it doesn't fix your coredump apparently... Cheers, - -- Xin LI https://www.delphij.net/ FreeBSD - The Power to Serve! 
Live free or die -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.1.0 (FreeBSD) iQIcBAEBCgAGBQJUj1JQAAoJEJW2GBstM+nsIf0P/1H14YHsuYWkjFxXaV5P9mpa XX3PyaWHgOIIu4N6fOEG2sz2WlAbCJiFaB6PdS6ih4hbKH7ZmkxKu+cVo7q7tmdH uyTDoeDcvcoRhMbAHnhk3FRtu+9stTb4nxve4Ja/8OhFYR9mUyJfoJYlTOBsYawI zitqrWhwTj94mP4vWSN7lOB1IN50/Pz/lyNVyTjyJUABWBowDKVQUtgtsffAqR3i xB7IDFFI8b4Tn7GxIQzuJiBAHTdSlsbQsarFYg+9912JeEz7N3NDovOqYcufcCFT m9t6ksbjjGwMJIwqfx83+FORlJdbRj0qbRTT5A81oXi2FlAPAzo1Jnk8g916HbSt 0n9jVqRDYGfoSy06+mjRDcMuA/x9QaeCCPPAoQBFNOdvPYnBVyNRATZhG7BVdEXN 1ygm1I7OUFBZkeXFYaJDEcdF/PHsWUkPMctA4DGbTV2rbwhS1mqHLjy5VMszUzh/ B8XHEHi3zF+6CAoMUQzhWVu1tF8p/nv6ZBjCU39aIdt88u8qxzdTq3kM//oibJi6 hp2DrUxiQFrmEDwHy5hS6NJaSYAD/ap3H2Y6T5+fw22/rB0vqaPqgiB+kDlzPGtN 8SFgoTzp6YffJ6+kXJ/XZ5bPVoV4JIq+rp2Ypyrp6nJXYYrICYYZCEKLX3Amxxtw dqRCXzO0d8PCnNldTBgy =6gKh -----END PGP SIGNATURE----- From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 21:33:07 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A89FDF58 for ; Mon, 15 Dec 2014 21:33:07 +0000 (UTC) Received: from thebighonker.lerctr.org (thebighonker.lerctr.org [IPv6:2001:470:1f0f:3ad:223:7dff:fe9e:6e8a]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "thebighonker.lerctr.org", Issuer "COMODO RSA Domain Validation Secure Server CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 79106D9D for ; Mon, 15 Dec 2014 21:33:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=Message-ID:References:In-Reply-To:Subject:Cc:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=QDSYUAVM4q1bporQbYAxrlbB2/maGAeZeJZGlrxzMj0=; b=a7574GNlRvJM0zqy5RwpZDJlbJfqh6XRnvFuxRyDTvxgjWaZuYrPp9QvJaD4HGvA44xcp6MKcVqUJWTqaLMr0ZAeLngwujhmCuFqnsQoUFLNWaMhsqdqEECZquNcnlXhzBcgMVWClLoQeJ2vAX6DsTEZpH+8yHilZhb4Qhgr5nA=; Received: from thebighonker.lerctr.org ([2001:470:1f0f:3ad:223:7dff:fe9e:6e8a]:14326 helo=webmail.lerctr.org) by thebighonker.lerctr.org with esmtpsa (TLSv1.2:DHE-RSA-AES128-GCM-SHA256:128) (Exim 4.84 (FreeBSD)) (envelope-from ) id 1Y0dGJ-0009Xh-2X; Mon, 15 Dec 2014 15:33:06 -0600 Received: from host.alcatel.com ([198.205.55.139]) by webmail.lerctr.org with HTTP (HTTP/1.1 POST); Mon, 15 Dec 2014 15:33:02 -0600 MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII; format=flowed Content-Transfer-Encoding: 7bit Date: Mon, 15 Dec 2014 15:33:02 -0600 From: Larry Rosenman To: d@delphij.net Subject: Re: zfs diff without allow as user gets =?UTF-8?Q?coredump=3F?= In-Reply-To: <548F5253.2090603@delphij.net> References: <20141215204833.GA2858@borg.lerctr.org> <548F5253.2090603@delphij.net> Message-ID: <1b1127837b9c07378c58e9ab7ebf8229@thebighonker.lerctr.org> X-Sender: ler@lerctr.org User-Agent: Roundcube Webmail/1.0.3 X-Spam-Score: -2.1 (--) X-LERCTR-Spam-Score: -2.1 (--) X-Spam-Report: SpamScore (-2.1/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, KAM_ASCII_DIVIDERS=0.8, T_RP_MATCHES_RCVD=-0.01 X-LERCTR-Spam-Report: SpamScore (-2.1/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, KAM_ASCII_DIVIDERS=0.8, T_RP_MATCHES_RCVD=-0.01 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2014 21:33:07 -0000 On 2014-12-15 15:27, Xin Li 
wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 12/15/14 12:48, Larry Rosenman wrote: >> Should we get a better message if you try to do a zfs diff as a >> normal user, and diff hasn't been allowed? >> >> You currently get: borg.lerctr.org /home/ler $ zfs diff >> zroot/home/ler@zfs-auto-snap_hourly-2014-12-15-12h00 internal >> error: Invalid argument Abort trap (core dumped) borg.lerctr.org >> /home/ler $ > > It would be useful if you have a backtrace from the core file as I > can't reproduce on -CURRENT. > > BTW I wasn't able to reproduce the abort trap (something returned > EINVAL? how?) but found a different bug where an extra \n is sneaked > in. > > Index: cddl/contrib/opensolaris/lib/libzfs/common/libzfs_diff.c > =================================================================== > - --- > cddl/contrib/opensolaris/lib/libzfs/common/libzfs_diff.c (revision > 275812) > +++ cddl/contrib/opensolaris/lib/libzfs/common/libzfs_diff.c (working > copy) > @@ -524,7 +524,7 @@ > (void) snprintf(di->errbuf, sizeof (di->errbuf), > dgettext(TEXT_DOMAIN, "The diff delegated " > "permission is needed in order\nto create a " > - - "just-in-time snapshot for diffing\n")); > + "just-in-time snapshot for diffing")); > return (zfs_error(hdl, EZFS_DIFF, di->errbuf)); > } else { > (void) snprintf(di->errbuf, sizeof (di->errbuf), > > But no it doesn't fix your coredump apparently... > borg.lerctr.org /home/ler $ zfs diff zroot/home/ler@zfs-auto-snap_hourly-2014-12-15-14h00 internal error: Invalid argument Abort trap (core dumped) borg.lerctr.org /home/ler $ gdb -c zfs.core /sbin/zfs GNU gdb 6.1.1 [FreeBSD] Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "amd64-marcel-freebsd"...(no debugging symbols found)... Core was generated by `zfs'. Program terminated with signal 6, Aborted. Reading symbols from /lib/libgeom.so.5...(no debugging symbols found)...done. Loaded symbols for /lib/libgeom.so.5 Reading symbols from /lib/libjail.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/libjail.so.1 Reading symbols from /lib/libnvpair.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libnvpair.so.2 Reading symbols from /lib/libumem.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libumem.so.2 Reading symbols from /lib/libutil.so.9...(no debugging symbols found)...done. Loaded symbols for /lib/libutil.so.9 Reading symbols from /lib/libuutil.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libuutil.so.2 Reading symbols from /lib/libzfs_core.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libzfs_core.so.2 Reading symbols from /lib/libzfs.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libzfs.so.2 Reading symbols from /lib/libc.so.7...(no debugging symbols found)...done. Loaded symbols for /lib/libc.so.7 Reading symbols from /lib/libbsdxml.so.4...(no debugging symbols found)...done. Loaded symbols for /lib/libbsdxml.so.4 Reading symbols from /lib/libsbuf.so.6...(no debugging symbols found)...done. Loaded symbols for /lib/libsbuf.so.6 Reading symbols from /lib/libmd.so.6...(no debugging symbols found)...done. 
Loaded symbols for /lib/libmd.so.6 Reading symbols from /lib/libm.so.5...(no debugging symbols found)...done. Loaded symbols for /lib/libm.so.5 Reading symbols from /lib/libavl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libavl.so.2 Reading symbols from /lib/libthr.so.3...(no debugging symbols found)...done. Loaded symbols for /lib/libthr.so.3 Reading symbols from /libexec/ld-elf.so.1...(no debugging symbols found)...done. Loaded symbols for /libexec/ld-elf.so.1 #0 0x0000000801a00dba in kill () from /lib/libc.so.7 [New Thread 803006400 (LWP 100916/zfs)] (gdb) bt #0 0x0000000801a00dba in kill () from /lib/libc.so.7 #1 0x00000008019ff4e9 in abort () from /lib/libc.so.7 #2 0x000000080168ead2 in zfs_standard_error_fmt () from /lib/libzfs.so.2 #3 0x000000080168e4c5 in zfs_standard_error () from /lib/libzfs.so.2 #4 0x000000080168c44c in zfs_show_diffs () from /lib/libzfs.so.2 #5 0x000000080168ac2e in zfs_show_diffs () from /lib/libzfs.so.2 #6 0x000000000040a004 in zfs_do_diff () #7 0x000000000040572f in main () (gdb) borg.lerctr.org /home/ler $ zfs allow zroot/home/ler ---- Permissions on zroot/home/ler ----------------------------------- Local+Descendent permissions: user ler destroy,mount,snapdir,snapshot borg.lerctr.org /home/ler $ borg.lerctr.org /home/ler $ uname -a FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #26 r275811: Mon Dec 15 12:36:41 CST 2014 root@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER amd64 borg.lerctr.org /home/ler $ svn info /usr/src Path: /usr/src Working Copy Root Path: /usr/src URL: svn://svn.freebsd.org/base/head Relative URL: ^/head Repository Root: svn://svn.freebsd.org/base Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f Revision: 275812 Node Kind: directory Schedule: normal Last Changed Author: delphij Last Changed Rev: 275812 Last Changed Date: 2014-12-15 12:28:22 -0600 (Mon, 15 Dec 2014) if you want / need access I can arrange it :) -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 E-Mail: ler@lerctr.org US Mail: 108 Turvey Cove, Hutto, TX 78634-5688 From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 14:23:31 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 145F3A85 for ; Tue, 16 Dec 2014 14:23:31 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F0062965 for ; Tue, 16 Dec 2014 14:23:30 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBGENUXP001744 for ; Tue, 16 Dec 2014 14:23:30 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 175449] [unionfs] unionfs and devfs misbehaviour Date: Tue, 16 Dec 2014 14:23:31 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 1.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: dbn@FreeBSD.org X-Bugzilla-Status: Open X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc 
Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2014 14:23:31 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=175449 David Naylor changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Progress |Open CC| |dbn@FreeBSD.org --- Comment #5 from David Naylor --- This bug is still applicable on FreeBSD 10.1-RELEASE -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 14:36:05 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 69DADED2 for ; Tue, 16 Dec 2014 14:36:05 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 50D68A9C for ; Tue, 16 Dec 2014 14:36:05 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBGEa58F041891 for ; Tue, 16 Dec 2014 14:36:05 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 126973] [unionfs] [hang] System hang with unionfs and init chroot [regression] Date: Tue, 16 Dec 2014 14:36:05 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: dbn@FreeBSD.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: resolution bug_status cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2014 14:36:05 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=126973 David Naylor changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|--- |Unable to Reproduce Status|In Progress |Closed CC| |dbn@FreeBSD.org --- Comment #8 from David Naylor --- Too complicated to verify if this issue persists. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 14:38:53 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0F92CFDD for ; Tue, 16 Dec 2014 14:38:53 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EB37CAC4 for ; Tue, 16 Dec 2014 14:38:52 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBGEcqHH042982 for ; Tue, 16 Dec 2014 14:38:52 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 141950] [unionfs] [lor] ufs/unionfs/ufs Lock order reversal Date: Tue, 16 Dec 2014 14:38:53 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: dbn@FreeBSD.org X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2014 14:38:53 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=141950 David Naylor changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dbn@FreeBSD.org --- Comment #7 from David Naylor --- Any progress on this LOR? Can this bug be changed? -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 15:24:28 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2C20CBA3 for ; Tue, 16 Dec 2014 15:24:28 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 13CA7FBE for ; Tue, 16 Dec 2014 15:24:28 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBGFORKt003293 for ; Tue, 16 Dec 2014 15:24:27 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 170778] [zfs] [panic] FreeBSD panics randomly Date: Tue, 16 Dec 2014 15:24:28 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: bin X-Bugzilla-Version: 9.1-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: harrison.grundy@astrodoggroup.com X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2014 15:24:28 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=170778 Harrison Grundy changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |harrison.grundy@astrodoggro | |up.com --- Comment #2 from Harrison Grundy --- Does this still occur, and if so, can you provide dmesg output? -- You are receiving this mail because: You are the assignee for the bug. 
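For bug 170778, a minimal sketch of how the requested information can be gathered on the affected machine, assuming the default dumpdev/savecore setup (the output file name is only a placeholder):

# dmesg -a > /tmp/pr170778-dmesg.txt
# ls /var/crash

dmesg -a captures the current kernel message buffer; if savecore(8)/crashinfo(8) picked up a dump after one of the panics, the core.txt.* summary under /var/crash also holds the panic string and stack backtrace, which is usually the most useful thing to attach to the PR.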
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 17:44:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4DEDB176 for ; Tue, 16 Dec 2014 17:44:27 +0000 (UTC) Received: from mail-lb0-x235.google.com (mail-lb0-x235.google.com [IPv6:2a00:1450:4010:c04::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C543A215 for ; Tue, 16 Dec 2014 17:44:26 +0000 (UTC) Received: by mail-lb0-f181.google.com with SMTP id l4so11232975lbv.26 for ; Tue, 16 Dec 2014 09:44:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=QI5lZ307Wg44Cy3utd6aOJykGa+QExbZxocfqMdrwSY=; b=PSCsjwJqefwJSMd7tvjiqDGem6gRRz5Z3MZpAYGm2xoYFGPijV+0ZY8woFmOkFQdN0 aKyY2+QJVHnpjv/XYM9/XQsS8UBg9DXEyXBYEi7QkTCFVbavXzB2oWZmIpmdM7keIsZe badsdIoOnj8Ina7prZ09PeDfOQ8ktxSJV3d45MNskk3fj7bILomNKhgQH+IwBmqdP4MY wStbG/HZzCgckcIf/o+f0HgPuT8Q1JDuGYCBvPmuSw8djFV0sfSVSd5uvkNnLf6Sf2SA fOVJDwgGACIBJchKfAS6M43tiVkqf4kxnTyIfmypXfkjtMu7zMeGb9O00zMt8eVwZVVE qPlg== MIME-Version: 1.0 X-Received: by 10.112.16.129 with SMTP id g1mr32287097lbd.30.1418751864956; Tue, 16 Dec 2014 09:44:24 -0800 (PST) Received: by 10.114.216.163 with HTTP; Tue, 16 Dec 2014 09:44:24 -0800 (PST) Date: Tue, 16 Dec 2014 09:44:24 -0800 Message-ID: Subject: Expanding a zpool inside a VM From: javocado To: FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2014 17:44:27 -0000 I have a FreeBSD 10.0 installation in a vmware guest and I'd like to expand the zpool. 
Obviously it's very simple to expand the disk size within vmware, but I just want to make sure this command - which _appears_ to work fine - is really safe and sane to properly expand the pool to the full size of the the new, larger disk: zpool online -e pool /dev/da1 From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 17:51:53 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 98CCD2A5 for ; Tue, 16 Dec 2014 17:51:53 +0000 (UTC) Received: from webmail2.jnielsen.net (webmail2.jnielsen.net [50.114.224.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "webmail2.jnielsen.net", Issuer "freebsdsolutions.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 770FC31A for ; Tue, 16 Dec 2014 17:51:52 +0000 (UTC) Received: from [10.10.1.196] (office.betterlinux.com [199.58.199.60]) (authenticated bits=0) by webmail2.jnielsen.net (8.14.9/8.14.9) with ESMTP id sBGHphF0078341 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 16 Dec 2014 10:51:46 -0700 (MST) (envelope-from lists@jnielsen.net) X-Authentication-Warning: webmail2.jnielsen.net: Host office.betterlinux.com [199.58.199.60] claimed to be [10.10.1.196] Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 8.1 \(1993\)) Subject: Re: Expanding a zpool inside a VM From: John Nielsen In-Reply-To: Date: Tue, 16 Dec 2014 10:51:43 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <90BAD484-1EFD-478F-9A43-60D39242FC1D@jnielsen.net> References: To: javocado X-Mailer: Apple Mail (2.1993) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2014 17:51:53 -0000 On Dec 16, 2014, at 10:44 AM, javocado wrote: > I have a FreeBSD 10.0 installation in a vmware guest and I'd like to = expand > the zpool. Obviously it's very simple to expand the disk size within > vmware, but I just want to make sure this command - which _appears_ to = work > fine - is really safe and sane to properly expand the pool to the full = size > of the the new, larger disk: >=20 > zpool online -e pool /dev/da1 I've been using exactly that command for a long time to extend zpools in = virtual machines with the expected results. 
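For reference, a minimal sketch of the surrounding steps on a FreeBSD guest whose virtual disk has been grown. The pool and device names are the ones from this thread; the rescan and autoexpand steps are assumptions about a typical setup, not something either poster described, and are optional:

# camcontrol rescan all           # only if the kernel has not yet noticed the larger da1
# zpool set autoexpand=on pool    # optional: grow automatically after future resizes
# zpool online -e pool /dev/da1
# zpool list pool                 # SIZE should now reflect the larger disk

zpool online -e is the documented way to tell ZFS to use the expanded capacity of an already-online vdev, so the command discussed above is the right one; the other lines are just preparation and verification.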
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 00:06:51 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8387FDDA for ; Wed, 17 Dec 2014 00:06:51 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6B6DC83C for ; Wed, 17 Dec 2014 00:06:51 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBH06pou043771 for ; Wed, 17 Dec 2014 00:06:51 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 174310] [zfs] root point mounting broken on CURRENT with multiple pools Date: Wed, 17 Dec 2014 00:06:51 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: delphij@FreeBSD.org X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 00:06:51 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=174310 Xin LI changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |delphij@FreeBSD.org --- Comment #4 from Xin LI --- Since nobody seems to have been working on this, could you please add a few assertion or printf in /sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c, zfs_mount() to see what have returned these ENXIO's? -- You are receiving this mail because: You are the assignee for the bug. 
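For bug 174310, if rebuilding the kernel with extra printf()s is inconvenient, a hedged alternative for the non-boot (re-mount) case is a DTrace one-liner over the zfs module's fbt return probes. The module and probe names are assumptions and may differ if ZFS is compiled into the kernel; 6 is ENXIO on FreeBSD, and functions whose return value is not an errno can produce false matches:

# dtrace -n 'fbt:zfs::return /arg1 == 6/ { printf("%s -> ENXIO", probefunc); stack(); }'

Running that while reproducing the failing mount shows which function in the module is generating the ENXIO and from where it was called, which is essentially the information the printf/assertion approach above would provide.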
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 05:21:04 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C25E97EB for ; Wed, 17 Dec 2014 05:21:04 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id AA4A4A94 for ; Wed, 17 Dec 2014 05:21:04 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBH5L4tC017561 for ; Wed, 17 Dec 2014 05:21:04 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 170778] [zfs] [panic] FreeBSD panics randomly Date: Wed, 17 Dec 2014 05:21:04 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: bin X-Bugzilla-Version: 9.1-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: emz@norma.perm.ru X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 05:21:04 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=170778 emz@norma.perm.ru changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |emz@norma.perm.ru --- Comment #3 from emz@norma.perm.ru --- Nope, gone with 10-CURRENT somewhere before the 10.0-RELEASE. Please close this. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 11:54:02 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 38B20B0C for ; Wed, 17 Dec 2014 11:54:02 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 208F0DD5 for ; Wed, 17 Dec 2014 11:54:02 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBHBs1WH065599 for ; Wed, 17 Dec 2014 11:54:01 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 170778] [zfs] [panic] FreeBSD panics randomly Date: Wed, 17 Dec 2014 11:54:01 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: bin X-Bugzilla-Version: 9.1-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: harrison.grundy@astrodoggroup.com X-Bugzilla-Status: Closed X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 11:54:02 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=170778 Harrison Grundy changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Progress |Closed Resolution|--- |FIXED --- Comment #4 from Harrison Grundy --- Closed per reporter. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 12:24:40 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 78667883 for ; Wed, 17 Dec 2014 12:24:40 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5FD35279 for ; Wed, 17 Dec 2014 12:24:40 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBHCOeUL075842 for ; Wed, 17 Dec 2014 12:24:40 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 176449] zfs(1): ZFS NFS export went wrong with special hostname character Date: Wed, 17 Dec 2014 12:24:40 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: bin X-Bugzilla-Version: 9.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: harrison.grundy@astrodoggroup.com X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 12:24:40 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=176449 Harrison Grundy changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 12:47:11 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6AE341DB for ; Wed, 17 Dec 2014 12:47:11 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 52DBE770 for ; Wed, 17 Dec 2014 12:47:11 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBHClBUf086275 for ; Wed, 17 Dec 2014 12:47:11 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 193128] NFSv3 Solaris 10 Server < - > NFSv3 Freebsd 10.1 Client Date: Wed, 17 Dec 2014 12:47:10 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: harrison.grundy@astrodoggroup.com X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 12:47:11 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 Harrison Grundy changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org CC| |harrison.grundy@astrodoggro | |up.com -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 15:37:04 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5E7FE286 for ; Wed, 17 Dec 2014 15:37:04 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 460BFE2A for ; Wed, 17 Dec 2014 15:37:04 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBHFb47H061326 for ; Wed, 17 Dec 2014 15:37:04 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 167105] [nfs] mount_nfs can not handle source exports wiht more then 63 chars Date: Wed, 17 Dec 2014 15:37:04 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: mexas@bris.ac.uk X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 15:37:04 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=167105 mexas@bris.ac.uk changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mexas@bris.ac.uk --- Comment #4 from mexas@bris.ac.uk --- I'm getting the same error in 10.1-stable amd64: WARNING: autofs_trigger_one: request for /net/ ture/export/ completed with error 5 Any news on this since 2012? Anton -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 20:31:11 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3F9E153E; Wed, 17 Dec 2014 20:31:11 +0000 (UTC) Received: from caida.org (rommie.caida.org [192.172.226.78]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 197BC1767; Wed, 17 Dec 2014 20:31:10 +0000 (UTC) Message-ID: <5491E3CE.2030003@caida.org> Date: Wed, 17 Dec 2014 12:13:02 -0800 From: Daniel Andersen User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: Michael Ranner , freebsd-fs@freebsd.org Subject: Re: Process enters unkillable state and somewhat wedges zfs References: <549152B5.6030100@ranner.eu> In-Reply-To: <549152B5.6030100@ranner.eu> Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 20:31:11 -0000 I'm certainly not an expert on this, but from my understanding, I think Andriy Gapon had some thoughts on this particular problem and potential solutions: http://lists.freebsd.org/pipermail/freebsd-fs/2014-November/020482.html At least, I think these problems are related. From my end, my 'solution' was to stop using nullfs mounts in conjunction with zfs. :( Fortunately, our jail servers don't use zfs, yet. Since I reorganized our data NFS server to stop using nullfs, it's been very stable. (aside from one crash when its raid 1 boot drive somehow went offline... but I'm fairly sure that had nothing whatsoever to do with the ZFS problems.) Dan On 12/17/2014 01:53 AM, Michael Ranner wrote: > Hello there! > > I have the same problem on FreeBSD 9.3-RELEASE-p6. > > mount_nullfs of a ZFS dataset in a jail environment. > > A php-fpm process gets stuck (100% CPU, unkillable) and the specific ZFS dataset is inaccessible. Further processes > accessing the same dataset are hanging in state zfs. > > Other datasets on the same pool are still accessible. > > The stack trace looks very similar to Daniel's. > > Is there any progress on this? > > I have many very similar environments (jail, ZFS, nullfs mounts), so I am interested in some solution or workaround. > > -- > Kind regards > > Ing.
Michael Ranner > > GSM: +43 676 4155044 > Mail: michael@ranner.eu > WWW: http://www.azedo.at/ > From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 23:14:34 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 075B9645 for ; Wed, 17 Dec 2014 23:14:34 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E2D6DECB for ; Wed, 17 Dec 2014 23:14:33 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBHNEX7I025264 for ; Wed, 17 Dec 2014 23:14:33 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 193128] NFSv3 Solaris 10 Server < - > NFSv3 Freebsd 10.1 Client Date: Wed, 17 Dec 2014 23:14:34 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: rmacklem@FreeBSD.org X-Bugzilla-Status: Open X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 23:14:34 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 Rick Macklem changed: What |Removed |Added ---------------------------------------------------------------------------- Status|New |Open CC| |rmacklem@FreeBSD.org -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 23:27:04 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4B6318F3 for ; Wed, 17 Dec 2014 23:27:04 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 18FE1100F for ; Wed, 17 Dec 2014 23:27:04 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBHNR3iD051549 for ; Wed, 17 Dec 2014 23:27:03 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 193128] NFSv3 Solaris 10 Server < - > NFSv3 Freebsd 10.1 Client Date: Wed, 17 Dec 2014 23:27:03 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: rmacklem@FreeBSD.org X-Bugzilla-Status: Open X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 23:27:04 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 --- Comment #2 from Rick Macklem --- This sounds like a previously reported problem that was pretty clearly a Solaris server bug. From the wireshark trace I have, the client: - does an exclusive open, which succeeds - does a setattr with mode 0644, which succeeds --> Then the file's attributes are returned by the subsequent write with mode 0, even though the server replied NFS_OK to the setattr for mode 0644 When I tried to contact someone in the NFS Solaris engineering group, I got a "we don't talk to anyone without a maintenance contract". The person who reported this via email tried making a bug report to Solaris, but ended up converting his server to FreeBSD to fix the problem. His test case for the above-mentioned wireshark trace was extracting a small tarball with one small file in it. You could try the same test and see if you get the same result. If unrolling the tarball doesn't cause the problem, you could email me a packet trace for the failure, but please try and make it as small as possible. (Or look at it in wireshark and look for an Exclusive CREATE, followed by a SETATTR that sets the mode, both returning NFS_OK.) I guess you are stuck with NFSv2 (which does file creation differently) or bugging Oracle/Solaris about the bug. Also, if the simple "tar xpf " fails for FreeBSD 10 but succeeds for FreeBSD 8.3 against the Solaris server, I could look at a packet trace from the FreeBSD 8.3 case and see how it differs. (It still seems clear it is a Solaris server bug, but maybe a client change could be created to work around it.) 
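[A concrete sketch of that test case, for anyone who wants to try it: assuming the Solaris export is mounted at /mnt/solaris on the FreeBSD client (the mount point and file names below are placeholders, not taken from the original report), the whole test is just:

# cd /tmp
# echo test > smallfile
# tar cf one-file.tar smallfile
# tar xpf one-file.tar -C /mnt/solaris
# ls -l /mnt/solaris/smallfile

If the server misbehaves as described above, the extracted file ends up with mode 0, so the ls output shows no permission bits set even though both the exclusive CREATE and the SETATTR for mode 0644 returned NFS_OK.]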
To get a packet trace, you can do the following on the FreeBSD client: # tcpdump -s 0 -w .pcap host Good luck with it, rick ps: I'll email the packet trace I have to you, so you can look at it in wireshark. -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 23:31:54 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A93479B8 for ; Wed, 17 Dec 2014 23:31:54 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 90B9610DE for ; Wed, 17 Dec 2014 23:31:54 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBHNVs96069659 for ; Wed, 17 Dec 2014 23:31:54 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 193128] NFSv3 Solaris 10 Server < - > NFSv3 Freebsd 10.1 Client Date: Wed, 17 Dec 2014 23:31:54 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: rmacklem@FreeBSD.org X-Bugzilla-Status: Open X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: rmacklem@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2014 23:31:54 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 Rick Macklem changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|freebsd-fs@FreeBSD.org |rmacklem@FreeBSD.org --- Comment #3 from Rick Macklem --- I'll take this one. -- You are receiving this mail because: You are the assignee for the bug. 
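[Related sketch: the tcpdump command in comment #2 above lost its placeholders in the archive. A filled-in form of the same capture, with the output file name and server host purely illustrative, would be:

# tcpdump -s 0 -w /tmp/nfs-fail.pcap host solaris-server.example.com

Run the failing tar extraction while the capture is running, stop tcpdump with Ctrl-C, then open /tmp/nfs-fail.pcap in wireshark and look for the exclusive CREATE followed by the SETATTR that sets the mode.]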
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 18 05:56:32 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BAA6E364 for ; Thu, 18 Dec 2014 05:56:32 +0000 (UTC) Received: from syd1-0001mrd.server-mail.com (syd1-0001mrd.server-mail.com [210.247.193.193]) by mx1.freebsd.org (Postfix) with ESMTP id 4F900126B for ; Thu, 18 Dec 2014 05:56:31 +0000 (UTC) Received: from wic001mz.server-mail.com (wic001mz.server-mail.com [210.247.173.1]) by syd1-0001mrd.server-mail.com (Postfix) with ESMTP id 09345402A0 for ; Thu, 18 Dec 2014 15:28:55 +1000 (EST) Received: from bne3-0003mrs.server-mail.com ([203.147.156.147]) by wic001mz.server-mail.com with - id UtUu1p00V3B5L3P01tUufA; Thu, 18 Dec 2014 15:28:54 +1000 Received: from localhost (localhost.localdomain [127.0.0.1]) by bne3-0003mrs.server-mail.com (Postfix) with ESMTP id AE925881BA for ; Thu, 18 Dec 2014 15:28:54 +1000 (EST) X-MRS-PREFILTER: [ BAYES_=-1.9, HTML_MESSAGE=0.0, HTML_FONT_SIZE_LARGE=0.0 ] X-MRS-SCORE: -1.898 X-Virus-Scanned: amavisd-new at server-mail.com Received: from bne3-0003mrs.server-mail.com ([127.0.0.1]) by localhost (bne3-0003mrs.server-mail.com [127.0.0.1]) (amavisd-new, port 10124) with ESMTP id YeisBIsqMT20 for ; Thu, 18 Dec 2014 15:28:49 +1000 (EST) Received: from SWP0001-003NL.services.admin-domain.net (swp0001-003nl.server-web.com [202.139.240.13]) by bne3-0003mrs.server-mail.com (Postfix) with SMTP id 5AFC0881BF for ; Thu, 18 Dec 2014 15:28:43 +1000 (EST) Date: Thu, 18 Dec 2014 15:28:43 +1000 Subject: Delivery Status Notification To: freebsd-fs@freebsd.org From: "FedEx Express Saver" X-Mailer: EBTReporterv2.x Reply-To: "FedEx Express Saver" Mime-Version: 1.0 Message-Id: <20141218052843.5AFC0881BF@bne3-0003mrs.server-mail.com> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Dec 2014 05:56:32 -0000   FedEx Dear Customer, Your parcel has arrived at December 12. Courier was unable to deliver the parcel to you. To receive your parcel, print this label and go to the nearest office. 
Get Shipment Label FedEx 1995-2014 From owner-freebsd-fs@FreeBSD.ORG Thu Dec 18 11:28:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2B9E6CB for ; Thu, 18 Dec 2014 11:28:58 +0000 (UTC) Received: from smtp.unix-experience.fr (195-154-176-227.rev.poneytelecom.eu [195.154.176.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id AF9BC1527 for ; Thu, 18 Dec 2014 11:28:56 +0000 (UTC) Received: from smtp.unix-experience.fr (unknown [192.168.200.21]) by smtp.unix-experience.fr (Postfix) with ESMTP id B578F1767; Thu, 18 Dec 2014 11:28:48 +0000 (UTC) X-Virus-Scanned: scanned by unix-experience.fr Received: from smtp.unix-experience.fr ([192.168.200.21]) by smtp.unix-experience.fr (smtp.unix-experience.fr [192.168.200.21]) (amavisd-new, port 10024) with ESMTP id q6ZZJrJdtmnq; Thu, 18 Dec 2014 11:28:43 +0000 (UTC) Received: from mail.unix-experience.fr (repo.unix-experience.fr [192.168.200.30]) by smtp.unix-experience.fr (Postfix) with ESMTPSA id 2BDC61756; Thu, 18 Dec 2014 11:28:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=unix-experience.fr; s=uxselect; t=1418902123; bh=BnvsxW9TA9ponm+Jpmt6bAh62VxOLYDsZYYDQpUtlOg=; h=Date:From:Subject:To:Cc:In-Reply-To:References; b=Gn3kmaAJgls8aqvXMpISxmhp3qxiENQ3ACVuaudb4xooZEJ/jC2OkuZluux892sMl GwYBNx4ohTduBrFGa3VReImCw/voElTPB9Fd5UtICLqy6iI30lvwkSxRXfoMFW8DCr ZeT9thJQIV2sw5dBxaRKeJWg0pDLTck5k8HKWWfM= Mime-Version: 1.0 Date: Thu, 18 Dec 2014 11:28:42 +0000 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-ID: <0eaadfe31ac4b8bbdeaf0baff696dada@mail.unix-experience.fr> X-Mailer: RainLoop/1.7.0.203 From: "=?utf-8?B?TG/Dr2MgQmxvdA==?=" Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4 To: "Rick Macklem" In-Reply-To: <1215617347.12668398.1418653073454.JavaMail.root@uoguelph.ca> References: <1215617347.12668398.1418653073454.JavaMail.root@uoguelph.ca> Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Dec 2014 11:28:58 -0000 Hi rick,=0Ai tried to start a LXC container on Debian Squeeze from my fre= ebsd ZFS+NFSv4 server and i also have a deadlock on nfsd (vfs.lookup_shar= ed=3D0). 
Deadlock procs each time i launch a squeeze container, it seems = (3 tries, 3 fails).=0A=0A 921 - D 0:00.02 nfsd: server (nfsd)=0A= =0AHere is the procstat -kk=0A=0A PID TID COMM TDNAME = KSTACK =0A 921 100538 nfsd nfsd= : master mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args= +0xc9e vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 nfsvno_advlock+0= x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad nfsrvd_locku+0x283 nfsrvd_d= orpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de nfsr= vd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c =0A 921 100572 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_= wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0= xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100573 nfsd nf= sd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0= xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_ex= it+0x9a fork_trampoline+0xe =0A 921 100574 nfsd nfsd: servic= e mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wai= t_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fo= rk_trampoline+0xe =0A 921 100575 nfsd nfsd: service mi_sw= itch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16= a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampol= ine+0xe =0A 921 100576 nfsd nfsd: service mi_switch+0xe1 = sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_= internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 921 100577 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 1= 00578 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100579 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100580 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 921 100581 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 921 100582 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A 921 100583 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 921 100584 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 92= 1 100585 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100586 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 
100587 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe =0A 921 100588 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A 921 100589 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A 921 100590 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A 921 100591 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 921 100592 nfsd nfsd: service mi_switch+0xe1 sleepq_catch= _signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 10059= 3 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100594 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq= _wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100595 nfsd n= fsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a fork_trampoline+0xe =0A 921 100596 nfsd nfsd: servi= ce mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wa= it_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a f= ork_trampoline+0xe =0A 921 100597 nfsd nfsd: service mi_s= witch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x1= 6a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampo= line+0xe =0A 921 100598 nfsd nfsd: service mi_switch+0xe1= sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 921 100599 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 1= 00600 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100601 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100602 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 921 100603 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 921 100604 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= 
ampoline+0xe =0A 921 100605 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 921 100606 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 92= 1 100607 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100608 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100609 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe =0A 921 100610 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A 921 100611 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A 921 100612 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A 921 100613 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 921 100614 nfsd nfsd: service mi_switch+0xe1 sleepq_catch= _signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 10061= 5 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100616 nfsd = nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nf= smsleep+0x66 nfsv4_lock+0x9b nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f= nfsrvd_lock+0x5b1 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_intern= al+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921= 100617 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_sig= nals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100618 nf= sd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x= 287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x55= 4 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampol= ine+0xe =0A 921 100619 nfsd nfsd: service mi_switch+0xe1 = sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_= internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 921 100620 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 1= 00621 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100622 nfsd= nfsd: 
service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100623 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 921 100624 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 921 100625 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A 921 100626 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 921 100627 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 92= 1 100628 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100629 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100630 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe =0A 921 100631 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A 921 100632 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A 921 100633 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A 921 100634 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 921 100635 nfsd nfsd: service mi_switch+0xe1 sleepq_catch= _signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 10063= 6 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100637 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq= _wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100638 nfsd n= fsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a fork_trampoline+0xe =0A 921 100639 nfsd nfsd: servi= ce mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wa= it_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a f= ork_trampoline+0xe =0A 921 
100640 nfsd nfsd: service mi_s= witch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x1= 6a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampo= line+0xe =0A 921 100641 nfsd nfsd: service mi_switch+0xe1= sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 921 100642 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 1= 00643 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100644 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100645 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 921 100646 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb=20fork_exit+= 0x9a fork_trampoline+0xe =0A 921 100647 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_s= ig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_= trampoline+0xe =0A 921 100648 nfsd nfsd: service mi_switc= h+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a s= vc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline= +0xe =0A 921 100649 nfsd nfsd: service mi_switch+0xe1 sle= epq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_int= ernal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 921 100650 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_= signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87= e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100651= nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0x= ab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thre= ad_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100652 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_= wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0= xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100653 nfsd nf= sd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0= xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_ex= it+0x9a fork_trampoline+0xe =0A 921 100654 nfsd nfsd: servic= e mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wai= t_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fo= rk_trampoline+0xe =0A 921 100655 nfsd nfsd: service mi_sw= itch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16= a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampol= ine+0xe =0A 921 100656 nfsd nfsd: service mi_switch+0xe1 = sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_= internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 921 100657 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a 
fork_trampoline+0xe =0A 921 1= 00658 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100659 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100660 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 921 100661 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 921 100662 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A 921 100663 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 921 100664 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 92= 1 100665 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 921 100666 n= fsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0= x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrv_setclient+0xbd nfsrvd_setclient= id+0x3c8 nfsrvd_dorpc+0xc76 nfssvc_program+0x554 svc_run_internal+0xc77 s= vc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A=0A=0ARegards,= =0A=0ALo=C3=AFc Blot,=0AUNIX Systems, Network and Security Engineer=0Ahtt= p://www.unix-experience.fr=0A=0A15 d=C3=A9cembre 2014 15:18 "Rick Macklem= " a =C3=A9crit: =0A> Loic Blot wrote:=0A> =0A>> Fo= r more informations, here is procstat -kk on nfsd, if you need more=0A>> = hot datas, tell me.=0A>> =0A>> Regards, PID TID COMM TDNA= ME KSTACK=0A>> 918 100529 nfsd nfsd: master mi_= switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>= vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> = nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_progra= m+0x554 svc_run_internal+0xc77 svc_run+0x1de=0A>> nfsrvd_nfsd+0x1ca nfssv= c_nfsd+0x107 sys_nfssvc+0x9c=0A>> amd64_syscall+0x351=0A> =0A> Well, most= of the threads are stuck like this one, waiting for a vnode=0A> lock in = ZFS. All of them appear to be in zfs_fhtovp().=0A> I`m not a ZFS guy, so = I can`t help much. I`ll try changing the subject line=0A> to include ZFS = vnode lock, so maybe the ZFS guys will take a look.=0A> =0A> The only thi= ng I`ve seen suggested is trying:=0A> sysctl vfs.lookup_shared=3D0=0A> to= disable shared vop_lookup()s. 
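[Practical note on that suggestion: applying it is a single sysctl, and carrying it across reboots is the usual /etc/sysctl.conf step. The two commands below are routine FreeBSD administration, not something from the original exchange:

# sysctl vfs.lookup_shared=0
# echo 'vfs.lookup_shared=0' >> /etc/sysctl.conf

Note that the deadlock reported at the top of this message was already seen with vfs.lookup_shared=0.]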
Apparently zfs_lookup() doesn`t=0A> obey t= he vnode locking rules for lookup and rename, according to=0A> the postin= g I saw.=0A> =0A> I`ve added a couple of comments about the other threads= below, but=0A> they are all either waiting for an RPC request or waiting= for the=0A> threads stuck on the ZFS vnode lock to complete.=0A> =0A> ri= ck=0A> =0A>> 918 100564 nfsd nfsd: service mi_switch+0xe1= =0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a=0A= >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>> fork_t= rampoline+0xe=0A> =0A> Fyi, this thread is just waiting for an RPC to arr= ive. (Normal)=0A> =0A>> 918 100565 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_si= g+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>> fork_trampoline+0xe=0A>> 918 100566 nfsd nfsd: service = mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_= wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a=0A>> fork_trampoline+0xe=0A>> 918 100567 nfsd nfsd: ser= vice mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb fork= _exit+0x9a=0A>> fork_trampoline+0xe=0A>> 918 100568 nfsd nfsd= : service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wait_si= g+0xf _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 918 100569 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread_star= t+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 918 100570 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_catch_signals+0xab slee= pq_wait_sig+0xf _cv_wait_sig+0x16a=0A>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>> fork_trampoline+0xe=0A>> 918 100571 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0= x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_progra= m+0x554 svc_run_internal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a f= ork_trampoline+0xe=0A>> 918 100572 nfsd nfsd: service mi_s= witch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0= x9b=0A>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76= =0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb=0A= >> fork_exit+0x9a fork_trampoline+0xe=0A> =0A> This one (and a few others= ) are waiting for the nfsv4_lock. This happens=0A> because other threads = are stuck with RPCs in progress. (ie. 
The ones=0A> waiting on the vnode l= ock in zfs_fhtovp().)=0A> For these, the RPC needs to lock out other thre= ads to do the operation,=0A> so it waits for the nfsv4_lock() which can e= xclusively lock the=20NFSv4=0A> data structures once all other nfsd threa= ds complete their RPCs in=0A> progress.=0A> =0A>> 918 100573 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 n= fsmsleep+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_program+0x55= 4 svc_run_internal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe=0A> =0A> Same as above.=0A> =0A>> 918 100574 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __l= ockmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 = zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x= 917=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100575 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100576 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100577 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100578 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100579 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100580 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100581 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100582 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= 
ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100583 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100584 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100585 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100586 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100587 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100588 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100589 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100590 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100591 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100592 nfsd = nfsd: service mi_switch+0xe1=0A>> 
sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100593 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100594 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100595 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100596 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a=20sleeplk+0x15d __= lockmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43= zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0= x917=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0x= b=0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100597 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __l= ockmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 = zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x= 917=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100598 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100599 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100600 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100601 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100602 nfsd 
= nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100603 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100604 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100605 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100606 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100607 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 z= fs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb= =0A>> fork_exit+0x9a fork_trampoline+0xe=0A> =0A> Lots more waiting for t= he ZFS vnode lock in zfs_fhtovp().=0A> =0A>> 918 100608 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmsl= eep+0x66 nfsv4_lock+0x9b=0A>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21= f nfsrvd_lock+0x5b1=0A>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_= internal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe=0A>> 918 100609 nfsd nfsd: service mi_switch+0xe1=0A>> = sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c= VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7= c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_i= nternal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0x= e=0A>> 918 100610 nfsd nfsd: service mi_switch+0xe1=0A>> s= leepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e=0A>> vop_stdlock+0x3c = VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>> nfsvno_advlock+0x119 nfsrv_dolocal+= 0x84 nfsrv_lockctrl+0x14ad=0A>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfs= svc_program+0x554=0A>> svc_run_internal+0xc77 svc_thread_start+0xb fork_e= xit+0x9a=0A>> fork_trampoline+0xe=0A>> 918 100611 nfsd nfsd: = service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x= 66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_i= nternal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0x= e=0A>> 918 100612 nfsd 
nfsd: service mi_switch+0xe1=0A>> s= leepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_d= orpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0A>> svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100613 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfs= msleep+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 = svc_run_internal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a fork_tram= poline+0xe=0A>> 918 100614 nfsd nfsd: service mi_switch+0x= e1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>= nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0A>> svc_= thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100615 nfsd = nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+= 0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_progr= am+0x554 svc_run_internal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a = fork_trampoline+0xe=0A>> 918 100616 nfsd nfsd: service mi_= switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+= 0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77= =0A>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>> 918 10= 0617 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_wait+0x= 3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nf= ssvc_program+0x554 svc_run_internal+0xc77=0A>> svc_thread_start+0xb fork_= exit+0x9a fork_trampoline+0xe=0A>> 918 100618 nfsd nfsd: serv= ice mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 n= fsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_inter= nal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A= >> 918 100619 nfsd nfsd: service mi_switch+0xe1=0A>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_= LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfs= d_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_intern= al+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe=0A>= > 918 100620 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq= _wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c VOP_L= OCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd= _fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_interna= l+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe=0A>>= 918 100621 nfsd nfsd: service mi_switch+0xe1=0A>> sleepq_= wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0= x316 nfssvc_program+0x554 svc_run_internal+0xc77=0A>> svc_thread_start+0x= b fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100622 nfsd nfs= d: service mi_switch+0xe1=0A>> sleepq_wait+0x3a sleeplk+0x15d __lockmg= r_args+0x902=0A>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_f= htovp+0x38d=0A>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0A>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb=0A= >> fork_exit+0x9a fork_trampoline+0xe=0A>> 918 100623 nfsd nf= sd: service mi_switch+0xe1=0A>> sleepq_wait+0x3a _sleep+0x287 nfsmslee= p+0x66 nfsv4_lock+0x9b=0A>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_r= un_internal+0xc77=0A>> svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe=0A>> 918 100624 nfsd nfsd: service mi_switch+0xe1=0A= >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0= x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+= 0x7c nfsd_fhtovp+0xc8 
nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_ru= n_internal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline= +0xe=0A>> 918 100625 nfsd nfsd: service mi_switch+0xe1=0A>= > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x= 3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0= x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run= _internal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+= 0xe=0A>> 918 100626 nfsd nfsd: service mi_switch+0xe1=0A>>= sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3= c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x= 7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_= internal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0= xe=0A>> 918 100627 nfsd nfsd: service mi_switch+0xe1=0A>> = sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c= VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7= c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_i= nternal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0x= e=0A>> 918 100628 nfsd nfsd: service mi_switch+0xe1=0A>> s= leepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c = VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c= nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_in= ternal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100629 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100630 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100631 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100632 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100633 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100634 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> 
nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100635 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100636 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100637 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100638 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100639 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100640 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100641 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100642 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100643 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100644 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab 
_vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100645 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab=20_vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7= c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_i= nternal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0x= e=0A>> 918 100646 nfsd nfsd: service mi_switch+0xe1=0A>> s= leepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c = VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c= nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_in= ternal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100647 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100648 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100649 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100650 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100651 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100652 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100653 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100654 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> 
vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100655 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100656 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100657 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> 918 100658 nfsd nfsd: service mi_switch+0xe1=0A>> sl= eepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d=0A>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>> nfssvc_program+0x554 svc_run_int= ernal+0xc77 svc_thread_start+0xb=0A>> fork_exit+0x9a fork_trampoline+0xe= =0A>> =0A>> Lo=C3=AFc Blot,=0A>> UNIX Systems, Network and Security Engin= eer=0A>> http://www.unix-experience.fr=0A>> =0A>> 15 d=C3=A9cembre 2014 1= 3:29 "Lo=C3=AFc Blot" a=0A>> =C3=A9crit:= =0A>>> Hmmm...=0A>>> now i'm experiencing a deadlock.=0A>>> =0A>>> 0 918 = 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)=0A>>> =0A>>> th= e only issue was to reboot the server, but after rebooting=0A>>> deadlock= arrives a second time when i=0A>>> start my jails over NFS.=0A>>> =0A>>>= Regards,=0A>>> =0A>>> Lo=C3=AFc Blot,=0A>>> UNIX Systems, Network and Se= curity Engineer=0A>>> http://www.unix-experience.fr=0A>>> =0A>>> 15 d=C3= =A9cembre 2014 10:07 "Lo=C3=AFc Blot" a=0A= >>> =C3=A9crit:=0A>>> =0A>>> Hi Rick,=0A>>> after talking with my N+1, NF= Sv4 is required on our infrastructure.=0A>>> I tried to upgrade NFSv4+ZFS= =0A>>> server from 9.3 to 10.1, i hope this will resolve some issues...= =0A>>> =0A>>> Regards,=0A>>> =0A>>> Lo=C3=AFc Blot,=0A>>> UNIX Systems, N= etwork and Security Engineer=0A>>> http://www.unix-experience.fr=0A>>> = =0A>>> 10 d=C3=A9cembre 2014 15:36 "Lo=C3=AFc Blot" a=0A>>> =C3=A9crit:=0A>>> =0A>>> Hi Rick,=0A>>> thanks for your= suggestion.=0A>>> For my locking bug, rpc.lockd is stucked in rpcrecv st= ate on the=0A>>> server. kill -9 doesn't affect the=0A>>> process, it's b= locked.... 
(State: Ds)=0A>>> =0A>>> for the performances=0A>>> =0A>>> NFS= v3: 60Mbps=0A>>> NFSv4: 45Mbps=0A>>> Regards,=0A>>> =0A>>> Lo=C3=AFc Blot= ,=0A>>> UNIX Systems, Network and Security Engineer=0A>>> http://www.unix= -experience.fr=0A>>> =0A>>> 10 d=C3=A9cembre 2014 13:56 "Rick Macklem" a=0A>>> =C3=A9crit:=0A>>> =0A>>>> Loic Blot wrote:= =0A>>>> =0A>>>>> Hi Rick,=0A>>>>> I'm trying NFSv3.=0A>>>>> Some jails ar= e starting very well but now i have an issue with=0A>>>>> lockd=0A>>>>> a= fter some minutes:=0A>>>>> =0A>>>>> nfs server 10.10.X.8:/jails: lockd no= t responding=0A>>>>> nfs server 10.10.X.8:/jails lockd is alive again=0A>= >>>> =0A>>>>> I look at mbuf, but i seems there is no problem.=0A>>>> =0A= >>>> Well, if you need locks to be visible across multiple clients,=0A>>>= > then=0A>>>> I'm afraid you are stuck with using NFSv4 and the performan= ce you=0A>>>> get=0A>>>> from it. (There is no way to do file handle affi= nity for NFSv4=0A>>>> because=0A>>>> the read and write ops are buried in= the compound RPC and not=0A>>>> easily=0A>>>> recognized.)=0A>>>> =0A>>>= > If the locks don't need to be visible across multiple clients, I'd=0A>>= >> suggest trying the "nolockd" option with nfsv3.=0A>>>> =0A>>>>> Here i= s my rc.conf on server:=0A>>>>> =0A>>>>> nfs_server_enable=3D"YES"=0A>>>>= > nfsv4_server_enable=3D"YES"=0A>>>>> nfsuserd_enable=3D"YES"=0A>>>>> nfs= d_server_flags=3D"-u -t -n 256"=0A>>>>> mountd_enable=3D"YES"=0A>>>>> mou= ntd_flags=3D"-r"=0A>>>>> nfsuserd_flags=3D"-usertimeout 0 -force 20"=0A>>= >>> rpcbind_enable=3D"YES"=0A>>>>> rpc_lockd_enable=3D"YES"=0A>>>>> rpc_s= tatd_enable=3D"YES"=0A>>>>> =0A>>>>> Here is the client:=0A>>>>> =0A>>>>>= nfsuserd_enable=3D"YES"=0A>>>>> nfsuserd_flags=3D"-usertimeout 0 -force = 20"=0A>>>>> nfscbd_enable=3D"YES"=0A>>>>> rpc_lockd_enable=3D"YES"=0A>>>>= > rpc_statd_enable=3D"YES"=0A>>>>> =0A>>>>> Have you got an idea ?=0A>>>>= > =0A>>>>> Regards,=0A>>>>> =0A>>>>> Lo=C3=AFc Blot,=0A>>>>> UNIX Systems= , Network and Security Engineer=0A>>>>> http://www.unix-experience.fr=0A>= >>>> =0A>>>>> 9 d=C3=A9cembre 2014 04:31 "Rick Macklem" a=0A>>>>> =C3=A9crit:=0A>>>>>> Loic Blot wrote:=0A>>>>>> =0A>>>>>>>= Hi rick,=0A>>>>>>> =0A>>>>>>> I waited 3 hours (no lag at jail launch) a= nd now I do: sysrc=0A>>>>>>> memcached_flags=3D"-v -m 512"=0A>>>>>>> Comm= and was very very slow...=0A>>>>>>> =0A>>>>>>> Here is a dd=20over NFS:= =0A>>>>>>> =0A>>>>>>> 601062912 bytes transferred in 21.060679 secs (2853= 9579=0A>>>>>>> bytes/sec)=0A>>>>>> =0A>>>>>> Can you try the same read us= ing an NFSv3 mount?=0A>>>>>> (If it runs much faster, you have probably b= een bitten by the=0A>>>>>> ZFS=0A>>>>>> "sequential vs random" read heuri= stic which I've been told=0A>>>>>> things=0A>>>>>> NFS is doing "random" = reads without file handle affinity. File=0A>>>>>> handle affinity is very= hard to do for NFSv4, so it isn't done.)=0A>>>> =0A>>>> I was actually s= uggesting that you try the "dd" over nfsv3 to see=0A>>>> how=0A>>>> the p= erformance compared with nfsv4. 
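A minimal sketch of the NFSv3 vs NFSv4 "dd" comparison suggested above, assuming the 10.10.X.8:/jails export from this thread; the mount points and test file name are placeholders, and the NFSv4 path may need adjusting to the V4: root declared in /etc/exports:

  mkdir -p /mnt/v3 /mnt/v4
  mount -t nfs -o nfsv3,nolockd,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v3
  mount -t nfs -o nfsv4,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v4
  dd if=/mnt/v3/test.dd of=/dev/null bs=1m   # sequential read over NFSv3
  dd if=/mnt/v4/test.dd of=/dev/null bs=1m   # same file, same block size, over NFSv4
  umount /mnt/v3 /mnt/v4

Reading the same file with the same block size over both mounts keeps the protocol version as the only variable in the comparison.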
If you do that, please post=0A>>>> the=0A= >>>> comparable results.=0A>>>> =0A>>>> Someday I would like to try and g= et ZFS's sequential vs random=0A>>>> read=0A>>>> heuristic modified and a= ny info on what difference in performance=0A>>>> that=0A>>>> might make f= or NFS would be useful.=0A>>>> =0A>>>> rick=0A>>>> =0A>>>>>> rick=0A>>>>>= > =0A>>>>>>> This is quite slow...=0A>>>>>>> =0A>>>>>>> You can found som= e nfsstat below (command isn't finished yet)=0A>>>>>>> =0A>>>>>>> nfsstat= -c -w 1=0A>>>>>>> =0A>>>>>>> GtAttr Lookup Rdlink Read Write Rename Acce= ss Rddir=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 4 0 0 0 0 0 16 0=0A>>>>>>> 2= 0 0 0 0 0 17 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>= >>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 4 0 0 0 0 4 0= =0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 = 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 4 0 0 0 0 0 3 0=0A>>>>>>> 0 0 0= 0 0 0 3 0=0A>>>>>>> 37 10 0 8 0 0 14 1=0A>>>>>>> 18 16 0 4 1 2 4 0=0A>>>= >>>> 78 91 0 82 6 12 30 0=0A>>>>>>> 19 18 0 2 2 4 2 0=0A>>>>>>> 0 0 0 0 2= 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> GtAttr Lookup Rdlink Read Writ= e Rename Access Rddir=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0= =0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 1 0 0 0 0 1 0=0A>>>>>>> 4 6 0 0 6 = 0 3 0=0A>>>>>>> 2 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 1 0 0= 0 0 0 0 0=0A>>>>>>> 0 0 0 0 1 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> = 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>= >>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0= =0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 6 108 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 = 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> GtAttr Lookup Rdlink Read Wri= te Rename Access Rddir=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 = 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0= 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 98 5= 4 0 86 11 0 25 0=0A>>>>>>> 36 24 0 39 25 0 10 1=0A>>>>>>> 67 8 0 63 63 0 = 41 0=0A>>>>>>> 34 0 0 35 34 0 0 0=0A>>>>>>> 75 0 0 75 77 0 0 0=0A>>>>>>> = 34 0 0 35 35 0 0 0=0A>>>>>>> 75 0 0 74 76 0 0 0=0A>>>>>>> 33 0 0 34 33 0 = 0 0=0A>>>>>>> 0 0 0 0 5 0 0 0=0A>>>>>>> 0 0 0 0 0 0 6 0=0A>>>>>>> 11 0 0 = 0 0 0 11 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 17 0 0 0 0 1 0=0A>>>>>>>= GtAttr Lookup Rdlink Read Write Rename Access Rddir=0A>>>>>>> 4 5 0 0 0 = 0 12 0=0A>>>>>>> 2 0 0 0 0 0 26 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0= 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>= > 0 0 0 0 0 0 0 0=0A>>>>>>> 0 4 0 0 0 0 4 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>= >>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 4 0 0 0 0 0 2 = 0=0A>>>>>>> 2 0 0 0 0 0 24 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 = 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0= 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>= > GtAttr Lookup Rdlink Read Write Rename Access Rddir=0A>>>>>>> 0 0 0 0 0= 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 4 0 0 0 0 0 7 0=0A>>>>>>> 2 1 = 0 0 0 0 1 0=0A>>>>>>> 0 0 0 0 2 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>= 0 0 0 0 6 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>= >>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0= =0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 4 6 0 0 0 0 3 0=0A>>>>>>> 0 0 0 0 0 = 0 0 0=0A>>>>>>> 2 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0= 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> = 
GtAttr Lookup Rdlink Read Write Rename Access Rddir=0A>>>>>>> 0 0 0 0 0 0= 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 = 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 4 71 0 0 0 0 0 0=0A>>>>>>> = 0 1 0 0 0 0 0 0=0A>>>>>>> 2 36 0 0 0 0 1 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>= >>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0= =0A>>>>>>> 1 0 0 0 0 0 1 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 = 0 0 0=0A>>>>>>> 79 6 0 79 79 0 2 0=0A>>>>>>> 25 0 0 25 26 0 6 0=0A>>>>>>>= 43 18 0 39 46 0 23 0=0A>>>>>>> 36 0 0 36 36 0 31 0=0A>>>>>>> 68 1 0 66 6= 8 0 0 0=0A>>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir=0A>= >>>>>> 36 0 0 36 36 0 0 0=0A>>>>>>> 48 0 0 48 49 0 0 0=0A>>>>>>> 20 0 0 2= 0 20 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 3 14 0 1 0 0 11 0=0A>>>>>>= > 0 0 0 0 0 0 0 0=0A>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 0 4 0 0 0 0 4 0=0A>= >>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>> 4 22 0 0 0 0 16 0=0A>>>>>>> 2 0 0 0 0 0 = 23 0=0A>>>>>>> =0A>>>>>>> Regards,=0A>>>>>>> =0A>>>>>>> Lo=C3=AFc Blot,= =0A>>>>>>> UNIX Systems, Network and Security Engineer=0A>>>>>>> http://w= ww.unix-experience.fr=0A>>>>>>> =0A>>>>>>> 8 d=C3=A9cembre 2014 09:36 "Lo= =C3=AFc Blot"=0A>>>>>>> a=0A>>>>>>> =C3=A9= crit:=0A>>>>>>>> Hi Rick,=0A>>>>>>>> I stopped the jails this week-end an= d started it this morning,=0A>>>>>>>> i'll=0A>>>>>>>> give you some stats= this week.=0A>>>>>>>> =0A>>>>>>>> Here is my nfsstat -m output (with you= r rsize/wsize tweaks)=0A>>> =0A>>> =0A>> =0A> nfsv4,tcp,resvport,hard,cto= ,sec=3Dsys,acdirmin=3D3,acdirmax=3D60,acregmin=3D5,acregmax=3D60,nametime= o=3D60,negna=0A>>> =0A>>>>>>>> =0A>>> =0A>>> =0A>> =0A> etimeo=3D60,rsize= =3D32768,wsize=3D32768,readdirsize=3D32768,readahead=3D1,wcommitsize=3D77= 3136,timeout=3D120,retra=0A>>> =0A>>> s=3D2147483647=0A>>> =0A>>> On serv= er side my disks are on a raid controller which show a=0A>>> 512b=0A>>> v= olume and write performances=0A>>> are very honest (dd if=3D/dev/zero of= =3D/jails/test.dd bs=3D4096=0A>>> count=3D100000000 =3D> 450MBps)=0A>>> = =0A>>> Regards,=0A>>> =0A>>> Lo=C3=AFc Blot,=0A>>> UNIX Systems, Network = and Security Engineer=0A>>> http://www.unix-experience.fr=0A>>> =0A>>> 5 = d=C3=A9cembre 2014 15:14 "Rick Macklem" a=0A>>> = =C3=A9crit:=0A>>> =0A>>>> Loic Blot wrote:=0A>>>> =0A>>>>> Hi,=0A>>>>> i'= m trying to create a virtualisation environment based on=0A>>>>> jails.= =0A>>>>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3=0A>= >>>> which=0A>>>>> export a NFSv4 volume. This NFSv4 volume was mounted o= n a big=0A>>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but o= nly 1=0A>>>>> was=0A>>>>> used at this time).=0A>>>>> =0A>>>>> The proble= m is simple, my hypervisors runs 6 jails (used 1% cpu=0A>>>>> and=0A>>>>>= 10GB RAM approximatively and less than 1MB bandwidth) and works=0A>>>>> = fine at start but the system slows down and after 2-3 days=0A>>>>> become= =0A>>>>> unusable. When i look at top command i see 80-100% on system=0A>= >>>> and=0A>>>>> commands are very very slow. 
Many process are tagged wit= h=0A>>>>> nfs_cl*.=0A>>>> =0A>>>> To be honest, I would expect the slowne= ss to be because of slow=0A>>>> response=0A>>>> from the NFSv4 server, bu= t if you do:=0A>>>> # ps axHl=0A>>>> on a client when it is slow and post= that, it would give us some=0A>>>> more=0A>>>> information on where the = client side processes are sitting.=0A>>>> If you also do something like:= =0A>>>> # nfsstat -c -w 1=0A>>>> and let it run for a while, that should = show you how many RPCs=0A>>>> are=0A>>>> being done and which ones.=0A>>>= > =0A>>>> # nfsstat -m=0A>>>> will show you what your mount is actually u= sing.=0A>>>> The only mount option I can suggest trying is=0A>>>> "rsize= =3D32768,wsize=3D32768",=0A>>>> since some network environments have diff= iculties with 64K.=0A>>>> =0A>>>> There are a few things you can try on t= he NFSv4 server side, if=0A>>>> it=0A>>>> appears=0A>>>> that the clients= are generating a large RPC load.=0A>>>> - disabling the DRC cache for TC= P by setting vfs.nfsd.cachetcp=3D0=0A>>>> - If the server is seeing a lar= ge write RPC load, then=0A>>>> "sync=3Ddisabled"=0A>>>> might help, altho= ugh it does run a risk of data loss when the=0A>>>> server=0A>>>> crashes= .=0A>>>> Then there are a couple of other ZFS related things (I'm not a= =0A>>>> ZFS=0A>>>> guy,=0A>>>> but these have shown up on the mailing lis= ts).=0A>>>> - make sure your volumes are 4K aligned and ashift=3D12 (in c= ase a=0A>>>> drive=0A>>>> that uses 4K sectors is pretending to be 512byt= e sectored)=0A>>>> - never run over 70-80% full if write performance is a= n issue=0A>>>> - use a zil on an SSD with good write performance=0A>>>> = =0A>>>> The only NFSv4 thing I can tell you is that it is known that=0A>>= >> ZFS's=0A>>>> algorithm for determining sequential vs random I/O fails = for=0A>>>> NFSv4=0A>>>> during writing and this can be a performance hit.= The only=0A>>>> workaround=0A>>>> is to use NFSv3 mounts, since file han= dle affinity apparently=0A>>>> fixes=0A>>>> the problem and this is only = done for NFSv3.=0A>>>> =0A>>>> rick=0A>>>> =0A>>>>> I saw that there are = TSO issues with igb then i'm trying to=0A>>>>> disable=0A>>>>> it with sy= sctl but the situation wasn't solved.=0A>>>>> =0A>>>>> Someone has got id= eas ? 
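The server-side knobs quoted in the advice above could be applied roughly as follows; this is an illustrative sketch only, the pool and dataset names (tank, tank/jails) and the log device (gpt/slog0) are placeholders, and sync=disabled trades data safety for write latency:

  sysctl vfs.nfsd.cachetcp=0          # disable the NFS DRC for TCP mounts
  zdb -C tank | grep ashift           # confirm the vdevs were created with ashift=12
  zpool list -o name,capacity tank    # keep the pool well below ~80% full
  zfs set sync=disabled tank/jails    # only if losing the last few seconds of writes is acceptable
  zpool add tank log gpt/slog0        # move the ZIL to a fast SSD

The sysctl and the ZFS properties take effect immediately, but ashift cannot be changed after pool creation, so a wrong value means recreating the pool.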
I can give you more informations if you=0A>>>>> need.=0A>>>>> =0A>>= >>> Thanks in advance.=0A>>>>> Regards,=0A>>>>> =0A>>>>> Lo=C3=AFc Blot,= =0A>>>>> UNIX Systems, Network and Security Engineer=0A>>>>> http://www.u= nix-experience.fr=0A>>>>> _______________________________________________= =0A>>>>> freebsd-fs@freebsd.org mailing list=0A>>>>> http://lists.freebsd= .org/mailman/listinfo/freebsd-fs=0A>>>>> To unsubscribe, send any mail to= =0A>>>>> "freebsd-fs-unsubscribe@freebsd.org"=0A>>> =0A>>> ______________= _________________________________=0A>>> freebsd-fs@freebsd.org mailing li= st=0A>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs=0A>>> To un= subscribe, send any mail to=0A>>> "freebsd-fs-unsubscribe@freebsd.org"=0A= >>> =0A>>> _______________________________________________=0A>>> freebsd-= fs@freebsd.org mailing list=0A>>> http://lists.freebsd.org/mailman/listin= fo/freebsd-fs=0A>>> To unsubscribe, send any mail to=0A>>> "freebsd-fs-un= subscribe@freebsd.org"=0A>>> =0A>>> _____________________________________= __________=0A>>> freebsd-fs@freebsd.org mailing list=0A>>> http://lists.f= reebsd.org/mailman/listinfo/freebsd-fs=0A>>> To unsubscribe, send any mai= l to=0A>>> "freebsd-fs-unsubscribe@freebsd.org"=0A>>> ___________________= ____________________________=0A>>> freebsd-fs@freebsd.org mailing list=0A= >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs=0A>>> To unsubsc= ribe, send any mail to=0A>>> "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 00:46:11 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 076E6715 for ; Fri, 19 Dec 2014 00:46:11 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 7F99B2763 for ; Fri, 19 Dec 2014 00:46:10 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AtsEADF0k1SDaFve/2dsb2JhbABbg1hYBIMCwxIKhShKAoE3AQEBAQF9hAwBAQEDAQEBARcBCCsgCwUWGAICDRkCKQEJJgYIAgUEARwEiAMIDbkili0BAQEBAQEEAQEBAQEBAQEBARiBIY4AAQEbATMHgi07EYEwBYlDiAWDHIMjMIIxgjGDP4QsgzgigX4egW4gMQEBBYEFOX4BAQE X-IronPort-AV: E=Sophos;i="5.07,604,1413259200"; d="scan'208";a="177839544" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 18 Dec 2014 19:46:07 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id CC747AEA35; Thu, 18 Dec 2014 19:46:06 -0500 (EST) Date: Thu, 18 Dec 2014 19:46:06 -0500 (EST) From: Rick Macklem To: =?utf-8?B?TG/Dr2M=?= Blot Message-ID: <367024859.592531.1418949966814.JavaMail.root@uoguelph.ca> In-Reply-To: <0eaadfe31ac4b8bbdeaf0baff696dada@mail.unix-experience.fr> Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2014 00:46:11 -0000 Loic Blot wrote: > Hi rick, > i tried to start a LXC 
container on Debian Squeeze from my freebsd > ZFS+NFSv4 server and i also have a deadlock on nfsd > (vfs.lookup_shared=3D0). Deadlock procs each time i launch a squeeze > container, it seems (3 tries, 3 fails). >=20 Well, I`ll take a look at this `procstat -kk`, but the only thing I`ve seen posted w.r.t. avoiding deadlocks in ZFS is to not use nullfs. (I have no idea if you are using any nullfs mounts, but if so, try getting rid of them.) Here`s a high level post about the ZFS and vnode locking problem, but there is no patch available, as far as I know. http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407 rick > 921 - D 0:00.02 nfsd: server (nfsd) >=20 > Here is the procstat -kk >=20 > PID TID COMM TDNAME KSTACK > 921 100538 nfsd nfsd: master mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca > nfssvc_nfsd+0x107 sys_nfssvc+0x9c > 921 100572 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100573 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100574 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100575 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100576 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100577 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100578 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100579 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100580 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100581 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100582 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100583 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100584 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100585 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100586 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100587 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100588 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100589 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100590 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100591 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100592 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100593 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100594 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100595 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100596 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100597 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100598 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100599 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100600 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100601 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100602 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100603 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100604 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100605 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100606 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100607 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100608 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100609 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100610 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100611 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100612 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100613 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100614 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100615 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100616 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 921 100617 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100618 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 921 100619 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100620 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100621 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100622 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100623 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100624 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100625 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100626 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100627 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100628 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100629 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100630 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100631 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100632 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100633 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100634 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100635 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100636 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100637 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100638 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100639 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100640 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100641 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100642 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100643 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100644 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100645 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100646 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100647 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100648 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100649 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100650 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100651 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100652 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100653 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100654 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100655 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100656 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100657 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100658 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100659 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100660 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100661 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100662 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100663 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100664 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100665 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100666 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe >=20 >=20 > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 15 d=C3=A9cembre 2014 15:18 "Rick Macklem" a =C3= =A9crit: > > Loic Blot wrote: > >=20 > >> For more informations, here is procstat -kk on nfsd, if you need > >> more > >> hot datas, tell me. > >>=20 > >> Regards, PID TID COMM TDNAME KSTACK > >> 918 100529 nfsd nfsd: master mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de > >> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c > >> amd64_syscall+0x351 > >=20 > > Well, most of the threads are stuck like this one, waiting for a > > vnode > > lock in ZFS. All of them appear to be in zfs_fhtovp(). > > I`m not a ZFS guy, so I can`t help much. I`ll try changing the > > subject line > > to include ZFS vnode lock, so maybe the ZFS guys will take a look. > >=20 > > The only thing I`ve seen suggested is trying: > > sysctl vfs.lookup_shared=3D0 > > to disable shared vop_lookup()s. 
Apparently zfs_lookup() doesn`t > > obey the vnode locking rules for lookup and rename, according to > > the posting I saw. > >=20 > > I`ve added a couple of comments about the other threads below, but > > they are all either waiting for an RPC request or waiting for the > > threads stuck on the ZFS vnode lock to complete. > >=20 > > rick > >=20 > >> 918 100564 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >=20 > > Fyi, this thread is just waiting for an RPC to arrive. (Normal) > >=20 > >> 918 100565 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100566 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100567 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100568 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100569 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100570 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100571 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100572 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >=20 > > This one (and a few others) are waiting for the nfsv4_lock. This > > happens > > because other threads are stuck with RPCs in progress. (ie. The > > ones > > waiting on the vnode lock in zfs_fhtovp().) > > For these, the RPC needs to lock out other threads to do the > > operation, > > so it waits for the nfsv4_lock() which can exclusively lock the > > NFSv4 > > data structures once all other nfsd threads complete their RPCs in > > progress. > >=20 > >> 918 100573 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >=20 > > Same as above. 
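To see at a glance how many nfsd threads are in each of the states described above, the quoted procstat output can be summarized like this (918 is the nfsd pid in the dump being discussed; substitute the current pid):

  procstat -kk 918 | grep -c zfs_fhtovp    # threads stuck on the ZFS vnode lock
  procstat -kk 918 | grep -c nfsv4_lock    # threads waiting for the exclusive NFSv4 state lock
  procstat -kk 918 | grep -c _cv_wait_sig  # idle threads waiting for a new RPC

A dump dominated by zfs_fhtovp frames points at the vnode-lock problem in ZFS rather than at the NFS server code itself.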
> >=20 > >> 918 100574 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100575 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100576 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100577 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100578 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100579 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100580 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100581 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100582 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100583 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100584 nfsd nfsd: service mi_switch+0xe1 > >> 
sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100585 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100586 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100587 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100588 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100589 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100590 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100591 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100592 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100593 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100594 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> 
vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100595 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100596 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100597 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100598 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100599 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100600 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100601 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100602 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100603 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100604 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 
zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100605 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100606 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100607 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >=20 > > Lots more waiting for the ZFS vnode lock in zfs_fhtovp(). > >=20 > >> 918 100608 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > >> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100609 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100610 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > >> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > >> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100611 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100612 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100613 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100614 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100615 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb 
fork_exit+0x9a fork_trampoline+0xe > >> 918 100616 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100617 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100618 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100619 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100620 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100621 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100622 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100623 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100624 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100625 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100626 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100627 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> 
nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100628 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100629 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100630 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100631 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100632 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100633 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100634 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100635 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100636 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100637 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> 
nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100638 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100639 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100640 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100641 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100642 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100643 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100644 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100645 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100646 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100647 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb 
> >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100648 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100649 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100650 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100651 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100652 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100653 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100654 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100655 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100656 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100657 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100658 nfsd 
nfsd: service mi_switch+0xe1
> >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
> >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
> >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
> >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
> >> fork_exit+0x9a fork_trampoline+0xe
> >>
> >> Loïc Blot,
> >> UNIX Systems, Network and Security Engineer
> >> http://www.unix-experience.fr
> >>
> >> On 15 December 2014 13:29, "Loïc Blot" wrote:
> >>> Hmmm...
> >>> now I'm experiencing a deadlock.
> >>>
> >>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)
> >>>
> >>> The only way out was to reboot the server, but after rebooting the
> >>> deadlock shows up a second time when I start my jails over NFS.
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> On 15 December 2014 10:07, "Loïc Blot" wrote:
> >>>
> >>> Hi Rick,
> >>> after talking with my N+1, NFSv4 is required on our infrastructure.
> >>> I tried to upgrade the NFSv4+ZFS server from 9.3 to 10.1; I hope this
> >>> will resolve some issues...
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> On 10 December 2014 15:36, "Loïc Blot" wrote:
> >>>
> >>> Hi Rick,
> >>> thanks for your suggestion.
> >>> For my locking bug, rpc.lockd is stuck in the rpcrecv state on the
> >>> server. kill -9 doesn't affect the process, it's blocked.... (State: Ds)
> >>>
> >>> As for performance:
> >>>
> >>> NFSv3: 60Mbps
> >>> NFSv4: 45Mbps
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> On 10 December 2014 13:56, "Rick Macklem" wrote:
> >>>
> >>>> Loic Blot wrote:
> >>>>
> >>>>> Hi Rick,
> >>>>> I'm trying NFSv3.
> >>>>> Some jails start very well, but now I have an issue with lockd
> >>>>> after a few minutes:
> >>>>>
> >>>>> nfs server 10.10.X.8:/jails: lockd not responding
> >>>>> nfs server 10.10.X.8:/jails lockd is alive again
> >>>>>
> >>>>> I looked at mbufs, but it seems there is no problem.
> >>>>
> >>>> Well, if you need locks to be visible across multiple clients, then
> >>>> I'm afraid you are stuck with using NFSv4 and the performance you get
> >>>> from it. (There is no way to do file handle affinity for NFSv4 because
> >>>> the read and write ops are buried in the compound RPC and not easily
> >>>> recognized.)
> >>>>
> >>>> If the locks don't need to be visible across multiple clients, I'd
> >>>> suggest trying the "nolockd" option with nfsv3.
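For illustration, an NFSv3 client mount along the lines suggested above might look like the sketch below (the local mount point and the reuse of the 10.10.X.8:/jails export are assumptions; "nolockd" keeps fcntl locks local to the client instead of forwarding them to rpc.lockd):

  # one-off mount (sketch, adjust paths as needed)
  mount -t nfs -o nfsv3,tcp,nolockd 10.10.X.8:/jails /jails

  # or as an /etc/fstab entry
  10.10.X.8:/jails  /jails  nfs  rw,nfsv3,tcp,nolockd  0  0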
> >>>>
> >>>>> Here is my rc.conf on the server:
> >>>>>
> >>>>> nfs_server_enable="YES"
> >>>>> nfsv4_server_enable="YES"
> >>>>> nfsuserd_enable="YES"
> >>>>> nfsd_server_flags="-u -t -n 256"
> >>>>> mountd_enable="YES"
> >>>>> mountd_flags="-r"
> >>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>> rpcbind_enable="YES"
> >>>>> rpc_lockd_enable="YES"
> >>>>> rpc_statd_enable="YES"
> >>>>>
> >>>>> Here is the client:
> >>>>>
> >>>>> nfsuserd_enable="YES"
> >>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>> nfscbd_enable="YES"
> >>>>> rpc_lockd_enable="YES"
> >>>>> rpc_statd_enable="YES"
> >>>>>
> >>>>> Do you have any idea?
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> On 9 December 2014 04:31, "Rick Macklem" wrote:
> >>>>>> Loic Blot wrote:
> >>>>>>
> >>>>>>> Hi Rick,
> >>>>>>>
> >>>>>>> I waited 3 hours (no lag at jail launch) and then I ran: sysrc
> >>>>>>> memcached_flags="-v -m 512"
> >>>>>>> The command was very, very slow...
> >>>>>>>
> >>>>>>> Here is a dd over NFS:
> >>>>>>>
> >>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
> >>>>>>
> >>>>>> Can you try the same read using an NFSv3 mount?
> >>>>>> (If it runs much faster, you have probably been bitten by the ZFS
> >>>>>> "sequential vs random" read heuristic which, I've been told, thinks
> >>>>>> NFS is doing "random" reads without file handle affinity. File
> >>>>>> handle affinity is very hard to do for NFSv4, so it isn't done.)
> >>>>
> >>>> I was actually suggesting that you try the "dd" over nfsv3 to see how
> >>>> the performance compares with nfsv4. If you do that, please post the
> >>>> comparable results.
> >>>>
> >>>> Someday I would like to try and get ZFS's sequential vs random read
> >>>> heuristic modified, and any info on what difference in performance
> >>>> that might make for NFS would be useful.
> >>>>
> >>>> rick
> >>>>
> >>>>>> rick
> >>>>>>
> >>>>>>> This is quite slow...
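As an illustration of the comparison being asked for here, the same file could be read once over the existing NFSv4 mount and once over a temporary NFSv3 mount of the same export; the file name and mount points below are assumptions:

  # read over the current NFSv4 mount
  dd if=/jails/somejail/somefile of=/dev/null bs=1m
  # mount the same export with NFSv3 and repeat the read
  mount -t nfs -o nfsv3,tcp 10.10.X.8:/jails /mnt/jails-v3
  dd if=/mnt/jails-v3/somejail/somefile of=/dev/null bs=1m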
> >>>>>>>=20 > >>>>>>> You can found some nfsstat below (command isn't finished yet) > >>>>>>>=20 > >>>>>>> nfsstat -c -w 1 > >>>>>>>=20 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 16 0 > >>>>>>> 2 0 0 0 0 0 17 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 4 0 0 0 0 4 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 3 0 > >>>>>>> 0 0 0 0 0 0 3 0 > >>>>>>> 37 10 0 8 0 0 14 1 > >>>>>>> 18 16 0 4 1 2 4 0 > >>>>>>> 78 91 0 82 6 12 30 0 > >>>>>>> 19 18 0 2 2 4 2 0 > >>>>>>> 0 0 0 0 2 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 1 0 0 0 0 1 0 > >>>>>>> 4 6 0 0 6 0 3 0 > >>>>>>> 2 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 1 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 1 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 6 108 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 98 54 0 86 11 0 25 0 > >>>>>>> 36 24 0 39 25 0 10 1 > >>>>>>> 67 8 0 63 63 0 41 0 > >>>>>>> 34 0 0 35 34 0 0 0 > >>>>>>> 75 0 0 75 77 0 0 0 > >>>>>>> 34 0 0 35 35 0 0 0 > >>>>>>> 75 0 0 74 76 0 0 0 > >>>>>>> 33 0 0 34 33 0 0 0 > >>>>>>> 0 0 0 0 5 0 0 0 > >>>>>>> 0 0 0 0 0 0 6 0 > >>>>>>> 11 0 0 0 0 0 11 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 17 0 0 0 0 1 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 4 5 0 0 0 0 12 0 > >>>>>>> 2 0 0 0 0 0 26 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 4 0 0 0 0 4 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 2 0 > >>>>>>> 2 0 0 0 0 0 24 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 7 0 > >>>>>>> 2 1 0 0 0 0 1 0 > >>>>>>> 0 0 0 0 2 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 6 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 6 0 0 0 0 3 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 2 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 71 0 0 0 0 0 0 > >>>>>>> 0 1 0 0 0 0 0 0 > >>>>>>> 2 36 0 0 0 0 1 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 1 0 0 0 0 0 1 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > 
>>>>>>> 79 6 0 79 79 0 2 0
> >>>>>>> 25 0 0 25 26 0 6 0
> >>>>>>> 43 18 0 39 46 0 23 0
> >>>>>>> 36 0 0 36 36 0 31 0
> >>>>>>> 68 1 0 66 68 0 0 0
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 36 0 0 36 36 0 0 0
> >>>>>>> 48 0 0 48 49 0 0 0
> >>>>>>> 20 0 0 20 20 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 3 14 0 1 0 0 11 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 4 0 0 0 0 4 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 22 0 0 0 0 16 0
> >>>>>>> 2 0 0 0 0 0 23 0
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Loïc Blot,
> >>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>>
> >>>>>>> On 8 December 2014 09:36, "Loïc Blot" wrote:
> >>>>>>>> Hi Rick,
> >>>>>>>> I stopped the jails this weekend and started them this morning;
> >>>>>>>> I'll give you some stats this week.
> >>>>>>>>
> >>>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
> >>>>>>>>
> > nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
> >>>
> >>> On the server side my disks are behind a RAID controller which presents
> >>> a 512b volume, and write performance is quite honest
> >>> (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps)
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> On 5 December 2014 15:14, "Rick Macklem" wrote:
> >>>
> >>>> Loic Blot wrote:
> >>>>
> >>>>> Hi,
> >>>>> I'm trying to create a virtualisation environment based on jails.
> >>>>> Those jails are stored on a big ZFS pool on a FreeBSD 9.3 server
> >>>>> which exports an NFSv4 volume. This NFSv4 volume is mounted on a big
> >>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 network ports, but only
> >>>>> 1 is used at this time).
> >>>>>
> >>>>> The problem is simple: my hypervisor runs 6 jails (using roughly 1%
> >>>>> CPU, 10GB RAM and less than 1MB of bandwidth) and works fine at
> >>>>> first, but the system slows down and after 2-3 days becomes
> >>>>> unusable. When I look at top I see 80-100% system time and commands
> >>>>> are very, very slow. Many processes are tagged with nfs_cl*.
> >>>>
> >>>> To be honest, I would expect the slowness to be because of slow
> >>>> response from the NFSv4 server, but if you do:
> >>>> # ps axHl
> >>>> on a client when it is slow and post that, it would give us some more
> >>>> information on where the client side processes are sitting.
> >>>> If you also do something like:
> >>>> # nfsstat -c -w 1
> >>>> and let it run for a while, that should show you how many RPCs are
> >>>> being done and which ones.
> >>>>
> >>>> # nfsstat -m
> >>>> will show you what your mount is actually using.
> >>>> The only mount option I can suggest trying is
> >>>> "rsize=32768,wsize=32768",
> >>>> since some network environments have difficulties with 64K.
> >>>>
> >>>> There are a few things you can try on the NFSv4 server side, if it
> >>>> appears that the clients are generating a large RPC load.
> >>>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=3D0 > >>>> - If the server is seeing a large write RPC load, then > >>>> "sync=3Ddisabled" > >>>> might help, although it does run a risk of data loss when the > >>>> server > >>>> crashes. > >>>> Then there are a couple of other ZFS related things (I'm not a > >>>> ZFS > >>>> guy, > >>>> but these have shown up on the mailing lists). > >>>> - make sure your volumes are 4K aligned and ashift=3D12 (in case a > >>>> drive > >>>> that uses 4K sectors is pretending to be 512byte sectored) > >>>> - never run over 70-80% full if write performance is an issue > >>>> - use a zil on an SSD with good write performance > >>>>=20 > >>>> The only NFSv4 thing I can tell you is that it is known that > >>>> ZFS's > >>>> algorithm for determining sequential vs random I/O fails for > >>>> NFSv4 > >>>> during writing and this can be a performance hit. The only > >>>> workaround > >>>> is to use NFSv3 mounts, since file handle affinity apparently > >>>> fixes > >>>> the problem and this is only done for NFSv3. > >>>>=20 > >>>> rick > >>>>=20 > >>>>> I saw that there are TSO issues with igb then i'm trying to > >>>>> disable > >>>>> it with sysctl but the situation wasn't solved. > >>>>>=20 > >>>>> Someone has got ideas ? I can give you more informations if you > >>>>> need. > >>>>>=20 > >>>>> Thanks in advance. > >>>>> Regards, > >>>>>=20 > >>>>> Lo=C3=AFc Blot, > >>>>> UNIX Systems, Network and Security Engineer > >>>>> http://www.unix-experience.fr > >>>>> _______________________________________________ > >>>>> freebsd-fs@freebsd.org mailing list > >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>>>> To unsubscribe, send any mail to > >>>>> "freebsd-fs-unsubscribe@freebsd.org" > >>>=20 > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" > >>>=20 > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" > >>>=20 > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" >=20 From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 02:35:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2C7DBF06 for ; Fri, 19 Dec 2014 02:35:30 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id A44B6B3D for ; Fri, 19 Dec 2014 02:35:29 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: 
AtsEAMqMk1SDaFve/2dsb2JhbABSCINYWASDAsMTCoUoSgKBNwEBAQEBfYQMAQEBAwEBAQEXAQgEJyALBRYYAgINGQIpAQkmBggCBQQBHASIAwgNuRuWLwEBAQEBAQQBAQEBAQEBAQEBGIEhjXsFAQEbATMHgi07EYEwBYlDiAWDHIMjMIIxgjGDP4QsgzgigX4egW4gMQeBBTl+AQEB
X-IronPort-AV: E=Sophos;i="5.07,604,1413259200"; d="scan'208";a="177860986"
Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 18 Dec 2014 21:35:25 -0500
Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 41EC0B406F; Thu, 18 Dec 2014 21:35:25 -0500 (EST)
Date: Thu, 18 Dec 2014 21:35:25 -0500 (EST)
From: Rick Macklem
To: Loïc Blot
Message-ID: <1280817381.630445.1418956525242.JavaMail.root@uoguelph.ca>
In-Reply-To: <0eaadfe31ac4b8bbdeaf0baff696dada@mail.unix-experience.fr>
Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.17.95.12]
X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926)
Cc: freebsd-fs@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Fri, 19 Dec 2014 02:35:30 -0000

Loic Blot wrote:
> Hi Rick,
> I tried to start an LXC container on Debian Squeeze from my FreeBSD
> ZFS+NFSv4 server and I also have a deadlock on nfsd
> (vfs.lookup_shared=0). The deadlock occurs each time I launch a
> Squeeze container, it seems (3 tries, 3 fails).
>
> 921 - D 0:00.02 nfsd: server (nfsd)
>
> Here is the procstat -kk
>
Ok, I took a closer look at this and I think you have found a bug in
the NFSv4 server's file locking code. If I am correct, this will only
happen if you have vfs.nfsd.enable_locallocks=1 set. Please try setting:
vfs.nfsd.enable_locallocks=0
(which I thought was the default setting?)
This only needs to be non-zero if non-nfsd threads running in the NFS
server need to see the locks being set by NFS clients.
(If your application does require non-nfsd threads running in the NFS
server to see the file locks, you'll have to wait until I come up with
a patch for this, which could be a week or more.)

In case you are curious, when vfs.nfsd.enable_locallocks=1, the server
vop_unlock()s/vop_lock()s the vnode around a VOP_ADVLOCK() call.
Unfortunately, this introduces a LOR with the nfsv4_lock() which is
held at this time. I think this resulted in this deadlock.
To fix it, I will have to delay the vn_lock() call until after the
nfsv4_lock() is released.

This probably hasn't been reported before, because most don't set
vfs.nfsd.enable_locallocks=1.

Thanks for reporting this and sorry I didn't spot this sooner, rick

> PID TID COMM TDNAME KSTACK
> 921 100538 nfsd nfsd: master mi_switch+0xe1
> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
> nfssvc_nfsd+0x107 sys_nfssvc+0x9c
This is the bad one, which is trying to get a vnode lock while holding
the nfsv4_lock(), which is a LOR (the code should always acquire the
nfsv4_lock() after having the vnode locked).
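For reference, a minimal sketch of applying the setting suggested above (it assumes the NFS server module is already loaded so the sysctl node exists, and that restarting nfsd afterwards is acceptable):

  sysctl vfs.nfsd.enable_locallocks=0                        # change the running system
  echo 'vfs.nfsd.enable_locallocks=0' >> /etc/sysctl.conf    # keep it across reboots
  service nfsd restart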
Since another thread has the vnode locked and is trying to acquire the nfsv4_lock()--> deadlock. > 921 100572 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100573 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100574 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100575 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100576 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100577 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100578 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100579 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100580 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100581 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100582 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100583 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100584 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100585 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100586 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100587 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100588 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100589 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100590 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100591 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100592 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100593 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100594 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100595 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100596 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100597 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100598 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100599 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100600 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100601 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100602 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100603 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100604 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100605 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100606 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100607 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100608 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100609 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100610 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100611 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100612 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100613 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100614 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100615 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100616 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 921 100617 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100618 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 921 100619 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100620 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100621 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100622 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100623 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100624 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100625 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100626 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100627 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100628 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100629 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100630 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100631 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100632 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100633 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100634 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100635 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100636 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100637 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100638 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100639 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100640 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100641 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100642 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100643 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100644 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100645 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100646 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100647 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100648 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100649 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100650 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100651 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100652 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100653 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100654 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100655 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100656 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100657 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100658 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100659 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100660 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100661 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100662 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100663 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100664 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100665 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100666 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe >=20 >=20 > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 15 d=C3=A9cembre 2014 15:18 "Rick Macklem" a =C3= =A9crit: > > Loic Blot wrote: > >=20 > >> For more informations, here is procstat -kk on nfsd, if you need > >> more > >> hot datas, tell me. > >>=20 > >> Regards, PID TID COMM TDNAME KSTACK > >> 918 100529 nfsd nfsd: master mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de > >> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c > >> amd64_syscall+0x351 > >=20 > > Well, most of the threads are stuck like this one, waiting for a > > vnode > > lock in ZFS. All of them appear to be in zfs_fhtovp(). > > I`m not a ZFS guy, so I can`t help much. I`ll try changing the > > subject line > > to include ZFS vnode lock, so maybe the ZFS guys will take a look. > >=20 > > The only thing I`ve seen suggested is trying: > > sysctl vfs.lookup_shared=3D0 > > to disable shared vop_lookup()s. Apparently zfs_lookup() doesn`t > > obey the vnode locking rules for lookup and rename, according to > > the posting I saw. > >=20 > > I`ve added a couple of comments about the other threads below, but > > they are all either waiting for an RPC request or waiting for the > > threads stuck on the ZFS vnode lock to complete. > >=20 > > rick > >=20 > >> 918 100564 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >=20 > > Fyi, this thread is just waiting for an RPC to arrive. 
(Normal) > >=20 > >> 918 100565 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100566 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100567 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100568 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100569 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100570 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 918 100571 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100572 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >=20 > > This one (and a few others) are waiting for the nfsv4_lock. This > > happens > > because other threads are stuck with RPCs in progress. (ie. The > > ones > > waiting on the vnode lock in zfs_fhtovp().) > > For these, the RPC needs to lock out other threads to do the > > operation, > > so it waits for the nfsv4_lock() which can exclusively lock the > > NFSv4 > > data structures once all other nfsd threads complete their RPCs in > > progress. > >=20 > >> 918 100573 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >=20 > > Same as above. 
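A rough way to summarize a listing like this, rather than reading every stack by hand, is to count how many nfsd threads are parked at each of the sleep points named above. This is only a sketch (the pid 918 is simply the nfsd process from the listing being discussed, and the counts are approximate since it just greps the raw output):

  # threads stuck waiting for the ZFS vnode lock in zfs_fhtovp()
  procstat -kk 918 | grep -c zfs_fhtovp
  # threads waiting for the exclusive NFSv4 state lock in nfsv4_lock()
  procstat -kk 918 | grep -c nfsv4_lock
  # idle service threads just waiting in _cv_wait_sig() for an RPC to arrive
  procstat -kk 918 | grep -c _cv_wait_sig

The same counts can be taken from a saved copy of the procstat output by giving grep the file name instead of using the pipe.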
> >=20 > >> 918 100574 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100575 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100576 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100577 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100578 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100579 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100580 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100581 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100582 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100583 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100584 nfsd nfsd: service mi_switch+0xe1 > >> 
sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100585 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100586 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100587 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100588 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100589 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100590 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100591 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100592 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100593 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100594 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> 
vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100595 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100596 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100597 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100598 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100599 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100600 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100601 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100602 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100603 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100604 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 
zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100605 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100606 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100607 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >=20 > > Lots more waiting for the ZFS vnode lock in zfs_fhtovp(). > >=20 > >> 918 100608 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > >> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100609 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100610 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > >> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > >> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe Ouch! I missed this one when I looked at it, so I missed the critical line that showed the race. Fortunately you posted a shorter one. This would the same race as above. 
Hopefully you can run with: vfs.nfsd.enable_locallocks=3D0 rick > >> 918 100611 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100612 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100613 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100614 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100615 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100616 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100617 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100618 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100619 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100620 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100621 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >> 918 100622 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100623 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >> svc_thread_start+0xb fork_exit+0x9a 
fork_trampoline+0xe > >> 918 100624 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100625 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100626 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100627 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100628 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100629 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100630 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100631 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100632 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100633 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100634 nfsd nfsd: service 
mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100635 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100636 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100637 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100638 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100639 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100640 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100641 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100642 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100643 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100644 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d 
__lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100645 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100646 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100647 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100648 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100649 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100650 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100651 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100652 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100653 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100654 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab 
_vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100655 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100656 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100657 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >> 918 100658 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > >> fork_exit+0x9a fork_trampoline+0xe > >>=20 > >> Lo=C3=AFc Blot, > >> UNIX Systems, Network and Security Engineer > >> http://www.unix-experience.fr > >>=20 > >> 15 d=C3=A9cembre 2014 13:29 "Lo=C3=AFc Blot" > >> a > >> =C3=A9crit: > >>> Hmmm... > >>> now i'm experiencing a deadlock. > >>>=20 > >>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd) > >>>=20 > >>> the only issue was to reboot the server, but after rebooting > >>> deadlock arrives a second time when i > >>> start my jails over NFS. > >>>=20 > >>> Regards, > >>>=20 > >>> Lo=C3=AFc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>>=20 > >>> 15 d=C3=A9cembre 2014 10:07 "Lo=C3=AFc Blot" > >>> a > >>> =C3=A9crit: > >>>=20 > >>> Hi Rick, > >>> after talking with my N+1, NFSv4 is required on our > >>> infrastructure. > >>> I tried to upgrade NFSv4+ZFS > >>> server from 9.3 to 10.1, i hope this will resolve some issues... > >>>=20 > >>> Regards, > >>>=20 > >>> Lo=C3=AFc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>>=20 > >>> 10 d=C3=A9cembre 2014 15:36 "Lo=C3=AFc Blot" > >>> a > >>> =C3=A9crit: > >>>=20 > >>> Hi Rick, > >>> thanks for your suggestion. > >>> For my locking bug, rpc.lockd is stucked in rpcrecv state on the > >>> server. kill -9 doesn't affect the > >>> process, it's blocked.... (State: Ds) > >>>=20 > >>> for the performances > >>>=20 > >>> NFSv3: 60Mbps > >>> NFSv4: 45Mbps > >>> Regards, > >>>=20 > >>> Lo=C3=AFc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>>=20 > >>> 10 d=C3=A9cembre 2014 13:56 "Rick Macklem" a > >>> =C3=A9crit: > >>>=20 > >>>> Loic Blot wrote: > >>>>=20 > >>>>> Hi Rick, > >>>>> I'm trying NFSv3. 
> >>>>> Some jails are starting very well but now I have an issue with
> >>>>> lockd
> >>>>> after some minutes:
> >>>>>
> >>>>> nfs server 10.10.X.8:/jails: lockd not responding
> >>>>> nfs server 10.10.X.8:/jails lockd is alive again
> >>>>>
> >>>>> I looked at mbuf, but it seems there is no problem.
> >>>>
> >>>> Well, if you need locks to be visible across multiple clients,
> >>>> then
> >>>> I'm afraid you are stuck with using NFSv4 and the performance
> >>>> you
> >>>> get
> >>>> from it. (There is no way to do file handle affinity for NFSv4
> >>>> because
> >>>> the read and write ops are buried in the compound RPC and not
> >>>> easily
> >>>> recognized.)
> >>>>
> >>>> If the locks don't need to be visible across multiple clients,
> >>>> I'd
> >>>> suggest trying the "nolockd" option with nfsv3.
> >>>>
> >>>>> Here is my rc.conf on the server:
> >>>>>
> >>>>> nfs_server_enable="YES"
> >>>>> nfsv4_server_enable="YES"
> >>>>> nfsuserd_enable="YES"
> >>>>> nfsd_server_flags="-u -t -n 256"
> >>>>> mountd_enable="YES"
> >>>>> mountd_flags="-r"
> >>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>> rpcbind_enable="YES"
> >>>>> rpc_lockd_enable="YES"
> >>>>> rpc_statd_enable="YES"
> >>>>>
> >>>>> Here is the client:
> >>>>>
> >>>>> nfsuserd_enable="YES"
> >>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>> nfscbd_enable="YES"
> >>>>> rpc_lockd_enable="YES"
> >>>>> rpc_statd_enable="YES"
> >>>>>
> >>>>> Have you got an idea?
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> On 9 December 2014 at 04:31, "Rick Macklem"
> >>>>> wrote:
> >>>>>> Loic Blot wrote:
> >>>>>>
> >>>>>>> Hi Rick,
> >>>>>>>
> >>>>>>> I waited 3 hours (no lag at jail launch) and then I ran: sysrc
> >>>>>>> memcached_flags="-v -m 512"
> >>>>>>> The command was very, very slow...
> >>>>>>>
> >>>>>>> Here is a dd over NFS:
> >>>>>>>
> >>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579
> >>>>>>> bytes/sec)
> >>>>>>
> >>>>>> Can you try the same read using an NFSv3 mount?
> >>>>>> (If it runs much faster, you have probably been bitten by the
> >>>>>> ZFS
> >>>>>> "sequential vs random" read heuristic, which I've been told
> >>>>>> thinks
> >>>>>> NFS is doing "random" reads without file handle affinity. File
> >>>>>> handle affinity is very hard to do for NFSv4, so it isn't
> >>>>>> done.)
> >>>>
> >>>> I was actually suggesting that you try the "dd" over nfsv3 to
> >>>> see
> >>>> how
> >>>> the performance compares with nfsv4. If you do that, please post
> >>>> the
> >>>> comparable results.
> >>>>
> >>>> Someday I would like to try and get ZFS's sequential vs random
> >>>> read
> >>>> heuristic modified, and any info on what difference in
> >>>> performance
> >>>> that
> >>>> might make for NFS would be useful.
> >>>>
> >>>> rick
> >>>>
> >>>>>> rick
> >>>>>>
> >>>>>>> This is quite slow...
> >>>>>>>=20 > >>>>>>> You can found some nfsstat below (command isn't finished yet) > >>>>>>>=20 > >>>>>>> nfsstat -c -w 1 > >>>>>>>=20 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 16 0 > >>>>>>> 2 0 0 0 0 0 17 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 4 0 0 0 0 4 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 3 0 > >>>>>>> 0 0 0 0 0 0 3 0 > >>>>>>> 37 10 0 8 0 0 14 1 > >>>>>>> 18 16 0 4 1 2 4 0 > >>>>>>> 78 91 0 82 6 12 30 0 > >>>>>>> 19 18 0 2 2 4 2 0 > >>>>>>> 0 0 0 0 2 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 1 0 0 0 0 1 0 > >>>>>>> 4 6 0 0 6 0 3 0 > >>>>>>> 2 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 1 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 1 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 6 108 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 98 54 0 86 11 0 25 0 > >>>>>>> 36 24 0 39 25 0 10 1 > >>>>>>> 67 8 0 63 63 0 41 0 > >>>>>>> 34 0 0 35 34 0 0 0 > >>>>>>> 75 0 0 75 77 0 0 0 > >>>>>>> 34 0 0 35 35 0 0 0 > >>>>>>> 75 0 0 74 76 0 0 0 > >>>>>>> 33 0 0 34 33 0 0 0 > >>>>>>> 0 0 0 0 5 0 0 0 > >>>>>>> 0 0 0 0 0 0 6 0 > >>>>>>> 11 0 0 0 0 0 11 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 17 0 0 0 0 1 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 4 5 0 0 0 0 12 0 > >>>>>>> 2 0 0 0 0 0 26 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 4 0 0 0 0 4 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 2 0 > >>>>>>> 2 0 0 0 0 0 24 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 0 0 0 0 0 7 0 > >>>>>>> 2 1 0 0 0 0 1 0 > >>>>>>> 0 0 0 0 2 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 6 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 6 0 0 0 0 3 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 2 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 71 0 0 0 0 0 0 > >>>>>>> 0 1 0 0 0 0 0 0 > >>>>>>> 2 36 0 0 0 0 1 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 1 0 0 0 0 0 1 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > 
>>>>>>> 79 6 0 79 79 0 2 0 > >>>>>>> 25 0 0 25 26 0 6 0 > >>>>>>> 43 18 0 39 46 0 23 0 > >>>>>>> 36 0 0 36 36 0 31 0 > >>>>>>> 68 1 0 66 68 0 0 0 > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>>>> 36 0 0 36 36 0 0 0 > >>>>>>> 48 0 0 48 49 0 0 0 > >>>>>>> 20 0 0 20 20 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 3 14 0 1 0 0 11 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 0 4 0 0 0 0 4 0 > >>>>>>> 0 0 0 0 0 0 0 0 > >>>>>>> 4 22 0 0 0 0 16 0 > >>>>>>> 2 0 0 0 0 0 23 0 > >>>>>>>=20 > >>>>>>> Regards, > >>>>>>>=20 > >>>>>>> Lo=C3=AFc Blot, > >>>>>>> UNIX Systems, Network and Security Engineer > >>>>>>> http://www.unix-experience.fr > >>>>>>>=20 > >>>>>>> 8 d=C3=A9cembre 2014 09:36 "Lo=C3=AFc Blot" > >>>>>>> a > >>>>>>> =C3=A9crit: > >>>>>>>> Hi Rick, > >>>>>>>> I stopped the jails this week-end and started it this > >>>>>>>> morning, > >>>>>>>> i'll > >>>>>>>> give you some stats this week. > >>>>>>>>=20 > >>>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks) > >>>=20 > >>>=20 > >>=20 > > nfsv4,tcp,resvport,hard,cto,sec=3Dsys,acdirmin=3D3,acdirmax=3D60,acregm= in=3D5,acregmax=3D60,nametimeo=3D60,negna > >>>=20 > >>>>>>>>=20 > >>>=20 > >>>=20 > >>=20 > > etimeo=3D60,rsize=3D32768,wsize=3D32768,readdirsize=3D32768,readahead= =3D1,wcommitsize=3D773136,timeout=3D120,retra > >>>=20 > >>> s=3D2147483647 > >>>=20 > >>> On server side my disks are on a raid controller which show a > >>> 512b > >>> volume and write performances > >>> are very honest (dd if=3D/dev/zero of=3D/jails/test.dd bs=3D4096 > >>> count=3D100000000 =3D> 450MBps) > >>>=20 > >>> Regards, > >>>=20 > >>> Lo=C3=AFc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>>=20 > >>> 5 d=C3=A9cembre 2014 15:14 "Rick Macklem" a > >>> =C3=A9crit: > >>>=20 > >>>> Loic Blot wrote: > >>>>=20 > >>>>> Hi, > >>>>> i'm trying to create a virtualisation environment based on > >>>>> jails. > >>>>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 > >>>>> which > >>>>> export a NFSv4 volume. This NFSv4 volume was mounted on a big > >>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but only 1 > >>>>> was > >>>>> used at this time). > >>>>>=20 > >>>>> The problem is simple, my hypervisors runs 6 jails (used 1% cpu > >>>>> and > >>>>> 10GB RAM approximatively and less than 1MB bandwidth) and works > >>>>> fine at start but the system slows down and after 2-3 days > >>>>> become > >>>>> unusable. When i look at top command i see 80-100% on system > >>>>> and > >>>>> commands are very very slow. Many process are tagged with > >>>>> nfs_cl*. > >>>>=20 > >>>> To be honest, I would expect the slowness to be because of slow > >>>> response > >>>> from the NFSv4 server, but if you do: > >>>> # ps axHl > >>>> on a client when it is slow and post that, it would give us some > >>>> more > >>>> information on where the client side processes are sitting. > >>>> If you also do something like: > >>>> # nfsstat -c -w 1 > >>>> and let it run for a while, that should show you how many RPCs > >>>> are > >>>> being done and which ones. > >>>>=20 > >>>> # nfsstat -m > >>>> will show you what your mount is actually using. > >>>> The only mount option I can suggest trying is > >>>> "rsize=3D32768,wsize=3D32768", > >>>> since some network environments have difficulties with 64K. > >>>>=20 > >>>> There are a few things you can try on the NFSv4 server side, if > >>>> it > >>>> appears > >>>> that the clients are generating a large RPC load. 
> >>>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=3D0 > >>>> - If the server is seeing a large write RPC load, then > >>>> "sync=3Ddisabled" > >>>> might help, although it does run a risk of data loss when the > >>>> server > >>>> crashes. > >>>> Then there are a couple of other ZFS related things (I'm not a > >>>> ZFS > >>>> guy, > >>>> but these have shown up on the mailing lists). > >>>> - make sure your volumes are 4K aligned and ashift=3D12 (in case a > >>>> drive > >>>> that uses 4K sectors is pretending to be 512byte sectored) > >>>> - never run over 70-80% full if write performance is an issue > >>>> - use a zil on an SSD with good write performance > >>>>=20 > >>>> The only NFSv4 thing I can tell you is that it is known that > >>>> ZFS's > >>>> algorithm for determining sequential vs random I/O fails for > >>>> NFSv4 > >>>> during writing and this can be a performance hit. The only > >>>> workaround > >>>> is to use NFSv3 mounts, since file handle affinity apparently > >>>> fixes > >>>> the problem and this is only done for NFSv3. > >>>>=20 > >>>> rick > >>>>=20 > >>>>> I saw that there are TSO issues with igb then i'm trying to > >>>>> disable > >>>>> it with sysctl but the situation wasn't solved. > >>>>>=20 > >>>>> Someone has got ideas ? I can give you more informations if you > >>>>> need. > >>>>>=20 > >>>>> Thanks in advance. > >>>>> Regards, > >>>>>=20 > >>>>> Lo=C3=AFc Blot, > >>>>> UNIX Systems, Network and Security Engineer > >>>>> http://www.unix-experience.fr > >>>>> _______________________________________________ > >>>>> freebsd-fs@freebsd.org mailing list > >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>>>> To unsubscribe, send any mail to > >>>>> "freebsd-fs-unsubscribe@freebsd.org" > >>>=20 > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" > >>>=20 > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" > >>>=20 > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" > >>> _______________________________________________ > >>> freebsd-fs@freebsd.org mailing list > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >>> To unsubscribe, send any mail to > >>> "freebsd-fs-unsubscribe@freebsd.org" >=20 From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 17:07:04 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 34E0137C for ; Fri, 19 Dec 2014 17:07:04 +0000 (UTC) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A8B5A1FD8 for ; Fri, 19 Dec 2014 17:07:02 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id sBJH6bx2008100; Fri, 19 Dec 2014 20:06:38 +0300 (MSK) (envelope-from 
marck@rinet.ru) Date: Fri, 19 Dec 2014 20:06:37 +0300 (MSK) From: Dmitry Morozovsky To: =?ISO-8859-15?Q?Lo=EFc_Blot?= Subject: Re: High Kernel Load with nfsv4 In-Reply-To: <1e19554bc0d4eb3e8dab74e2056b5ec4@mail.unix-experience.fr> Message-ID: References: <766911003.8048587.1418095910736.JavaMail.root@uoguelph.ca> <1e19554bc0d4eb3e8dab74e2056b5ec4@mail.unix-experience.fr> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (woozle.rinet.ru [0.0.0.0]); Fri, 19 Dec 2014 20:06:38 +0300 (MSK) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2014 17:07:04 -0000 Loic, On Wed, 10 Dec 2014, Lo?c Blot wrote: > Hi Rick, > I'm trying NFSv3. > Some jails are starting very well but now i have an issue with lockd after some minutes: > > nfs server 10.10.X.8:/jails: lockd not responding > nfs server 10.10.X.8:/jails lockd is alive again > > I look at mbuf, but i seems there is no problem. > > Here is my rc.conf on server: > > nfs_server_enable="YES" > nfsv4_server_enable="YES" > nfsuserd_enable="YES" > nfsd_server_flags="-u -t -n 256" just a random thought: are you sure you want so much nfsd threads? I suppose lock contention could be easily involved here... [snip] -- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 10:17:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9258FD39 for ; Sat, 20 Dec 2014 10:17:23 +0000 (UTC) Received: from smtp.unix-experience.fr (195-154-176-227.rev.poneytelecom.eu [195.154.176.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 200D5215D for ; Sat, 20 Dec 2014 10:17:22 +0000 (UTC) Received: from smtp.unix-experience.fr (unknown [192.168.200.21]) by smtp.unix-experience.fr (Postfix) with ESMTP id 736CC26F4B; Sat, 20 Dec 2014 10:17:12 +0000 (UTC) X-Virus-Scanned: scanned by unix-experience.fr Received: from smtp.unix-experience.fr ([192.168.200.21]) by smtp.unix-experience.fr (smtp.unix-experience.fr [192.168.200.21]) (amavisd-new, port 10024) with ESMTP id 8S2maDqcZGiK; Sat, 20 Dec 2014 10:17:06 +0000 (UTC) Received: from Nerz-PC (AMontsouris-651-1-101-194.w82-123.abo.wanadoo.fr [82.123.244.194]) by smtp.unix-experience.fr (Postfix) with ESMTPSA id 546E226F3B; Sat, 20 Dec 2014 10:17:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=unix-experience.fr; s=uxselect; t=1419070626; bh=/hiwiAzzutCAawCohxFBV4DOOs71x4f5APb5gg0f6tw=; h=Subject:From:To:Cc:Date:In-Reply-To:References; b=FZ3hMiqpbbxadIQ7NR6TJrAqoWqSVamsjHSDLrbd87Gh9zs73uYGnxUuK7QvEnXpv P7W6fqU1ZPL5jkhExfn7lIVpt+Bj0XerIyB/Wy1oKOmzjKtwYULn3hUudezxpFvDAX 7EejcGdF1UauzGfLqKrDQbR4hC42rkjQhmsjCXi0= Message-ID: 
<1419070626.4549.5.camel@unix-experience.fr> Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4 From: =?ISO-8859-1?Q?Lo=EFc?= BLOT To: Rick Macklem Date: Sat, 20 Dec 2014 11:17:06 +0100 In-Reply-To: <367024859.592531.1418949966814.JavaMail.root@uoguelph.ca> References: <367024859.592531.1418949966814.JavaMail.root@uoguelph.ca> Content-Type: text/plain; charset="UTF-8" X-Mailer: Evolution 3.12.8 Mime-Version: 1.0 Content-Transfer-Encoding: 8bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2014 10:17:23 -0000
Hi Rick,
OK, I don't need local locks; I hadn't understood that the option was for that
usage, so I removed it. I'll do more tests on Monday.
Thanks for the deadlock fix; it will help other people :)
--
Best regards,
Loïc BLOT,
UNIX systems, security and network engineer
http://www.unix-experience.fr

On Thursday, 18 December 2014 at 19:46 -0500, Rick Macklem wrote:
> Loic Blot wrote:
> > Hi Rick,
> > I tried to start a Debian Squeeze LXC container from my FreeBSD
> > ZFS+NFSv4 server and I also have a deadlock on nfsd
> > (vfs.lookup_shared=0). The deadlock occurs each time I launch a Squeeze
> > container, it seems (3 tries, 3 fails).
> >
> Well, I'll take a look at this 'procstat -kk', but the only thing
> I've seen posted w.r.t. avoiding deadlocks in ZFS is to not use
> nullfs. (I have no idea if you are using any nullfs mounts, but
> if so, try getting rid of them.)
>
> Here's a high-level post about the ZFS and vnode locking problem,
> but there is no patch available, as far as I know.
>
> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
>
> rick
>
> > 921 - D 0:00.02 nfsd: server (nfsd)
> >
> > Here is the procstat -kk
> >
> > PID TID COMM TDNAME KSTACK
> > 921 100538 nfsd nfsd: master mi_switch+0xe1
> > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
> > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> > nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
> > nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
> > svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
> > nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> > 921 100572 nfsd nfsd: service mi_switch+0xe1
> > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> > fork_trampoline+0xe
> > 921 100573 nfsd nfsd: service mi_switch+0xe1
> > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> > fork_trampoline+0xe
> > 921 100574 nfsd nfsd: service mi_switch+0xe1
> > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> > fork_trampoline+0xe
> > 921 100575 nfsd nfsd: service mi_switch+0xe1
> > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> > fork_trampoline+0xe
> > 921 100576 nfsd nfsd: service mi_switch+0xe1
> > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> > fork_trampoline+0xe
> > 921 100577 nfsd nfsd: service mi_switch+0xe1
> > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> > fork_trampoline+0xe
> > 921 100578 nfsd nfsd: service mi_switch+0xe1
> > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100579 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100580 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100581 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100582 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100583 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100584 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100585 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100586 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100587 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100588 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100589 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100590 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100591 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100592 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100593 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100594 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100595 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a 
> > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100596 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100597 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100598 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100599 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100600 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100601 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100602 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100603 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100604 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100605 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100606 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100607 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100608 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100609 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100610 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100611 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100612 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > 
fork_trampoline+0xe > > 921 100613 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100614 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100615 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100616 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > > nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > 921 100617 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100618 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > 921 100619 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100620 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100621 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100622 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100623 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100624 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100625 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100626 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100627 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100628 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100629 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100630 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100631 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100632 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100633 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100634 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100635 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100636 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100637 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100638 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100639 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100640 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100641 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100642 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100643 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100644 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100645 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100646 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > > fork_trampoline+0xe > > 921 100647 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100648 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100649 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100650 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100651 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100652 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100653 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100654 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100655 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100656 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100657 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100658 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100659 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100660 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100661 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100662 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100663 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100664 nfsd nfsd: 
service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100665 nfsd nfsd: service mi_switch+0xe1 > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > fork_trampoline+0xe > > 921 100666 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 > > nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > > > > > Regards, > > > > Loïc Blot, > > UNIX Systems, Network and Security Engineer > > http://www.unix-experience.fr > > > > 15 décembre 2014 15:18 "Rick Macklem" a écrit: > > > Loic Blot wrote: > > > > > >> For more informations, here is procstat -kk on nfsd, if you need > > >> more > > >> hot datas, tell me. > > >> > > >> Regards, PID TID COMM TDNAME KSTACK > > >> 918 100529 nfsd nfsd: master mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de > > >> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c > > >> amd64_syscall+0x351 > > > > > > Well, most of the threads are stuck like this one, waiting for a > > > vnode > > > lock in ZFS. All of them appear to be in zfs_fhtovp(). > > > I`m not a ZFS guy, so I can`t help much. I`ll try changing the > > > subject line > > > to include ZFS vnode lock, so maybe the ZFS guys will take a look. > > > > > > The only thing I`ve seen suggested is trying: > > > sysctl vfs.lookup_shared=0 > > > to disable shared vop_lookup()s. Apparently zfs_lookup() doesn`t > > > obey the vnode locking rules for lookup and rename, according to > > > the posting I saw. > > > > > > I`ve added a couple of comments about the other threads below, but > > > they are all either waiting for an RPC request or waiting for the > > > threads stuck on the ZFS vnode lock to complete. > > > > > > rick > > > > > >> 918 100564 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > > > > > Fyi, this thread is just waiting for an RPC to arrive. 
(Normal) > > > > > >> 918 100565 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > >> 918 100566 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > >> 918 100567 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > >> 918 100568 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > >> 918 100569 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > >> 918 100570 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > >> 918 100571 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100572 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > > > > > This one (and a few others) are waiting for the nfsv4_lock. This > > > happens > > > because other threads are stuck with RPCs in progress. (ie. The > > > ones > > > waiting on the vnode lock in zfs_fhtovp().) > > > For these, the RPC needs to lock out other threads to do the > > > operation, > > > so it waits for the nfsv4_lock() which can exclusively lock the > > > NFSv4 > > > data structures once all other nfsd threads complete their RPCs in > > > progress. > > > > > >> 918 100573 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > > > > Same as above. 
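A rough sketch of how the two steps suggested above (capturing the nfsd kernel stacks and disabling shared vop_lookup()s) can be run on the server; the nfsd PID 921 is the one shown in the procstat output in this thread, so substitute your own:

# dump the kernel stack of every nfsd thread (921 = nfsd PID from the output above)
procstat -kk 921

# check the current value, then disable shared vop_lookup()s as suggested
sysctl vfs.lookup_shared
sysctl vfs.lookup_shared=0

# keep the setting across reboots
echo 'vfs.lookup_shared=0' >> /etc/sysctl.conf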
> > > > > >> 918 100574 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100575 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100576 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100577 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100578 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100579 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100580 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100581 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100582 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100583 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 
svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100584 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100585 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100586 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100587 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100588 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100589 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100590 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100591 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100592 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100593 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 
nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100594 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100595 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100596 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100597 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100598 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100599 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100600 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100601 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100602 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100603 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 
zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100604 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100605 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100606 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100607 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > > > > > Lots more waiting for the ZFS vnode lock in zfs_fhtovp(). > > > > > >> 918 100608 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > > >> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100609 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100610 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > >> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > > >> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > > >> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a > > >> fork_trampoline+0xe > > >> 918 100611 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100612 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100613 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 
> >> 918 100614 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100615 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100616 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100617 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100618 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100619 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100620 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100621 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100622 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100623 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > >> 918 100624 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100625 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a 
fork_trampoline+0xe > > >> 918 100626 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100627 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100628 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100629 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100630 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100631 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100632 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100633 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100634 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100635 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 
svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100636 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100637 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100638 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100639 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100640 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100641 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100642 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100643 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100644 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100645 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c 
nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100646 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100647 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100648 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100649 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100650 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100651 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100652 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100653 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100654 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100655 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab 
_vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100656 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100657 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> 918 100658 nfsd nfsd: service mi_switch+0xe1 > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > >> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb > > >> fork_exit+0x9a fork_trampoline+0xe > > >> > > >> Loïc Blot, > > >> UNIX Systems, Network and Security Engineer > > >> http://www.unix-experience.fr > > >> > > >> 15 décembre 2014 13:29 "Loïc Blot" > > >> a > > >> écrit: > > >>> Hmmm... > > >>> now i'm experiencing a deadlock. > > >>> > > >>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd) > > >>> > > >>> the only issue was to reboot the server, but after rebooting > > >>> deadlock arrives a second time when i > > >>> start my jails over NFS. > > >>> > > >>> Regards, > > >>> > > >>> Loïc Blot, > > >>> UNIX Systems, Network and Security Engineer > > >>> http://www.unix-experience.fr > > >>> > > >>> 15 décembre 2014 10:07 "Loïc Blot" > > >>> a > > >>> écrit: > > >>> > > >>> Hi Rick, > > >>> after talking with my N+1, NFSv4 is required on our > > >>> infrastructure. > > >>> I tried to upgrade NFSv4+ZFS > > >>> server from 9.3 to 10.1, i hope this will resolve some issues... > > >>> > > >>> Regards, > > >>> > > >>> Loïc Blot, > > >>> UNIX Systems, Network and Security Engineer > > >>> http://www.unix-experience.fr > > >>> > > >>> 10 décembre 2014 15:36 "Loïc Blot" > > >>> a > > >>> écrit: > > >>> > > >>> Hi Rick, > > >>> thanks for your suggestion. > > >>> For my locking bug, rpc.lockd is stucked in rpcrecv state on the > > >>> server. kill -9 doesn't affect the > > >>> process, it's blocked.... (State: Ds) > > >>> > > >>> for the performances > > >>> > > >>> NFSv3: 60Mbps > > >>> NFSv4: 45Mbps > > >>> Regards, > > >>> > > >>> Loïc Blot, > > >>> UNIX Systems, Network and Security Engineer > > >>> http://www.unix-experience.fr > > >>> > > >>> 10 décembre 2014 13:56 "Rick Macklem" a > > >>> écrit: > > >>> > > >>>> Loic Blot wrote: > > >>>> > > >>>>> Hi Rick, > > >>>>> I'm trying NFSv3. > > >>>>> Some jails are starting very well but now i have an issue with > > >>>>> lockd > > >>>>> after some minutes: > > >>>>> > > >>>>> nfs server 10.10.X.8:/jails: lockd not responding > > >>>>> nfs server 10.10.X.8:/jails lockd is alive again > > >>>>> > > >>>>> I look at mbuf, but i seems there is no problem. 
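For reference, the usual way to confirm that mbufs really are not the problem on a FreeBSD box is something along these lines (generic commands, not specific to this setup):

# overall mbuf statistics; non-zero "denied" or "delayed" counters indicate exhaustion
netstat -m

# per-zone allocator view of mbufs and clusters
vmstat -z | egrep 'mbuf|cluster'

# configured cluster limit
sysctl kern.ipc.nmbclusters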
> > >>>> Well, if you need locks to be visible across multiple clients, then
> > >>>> I'm afraid you are stuck with using NFSv4 and the performance you get
> > >>>> from it. (There is no way to do file handle affinity for NFSv4 because
> > >>>> the read and write ops are buried in the compound RPC and not easily
> > >>>> recognized.)
> > >>>>
> > >>>> If the locks don't need to be visible across multiple clients, I'd
> > >>>> suggest trying the "nolockd" option with nfsv3.
> > >>>>
> > >>>>> Here is my rc.conf on the server:
> > >>>>>
> > >>>>> nfs_server_enable="YES"
> > >>>>> nfsv4_server_enable="YES"
> > >>>>> nfsuserd_enable="YES"
> > >>>>> nfsd_server_flags="-u -t -n 256"
> > >>>>> mountd_enable="YES"
> > >>>>> mountd_flags="-r"
> > >>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> > >>>>> rpcbind_enable="YES"
> > >>>>> rpc_lockd_enable="YES"
> > >>>>> rpc_statd_enable="YES"
> > >>>>>
> > >>>>> Here is the client:
> > >>>>>
> > >>>>> nfsuserd_enable="YES"
> > >>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> > >>>>> nfscbd_enable="YES"
> > >>>>> rpc_lockd_enable="YES"
> > >>>>> rpc_statd_enable="YES"
> > >>>>>
> > >>>>> Have you got an idea?
> > >>>>>
> > >>>>> Regards,
> > >>>>>
> > >>>>> Loïc Blot,
> > >>>>> UNIX Systems, Network and Security Engineer
> > >>>>> http://www.unix-experience.fr
> > >>>>>
> > >>>>> 9 December 2014 04:31 "Rick Macklem" wrote:
> > >>>>>> Loic Blot wrote:
> > >>>>>>
> > >>>>>>> Hi Rick,
> > >>>>>>>
> > >>>>>>> I waited 3 hours (no lag at jail launch) and then I ran: sysrc
> > >>>>>>> memcached_flags="-v -m 512"
> > >>>>>>> The command was very, very slow...
> > >>>>>>>
> > >>>>>>> Here is a dd over NFS:
> > >>>>>>>
> > >>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
> > >>>>>>
> > >>>>>> Can you try the same read using an NFSv3 mount?
> > >>>>>> (If it runs much faster, you have probably been bitten by the ZFS
> > >>>>>> "sequential vs random" read heuristic which, I've been told, thinks
> > >>>>>> NFS is doing "random" reads without file handle affinity. File
> > >>>>>> handle affinity is very hard to do for NFSv4, so it isn't done.)
> > >>>>
> > >>>> I was actually suggesting that you try the "dd" over nfsv3 to see how
> > >>>> the performance compares with nfsv4. If you do that, please post the
> > >>>> comparable results.
> > >>>>
> > >>>> Someday I would like to try to get ZFS's sequential vs random read
> > >>>> heuristic modified, and any info on what difference in performance
> > >>>> that might make for NFS would be useful.
> > >>>>
> > >>>> rick
> > >>>>
> > >>>>>> rick
> > >>>>>>
> > >>>>>>> This is quite slow...
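(The transfer above works out to roughly 28.5 MB/s.) A comparison run over NFSv3 with the "nolockd" option, as suggested, could look roughly like this; the export 10.10.X.8:/jails is the one from this thread, while the mount point and test file name are only placeholders:

# NFSv3 mount with client-local locking (no rpc.lockd involvement)
mkdir -p /mnt/jails
mount -t nfs -o nfsv3,nolockd,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/jails

# repeat the same sequential read over NFSv3 and compare the reported rate
dd if=/mnt/jails/test.dd of=/dev/null bs=64k

umount /mnt/jails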
> > >>>>>>> > > >>>>>>> You can found some nfsstat below (command isn't finished yet) > > >>>>>>> > > >>>>>>> nfsstat -c -w 1 > > >>>>>>> > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 4 0 0 0 0 0 16 0 > > >>>>>>> 2 0 0 0 0 0 17 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 4 0 0 0 0 4 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 4 0 0 0 0 0 3 0 > > >>>>>>> 0 0 0 0 0 0 3 0 > > >>>>>>> 37 10 0 8 0 0 14 1 > > >>>>>>> 18 16 0 4 1 2 4 0 > > >>>>>>> 78 91 0 82 6 12 30 0 > > >>>>>>> 19 18 0 2 2 4 2 0 > > >>>>>>> 0 0 0 0 2 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 1 0 0 0 0 1 0 > > >>>>>>> 4 6 0 0 6 0 3 0 > > >>>>>>> 2 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 1 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 1 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 6 108 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 98 54 0 86 11 0 25 0 > > >>>>>>> 36 24 0 39 25 0 10 1 > > >>>>>>> 67 8 0 63 63 0 41 0 > > >>>>>>> 34 0 0 35 34 0 0 0 > > >>>>>>> 75 0 0 75 77 0 0 0 > > >>>>>>> 34 0 0 35 35 0 0 0 > > >>>>>>> 75 0 0 74 76 0 0 0 > > >>>>>>> 33 0 0 34 33 0 0 0 > > >>>>>>> 0 0 0 0 5 0 0 0 > > >>>>>>> 0 0 0 0 0 0 6 0 > > >>>>>>> 11 0 0 0 0 0 11 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 17 0 0 0 0 1 0 > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > >>>>>>> 4 5 0 0 0 0 12 0 > > >>>>>>> 2 0 0 0 0 0 26 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 4 0 0 0 0 4 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 4 0 0 0 0 0 2 0 > > >>>>>>> 2 0 0 0 0 0 24 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 4 0 0 0 0 0 7 0 > > >>>>>>> 2 1 0 0 0 0 1 0 > > >>>>>>> 0 0 0 0 2 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 6 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 4 6 0 0 0 0 3 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 2 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 4 71 0 0 0 0 0 0 > > >>>>>>> 
0 1 0 0 0 0 0 0 > > >>>>>>> 2 36 0 0 0 0 1 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 1 0 0 0 0 0 1 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 79 6 0 79 79 0 2 0 > > >>>>>>> 25 0 0 25 26 0 6 0 > > >>>>>>> 43 18 0 39 46 0 23 0 > > >>>>>>> 36 0 0 36 36 0 31 0 > > >>>>>>> 68 1 0 66 68 0 0 0 > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > >>>>>>> 36 0 0 36 36 0 0 0 > > >>>>>>> 48 0 0 48 49 0 0 0 > > >>>>>>> 20 0 0 20 20 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 3 14 0 1 0 0 11 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 0 4 0 0 0 0 4 0 > > >>>>>>> 0 0 0 0 0 0 0 0 > > >>>>>>> 4 22 0 0 0 0 16 0 > > >>>>>>> 2 0 0 0 0 0 23 0 > > >>>>>>> > > >>>>>>> Regards, > > >>>>>>> > > >>>>>>> Loïc Blot, > > >>>>>>> UNIX Systems, Network and Security Engineer > > >>>>>>> http://www.unix-experience.fr > > >>>>>>> > > >>>>>>> 8 décembre 2014 09:36 "Loïc Blot" > > >>>>>>> a > > >>>>>>> écrit: > > >>>>>>>> Hi Rick, > > >>>>>>>> I stopped the jails this week-end and started it this > > >>>>>>>> morning, > > >>>>>>>> i'll > > >>>>>>>> give you some stats this week. > > >>>>>>>> > > >>>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks) > > >>> > > >>> > > >> > > > nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negna > > >>> > > >>>>>>>> > > >>> > > >>> > > >> > > > etimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retra > > >>> > > >>> s=2147483647 > > >>> > > >>> On server side my disks are on a raid controller which show a > > >>> 512b > > >>> volume and write performances > > >>> are very honest (dd if=/dev/zero of=/jails/test.dd bs=4096 > > >>> count=100000000 => 450MBps) > > >>> > > >>> Regards, > > >>> > > >>> Loïc Blot, > > >>> UNIX Systems, Network and Security Engineer > > >>> http://www.unix-experience.fr > > >>> > > >>> 5 décembre 2014 15:14 "Rick Macklem" a > > >>> écrit: > > >>> > > >>>> Loic Blot wrote: > > >>>> > > >>>>> Hi, > > >>>>> i'm trying to create a virtualisation environment based on > > >>>>> jails. > > >>>>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 > > >>>>> which > > >>>>> export a NFSv4 volume. This NFSv4 volume was mounted on a big > > >>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but only 1 > > >>>>> was > > >>>>> used at this time). > > >>>>> > > >>>>> The problem is simple, my hypervisors runs 6 jails (used 1% cpu > > >>>>> and > > >>>>> 10GB RAM approximatively and less than 1MB bandwidth) and works > > >>>>> fine at start but the system slows down and after 2-3 days > > >>>>> become > > >>>>> unusable. When i look at top command i see 80-100% on system > > >>>>> and > > >>>>> commands are very very slow. Many process are tagged with > > >>>>> nfs_cl*. > > >>>> > > >>>> To be honest, I would expect the slowness to be because of slow > > >>>> response > > >>>> from the NFSv4 server, but if you do: > > >>>> # ps axHl > > >>>> on a client when it is slow and post that, it would give us some > > >>>> more > > >>>> information on where the client side processes are sitting. > > >>>> If you also do something like: > > >>>> # nfsstat -c -w 1 > > >>>> and let it run for a while, that should show you how many RPCs > > >>>> are > > >>>> being done and which ones. > > >>>> > > >>>> # nfsstat -m > > >>>> will show you what your mount is actually using. 
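[A minimal sketch of gathering those three client-side snapshots (ps axHl, nfsstat -m, and a timed nfsstat -c -w 1) in one pass so they can be posted together; the output directory and the 60-second window are arbitrary assumptions, not anything specified in the thread.]

    #!/bin/sh
    # Collect client-side NFS diagnostics while the client is slow.
    # /var/tmp/nfs-diag and the 60-second sample are arbitrary choices.
    OUT=/var/tmp/nfs-diag
    mkdir -p "$OUT"

    # Where client processes are sleeping (look for nfs_cl* wait channels).
    ps axHl > "$OUT/ps-axHl.txt"

    # What the mount actually negotiated (rsize/wsize, timeouts, ...).
    nfsstat -m > "$OUT/nfsstat-m.txt"

    # RPC counts per second for one minute.
    nfsstat -c -w 1 > "$OUT/nfsstat-c-w1.txt" &
    SAMPLER=$!
    sleep 60
    kill "$SAMPLER"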
> > >>>> The only mount option I can suggest trying is > > >>>> "rsize=32768,wsize=32768", > > >>>> since some network environments have difficulties with 64K. > > >>>> > > >>>> There are a few things you can try on the NFSv4 server side, if > > >>>> it > > >>>> appears > > >>>> that the clients are generating a large RPC load. > > >>>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0 > > >>>> - If the server is seeing a large write RPC load, then > > >>>> "sync=disabled" > > >>>> might help, although it does run a risk of data loss when the > > >>>> server > > >>>> crashes. > > >>>> Then there are a couple of other ZFS related things (I'm not a > > >>>> ZFS > > >>>> guy, > > >>>> but these have shown up on the mailing lists). > > >>>> - make sure your volumes are 4K aligned and ashift=12 (in case a > > >>>> drive > > >>>> that uses 4K sectors is pretending to be 512byte sectored) > > >>>> - never run over 70-80% full if write performance is an issue > > >>>> - use a zil on an SSD with good write performance > > >>>> > > >>>> The only NFSv4 thing I can tell you is that it is known that > > >>>> ZFS's > > >>>> algorithm for determining sequential vs random I/O fails for > > >>>> NFSv4 > > >>>> during writing and this can be a performance hit. The only > > >>>> workaround > > >>>> is to use NFSv3 mounts, since file handle affinity apparently > > >>>> fixes > > >>>> the problem and this is only done for NFSv3. > > >>>> > > >>>> rick > > >>>> > > >>>>> I saw that there are TSO issues with igb then i'm trying to > > >>>>> disable > > >>>>> it with sysctl but the situation wasn't solved. > > >>>>> > > >>>>> Someone has got ideas ? I can give you more informations if you > > >>>>> need. > > >>>>> > > >>>>> Thanks in advance. > > >>>>> Regards, > > >>>>> > > >>>>> Loïc Blot, > > >>>>> UNIX Systems, Network and Security Engineer > > >>>>> http://www.unix-experience.fr > > >>>>> _______________________________________________ > > >>>>> freebsd-fs@freebsd.org mailing list > > >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > >>>>> To unsubscribe, send any mail to > > >>>>> "freebsd-fs-unsubscribe@freebsd.org" > > >>> > > >>> _______________________________________________ > > >>> freebsd-fs@freebsd.org mailing list > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > >>> To unsubscribe, send any mail to > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > >>> > > >>> _______________________________________________ > > >>> freebsd-fs@freebsd.org mailing list > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > >>> To unsubscribe, send any mail to > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > >>> > > >>> _______________________________________________ > > >>> freebsd-fs@freebsd.org mailing list > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > >>> To unsubscribe, send any mail to > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > >>> _______________________________________________ > > >>> freebsd-fs@freebsd.org mailing list > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > >>> To unsubscribe, send any mail to > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 10:18:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1F336E84 for ; Sat, 20 Dec 2014 10:18:20 +0000 (UTC) Received: 
from smtp.unix-experience.fr (195-154-176-227.rev.poneytelecom.eu [195.154.176.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D03EE216D for ; Sat, 20 Dec 2014 10:18:19 +0000 (UTC) Received: from smtp.unix-experience.fr (unknown [192.168.200.21]) by smtp.unix-experience.fr (Postfix) with ESMTP id 7483026F93; Sat, 20 Dec 2014 10:18:17 +0000 (UTC) X-Virus-Scanned: scanned by unix-experience.fr Received: from smtp.unix-experience.fr ([192.168.200.21]) by smtp.unix-experience.fr (smtp.unix-experience.fr [192.168.200.21]) (amavisd-new, port 10024) with ESMTP id iUo2YlxIL7XG; Sat, 20 Dec 2014 10:18:15 +0000 (UTC) Received: from Nerz-PC (AMontsouris-651-1-101-194.w82-123.abo.wanadoo.fr [82.123.244.194]) by smtp.unix-experience.fr (Postfix) with ESMTPSA id 7C2C326F7E; Sat, 20 Dec 2014 10:18:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=unix-experience.fr; s=uxselect; t=1419070695; bh=hMsdPwksAF8FsjbYennb2w6ChsSkOLjrcZkhONu965M=; h=Subject:From:To:Cc:Date:In-Reply-To:References; b=BgKoNu5cIAoDiHaVJ59YxPSRqQu5K2BNJdx3XzXAmmDpG8BZVvi9GYSOdpdBYmomU CbRrSl0DggDyix9l2Ky148suqGSrY403uWYkP1fULYdGjv3CboDvVbBY2nT9/DD5mh 5HjgplPQwQiqRu7yGgckfLZOa9lt57tmbXtbiYW0= Message-ID: <1419070695.4549.6.camel@unix-experience.fr> Subject: Re: High Kernel Load with nfsv4 From: =?ISO-8859-1?Q?Lo=EFc?= BLOT To: Dmitry Morozovsky Date: Sat, 20 Dec 2014 11:18:15 +0100 In-Reply-To: References: <766911003.8048587.1418095910736.JavaMail.root@uoguelph.ca> <1e19554bc0d4eb3e8dab74e2056b5ec4@mail.unix-experience.fr> Content-Type: text/plain; charset="UTF-8" X-Mailer: Evolution 3.12.8 Mime-Version: 1.0 Content-Transfer-Encoding: 8bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2014 10:18:20 -0000 Hi Dmitry, you mean less process improve performance ? -- Best regards, Loïc BLOT, UNIX systems, security and network engineer http://www.unix-experience.fr Le vendredi 19 décembre 2014 à 20:06 +0300, Dmitry Morozovsky a écrit : > Loic, > > On Wed, 10 Dec 2014, Lo?c Blot wrote: > > > Hi Rick, > > I'm trying NFSv3. > > Some jails are starting very well but now i have an issue with lockd after some minutes: > > > > nfs server 10.10.X.8:/jails: lockd not responding > > nfs server 10.10.X.8:/jails lockd is alive again > > > > I look at mbuf, but i seems there is no problem. > > > > Here is my rc.conf on server: > > > > nfs_server_enable="YES" > > nfsv4_server_enable="YES" > > nfsuserd_enable="YES" > > nfsd_server_flags="-u -t -n 256" > > just a random thought: are you sure you want so much nfsd threads? I suppose > lock contention could be easily involved here... 
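[For reference, the server-side knobs suggested earlier in this thread (disabling the TCP DRC, sync=disabled, checking ashift, turning TSO off on igb) can be tried roughly as below. This is only a sketch: it assumes the pool exported as /jails is simply named "jails" and that the NIC is igb0, neither of which is confirmed in the thread, and sync=disabled carries the data-loss risk already pointed out.]

    # Disable the NFS duplicate request cache for TCP, as suggested above.
    sysctl vfs.nfsd.cachetcp=0

    # Trade write durability for latency -- risks data loss on a server crash.
    # "jails" is an assumed pool/dataset name; use the dataset actually exported.
    zfs set sync=disabled jails

    # Check vdev alignment: ashift=12 means 4K-aligned vdevs.
    zdb -C jails | grep ashift

    # Disable TSO on the igb interface ("igb0" is an assumed name).
    ifconfig igb0 -tso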
> > [snip] > > > -- > Sincerely, > D.Marck [DM5020, MCK-RIPE, DM3-RIPN] > [ FreeBSD committer: marck@FreeBSD.org ] > ------------------------------------------------------------------------ > *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** > ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 13:44:07 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 60E12A29 for ; Sat, 20 Dec 2014 13:44:07 +0000 (UTC) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B5BBB274B for ; Sat, 20 Dec 2014 13:44:06 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id sBKAvx0i017618; Sat, 20 Dec 2014 13:57:59 +0300 (MSK) (envelope-from marck@rinet.ru) Date: Sat, 20 Dec 2014 13:57:59 +0300 (MSK) From: Dmitry Morozovsky To: =?ISO-8859-15?Q?Lo=EFc_BLOT?= Subject: Re: High Kernel Load with nfsv4 In-Reply-To: <1419070695.4549.6.camel@unix-experience.fr> Message-ID: References: <766911003.8048587.1418095910736.JavaMail.root@uoguelph.ca> <1e19554bc0d4eb3e8dab74e2056b5ec4@mail.unix-experience.fr> <1419070695.4549.6.camel@unix-experience.fr> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (woozle.rinet.ru [0.0.0.0]); Sat, 20 Dec 2014 13:58:14 +0300 (MSK) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2014 13:44:07 -0000 On Sat, 20 Dec 2014, Lo?c BLOT wrote: > Hi Dmitry, > you mean less process improve performance ? I suppose yes, especially if you have small number of concurrently-accessing yout NFS server clients and/or client processes. Default of 4, of course, is ridiculously low nowadays; however, I'll start with, say, 4 per CPU core you have, as someone mentioned previously. 
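[A quick sketch of that rule of thumb (about 4 nfsd threads per CPU core) on the server; as Dmitry says, this is only a starting point, not a definitive setting.]

    #!/bin/sh
    # Size nfsd at roughly 4 threads per CPU core instead of a flat 256.
    NCPU=$(sysctl -n hw.ncpu)
    THREADS=$((NCPU * 4))
    echo "suggested rc.conf line: nfs_server_flags=\"-u -t -n ${THREADS}\""
    # After updating /etc/rc.conf accordingly:
    #   service nfsd restart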
-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 14:32:40 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 84CA0502 for ; Sat, 20 Dec 2014 14:32:40 +0000 (UTC) Received: from mail-wi0-x22a.google.com (mail-wi0-x22a.google.com [IPv6:2a00:1450:400c:c05::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 09CA62CB7 for ; Sat, 20 Dec 2014 14:32:40 +0000 (UTC) Received: by mail-wi0-f170.google.com with SMTP id bs8so7103941wib.1 for ; Sat, 20 Dec 2014 06:32:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=IvWIhAf3Gw+1MFe9c2VBxys9xpOOWWtuwoWKXx0Elf4=; b=o4ZHDjT3PWOqC+p87kQfFQMVpDrByTCOV6nU7SByOyMSMf4DC+071FE2RU97J7+OCE I6YeLSNmNZKGQfPjXuNJZ8GPQ1+1MlrGQreCsjPt7I9mKCuP6xkI6VwK3HOqsHdRTc7m QpBiI2Dz8G8c6MY0jL7P307hKKON/VukZoZhxYTBrFtrUqatm7Fq27SIE+tPaAJcJ6aC L/Mk0UvHaICgXTRaIY2R92AAy1eTxWN+GL31lYa9YjcTzkkXp31PYADmg+lwqUVz8dHa ywxR8VWMgKkMmOXto67J8XYQR7zB253a5jWv5hEif1MRqxkxr2a6MnjmChq3XEkPeIDf 9ENA== MIME-Version: 1.0 X-Received: by 10.180.80.163 with SMTP id s3mr14687108wix.59.1419085958467; Sat, 20 Dec 2014 06:32:38 -0800 (PST) Received: by 10.27.177.218 with HTTP; Sat, 20 Dec 2014 06:32:38 -0800 (PST) In-Reply-To: <54267FD3.2080603@fsn.hu> References: <542560C1.9070207@fsn.hu> <54267FD3.2080603@fsn.hu> Date: Sat, 20 Dec 2014 15:32:38 +0100 Message-ID: Subject: Re: 16 exabytes of L2ARC? 
From: Nikolay Denev To: "Nagy, Attila" Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2014 14:32:40 -0000 On Sat, Sep 27, 2014 at 11:13 AM, Nagy, Attila wrote: > On 09/26/14 14:49, Nagy, Attila wrote: > >> Hi, >> >> Running stable/10@r271944: >> # zpool iostat -v >> capacity operations bandwidth >> pool alloc free read write read write >> ---------- ----- ----- ----- ----- ----- ----- >> data 17.3T 40.7T 165 1.24K 1.63M 90.8M >> da0 4.31T 10.2T 41 318 418K 22.7M >> da1 4.32T 10.2T 41 317 416K 22.7M >> da2 4.32T 10.2T 41 317 416K 22.7M >> da3 4.31T 10.2T 41 317 418K 22.7M >> cache - - - - - - >> ada0 513G 16.0E 222 179 1.05M 2.79M >> ada1 511G 16.0E 222 180 1.05M 2.80M >> ---------- ----- ----- ----- ----- ----- ----- >> >> # egrep 'ada.*MB' /var/run/dmesg.boot >> ada0: 600.000MB/s transfers (SATA 3.x, UDMA5, PIO 512bytes) >> ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C) >> ada1: 600.000MB/s transfers (SATA 3.x, UDMA5, PIO 512bytes) >> ada1: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C) >> > I've removed the cache devices and re-added them, now it's fine: > cache - - - - - - > ada0 355M 372G 24 0 151K 0 > ada1 345M 372G 12 505 71.7K 2.79M > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > Hi, Did you figure out the root cause of this? I seem to be having the same issue: [14:31][root@nas:~]#zpool iostat -v | grep -A1 cache cache - - - - - - ada4p1 215G 16.0E 9 3 107K 322K uname -a : FreeBSD nas.home.lan 10.1-STABLE FreeBSD 10.1-STABLE #14 r274549: Sat Nov 15 14:43:56 UTC 2014 root@nas.home.lan:/usr/obj/usr/src/sys/NAS amd64 --Nikolay
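[For anyone hitting the same 16.0E cache-size display, the workaround Attila describes above is simply to drop and re-add the L2ARC devices. A sketch using the pool and device names from his zpool iostat output; substitute the real ones (e.g. ada4p1 in Nikolay's case).]

    # Remove the L2ARC (cache) devices from the pool, then add them back.
    # Pool "data" and devices ada0/ada1 are taken from the iostat output above.
    zpool remove data ada0 ada1
    zpool add data cache ada0 ada1

    # The cache devices should now report a sane capacity again.
    zpool iostat -v data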