From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 11:16:39 2015
Date: Mon, 05 Jan 2015 11:16:21 +0000
From: "Loïc Blot"
Subject: Re: High Kernel Load with nfsv4
To: "Rick Macklem"
Cc: freebsd-fs@freebsd.org
List-Id: Filesystems

Hi,
happy new year Rick and @freebsd-fs.

After some days I looked at my NFSv4.1 mount. At server start it was calm, but after 4 days, here is the top stat:

CPU: 0.0% user, 0.0% nice, 100% system, 0.0% interrupt, 0.0% idle

Definitely I think it's a problem on the client side. What can I look at in the running kernel to resolve this issue?

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

On 30 December 2014 at 16:16, "Loïc Blot" wrote:

> Hi Rick,
> I upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFSv4.1 (mount options:
> rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1)
>
> Performance is quite stable but it's slow. Not as slow as before, but slow... Services were launched,
> but no clients are using them and system CPU was 10-50%.
>
> I don't see anything on the NFSv4.1 server; it's perfectly stable and functional.
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> On 23 December 2014 at 00:20, "Rick Macklem" wrote:
>
>> Loic Blot wrote:
>>
>>> Hi,
>>>
>>> To clarify because of our exchanges.
>>> Here are the current sysctl options for the server:
>>>
>>> vfs.nfsd.enable_nobodycheck=0
>>> vfs.nfsd.enable_nogroupcheck=0
>>>
>>> vfs.nfsd.maxthreads=200
>>> vfs.nfsd.tcphighwater=10000
>>> vfs.nfsd.tcpcachetimeo=300
>>> vfs.nfsd.server_min_nfsvers=4
>>>
>>> kern.maxvnodes=10000000
>>> kern.ipc.maxsockbuf=4194304
>>> net.inet.tcp.sendbuf_max=4194304
>>> net.inet.tcp.recvbuf_max=4194304
>>>
>>> vfs.lookup_shared=0
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> On 22 December 2014 at 09:42, "Loïc Blot" wrote:
>>>
>>> Hi Rick,
>>> my 5 jails ran this weekend and now I have some stats this Monday.
>>>
>>> Hopefully the deadlock was fixed, yeah, but everything isn't good :(
>>>
>>> On the NFSv4 server (FreeBSD 10.1) the system uses 35% CPU.
>>>
>>> As far as I can see, this is because of nfsd:
>>>
>>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H
>>> 273.68% nfsd: server (nfsd)
>>>
>>> If I look at dmesg I see:
>>> nfsd server cache flooded, try increasing vfs.nfsd.tcphighwater
>>
>> Well, you have a couple of choices:
>>
>> 1 - Use NFSv4.1 (add "minorversion=1" to your mount options).
>> (NFSv4.1 avoids use of the DRC and instead uses something
>> called sessions. See below.)
>>
>> OR
>>
>>> vfs.nfsd.tcphighwater was set to 10000, I increased it to 15000
>>
>> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see
>> "nfs server cache flooded" messages. (I think Garrett Wollman uses
>> 100000.) (You may still see quite a bit of CPU overhead.)
>>
>> OR
>>
>> 3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets rid
>> of the CPU overhead).
>> However, there is a risk of data corruption
>> if you have a client->server network partition of moderate
>> duration, because a non-idempotent RPC may get redone, because
>> the client times out waiting for a reply. If a non-idempotent
>> RPC gets done twice on the server, data corruption can happen.
>> (The DRC provides improved correctness, but does add overhead.)
>>
>> If #1 works for you, it is the preferred solution, since sessions
>> in NFSv4.1 solve the correctness problem in a good, space-bound
>> way. A session basically has N (usually 32 or 64) slots and only
>> allows one outstanding RPC per slot. As such, it can cache the
>> previous reply for each slot (32 or 64 of them) and guarantee
>> "exactly once" RPC semantics.
>>
>> rick
>>
>>> Here is 'nfsstat -s' output:
>>>
>>> Server Info:
>>> Getattr   Setattr  Lookup   Readlink  Read     Write    Create    Remove
>>> 12600652  1812     2501097  156       1386423  1983729  123       162067
>>> Rename    Link     Symlink  Mkdir     Rmdir    Readdir  RdirPlus  Access
>>> 36762     9        0        0         0        3147     0         623524
>>> Mknod     Fsstat   Fsinfo   PathConf  Commit
>>> 0         0        0        0         328117
>>> Server Ret-Failed
>>> 0
>>> Server Faults
>>> 0
>>> Server Cache Stats:
>>> Inprog    Idem     Non-idem  Misses
>>> 0         0        0         12635512
>>> Server Write Gathering:
>>> WriteOps  WriteRPC  Opsaved
>>> 1983729   1983729   0
>>>
>>> And here is 'procstat -kk' for nfsd (server):
>>>
>>> 918 100528 nfsd nfsd: master mi_switch+0xe1
>>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
>>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de
>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>> amd64_syscall+0x351 Xfast_syscall+0xfb
>>> 918 100568 nfsd nfsd: service mi_switch+0xe1
>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>> fork_trampoline+0xe
>>> 918 100569 nfsd nfsd: service mi_switch+0xe1
>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>> fork_trampoline+0xe
>>> [threads 100570 through 100662 repeat the same idle "nfsd: service" stack]
>>> ---
>>>
>>> Now if we look at the client (FreeBSD 9.3):
>>>
>>> We see the system was very busy and did many, many interrupts:
>>>
>>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0% idle
>>>
>>> A look at the process list shows that there are many sendmail processes in
>>> state nfstry:
>>>
>>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for /var/spool/clientm
>>>
>>> Here is 'nfsstat -c' output:
>>>
>>> Client Info:
>>> Rpc Counts:
>>> Getattr    Setattr  Lookup     Readlink  Read       Write    Create     Remove
>>> 1051347    1724     2494481    118       903902     1901285  162676     161899
>>> Rename     Link     Symlink    Mkdir     Rmdir      Readdir  RdirPlus   Access
>>> 36744      2        0          114       40         3131     0          544136
>>> Mknod      Fsstat   Fsinfo     PathConf  Commit
>>> 9          0        0          0         245821
>>> Rpc Info:
>>> TimedOut   Invalid  X Replies  Retries   Requests
>>> 0          0        0          0         8356557
>>> Cache Info:
>>> Attr Hits  Misses   Lkup Hits  Misses    BioR Hits  Misses   BioW Hits  Misses
>>> 108754455  491475   54229224   2437229   46814561   821723   5132123    1871871
>>> BioRLHits  Misses   BioD Hits  Misses    DirE Hits  Misses   Accs Hits  Misses
>>> 144035     118      53736      2753      27813      1        57238839   544205
>>>
>>> If you need more things, tell me; I'll leave the PoC in this state.
>>>
>>> Thanks
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> On 21 December 2014 at 01:33, "Rick Macklem" wrote:
>>>
>>> Loic Blot wrote:
>>>
>>>> Hi Rick,
>>>> OK, I don't need local locks; I hadn't understood the option was for
>>>> that usage, so I removed it.
>>>> I'll do more tests on Monday.
>>>> Thanks for the deadlock fix, for other people :)
>>>
>>> Good. Please let us know if running with vfs.nfsd.enable_locallocks=0
>>> gets rid of the deadlocks? (I think it fixes the one you saw.)
>>>
>>> On the performance side, you might also want to try different values of
>>> readahead, if the Linux client has such a mount option. (With the
>>> NFSv4-ZFS sequential vs random I/O heuristic, I have no idea what the
>>> optimal readahead value would be.)
>>>
>>> Good luck with it and please let us know how it goes, rick
>>> ps: I now have a patch to fix the deadlock when
>>> vfs.nfsd.enable_locallocks=1
>>> is set. I'll post it for anyone who is interested after I put it
>>> through some testing.
>>>
>>> --
>>> Best regards,
>>> Loïc BLOT,
>>> UNIX systems, security and network engineer
>>> http://www.unix-experience.fr
>>>
>>> On Thursday, 18 December 2014 at 19:46 -0500, Rick Macklem wrote:
>>>
>>> Loic Blot wrote:
>>>> Hi Rick,
>>>> I tried to start an LXC container on Debian Squeeze from my FreeBSD
>>>> ZFS+NFSv4 server and I also have a deadlock on nfsd
>>>> (vfs.lookup_shared=0). The deadlock occurs each time I launch a
>>>> Squeeze container, it seems (3 tries, 3 fails).
>>>
>>> Well, I'll take a look at this 'procstat -kk', but the only thing
>>> I've seen posted w.r.t. avoiding deadlocks in ZFS is to not use
>>> nullfs.
(I have no idea if you are using any nullfs mounts, but
>>> if so, try getting rid of them.)
>>> 
>>> Here's a high-level post about the ZFS and vnode locking problem,
>>> but there is no patch available, as far as I know.
>>> 
>>> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
>>> 
>>> rick
>>> 
>>> 921 - D 0:00.02 nfsd: server (nfsd)
>>> 
>>> Here is the procstat -kk output:
>>> 
>>> PID TID COMM TDNAME KSTACK
>>> 921 100538 nfsd nfsd: master mi_switch+0xe1
>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
>>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>> 921 100572 nfsd nfsd: service mi_switch+0xe1
>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>> fork_trampoline+0xe
>>> [TIDs 100573-100615 repeat this same idle stack verbatim]
>>> 921 100616 nfsd nfsd: service mi_switch+0xe1
>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>> 921 100617 nfsd nfsd: service [same idle stack as 100572]
>>> 921 100618 nfsd nfsd: service mi_switch+0xe1
>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>> [TIDs 100619-100665 repeat the idle stack verbatim]
>>> 921 100666 nfsd nfsd: service mi_switch+0xe1
>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>> fork_exit+0x9a fork_trampoline+0xe
>>> 
>>> Regards,
>>> 
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>> 
>>> On 15 December 2014 at 15:18, "Rick Macklem" wrote:
>>> 
>>> Loic Blot wrote:
>>> 
>>>> For more information, here is procstat -kk on nfsd; if you need
>>>> more live data, tell me.
>>>> 
>>>> Regards, PID TID COMM TDNAME KSTACK
>>>> 918 100529 nfsd nfsd: master mi_switch+0xe1
>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>> zfs_fhtovp+0x38d
>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de
>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>> amd64_syscall+0x351
>>> 
>>> Well, most of the threads are stuck like this one, waiting for a
>>> vnode lock in ZFS. All of them appear to be in zfs_fhtovp().
>>> I'm not a ZFS guy, so I can't help much. I'll try changing the
>>> subject line to include ZFS vnode lock, so maybe the ZFS guys will
>>> take a look.
>>> 
>>> The only thing I've seen suggested is trying:
>>> sysctl vfs.lookup_shared=0
>>> to disable shared vop_lookup()s.
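For anyone wanting to try Rick's suggestion, the sysctl can be set at runtime and persisted across reboots through /etc/sysctl.conf. A sketch of the standard FreeBSD mechanism (run as root; whether it actually helps on a given workload is exactly what is being tested in this thread), kept as a configuration fragment:

```shell
# Disable shared vop_lookup()s immediately (Rick's suggested workaround):
sysctl vfs.lookup_shared=0

# Persist the setting across reboots:
echo 'vfs.lookup_shared=0' >> /etc/sysctl.conf

# Verify the current value:
sysctl -n vfs.lookup_shared
```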
Apparently zfs_lookup()
>>> doesn't obey the vnode locking rules for lookup and rename,
>>> according to the posting I saw.
>>> 
>>> I've added a couple of comments about the other threads below, but
>>> they are all either waiting for an RPC request or waiting for the
>>> threads stuck on the ZFS vnode lock to complete.
>>> 
>>> rick
>>> 
>>>> 918 100564 nfsd nfsd: service mi_switch+0xe1
>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>> fork_trampoline+0xe
>>> 
>>> Fyi, this thread is just waiting for an RPC to arrive. (Normal)
>>> 
>>>> [TIDs 100565-100570 repeat this same idle stack verbatim]
>>>> 918 100571 nfsd nfsd: service mi_switch+0xe1
>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>> 918 100572 nfsd nfsd: service mi_switch+0xe1
>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>>> fork_exit+0x9a fork_trampoline+0xe
>>> 
>>> This one (and a few others) are waiting for the nfsv4_lock. This
>>> happens because other threads are stuck with RPCs in progress (i.e.,
>>> the ones waiting on the vnode lock in zfs_fhtovp()). For these, the
>>> RPC needs to lock out other threads to do the operation, so it
>>> waits for the nfsv4_lock(), which can exclusively lock the NFSv4
>>> data structures once all other nfsd threads complete their RPCs in
>>> progress.
>>> 
>>>> 918 100573 nfsd nfsd: service mi_switch+0xe1
>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>> 
>>> Same as above.
>>> 
>>>> 918 100574 nfsd nfsd: service mi_switch+0xe1
>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>> zfs_fhtovp+0x38d
>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>>> fork_exit+0x9a fork_trampoline+0xe
>>>> [TIDs 100575-100605 repeat this same zfs_fhtovp stack verbatim]
>>>> 918 100606 nfsd nfsd: service mi_switch+0xe1
>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>> 
vop_stdlock+0x3c VOP_LOCK1_= APV+0xab _vn_lock+0x43=0A>>>> zfs_fhtovp+0x38d=0A>>>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>> nfssvc_program+0x554 svc_run_i= nternal+0xc77=0A>>>> svc_thread_start+0xb=0A>>>> fork_exit+0x9a fork_tram= poline+0xe=0A>>>> 918 100607 nfsd nfsd: service mi_switch+0xe1=0A>>>> sle= epq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>> vop_stdlock+0x3c = VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>> zfs_fhtovp+0x38d=0A>>>> nfsvno_fh= tovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>> nfssvc_program+0x554= svc_run_internal+0xc77=0A>>>> svc_thread_start+0xb=0A>>>> fork_exit+0x9a= fork_trampoline+0xe=0A>>> =0A>>> Lots more waiting for the ZFS vnode loc= k in zfs_fhtovp().=0A>>> =0A>>> 918 100608 nfsd nfsd: service mi_switch+0= xe1=0A>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A= >>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1=0A>>> = nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_= thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100609 nfsd= nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lock= mgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A= >>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dor= pc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thre= ad_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100610 nf= sd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0xc9e=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43= =0A>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad=0A>>= > nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554=0A>>> svc_ru= n_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a=0A>>> fork_trampolin= e+0xe=0A>>> 918 100611 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wai= t+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>> nfsrvd_dorpc+0x3= 16 nfssvc_program+0x554 
svc_run_internal+0xc77=0A>>> svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100612 nfsd nfsd: service m= i_switch+0xe1=0A>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lo= ck+0x9b=0A>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0x= c77=0A>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>>> 9= 18 100613 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a _sleep= +0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>> nfsrvd_dorpc+0x316 nfssvc_pro= gram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe=0A>>> 918 100614 nfsd nfsd: service mi_switch+0xe1= =0A>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>= nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100615 nfs= d nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a _sleep+0x287 nfsmsl= eep+0x66 nfsv4_lock+0x9b=0A>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 sv= c_run_internal+0xc77=0A>>> svc_thread_start+0xb fork_exit+0x9a fork_tramp= oline+0xe=0A>>> 918 100616 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq= _wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>> nfsrvd_dorpc= +0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start= +0xb fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100617 nfsd nfsd: servi= ce mi_switch+0xe1=0A>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv= 4_lock+0x9b=0A>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>= >> 918 100618 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a _s= leep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>> nfsrvd_dorpc+0x316 nfssvc= _program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb fork_exi= t+0x9a fork_trampoline+0xe=0A>>> 918 100619 nfsd nfsd: service mi_switch+= 0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_= 
stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>= > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_pro= gram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_e= xit+0x9a fork_trampoline+0xe=0A>>> 918 100620 nfsd nfsd: service mi_switc= h+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vo= p_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A= >>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_p= rogram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork= _exit+0x9a fork_trampoline+0xe=0A>>> 918 100621 nfsd nfsd: service mi_swi= tch+0xe1=0A>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x= 9b=0A>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77= =0A>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 = 100622 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0= x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_l= ock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8= nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>= >> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 91= 8 100623 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a _sleep+= 0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>> nfsrvd_dorpc+0x316 nfssvc_prog= ram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb fork_exit+0x9= a fork_trampoline+0xe=0A>>> 918 100624 nfsd nfsd: service mi_switch+0xe1= =0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdl= ock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nf= svno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program= +0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>> 918 100625 nfsd nfsd: service mi_switch+0x= e1=0A>>> sleepq_wait+0x3a sleeplk+0x15d 
__lockmgr_args+0x902=0A>>> vop_st= dlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> = nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_progr= am+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exi= t+0x9a fork_trampoline+0xe=0A>>> 918 100626 nfsd nfsd: service mi_switch+= 0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_= stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>= > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_pro= gram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_e= xit+0x9a fork_trampoline+0xe=0A>>> 918 100627 nfsd nfsd: service mi_switc= h+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vo= p_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A= >>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_p= rogram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork= _exit+0x9a fork_trampoline+0xe=0A>>> 918 100628 nfsd nfsd: service mi_swi= tch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d= =0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssv= c_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> f= ork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100629 nfsd nfsd: service mi_= switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>= >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x3= 8d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfs= svc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>>= fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100630 nfsd nfsd: service m= i_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902= =0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp= +0x38d=0A>>> 
nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>= nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb= =0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100631 nfsd nfsd: ser= vice mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0= x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fh= tovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start= +0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100632 nfsd nfsd:= service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_ar= gs+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zf= s_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_sta= rt+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100633 nfsd nfs= d: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> = zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0= x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_s= tart+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100634 nfsd n= fsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmg= r_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= > zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc= +0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread= _start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100635 nfsd= nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lock= mgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A= >>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dor= pc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> 
svc_thre= ad_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100636 nf= sd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43= =0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_= dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc_t= hread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 100637= nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d _= _lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x= 43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> svc= _thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 1006= 38 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d= __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+= 0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfs= rvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>> s= vc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 10= 0639 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x1= 5d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_loc= k+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 n= fsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>= svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 918 = 100640 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0= x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_l= ock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8= nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>= >> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>> 91= 8 100641 nfsd nfsd: service 
mi_switch+0xe1=0A>>> sleepq_wait+0x3a sleeplk= +0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0x= c8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc77= =0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A>>= > 918 100642 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab= _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fhtov= p+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0xc= 77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe=0A= >>> 918 100643 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3a s= leeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV+0x= ab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_fht= ovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_internal+0= xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0xe= =0A>>> 918 100644 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0x3= a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_APV= +0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfsd_= fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline+0= xe=0A>>> 918 100645 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait+0= x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1_A= PV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c nfs= d_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_inter= nal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoline= +0xe=0A>>> 918 100646 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wait= +0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOCK1= 
_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c n= fsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_int= ernal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>> 918 100647 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_wa= it+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x7c= nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run_i= nternal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_trampo= line+0xe=0A>>> 918 100648 nfsd nfsd: service mi_switch+0xe1=0A>>> sleepq_= wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP_L= OCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+0x= 7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_run= _internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_tram= poline+0xe=0A>>> 918 100649 nfsd nfsd: service mi_switch+0xe1=0A>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c VOP= _LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtovp+= 0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc_r= un_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_tr= ampoline+0xe=0A>>> 918 100650 nfsd nfsd: service mi_switch+0xe1=0A>>> sle= epq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fhtov= p+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554 svc= _run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a fork_= trampoline+0xe=0A>>> 918 100651 nfsd nfsd: service mi_switch+0xe1=0A>>> s= leepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x3c= VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_fht= ovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> 
nfssvc_program+0x554 s= vc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a for= k_trampoline+0xe=0A>>> 918 100652 nfsd nfsd: service mi_switch+0xe1=0A>>>= sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+0x= 3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x554= svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a f= ork_trampoline+0xe=0A>>> 918 100653 nfsd nfsd: service mi_switch+0xe1=0A>= >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdlock+= 0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nfsvno= _fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+0x9a= fork_trampoline+0xe=0A>>> 918 100654 nfsd nfsd: service mi_switch+0xe1= =0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_stdl= ock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> nf= svno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_program= +0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>> 918 100655 nfsd nfsd: service mi_switch+0x= e1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_st= dlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>> = nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_progr= am+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_exi= t+0x9a fork_trampoline+0xe=0A>>> 918 100656 nfsd nfsd: service mi_switch+= 0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vop_= stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A>>= > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_pro= gram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork_e= xit+0x9a 
fork_trampoline+0xe=0A>>> 918 100657 nfsd nfsd: service mi_switc= h+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> vo= p_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d=0A= >>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssvc_p= rogram+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> fork= _exit+0x9a fork_trampoline+0xe=0A>>> 918 100658 nfsd nfsd: service mi_swi= tch+0xe1=0A>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>> zfs_fhtovp+0x38d= =0A>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>> nfssv= c_program+0x554 svc_run_internal+0xc77=0A>>> svc_thread_start+0xb=0A>>> f= ork_exit+0x9a fork_trampoline+0xe=0A>>> =0A>>> Lo=C3=AFc Blot,=0A>>> UNIX= Systems, Network and Security Engineer=0A>>> http://www.unix-experience.= fr=0A>>> =0A>>> 15 d=C3=A9cembre 2014 13:29 "Lo=C3=AFc Blot"=0A>>> =0A>>> a=0A>>> =C3=A9crit:=0A>>> =0A>>> Hmmm...= =0A>>> now i'm experiencing a deadlock.=0A>>> =0A>>> 0 918 915 0 21 0 123= 52 3372 zfs D - 1:48.64 nfsd: server=0A>>> (nfsd)=0A>>> =0A>>> the only i= ssue was to reboot the server, but after rebooting=0A>>> deadlock arrives= a second time when i=0A>>> start my jails over NFS.=0A>>> =0A>>> Regards= ,=0A>>> =0A>>> Lo=C3=AFc Blot,=0A>>> UNIX Systems, Network and Security E= ngineer=0A>>> http://www.unix-experience.fr=0A>>> =0A>>> 15 d=C3=A9cembre= 2014 10:07 "Lo=C3=AFc Blot"=0A>>> =0A>>> a= =0A>>> =C3=A9crit:=0A>>> =0A>>> Hi Rick,=0A>>> after talking with my N+1,= NFSv4 is required on our=0A>>> infrastructure.=0A>>> I tried to upgrade = NFSv4+ZFS=0A>>> server from 9.3 to 10.1, i hope this will resolve some=0A= >>> issues...=0A>>> =0A>>> Regards,=0A>>> =0A>>> Lo=C3=AFc Blot,=0A>>> UN= IX Systems, Network and Security Engineer=0A>>> http://www.unix-experienc= e.fr=0A>>> =0A>>> 10 d=C3=A9cembre 2014 15:36 "Lo=C3=AFc Blot"=0A>>> =0A>>> a=0A>>> =C3=A9crit:=0A>>> =0A>>> Hi Rick= ,=0A>>> 
>>> thanks for your suggestion.
>>> For my locking bug, rpc.lockd is stuck in the rpcrecv state on the
>>> server. kill -9 doesn't affect the process; it's blocked... (State: Ds)
>>>
>>> For performance:
>>>
>>> NFSv3: 60Mbps
>>> NFSv4: 45Mbps
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> 10 December 2014 13:56, "Rick Macklem" wrote:
>>>
>>> Loic Blot wrote:
>>>
>>>> Hi Rick,
>>>> I'm trying NFSv3.
>>>> Some jails start very well, but now I have an issue with lockd
>>>> after some minutes:
>>>>
>>>> nfs server 10.10.X.8:/jails: lockd not responding
>>>> nfs server 10.10.X.8:/jails lockd is alive again
>>>>
>>>> I looked at mbufs, but it seems there is no problem.
>>>
>>> Well, if you need locks to be visible across multiple clients, then
>>> I'm afraid you are stuck with using NFSv4 and the performance you
>>> get from it. (There is no way to do file handle affinity for NFSv4
>>> because the read and write ops are buried in the compound RPC and
>>> not easily recognized.)
>>>
>>> If the locks don't need to be visible across multiple clients, I'd
>>> suggest trying the "nolockd" option with nfsv3.
>>>
>>>> Here is my rc.conf on the server:
>>>>
>>>> nfs_server_enable="YES"
>>>> nfsv4_server_enable="YES"
>>>> nfsuserd_enable="YES"
>>>> nfsd_server_flags="-u -t -n 256"
>>>> mountd_enable="YES"
>>>> mountd_flags="-r"
>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>> rpcbind_enable="YES"
>>>> rpc_lockd_enable="YES"
>>>> rpc_statd_enable="YES"
>>>>
>>>> Here is the client:
>>>>
>>>> nfsuserd_enable="YES"
>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>> nfscbd_enable="YES"
>>>> rpc_lockd_enable="YES"
>>>> rpc_statd_enable="YES"
>>>>
>>>> Have you got an idea?
>>>>
>>>> Regards,
>>>>
>>>> Loïc Blot,
>>>> UNIX Systems, Network and Security Engineer
>>>> http://www.unix-experience.fr
>>>>
>>>> 9 December 2014 04:31, "Rick Macklem" wrote:
>>>>> Loic Blot wrote:
>>>>>
>>>>>> Hi Rick,
>>>>>>
>>>>>> I waited 3 hours (no lag at jail launch) and then I ran:
>>>>>> sysrc memcached_flags="-v -m 512"
>>>>>> The command was very, very slow...
>>>>>>
>>>>>> Here is a dd over NFS:
>>>>>>
>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
>>>>>
>>>>> Can you try the same read using an NFSv3 mount?
>>>>> (If it runs much faster, you have probably been bitten by the ZFS
>>>>> "sequential vs random" read heuristic, which I've been told thinks
>>>>> NFS is doing "random" reads without file handle affinity. File
>>>>> handle affinity is very hard to do for NFSv4, so it isn't done.)
>>>
>>> I was actually suggesting that you try the "dd" over nfsv3 to see
>>> how the performance compares with nfsv4. If you do that, please post
>>> the comparable results.
>>>
>>> Someday I would like to try to get ZFS's sequential vs random read
>>> heuristic modified, and any info on what difference in performance
>>> that might make for NFS would be useful.
>>>
>>> rick
>>>
>>> This is quite slow...
>>>
>>> You can find some nfsstat output below (the command isn't finished yet):
>>>
>>> nfsstat -c -w 1
>>>
>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>> 0 0 0 0 0 0 0 0
>>> 4 0 0 0 0 0 16 0
>>> 2 0 0 0 0 0 17 0
>>> 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0
>>> 0 4 0 0 0 0 4 0
>>> 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0
>>> 4 0 0 0 0 0 3 0
>>> 0 0 0 0 0 0 3 0
>>> 37 10 0 8 0 0 14 1
>>> 18 16 0 4 1 2 4 0
>>> 78 91 0 82 6 12 30 0
>>> 19 18 0 2 2 4 2 0
>>> 0 0 0 0 2 0 0 0
>>> 0 0 0 0 0 0 0 0
>>> [ many more one-second samples omitted: mostly all-zero rows, with
>>> occasional bursts of Read/Write activity such as
>>> 98 54 0 86 11 0 25 0 and 75 0 0 75 77 0 0 0 ]
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> 8 December 2014 09:36, "Loïc Blot" wrote:
>>>
>>>> Hi Rick,
>>>> I stopped the jails this week-end and started them again this
>>>> morning; I'll give you some stats this week.
>>>>
>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
>>>
>>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,
>>> acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,
>>> readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,
>>> retrans=2147483647
>>>
>>> On the server side my disks are behind a RAID controller which
>>> presents a 512b volume, and write performance is quite honest
>>> (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps).
>>>
>>> Regards,
>>>
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>>
>>> 5 December 2014 15:14, "Rick Macklem" wrote:
>>>
>>> Loic Blot wrote:
>>>
>>> Hi,
>>> I'm trying to create a virtualisation environment based on jails.
>>> The jails are stored in a big ZFS pool on a FreeBSD 9.3 server which
>>> exports an NFSv4 volume.
This NFSv4 volume was mounted on a=0A>>> = big=0A>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but=0A>>> o= nly 1=0A>>> was=0A>>> used at this time).=0A>>> =0A>>> The problem is sim= ple, my hypervisors runs 6 jails (used 1%=0A>>> cpu=0A>>> and=0A>>> 10GB = RAM approximatively and less than 1MB bandwidth) and=0A>>> works=0A>>> fi= ne at start but the system slows down and after 2-3 days=0A>>> become=0A>= >> unusable. When i look at top command i see 80-100% on=0A>>> system=0A>= >> and=0A>>> commands are very very slow. Many process are tagged with=0A= >>> nfs_cl*.=0A>>> =0A>>> To be honest, I would expect the slowness to be= because of=0A>>> slow=0A>>> response=0A>>> from the NFSv4 server, but if= you do:=0A>>> # ps axHl=0A>>> on a client when it is slow and post that,= it would give us=0A>>> some=0A>>> more=0A>>> information on where the cl= ient side processes are sitting.=0A>>> If you also do something like:=0A>= >> # nfsstat -c -w 1=0A>>> and let it run for a while, that should show y= ou how many=0A>>> RPCs=0A>>> are=0A>>> being done and which ones.=0A>>> = =0A>>> # nfsstat -m=0A>>> will show you what your mount is actually using= .=0A>>> The only mount option I can suggest trying is=0A>>> "rsize=3D3276= 8,wsize=3D32768",=0A>>> since some network environments have difficulties= with 64K.=0A>>> =0A>>> There are a few things you can try on the NFSv4 s= erver side,=0A>>> if=0A>>> it=0A>>> appears=0A>>> that the clients are ge= nerating a large RPC load.=0A>>> - disabling the DRC cache for TCP by set= ting=0A>>> vfs.nfsd.cachetcp=3D0=0A>>> - If the server is seeing a large = write RPC load, then=0A>>> "sync=3Ddisabled"=0A>>> might help, although i= t does run a risk of data loss when=0A>>> the=0A>>> server=0A>>> crashes.= =0A>>> Then there are a couple of other ZFS related things (I'm not=0A>>>= a=0A>>> ZFS=0A>>> guy,=0A>>> but these have shown up on the mailing list= s).=0A>>> - make sure your volumes are 4K aligned and ashift=3D12 (in=0A>= >> case 
a=0A>>> drive=0A>>> that uses 4K sectors is pretending to be 512b= yte sectored)=0A>>> - never run over 70-80% full if write performance is = an=0A>>> issue=0A>>> - use a zil on an SSD with good write performance=0A= >>> =0A>>> The only NFSv4 thing I can tell you is that it is known that= =0A>>> ZFS's=0A>>> algorithm for determining sequential vs random I/O fai= ls for=0A>>> NFSv4=0A>>> during writing and this can be a performance hit= . The only=0A>>> workaround=0A>>> is to use NFSv3 mounts, since file hand= le affinity=0A>>> apparently=0A>>> fixes=0A>>> the problem and this is on= ly done for NFSv3.=0A>>> =0A>>> rick=0A>>> =0A>>> I saw that there are TS= O issues with igb then i'm trying to=0A>>> disable=0A>>> it with sysctl b= ut the situation wasn't solved.=0A>>> =0A>>> Someone has got ideas ? I ca= n give you more informations if=0A>>> you=0A>>> need.=0A>>> =0A>>> Thanks= in advance.=0A>>> Regards,=0A>>> =0A>>> Lo=C3=AFc Blot,=0A>>> UNIX Syste= ms, Network and Security Engineer=0A>>> http://www.unix-experience.fr=0A>= >> _______________________________________________=0A>>> freebsd-fs@freeb= sd.org mailing list=0A>>> http://lists.freebsd.org/mailman/listinfo/freeb= sd-fs=0A>>> To unsubscribe, send any mail to=0A>>> "freebsd-fs-unsubscrib= e@freebsd.org"=0A>>> =0A>>> _____________________________________________= __=0A>>> freebsd-fs@freebsd.org mailing list=0A>>> http://lists.freebsd.o= rg/mailman/listinfo/freebsd-fs=0A>>> To unsubscribe, send any mail to=0A>= >> "freebsd-fs-unsubscribe@freebsd.org"=0A>>> =0A>>> ____________________= ___________________________=0A>>> freebsd-fs@freebsd.org mailing list=0A>= >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs=0A>>> To unsubscr= ibe, send any mail to=0A>>> "freebsd-fs-unsubscribe@freebsd.org"=0A>>> = =0A>>> _______________________________________________=0A>>> freebsd-fs@f= reebsd.org mailing list=0A>>> http://lists.freebsd.org/mailman/listinfo/f= reebsd-fs=0A>>> To unsubscribe, send any mail to=0A>>> 
"freebsd-fs-unsubs= cribe@freebsd.org"=0A>>> _______________________________________________= =0A>>> freebsd-fs@freebsd.org mailing list=0A>>> http://lists.freebsd.org= /mailman/listinfo/freebsd-fs=0A>>> To unsubscribe, send any mail to=0A>>>= "freebsd-fs-unsubscribe@freebsd.org"=0A>>> =0A>>> ______________________= _________________________=0A>>> freebsd-fs@freebsd.org mailing list=0A>>>= http://lists.freebsd.org/mailman/listinfo/freebsd-fs=0A>>> To unsubscrib= e, send any mail to "freebsd-fs-unsubscribe@freebsd.org"=0A> =0A> _______= ________________________________________=0A> freebsd-fs@freebsd.org maili= ng list=0A> http://lists.freebsd.org/mailman/listinfo/freebsd-fs=0A> To u= nsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 13:35:06 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A2DCD20F for ; Mon, 5 Jan 2015 13:35:06 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 2591564251 for ; Mon, 5 Jan 2015 13:35:05 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AkYFAOCRqlSDaFve/2dsb2JhbABcg1hYBIMBwxAKhSlKAoEeAQEBAQF9hAwBAQEDAQEBARcBCAQnIAsbGAICDRkCKQEJJgYIAgUEARoCBIgDCA2pC5M/AQEBAQEFAQEBAQEBAQEBGYEhjgUBAQ0OATMHgi07EYEwBYlLiAmDHoMjMII1gjOHcoM5IoF/Ah2BbiAxAQEFfgcXIn4BAQE X-IronPort-AV: E=Sophos;i="5.07,700,1413259200"; d="scan'208";a="181620763" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 05 Jan 2015 08:34:55 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id A7BE2B403E; Mon, 5 Jan 2015 08:34:54 -0500 (EST) 
Date: Mon, 5 Jan 2015 08:34:54 -0500 (EST)
From: Rick Macklem
To: Loïc Blot
Message-ID: <956766012.5685731.1420464894657.JavaMail.root@uoguelph.ca>
In-Reply-To: <87a8b6ab243024e553b2baba30537b92@mail.unix-experience.fr>
Subject: Re: High Kernel Load with nfsv4
Cc: freebsd-fs@freebsd.org

Loic Blot wrote:
> Hi,
> happy new year Rick and @freebsd-fs.
> 
> After some days, i looked my NFSv4.1 mount. At server start it was
> calm, but after 4 days, here is the top stat...
> 
> CPU: 0.0% user, 0.0% nice, 100% system, 0.0% interrupt, 0.0% idle
> 
> Definitively i think it's a problem on client side. What can i look
> into running kernel to resolve this issue ?
> 
Well, I'd start with:
# nfsstat -e -s
- run repeatedly on the server (once every N seconds in a loop).
Then look at the output, comparing the counts and see which RPCs are
being performed by the client(s). You are looking for which RPCs are
being done a lot. (If one RPC is almost 100% of the load, then it
might be a client/caching issue for whatever that RPC is doing.)
Also look at the Open/Lock counts near the end of the output. If the
# of Opens/Locks is large, it may be possible to reduce the CPU
overheads by using larger hash tables.

Then you need to profile the server kernel to see where the CPU is
being used. Hopefully someone else can fill you in on how to do that,
because I'll admit I don't know how to. Basically you are looking to
see if the CPU is being used in the NFS server code or ZFS.
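The periodic sampling suggested above can be scripted; this is only a sketch, not part of the original thread: the 10-second interval, 60 iterations, and log path are arbitrary choices, and it assumes a FreeBSD host where `nfsstat -e -s` prints the extended NFS server counters.

```shell
# Sample the extended NFS server statistics every 10 seconds, 60 times,
# timestamping each snapshot so successive counter values can be diffed.
# Interval, iteration count, and log path are arbitrary placeholders.
for i in $(seq 1 60); do
    date
    nfsstat -e -s
    sleep 10
done > /var/tmp/nfsstat-samples.log
```

Comparing successive snapshots in the log then shows which RPC counters are climbing fastest.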
Good luck with it, rick

> 
> Regards,
> 
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
> 
> 30 December 2014 16:16 "Loïc Blot" wrote:
> > Hi Rick,
> > i upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFS v4.1
> > (mountoptions:
> > rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1)
> > 
> > Performance is quite stable but it's slow. Not as slow as before
> > but slow... services was launched
> > but no client are using them and system CPU % was 10-50%.
> > 
> > I don't see anything on NFSv4.1 server, it's perfectly stable and
> > functionnal.
> > 
> > Regards,
> > 
> > Loïc Blot,
> > UNIX Systems, Network and Security Engineer
> > http://www.unix-experience.fr
> > 
> > 23 December 2014 00:20 "Rick Macklem" wrote:
> > 
> >> Loic Blot wrote:
> >> 
> >>> Hi,
> >>> 
> >>> To clarify because of our exchanges. Here are the current sysctl
> >>> options for server:
> >>> 
> >>> vfs.nfsd.enable_nobodycheck=0
> >>> vfs.nfsd.enable_nogroupcheck=0
> >>> 
> >>> vfs.nfsd.maxthreads=200
> >>> vfs.nfsd.tcphighwater=10000
> >>> vfs.nfsd.tcpcachetimeo=300
> >>> vfs.nfsd.server_min_nfsvers=4
> >>> 
> >>> kern.maxvnodes=10000000
> >>> kern.ipc.maxsockbuf=4194304
> >>> net.inet.tcp.sendbuf_max=4194304
> >>> net.inet.tcp.recvbuf_max=4194304
> >>> 
> >>> vfs.lookup_shared=0
> >>> 
> >>> Regards,
> >>> 
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>> 
> >>> 22 December 2014 09:42 "Loïc Blot" wrote:
> >>> 
> >>> Hi Rick,
> >>> my 5 jails runs this weekend and now i have some stats on this
> >>> monday.
> >>>=20 > >>> Hopefully deadlock was fixed, yeah, but everything isn't good :( > >>>=20 > >>> On NFSv4 server (FreeBSD 10.1) system uses 35% CPU > >>>=20 > >>> As i can see this is because of nfsd: > >>>=20 > >>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H > >>> 273.68% nfsd: server (nfsd) > >>>=20 > >>> If i look at dmesg i see: > >>> nfsd server cache flooded, try increasing vfs.nfsd.tcphighwater > >>=20 > >> Well, you have a couple of choices: > >> 1 - Use NFSv4.1 (add "minorversion=3D1" to your mount options). > >> (NFSv4.1 avoids use of the DRC and instead uses something > >> called sessions. See below.) > >> OR > >>=20 > >>> vfs.nfsd.tcphighwater was set to 10000, i increase it to 15000 > >>=20 > >> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see > >> "nfs server cache flooded" messages. (I think Garrett Wollman uses > >> 100000. (You may still see quite a bit of CPU overheads.) > >>=20 > >> OR > >>=20 > >> 3 - Set vfs.nfsd.cachetcp=3D0 (which disables the DRC and gets rid > >> of the CPU overheads). However, there is a risk of data corruption > >> if you have a client->server network partitioning of a moderate > >> duration, because a non-idempotent RPC may get redone, becasue > >> the client times out waiting for a reply. If a non-idempotent > >> RPC gets done twice on the server, data corruption can happen. > >> (The DRC provides improved correctness, but does add overhead.) > >>=20 > >> If #1 works for you, it is the preferred solution, since Sessions > >> in NFSv4.1 solves the correctness problem in a good, space bound > >> way. A session basically has N (usually 32 or 64) slots and only > >> allows one outstanding RPC/slot. As such, it can cache the > >> previous > >> reply for each slot (32 or 64 of them) and guarantee "exactly > >> once" > >> RPC semantics. 
> >>=20 > >> rick > >>=20 > >>> Here is 'nfsstat -s' output: > >>>=20 > >>> Server Info: > >>> Getattr Setattr Lookup Readlink Read Write Create > >>> Remove > >>> 12600652 1812 2501097 156 1386423 1983729 123 > >>> 162067 > >>> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus > >>> Access > >>> 36762 9 0 0 0 3147 0 > >>> 623524 > >>> Mknod Fsstat Fsinfo PathConf Commit > >>> 0 0 0 0 328117 > >>> Server Ret-Failed > >>> 0 > >>> Server Faults > >>> 0 > >>> Server Cache Stats: > >>> Inprog Idem Non-idem Misses > >>> 0 0 0 12635512 > >>> Server Write Gathering: > >>> WriteOps WriteRPC Opsaved > >>> 1983729 1983729 0 > >>>=20 > >>> And here is 'procstat -kk' for nfsd (server) > >>>=20 > >>> 918 100528 nfsd nfsd: master mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10 > >>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de > >>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c > >>> amd64_syscall+0x351 Xfast_syscall+0xfb > >>> 918 100568 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100569 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100570 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100571 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100572 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100573 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100574 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100575 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100576 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100577 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100578 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100579 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100580 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100581 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100582 nfsd nfsd: service 
mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100583 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100584 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100585 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100586 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100587 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100588 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100589 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100590 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100591 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100592 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100593 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100594 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100595 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100596 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100597 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100598 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100599 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100600 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100601 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100602 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100603 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100604 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100605 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100606 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100607 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100608 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100609 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100610 nfsd nfsd: service 
mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100611 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100612 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100613 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100614 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100615 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100616 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100617 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100618 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100619 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100620 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100621 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100622 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100623 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100624 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100625 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100626 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100627 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100628 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100629 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100630 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100631 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100632 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100633 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100634 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100635 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100636 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100637 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100638 nfsd nfsd: service 
mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100639 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100640 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100641 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100642 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100643 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100644 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100645 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100646 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100647 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100648 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100649 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100650 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100651 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100652 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100653 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100654 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100655 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100656 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100657 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100658 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100659 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100660 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100661 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 918 100662 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> --- > >>>=20 > >>> Now if we look at client (FreeBSD 9.3) > >>>=20 > >>> We see system was very busy and do many and many interrupts > >>>=20 > >>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0% > >>> idle > >>>=20 > >>> A look at process list shows that there are many sendmail process > >>> in > >>> state nfstry > >>>=20 > >>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for > >>> /var/spool/clientm > >>>=20 > >>> Here is 'nfsstat -c' output: > >>>=20 > >>> Client Info: > >>> Rpc Counts: > >>> Getattr Setattr Lookup Readlink Read Write Create > >>> Remove > >>> 1051347 1724 2494481 118 903902 1901285 162676 > >>> 161899 > >>> Rename Link 
Symlink Mkdir Rmdir Readdir RdirPlus > >>> Access > >>> 36744 2 0 114 40 3131 0 > >>> 544136 > >>> Mknod Fsstat Fsinfo PathConf Commit > >>> 9 0 0 0 245821 > >>> Rpc Info: > >>> TimedOut Invalid X Replies Retries Requests > >>> 0 0 0 0 8356557 > >>> Cache Info: > >>> Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits > >>> Misses > >>> 108754455 491475 54229224 2437229 46814561 821723 5132123 > >>> 1871871 > >>> BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits > >>> Misses > >>> 144035 118 53736 2753 27813 1 57238839 > >>> 544205 > >>> > >>> If you need more things, tell me; I'll leave the PoC in this state. > >>> > >>> Thanks > >>> > >>> Regards, > >>> > >>> Loïc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>> > >>> 21 December 2014 01:33 "Rick Macklem" > >>> wrote: > >>> > >>> Loic Blot wrote: > >>> > >>>> Hi Rick, > >>>> OK, I don't need local locks; I hadn't understood the option was for > >>>> that > >>>> usage, so I removed it. > >>>> I'll do more tests on Monday. > >>>> Thanks for the deadlock fix, for other people :) > >>> > >>> Good. Please let us know if running with > >>> vfs.nfsd.enable_locallocks=0 > >>> gets rid of the deadlocks. (I think it fixes the one you saw.) > >>> > >>> On the performance side, you might also want to try different > >>> values > >>> of > >>> readahead, if the Linux client has such a mount option. (With the > >>> NFSv4-ZFS sequential vs random I/O heuristic, I have no idea what > >>> the > >>> optimal readahead value would be.) > >>> > >>> Good luck with it and please let us know how it goes, rick > >>> ps: I now have a patch to fix the deadlock when > >>> vfs.nfsd.enable_locallocks=1 > >>> is set. I'll post it for anyone who is interested after I put it > >>> through some testing. 
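For anyone following the thread, the two knobs discussed above can be applied roughly like this. This is a sketch, not a tested recipe: the readahead value is illustrative, and the export path/mount point are placeholders.

```shell
# On the FreeBSD NFS server: disable NFSv4 local locking now,
# and persist the setting across reboots.
sysctl vfs.nfsd.enable_locallocks=0
echo 'vfs.nfsd.enable_locallocks=0' >> /etc/sysctl.conf

# On a FreeBSD client, readahead is a mount_nfs(8) option;
# an example /etc/fstab line (paths and readahead=4 are illustrative):
# server:/export  /mnt/export  nfs  rw,nfsv4,minorversion=1,readahead=4  0  0
```

On Linux clients the equivalent tuning goes through the `rsize`/`rasize` mount options or the bdi read_ahead_kb sysfs knob, which is presumably what Rick means by "if the Linux client has such a mount option".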
> >>> > >>> -- > >>> Best regards, > >>> Loïc BLOT, > >>> UNIX systems, security and network engineer > >>> http://www.unix-experience.fr > >>> > >>> On Thursday, 18 December 2014 at 19:46 -0500, Rick Macklem wrote: > >>> > >>> Loic Blot wrote: > >>>> Hi Rick, > >>>> I tried to start an LXC container on Debian Squeeze from my > >>>> FreeBSD > >>>> ZFS+NFSv4 server and I also have a deadlock on nfsd > >>>> (vfs.lookup_shared=0). nfsd deadlocks each time I launch a > >>>> Squeeze > >>>> container, it seems (3 tries, 3 fails). > >>> > >>> Well, I'll take a look at this 'procstat -kk', but the only thing > >>> I've seen posted w.r.t. avoiding deadlocks in ZFS is to not use > >>> nullfs. (I have no idea if you are using any nullfs mounts, but > >>> if so, try getting rid of them.) > >>> > >>> Here's a high-level post about the ZFS and vnode locking problem, > >>> but there is no patch available, as far as I know. > >>> > >>> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407 > >>> > >>> rick > >>> > >>> 921 - D 0:00.02 nfsd: server (nfsd) > >>> > >>> Here is the procstat -kk output: > >>> > >>> PID TID COMM TDNAME KSTACK > >>> 921 100538 nfsd nfsd: master mi_switch+0xe1 > >>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > >>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > >>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > >>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca > >>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c > >>> 921 100572 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100573 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> 
fork_trampoline+0xe > >>> 921 100574 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100575 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100576 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100577 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100578 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100579 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100580 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100581 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100582 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 
921 100583 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100584 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100585 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100586 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100587 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100588 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100589 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100590 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100591 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100592 nfsd nfsd: 
service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100593 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100594 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100595 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100596 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100597 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100598 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100599 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100600 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100601 nfsd nfsd: service mi_switch+0xe1 > >>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100602 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100603 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100604 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100605 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100606 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100607 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100608 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100609 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100610 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab 
sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100611 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100612 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100613 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100614 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100615 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100616 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > >>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > >>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>> 921 100617 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100618 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>> svc_thread_start+0xb fork_exit+0x9a 
fork_trampoline+0xe > >>> 921 100619 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100620 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100621 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100622 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100623 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100624 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100625 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100626 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100627 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 
921 100628 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100629 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100630 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100631 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100632 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100633 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100634 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100635 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100636 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100637 nfsd nfsd: 
service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100638 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100639 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100640 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100641 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100642 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100643 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100644 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100645 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100646 nfsd nfsd: service mi_switch+0xe1 > >>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100647 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100648 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100649 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100650 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100651 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100652 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100653 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100654 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100655 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab 
sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100656 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100657 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100658 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100659 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100660 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100661 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100662 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100663 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100664 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> 
_cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100665 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>> _cv_wait_sig+0x16a > >>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>> fork_trampoline+0xe > >>> 921 100666 nfsd nfsd: service mi_switch+0xe1 > >>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 > >>> nfsrvd_dorpc+0xc76 > >>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>> svc_thread_start+0xb > >>> fork_exit+0x9a fork_trampoline+0xe > >>> > >>> Regards, > >>> > >>> Loïc Blot, > >>> UNIX Systems, Network and Security Engineer > >>> http://www.unix-experience.fr > >>> > >>> 15 December 2014 15:18 "Rick Macklem" > >>> wrote: > >>> > >>> Loic Blot wrote: > >>> > >>>> For more information, here is procstat -kk on nfsd; if you > >>>> need > >>>> more > >>>> live data, tell me. > >>>> > >>>> Regards, PID TID COMM TDNAME KSTACK > >>>> 918 100529 nfsd nfsd: master mi_switch+0xe1 > >>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>> zfs_fhtovp+0x38d > >>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de > >>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c > >>>> amd64_syscall+0x351 > >>> > >>> Well, most of the threads are stuck like this one, waiting for > >>> a > >>> vnode > >>> lock in ZFS. All of them appear to be in zfs_fhtovp(). > >>> I'm not a ZFS guy, so I can't help much. I'll try changing the > >>> subject line > >>> to include ZFS vnode lock, so maybe the ZFS guys will take a > >>> look. > >>> > >>> The only thing I've seen suggested is trying: > >>> sysctl vfs.lookup_shared=0 > >>> to disable shared vop_lookup()s. 
Apparently zfs_lookup() > >>> doesn't > >>> obey the vnode locking rules for lookup and rename, according > >>> to > >>> the posting I saw. > >>> > >>> I've added a couple of comments about the other threads below, > >>> but > >>> they are all either waiting for an RPC request or waiting for > >>> the > >>> threads stuck on the ZFS vnode lock to complete. > >>> > >>> rick > >>> > >>>> 918 100564 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>> _cv_wait_sig+0x16a > >>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>> fork_trampoline+0xe > >>> > >>> FYI, this thread is just waiting for an RPC to arrive. (Normal) > >>> > >>>> 918 100565 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>> _cv_wait_sig+0x16a > >>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>> fork_trampoline+0xe > >>>> 918 100566 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>> _cv_wait_sig+0x16a > >>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>> fork_trampoline+0xe > >>>> 918 100567 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>> _cv_wait_sig+0x16a > >>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>> fork_trampoline+0xe > >>>> 918 100568 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>> _cv_wait_sig+0x16a > >>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>> fork_trampoline+0xe > >>>> 918 100569 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>> _cv_wait_sig+0x16a > >>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>> fork_trampoline+0xe > >>>> 918 100570 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>> _cv_wait_sig+0x16a > >>>> svc_run_internal+0x87e 
svc_thread_start+0xb fork_exit+0x9a > >>>> fork_trampoline+0xe > >>>> 918 100571 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>> 918 100572 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 > >>>> nfsrvd_dorpc+0xc76 > >>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>> svc_thread_start+0xb > >>>> fork_exit+0x9a fork_trampoline+0xe > >>> > >>> This one (and a few others) are waiting for the nfsv4_lock. > >>> This > >>> happens > >>> because other threads are stuck with RPCs in progress (i.e., the > >>> ones > >>> waiting on the vnode lock in zfs_fhtovp()). > >>> For these, the RPC needs to lock out other threads to do the > >>> operation, > >>> so it waits for the nfsv4_lock(), which can exclusively lock the > >>> NFSv4 > >>> data structures once all other nfsd threads complete their RPCs > >>> in > >>> progress. > >>> > >>>> 918 100573 nfsd nfsd: service mi_switch+0xe1 > >>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>> > >>> Same as above. 
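Since these dumps are hundreds of near-identical stacks, it can save a lot of reading to summarize `procstat -kk` output by the first interesting frame in each stack. A rough one-liner (the skip list of scheduler/sleep frames and the sample input here are illustrative; pipe real `procstat -kk <pid>` output in instead):

```shell
# Count nfsd threads by the first non-scheduler frame they sleep in,
# e.g. svc_run_internal (idle), nfsv4_lock (waiting for the NFSv4 lock),
# zfs_fhtovp (stuck on a ZFS vnode lock). Sample input mimics procstat -kk.
printf '%s\n' \
  '918 100564 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e' \
  '918 100571 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316' \
  '918 100574 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c zfs_fhtovp+0x38d' |
awk '{ for (i = 6; i <= NF; i++)
         if ($i !~ /^(mi_switch|sleepq_|_sleep|nfsmsleep|_cv_wait|sleeplk|__lockmgr|vop_stdlock|VOP_LOCK|_vn_lock)/) {
           sub(/\+.*/, "", $i); count[$i]++; next } }
     END { for (f in count) print count[f], f }' | sort -rn
```

Each output line is "<thread count> <wait site>", so a deadlocked server shows up at a glance as a large count on zfs_fhtovp or nfsv4_lock rather than pages of identical stacks.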
> >>>
> >>>> 918 100574 nfsd nfsd: service mi_switch+0xe1
> >>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
> >>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
> >>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
> >>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
> >>>> fork_exit+0x9a fork_trampoline+0xe
> >>>>
> >>>> (threads 100575 through 100607 show this same stack, all blocked
> >>>> in zfs_fhtovp; identical traces trimmed)
> >>>
> >>> Lots more waiting for the ZFS vnode lock in zfs_fhtovp().
> >>>
> >>> 918 100608 nfsd nfsd: service mi_switch+0xe1
> >>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
> >>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
> >>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>> 918 100610 nfsd nfsd: service mi_switch+0xe1
> >>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
> >>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> >>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
> >>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
> >>> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a
> >>> fork_trampoline+0xe
> >>> 918 100611 nfsd nfsd: service mi_switch+0xe1
> >>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
> >>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>
> >>> (threads 100612-100618, 100621 and 100623 show the same nfsv4_lock
> >>> stack as 100611; threads 100609, 100619, 100620, 100622 and
> >>> 100624-100658 show the same zfs_fhtovp stack as above; identical
> >>> traces trimmed)
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> 15 December 2014 13:29 "Loïc Blot" wrote:
> >>>
> >>> Hmmm...
> >>> Now I'm experiencing a deadlock.
> >>>
> >>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)
> >>>
> >>> The only fix was to reboot the server, but after rebooting the
> >>> deadlock occurs a second time when I start my jails over NFS.
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> 15 December 2014 10:07 "Loïc Blot" wrote:
> >>>
> >>> Hi Rick,
> >>> After talking with my N+1, NFSv4 is required on our infrastructure.
> >>> I tried to upgrade the NFSv4+ZFS server from 9.3 to 10.1; I hope
> >>> this will resolve some issues...
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> 10 December 2014 15:36 "Loïc Blot" wrote:
> >>>
> >>> Hi Rick,
> >>> Thanks for your suggestion.
> >>> For my locking bug, rpc.lockd is stuck in the rpcrecv state on the
> >>> server. kill -9 doesn't affect the process; it's blocked....
(State: Ds)
> >>>
> >>> For the performance:
> >>>
> >>> NFSv3: 60Mbps
> >>> NFSv4: 45Mbps
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> 10 December 2014 13:56 "Rick Macklem" wrote:
> >>>
> >>> Loic Blot wrote:
> >>>
> >>>> Hi Rick,
> >>>> I'm trying NFSv3.
> >>>> Some jails are starting very well, but now I have an issue with
> >>>> lockd after some minutes:
> >>>>
> >>>> nfs server 10.10.X.8:/jails: lockd not responding
> >>>> nfs server 10.10.X.8:/jails lockd is alive again
> >>>>
> >>>> I looked at mbufs, but it seems there is no problem there.
> >>>
> >>> Well, if you need locks to be visible across multiple clients,
> >>> then I'm afraid you are stuck with using NFSv4 and the performance
> >>> you get from it. (There is no way to do file handle affinity for
> >>> NFSv4 because the read and write ops are buried in the compound
> >>> RPC and not easily recognized.)
> >>>
> >>> If the locks don't need to be visible across multiple clients,
> >>> I'd suggest trying the "nolockd" option with nfsv3.
> >>>
> >>>> Here is my rc.conf on the server:
> >>>>
> >>>> nfs_server_enable="YES"
> >>>> nfsv4_server_enable="YES"
> >>>> nfsuserd_enable="YES"
> >>>> nfsd_server_flags="-u -t -n 256"
> >>>> mountd_enable="YES"
> >>>> mountd_flags="-r"
> >>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>> rpcbind_enable="YES"
> >>>> rpc_lockd_enable="YES"
> >>>> rpc_statd_enable="YES"
> >>>>
> >>>> Here is the client:
> >>>>
> >>>> nfsuserd_enable="YES"
> >>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>> nfscbd_enable="YES"
> >>>> rpc_lockd_enable="YES"
> >>>> rpc_statd_enable="YES"
> >>>>
> >>>> Have you got an idea?
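Rick's "nolockd" suggestion above corresponds to a client-side mount option; a minimal sketch of what such a client mount could look like, reusing the server and export path seen earlier in the thread (the exact fstab line is illustrative, not taken from the original mails):

```shell
# /etc/fstab on the client (illustrative): NFSv3 with "nolockd", so
# locks are kept local to the client and rpc.lockd is never consulted
# over the wire. Only suitable if no other client needs to see them.
10.10.X.8:/jails  /jails  nfs  rw,nfsv3,nolockd,tcp  0  0
```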
> >>>>
> >>>> Regards,
> >>>>
> >>>> Loïc Blot,
> >>>> UNIX Systems, Network and Security Engineer
> >>>> http://www.unix-experience.fr
> >>>>
> >>>> 9 December 2014 04:31 "Rick Macklem" wrote:
> >>>>> Loic Blot wrote:
> >>>>>
> >>>>>> Hi Rick,
> >>>>>>
> >>>>>> I waited 3 hours (no lag at jail launch) and then ran:
> >>>>>> sysrc memcached_flags="-v -m 512"
> >>>>>> The command was very, very slow...
> >>>>>>
> >>>>>> Here is a dd over NFS:
> >>>>>>
> >>>>>> 601062912 bytes transferred in 21.060679 secs (28539579
> >>>>>> bytes/sec)
> >>>>>
> >>>>> Can you try the same read using an NFSv3 mount?
> >>>>> (If it runs much faster, you have probably been bitten by the
> >>>>> ZFS "sequential vs random" read heuristic, which I've been told
> >>>>> thinks NFS is doing "random" reads without file handle affinity.
> >>>>> File handle affinity is very hard to do for NFSv4, so it isn't
> >>>>> done.)
> >>>
> >>> I was actually suggesting that you try the "dd" over nfsv3 to see
> >>> how the performance compares with nfsv4. If you do that, please
> >>> post the comparable results.
> >>>
> >>> Someday I would like to try to get ZFS's sequential vs random read
> >>> heuristic modified, and any info on what difference in performance
> >>> that might make for NFS would be useful.
> >>>
> >>> rick
> >>>
> >>> This is quite slow...
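The dd figure above works out to roughly 28.5 MB/s; the bytes/sec value dd reported can be sanity-checked from the byte count and elapsed time:

```shell
# Recompute dd's reported rate: 601062912 bytes over 21.060679 seconds.
awk 'BEGIN { printf "%.0f bytes/sec (%.1f MB/s)\n",
             601062912 / 21.060679, 601062912 / 21.060679 / 1e6 }'
```

which matches the 28539579 bytes/sec that dd printed.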
> >>>
> >>> You can find some nfsstat output below (the command isn't finished
> >>> yet):
> >>>
> >>> nfsstat -c -w 1
> >>>
> >>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>> 0 0 0 0 0 0 0 0
> >>> 4 0 0 0 0 0 16 0
> >>> 2 0 0 0 0 0 17 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 4 0 0 0 0 4 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 4 0 0 0 0 0 3 0
> >>> 0 0 0 0 0 0 3 0
> >>> 37 10 0 8 0 0 14 1
> >>> 18 16 0 4 1 2 4 0
> >>> 78 91 0 82 6 12 30 0
> >>> 19 18 0 2 2 4 2 0
> >>> 0 0 0 0 2 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 1 0 0 0 0 1 0
> >>> 4 6 0 0 6 0 3 0
> >>> 2 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 1 0 0 0 0 0 0 0
> >>> 0 0 0 0 1 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 6 108 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 98 54 0 86 11 0 25 0
> >>> 36 24 0 39 25 0 10 1
> >>> 67 8 0 63 63 0 41 0
> >>> 34 0 0 35 34 0 0 0
> >>> 75 0 0 75 77 0 0 0
> >>> 34 0 0 35 35 0 0 0
> >>> 75 0 0 74 76 0 0 0
> >>> 33 0 0 34 33 0 0 0
> >>> 0 0 0 0 5 0 0 0
> >>> 0 0 0 0 0 0 6 0
> >>> 11 0 0 0 0 0 11 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 17 0 0 0 0 1 0
> >>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>> 4 5 0 0 0 0 12 0
> >>> 2 0 0 0 0 0 26 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 4 0 0 0 0 4 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 4 0 0 0 0 0 2 0
> >>> 2 0 0 0 0 0 24 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 4 0 0 0 0 0 7 0
> >>> 2 1 0 0 0 0 1 0
> >>> 0 0 0 0 2 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 6 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 4 6 0 0 0 0 3 0
> >>> 0 0 0 0 0 0 0 0
> >>> 2 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 4 71 0 0 0 0 0 0
> >>> 0 1 0 0 0 0 0 0
> >>> 2 36 0 0 0 0 1 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 1 0 0 0 0 0 1 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 79 6 0 79 79 0 2 0
> >>> 25 0 0 25 26 0 6 0
> >>> 43 18 0 39 46 0 23 0
> >>> 36 0 0 36 36 0 31 0
> >>> 68 1 0 66 68 0 0 0
> >>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>> 36 0 0 36 36 0 0 0
> >>> 48 0 0 48 49 0 0 0
> >>> 20 0 0 20 20 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 3 14 0 1 0 0 11 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 0 0 0 0 0 0 0
> >>> 0 4 0 0 0 0 4 0
> >>> 0 0 0 0 0 0 0 0
> >>> 4 22 0 0 0 0 16 0
> >>> 2 0 0 0 0 0 23 0
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> 8 December 2014 09:36 "Loïc Blot" wrote:
> >>>> Hi Rick,
> >>>> I stopped the jails this weekend and started them again this
> >>>> morning; I'll give you some stats this week.
> >>>>
> >>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
> >>>>
> >>>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
> >>>
> >>> On the server side my disks are behind a RAID controller which shows a 512b
> >>> volume, and write performance is very honest (dd if=/dev/zero of=/jails/test.dd
> >>> bs=4096 count=100000000 => 450MBps)
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> 5 December 2014 15:14, "Rick Macklem" wrote:
> >>>
> >>> Loic Blot wrote:
> >>>
> >>> Hi,
> >>> I'm trying to create a virtualisation environment based on jails.
> >>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 host
> >>> which exports an NFSv4 volume. This NFSv4 volume is mounted on a big
> >>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports, but only 1 is
> >>> used at this time).
> >>>
> >>> The problem is simple: my hypervisor runs 6 jails (using roughly 1%
> >>> CPU, 10GB RAM and less than 1MB of bandwidth) and works fine at
> >>> start, but the system slows down and after 2-3 days becomes
> >>> unusable. When I look at top I see 80-100% system CPU and commands
> >>> are very, very slow. Many processes are tagged with nfs_cl*.
> >>>
> >>> To be honest, I would expect the slowness to be because of slow
> >>> response from the NFSv4 server, but if you do:
> >>> # ps axHl
> >>> on a client when it is slow and post that, it would give us some
> >>> more information on where the client side processes are sitting.
> >>> If you also do something like:
> >>> # nfsstat -c -w 1
> >>> and let it run for a while, that should show you how many RPCs are
> >>> being done and which ones.
> >>>
> >>> # nfsstat -m
> >>> will show you what your mount is actually using.
> >>> The only mount option I can suggest trying is
> >>> "rsize=32768,wsize=32768", since some network environments have
> >>> difficulties with 64K.
> >>>
> >>> There are a few things you can try on the NFSv4 server side, if it
> >>> appears that the clients are generating a large RPC load.
> >>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
> >>> - If the server is seeing a large write RPC load, then
> >>> "sync=disabled" might help, although it does run a risk of data
> >>> loss when the server crashes.
> >>> Then there are a couple of other ZFS related things (I'm not a ZFS
> >>> guy, but these have shown up on the mailing lists).
> >>> - make sure your volumes are 4K aligned and ashift=12 (in case a
> >>> drive that uses 4K sectors is pretending to be 512byte sectored)
> >>> - never run over 70-80% full if write performance is an issue
> >>> - use a ZIL on an SSD with good write performance
> >>>
> >>> The only NFSv4 thing I can tell you is that it is known that ZFS's
> >>> algorithm for determining sequential vs random I/O fails for NFSv4
> >>> during writing, and this can be a performance hit. The only
> >>> workaround is to use NFSv3 mounts, since file handle affinity
> >>> apparently fixes the problem, and this is only done for NFSv3.
> >>>
> >>> rick
> >>>
> >>> I saw that there are TSO issues with igb, so I tried disabling it
> >>> with sysctl, but that didn't solve the problem.
> >>>
> >>> Does anyone have ideas? I can give you more information if you need.
> >>>
> >>> Thanks in advance.
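[Editor's note: Rick's suggestion above amounts to sampling the RPC counters and seeing which ones grow. That can be sketched as a small POSIX-sh helper that diffs two saved counter snapshots; `nfs_delta` and the snapshot file names are illustrative, not from the thread, and the exact nfsstat flags vary by FreeBSD release.]

```shell
# nfs_delta OLD NEW: report every numeric counter that grew between two
# saved snapshots (e.g. of `nfsstat -e -s` output). It keys counters by
# (line, field) position, so it works on any whitespace-separated dump.
nfs_delta() {
    awk 'NR==FNR { for (i = 1; i <= NF; i++)
                     if ($i ~ /^[0-9]+$/) old[FNR "," i] = $i; next }
         { for (i = 1; i <= NF; i++)
             if ($i ~ /^[0-9]+$/ && (FNR "," i) in old && $i + 0 > old[FNR "," i] + 0)
               printf "line %d field %d: +%d\n", FNR, i, $i - old[FNR "," i] }' "$1" "$2"
}
```

Typical use on the server would be: `nfsstat -e -s > s1; sleep 10; nfsstat -e -s > s2; nfs_delta s1 s2`, repeated while the client is slow.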
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>> _______________________________________________
> >>> freebsd-fs@freebsd.org mailing list
> >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> >>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 16:34:59 2015
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
 by hub.freebsd.org (Postfix) with ESMTPS; Mon, 5 Jan 2015 16:34:59 +0000 (UTC)
Date: Mon, 05 Jan 2015 16:34:45 +0000
From: "Loïc Blot"
Subject: Re: High Kernel Load with nfsv4
To: "Rick Macklem"
Cc: freebsd-fs@freebsd.org
In-Reply-To: <956766012.5685731.1420464894657.JavaMail.root@uoguelph.ca>
References: <956766012.5685731.1420464894657.JavaMail.root@uoguelph.ca>
Hi Rick,
nfsstat -e -s doesn't show useful data on the server.

Server Info:
  Getattr  Setattr   Lookup Readlink     Read    Write   Create   Remove
 26935254    16911  5755728      302  2334920  3673866        0   328332
   Rename     Link  Symlink    Mkdir    Rmdir  Readdir RdirPlus   Access
    77980       28        0        0        3     8900        3  1806052
    Mknod   Fsstat   Fsinfo PathConf   Commit  LookupP  SetClId SetClIdCf
        1     1095        0        0   614377     8172        8        8
     Open OpenAttr OpenDwnGr OpenCfrm DelePurge  DeleRet    GetFH     Lock
  1595299        0    44145     1495        0        0  5197490   635015
    LockT    LockU    Close   Verify  NVerify    PutFH PutPubFH PutRootFH
        0   614919  1270938        0        0 22688676        0        5
    Renew RestoreFH  SaveFH  Secinfo RelLckOwn V4Create
    42104   197606   275820        0      143     4578
Server:
Retfailed   Faults  Clients
        0        0        6
OpenOwner    Opens LockOwner    Locks   Delegs
    32335   145448      204      181        0
Server Cache Stats:
   Inprog     Idem Non-idem   Misses CacheSize  TCPPeak
        0        0        1 15082947       60    16522

Only GetAttr and Lookup increase, and only every 4-5 seconds, by +2 to +5 each time.

Now on the client, if I take four process stacks I get:

  PID    TID COMM     TDNAME  KSTACK
63170 102547 mv       -       mi_switch+0xe1 turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65 nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 vn_open_cred+0x21d kern_openat+0x26f amd64_syscall+0x351 Xfast_syscall+0xfb

Another mv:
63140 101738 mv       -       mi_switch+0xe1 turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65 nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351 Xfast_syscall+0xfb

62070 102170 sendmail -       mi_switch+0xe1 sleepq_timedwait+0x3a _sleep+0x26e clnt_vc_call+0x666 clnt_reconnect_call+0x4fa newnfs_request+0xa8c nfscl_request+0x72 nfsrpc_lookup+0x1fb
nfs_lookup+0x508 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351 Xfast_syscall+0xfb

63200 100930 mv       -       mi_switch+0xe1 turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65 nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351 Xfast_syscall+0xfb

When the client is in this state, the server is doing nothing special (procstat -kk):

  PID    TID COMM  TDNAME         KSTACK
  895 100538 nfsd  nfsd: master   mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10 _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c amd64_syscall+0x351 Xfast_syscall+0xfb
  895 100568 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
[every other "nfsd: service" thread, TIDs 100569 through 100801, shows this same idle stack]

I really think it's a client-side problem, maybe a lookup problem.

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

5 January 2015 14:35, "Rick Macklem" wrote:
> Loic Blot
wrote:
>
>> Hi,
>> happy new year Rick and @freebsd-fs.
>>
>> After some days, I looked at my NFSv4.1 mount. At server start it was
>> calm, but after 4 days, here is the top stat...
>>
>> CPU: 0.0% user, 0.0% nice, 100% system, 0.0% interrupt, 0.0% idle
>>
>> Definitely I think it's a problem on the client side. What can I look
>> at in the running kernel to resolve this issue?
>
> Well, I'd start with:
> # nfsstat -e -s
> - run repeatedly on the server (once every N seconds in a loop).
> Then look at the output, comparing the counts, and see which RPCs
> are being performed by the client(s). You are looking for which
> RPCs are being done a lot. (If one RPC is almost 100% of the load,
> then it might be a client/caching issue for whatever that RPC is
> doing.)
>
> Also look at the Open/Lock counts near the end of the output.
> If the # of Opens/Locks is large, it may be possible to reduce the
> CPU overheads by using larger hash tables.
>
> Then you need to profile the server kernel to see where the CPU
> is being used.
> Hopefully someone else can fill you in on how to do that, because
> I'll admit I don't know how to.
> Basically you are looking to see if the CPU is being used in
> the NFS server code or ZFS.
>
> Good luck with it, rick
>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> 30 December 2014 16:16, "Loïc Blot" wrote:
>>> Hi Rick,
>>> I upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFS v4.1
>>> (mount options: rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1)
>>>
>>> Performance is quite stable but it's slow. Not as slow as before,
>>> but slow
>>> Services were launched,
>>> but no clients are using them, and system CPU % was 10-50%.
>>> 
>>> I don't see anything on the NFSv4.1 server; it's perfectly stable
>>> and functional.
>>> 
>>> Regards,
>>> 
>>> Loïc Blot,
>>> UNIX Systems, Network and Security Engineer
>>> http://www.unix-experience.fr
>>> 
>>> 23 December 2014 00:20 "Rick Macklem" wrote:
>>> 
>>>> Loic Blot wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> To clarify, following our exchanges, here are the current sysctl
>>>>> options for the server:
>>>>> 
>>>>> vfs.nfsd.enable_nobodycheck=0
>>>>> vfs.nfsd.enable_nogroupcheck=0
>>>>> 
>>>>> vfs.nfsd.maxthreads=200
>>>>> vfs.nfsd.tcphighwater=10000
>>>>> vfs.nfsd.tcpcachetimeo=300
>>>>> vfs.nfsd.server_min_nfsvers=4
>>>>> 
>>>>> kern.maxvnodes=10000000
>>>>> kern.ipc.maxsockbuf=4194304
>>>>> net.inet.tcp.sendbuf_max=4194304
>>>>> net.inet.tcp.recvbuf_max=4194304
>>>>> 
>>>>> vfs.lookup_shared=0
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>> 
>>>>> 22 December 2014 09:42 "Loïc Blot" wrote:
>>>>> 
>>>>> Hi Rick,
>>>>> my 5 jails ran this weekend and now I have some stats this Monday.
>>>>> 
>>>>> Happily, the deadlock was fixed, but not everything is good :(
>>>>> 
>>>>> On the NFSv4 server (FreeBSD 10.1) the system uses 35% CPU
>>>>> 
>>>>> As I can see, this is because of nfsd:
>>>>> 
>>>>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H
>>>>> 273.68% nfsd: server (nfsd)
>>>>> 
>>>>> If I look at dmesg I see:
>>>>> nfsd server cache flooded, try increasing vfs.nfsd.tcphighwater
>>>> 
>>>> Well, you have a couple of choices:
>>>> 1 - Use NFSv4.1 (add "minorversion=1" to your mount options).
>>>> (NFSv4.1 avoids use of the DRC
>>>> and instead uses something called sessions. See below.)
>>>> 
>>>> OR
>>>> 
>>>>> vfs.nfsd.tcphighwater was set to 10000; I increased it to 15000
>>>> 
>>>> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see
>>>> "nfs server cache flooded" messages. (I think Garrett Wollman uses
>>>> 100000.) (You may still see quite a bit of CPU overhead.)
>>>> 
>>>> OR
>>>> 
>>>> 3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets rid
>>>> of the CPU overhead). However, there is a risk of data corruption
>>>> if you have a client->server network partitioning of a moderate
>>>> duration, because a non-idempotent RPC may get redone when the
>>>> client times out waiting for a reply. If a non-idempotent RPC
>>>> gets done twice on the server, data corruption can happen.
>>>> (The DRC provides improved correctness, but does add overhead.)
>>>> 
>>>> If #1 works for you, it is the preferred solution, since sessions
>>>> in NFSv4.1 solve the correctness problem in a good, space-bound
>>>> way. A session basically has N (usually 32 or 64) slots and only
>>>> allows one outstanding RPC/slot.
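The slot mechanism Rick describes above can be sketched as a toy model: each slot remembers the last sequence id it saw and caches the last reply, so a retransmitted request is replayed from the cache instead of re-executed. This is an illustration of the idea only, not the FreeBSD implementation; all names here are made up.

```python
# Toy model of NFSv4.1 session slots: one outstanding RPC per slot,
# last reply cached per slot, giving "exactly once" semantics.
class Session:
    def __init__(self, nslots=32):
        # per-slot [last sequence id, cached reply] pairs
        self.slots = [[0, None] for _ in range(nslots)]

    def call(self, slot_id, seqid, op):
        slot = self.slots[slot_id]
        if seqid == slot[0] and slot[1] is not None:
            return slot[1]          # retry detected: replay cached reply
        if seqid != slot[0] + 1:
            raise ValueError("misordered request on slot")
        slot[0] = seqid
        slot[1] = op()              # execute the RPC exactly once
        return slot[1]

# A non-idempotent operation (think REMOVE) runs once even if retried:
executed = []
def remove():
    executed.append(1)
    return "ok"

s = Session()
first = s.call(0, 1, remove)
retry = s.call(0, 1, remove)   # client timed out and resent seqid 1
assert first == retry == "ok" and len(executed) == 1
```

This is why the reply cache stays space-bound: it never holds more than one reply per slot, unlike a DRC that has to guess how long to retain entries.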
>>>> As such, it can cache the previous
>>>> reply for each slot (32 or 64 of them) and guarantee "exactly
>>>> once" RPC semantics.
>>>> 
>>>> rick
>>>> 
>>>>> Here is 'nfsstat -s' output:
>>>>> 
>>>>> Server Info:
>>>>> Getattr Setattr Lookup Readlink Read Write Create
>>>>> Remove
>>>>> 12600652 1812 2501097 156 1386423 1983729 123
>>>>> 162067
>>>>> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus
>>>>> Access
>>>>> 36762 9 0 0 0 3147 0
>>>>> 623524
>>>>> Mknod Fsstat Fsinfo PathConf Commit
>>>>> 0 0 0 0 328117
>>>>> Server Ret-Failed
>>>>> 0
>>>>> Server Faults
>>>>> 0
>>>>> Server Cache Stats:
>>>>> Inprog Idem Non-idem Misses
>>>>> 0 0 0 12635512
>>>>> Server Write Gathering:
>>>>> WriteOps WriteRPC Opsaved
>>>>> 1983729 1983729 0
>>>>> 
>>>>> And here is 'procstat -kk' for nfsd (server)
>>>>> 
>>>>> 918 100528 nfsd nfsd: master mi_switch+0xe1
>>>>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
>>>>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de
>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>>> amd64_syscall+0x351 Xfast_syscall+0xfb
>>>>> 918 100568 nfsd nfsd: service mi_switch+0xe1
>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>> fork_trampoline+0xe
>>>>> [threads 100569-100662 of PID 918 show the identical idle
>>>>> service-thread stack; repeats trimmed]
>>>>> ---
>>>>> 
>>>>> Now if we look at the client (FreeBSD 9.3)
>>>>> 
>>>>> We see the system was very busy and doing many, many interrupts:
>>>>> 
>>>>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0%
>>>>> idle
>>>>> 
>>>>> A look at the process list shows that there are many sendmail
>>>>> processes in state nfstry:
>>>>> 
>>>>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for
>>>>> /var/spool/clientm
>>>>> 
>>>>> Here is 'nfsstat -c' output:
>>>>> 
>>>>> Client Info:
>>>>> Rpc Counts:
>>>>> Getattr Setattr Lookup Readlink Read Write Create
>>>>> Remove
>>>>> 1051347 1724 2494481 118 903902 1901285 162676
>>>>> 161899
>>>>> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus
>>>>> Access
>>>>> 36744 2 0 114 40 3131 0
>>>>> 544136
>>>>> Mknod Fsstat Fsinfo PathConf Commit
>>>>> 9 0 0 0 245821
>>>>> Rpc Info:
>>>>> TimedOut Invalid X Replies Retries Requests
>>>>> 0 0 0 0 8356557
>>>>> Cache Info:
>>>>> Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits
>>>>> Misses
>>>>> 108754455 491475 54229224 2437229 46814561 821723 5132123
>>>>> 1871871
>>>>> BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits
>>>>> Misses
>>>>> 144035 118 53736 2753 27813 1 57238839
>>>>> 544205
>>>>> 
>>>>> If you need more things, tell me; I've left the PoC in this state.
>>>>> 
>>>>> Thanks
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>> 
>>>>> 21 December 2014 01:33 "Rick Macklem" wrote:
>>>>> 
>>>>> Loic Blot wrote:
>>>>> 
>>>>>> Hi Rick,
>>>>>> OK, I don't need local locks; I hadn't understood what the
>>>>>> option was for, so I removed it.
>>>>>> I'll do more tests on Monday.
>>>>>> Thanks for the deadlock fix, for other people :)
>>>>> 
>>>>> Good. Please let us know if running with
>>>>> vfs.nfsd.enable_locallocks=0
>>>>> gets rid of the deadlocks? (I think it fixes the one you saw.)
>>>>> 
>>>>> On the performance side, you might also want to try different
>>>>> values of readahead, if the Linux client has such a mount option.
>>>>> (With the NFSv4-ZFS sequential vs random I/O heuristic, I have
>>>>> no idea what the optimal readahead value would be.)
>>>>> 
>>>>> Good luck with it and please let us know how it goes, rick
>>>>> ps: I now have a patch to fix the deadlock when
>>>>> vfs.nfsd.enable_locallocks=1
>>>>> is set.
>>>>> I'll post it for anyone who is interested after I put it
>>>>> through some testing.
>>>>> 
>>>>> --
>>>>> Best regards,
>>>>> Loïc BLOT,
>>>>> UNIX systems, security and network engineer
>>>>> http://www.unix-experience.fr
>>>>> 
>>>>> On Thursday 18 December 2014 at 19:46 -0500, Rick Macklem wrote:
>>>>> 
>>>>> Loic Blot wrote:
>>>>>> Hi Rick,
>>>>>> I tried to start an LXC container running Debian Squeeze from my
>>>>>> FreeBSD ZFS+NFSv4 server and I also get a deadlock on nfsd
>>>>>> (vfs.lookup_shared=0). nfsd deadlocks each time I launch a
>>>>>> Squeeze container, it seems (3 tries, 3 fails).
>>>>> 
>>>>> Well, I'll take a look at this 'procstat -kk', but the only thing
>>>>> I've seen posted w.r.t. avoiding deadlocks in ZFS is to not use
>>>>> nullfs. (I have no idea if you are using any nullfs mounts, but
>>>>> if so, try getting rid of them.)
>>>>> 
>>>>> Here's a high-level post about the ZFS and vnode locking problem,
>>>>> but there is no patch available, as far as I know.
>>>>> 
>>>>> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
>>>>> 
>>>>> rick
>>>>> 
>>>>> 921 - D 0:00.02 nfsd: server (nfsd)
>>>>> 
>>>>> Here is the procstat -kk
>>>>> 
>>>>> PID TID COMM TDNAME KSTACK
>>>>> 921 100538 nfsd nfsd: master mi_switch+0xe1
>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
>>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>>>>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
>>>>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>>> 921 100572 nfsd nfsd: service mi_switch+0xe1
>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
>>>>> _cv_wait_sig+0x16a
>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>> fork_trampoline+0xe
>>>>> [threads 100573-100597 of PID 921 show the identical idle
>>>>> service-thread stack; repeats trimmed]
>>>>> 921 100598 nfsd nfsd: service mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100599 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100600 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100601 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100602 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= poline+0xe=0A>>>>> 921 100603 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100604 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100605 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100606 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e 
svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100607 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100608 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100609 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= poline+0xe=0A>>>>> 921 100610 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100611 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100612 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100613 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100614 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100615 nfsd nfsd: service 
mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100616 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait= +0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>> nfsrv_getlockfi= le+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1=0A>>>>> nfsrvd_dorpc+0xec= 6 nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0x= b fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 921 100617 nfsd nfsd: servic= e mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A= >>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100618 nfsd nf= sd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmslee= p+0x66 nfsv4_lock+0x9b=0A>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 sv= c_run_internal+0xc77=0A>>>>> svc_thread_start+0xb fork_exit+0x9a fork_tra= mpoline+0xe=0A>>>>> 921 100619 nfsd nfsd: service mi_switch+0xe1=0A>>>>> = sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100620 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100621 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100622 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100623 n= fsd 
nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100624 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100625 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= poline+0xe=0A>>>>> 921 100626 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100627 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100628 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100629 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100630 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100631 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> 
_cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100632 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= poline+0xe=0A>>>>> 921 100633 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100634 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100635 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100636 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100637 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100638 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100639 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= 
poline+0xe=0A>>>>> 921 100640 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100641 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100642 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100643 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100644 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100645 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100646 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= poline+0xe=0A>>>>> 921 100647 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100648 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab 
sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100649 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100650 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100651 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100652 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100653 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= poline+0xe=0A>>>>> 921 100654 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100655 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100656 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = 
fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100657 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100658 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100659 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe= =0A>>>>> 921 100660 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_tram= poline+0xe=0A>>>>> 921 100661 nfsd nfsd: service mi_switch+0xe1=0A>>>>> s= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a= =0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>= >> fork_trampoline+0xe=0A>>>>> 921 100662 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>> _cv_wait= _sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100663 nfsd nfsd: service mi= _switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= > _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100664 nfsd nfsd: = service mi_switch+0xe1=0A>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 921 100665 n= fsd nfsd: service mi_switch+0xe1=0A>>>>> 
sleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A>>>>> _cv_wait_sig+0x16a=0A>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 92= 1 100666 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a _slee= p+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>> nfsrv_setclient+0xbd nfsrv= d_setclientid+0x3c8=0A>>>>> nfsrvd_dorpc+0xc76=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> =0A>>>>> Regards,=0A>>>>> =0A>>>>> Lo=C3= =AFc Blot,=0A>>>>> UNIX Systems, Network and Security Engineer=0A>>>>> ht= tp://www.unix-experience.fr=0A>>>>> =0A>>>>> 15 d=C3=A9cembre 2014 15:18 = "Rick Macklem" a=0A>>>>> =C3=A9crit:=0A>>>>> =0A>>= >>> Loic Blot wrote:=0A>>>>> =0A>>>>>> For more informations, here is pro= cstat -kk on nfsd, if you=0A>>>>>> need=0A>>>>>> more=0A>>>>>> hot datas,= tell me.=0A>>>>>> =0A>>>>>> Regards, PID TID COMM TDNAME KSTACK=0A>>>>>>= 918 100529 nfsd nfsd: master mi_switch+0xe1=0A>>>>>> sleepq_wait+0x3a sl= eeplk+0x15d __lockmgr_args+0x902=0A>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+= 0xab _vn_lock+0x43=0A>>>>>> zfs_fhtovp+0x38d=0A>>>>>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>> nfssvc_program+0x554 svc_run= _internal+0xc77 svc_run+0x1de=0A>>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x10= 7 sys_nfssvc+0x9c=0A>>>>>> amd64_syscall+0x351=0A>>>>> =0A>>>>> Well, mos= t of the threads are stuck like this one, waiting for=0A>>>>> a=0A>>>>> v= node=0A>>>>> lock in ZFS. All of them appear to be in zfs_fhtovp().=0A>>>= >> I`m not a ZFS guy, so I can`t help much. I`ll try changing the=0A>>>>>= subject line=0A>>>>> to include ZFS vnode lock, so maybe the ZFS guys wi= ll take a=0A>>>>> look.=0A>>>>> =0A>>>>> The only thing I`ve seen suggest= ed is trying:=0A>>>>> sysctl vfs.lookup_shared=3D0=0A>>>>> to disable sha= red vop_lookup()s. 
Apparently zfs_lookup()=0A>>>>> doesn`t=0A>>>>> obey t= he vnode locking rules for lookup and rename, according=0A>>>>> to=0A>>>>= > the posting I saw.=0A>>>>> =0A>>>>> I`ve added a couple of comments abo= ut the other threads below,=0A>>>>> but=0A>>>>> they are all either waiti= ng for an RPC request or waiting for=0A>>>>> the=0A>>>>> threads stuck on= the ZFS vnode lock to complete.=0A>>>>> =0A>>>>> rick=0A>>>>> =0A>>>>>> = 918 100564 nfsd nfsd: service mi_switch+0xe1=0A>>>>>> sleepq_catch_signal= s+0xab sleepq_wait_sig+0xf=0A>>>>>> _cv_wait_sig+0x16a=0A>>>>>> svc_run_i= nternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>> fork_trampolin= e+0xe=0A>>>>> =0A>>>>> Fyi, this thread is just waiting for an RPC to arr= ive. (Normal)=0A>>>>> =0A>>>>>> 918 100565 nfsd nfsd: service mi_switch+0= xe1=0A>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>> _cv_w= ait_sig+0x16a=0A>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>>>>>> fork_trampoline+0xe=0A>>>>>> 918 100566 nfsd nfsd: serv= ice mi_switch+0xe1=0A>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>>>>>> _cv_wait_sig+0x16a=0A>>>>>> svc_run_internal+0x87e svc_thread_s= tart+0xb fork_exit+0x9a=0A>>>>>> fork_trampoline+0xe=0A>>>>>> 918 100567 = nfsd nfsd: service mi_switch+0xe1=0A>>>>>> sleepq_catch_signals+0xab slee= pq_wait_sig+0xf=0A>>>>>> _cv_wait_sig+0x16a=0A>>>>>> svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>> fork_trampoline+0xe=0A>>>= >>> 918 100568 nfsd nfsd: service mi_switch+0xe1=0A>>>>>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>>>>>> _cv_wait_sig+0x16a=0A>>>>>> svc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>> fork_tramp= oline+0xe=0A>>>>>> 918 100569 nfsd nfsd: service mi_switch+0xe1=0A>>>>>> = sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>> _cv_wait_sig+0x16a= =0A>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>= >>>> fork_trampoline+0xe=0A>>>>>> 918 100570 nfsd nfsd: service 
>>>>>> mi_switch+0xe1
>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>>> fork_trampoline+0xe
>>>>>> 918 100571 nfsd nfsd: service mi_switch+0xe1
>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>> 918 100572 nfsd nfsd: service mi_switch+0xe1
>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>
>>>>> This one (and a few others) are waiting for the nfsv4_lock. This
>>>>> happens because other threads are stuck with RPCs in progress
>>>>> (i.e. the ones waiting on the vnode lock in zfs_fhtovp()).
>>>>> For these, the RPC needs to lock out other threads to do the
>>>>> operation, so it waits for the nfsv4_lock(), which can
>>>>> exclusively lock the NFSv4 data structures once all other nfsd
>>>>> threads complete their RPCs in progress.
>>>>>
>>>>>> 918 100573 nfsd nfsd: service mi_switch+0xe1
>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>
>>>>> Same as above.
>>>>>
>>>>>> 918 100574 nfsd nfsd: service mi_switch+0xe1
>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>> zfs_fhtovp+0x38d
>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>> [identical zfs_fhtovp() vnode-lock stacks for TIDs 100575
>>>>>> through 100603 elided]
>>>>>> 918 100604 nfsd nfsd: service mi_switch+0xe1
>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>> zfs_fhtovp+0x38d
nfsvno_fhtovp+0x7c= nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>> nfssvc_program+0x554 svc_ru= n_internal+0xc77=0A>>>>>> svc_thread_start+0xb=0A>>>>>> fork_exit+0x9a fo= rk_trampoline+0xe=0A>>>>>> 918 100605 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>>> vo= p_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>>> zfs_fhtovp+0x38d= =0A>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>>= nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>>> svc_thread_start+0x= b=0A>>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>>> 918 100606 nfsd nf= sd: service mi_switch+0xe1=0A>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lock= mgr_args+0x902=0A>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43= =0A>>>>>> zfs_fhtovp+0x38d=0A>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 n= fsrvd_dorpc+0x917=0A>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A= >>>>>> svc_thread_start+0xb=0A>>>>>> fork_exit+0x9a fork_trampoline+0xe= =0A>>>>>> 918 100607 nfsd nfsd: service mi_switch+0xe1=0A>>>>>> sleepq_wa= it+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>>> vop_stdlock+0x3c VOP_= LOCK1_APV+0xab _vn_lock+0x43=0A>>>>>> zfs_fhtovp+0x38d=0A>>>>>> nfsvno_fh= tovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>>> svc_thread_start+0xb=0A>>>>>> fork_exi= t+0x9a fork_trampoline+0xe=0A>>>>> =0A>>>>> Lots more waiting for the ZFS= vnode lock in zfs_fhtovp().=0A>>>>> =0A>>>>> 918 100608 nfsd nfsd: servi= ce mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nf= sv4_lock+0x9b=0A>>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd= _lock+0x5b1=0A>>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_inter= nal+0xc77=0A>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe= =0A>>>>> 918 100609 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait= +0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0A>>>>> 
zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+= 0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc= _run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a f= ork_trampoline+0xe=0A>>>>> 918 100610 nfsd nfsd: service mi_switch+0xe1= =0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e=0A>>>>> vop_= stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> nfsvno_advlock+0x11= 9 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad=0A>>>>> nfsrvd_locku+0x283 nfs= rvd_dorpc+0xec6 nfssvc_program+0x554=0A>>>>> svc_run_internal+0xc77 svc_t= hread_start+0xb fork_exit+0x9a=0A>>>>> fork_trampoline+0xe=0A>>>>> 918 10= 0611 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x= 287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>> nfsrvd_dorpc+0x316 nfssvc_prog= ram+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb fork_exit+0= x9a fork_trampoline+0xe=0A>>>>> 918 100612 nfsd nfsd: service mi_switch+0= xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b= =0A>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77= =0A>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>>>>> = 918 100613 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a _sl= eep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>> nfsrvd_dorpc+0x316 nfssv= c_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb fork_= exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100614 nfsd nfsd: service mi_sw= itch+0xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock= +0x9b=0A>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0x= c77=0A>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>>>= >> 918 100615 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a = _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>> nfsrvd_dorpc+0x316 nf= ssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100616 nfsd nfsd: service mi= 
_switch+0xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_l= ock+0x9b=0A>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal= +0xc77=0A>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A= >>>>> 918 100617 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x= 3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>> nfsrvd_dorpc+0x316= nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100618 nfsd nfsd: service= mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv= 4_lock+0x9b=0A>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_inter= nal+0xc77=0A>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe= =0A>>>>> 918 100619 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait= +0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+= 0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc= _run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a f= ork_trampoline+0xe=0A>>>>> 918 100620 nfsd nfsd: service mi_switch+0xe1= =0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_= stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A= >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfss= vc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>= >>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100621 nfsd nfsd: serv= ice mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 n= fsv4_lock+0x9b=0A>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_in= ternal+0xc77=0A>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+= 0xe=0A>>>>> 918 100622 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_w= ait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_= LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> 
zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 = svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9= a fork_trampoline+0xe=0A>>>>> 918 100623 nfsd nfsd: service mi_switch+0xe= 1=0A>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A= >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0A>>= >>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 1= 00624 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+= 0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _v= n_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhto= vp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+= 0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline= +0xe=0A>>>>> 918 100625 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_= wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP= _LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fht= ovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554= svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x= 9a fork_trampoline+0xe=0A>>>>> 918 100626 nfsd nfsd: service mi_switch+0x= e1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vo= p_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d= =0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> n= fssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100627 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= 
c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100628 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100629 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> 918 100630 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100631 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100632 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> 
fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100633 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> 918 100634 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100635 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100636 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100637 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a 
fork_trampoline+0xe=0A>>>>> 918 100638 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100639 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100640 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100641 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> 918 100642 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 
100643 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100644 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100645 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> 918 100646 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100647 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100648 nfsd nfsd: service 
mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100649 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> 918 100650 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100651 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100652 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100653 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= 
q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> 918 100654 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918 100655 nfsd nfsd:= service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_= args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>= >>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> sv= c_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> 918= 100656 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x554 svc_run_interna= l+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+0x9a fork_trampoli= ne+0xe=0A>>>>> 918 100657 nfsd nfsd: service mi_switch+0xe1=0A>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>> vop_stdlock+0x3c V= OP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38d=0A>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb=0A>>>>> fork_exit+= 0x9a fork_trampoline+0xe=0A>>>>> 918 100658 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>> sleepq_wait+0x3a sleeplk+0x15d 
__lockmgr_args+0x902=0A>>>>> = vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>> zfs_fhtovp+0x38= d=0A>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>> = nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>> svc_thread_start+0xb= =0A>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>> =0A>>>>> Lo=C3=AFc Bl= ot,=0A>>>>> UNIX Systems, Network and Security Engineer=0A>>>>> http://ww= w.unix-experience.fr=0A>>>>> =0A>>>>> 15 d=C3=A9cembre 2014 13:29 "Lo=C3= =AFc Blot"=0A>>>>> =0A>>>>> a=0A>>>>> =C3= =A9crit:=0A>>>>> =0A>>>>> Hmmm...=0A>>>>> now i'm experiencing a deadlock= .=0A>>>>> =0A>>>>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: serv= er=0A>>>>> (nfsd)=0A>>>>> =0A>>>>> the only issue was to reboot the serve= r, but after rebooting=0A>>>>> deadlock arrives a second time when i=0A>>= >>> start my jails over NFS.=0A>>>>> =0A>>>>> Regards,=0A>>>>> =0A>>>>> L= o=C3=AFc Blot,=0A>>>>> UNIX Systems, Network and Security Engineer=0A>>>>= > http://www.unix-experience.fr=0A>>>>> =0A>>>>> 15 d=C3=A9cembre 2014 10= :07 "Lo=C3=AFc Blot"=0A>>>>> =0A>>>>> a=0A>= >>>> =C3=A9crit:=0A>>>>> =0A>>>>> Hi Rick,=0A>>>>> after talking with my = N+1, NFSv4 is required on our=0A>>>>> infrastructure.=0A>>>>> I tried to = upgrade NFSv4+ZFS=0A>>>>> server from 9.3 to 10.1, i hope this will resol= ve some=0A>>>>> issues...=0A>>>>> =0A>>>>> Regards,=0A>>>>> =0A>>>>> Lo= =C3=AFc Blot,=0A>>>>> UNIX Systems, Network and Security Engineer=0A>>>>>= http://www.unix-experience.fr=0A>>>>> =0A>>>>> 10 d=C3=A9cembre 2014 15:= 36 "Lo=C3=AFc Blot"=0A>>>>> =0A>>>>> a=0A>>= >>> =C3=A9crit:=0A>>>>> =0A>>>>> Hi Rick,=0A>>>>> thanks for your suggest= ion.=0A>>>>> For my locking bug, rpc.lockd is stucked in rpcrecv state on= =0A>>>>> the=0A>>>>> server. kill -9 doesn't affect the=0A>>>>> process, = it's blocked.... 
>>>>> (State: Ds)
>>>>>
>>>>> For the performances:
>>>>>
>>>>> NFSv3: 60Mbps
>>>>> NFSv4: 45Mbps
>>>>>
>>>>> Regards,
>>>>>
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>>
>>>>> 10 December 2014 13:56, "Rick Macklem" wrote:
>>>>>
>>>>> Loic Blot wrote:
>>>>>
>>>>>> Hi Rick,
>>>>>> I'm trying NFSv3. Some jails are starting very well, but now i have an issue with lockd after some minutes:
>>>>>>
>>>>>> nfs server 10.10.X.8:/jails: lockd not responding
>>>>>> nfs server 10.10.X.8:/jails lockd is alive again
>>>>>>
>>>>>> I looked at mbuf, but it seems there is no problem.
>>>>>
>>>>> Well, if you need locks to be visible across multiple clients, then I'm afraid you are stuck with using NFSv4 and the performance you get from it. (There is no way to do file handle affinity for NFSv4 because the read and write ops are buried in the compound RPC and not easily recognized.)
>>>>>
>>>>> If the locks don't need to be visible across multiple clients, I'd suggest trying the "nolockd" option with nfsv3.
>>>>>
>>>>>> Here is my rc.conf on the server:
>>>>>>
>>>>>> nfs_server_enable="YES"
>>>>>> nfsv4_server_enable="YES"
>>>>>> nfsuserd_enable="YES"
>>>>>> nfsd_server_flags="-u -t -n 256"
>>>>>> mountd_enable="YES"
>>>>>> mountd_flags="-r"
>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>>>> rpcbind_enable="YES"
>>>>>> rpc_lockd_enable="YES"
>>>>>> rpc_statd_enable="YES"
>>>>>>
>>>>>> Here is the client:
>>>>>>
>>>>>> nfsuserd_enable="YES"
>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>>>> nfscbd_enable="YES"
>>>>>> rpc_lockd_enable="YES"
>>>>>> rpc_statd_enable="YES"
>>>>>>
>>>>>> Have you got an idea?
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Loïc Blot,
>>>>>> UNIX Systems, Network and Security Engineer
>>>>>> http://www.unix-experience.fr
>>>>>>
>>>>>> 9 December 2014 04:31, "Rick Macklem" wrote:
>>>>>>> Loic Blot wrote:
>>>>>>>
>>>>>>>> Hi rick,
>>>>>>>>
>>>>>>>> I waited 3 hours (no lag at jail launch) and now I do:
>>>>>>>> sysrc memcached_flags="-v -m 512"
>>>>>>>> The command was very very slow...
>>>>>>>>
>>>>>>>> Here is a dd over NFS:
>>>>>>>>
>>>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
>>>>>>>
>>>>>>> Can you try the same read using an NFSv3 mount?
>>>>>>> (If it runs much faster, you have probably been bitten by the ZFS "sequential vs random" read heuristic, since I've been told NFS is doing "random" reads without file handle affinity. File handle affinity is very hard to do for NFSv4, so it isn't done.)
>>>>>
>>>>> I was actually suggesting that you try the "dd" over nfsv3 to see how the performance compares with nfsv4.
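The comparison Rick asks for can be run as a short script. The following is a minimal, untested sketch: the mount points and the test file name are placeholders, and `10.10.X.8:/jails` reuses the (partially elided) export named elsewhere in this thread.

```shell
#!/bin/sh
# Sketch: read the same file through an NFSv3 and an NFSv4 mount and
# compare the throughput that dd reports (FreeBSD mount_nfs options).
mkdir -p /mnt/v3 /mnt/v4
mount -t nfs -o nfsv3,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v3
mount -t nfs -o nfsv4,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v4

# FreeBSD dd prints "... bytes transferred in N secs (M bytes/sec)".
dd if=/mnt/v3/test.dd of=/dev/null bs=1m
dd if=/mnt/v4/test.dd of=/dev/null bs=1m

umount /mnt/v3
umount /mnt/v4
```

Running each dd a second time would mostly measure cache hits, so the file should be larger than the client's memory, or the mounts remade between runs.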
>>>>> If you do that, please post the comparable results.
>>>>>
>>>>> Someday I would like to try to get ZFS's sequential vs random read heuristic modified, and any info on what difference in performance that might make for NFS would be useful.
>>>>>
>>>>> rick
>>>>>
>>>>> This is quite slow...
>>>>>
>>>>> You can find some nfsstat output below (the command isn't finished yet):
>>>>>
>>>>> nfsstat -c -w 1
>>>>>
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 0 0 0 0 0 16 0
>>>>> 2 0 0 0 0 0 17 0
>>>>> [4 all-zero seconds elided]
>>>>> 0 4 0 0 0 0 4 0
>>>>> [4 all-zero seconds elided]
>>>>> 4 0 0 0 0 0 3 0
>>>>> 0 0 0 0 0 0 3 0
>>>>> 37 10 0 8 0 0 14 1
>>>>> 18 16 0 4 1 2 4 0
>>>>> 78 91 0 82 6 12 30 0
>>>>> 19 18 0 2 2 4 2 0
>>>>> 0 0 0 0 2 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> [3 all-zero seconds elided]
>>>>> 0 1 0 0 0 0 1 0
>>>>> 4 6 0 0 6 0 3 0
>>>>> 2 0 0 0 0 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 1 0 0 0 0 0 0 0
>>>>> 0 0 0 0 1 0 0 0
>>>>> [8 all-zero seconds elided]
>>>>> 6 108 0 0 0 0 0 0
>>>>> [2 all-zero seconds elided]
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> [7 all-zero seconds elided]
>>>>> 98 54 0 86 11 0 25 0
>>>>> 36 24 0 39 25 0 10 1
>>>>> 67 8 0 63 63 0 41 0
>>>>> 34 0 0 35 34 0 0 0
>>>>> 75 0 0 75 77 0 0 0
>>>>> 34 0 0 35 35 0 0 0
>>>>> 75 0 0 74 76 0 0 0
>>>>> 33 0 0 34 33 0 0 0
>>>>> 0 0 0 0 5 0 0 0
>>>>> 0 0 0 0 0 0 6 0
>>>>> 11 0 0 0 0 0 11 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 17 0 0 0 0 1 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 4 5 0 0 0 0 12 0
>>>>> 2 0 0 0 0 0 26 0
>>>>> [5 all-zero seconds elided]
>>>>> 0 4 0 0 0 0 4 0
>>>>> [3 all-zero seconds elided]
>>>>> 4 0 0 0 0 0 2 0
>>>>> 2 0 0 0 0 0 24 0
>>>>> [8 all-zero seconds elided]
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> [2 all-zero seconds elided]
>>>>> 4 0 0 0 0 0 7 0
>>>>> 2 1 0 0 0 0 1 0
>>>>> 0 0 0 0 2 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 0 0 0 0 6 0 0 0
>>>>> [6 all-zero seconds elided]
>>>>> 4 6 0 0 0 0 3 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 2 0 0 0 0 0 0 0
>>>>> [4 all-zero seconds elided]
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> [5 all-zero seconds elided]
>>>>> 4 71 0 0 0 0 0 0
>>>>> 0 1 0 0 0 0 0 0
>>>>> 2 36 0 0 0 0 1 0
>>>>> [4 all-zero seconds elided]
>>>>> 1 0 0 0 0 0 1 0
>>>>> [2 all-zero seconds elided]
>>>>> 79 6 0 79 79 0 2 0
>>>>> 25 0 0 25 26 0 6 0
>>>>> 43 18 0 39 46 0 23 0
>>>>> 36 0 0 36 36 0 31 0
>>>>> 68 1 0 66 68 0 0 0
>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>> 36 0 0 36 36 0 0 0
>>>>> 48 0 0 48 49 0 0 0
>>>>> 20 0 0 20 20 0 0 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 3 14 0 1 0 0 11 0
>>>>> [2 all-zero seconds elided]
>>>>> 0 4 0 0 0 0 4 0
>>>>> 0 0 0 0 0 0 0 0
>>>>> 4 22 0 0 0 0 16 0
>>>>> 2 0 0 0 0 0 23 0
>>>>>
>>>>> Regards,
>>>>>
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>>
>>>>> 8 December 2014 09:36, "Loïc Blot" wrote:
>>>>>> Hi Rick,
>>>>>> I stopped the jails this week-end and started them this morning; i'll give you some stats this week.
>>>>>>
>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
>>>>>
>>>>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
>>>>>
>>>>> On the server side my disks are behind a raid controller which exposes a 512b volume, and write performance is very honest (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps).
>>>>>
>>>>> Regards,
>>>>>
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>>
>>>>> 5 December 2014 15:14, "Rick Macklem" wrote:
>>>>>
>>>>> Loic Blot wrote:
>>>>>
>>>>> Hi,
>>>>> i'm trying to create a virtualisation environment based on jails. Those jails are stored under a big ZFS pool on a FreeBSD 9.3 server which exports an NFSv4 volume.
>>>>> This NFSv4 volume was mounted on a big hypervisor (2 Xeon E5v3 +
>>>>> 128GB memory and 8 ports, but only 1 was used at this time).
>>>>>
>>>>> The problem is simple: my hypervisor runs 6 jails (using roughly 1%
>>>>> CPU, 10GB RAM and less than 1MB of bandwidth) and works fine at
>>>>> start, but the system slows down and after 2-3 days becomes
>>>>> unusable. When I look at top I see 80-100% system time, and commands
>>>>> are very, very slow. Many processes are tagged with nfs_cl*.
>>>>>
>>>>> To be honest, I would expect the slowness to be because of slow
>>>>> response from the NFSv4 server, but if you do:
>>>>> # ps axHl
>>>>> on a client when it is slow and post that, it would give us some
>>>>> more information on where the client side processes are sitting.
>>>>> If you also do something like:
>>>>> # nfsstat -c -w 1
>>>>> and let it run for a while, that should show you how many RPCs are
>>>>> being done and which ones.
>>>>>
>>>>> # nfsstat -m
>>>>> will show you what your mount is actually using.
>>>>> The only mount option I can suggest trying is
>>>>> "rsize=32768,wsize=32768", since some network environments have
>>>>> difficulties with 64K.
>>>>>
>>>>> There are a few things you can try on the NFSv4 server side, if it
>>>>> appears that the clients are generating a large RPC load.
>>>>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
>>>>> - If the server is seeing a large write RPC load, then
>>>>> "sync=disabled" might help, although it does run a risk of data
>>>>> loss when the server crashes.
>>>>> Then there are a couple of other ZFS related things (I'm not a ZFS
>>>>> guy, but these have shown up on the mailing lists).
>>>>> - make sure your volumes are 4K aligned and ashift=12 (in case a
>>>>> drive that uses 4K sectors is pretending to be 512byte sectored)
>>>>> - never run over 70-80% full if write performance is an issue
>>>>> - use a zil on an SSD with good write performance
>>>>>
>>>>> The only NFSv4 thing I can tell you is that it is known that ZFS's
>>>>> algorithm for determining sequential vs random I/O fails for NFSv4
>>>>> during writing, and this can be a performance hit. The only
>>>>> workaround is to use NFSv3 mounts, since file handle affinity
>>>>> apparently fixes the problem, and this is only done for NFSv3.
>>>>>
>>>>> rick
>>>>>
>>>>> I saw that there are TSO issues with igb, so I tried to disable it
>>>>> with sysctl, but that didn't solve the situation.
>>>>>
>>>>> Has someone got ideas? I can give you more information if you need.
>>>>>
>>>>> Thanks in advance.
>>>>> Regards,
>>>>>
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>> _______________________________________________
>>>>> freebsd-fs@freebsd.org mailing list
>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>>>> To unsubscribe, send any mail to
"freebsd-fs-unsubscribe@freebsd.org"=0A>>>= >> =0A>>>>> _______________________________________________=0A>>>>> freeb= sd-fs@freebsd.org mailing list=0A>>>>> http://lists.freebsd.org/mailman/l= istinfo/freebsd-fs=0A>>>>> To unsubscribe, send any mail to=0A>>>>> "free= bsd-fs-unsubscribe@freebsd.org"=0A>>>>> _________________________________= ______________=0A>>>>> freebsd-fs@freebsd.org mailing list=0A>>>>> http:/= /lists.freebsd.org/mailman/listinfo/freebsd-fs=0A>>>>> To unsubscribe, se= nd any mail to=0A>>>>> "freebsd-fs-unsubscribe@freebsd.org"=0A>>>>> =0A>>= >>> _______________________________________________=0A>>>>> freebsd-fs@fr= eebsd.org mailing list=0A>>>>> http://lists.freebsd.org/mailman/listinfo/= freebsd-fs=0A>>>>> To unsubscribe, send any mail to=0A>>>>> "freebsd-fs-u= nsubscribe@freebsd.org"=0A>>> =0A>>> ____________________________________= ___________=0A>>> freebsd-fs@freebsd.org mailing list=0A>>> http://lists.= freebsd.org/mailman/listinfo/freebsd-fs=0A>>> To unsubscribe, send any ma= il to=0A>>> "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 19:05:56 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 77142A71 for ; Mon, 5 Jan 2015 19:05:56 +0000 (UTC) Received: from mail-ie0-x22e.google.com (mail-ie0-x22e.google.com [IPv6:2607:f8b0:4001:c03::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3DC272B56 for ; Mon, 5 Jan 2015 19:05:56 +0000 (UTC) Received: by mail-ie0-f174.google.com with SMTP id at20so19504879iec.5 for ; Mon, 05 Jan 2015 11:05:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; 
h=mime-version:date:message-id:subject:from:to:content-type; bh=QZrLRk6/Mqy02x7ajNhX5BwFZg2nLL5X5IzINVz4bOg=; b=dVWn3iw2htDgnPP6xsMNCuC3ZMih9antHKRec/BPhndzs/znkHtsAxiyyX7Du8LuCW qdnRFYqLvfvJX0A1YIdrnc2lGJHJWiGDdjPFGbVWjOgBb3Qz63HcoY7SQU3d5LXAMpk2 NdC9vMGMSZShWKu+RK1n4wb/CzrWHMSvxut5w= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:date:message-id:subject:from:to :content-type; bh=QZrLRk6/Mqy02x7ajNhX5BwFZg2nLL5X5IzINVz4bOg=; b=lMebpz5s4XjqCzn9wLX0+5OowIt9w+g/BLBfwDxGsEaXLrg+Tir4t7j4baBzCd0Yyt vUDEe3KLWV0fpaPWcroL1loJmRaaiOGIxaORlSqjq0cu2SDdsYQaatqYnqxKoWsZkzk+ 9QWraQJWuZfV/p9izXlFca6UGPiZNtcn317lvwl/H121PniJdmMcl/Yz3ZN7NPq5xhSk VjWD7c3OsTtM2bHsbfRJe2vW5avUyBXeTSHSC/CGYSZkpCXEisHaa3FsWqGSAMQDgWaf G6Tgd2HNTGuGRTIcP+aijrz/6KSaRRDuvVeteM73snxuqovOgjyWuU1Y+ZQyJ9qfVDvq EkGA== X-Gm-Message-State: ALoCoQldx2HPrCV0uzvZrdPfyJ/pan7TIy+tIz674v/kVJ1Iw3rZvSWR1QQvjJx/QaJK8xIPLWAE MIME-Version: 1.0 X-Received: by 10.50.30.3 with SMTP id o3mr12332789igh.44.1420484755765; Mon, 05 Jan 2015 11:05:55 -0800 (PST) Received: by 10.43.156.75 with HTTP; Mon, 5 Jan 2015 11:05:55 -0800 (PST) Date: Mon, 5 Jan 2015 11:05:55 -0800 Message-ID: Subject: ZFS Send / Receive Recursively Without Properties From: Tim Gustafson To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Jan 2015 19:05:56 -0000 Hi, We're using ZFS send/receive to maintain off-site snapshots of system "A" onto system "B" for redundancy. The data on system "B" is read-only, and would only ever be used to transfer data back to system "A" in the event of a hardware failure. 
We're currently using this command to achieve this:

/sbin/zfs send -R -I 'tank/root@2015-01-04' 'tank/root@2015-01-05' | /usr/bin/ssh -i /root/.ssh/id_rsa 'user@backup-server' /usr/local/bin/sudo /sbin/zfs receive -v -F -u -n 'tank/notbackedup/source-server/root'

This works well, except that it sets the mountpoints on server "B"'s copy of the file systems to whatever they were on the source system, which overwrites server "B"'s root file system when we send the root file system from server "A".

If I drop the -R parameter to zfs send, then it does not overwrite the mountpoints, but it also does not destroy the now-deleted snapshots on server "B". Snapshots on server "A" are automatically destroyed by a script after 7 days, and we don't want to accumulate snapshots on server "B" that have been destroyed on server "A". We would also prefer not to run a snapshot purging script on server "B", because ultimately this solution will be used for multiple source servers, and each of them has a different snapshot retention policy that I'd like to not have to maintain in two separate places.

Is there any way to recursively send (and destroy) snapshots on server "B" without also copying the mountpoint property?
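[A sketch of one way to keep the received mountpoints inert on server "B", reusing the dataset names from the command above; the canmount step is an assumption about the desired end state, not something stated in this message:]

```shell
# Receive without mounting (-u): the source mountpoint is recorded as a
# received property but is never applied during the transfer itself.
/sbin/zfs send -R -I 'tank/root@2015-01-04' 'tank/root@2015-01-05' | \
  /usr/bin/ssh -i /root/.ssh/id_rsa 'user@backup-server' \
  /usr/local/bin/sudo /sbin/zfs receive -u -F 'tank/notbackedup/source-server/root'

# On server "B", stop the dataset from ever auto-mounting over /:
/sbin/zfs set canmount=noauto 'tank/notbackedup/source-server/root'
```

Since canmount=noauto is set locally, and locally-set properties take precedence over received ones, later incremental receives should not undo it.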
-- Tim Gustafson tjg@ucsc.edu 831-459-5354 Baskin Engineering, Room 313A From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 19:40:14 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 79F08B9F for ; Mon, 5 Jan 2015 19:40:14 +0000 (UTC) Received: from exch2-4.slu.se (webmail.slu.se [77.235.224.124]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "webmail.slu.se", Issuer "TERENA SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CB50366ED1 for ; Mon, 5 Jan 2015 19:40:12 +0000 (UTC) Received: from exch2-4.slu.se (77.235.224.124) by exch2-4.slu.se (77.235.224.124) with Microsoft SMTP Server (TLS) id 15.0.995.29; Mon, 5 Jan 2015 20:24:57 +0100 Received: from exch2-4.slu.se ([::1]) by exch2-4.slu.se ([fe80::4173:e97d:6ba9:312b%23]) with mapi id 15.00.0995.028; Mon, 5 Jan 2015 20:24:57 +0100 From: Karli Sjöberg To: Tim Gustafson Subject: Re: ZFS Send / Receive Recursively Without Properties Thread-Topic: ZFS Send / Receive Recursively Without Properties Thread-Index: AQHQKR1KJJbhIEDvBU+lXTmhxx65NA== Date: Mon, 5 Jan 2015 19:24:57 +0000 Message-ID: <56ec0f38c9414ba49faeeecd0f95020d@exch2-4.slu.se> Accept-Language: sv-SE, en-US Content-Language: sv-SE X-MS-Has-Attach: X-MS-TNEF-Correlator: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Jan 2015 19:40:14 -0000

On 5 Jan 2015 20:06, Tim Gustafson <tjg@ucsc.edu> wrote:
> [...]
> Is there any way to recursively send (and destroy) snapshots on the
> server "B" without also copying the mountpoint property?

Can't say much for the other stuff, but if you don't want the filesystems to mount, set 'canmount=noauto'. But you'll have to set all the filesystems you want mounted at boot in '/etc/fstab' then.

We have our own script that takes a new snap, sends it over, removes the base snapshot, then renames the new snapshot to base. This only updates the data on the secondary server. We then use another snapshot management tool to take scheduled snapshots with different retentions as a complement to that.

/K

From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 19:41:21 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 20B05CEC for ; Mon, 5 Jan 2015 19:41:21 +0000 (UTC) Received: from smarthost1.sentex.ca (smarthost1.sentex.ca [IPv6:2607:f3e0:0:1::12]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "smarthost.sentex.ca", Issuer "smarthost.sentex.ca" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id DD67966EE7 for ; Mon, 5 Jan 2015 19:41:20 +0000
(UTC) Received: from [IPv6:2607:f3e0:0:4:f025:8813:7603:7e4a] (saphire3.sentex.ca [IPv6:2607:f3e0:0:4:f025:8813:7603:7e4a]) by smarthost1.sentex.ca (8.14.9/8.14.9) with ESMTP id t05JfIgT062302; Mon, 5 Jan 2015 14:41:18 -0500 (EST) (envelope-from mike@sentex.net) Message-ID: <54AAE8B0.8050003@sentex.net> Date: Mon, 05 Jan 2015 14:40:32 -0500 From: Mike Tancsa Organization: Sentex Communications User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: Tim Gustafson , freebsd-fs@freebsd.org Subject: Re: ZFS Send / Receive Recursively Without Properties References: In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.75 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Jan 2015 19:41:21 -0000 On 1/5/2015 2:05 PM, Tim Gustafson wrote: > > Is there any way to recursively send (and destroy) snapshots on the > server "B" without also copying the mountpoint property? Funny, I am sort of struggling with this issue now as well. I am using FreeNAS to send to a non FreeNAS server and also cannot send recursive snapshots for what seems to be that reason. I am trying to do this by sending it as a non root user (ie. the target user is not root) so that when it tried to mount afterwards it fails. But I need to somehow tell FreeNAS, this is "OK". I have yet to get it fully working, but I have been playing around with the zfs allow/unallow settings so that the B server's non root user can do everything but set the mountpoint property. 
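[A sketch of the delegated-permissions idea described above; user and dataset names are hypothetical, and the exact permission set that receive needs varies between ZFS versions, so treat this as a starting point rather than a recipe:]

```shell
# Grant a non-root backup user enough rights to receive streams while
# deliberately omitting the 'mountpoint' property permission, so a
# received stream cannot change where anything mounts.
# Note: 'mount' here is the permission that receive/destroy need
# internally; it is not permission to edit the mountpoint property.
zfs allow backupuser receive,create,mount,snapshot,hold tank/backups

# Verify what was delegated:
zfs allow tank/backups
```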
---Mike -- ------------------- Mike Tancsa, tel +1 519 651 3400 Sentex Communications, mike@sentex.net Providing Internet services since 1994 www.sentex.net Cambridge, Ontario Canada http://www.tancsa.com/ From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 20:45:39 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BC58FCA7 for ; Mon, 5 Jan 2015 20:45:39 +0000 (UTC) Received: from mail-yk0-x22c.google.com (mail-yk0-x22c.google.com [IPv6:2607:f8b0:4002:c07::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 77BE52B8A for ; Mon, 5 Jan 2015 20:45:39 +0000 (UTC) Received: by mail-yk0-f172.google.com with SMTP id 131so10653125ykp.31 for ; Mon, 05 Jan 2015 12:45:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=eU8YY3OAa1lZyL5yEXIqrmVxwbRO7TsWYshc4D3tuPI=; b=MtH9Ja8Wy22fKfgyKFZflNNTQfEN63PWueB9YgwHG4HMMqHK82HMbSQygzEO4sfPvl z9C2PVOGPvO5NOi5WSmuajOgtYVwTfmDBL6A8PUhQgrIstFhYvW9F0Y1FyiuPtAwVzRu 3P962oVPo75/SplqZc7fG6VTcH8O4f3F02L5cXJ7fw//bORqpM3YL7EzyxG4YQ0d+S1Q y3lIfYq1l1JUjdVPHj0HINv7Er1atXOq60X6liHMXSS01qeKofTBKwwWjNtaEjxq+kqq AY+8/mjtr5SDOhTiY1in8uniZCZ5cAbSHUGDE5t02Lbv6r6oaNUKqKsYo/M/mmJx6a1E 0/bA== MIME-Version: 1.0 X-Received: by 10.236.40.14 with SMTP id e14mr61102273yhb.81.1420490738683; Mon, 05 Jan 2015 12:45:38 -0800 (PST) Received: by 10.170.48.136 with HTTP; Mon, 5 Jan 2015 12:45:38 -0800 (PST) In-Reply-To: References: Date: Mon, 5 Jan 2015 14:45:38 -0600 Message-ID: Subject: Re: ZFS Send / Receive Recursively Without Properties From: "Eric A. 
Borisch" To: Tim Gustafson Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Jan 2015 20:45:39 -0000 On Monday, January 5, 2015, Tim Gustafson wrote: > > If I drop the -R parameter to ZFS send, then it does not overwrite the > mountpoints, but it also does not destroy the non-existent snapshots > on the server "B". Snapshots on server "A" are automatically > destroyed by a script after 7 days, and we don't want to accumulate > snapshots on server "B" that have been destroyed from server "A". We > also would prefer to not run a snapshot purging script on server "B" > because ultimately this solution will be used for multiple source > servers, and each of them have different snapshot retention policies > that I'd like to not have to maintain in two separate places. > I was doing this too, until I realized that an accidental (fat fingered) removal of file systems/snapshots on the source side then flows through to remove it on the backup side as well. Granted, you may want the removal to happen on both sides, but for my tastes, removals of whole file systems on the backup shouldn't happen without user interaction. This concern led me to avoid -R for automated transfers, and necessitated a snapshot expiry script on the backup side. I like my backup to protect me from at least one level of fat-fingers. :) This has the added benefit that you can have a smaller retain count on the active source, but keep files longer on the backup side. I've also started using holds on the backup side as a belts-and-suspenders approach. Just my 2c. 
- Eric From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 21:07:36 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 85F46496 for ; Mon, 5 Jan 2015 21:07:36 +0000 (UTC) Received: from mail-yh0-x22c.google.com (mail-yh0-x22c.google.com [IPv6:2607:f8b0:4002:c01::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3E2AD2F37 for ; Mon, 5 Jan 2015 21:07:36 +0000 (UTC) Received: by mail-yh0-f44.google.com with SMTP id c41so10890353yho.17 for ; Mon, 05 Jan 2015 13:07:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=wTLhwADGbuPfdo1GiN6AVHdPP0Yk3WYYzYE5qYOgGYw=; b=kK0KTi3Uqw0pwr6muZyNzlvNIfH0bnncPJKhl09qf721f75pnyUdki04L7dwXKN868 FnHTwim63oFlETWVMd1Xuzs9gFfQ8TOWu8H6DDjFkBctx0HovlZlfwGIH3kgryzsLb8v 1M7agvx97wPCp5m2SQZGOIIJZsudcwFqjFOgb0Tvm7gzvC3BuNLW1V+7FgzS38HGDJ/F V1SNUJpdll2xR7SwqFHdPfcra6VHzpxIJ0X20XQGYSHISmP+Lbmgzke17eoCVjicAcgR OP76sYa9T0RBg+XV9ZEIOBGb1+BQEjHyaO0oj7YAISxSbumS4D9PSEgxEATSTkm6XQ7e L46w== MIME-Version: 1.0 X-Received: by 10.236.14.136 with SMTP id d8mr29372485yhd.139.1420492055208; Mon, 05 Jan 2015 13:07:35 -0800 (PST) Received: by 10.170.48.136 with HTTP; Mon, 5 Jan 2015 13:07:35 -0800 (PST) In-Reply-To: References: Date: Mon, 5 Jan 2015 15:07:35 -0600 Message-ID: Subject: Re: ZFS Send / Receive Recursively Without Properties From: "Eric A. 
Borisch" To: Tim Gustafson Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Jan 2015 21:07:36 -0000 > > This concern led me to avoid -R for automated transfers, and necessitated > a snapshot expiry script on the backup side. I like my backup to protect me > from at least one level of fat-fingers. :) > Let me correct this comment (it's been a while) I've avoided using -F (on the receive) with -RI (on the send), which is what I'm guessing you are doing if it is removing snapshots "for you." So: actually using -R (send), skipping -F (receive; no snapshot removal); post-xfer expiration of backup-side snapshots. Is the -u option of any use to you on the receive? You can receive it initially with -u and then set a local mount point property... 
- Eric From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 21:12:55 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 40945818 for ; Mon, 5 Jan 2015 21:12:55 +0000 (UTC) Received: from mail-ie0-x22c.google.com (mail-ie0-x22c.google.com [IPv6:2607:f8b0:4001:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 05ECF6409A for ; Mon, 5 Jan 2015 21:12:54 +0000 (UTC) Received: by mail-ie0-f172.google.com with SMTP id tr6so20342125ieb.31 for ; Mon, 05 Jan 2015 13:12:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=PeZ6ZpnwkA5QbsD4x7r/9D9HTIkxO/9Ptnu4n8wV9UU=; b=ldL0+lvATLi3hVTr3rlbvm3vWK7UEzMJvspRlvpHPD51ZD9yGqb91/E9smmCSP391T XErWezZpUbiqqMvb2l19r/XdtGpGxxr5KzsmUtx2YNQFn5QFdukes94bquo1ZUnCNW17 FnLx7Y2jeDW4EdnXTPahBxLKNWg5drA0nMcsM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=PeZ6ZpnwkA5QbsD4x7r/9D9HTIkxO/9Ptnu4n8wV9UU=; b=NyipTxlmOQVXSk+y7oj6zs2Bw6sqjGqhcxTHwEvyhpvaDvN7PfySS0aWM4qbFSNkv/ jktj6QYXOzp/HiIzQn9gP1N6WqecBXPBbvMOOY0BH9smO6rm1L5SqJToL5ADwjtMOfLs 1A9jeptI28BYgbduJ9r8F0THPr3lwOQd1R3QoOwAsWysKP3BmNFruBlVHAGIhyM2kWwF xs4Y1qTSA57czdki03azK5mxOUa7F9IuTvnCRCnKhcDTPSyIk7D9vqzVwAv8rv0If/Iu /Qi9RlLaD4fI//H1cLGsytxf1RhgDlxO3hq3EJfyztYNMEoIXIx/nsb2t8eLLgzYJ4v2 fPXA== X-Gm-Message-State: ALoCoQkGMLaTzJrDYICm/rycH6y0YBmfvoHZwt7YfDMbDuCSfr9TlvZ7gDElUMG2Xj5oR+MTWVAt MIME-Version: 1.0 X-Received: by 10.107.17.169 with SMTP id 
41mr79797684ior.90.1420492374419; Mon, 05 Jan 2015 13:12:54 -0800 (PST) Received: by 10.43.156.75 with HTTP; Mon, 5 Jan 2015 13:12:54 -0800 (PST) In-Reply-To: References: Date: Mon, 5 Jan 2015 13:12:54 -0800 Message-ID: Subject: Re: ZFS Send / Receive Recursively Without Properties From: Tim Gustafson To: "Eric A. Borisch" Content-Type: text/plain; charset=UTF-8 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Jan 2015 21:12:55 -0000 > I've avoided using -F (on the receive) with -RI (on the send), which is what > I'm guessing you are doing if it is removing snapshots "for you." > > So: actually using -R (send), skipping -F (receive; no snapshot removal); > post-xfer expiration of backup-side snapshots. I understand, but -R on the send side is also causing the file system properties to be copied over, which is what I don't want. In particular the mount point, but really I don't need any of the properties copied over. > Is the -u option of any use to you on the receive? You can receive it > initially with -u and then set a local mount point property... I use that to keep the file system from accidentally being mounted as part of the initial sync operation, but mostly it's a no-op and harmless. 
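[One way to see exactly which properties send -R carried over is to check each property's source column; a sketch, with a hypothetical dataset name:]

```shell
# SOURCE column: 'received' means the value arrived via send -R;
# 'local' means it was set on this box and wins over future receives.
zfs get -o name,property,value,source mountpoint,canmount \
  tank/backups/source-server/root

# To discard a received value and fall back to the parent's setting:
# zfs inherit mountpoint tank/backups/source-server/root
```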
-- Tim Gustafson tjg@ucsc.edu 831-459-5354 Baskin Engineering, Room 313A From owner-freebsd-fs@FreeBSD.ORG Mon Jan 5 21:22:03 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 71EC4988 for ; Mon, 5 Jan 2015 21:22:03 +0000 (UTC) Received: from thyme.infocus-llc.com (thyme.infocus-llc.com [199.15.120.10]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 45FD064230 for ; Mon, 5 Jan 2015 21:22:03 +0000 (UTC) Received: from draco.over-yonder.net (c-75-65-60-66.hsd1.ms.comcast.net [75.65.60.66]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by thyme.infocus-llc.com (Postfix) with ESMTPSA id 94A2637B62B; Mon, 5 Jan 2015 15:21:56 -0600 (CST) Received: by draco.over-yonder.net (Postfix, from userid 100) id 3kGVCS0BXDz2Zp; Mon, 5 Jan 2015 15:21:56 -0600 (CST) Date: Mon, 5 Jan 2015 15:21:55 -0600 From: "Matthew D. Fuller" To: Tim Gustafson Subject: Re: ZFS Send / Receive Recursively Without Properties Message-ID: <20150105212155.GI1937@over-yonder.net> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Editor: vi X-OS: FreeBSD User-Agent: Mutt/1.5.23-fullermd.4 (2014-03-12) X-Virus-Scanned: clamav-milter 0.98.5 at thyme.infocus-llc.com X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Jan 2015 21:22:03 -0000 On Mon, Jan 05, 2015 at 11:05:55AM -0800 I heard the voice of Tim Gustafson, and lo! 
it spake thus: > > This works well, except that it sets the mountpoints on server "B" > server's copy of the file systems to whatever they were on the > source system, which overwrites server "B"'s root file system when > we send the root file system from server "A". I've run into this a time or two. I was actually thinking recently it might be nice to have a property on a filesystem similar to the zpool-level altroot, specifically for cases like this. -- Matthew Fuller (MF4839) | fullermd@over-yonder.net Systems/Network Administrator | http://www.over-yonder.net/~fullermd/ On the Internet, nobody can hear you scream. From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 03:17:25 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 19064558 for ; Tue, 6 Jan 2015 03:17:25 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 7FC092148 for ; Tue, 6 Jan 2015 03:17:24 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ap4EAIlSq1SDaFve/2dsb2JhbADSUAICAQ X-IronPort-AV: E=Sophos;i="5.07,704,1413259200"; d="scan'208";a="181826544" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 05 Jan 2015 22:17:22 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id DD5C83CE1D; Mon, 5 Jan 2015 22:17:22 -0500 (EST) Date: Mon, 5 Jan 2015 22:17:22 -0500 (EST) From: Rick Macklem To: =?utf-8?B?TG/Dr2M=?= Blot Message-ID: <2093433467.6650515.1420514242874.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: High Kernel Load with nfsv4 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable 
X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Jan 2015 03:17:25 -0000

Loic Blot wrote:
> Hi Rick,
> nfsstat -e -s doesn't show useful data on the server.
>
Well, as far as I know, it returns valid information. (See below.)

> Server Info:
>  Getattr  Setattr   Lookup Readlink     Read    Write   Create    Remove
> 26935254    16911  5755728      302  2334920  3673866        0    328332
>   Rename     Link  Symlink    Mkdir   Rmdir   Readdir RdirPlus    Access
>    77980       28        0        0       3      8900        3   1806052
>    Mknod   Fsstat   Fsinfo PathConf  Commit   LookupP  SetClId SetClIdCf
>        1     1095        0        0  614377      8172        8         8
>     Open OpenAttr OpenDwnGr OpenCfrm DelePurge DeleRet    GetFH      Lock
>  1595299        0     44145     1495         0       0  5197490    635015
>    LockT    LockU    Close   Verify  NVerify     PutFH PutPubFH PutRootFH
>        0   614919  1270938        0        0  22688676        0         5
>    Renew RestoreFH  SaveFH  Secinfo RelLckOwn V4Create
>    42104   197606   275820        0       143     4578
> Server:
> Retfailed Faults Clients
>         0      0       6
> OpenOwner  Opens LockOwner Locks Delegs
>     32335 145448       204   181      0

Well, 145448 Opens is a lot of open files. Each of these uses a kernel
malloc'd data structure that is linked into multiple linked lists. The
question is: why aren't these Opens being closed? Since FreeBSD does I/O
on an mmap'd file after closing it, the FreeBSD NFSv4 client is forced to
delay doing Close RPCs until the vnode is VOP_INACTIVE()/VOP_RECLAIM()'d.
(The VOP_RECLAIM() case is needed, since VOP_INACTIVE() isn't guaranteed
to be called.) Since there were about 1.5 million Opens and 1.27 million
Closes, it does appear that Opens are being Closed. That said, I'm not
sure I would have imagined 1.5 million file Opens in a few days. My guess
is that this is the bottleneck.
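[Editor's note: the Open counts in `nfsstat -e` output like the above can be pulled out mechanically. A minimal sketch; the `opens_count` helper name is mine, and it assumes the `OpenOwner  Opens ...` header line followed by a line of values, as in the output above:]

```shell
# Hypothetical helper: extract the "Opens" column from nfsstat -e output.
# Assumes the layout shown above: a header line containing "OpenOwner" and
# "Opens", with the corresponding values on the following line.
opens_count() {
  awk '/OpenOwner[ \t]+Opens/ { getline; print $2; exit }'
}

# e.g. on the server:    nfsstat -e -s | opens_count
# or on each client:     nfsstat -e -c | opens_count
```

Run against each NFSv4 client's `nfsstat -e -c` output, this gives a quick per-client tally to compare with the server's 145448.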
I'd suggest that you do:
# nfsstat -e -c
on each of the NFSv4 clients and see how many Opens/client there are.
I vaguely remember an upper limit in the client, but can't remember what
it is set to.
--> I suspect the client Open/Lock limit needs to be increased.
(I can't remember if the server also has a limit, but I think it does.)
Then the size of the hash tables used to search the Opens may also need
to be increased a lot.

Also, I'd suggest you take a look at whatever apps are running on the
client(s) and try to figure out why they are Opening so many files.
My guess is that the client(s) are getting bogged down by all these
Opens.

> Server Cache Stats:
>    Inprog      Idem  Non-idem    Misses CacheSize   TCPPeak
>         0         0         1  15082947        60     16522
>
> Only GetAttr and Lookup increase, and only every 4-5 seconds and
> only by +2 to +5 in these values.
>
> Now on the client, if I take four process stacks I get:
>
> PID TID COMM TDNAME KSTACK
> 63170 102547 mv - mi_switch+0xe1
> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
> vn_open_cred+0x21d kern_openat+0x26f amd64_syscall+0x351
> Xfast_syscall+0xfb
>
> Another mv:
> 63140 101738 mv - mi_switch+0xe1
> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
> kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351
> Xfast_syscall+0xfb
>
> 62070 102170 sendmail - mi_switch+0xe1
> sleepq_timedwait+0x3a _sleep+0x26e clnt_vc_call+0x666
> clnt_reconnect_call+0x4fa newnfs_request+0xa8c nfscl_request+0x72
> nfsrpc_lookup+0x1fb nfs_lookup+0x508 VOP_LOOKUP_APV+0xa1
> lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30
> amd64_syscall+0x351 Xfast_syscall+0xfb
>
> 63200 100930 mv - mi_switch+0xe1
> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
> kern_statat_vnhook+0xae sys_lstat+0x30
> amd64_syscall+0x351
> Xfast_syscall+0xfb
>
The above simply says that thread 102170 (the sendmail one) is waiting
for a Lookup reply from the server and the other 3 are waiting for the
mutex that protects the state structures in the client. (I suspect some
other thread in the client is wading through the Open list, if a single
client has a lot of these 145K Opens.)

> When the client is in this state, the server was doing nothing special
> (procstat -kk):
>
> PID TID COMM TDNAME KSTACK
> 895 100538 nfsd nfsd: master mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de
> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> amd64_syscall+0x351 Xfast_syscall+0xfb
> 895 100568 nfsd nfsd: service mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe
> [... the remaining ~90 "nfsd: service" threads (TIDs 100569-100801)
> all show this same idle stack; the repeats are omitted for brevity ...]
>
> I really think it's a client-side problem, maybe a lookup problem.
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> 5 January 2015 14:35 "Rick Macklem" wrote:
> > Loic Blot wrote:
> >
> >> Hi,
> >> happy new year Rick and @freebsd-fs.
> >>
> >> After some days, I looked at my NFSv4.1 mount. At server start it
> >> was calm, but after 4 days, here is the top stat...
> >>
> >> CPU: 0.0% user, 0.0% nice, 100% system, 0.0% interrupt, 0.0% idle
> >>
> >> Definitely I think it's a problem on the client side. What can I
> >> look at in the running kernel to resolve this issue?
> >
> > Well, I'd start with:
> > # nfsstat -e -s
> > - run repeatedly on the server (once every N seconds in a loop).
> > Then look at the output, comparing the counts, and see which RPCs
> > are being performed by the client(s). You are looking for which
> > RPCs are being done a lot. (If one RPC is almost 100% of the load,
> > then it might be a client/caching issue for whatever that RPC is
> > doing.)
> >
> > Also look at the Open/Lock counts near the end of the output.
> > If the # of Opens/Locks is large, it may be possible to reduce the
> > CPU overheads by using larger hash tables.
> >
> > Then you need to profile the server kernel to see where the CPU
> > is being used.
> > Hopefully someone else can fill you in on how to do that, because
> > I'll admit I don't know how to.
> > Basically you are looking to see if the CPU is being used in
> > the NFS server code or ZFS.
> >
> > Good luck with it, rick
> >
> >> Regards,
> >>
> >> Loïc Blot,
> >> UNIX Systems, Network and Security Engineer
> >> http://www.unix-experience.fr
> >>
> >> 30 December 2014 16:16 "Loïc Blot" wrote:
> >>> Hi Rick,
> >>> I upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFS v4.1
> >>> (mount options:
> >>> rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1)
> >>>
> >>> Performance is quite stable but it's slow. Not as slow as before,
> >>> but slow... services were launched
> >>> but no clients are using them, and system CPU % was 10-50%.
> >>>
> >>> I don't see anything on the NFSv4.1 server; it's perfectly stable
> >>> and functional.
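[Editor's note: the repeated-sampling idea Rick describes above can be scripted. This is only a sketch; the log path and 10-second interval are arbitrary choices:]

```shell
# Sample the server-side RPC counters in a loop (run on the NFS server):
#   while :; do date; nfsstat -e -s; sleep 10; done >> /var/tmp/nfsstat.log
# Then diff any counter between two samples to see which RPCs dominate.
delta() {   # delta OLD NEW -> increase between two samples
  echo $(( $2 - $1 ))
}

# Example: Lookup going from 2501097 to 5755728 between two samples.
delta 2501097 5755728   # prints 3254631
```

A counter whose delta dwarfs the others (here, Lookup) is the RPC to chase on the client side.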
> >>>
> >>> Regards,
> >>>
> >>> Loïc Blot,
> >>> UNIX Systems, Network and Security Engineer
> >>> http://www.unix-experience.fr
> >>>
> >>> 23 December 2014 00:20 "Rick Macklem" wrote:
> >>>
> >>>> Loic Blot wrote:
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> To clarify because of our exchanges. Here are the current sysctl
> >>>>> options for the server:
> >>>>>
> >>>>> vfs.nfsd.enable_nobodycheck=0
> >>>>> vfs.nfsd.enable_nogroupcheck=0
> >>>>>
> >>>>> vfs.nfsd.maxthreads=200
> >>>>> vfs.nfsd.tcphighwater=10000
> >>>>> vfs.nfsd.tcpcachetimeo=300
> >>>>> vfs.nfsd.server_min_nfsvers=4
> >>>>>
> >>>>> kern.maxvnodes=10000000
> >>>>> kern.ipc.maxsockbuf=4194304
> >>>>> net.inet.tcp.sendbuf_max=4194304
> >>>>> net.inet.tcp.recvbuf_max=4194304
> >>>>>
> >>>>> vfs.lookup_shared=0
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 22 December 2014 09:42 "Loïc Blot" wrote:
> >>>>>
> >>>>> Hi Rick,
> >>>>> my 5 jails ran this weekend, and now I have some stats this
> >>>>> Monday.
> >>>>>
> >>>>> Hopefully the deadlock was fixed, yeah, but not everything is
> >>>>> good :(
> >>>>>
> >>>>> On the NFSv4 server (FreeBSD 10.1) the system uses 35% CPU.
> >>>>>
> >>>>> As I can see, this is because of nfsd:
> >>>>>
> >>>>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H
> >>>>> 273.68% nfsd: server (nfsd)
> >>>>>
> >>>>> If I look at dmesg I see:
> >>>>> nfsd server cache flooded, try increasing vfs.nfsd.tcphighwater
> >>>>
> >>>> Well, you have a couple of choices:
> >>>> 1 - Use NFSv4.1 (add "minorversion=1" to your mount options).
> >>>> (NFSv4.1 avoids use of the DRC and instead uses something
> >>>> called sessions. See below.)
> >>>> OR
> >>>>
> >>>>> vfs.nfsd.tcphighwater was set to 10000, I increased it to 15000
> >>>>
> >>>> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see
> >>>> "nfs server cache flooded" messages. (I think Garrett Wollman
> >>>> uses 100000.) You may still see quite a bit of CPU overhead.
> >>>>
> >>>> OR
> >>>>
> >>>> 3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets rid
> >>>> of the CPU overheads). However, there is a risk of data
> >>>> corruption if you have a client->server network partitioning of
> >>>> a moderate duration, because a non-idempotent RPC may get
> >>>> redone, because the client times out waiting for a reply. If a
> >>>> non-idempotent RPC gets done twice on the server, data
> >>>> corruption can happen.
> >>>> (The DRC provides improved correctness, but does add overhead.)
> >>>>
> >>>> If #1 works for you, it is the preferred solution, since
> >>>> Sessions in NFSv4.1 solves the correctness problem in a good,
> >>>> space-bound way. A session basically has N (usually 32 or 64)
> >>>> slots and only allows one outstanding RPC/slot. As such, it can
> >>>> cache the previous reply for each slot (32 or 64 of them) and
> >>>> guarantee "exactly once" RPC semantics.
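[Editor's note: for reference, options 2 and 3 above map onto sysctl settings like the following; the values are only illustrative (100000 is the figure Rick attributes to Garrett Wollman), and option 1 is a client-side mount option rather than a server sysctl:]

```shell
# /etc/sysctl.conf fragment -- pick ONE of the approaches above:
vfs.nfsd.tcphighwater=100000    # option 2: enlarge the DRC flood limit
#vfs.nfsd.cachetcp=0            # option 3: disable the DRC (risks redone
                                # non-idempotent RPCs after a partition)
# option 1 (preferred): mount with nfsv4,minorversion=1 on the clients.
```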
> >>>>
> >>>> rick
> >>>>
> >>>>> Here is 'nfsstat -s' output:
> >>>>>
> >>>>> Server Info:
> >>>>>  Getattr Setattr  Lookup Readlink    Read   Write Create  Remove
> >>>>> 12600652    1812 2501097      156 1386423 1983729    123  162067
> >>>>>   Rename    Link Symlink    Mkdir   Rmdir Readdir RdirPlus Access
> >>>>>    36762       9       0        0       0    3147        0 623524
> >>>>>    Mknod  Fsstat  Fsinfo PathConf  Commit
> >>>>>        0       0       0        0  328117
> >>>>> Server Ret-Failed
> >>>>>         0
> >>>>> Server Faults
> >>>>>         0
> >>>>> Server Cache Stats:
> >>>>>   Inprog    Idem Non-idem   Misses
> >>>>>        0       0        0 12635512
> >>>>> Server Write Gathering:
> >>>>> WriteOps WriteRPC  Opsaved
> >>>>>  1983729  1983729        0
> >>>>>
> >>>>> And here is 'procstat -kk' for nfsd (server):
> >>>>>
> >>>>> 918 100528 nfsd nfsd: master mi_switch+0xe1
> >>>>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
> >>>>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1
> >>>>> svc_run+0x1de
> >>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> >>>>> amd64_syscall+0x351 Xfast_syscall+0xfb
> >>>>> 918 100568 nfsd nfsd: service mi_switch+0xe1
> >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> >>>>> _cv_wait_sig+0x16a
> >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> >>>>> fork_trampoline+0xe
> >>>>> [... the remaining "nfsd: service" threads (TIDs 100569 and up)
> >>>>> all show this same idle stack; repeats omitted for brevity ...]
> >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100599 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100600 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100601 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100602 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100603 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100604 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100605 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100606 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> 
fork_trampoline+0xe > >>>>> 918 100607 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100608 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100609 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100610 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100611 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100612 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100613 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100614 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100615 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> 
svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100616 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100617 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100618 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100619 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100620 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100621 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100622 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100623 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100624 nfsd nfsd: service mi_switch+0xe1 > >>>>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100625 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100626 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100627 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100628 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100629 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100630 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100631 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100632 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe 
> >>>>> 918 100633 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100634 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100635 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100636 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100637 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100638 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100639 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100640 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100641 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e 
svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100642 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100643 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100644 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100645 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100646 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100647 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100648 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100649 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100650 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab 
sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100651 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100652 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100653 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100654 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100655 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100656 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100657 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100658 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100659 nfsd 
nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100660 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100661 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 918 100662 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> --- > >>>>>=20 > >>>>> Now if we look at client (FreeBSD 9.3) > >>>>>=20 > >>>>> We see system was very busy and do many and many interrupts > >>>>>=20 > >>>>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0% > >>>>> idle > >>>>>=20 > >>>>> A look at process list shows that there are many sendmail > >>>>> process > >>>>> in > >>>>> state nfstry > >>>>>=20 > >>>>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for > >>>>> /var/spool/clientm > >>>>>=20 > >>>>> Here is 'nfsstat -c' output: > >>>>>=20 > >>>>> Client Info: > >>>>> Rpc Counts: > >>>>> Getattr Setattr Lookup Readlink Read Write Create > >>>>> Remove > >>>>> 1051347 1724 2494481 118 903902 1901285 162676 > >>>>> 161899 > >>>>> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus > >>>>> Access > >>>>> 36744 2 0 114 40 3131 0 > >>>>> 544136 > >>>>> Mknod Fsstat Fsinfo PathConf Commit > >>>>> 9 0 0 0 245821 > >>>>> Rpc Info: > >>>>> TimedOut Invalid X Replies Retries Requests > >>>>> 0 0 0 0 8356557 > >>>>> Cache Info: > >>>>> Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits > >>>>> Misses > 
>>>>> 108754455 491475 54229224 2437229 46814561 821723 5132123
> >>>>> 1871871
> >>>>> BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits
> >>>>> Misses
> >>>>> 144035 118 53736 2753 27813 1 57238839
> >>>>> 544205
> >>>>>
> >>>>> If you need more information, tell me; I'll leave the PoC in
> >>>>> this state.
> >>>>>
> >>>>> Thanks
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 21 December 2014 01:33, "Rick Macklem" wrote:
> >>>>>
> >>>>> Loic Blot wrote:
> >>>>>
> >>>>>> Hi Rick,
> >>>>>> OK, I don't need local locks; I hadn't understood what that
> >>>>>> option was for, so I removed it.
> >>>>>> I'll do more tests on Monday.
> >>>>>> Thanks for the deadlock fix, for other people :)
> >>>>>
> >>>>> Good. Please let us know if running with
> >>>>> vfs.nfsd.enable_locallocks=0
> >>>>> gets rid of the deadlocks. (I think it fixes the one you saw.)
> >>>>>
> >>>>> On the performance side, you might also want to try different
> >>>>> values of readahead, if the Linux client has such a mount
> >>>>> option. (With the NFSv4-ZFS sequential vs random I/O heuristic,
> >>>>> I have no idea what the optimal readahead value would be.)
> >>>>>
> >>>>> Good luck with it and please let us know how it goes, rick
> >>>>> ps: I now have a patch to fix the deadlock when
> >>>>> vfs.nfsd.enable_locallocks=1 is set. I'll post it for anyone
> >>>>> who is interested after I put it through some testing.
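[Editor's note: for anyone following along, the two knobs discussed above would look roughly like this on FreeBSD. This is only a sketch; the server name, export path, mount point, and the readahead value of 4 are illustrative assumptions, not values taken from this thread.]

```
# On the NFS server: hand byte-range lock requests to the NFSv4 lock
# state machine only, without also acquiring local POSIX locks
# (the setting discussed above for avoiding the deadlock).
# /etc/sysctl.conf
vfs.nfsd.enable_locallocks=0

# On a FreeBSD client: experiment with the readahead mount option.
# /etc/fstab (hypothetical server, export, and mount point)
nfsserver:/export  /mnt  nfs  rw,nfsv4,minorversion=1,readahead=4  0  0
```

On FreeBSD the readahead value is capped by mount_nfs(8); Linux clients size NFS readahead differently (via the block-device readahead setting rather than a mount option), so the tuning there is not one-to-one.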
> >>>>>
> >>>>> --
> >>>>> Best regards,
> >>>>> Loïc BLOT,
> >>>>> UNIX systems, security and network engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> On Thursday 18 December 2014 at 19:46 -0500, Rick Macklem wrote:
> >>>>>
> >>>>> Loic Blot wrote:
> >>>>>> Hi Rick,
> >>>>>> I tried to start an LXC container on Debian Squeeze from my
> >>>>>> FreeBSD ZFS+NFSv4 server and I also got a deadlock on nfsd
> >>>>>> (vfs.lookup_shared=0). nfsd deadlocks each time I launch a
> >>>>>> Squeeze container, it seems (3 tries, 3 failures).
> >>>>>
> >>>>> Well, I'll take a look at this 'procstat -kk', but the only
> >>>>> thing I've seen posted w.r.t. avoiding deadlocks in ZFS is to
> >>>>> not use nullfs. (I have no idea if you are using any nullfs
> >>>>> mounts, but if so, try getting rid of them.)
> >>>>>
> >>>>> Here's a high level post about the ZFS and vnode locking
> >>>>> problem, but there is no patch available, as far as I know.
> >>>>>
> >>>>> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
> >>>>>
> >>>>> rick
> >>>>>
> >>>>> 921 - D 0:00.02 nfsd: server (nfsd)
> >>>>>
> >>>>> Here is the procstat -kk:
> >>>>>
> >>>>> PID TID COMM TDNAME KSTACK
> >>>>> 921 100538 nfsd nfsd: master mi_switch+0xe1
> >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
> >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> >>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
> >>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
> >>>>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
> >>>>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> >>>>> [threads 100572 through 100615 all show the identical idle
> >>>>> service stack (mi_switch -> sleepq_catch_signals ->
> >>>>> sleepq_wait_sig -> _cv_wait_sig -> svc_run_internal ->
> >>>>> svc_thread_start -> fork_exit -> fork_trampoline) and are
> >>>>> elided here]
> >>>>> 921 100616 nfsd nfsd: service mi_switch+0xe1
> >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
> >>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 921 100617 nfsd nfsd: service [identical idle service stack,
> >>>>> elided]
> >>>>> 921 100618 nfsd nfsd: service mi_switch+0xe1
> >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> [threads 100619 through 100628 show the identical idle service
> >>>>> stack and are elided here]
> >>>>> 921 100629 nfsd nfsd: service mi_switch+0xe1
> >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> >>>>> _cv_wait_sig+0x16a
svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100630 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100631 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100632 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100633 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100634 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100635 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100636 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100637 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100638 nfsd nfsd: service mi_switch+0xe1 > >>>>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100639 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100640 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100641 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100642 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100643 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100644 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100645 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100646 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe 
> >>>>> 921 100647 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100648 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100649 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100650 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100651 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100652 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100653 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100654 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100655 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e 
svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100656 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100657 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100658 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100659 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100660 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100661 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100662 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100663 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>> _cv_wait_sig+0x16a > >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>> fork_trampoline+0xe > >>>>> 921 100664 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_catch_signals+0xab 
sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 921 100665 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 921 100666 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 15 December 2014 15:18, "Rick Macklem" wrote:
> >>>>>
> >>>>> Loic Blot wrote:
> >>>>>
> >>>>>> For more information, here is procstat -kk on nfsd; if you need more live data, tell me.
> >>>>>>
> >>>>>> Regards, PID TID COMM TDNAME KSTACK
> >>>>>> 918 100529 nfsd nfsd: master mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c amd64_syscall+0x351
> >>>>>
> >>>>> Well, most of the threads are stuck like this one, waiting for a vnode lock in ZFS. All of them appear to be in zfs_fhtovp().
> >>>>> I'm not a ZFS guy, so I can't help much. I'll try changing the subject line to include ZFS vnode lock, so maybe the ZFS guys will take a look.
> >>>>>
> >>>>> The only thing I've seen suggested is trying:
> >>>>> sysctl vfs.lookup_shared=0
> >>>>> to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't obey the vnode locking rules for lookup and rename, according to the posting I saw.
> >>>>>
> >>>>> I've added a couple of comments about the other threads below, but they are all either waiting for an RPC request or waiting for the threads stuck on the ZFS vnode lock to complete.
> >>>>>
> >>>>> rick
> >>>>>
> >>>>>> 918 100564 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>
> >>>>> Fyi, this thread is just waiting for an RPC to arrive. (Normal)
> >>>>>
> >>>>>> 918 100565 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100566 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100567 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100568 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100569 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e
svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100570 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100571 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100572 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>
> >>>>> This one (and a few others) is waiting for the nfsv4_lock. This happens because other threads are stuck with RPCs in progress (i.e. the ones waiting on the vnode lock in zfs_fhtovp()). For these, the RPC needs to lock out other threads to do the operation, so it waits for the nfsv4_lock(), which can exclusively lock the NFSv4 data structures once all other nfsd threads complete their RPCs in progress.
> >>>>>
> >>>>>> 918 100573 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>
> >>>>> Same as above.
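The serialization described above (an exclusive lock on the NFSv4 state that is only granted once every in-progress RPC drains, while new RPCs back up behind the exclusive waiter) can be modeled roughly as follows; this is an illustrative sketch, with names of my own invention, not the actual kernel code:

```python
import threading

# Toy model of the nfsv4_lock behaviour: an exclusive locker waits until
# the count of in-progress RPCs drains to zero, and new RPCs wait while
# an exclusive lock is wanted or held.
class DrainingLock:
    def __init__(self):
        self._cv = threading.Condition()
        self._in_progress = 0    # RPCs currently being served
        self._exclusive = False  # exclusive lock wanted/held

    def begin_rpc(self):
        with self._cv:
            while self._exclusive:
                self._cv.wait()  # like the nfsv4_lock() sleepers above
            self._in_progress += 1

    def end_rpc(self):
        with self._cv:
            self._in_progress -= 1
            self._cv.notify_all()

    def lock_exclusive(self):
        with self._cv:
            self._exclusive = True
            while self._in_progress > 0:
                self._cv.wait()  # drain: wait for in-progress RPCs

    def unlock_exclusive(self):
        with self._cv:
            self._exclusive = False
            self._cv.notify_all()
```

In this model, if one RPC never completes (e.g. a thread stuck on the ZFS vnode lock), lock_exclusive() never returns and every new RPC queues up behind it, which matches the pile-up visible in the procstat output.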
> >>>>>
> >>>>>> 918 100574 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100575 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100576 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100577 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100578 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100579 nfsd nfsd:
service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100580 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100581 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100582 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100583 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100584 nfsd nfsd: service mi_switch+0xe1 > >>>>>> 
sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100585 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100586 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100587 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100588 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100589 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d 
__lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100590 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100591 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100592 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100593 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100594 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> 
vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100595 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100596 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100597 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100598 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100599 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab 
_vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100600 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100601 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100602 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100603 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > >>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>> svc_thread_start+0xb > >>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>> 918 100604 nfsd nfsd: service mi_switch+0xe1 > >>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>> zfs_fhtovp+0x38d > 
>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100605 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100606 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>> 918 100607 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>
> >>>>> Lots more waiting for the ZFS vnode lock in zfs_fhtovp().
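Eyeballing hundreds of near-identical stacks like these is tedious; a small helper script (hypothetical, not part of any FreeBSD tool) can condense `procstat -kk` output into a count per distinct kernel stack, which makes the zfs_fhtovp() pile-up obvious at a glance:

```python
from collections import Counter

# Condense `procstat -kk <pid>` output into one entry per distinct
# kernel stack, counting how many threads share each stack.
def tally_stacks(procstat_output):
    counts = Counter()
    for line in procstat_output.splitlines():
        # KSTACK frames look like func+0xoffset; everything else is
        # PID/TID/COMM/TDNAME header fields.
        frames = [f for f in line.split() if "+0x" in f]
        if frames:
            counts[" ".join(frames)] += 1
    return counts.most_common()

# Shortened sample in the same shape as the dumps above.
sample = """918 100574 nfsd nfsd: service mi_switch+0xe1 zfs_fhtovp+0x38d
918 100575 nfsd nfsd: service mi_switch+0xe1 zfs_fhtovp+0x38d
918 100616 nfsd nfsd: service mi_switch+0xe1 nfsv4_lock+0x9b"""

for stack, n in tally_stacks(sample):
    print(n, stack)
# → 2 mi_switch+0xe1 zfs_fhtovp+0x38d
# → 1 mi_switch+0xe1 nfsv4_lock+0x9b
```

Fed the full dump, this would reduce the thread listing to a handful of lines: the idle _cv_wait_sig stacks, the zfs_fhtovp vnode-lock waiters, and the nfsv4_lock waiters.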
> >>>>>
> >>>>> 918 100608 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 918 100609 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 918 100610 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 918 100611 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 918 100612 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 918 100613 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>> 918 100614 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287
nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100615 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100616 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100617 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100618 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100619 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100620 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100621 nfsd nfsd: service 
mi_switch+0xe1 > >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100622 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100623 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100624 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100625 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100626 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a 
fork_trampoline+0xe > >>>>> 918 100627 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100628 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100629 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100630 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100631 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100632 nfsd nfsd: service mi_switch+0xe1 > >>>>> 
sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100633 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100634 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100635 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100636 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100637 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c 
VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100638 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100639 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100640 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100641 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100642 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c 
nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100643 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100644 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100645 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100646 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100647 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 
svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100648 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100649 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100650 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100651 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100652 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a 
fork_trampoline+0xe > >>>>> 918 100653 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100654 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100655 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100656 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100657 nfsd nfsd: service mi_switch+0xe1 > >>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>> zfs_fhtovp+0x38d > >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>> svc_thread_start+0xb > >>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>> 918 100658 nfsd nfsd: service mi_switch+0xe1 > >>>>> 
sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
> >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
> >>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
> >>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
> >>>>> fork_exit+0x9a fork_trampoline+0xe
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 15 December 2014 13:29, "Loïc Blot" wrote:
> >>>>>
> >>>>> Hmmm...
> >>>>> now I'm experiencing a deadlock.
> >>>>>
> >>>>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server
> >>>>> (nfsd)
> >>>>>
> >>>>> The only way out was to reboot the server, but after rebooting the
> >>>>> deadlock occurred a second time when I started my jails over NFS.
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 15 December 2014 10:07, "Loïc Blot" wrote:
> >>>>>
> >>>>> Hi Rick,
> >>>>> after talking with my manager, NFSv4 is required on our
> >>>>> infrastructure. I upgraded the NFSv4+ZFS server from 9.3 to 10.1;
> >>>>> I hope this will resolve some issues...
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 10 December 2014 15:36, "Loïc Blot" wrote:
> >>>>>
> >>>>> Hi Rick,
> >>>>> thanks for your suggestion.
> >>>>> As for my locking bug, rpc.lockd is stuck in the rpcrecv state on the
> >>>>> server. kill -9 doesn't affect the process; it's blocked....
(State: Ds)
> >>>>>
> >>>>> As for performance:
> >>>>>
> >>>>> NFSv3: 60Mbps
> >>>>> NFSv4: 45Mbps
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 10 December 2014 13:56, "Rick Macklem" wrote:
> >>>>>
> >>>>> Loic Blot wrote:
> >>>>>
> >>>>>> Hi Rick,
> >>>>>> I'm trying NFSv3.
> >>>>>> Some jails start very well, but now I have an issue with lockd
> >>>>>> after some minutes:
> >>>>>>
> >>>>>> nfs server 10.10.X.8:/jails: lockd not responding
> >>>>>> nfs server 10.10.X.8:/jails lockd is alive again
> >>>>>>
> >>>>>> I looked at mbufs, but it seems there is no problem there.
> >>>>>
> >>>>> Well, if you need locks to be visible across multiple clients, then
> >>>>> I'm afraid you are stuck with using NFSv4 and the performance you get
> >>>>> from it. (There is no way to do file handle affinity for NFSv4 because
> >>>>> the read and write ops are buried in the compound RPC and not easily
> >>>>> recognized.)
> >>>>>
> >>>>> If the locks don't need to be visible across multiple clients, I'd
> >>>>> suggest trying the "nolockd" option with nfsv3.
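Rick's "nolockd" suggestion above could be tried with a mount like the following. This is only a sketch: the server path 10.10.X.8:/jails and the 32K transfer sizes are taken from elsewhere in this thread, and the local mount point /jails is an assumption.

```shell
# NFSv3 mount with client-local locking ("nolockd"), bypassing rpc.lockd.
# Only appropriate when no other client needs to see the locks.
mount_nfs -o nfsv3,tcp,nolockd,rsize=32768,wsize=32768 10.10.X.8:/jails /jails

# Equivalent /etc/fstab entry:
# 10.10.X.8:/jails  /jails  nfs  rw,nfsv3,tcp,nolockd,rsize=32768,wsize=32768  0  0
```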
> >>>>>
> >>>>>> Here is my rc.conf on the server:
> >>>>>>
> >>>>>> nfs_server_enable="YES"
> >>>>>> nfsv4_server_enable="YES"
> >>>>>> nfsuserd_enable="YES"
> >>>>>> nfsd_server_flags="-u -t -n 256"
> >>>>>> mountd_enable="YES"
> >>>>>> mountd_flags="-r"
> >>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>>> rpcbind_enable="YES"
> >>>>>> rpc_lockd_enable="YES"
> >>>>>> rpc_statd_enable="YES"
> >>>>>>
> >>>>>> Here is the client:
> >>>>>>
> >>>>>> nfsuserd_enable="YES"
> >>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>>> nfscbd_enable="YES"
> >>>>>> rpc_lockd_enable="YES"
> >>>>>> rpc_statd_enable="YES"
> >>>>>>
> >>>>>> Do you have any ideas?
> >>>>>>
> >>>>>> Regards,
> >>>>>>
> >>>>>> Loïc Blot,
> >>>>>> UNIX Systems, Network and Security Engineer
> >>>>>> http://www.unix-experience.fr
> >>>>>>
> >>>>>> 9 December 2014 04:31, "Rick Macklem" wrote:
> >>>>>>> Loic Blot wrote:
> >>>>>>>
> >>>>>>>> Hi Rick,
> >>>>>>>>
> >>>>>>>> I waited 3 hours (no lag at jail launch) and then I ran: sysrc
> >>>>>>>> memcached_flags="-v -m 512"
> >>>>>>>> The command was very, very slow...
> >>>>>>>>
> >>>>>>>> Here is a dd over NFS:
> >>>>>>>>
> >>>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579
> >>>>>>>> bytes/sec)
> >>>>>>>
> >>>>>>> Can you try the same read using an NFSv3 mount?
> >>>>>>> (If it runs much faster, you have probably been bitten by the ZFS
> >>>>>>> "sequential vs random" read heuristic, which I've been told thinks
> >>>>>>> NFS is doing "random" reads without file handle affinity. File
> >>>>>>> handle affinity is very hard to do for NFSv4, so it isn't done.)
> >>>>>
> >>>>> I was actually suggesting that you try the "dd" over nfsv3 to see how
> >>>>> the performance compared with nfsv4.
If you do that, please > >>>>> post > >>>>> the > >>>>> comparable results. > >>>>>=20 > >>>>> Someday I would like to try and get ZFS's sequential vs > >>>>> random > >>>>> read > >>>>> heuristic modified and any info on what difference in > >>>>> performance > >>>>> that > >>>>> might make for NFS would be useful. > >>>>>=20 > >>>>> rick > >>>>>=20 > >>>>> rick > >>>>>=20 > >>>>> This is quite slow... > >>>>>=20 > >>>>> You can found some nfsstat below (command isn't finished > >>>>> yet) > >>>>>=20 > >>>>> nfsstat -c -w 1 > >>>>>=20 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 16 0 > >>>>> 2 0 0 0 0 0 17 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 4 0 0 0 0 4 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 3 0 > >>>>> 0 0 0 0 0 0 3 0 > >>>>> 37 10 0 8 0 0 14 1 > >>>>> 18 16 0 4 1 2 4 0 > >>>>> 78 91 0 82 6 12 30 0 > >>>>> 19 18 0 2 2 4 2 0 > >>>>> 0 0 0 0 2 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 1 0 0 0 0 1 0 > >>>>> 4 6 0 0 6 0 3 0 > >>>>> 2 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 1 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 1 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 6 108 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 98 54 0 86 11 0 25 0 > >>>>> 36 24 0 39 25 0 10 1 > >>>>> 67 8 0 63 63 0 41 0 > >>>>> 34 0 0 35 34 0 0 0 > >>>>> 75 0 0 75 77 
0 0 0 > >>>>> 34 0 0 35 35 0 0 0 > >>>>> 75 0 0 74 76 0 0 0 > >>>>> 33 0 0 34 33 0 0 0 > >>>>> 0 0 0 0 5 0 0 0 > >>>>> 0 0 0 0 0 0 6 0 > >>>>> 11 0 0 0 0 0 11 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 17 0 0 0 0 1 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 4 5 0 0 0 0 12 0 > >>>>> 2 0 0 0 0 0 26 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 4 0 0 0 0 4 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 2 0 > >>>>> 2 0 0 0 0 0 24 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 0 0 0 0 0 7 0 > >>>>> 2 1 0 0 0 0 1 0 > >>>>> 0 0 0 0 2 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 6 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 6 0 0 0 0 3 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 2 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 4 71 0 0 0 0 0 0 > >>>>> 0 1 0 0 0 0 0 0 > >>>>> 2 36 0 0 0 0 1 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 1 0 0 0 0 0 1 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 0 0 0 0 0 0 0 0 > >>>>> 79 6 0 79 79 0 2 0 > >>>>> 25 0 0 25 26 0 6 0 > >>>>> 43 18 0 39 46 0 23 0 > >>>>> 36 0 0 36 36 0 31 0 > >>>>> 68 1 0 66 68 0 0 0 > >>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > >>>>> 36 0 0 36 36 0 0 0 > >>>>> 48 0 0 48 49 0 0 0 > >>>>> 20 0 0 20 20 0 0 0 > 
>>>>> 0 0 0 0 0 0 0 0
> >>>>> 3 14 0 1 0 0 11 0
> >>>>> 0 0 0 0 0 0 0 0
> >>>>> 0 0 0 0 0 0 0 0
> >>>>> 0 4 0 0 0 0 4 0
> >>>>> 0 0 0 0 0 0 0 0
> >>>>> 4 22 0 0 0 0 16 0
> >>>>> 2 0 0 0 0 0 23 0
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 8 December 2014 09:36, "Loïc Blot" wrote:
> >>>>>> Hi Rick,
> >>>>>> I stopped the jails this weekend and started them this morning;
> >>>>>> I'll give you some stats this week.
> >>>>>>
> >>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
> >>>>>
> >>>>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
> >>>>>
> >>>>> On the server side my disks are behind a RAID controller which shows
> >>>>> a 512b volume, and write performance is very decent (dd if=/dev/zero
> >>>>> of=/jails/test.dd bs=4096 count=100000000 => 450MBps)
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>>
> >>>>> 5 December 2014 15:14, "Rick Macklem" wrote:
> >>>>>
> >>>>> Loic Blot wrote:
> >>>>>
> >>>>> Hi,
> >>>>> I'm trying to create a virtualisation environment based on jails.
> >>>>> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 server,
> >>>>> which exports an NFSv4 volume. This NFSv4 volume was mounted on a big
> >>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports, but only 1 was
> >>>>> used at this time).
> >>>>>
> >>>>> The problem is simple: my hypervisor runs 6 jails (using roughly 1%
> >>>>> CPU, 10GB RAM, and less than 1MB/s of bandwidth) and works fine at
> >>>>> first, but the system slows down and after 2-3 days becomes unusable.
> >>>>> When I look at top I see 80-100% system CPU and commands are very,
> >>>>> very slow. Many processes are tagged with nfs_cl*.
> >>>>>
> >>>>> To be honest, I would expect the slowness to be because of slow
> >>>>> response from the NFSv4 server, but if you do:
> >>>>> # ps axHl
> >>>>> on a client when it is slow and post that, it would give us some more
> >>>>> information on where the client side processes are sitting.
> >>>>> If you also do something like:
> >>>>> # nfsstat -c -w 1
> >>>>> and let it run for a while, that should show you how many RPCs are
> >>>>> being done and which ones.
> >>>>>
> >>>>> # nfsstat -m
> >>>>> will show you what your mount is actually using.
> >>>>> The only mount option I can suggest trying is
> >>>>> "rsize=32768,wsize=32768", since some network environments have
> >>>>> difficulties with 64K.
> >>>>>
> >>>>> There are a few things you can try on the NFSv4 server side, if it
> >>>>> appears that the clients are generating a large RPC load.
> >>>>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
> >>>>> - If the server is seeing a large write RPC load, then "sync=disabled"
> >>>>> might help, although it does run a risk of data loss when the server
> >>>>> crashes.
> >>>>> Then there are a couple of other ZFS related things (I'm not a ZFS
> >>>>> guy, but these have shown up on the mailing lists).
> >>>>> - make sure your volumes are 4K aligned and ashift=12 (in case a
> >>>>> drive that uses 4K sectors is pretending to be 512byte sectored)
> >>>>> - never run over 70-80% full if write performance is an issue
> >>>>> - use a ZIL on an SSD with good write performance
> >>>>>
> >>>>> The only NFSv4 thing I can tell you is that it is known that ZFS's
> >>>>> algorithm for determining sequential vs random I/O fails for NFSv4
> >>>>> during writing and this can be a performance hit. The only workaround
> >>>>> is to use NFSv3 mounts, since file handle affinity apparently fixes
> >>>>> the problem and this is only done for NFSv3.
> >>>>>
> >>>>> rick
> >>>>>
> >>>>> I saw that there are TSO issues with igb, so I tried to disable it
> >>>>> with sysctl, but that didn't solve the situation.
> >>>>>
> >>>>> Does anyone have ideas? I can give you more information if you need.
> >>>>>
> >>>>> Thanks in advance.
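Taken together, the server-side suggestions above amount to something like the following sketch. The dataset name tank/jails is hypothetical, and sync=disabled carries the data-loss risk noted in the advice it comes from.

```shell
# Disable the NFS duplicate request cache for TCP connections:
sysctl vfs.nfsd.cachetcp=0

# Trade write durability for throughput on the exported dataset
# (hypothetical dataset name; data may be lost if the server crashes):
zfs set sync=disabled tank/jails

# Verify pool alignment; ashift: 12 means 4K-aligned allocations:
zdb -C tank | grep ashift

# Disable TSO on the igb interface, as tried later in the thread:
ifconfig igb0 -tso
```

To make these persistent across reboots, the sysctl line would go in /etc/sysctl.conf and the `-tso` flag in the ifconfig_igb0 line of /etc/rc.conf.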
> >>>>> Regards,
> >>>>>
> >>>>> Loïc Blot,
> >>>>> UNIX Systems, Network and Security Engineer
> >>>>> http://www.unix-experience.fr
> >>>>> _______________________________________________
> >>>>> freebsd-fs@freebsd.org mailing list
> >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> >>>>> To unsubscribe, send any mail to
> >>>>> "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 08:25:55 2015
Return-Path:
Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E773BA03 for ; Tue, 6 Jan 2015 08:25:55 +0000 (UTC) Received: from smtp.unix-experience.fr (195-154-176-227.rev.poneytelecom.eu [195.154.176.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4EBBB648BE for ; Tue, 6 Jan 2015 08:25:54 +0000 (UTC) Received: from smtp.unix-experience.fr (unknown [192.168.200.21]) by smtp.unix-experience.fr (Postfix) with ESMTP id 2F48528018; Tue, 6 Jan 2015 08:25:50 +0000 (UTC) X-Virus-Scanned: scanned by unix-experience.fr Received: from smtp.unix-experience.fr ([192.168.200.21]) by smtp.unix-experience.fr (smtp.unix-experience.fr [192.168.200.21]) (amavisd-new, port 10024) with ESMTP id 6PunGYsl0D2j; Tue, 6 Jan 2015 08:25:41 +0000 (UTC) Received: from mail.unix-experience.fr (repo.unix-experience.fr [192.168.200.30]) by smtp.unix-experience.fr (Postfix) with ESMTPSA id 3292328008; Tue, 6 Jan 2015 08:25:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=unix-experience.fr; s=uxselect; t=1420532741; bh=mrQBmiD7srh5DyrVU4xbqpvm0aTnnlTxZsl36XNt+wc=; h=Date:From:Subject:To:Cc:In-Reply-To:References; b=FsnZdE/+LxE6yxWZuLJtqfhifWI/fJaUxSZ01TrDEuC35eN+9nZCHI9b3az4FzNhR BxlkNiGP0+Ke+MKkLUA3DODg9gvL0ryDXPo4wTZ9cpmiZ1YlkQIeHH4qbfN1b0/+r/ XQMmefopQ/1qjo4ZirYjLXj7SbpvADjOvP5xlvG4= Mime-Version: 1.0 Date: Tue, 06 Jan 2015 08:25:40 +0000 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-ID: X-Mailer: RainLoop/1.7.1.215 From: "=?utf-8?B?TG/Dr2MgQmxvdA==?=" Subject: Re: High Kernel Load with nfsv4 To: "Rick Macklem" In-Reply-To: <2093433467.6650515.1420514242874.JavaMail.root@uoguelph.ca> References: 
<2093433467.6650515.1420514242874.JavaMail.root@uoguelph.ca>
Cc: freebsd-fs@freebsd.org

Hi Rick,

I saw that some people have issues with igb cards and NFS.
For example: http://freebsd.1045724.n5.nabble.com/NFS-over-LAGG-lacp-poor-performance-td5906349.html

Can my problem be related? I use igb with the default queue count. Here are my vmstat -i outputs.

Server side:

interrupt                          total       rate
irq1: atkbd0                          18          0
irq20: ehci1                     2790134          2
irq21: ehci0                     2547642          2
cpu0:timer                      36299188         35
irq264: ciss0                    6352476          6
irq265: igb0:que 0               2716692          2
irq266: igb0:que 1              32205278         31
irq267: igb0:que 2              38395109         37
irq268: igb0:que 3               1413468          1
irq269: igb0:que 4              39207930         38
irq270: igb0:que 5               1622715          1
irq271: igb0:que 6               1634676          1
irq272: igb0:que 7               1190123          1
irq273: igb0:link                      2          0
cpu1:timer                      14074423         13
cpu8:timer                      12204739         11
cpu9:timer                      11384192         11
cpu3:timer                      10461566         10
cpu4:timer                      12785103         12
cpu6:timer                      10739344         10
cpu5:timer                      10978294         10
cpu7:timer                      10599705         10
cpu2:timer                      13998891         13
cpu10:timer                     11602361         11
cpu11:timer                     11568523         11
Total                          296772592        290

And client side:

interrupt                          total       rate
irq9: acpi0                            4          0
irq22: ehci1                      950519          2
irq23: ehci0                     1865060          4
cpu0:timer                     248128035        546
irq268: mfi0                      406896          0
irq269: igb0:que 0               2510556          5
irq270: igb0:que 1               2825336          6
irq271: igb0:que 2               2092958          4
irq272: igb0:que 3               1960849          4
irq273: igb0:que 4               2645369          5
irq274: igb0:que 5               2735187          6
irq275: igb0:que 6               2290531          5
irq276: igb0:que 7               2384370          5
irq277: igb0:link                      2          0
irq287: igb2:que 0               1465051          3
irq288: igb2:que 1                856381          1
irq289: igb2:que 2                809318          1
irq290: igb2:que 3                897154          1
irq291: igb2:que 4                875755          1
irq292: igb2:que 5              35866117         78
irq293: igb2:que 6                846517          1
irq294: igb2:que 7                857979          1
irq295: igb2:link                      2          0
irq296: igb3:que 0                535212          1
irq297: igb3:que 1                454359          1
irq298: igb3:que 2                454142          1
irq299: igb3:que 3                454623          1
irq300: igb3:que 4                456297          1
irq301: igb3:que 5                455482          1
irq302: igb3:que 6                456128          1
irq303: igb3:que 7                454680          1
irq304: igb3:link                      3          0
irq305: ahci0                         75          0
cpu1:timer                     257233702        566
cpu13:timer                    255603184        562
cpu7:timer                     258492826        569
cpu12:timer                    255819351        563
cpu6:timer                     258493465        569
cpu15:timer                    254694003        560
cpu3:timer                     258171320        568
cpu22:timer                    256506877        564
cpu5:timer                     253401435        558
cpu16:timer                    255412360        562
cpu11:timer                    257318013        566
cpu20:timer                    253648060        558
cpu2:timer                     257864543        567
cpu17:timer                    261828899        576
cpu9:timer                     257497326        567
cpu18:timer                    258451190        569
cpu8:timer                     257784504        567
cpu14:timer                    254923723        561
cpu10:timer                    257265498        566
cpu19:timer                    258775946        569
cpu4:timer                     256368658        564
cpu23:timer                    255050534        561
cpu21:timer                    257663842        567
Total                         6225260206      13710

Please note igb2 on the client side is the dedicated link for NFSv4.

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

On 6 January 2015 at 04:17, "Rick Macklem" wrote:
> Loic Blot wrote:
>
>> Hi Rick,
>> nfsstat -e -s doesn't show useful data on the server.
>
> Well, as far as I know, it returns valid information.
> (See below.)
>
>> Server Info:
>>  Getattr  Setattr   Lookup Readlink     Read    Write   Create   Remove
>> 26935254    16911  5755728      302  2334920  3673866        0   328332
>>   Rename     Link  Symlink    Mkdir    Rmdir  Readdir RdirPlus   Access
>>    77980       28        0        0        3     8900        3  1806052
>>    Mknod   Fsstat   Fsinfo PathConf   Commit  LookupP  SetClId SetClIdCf
>>        1     1095        0        0   614377     8172        8        8
>>     Open OpenAttr OpenDwnGr OpenCfrm DelePurge DeleRet    GetFH     Lock
>>  1595299        0    44145     1495        0        0  5197490   635015
>>    LockT    LockU    Close   Verify  NVerify    PutFH PutPubFH PutRootFH
>>        0   614919  1270938        0        0 22688676        0        5
>>    Renew RestoreFH  SaveFH  Secinfo RelLckOwn V4Create
>>    42104   197606  275820        0       143     4578
>> Server:
>> Retfailed   Faults  Clients
>>         0        0        6
>> OpenOwner    Opens LockOwner    Locks   Delegs
>>     32335   145448       204      181        0
>
> Well, 145448 Opens is a lot of open files. Each of these uses
> a kernel malloc'd data structure that is linked into multiple
> linked lists.
>
> The question is... why aren't these Opens being closed?
> Since FreeBSD does I/O on an mmap'd file after closing it,
> the FreeBSD NFSv4 client is forced to delay doing Close RPCs
> until the vnode is VOP_INACTIVE()/VOP_RECLAIM()'d. (The
> VOP_RECLAIM() case is needed, since VOP_INACTIVE() isn't
> guaranteed to be called.)
>
> Since there were about 1.5 million Opens and 1.27 million
> Closes, it does appear that Opens are being closed.
> Still, I'm not sure I would have imagined 1.5 million file Opens
> in a few days. My guess is this is the bottleneck.
>
> I'd suggest that you do:
> # nfsstat -e -c
> on each of the NFSv4 clients and see how many Opens/client there are.
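A quick way to compare clients is to pull the Opens count out of each client's `nfsstat -e -c` output. A minimal sketch, run here against a hypothetical captured sample (the OpenOwner/Opens column layout is assumed to match the server-side table above; live output will differ per client):

```shell
# Hypothetical tail of `nfsstat -e -c` captured on one client.
sample='OpenOwner    Opens LockOwner    Locks   Delegs
    32335   145448       204      181        0'

# Print the Opens column: the 2nd field of the line after the header.
opens=$(printf '%s\n' "$sample" | awk 'hdr { print $2; exit } /OpenOwner/ { hdr = 1 }')
echo "Opens: $opens"
```

On a live client the same extraction could be repeated in a loop, e.g. `while sleep 60; do nfsstat -e -c | grep -A1 OpenOwner; done`, to watch whether the count keeps growing.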
> I vaguely remember an upper limit in the client,
> but can't remember what it is set to.
> --> I suspect the client Open/Lock limit needs to be increased.
> (I can't remember if the server also has a limit, but I
> think it does.)
> Then the size of the hash tables used to search the Opens
> may also need to be increased a lot.
>
> Also, I'd suggest you take a look at whatever apps are
> running on the client(s) and try to figure out why they
> are opening so many files.
>
> My guess is that the client(s) are getting bogged down by all
> these Opens.
>
>> Server Cache Stats:
>> Inprog     Idem Non-idem    Misses CacheSize  TCPPeak
>>      0        0        1  15082947        60    16522
>>
>> Only Getattr and Lookup increase, and only by +2 to +5
>> every 4-5 seconds.
>>
>> Now, on the client, if I take the stacks of four processes I get:
>>
>> PID   TID    COMM TDNAME KSTACK
>> 63170 102547 mv   -      mi_switch+0xe1
>> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
>> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
>> vn_open_cred+0x21d kern_openat+0x26f amd64_syscall+0x351
>> Xfast_syscall+0xfb
>>
>> Another mv:
>> 63140 101738 mv   -      mi_switch+0xe1
>> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
>> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
>> kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351
>> Xfast_syscall+0xfb
>>
>> 62070 102170 sendmail -  mi_switch+0xe1
>> sleepq_timedwait+0x3a _sleep+0x26e clnt_vc_call+0x666
>> clnt_reconnect_call+0x4fa newnfs_request+0xa8c nfscl_request+0x72
>> nfsrpc_lookup+0x1fb nfs_lookup+0x508 VOP_LOOKUP_APV+0xa1
>> lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30
>> amd64_syscall+0x351 Xfast_syscall+0xfb
>>
>> 63200 100930 mv   -      mi_switch+0xe1
>> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
>> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
>> kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351
>> Xfast_syscall+0xfb
>
> The above simply says that thread 102170 is waiting for a Lookup
> reply from the server and the other 3 are waiting for the mutex
> that protects the state structures in the client. (I suspect
> some other thread in the client is wading through the Open list,
> if a single client has a lot of these 145K Opens.)
>
>> When the client is in this state, the server is doing nothing
>> special (procstat -kk):
>>
>> PID TID    COMM TDNAME        KSTACK
>> 895 100538 nfsd nfsd: master  mi_switch+0xe1
>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de
>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>> amd64_syscall+0x351 Xfast_syscall+0xfb
>> 895 100568 nfsd nfsd: service mi_switch+0xe1
>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>> fork_trampoline+0xe
>> [... TIDs 100569-100643: dozens more "nfsd: service" threads, all
>> idle in the identical stack: mi_switch / sleepq_catch_signals /
>> sleepq_wait_sig / _cv_wait_sig / svc_run_internal ...]
>> [... TIDs 100644-100801: the remaining "nfsd: service" threads,
>> again all idle in the same sleepq_wait_sig/svc_run_internal
>> stack ...]
>>
>> I really think it's a client-side problem, maybe a lookup problem.
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> On 5 January 2015 at 14:35, "Rick Macklem" <rmacklem@uoguelph.ca> wrote:
>>> Loic Blot wrote:
>>>
>>>> Hi,
>>>> Happy new year, Rick and @freebsd-fs.
>>>>
>>>> After some days, I looked at my NFSv4.1 mount. At server start it
>>>> was calm, but after 4 days, here is the top stat...
>>>>
>>>> CPU: 0.0% user, 0.0% nice, 100% system, 0.0% interrupt, 0.0% idle
>>>>
>>>> I definitely think it's a problem on the client side. What can I
>>>> look at in the running kernel to resolve this issue?
>>>
>>> Well, I'd start with:
>>> # nfsstat -e -s
>>> - run repeatedly on the server (once every N seconds in a loop).
>>> Then look at the output, comparing the counts, and see which RPCs
>>> are being performed by the client(s). You are looking for which
>>> RPCs are being done a lot. (If one RPC is almost 100% of the load,
>>> then it might be a client/caching issue for whatever that RPC is
>>> doing.)
>>>
>>> Also look at the Open/Lock counts near the end of the output.
>>> If the # of Opens/Locks is large, it may be possible to reduce the
>>> CPU overhead by using larger hash tables.
>>>
>>> Then you need to profile the server kernel to see where the CPU
>>> is being used.
>>> Hopefully someone else can fill you in on how to do that, because
>>> I'll admit I don't know how to.
>>> Basically you are looking to see if the CPU is being used in
>>> the NFS server code or ZFS.
>>>
>>> Good luck with it, rick
>>>
>>>> Regards,
>>>>
>>>> Loïc Blot,
>>>> UNIX Systems, Network and Security Engineer
>>>> http://www.unix-experience.fr
>>>>
>>>> On 30 December 2014 at 16:16, "Loïc Blot" wrote:
>>>>> Hi Rick,
>>>>> I upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFSv4.1
>>>>> (mount options:
>>>>> rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1)
>>>>>
>>>>> Performance is quite stable but it's slow. Not as slow as before,
>>>>> but slow... Services were launched, but no clients are using
>>>>> them, and system CPU % was 10-50%.
>>>>>
>>>>> I don't see anything on the NFSv4.1 server; it's perfectly stable
>>>>> and functional.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Loïc Blot,
>>>>> UNIX Systems, Network and Security Engineer
>>>>> http://www.unix-experience.fr
>>>>>
>>>>> On 23 December 2014 at 00:20, "Rick Macklem" wrote:
>>>>>
>>>>>> Loic Blot wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> To clarify because of our exchanges. Here are the current
>>>>>>> sysctl options for the server:
>>>>>>>
>>>>>>> vfs.nfsd.enable_nobodycheck=0
>>>>>>> vfs.nfsd.enable_nogroupcheck=0
>>>>>>>
>>>>>>> vfs.nfsd.maxthreads=200
>>>>>>> vfs.nfsd.tcphighwater=10000
>>>>>>> vfs.nfsd.tcpcachetimeo=300
>>>>>>> vfs.nfsd.server_min_nfsvers=4
>>>>>>>
>>>>>>> kern.maxvnodes=10000000
>>>>>>> kern.ipc.maxsockbuf=4194304
>>>>>>> net.inet.tcp.sendbuf_max=4194304
>>>>>>> net.inet.tcp.recvbuf_max=4194304
>>>>>>>
>>>>>>> vfs.lookup_shared=0
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> On 22 December 2014 at 09:42, "Loïc Blot" wrote:
>>>>>>>
>>>>>>> Hi Rick,
>>>>>>> my 5 jails ran this weekend and now I have some stats on this
>>>>>>> Monday.
>>>>>>>
>>>>>>> Fortunately the deadlock was fixed, but not everything is
>>>>>>> good :(
>>>>>>>
>>>>>>> On the NFSv4 server (FreeBSD 10.1) the system uses 35% CPU.
>>>>>>>
>>>>>>> As I can see, this is because of nfsd:
>>>>>>>
>>>>>>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H
>>>>>>> 273.68% nfsd: server (nfsd)
>>>>>>>
>>>>>>> If I look at dmesg I see:
>>>>>>> nfsd server cache flooded, try increasing vfs.nfsd.tcphighwater
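(Editor's note: tunables like `vfs.nfsd.tcphighwater` can be set at runtime with sysctl(8) and made persistent across reboots in /etc/sysctl.conf. A minimal sketch of such a fragment, using only the values quoted in this thread rather than recommended settings:

```
# /etc/sysctl.conf (NFS server) - values as discussed in this thread
vfs.nfsd.tcphighwater=15000    # raise further if "cache flooded" persists
vfs.nfsd.tcpcachetimeo=300
vfs.nfsd.maxthreads=200
```

As the reply below explains, mounting with NFSv4.1 sessions avoids the DRC entirely and makes this tuning largely unnecessary.)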

>>>>>> Well, you have a couple of choices:
>>>>>> 1 - Use NFSv4.1 (add "minorversion=1" to your mount options).
>>>>>> (NFSv4.1 avoids use of the DRC and instead uses something
>>>>>> called sessions. See below.)
>>>>>>
>>>>>> OR
>>>>>>
>>>>>>> vfs.nfsd.tcphighwater was set to 10000, I increased it to 15000
>>>>>>
>>>>>> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see
>>>>>> "nfs server cache flooded" messages. (I think Garrett Wollman
>>>>>> uses 100000.) (You may still see quite a bit of CPU overhead.)
>>>>>>
>>>>>> OR
>>>>>>
>>>>>> 3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets rid
>>>>>> of the CPU overhead). However, there is a risk of data corruption
>>>>>> if you have a client->server network partition of a moderate
>>>>>> duration, because a non-idempotent RPC may get redone when
>>>>>> the client times out waiting for a reply. If a non-idempotent
>>>>>> RPC gets done twice on the server, data corruption can happen.
>>>>>> (The DRC provides improved correctness, but does add overhead.)
>>>>>>
>>>>>> If #1 works for you, it is the preferred solution, since sessions
>>>>>> in NFSv4.1 solve the correctness problem in a good, space-bound
>>>>>> way. A session basically has N (usually 32 or 64) slots and only
>>>>>> allows one outstanding RPC/slot.
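Expressed as configuration, the three choices above might look like the following sketch (values illustrative; option 1 belongs in the client's fstab or mount command, options 2 and 3 are server-side sysctls):

```
# Option 1: NFSv4.1 sessions (client-side fstab entry)
#   server:/export  /mnt  nfs  rw,tcp,nfsv4,minorversion=1  0  0
# Option 2: raise the DRC high-water mark (on the server)
#   sysctl vfs.nfsd.tcphighwater=100000
# Option 3: disable the TCP DRC (server; note the corruption caveat above)
#   sysctl vfs.nfsd.cachetcp=0
```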
As such, a session can cache the
>>>>>> previous reply for each slot (32 or 64 of them) and guarantee
>>>>>> "exactly once" RPC semantics.
>>>>>>
>>>>>> rick
>>>>>>
>>>>>>> Here is 'nfsstat -s' output:
>>>>>>>
>>>>>>> Server Info:
>>>>>>> Getattr   Setattr  Lookup   Readlink  Read     Write    Create   Remove
>>>>>>> 12600652  1812     2501097  156       1386423  1983729  123      162067
>>>>>>> Rename    Link     Symlink  Mkdir     Rmdir    Readdir  RdirPlus Access
>>>>>>> 36762     9        0        0         0        3147     0        623524
>>>>>>> Mknod     Fsstat   Fsinfo   PathConf  Commit
>>>>>>> 0         0        0        0         328117
>>>>>>> Server Ret-Failed
>>>>>>> 0
>>>>>>> Server Faults
>>>>>>> 0
>>>>>>> Server Cache Stats:
>>>>>>> Inprog    Idem     Non-idem  Misses
>>>>>>> 0         0        0         12635512
>>>>>>> Server Write Gathering:
>>>>>>> WriteOps  WriteRPC  Opsaved
>>>>>>> 1983729   1983729   0
>>>>>>>
>>>>>>> And here is 'procstat -kk' for nfsd (server):
>>>>>>>
>>>>>>> 918 100528 nfsd nfsd: master mi_switch+0xe1
>>>>>>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
>>>>>>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de
>>>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>>>>> amd64_syscall+0x351 Xfast_syscall+0xfb
>>>>>>> 918 100568 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>>>> fork_trampoline+0xe
>>>>>>> [the remaining service threads, TIDs 100569 through 100662, all
>>>>>>> show this same idle stack]
>>>>>>> ---
>>>>>>>
>>>>>>> Now if we look at the client (FreeBSD 9.3):
>>>>>>>
>>>>>>> We see the system was very busy and handled many, many
>>>>>>> interrupts:
>>>>>>>
>>>>>>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0%
>>>>>>> idle
>>>>>>>
>>>>>>> A look at the process list shows that there are many sendmail
>>>>>>> processes in state nfstry:
>>>>>>>
>>>>>>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for
>>>>>>> /var/spool/clientm
>>>>>>>
>>>>>>> Here is 'nfsstat -c' output:
>>>>>>>
>>>>>>> Client Info:
>>>>>>> Rpc Counts:
>>>>>>> Getattr   Setattr  Lookup   Readlink  Read     Write    Create   Remove
>>>>>>> 1051347   1724     2494481  118       903902   1901285  162676   161899
>>>>>>> Rename    Link     Symlink  Mkdir     Rmdir    Readdir  RdirPlus Access
>>>>>>> 36744     2        0        114       40       3131     0        544136
>>>>>>> Mknod     Fsstat   Fsinfo   PathConf  Commit
>>>>>>> 9         0        0        0         245821
>>>>>>> Rpc Info:
>>>>>>> TimedOut  Invalid  X Replies  Retries  Requests
>>>>>>> 0         0        0          0        8356557
>>>>>>> Cache Info:
>>>>>>> Attr Hits  Misses  Lkup Hits  Misses   BioR Hits  Misses  BioW Hits
>>>>>>> Misses
>>>>>>> 108754455  491475  54229224   2437229  46814561   821723  5132123
1871871
>>>>>>> BioRLHits  Misses  BioD Hits  Misses  DirE Hits  Misses  Accs Hits
>>>>>>> Misses
>>>>>>> 144035     118     53736      2753    27813      1       57238839
>>>>>>> 544205
>>>>>>>
>>>>>>> If you need more things, tell me; I'll leave the PoC in this
>>>>>>> state.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> On 21 December 2014 at 01:33, "Rick Macklem" wrote:
>>>>>>>
>>>>>>> Loic Blot wrote:
>>>>>>>
>>>>>>>> Hi Rick,
>>>>>>>> ok, I don't need locallocks; I hadn't understood the option was
>>>>>>>> for that usage, so I removed it.
>>>>>>>> I'll do more tests on Monday.
>>>>>>>> Thanks for the deadlock fix, for other people :)
>>>>>>>
>>>>>>> Good. Please let us know if running with
>>>>>>> vfs.nfsd.enable_locallocks=0
>>>>>>> gets rid of the deadlocks? (I think it fixes the one you saw.)
>>>>>>>
>>>>>>> On the performance side, you might also want to try different
>>>>>>> values of readahead, if the Linux client has such a mount
>>>>>>> option. (With the NFSv4-ZFS sequential vs random I/O heuristic,
>>>>>>> I have no idea what the optimal readahead value would be.)
>>>>>>>
>>>>>>> Good luck with it and please let us know how it goes, rick
>>>>>>> ps: I now have a patch to fix the deadlock when
>>>>>>> vfs.nfsd.enable_locallocks=1
>>>>>>> is set.
I'll post it for anyone who is interested after I put
>>>>>>> it through some testing.
>>>>>>>
>>>>>>> --
>>>>>>> Best regards,
>>>>>>> Loïc BLOT,
>>>>>>> UNIX systems, security and network engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> On Thursday, 18 December 2014 at 19:46 -0500, Rick Macklem
>>>>>>> wrote:
>>>>>>>
>>>>>>> Loic Blot wrote:
>>>>>>>> Hi Rick,
>>>>>>>> I tried to start an LXC container on Debian Squeeze from my
>>>>>>>> FreeBSD ZFS+NFSv4 server and I also got a deadlock on nfsd
>>>>>>>> (vfs.lookup_shared=0). The deadlock occurs each time I launch
>>>>>>>> a Squeeze container, it seems (3 tries, 3 fails).
>>>>>>>
>>>>>>> Well, I'll take a look at this 'procstat -kk', but the only
>>>>>>> thing I've seen posted w.r.t. avoiding deadlocks in ZFS is to
>>>>>>> not use nullfs. (I have no idea if you are using any nullfs
>>>>>>> mounts, but if so, try getting rid of them.)
>>>>>>>
>>>>>>> Here's a high-level post about the ZFS and vnode locking
>>>>>>> problem, but there is no patch available, as far as I know.
>>>>>>>
>>>>>>> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
>>>>>>>
>>>>>>> rick
>>>>>>>
>>>>>>> 921 - D 0:00.02 nfsd: server (nfsd)
>>>>>>>
>>>>>>> Here is the procstat -kk:
>>>>>>>
>>>>>>> PID TID COMM TDNAME KSTACK
>>>>>>> 921 100538 nfsd nfsd: master mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
>>>>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>>>>>>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
>>>>>>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>>>>> 921 100572 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_catch_signals+0xab
sleepq_wait_sig+0xf
>>>>>>> _cv_wait_sig+0x16a
>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>>>> fork_trampoline+0xe
>>>>>>> [the remaining service threads, TIDs 100573 through 100596, all
>>>>>>> show this same idle stack; the quoted output is truncated here]
svc_thread_start+0xb fork_exit+0x9a=0A>>>>>= >> fork_trampoline+0xe=0A>>>>>>> 921 100597 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _c= v_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100598 nfsd nfsd= : service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_= sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> = 921 100599 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_ru= n_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_tramp= oline+0xe=0A>>>>>>> 921 100600 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>= > sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x= 16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100601 nfsd nfsd: service mi= _switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>= >>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_star= t+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100602 n= fsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab slee= pq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0= x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A= >>>>>>> 921 100603 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>= > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fo= rk_trampoline+0xe=0A>>>>>>> 921 100604 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wa= it_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= 
xit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100605 nfsd nfsd: se= rvice mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 = 100606 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampolin= e+0xe=0A>>>>>>> 921 100607 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a= =0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>= >>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100608 nfsd nfsd: service mi_swi= tch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>= > _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100609 nfsd = nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e= svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>= >>> 921 100610 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> sv= c_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_t= rampoline+0xe=0A>>>>>>> 921 100611 nfsd nfsd: service mi_switch+0xe1=0A>>= >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_si= g+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100612 nfsd nfsd: service= mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>>>>>>> 
fork_trampoline+0xe=0A>>>>>>> 921 100= 613 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0= xe=0A>>>>>>> 921 100614 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>= >>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>= >> fork_trampoline+0xe=0A>>>>>>> 921 100615 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _c= v_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100616 nfsd nfsd= : service mi_switch+0xe1=0A>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmslee= p+0x66 nfsv4_lock+0x9b=0A>>>>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0= x21f nfsrvd_lock+0x5b1=0A>>>>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 = svc_run_internal+0xc77=0A>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe=0A>>>>>>> 921 100617 nfsd nfsd: service mi_switch+0xe1=0A= >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_= sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100618 nfsd nfsd: servi= ce mi_switch+0xe1=0A>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 = nfsv4_lock+0x9b=0A>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run= _internal+0xc77=0A>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampo= line+0xe=0A>>>>>>> 921 100619 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>= sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x1= 6a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100620 nfsd nfsd: service mi= _switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>= >>>>> 
_cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_star= t+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100621 n= fsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab slee= pq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0= x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A= >>>>>>> 921 100622 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>= > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fo= rk_trampoline+0xe=0A>>>>>>> 921 100623 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wa= it_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100624 nfsd nfsd: se= rvice mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 = 100625 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampolin= e+0xe=0A>>>>>>> 921 100626 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a= =0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>= >>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100627 nfsd nfsd: service mi_swi= tch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>= > _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100628 nfsd = nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> 
svc_run_internal+0x87e= svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>= >>> 921 100629 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> sv= c_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_t= rampoline+0xe=0A>>>>>>> 921 100630 nfsd nfsd: service mi_switch+0xe1=0A>>= >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_si= g+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100631 nfsd nfsd: service= mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100= 632 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0= xe=0A>>>>>>> 921 100633 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>= >>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>= >> fork_trampoline+0xe=0A>>>>>>> 921 100634 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _c= v_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100635 nfsd nfsd= : service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_= sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> = 921 100636 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_ru= n_internal+0x87e 
svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_tramp= oline+0xe=0A>>>>>>> 921 100637 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>= > sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x= 16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100638 nfsd nfsd: service mi= _switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>= >>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_star= t+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100639 n= fsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab slee= pq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0= x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A= >>>>>>> 921 100640 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>= > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fo= rk_trampoline+0xe=0A>>>>>>> 921 100641 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wa= it_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100642 nfsd nfsd: se= rvice mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 = 100643 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampolin= e+0xe=0A>>>>>>> 921 100644 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a= =0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a=0A>= >>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100645 nfsd nfsd: service mi_swi= tch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>= > _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100646 nfsd = nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e= svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>= >>> 921 100647 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> sv= c_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_t= rampoline+0xe=0A>>>>>>> 921 100648 nfsd nfsd: service mi_switch+0xe1=0A>>= >>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_si= g+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0= x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100649 nfsd nfsd: service= mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100= 650 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab= sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0= xe=0A>>>>>>> 921 100651 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>= >>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>= >> fork_trampoline+0xe=0A>>>>>>> 921 100652 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _c= v_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a=0A>>>>>>> 
fork_trampoline+0xe=0A>>>>>>> 921 100653 nfsd nfsd= : service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_= sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> = 921 100654 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_ru= n_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_tramp= oline+0xe=0A>>>>>>> 921 100655 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>= > sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x= 16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100656 nfsd nfsd: service mi= _switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>= >>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_star= t+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100657 n= fsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab slee= pq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0= x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A= >>>>>>> 921 100658 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>= > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fo= rk_trampoline+0xe=0A>>>>>>> 921 100659 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wa= it_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100660 nfsd nfsd: se= rvice mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 = 
100661 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampolin= e+0xe=0A>>>>>>> 921 100662 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a= =0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>= >>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100663 nfsd nfsd: service mi_swi= tch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>= > _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>>>> 921 100664 nfsd = nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> svc_run_internal+0x87e= svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_trampoline+0xe=0A>>>>= >>> 921 100665 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>> sv= c_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>> fork_t= rampoline+0xe=0A>>>>>>> 921 100666 nfsd nfsd: service mi_switch+0xe1=0A>>= >>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>= >>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8=0A>>>>>>> nfsrvd_dorpc+= 0xc76=0A>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>>>> svc= _thread_start+0xb=0A>>>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>>>> = =0A>>>>>>> Regards,=0A>>>>>>> =0A>>>>>>> Lo=C3=AFc Blot,=0A>>>>>>> UNIX S= ystems, Network and Security Engineer=0A>>>>>>> http://www.unix-experienc= e.fr=0A>>>>>>> =0A>>>>>>> 15 d=C3=A9cembre 2014 15:18 "Rick Macklem" a=0A>>>>>>> =C3=A9crit:=0A>>>>>>> =0A>>>>>>> Loic Blot= wrote:=0A>>>>>>> =0A>>>>>>>> For more informations, here is procstat -kk= on nfsd, if you=0A>>>>>>>> need=0A>>>>>>>> more=0A>>>>>>>> hot datas, te= ll 
me.=0A>>>>>>>> =0A>>>>>>>> Regards, PID TID COMM TDNAME KSTACK=0A>>>>>= >>> 918 100529 nfsd nfsd: master mi_switch+0xe1=0A>>>>>>>> sleepq_wait+0x= 3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>>>>> vop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0A>>>>>>>> zfs_fhtovp+0x38d=0A>>>>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>>>> nfssvc_program+= 0x554 svc_run_internal+0xc77 svc_run+0x1de=0A>>>>>>>> nfsrvd_nfsd+0x1ca n= fssvc_nfsd+0x107 sys_nfssvc+0x9c=0A>>>>>>>> amd64_syscall+0x351=0A>>>>>>>= =0A>>>>>>> Well, most of the threads are stuck like this one, waiting fo= r=0A>>>>>>> a=0A>>>>>>> vnode=0A>>>>>>> lock in ZFS. All of them appear t= o be in zfs_fhtovp().=0A>>>>>>> I`m not a ZFS guy, so I can`t help much. = I`ll try changing the=0A>>>>>>> subject line=0A>>>>>>> to include ZFS vno= de lock, so maybe the ZFS guys will take a=0A>>>>>>> look.=0A>>>>>>> =0A>= >>>>>> The only thing I`ve seen suggested is trying:=0A>>>>>>> sysctl vfs= .lookup_shared=3D0=0A>>>>>>> to disable shared vop_lookup()s. Apparently = zfs_lookup()=0A>>>>>>> doesn`t=0A>>>>>>> obey the vnode locking rules for= lookup and rename, according=0A>>>>>>> to=0A>>>>>>> the posting I saw.= =0A>>>>>>> =0A>>>>>>> I`ve added a couple of comments about the other thr= eads below,=0A>>>>>>> but=0A>>>>>>> they are all either waiting for an RP= C request or waiting for=0A>>>>>>> the=0A>>>>>>> threads stuck on the ZFS= vnode lock to complete.=0A>>>>>>> =0A>>>>>>> rick=0A>>>>>>> =0A>>>>>>>> = 918 100564 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>> sleepq_catch_sign= als+0xab sleepq_wait_sig+0xf=0A>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>> fork_t= rampoline+0xe=0A>>>>>>> =0A>>>>>>> Fyi, this thread is just waiting for a= n RPC to arrive. 
(Normal)=0A>>>>>>> =0A>>>>>>>> 918 100565 nfsd nfsd: ser= vice mi_switch+0xe1=0A>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>> svc_run_internal+0x87e svc_t= hread_start+0xb fork_exit+0x9a=0A>>>>>>>> fork_trampoline+0xe=0A>>>>>>>> = 918 100566 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>> sleepq_catch_sign= als+0xab sleepq_wait_sig+0xf=0A>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>> fork_t= rampoline+0xe=0A>>>>>>>> 918 100567 nfsd nfsd: service mi_switch+0xe1=0A>= >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>> _cv_wait= _sig+0x16a=0A>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_ex= it+0x9a=0A>>>>>>>> fork_trampoline+0xe=0A>>>>>>>> 918 100568 nfsd nfsd: s= ervice mi_switch+0xe1=0A>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_si= g+0xf=0A>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>> svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a=0A>>>>>>>> fork_trampoline+0xe=0A>>>>>>>= > 918 100569 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>> sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf=0A>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>> s= vc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>> fork= _trampoline+0xe=0A>>>>>>>> 918 100570 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>> _cv_= wait_sig+0x16a=0A>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb for= k_exit+0x9a=0A>>>>>>>> fork_trampoline+0xe=0A>>>>>>>> 918 100571 nfsd nfs= d: service mi_switch+0xe1=0A>>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsl= eep+0x66 nfsv4_lock+0x9b=0A>>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A>>>>>>>> svc_thread_start+0xb fork_exit+0x9a = fork_trampoline+0xe=0A>>>>>>>> 918 100572 nfsd nfsd: service mi_switch+0x= e1=0A>>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9= b=0A>>>>>>>> nfsrv_setclient+0xbd 
nfsrvd_setclientid+0x3c8=0A>>>>>>>> nfs= rvd_dorpc+0xc76=0A>>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A= >>>>>>>> svc_thread_start+0xb=0A>>>>>>>> fork_exit+0x9a fork_trampoline+0= xe=0A>>>>>>> =0A>>>>>>> This one (and a few others) are waiting for the n= fsv4_lock.=0A>>>>>>> This=0A>>>>>>> happens=0A>>>>>>> because other threa= ds are stuck with RPCs in progress. (ie. The=0A>>>>>>> ones=0A>>>>>>> wai= ting on the vnode lock in zfs_fhtovp().)=0A>>>>>>> For these, the RPC nee= ds to lock out other threads to do the=0A>>>>>>> operation,=0A>>>>>>> so = it waits for the nfsv4_lock() which can exclusively lock the=0A>>>>>>> NF= Sv4=0A>>>>>>> data structures once all other nfsd threads complete their = RPCs=0A>>>>>>> in=0A>>>>>>> progress.=0A>>>>>>> =0A>>>>>>>> 918 100573 nf= sd nfsd: service mi_switch+0xe1=0A>>>>>>>> sleepq_wait+0x3a _sleep+0x287 = nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>>>>> nfsrvd_dorpc+0x316 nfssvc_progr= am+0x554 svc_run_internal+0xc77=0A>>>>>>>> svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe=0A>>>>>>> =0A>>>>>>> Same as above.=0A>>>>>>> = =0A>>>>>>>> 918 100574 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>> sleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>>>>> vop_stdlock+0x3= c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>>>>> zfs_fhtovp+0x38d=0A>>>>>>>>= nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>>>> nfssvc= _program+0x554 svc_run_internal+0xc77=0A>>>>>>>> svc_thread_start+0xb=0A>= >>>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>>>>> 918 100575 nfsd nfs= d: service mi_switch+0xe1=0A>>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __loc= kmgr_args+0x902=0A>>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0= x43=0A>>>>>>>> zfs_fhtovp+0x38d=0A>>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp= +0xc8 nfsrvd_dorpc+0x917=0A>>>>>>>> nfssvc_program+0x554 svc_run_internal= +0xc77=0A>>>>>>>> svc_thread_start+0xb=0A>>>>>>>> fork_exit+0x9a fork_tra= mpoline+0xe=0A>>>>>>>> 918 100576 nfsd nfsd: service mi_switch+0xe1=0A>>>= >>>>> 
sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>>>>> vop_= stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>>>>> zfs_fhtovp+0x38d= =0A>>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>= >>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>>>>> svc_thread_st= art+0xb=0A>>>>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>>>>> 918 1005= 77 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>> sleepq_wait+0x3a sleeplk+= 0x15d __lockmgr_args+0x902=0A>>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab= _vn_lock+0x43=0A>>>>>>>> zfs_fhtovp+0x38d=0A>>>>>>>> nfsvno_fhtovp+0x7c = nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>>>> nfssvc_program+0x554 svc_r= un_internal+0xc77=0A>>>>>>>> svc_thread_start+0xb=0A>>>>>>>> fork_exit+0x= 9a fork_trampoline+0xe=0A>>>>>>>> 918 100578 nfsd nfsd: service mi_switch= +0xe1=0A>>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>= >>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>>>>> zfs_f= htovp+0x38d=0A>>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0= x917=0A>>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A>>>>>>>> sv= c_thread_start+0xb=0A>>>>>>>> fork_exit+0x9a fork_trampoline+0xe=0A>>>>>>= >> 918 100579 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>> sleepq_wait+0x= 3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>>>>> vop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0A>>>>>>>> zfs_fhtovp+0x38d=0A>>>>>>>> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A>>>>>>>> nfssvc_program+= 0x554 svc_run_internal+0xc77=0A>>>>>>>> svc_thread_start+0xb=0A>>>>>>>> f= ork_exit+0x9a fork_trampoline+0xe=0A>>>>>>>> 918 100580 nfsd nfsd: servic= e mi_switch+0xe1=0A>>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args= +0x902=0A>>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>= >>>>> zfs_fhtovp+0x38d=0A>>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfs= rvd_dorpc+0x917=0A>>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77=0A= >>>>>>>> svc_thread_start+0xb=0A>>>>>>>> fork_exit+0x9a 
>>>>>>>> fork_trampoline+0xe
>>>>>>>> 918 100581 nfsd nfsd: service mi_switch+0xe1
>>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>>> zfs_fhtovp+0x38d
>>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>>> svc_thread_start+0xb
>>>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>>>> [threads 100582-100603: identical stacks, trimmed]
>>>>>>>> 918 100604 nfsd nfsd: service mi_switch+0xe1
>>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>>> zfs_fhtovp+0x38d
>>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>>>> nfssvc_program+0x554
>>>>>>>> svc_run_internal+0xc77
>>>>>>>> svc_thread_start+0xb
>>>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>>>> [threads 100605-100607: identical stacks, trimmed]
>>>>>>>
>>>>>>> Lots more waiting for the ZFS vnode lock in zfs_fhtovp().
>>>>>>>
>>>>>>> 918 100608 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
>>>>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>>> 918 100609 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>> zfs_fhtovp+0x38d
>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb
>>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>>> 918 100610 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
>>>>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>>>>>>> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a
>>>>>>> fork_trampoline+0xe
>>>>>>> 918 100611 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>>> [threads 100612-100618: identical stacks, trimmed]
>>>>>>> 918 100619 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>> zfs_fhtovp+0x38d
>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb
>>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>>> [thread 100620: identical stack, trimmed]
>>>>>>> 918 100621 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>>> 918 100622 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>> zfs_fhtovp+0x38d
>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb
>>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>>> 918 100623 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>>> 918 100624 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>> zfs_fhtovp+0x38d
>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb
>>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>>> [threads 100625-100655: identical stacks, trimmed]
>>>>>>> 918 100656 nfsd nfsd: service mi_switch+0xe1
>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>> zfs_fhtovp+0x38d
>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>>> svc_thread_start+0xb
>>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>>> [threads 100657-100658: identical stacks, trimmed]
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> 15 December 2014 13:29, "Loïc Blot" wrote:
>>>>>>>
>>>>>>> Hmmm...
>>>>>>> now I'm experiencing a deadlock.
>>>>>>>
>>>>>>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)
>>>>>>>
>>>>>>> The only way out was to reboot the server, but after rebooting the
>>>>>>> deadlock occurred a second time when I started my jails over NFS.
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> 15 December 2014 10:07, "Loïc Blot" wrote:
>>>>>>>
>>>>>>> Hi Rick,
>>>>>>> after talking with my N+1, NFSv4 is required on
>>>>>>> our infrastructure.
>>>>>>> I tried to upgrade the NFSv4+ZFS server from 9.3 to 10.1;
>>>>>>> I hope this will resolve some issues...
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> 10 December 2014 15:36, "Loïc Blot" wrote:
>>>>>>>
>>>>>>> Hi Rick,
>>>>>>> thanks for your suggestion.
>>>>>>> For my locking bug, rpc.lockd is stuck in the rpcrecv state on the
>>>>>>> server. kill -9 doesn't affect the process; it's blocked (state: Ds).
>>>>>>>
>>>>>>> For the performance:
>>>>>>>
>>>>>>> NFSv3: 60Mbps
>>>>>>> NFSv4: 45Mbps
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> 10 December 2014 13:56, "Rick Macklem" wrote:
>>>>>>>
>>>>>>> Loic Blot wrote:
>>>>>>>
>>>>>>>> Hi Rick,
>>>>>>>> I'm trying NFSv3. Some jails start very well, but now I have an
>>>>>>>> issue with lockd after some minutes:
>>>>>>>>
>>>>>>>> nfs server 10.10.X.8:/jails: lockd not responding
>>>>>>>> nfs server 10.10.X.8:/jails lockd is alive again
>>>>>>>>
>>>>>>>> I looked at mbufs, but there seems to be no problem.
>>>>>>>
>>>>>>> Well, if you need locks to be visible across multiple clients, then
>>>>>>> I'm afraid you are stuck with using NFSv4 and the performance you get
>>>>>>> from it. (There is no way to do file handle affinity for NFSv4 because
>>>>>>> the read and write ops are buried in the compound RPC and not easily
>>>>>>> recognized.)
>>>>>>>
>>>>>>> If the locks don't need to be visible across multiple clients, I'd
>>>>>>> suggest trying the "nolockd" option with nfsv3.
>>>>>>>
>>>>>>>> Here is my rc.conf on the server:
>>>>>>>>
>>>>>>>> nfs_server_enable="YES"
>>>>>>>> nfsv4_server_enable="YES"
>>>>>>>> nfsuserd_enable="YES"
>>>>>>>> nfsd_server_flags="-u -t -n 256"
>>>>>>>> mountd_enable="YES"
>>>>>>>> mountd_flags="-r"
>>>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>>>>>> rpcbind_enable="YES"
>>>>>>>> rpc_lockd_enable="YES"
>>>>>>>> rpc_statd_enable="YES"
>>>>>>>>
>>>>>>>> Here is the client:
>>>>>>>>
>>>>>>>> nfsuserd_enable="YES"
>>>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>>>>>> nfscbd_enable="YES"
>>>>>>>> rpc_lockd_enable="YES"
>>>>>>>> rpc_statd_enable="YES"
>>>>>>>>
>>>>>>>> Have you got an idea?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Loïc Blot,
>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>> http://www.unix-experience.fr
>>>>>>>>
>>>>>>>> 9 December 2014 04:31, "Rick Macklem" wrote:
>>>>>>>>
>>>>>>>>> Loic Blot wrote:
>>>>>>>>>
>>>>>>>>>> Hi Rick,
>>>>>>>>>>
>>>>>>>>>> I waited 3 hours (no lag at jail launch) and then I ran: sysrc
>>>>>>>>>> memcached_flags="-v -m 512"
>>>>>>>>>> The command was very, very slow...
>>>>>>>>>>
>>>>>>>>>> Here is a dd over NFS:
>>>>>>>>>>
>>>>>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579
>>>>>>>>>> bytes/sec)
>>>>>>>>>
>>>>>>>>> Can you try the same read using an NFSv3 mount?
>>>>>>>>> (If it runs much faster, you have probably been bitten by
>> the ZFS "sequential vs random" read heuristic, which I've been told thinks
>> NFS is doing "random" reads without file handle affinity. File handle
>> affinity is very hard to do for NFSv4, so it isn't done.)

I was actually suggesting that you try the "dd" over nfsv3 to see how the
performance compares with nfsv4. If you do that, please post the comparable
results.

Someday I would like to try and get ZFS's sequential vs random read
heuristic modified, and any info on what difference in performance that
might make for NFS would be useful.

rick

> This is quite slow...
>
> You can find some nfsstat below (the command isn't finished yet):
>
> nfsstat -c -w 1
>
> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> 0 0 0 0 0 0 0 0
> 4 0 0 0 0 0 16 0
> 2 0 0 0 0 0 17 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 4 0 0 0 0 4 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 4 0 0 0 0 0 3 0
> 0 0 0 0 0 0 3 0
> 37 10 0 8 0 0 14 1
> 18 16 0 4 1 2 4 0
> 78 91 0 82 6 12 30 0
> 19 18 0 2 2 4 2 0
> 0 0 0 0 2 0 0 0
> 0 0 0 0 0 0 0 0
> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 1 0 0 0 0 1 0
> 4 6 0 0 6 0 3 0
> 2 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 1 0 0 0 0 0 0 0
> 0 0 0 0 1 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 6 108 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 98 54 0 86 11 0 25 0
> 36 24 0 39 25 0 10 1
> 67 8 0 63 63 0 41 0
> 34 0 0 35 34 0 0 0
> 75 0 0 75 77 0 0 0
> 34 0 0 35 35 0 0 0
> 75 0 0 74 76 0 0 0
> 33 0 0 34 33 0 0 0
> 0 0 0 0 5 0 0 0
> 0 0 0 0 0 0 6 0
> 11 0 0 0 0 0 11 0
> 0 0 0 0 0 0 0 0
> 0 17 0 0 0 0 1 0
> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> 4 5 0 0 0 0 12 0
> 2 0 0 0 0 0 26 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 4 0 0 0 0 4 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 4 0 0 0 0 0 2 0
> 2 0 0 0 0 0 24 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 4 0 0 0 0 0 7 0
> 2 1 0 0 0 0 1 0
> 0 0 0 0 2 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 6 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 4 6 0 0 0 0 3 0
> 0 0 0 0 0 0 0 0
> 2 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 4 71 0 0 0 0 0 0
> 0 1 0 0 0 0 0 0
> 2 36 0 0 0 0 1 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 1 0 0 0 0 0 1 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 79 6 0 79 79 0 2 0
> 25 0 0 25 26 0 6 0
> 43 18 0 39 46 0 23 0
> 36 0 0 36 36 0 31 0
> 68 1 0 66 68 0 0 0
> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> 36 0 0 36 36 0 0 0
> 48 0 0 48 49 0 0 0
> 20 0 0 20 20 0 0 0
> 0 0 0 0 0 0 0 0
> 3 14 0 1 0 0 11 0
> 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0
> 0 4 0 0 0 0 4 0
> 0 0 0 0 0 0 0 0
> 4 22 0 0 0 0 16 0
> 2 0 0 0 0 0 23 0
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> On 8 December 2014 at 09:36, "Loïc Blot" wrote:
>> Hi Rick,
>> I stopped the jails this week-end and started them this morning; I'll
>> give you some stats this week.
>>
>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
>>
>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
>>
>> On the server side my disks are on a RAID controller which shows a 512b
>> volume, and write performance is very honest
>> (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps).
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> On 5 December 2014 at 15:14, "Rick Macklem" wrote:
>>> Loic Blot wrote:
>>>> Hi,
>>>> I'm trying to create a virtualisation environment based on jails. The
>>>> jails are stored under a big ZFS pool on a FreeBSD 9.3 host which
>>>> exports an NFSv4 volume. This NFSv4 volume is mounted on a big
>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports, but only 1 was
>>>> used at this time).
>>>>
>>>> The problem is simple: my hypervisor runs 6 jails (using approximately
>>>> 1% CPU, 10GB RAM and less than 1MB of bandwidth) and works fine at
>>>> start, but the system slows down and after 2-3 days becomes unusable.
>>>> When I look at top I see 80-100% system CPU and commands are very,
>>>> very slow.
>>>> Many processes are tagged with nfs_cl*.
>>>
>>> To be honest, I would expect the slowness to be because of slow response
>>> from the NFSv4 server, but if you do:
>>> # ps axHl
>>> on a client when it is slow and post that, it would give us some more
>>> information on where the client-side processes are sitting.
>>> If you also do something like:
>>> # nfsstat -c -w 1
>>> and let it run for a while, that should show you how many RPCs are
>>> being done and which ones.
>>>
>>> # nfsstat -m
>>> will show you what your mount is actually using. The only mount option
>>> I can suggest trying is "rsize=32768,wsize=32768", since some network
>>> environments have difficulties with 64K.
>>>
>>> There are a few things you can try on the NFSv4 server side, if it
>>> appears that the clients are generating a large RPC load:
>>> - disable the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
>>> - if the server is seeing a large write RPC load, then "sync=disabled"
>>>   might help, although it does run a risk of data loss when the server
>>>   crashes.
>>> Then there are a couple of other ZFS-related things (I'm not a ZFS guy,
>>> but these have shown up on the mailing lists):
>>> - make sure your volumes are 4K aligned and ashift=12 (in case a drive
>>>   that uses 4K sectors is pretending to be 512-byte sectored)
>>> - never run over 70-80% full if write performance is an issue
>>> - use a ZIL on an SSD with good write performance
>>>
>>> The only NFSv4 thing I can tell you is that it is known that ZFS's
>>> algorithm for determining sequential vs random I/O fails for NFSv4
>>> during writing, and this can be a performance hit. The only workaround
>>> is to use NFSv3 mounts, since file handle affinity apparently fixes the
>>> problem and this is only done for NFSv3.
>>>
>>> rick
>>>
>>>> I saw that there are TSO issues with igb, so I tried to disable it
>>>> with sysctl, but that didn't solve the situation.
>>>>
>>>> Does someone have ideas? I can give you more information if you need.
>>>>
>>>> Thanks in advance.
>>>> Regards,
>>>>
>>>> Loïc Blot,
>>>> UNIX Systems, Network and Security Engineer
>>>> http://www.unix-experience.fr
_______________________________________________
freebsd-fs@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
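[Editor's note] Rick's server-side checklist above boils down to a handful of commands. A hedged sketch follows; "tank/jails" is a placeholder pool/dataset name, and the privileged commands are shown as comments since they must be run as root on the real NFS server:

```shell
# Hedged sketch of the server-side knobs suggested above ("tank/jails"
# is a placeholder dataset -- adjust before running as root):
#
#   sysctl vfs.nfsd.cachetcp=0        # disable the DRC cache for TCP
#   zfs set sync=disabled tank/jails  # faster writes, data-loss risk on crash
#   zdb -C tank | grep ashift         # check pool vdev alignment
#
# ashift is log2 of the sector size, so ashift=12 means 4096-byte
# (4K) alignment, matching drives with 4K physical sectors:
ashift=12
echo $((1 << ashift))
```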
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 10:40:08 2015
Message-ID: <54ABBB8D.6090801@kearsley.me>
Date: Tue, 06 Jan 2015 10:40:13 +0000
From: Richard Kearsley
To: freebsd-fs@freebsd.org
Subject: zfs directory size accounting
Hello

In my system I am using a subdirectory structure to store millions of
files, like 0-f/0-f/0-f/0-f/0-f/file.ext

I have recently discovered that the size of the directory itself (not
including the files within it) varies for some reason. I was under the
impression that it was a fixed size. For example:

0/0/0/0 # ls -als
total 564
  9 drwxr-xr-x 16 root wheel 16 Jan  3 13:57 .
  9 drwxr-xr-x 18 root wheel 18 Feb 18  2014 ..
 25 drwxr-xr-x  2 root wheel  6 Jan  5 17:52 2
 25 drwxr-xr-x  2 root wheel  8 Jan  5 17:52 3
 25 drwxr-xr-x  2 root wheel  4 Jan  5 17:31 4
 25 drwxr-xr-x  2 root wheel  2 Jan  3 04:36 5
 25 drwxr-xr-x  2 root wheel  6 Dec 29 21:47 6
 25 drwxr-xr-x  2 root wheel  6 Jan  3 18:39 7
 25 drwxr-xr-x  2 root wheel 17 Jan  4 15:43 8
 93 drwxr-xr-x  2 root wheel  4 Jan  4 03:57 9
 25 drwxr-xr-x  2 root wheel  8 Jan  3 22:01 a
 93 drwxr-xr-x  2 root wheel  5 Jan  3 20:18 b
 25 drwxr-xr-x  2 root wheel  8 Jan  5 18:04 c
 93 drwxr-xr-x  2 root wheel  9 Jan  5 16:25 d
 25 drwxr-xr-x  2 root wheel  6 Jan  4 08:02 e
 25 drwxr-xr-x  2 root wheel 10 Jan  4 18:56 f

I believe the block count (first number) in particular makes a big
difference to filesystem space usage, which is potentially a problem
for me:

0/0/0/0 # stat b
1598987375 6362113 drwxr-xr-x 2 root wheel 4294967295 5 "Oct  5 01:07:43 2014" "Jan  3 20:18:04 2015" "Jan  3 20:18:04 2015" "Oct  5 01:07:43 2014" 16384 185 0 b

Directory 'b' is using 185 512-byte blocks - 94720 bytes!... why?
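[Editor's note] The byte figure follows directly from stat(1): the block count (st_blocks) is reported in 512-byte units. A quick sketch of the arithmetic:

```shell
# stat reported 185 blocks for directory 'b'; st_blocks counts
# 512-byte units, so the directory's on-disk footprint is:
blocks=185
echo $((blocks * 512))
```

One plausible explanation (hedged, not confirmed in this thread): ZFS stores directories as ZAP objects, and a directory that has held enough entries to cross the micro-ZAP threshold is converted to a multi-block "fat" ZAP, which is not shrunk back when entries are removed.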
Many thanks
Richard

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 14:58:40 2015
From: Fervent Dissent
Date: Tue, 6 Jan 2015 22:58:19 +0800
To: freebsd-fs@freebsd.org
Subject: zfs corruption after controller failure
I have an external disk that was on a cheap USB controller, and that
controller died. At first, zpool status showed a fault and I could still
find the pool. After a new cable (not USB) I couldn't import it, no matter
the combination of flags and switches. Nothing I found helped, but I
managed to get it to give me some information back. I found some old posts
on "labelfix" and manually moving labels, but I'm not willing to do that
yet. It is on an eSATA connection now, so ada1.

Thanks for the help,
Jason

# uname -v
FreeBSD 10.1-PRERELEASE #3 r273269: Sun Oct 19 20:13:23 CST 2014 root@satellite:/usr/obj/usr/src/sys/TWILIGHT

# ls /dev/diskid/
DISK-Z050B3VV

From dmesg:
ada1 at ahcich3 bus 0 scbus3 target 0 lun 0
ada1: ATA-8 SATA 2.x device
ada1: Serial Number Z050B3VV
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad10

# gpart show ada1
=>        63  1953525105  ada1  MBR  (932G)
          63  1953525105        - free -  (932G)

# gpart list ada1
Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525167
first: 63
entries: 4
scheme: MBR
Consumers:
1. Name: ada1
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

# zdb -lu /dev/ada1
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
Uberblock[0]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697688
        guid_sum = 13899651479099793827
        timestamp = 1420437689 UTC = Mon Jan  5 14:01:29 2015
Uberblock[4]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697657
        guid_sum = 13899651479099793827
        timestamp = 1420437534 UTC = Mon Jan  5 13:58:54 2015
Uberblock[8]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697658
        guid_sum = 13899651479099793827
        timestamp = 1420437539 UTC = Mon Jan  5 13:58:59 2015
Uberblock[12]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697659
        guid_sum = 13899651479099793827
        timestamp = 1420437544 UTC = Mon Jan  5 13:59:04 2015
Uberblock[16]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697660
        guid_sum = 13899651479099793827
        timestamp = 1420437549 UTC = Mon Jan  5 13:59:09 2015
Uberblock[20]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697661
        guid_sum = 13899651479099793827
        timestamp = 1420437554 UTC = Mon Jan  5 13:59:14 2015
Uberblock[24]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697662
        guid_sum = 13899651479099793827
        timestamp = 1420437559 UTC = Mon Jan  5 13:59:19 2015
Uberblock[28]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697663
        guid_sum = 13899651479099793827
        timestamp = 1420437564 UTC = Mon Jan  5 13:59:24 2015
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
Uberblock[0]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697688
        guid_sum = 13899651479099793827
        timestamp = 1420437689 UTC = Mon Jan  5 14:01:29 2015
Uberblock[4]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697657
        guid_sum = 13899651479099793827
        timestamp = 1420437534 UTC = Mon Jan  5 13:58:54 2015
Uberblock[8]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697658
        guid_sum = 13899651479099793827
        timestamp = 1420437539 UTC = Mon Jan  5 13:58:59 2015
Uberblock[12]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697659
        guid_sum = 13899651479099793827
        timestamp = 1420437544 UTC = Mon Jan  5 13:59:04 2015
Uberblock[16]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697660
        guid_sum = 13899651479099793827
        timestamp = 1420437549 UTC = Mon Jan  5 13:59:09 2015
Uberblock[20]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697661
        guid_sum = 13899651479099793827
        timestamp = 1420437554 UTC = Mon Jan  5 13:59:14 2015
Uberblock[24]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697662
        guid_sum = 13899651479099793827
        timestamp = 1420437559 UTC = Mon Jan  5 13:59:19 2015
Uberblock[28]
        magic = 0000000000bab10c
        version = 5000
        txg = 3697663
        guid_sum = 13899651479099793827
        timestamp = 1420437564 UTC = Mon Jan  5 13:59:24 2015

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 15:01:43 2015
Message-Id: <20FB5F2C-65D2-4F33-8D45-DD7FC34A5E2E@ultra-secure.de>
Date: Tue, 6 Jan 2015 16:01:35 +0100
From: Rainer Duffner
To: Fervent Dissent
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs corruption after controller failure

> On 06.01.2015 at 15:58, Fervent Dissent wrote:
>
> I have an external disk that was on a cheap USB controller, and that
> controller died.

Maybe I'm mistaken, but I thought that if your pool only has a single disk
and that disk/pool shows errors or becomes unreadable/corrupted/whatever,
you cannot recover it.

Same for not using ECC memory...

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 15:06:26 2015
From: Fervent Dissent
Date: Tue, 6 Jan 2015 23:06:05 +0800
To: Rainer Duffner
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs corruption after controller failure
I've had power loss before, and a previous bad controller caused multiple
problems. The drive would disappear or go offline; I would clear it and go
on, no problem. This is the first failure that I have not been able to
recover from.

On Tue, Jan 6, 2015 at 11:01 PM, Rainer Duffner wrote:
>> On 06.01.2015 at 15:58, Fervent Dissent wrote:
>>
>> I have an external disk that was on a cheap USB controller, and that
>> controller died.
>
> Maybe I'm mistaken, but I thought that if your pool only has a single
> disk and that disk/pool shows errors or becomes
> unreadable/corrupted/whatever, you cannot recover it.
>
> Same for not using ECC memory...

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 15:09:20 2015
Message-ID: <54ABFA82.2070401@internetx.com>
Date: Tue, 06 Jan 2015 16:08:50 +0100
From: InterNetX - Juergen Gotteswinter
Reply-To: jg@internetx.com
To: Fervent Dissent, Rainer Duffner
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs corruption after controller failure

> external usb drive (for sure a cheap consumer desktop one)
> cheap (maybe even buggy) usb controller
> non-mirrored / raidz
> probably cheap hardware at all (no ECC, like Rainer mentioned, for example)

What do you expect? ZFS is very tolerant, but some basics should be taken
care of...

But even with a single disk, it should be possible to recover / import the
pool, at least read-only.

On 06.01.2015 at 16:06, Fervent Dissent wrote:
> I've had power loss before, and a previous bad controller caused multiple
> problems. The drive would disappear or go offline; I would clear it and
> go on, no problem. This is the first failure that I have not been able to
> recover from.
>
> On Tue, Jan 6, 2015 at 11:01 PM, Rainer Duffner wrote:
>>> On 06.01.2015 at 15:58, Fervent Dissent wrote:
>>>
>>> I have an external disk that was on a cheap USB controller, and that
>>> controller died.
>>
>> Maybe I'm mistaken, but I thought that if your pool only has a single
>> disk and that disk/pool shows errors or becomes
>> unreadable/corrupted/whatever, you cannot recover it.
>>
>> Same for not using ECC memory...

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 15:11:36 2015
From: Fervent Dissent
Date: Tue, 6 Jan 2015 23:11:15 +0800
To: jg@internetx.com
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs corruption after controller failure

I tried these, and some variations of them, and they gave the error that
the pool does not exist:

zpool import -N -o readonly=on -f -R /pool
zpool import -N -o readonly=on -f -R /pool -F -T

On Tue, Jan 6, 2015 at 11:08 PM, InterNetX - Juergen Gotteswinter wrote:
>> external usb drive (for sure a cheap consumer desktop one)
>> cheap (maybe even buggy) usb controller
>> non-mirrored / raidz
>> probably cheap hardware at all (no ECC, like Rainer mentioned, for example)
>
> What do you expect? ZFS is very tolerant, but some basics should be taken
> care of...
>
> But even with a single disk, it should be possible to recover / import
> the pool, at least read-only.
>
> On 06.01.2015 at 16:06, Fervent Dissent wrote:
>> I've had power loss before, and a previous bad controller caused
>> multiple problems. The drive would disappear or go offline; I would
>> clear it and go on, no problem. This is the first failure that I have
>> not been able to recover from.
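[Editor's note] A commonly suggested escalation path for a pool that no longer appears under its cached device path (here the disk moved from a USB controller to eSATA, so ada1) is to make zpool rescan the device nodes, then try the rewind options, ideally against a dd image of the disk. A hedged sketch; "tank" is a placeholder pool name, and the privileged commands are shown as comments:

```shell
# Hedged sketch -- image the disk first, then experiment on the copy:
#   dd if=/dev/ada1 of=/backup/ada1.img bs=1m conv=noerror,sync
#   zpool import -d /dev                           # rescan all device nodes
#   zpool import -d /dev -o readonly=on -f tank
#   zpool import -d /dev -o readonly=on -fF tank   # -F: rewind a few txgs
# The zdb output earlier in the thread shows the newest uberblock at
# txg 3697688 and older ones back to txg 3697657, so -T <txg> can also
# target an explicit older txg. Rewind headroom in the listed uberblocks:
newest=3697688
oldest=3697657
echo $((newest - oldest))
```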
>> On Tue, Jan 6, 2015 at 11:01 PM, Rainer Duffner wrote:
>>>> On 06.01.2015 at 15:58, Fervent Dissent wrote:
>>>>
>>>> I have an external disk that was on a cheap USB controller, and that
>>>> controller died.
>>>
>>> Maybe I'm mistaken, but I thought that if your pool only has a single
>>> disk and that disk/pool shows errors or becomes
>>> unreadable/corrupted/whatever, you cannot recover it.
>>>
>>> Same for not using ECC memory...

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 6 15:15:02 2015
mx1.internetx.com (Postfix) with ESMTPSA id 472474C4C7D4; Tue, 6 Jan 2015 16:05:18 +0100 (CET) Message-ID: <54ABF99B.2000301@internetx.com> Date: Tue, 06 Jan 2015 16:04:59 +0100 From: InterNetX - Juergen Gotteswinter Reply-To: juergen.gotteswinter@internetx.com Organization: InterNetX GmbH User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: Fervent Dissent Subject: Re: zfs corruption after controller failure References: <20FB5F2C-65D2-4F33-8D45-DD7FC34A5E2E@ultra-secure.de> In-Reply-To: <20FB5F2C-65D2-4F33-8D45-DD7FC34A5E2E@ultra-secure.de> Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 8bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Jan 2015 15:15:02 -0000 > external usb disk > cheap usb controller > non-mirrored / raidz > probably cheap hardware at all (no ecc, like rainer mentioned for example) what do you expect. zfs is very tolerant, but some basics should be taken care of... Am 06.01.2015 um 16:01 schrieb Rainer Duffner: > >> Am 06.01.2015 um 15:58 schrieb Fervent Dissent : >> >> I have a external disk that was on a cheap usb controller, that controller >> died. > > > Maybe I’m mistaken, but I though that if your pool only has a single disk and that disk/pool shows errors or becomes unreadable/corrupted/whatever you cannot recover it. 
>
> Same for not using ECC memory…

From: InterNetX - Juergen Gotteswinter <jg@internetx.com>
Date: Tue, 06 Jan 2015 16:29:21 +0100
Subject: Re: zfs corruption after controller failure
To: Fervent Dissent
Cc: freebsd-fs@freebsd.org

-T is undocumented and should therefore be considered unstable, or even
something that can easily break things. Very likely there's a reason why
it isn't documented.

tl;dr: it's something I wouldn't touch...

What output do you get from a simple zpool import? Did you use
"readonly=on" for everything you tried until now?

If everything seems to be broken, you could give the ZFS forensic
scripts a try... it's already broken, it can't get worse anymore.

On 06.01.2015 at 16:11, Fervent Dissent wrote:
> I tried these, and some variations of them, and they gave the error that the
> pool does not exist.
> zpool import -N -o readonly=on -f -R /pool
> zpool import -N -o readonly=on -f -R /pool -F -T
> [...]

From: Fervent Dissent
Date: Tue, 6 Jan 2015 23:35:16 +0800
Subject: Re: zfs corruption after controller failure
To: jg@internetx.com
Cc: freebsd-fs@freebsd.org

# zpool import -d /dev/diskid/
#
no error, no import

# zpool import -d /dev/diskid/ -N -o readonly=on -f -R /tb -F -T 3697688 tb
cannot import 'tb': no such pool available

tb is the pool name. I'll try those scripts later.

On Tue, Jan 6, 2015 at 11:29 PM, InterNetX - Juergen Gotteswinter <jg@internetx.com> wrote:

> -T is undocumented and should therefore be considered unstable, or even
> something that can easily break things. Very likely there's a reason why
> it isn't documented.
>
> tl;dr: it's something I wouldn't touch...
>
> What output do you get from a simple zpool import? Did you use
> "readonly=on" for everything you tried until now?
>
> If everything seems to be broken, you could give the ZFS forensic
> scripts a try... it's already broken, it can't get worse anymore.
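[Editor's note: the read-only import attempts being traded in this exchange amount to an escalation ladder, from safest to most invasive. A minimal sketch of that ladder follows; the pool name is an assumption, and the rewind options `-F`/`-X` discard recent transactions (with `-X` being as undocumented as the `-T` discussed here), so keeping `readonly=on` throughout is deliberate:]

```shell
# Hypothetical escalation ladder for importing a damaged pool, safest first.
# POOL is an assumed name; run each step only if the previous one fails.
POOL=tb
set -- \
  "zpool import" \
  "zpool import -o readonly=on -N -f $POOL" \
  "zpool import -o readonly=on -N -f -F $POOL" \
  "zpool import -o readonly=on -N -f -F -X $POOL"
i=1
for cmd; do
  # Print the ladder instead of executing it, so nothing destructive runs.
  printf 'step %d: %s\n' "$i" "$cmd"
  i=$((i + 1))
done
```

The bare `zpool import` in step 1 only scans and reports importable pools, which is why Juergen asks for its output before anything else.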
>
> On 06.01.2015 at 16:11, Fervent Dissent wrote:
> > [...]

From: InterNetX - Juergen Gotteswinter <jg@internetx.com>
Date: Tue, 06 Jan 2015 16:38:05 +0100
Subject: Re: zfs corruption after controller failure
To: Fervent Dissent
Cc: freebsd-fs@freebsd.org

Still using -T, why? Did you check the zpool manpage? You won't find it there.

Good luck; I hope you have a backup.

On 06.01.2015 at 16:35, Fervent Dissent wrote:
> # zpool import -d /dev/diskid/
> #
> no error, no import
>
> # zpool import -d /dev/diskid/ -N -o readonly=on -f -R /tb -F -T 3697688 tb
> cannot import 'tb': no such pool available
>
> tb is the pool name. I'll try those scripts later.
>
> On Tue, Jan 6, 2015 at 11:29 PM, InterNetX - Juergen Gotteswinter wrote:
>
> -T is undocumented and should therefore be considered unstable, or even
> something that can easily break things. Very likely there's a reason why
> it isn't documented.
>
> tl;dr: it's something I wouldn't touch...
>
> What output do you get from a simple zpool import? Did you use
> "readonly=on" for everything you tried until now?
>
> If everything seems to be broken, you could give the zfs forensic
> scripts a try... it's already broken, it can't get worse anymore.
>
> On 06.01.2015 at 16:11, Fervent Dissent wrote:
> > I tried these, and some variations of them, and they gave the error that the
> > pool does not exist.
> > zpool import -N -o readonly=on -f -R /pool
> > zpool import -N -o readonly=on -f -R /pool -F -T
> >
> > [...]

From: Daniel Kalchev <daniel@digsys.bg>
Date: Tue, 06 Jan 2015 17:48:39 +0200
Subject: Re: zfs corruption after controller failure
To: freebsd-fs@freebsd.org

On 06.01.15 17:35, Fervent Dissent wrote:
> # zpool import -d /dev/diskid/
> #
> no error, no import

You do not need to use -d here; ZFS will try to look at all possible block
devices and will eventually find the metadata all by itself.

Maybe you had some labeling on the disk? Or some partition (not starting
at block 0) that contained your ZFS pool. For ZFS to see the pool, you
will need to recreate that partition first. Your earlier post shows MBR
partitioning, but no partition? Perhaps you just need to remember what
the partition was?

Daniel

From: Rick Macklem
Date: Tue, 6 Jan 2015 18:28:56 -0500 (EST)
Subject: Re: High Kernel Load with nfsv4
To: Loïc Blot
Cc: freebsd-fs@freebsd.org
Loic Blot wrote:
> Hi Rick,
>
> I saw that some people have issues with igb cards with NFS.
> For example:
> http://freebsd.1045724.n5.nabble.com/NFS-over-LAGG-lacp-poor-performance-td5906349.html
>
> Can my problem be related? I use igb with the default queue number. Here
> are my vmstat -i outputs.

I have no idea. Maybe someone familiar with this will respond?

I do think that the large # of NFSv4 Opens (which are actually a form of lock)
could be a factor. The client and server have to search those lists for a match
for many NFSv4 operations, including all reads/writes.

On the server side, the default hash table sizes are very small. This is in
part because I did testing on 256Mbyte i386 systems, so the values were
safe for such a machine. I'd suggest you increase the following in the
server's kernel.

In sys/fs/nfs/nfs.h:

NFSSTATEHASHSIZE - This one is in every client header, so if you have a large #
                   of clients, you don't want to increase it too much. However,
                   for a fairly large server handling not too many clients,
                   I'd try something like 1000 instead of 10.
                   (I just tried 100 on the small i386 laptop I have handy and
                   it seemed ok for a small test.)
NFSLOCKHASHSIZE -  This one is a single global table, so I'd bump it way up;
                   20000 maybe?

In sys/fs/nfsport.h:

NFSRV_V4STATELIMIT - The comment notes that the default of 500000 seems safe
                     for a 256Mbyte i386, so I'd bump it to something like
                     2000000 for your case.

You will have to rebuild a kernel from sources after editing these values and
boot it on the server.
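[Editor's note: the three suggested changes are plain `#define` edits followed by a kernel rebuild. A sketch of what they look like, demonstrated on a scratch copy of the header rather than a real source tree; the `#define` spellings and starting values below are assumptions taken from this thread, so check your own sys/fs/nfs/nfs.h before editing:]

```shell
# Hypothetical sketch: bump the NFSv4 state hash sizes named above.
# Works on a scratch file; point SRC at a real source tree to apply for real.
SRC=$(mktemp -d)
mkdir -p "$SRC/sys/fs/nfs"
printf '#define NFSSTATEHASHSIZE 10\n#define NFSLOCKHASHSIZE 20\n' \
  > "$SRC/sys/fs/nfs/nfs.h"
# Rewrite the numeric values (1000 and 20000, the values suggested here):
sed -e 's/\(NFSSTATEHASHSIZE[[:space:]]*\)[0-9]*/\11000/' \
    -e 's/\(NFSLOCKHASHSIZE[[:space:]]*\)[0-9]*/\120000/' \
    "$SRC/sys/fs/nfs/nfs.h" > "$SRC/sys/fs/nfs/nfs.h.new"
cat "$SRC/sys/fs/nfs/nfs.h.new"
```

After editing the real headers, the kernel must be rebuilt and the server rebooted, as Rick notes; these are compile-time constants, not sysctls.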
Maybe these should become tunables so building a kernel isn't necessary?

I looked and there isn't much that can be done in the client. At this point,
the open_owners and opens are single lists for a client (a mount point on a
client machine for FreeBSD). If you post what you get for "nfsstat -e -c" on
a typical client in your setup, that would tell me if it is the open_owners
(which I suspect) or opens that will be a long list. (I would have to code a
patch to make either of these a hash table instead of a single linked list.
I should do this. It was on my to-do list, but got forgotten. ;-)

rick

> Server side:
>
> interrupt total rate
> irq1: atkbd0 18 0
> irq20: ehci1 2790134 2
> irq21: ehci0 2547642 2
> cpu0:timer 36299188 35
> irq264: ciss0 6352476 6
> irq265: igb0:que 0 2716692 2
> irq266: igb0:que 1 32205278 31
> irq267: igb0:que 2 38395109 37
> irq268: igb0:que 3 1413468 1
> irq269: igb0:que 4 39207930 38
> irq270: igb0:que 5 1622715 1
> irq271: igb0:que 6 1634676 1
> irq272: igb0:que 7 1190123 1
> irq273: igb0:link 2 0
> cpu1:timer 14074423 13
> cpu8:timer 12204739 11
> cpu9:timer 11384192 11
> cpu3:timer 10461566 10
> cpu4:timer 12785103 12
> cpu6:timer 10739344 10
> cpu5:timer 10978294 10
> cpu7:timer 10599705 10
> cpu2:timer 13998891 13
> cpu10:timer 11602361 11
> cpu11:timer 11568523 11
> Total 296772592 290
>
> And client side:
> interrupt total rate
> irq9: acpi0 4 0
> irq22: ehci1 950519 2
> irq23: ehci0 1865060 4
> cpu0:timer 248128035 546
> irq268: mfi0 406896 0
> irq269: igb0:que 0 2510556 5
> irq270: igb0:que 1 2825336 6
> irq271: igb0:que 2 2092958 4
> irq272: igb0:que 3 1960849 4
> irq273: igb0:que 4 2645369 5
> irq274: igb0:que 5 2735187 6
> irq275: igb0:que 6 2290531 5
> irq276: igb0:que 7 2384370 5
> irq277: igb0:link 2 0
> irq287: igb2:que 0 1465051 3
> irq288: igb2:que 1 856381 1
> irq289: igb2:que 2 809318 1
> irq290: igb2:que 3 897154 1
> irq291: igb2:que 4 875755 1
> irq292: igb2:que 5 35866117 78
> irq293: igb2:que 6 846517 1
> irq294: igb2:que 7 857979 1
> irq295: igb2:link 2 0
> irq296: igb3:que 0 535212 1
> irq297: igb3:que 1 454359 1
> irq298: igb3:que 2 454142 1
> irq299: igb3:que 3 454623 1
> irq300: igb3:que 4 456297 1
> irq301: igb3:que 5 455482 1
> irq302: igb3:que 6 456128 1
> irq303: igb3:que 7 454680 1
> irq304: igb3:link 3 0
> irq305: ahci0 75 0
> cpu1:timer 257233702 566
> cpu13:timer 255603184 562
> cpu7:timer 258492826 569
> cpu12:timer 255819351 563
> cpu6:timer 258493465 569
> cpu15:timer 254694003 560
> cpu3:timer 258171320 568
> cpu22:timer 256506877 564
> cpu5:timer 253401435 558
> cpu16:timer 255412360 562
> cpu11:timer 257318013 566
> cpu20:timer 253648060 558
> cpu2:timer 257864543 567
> cpu17:timer 261828899 576
> cpu9:timer 257497326 567
> cpu18:timer 258451190 569
> cpu8:timer 257784504 567
> cpu14:timer 254923723 561
> cpu10:timer 257265498 566
> cpu19:timer 258775946 569
> cpu4:timer 256368658 564
> cpu23:timer 255050534 561
> cpu21:timer 257663842 567
> Total 6225260206 13710
>
> Please note igb2 on the client side is the dedicated link for NFSv4.
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> On 6 January 2015 at 04:17, "Rick Macklem" wrote:
>
> > Loic Blot wrote:
> >
> >> Hi Rick,
> >> nfsstat -e -s doesn't show useful data on the server.
> >
> > Well, as far as I know, it returns valid information.
> > (See below.)
> >
> >> Server Info:
> >> Getattr Setattr Lookup Readlink Read Write Create Remove
> >> 26935254 16911 5755728 302 2334920 3673866 0 328332
> >> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
> >> 77980 28 0 0 3 8900 3 1806052
> >> Mknod Fsstat Fsinfo PathConf Commit LookupP SetClId SetClIdCf
> >> 1 1095 0 0 614377 8172 8 8
> >> Open OpenAttr OpenDwnGr OpenCfrm DelePurge DeleRet GetFH Lock
> >> 1595299 0 44145 1495 0 0 5197490 635015
> >> LockT LockU Close Verify NVerify PutFH PutPubFH PutRootFH
> >> 0 614919 1270938 0 0 22688676 0 5
> >> Renew RestoreFH SaveFH Secinfo RelLckOwn V4Create
> >> 42104 197606 275820 0 143 4578
> >> Server:
> >> Retfailed Faults Clients
> >> 0 0 6
> >> OpenOwner Opens LockOwner Locks Delegs
> >> 32335 145448 204 181 0
> >
> > Well, 145448 Opens are a lot of Open files. Each of these uses
> > a kernel malloc'd data structure that is linked into multiple
> > linked lists.
> >
> > The question is... why aren't these Opens being closed?
> > Since FreeBSD does I/O on an mmap'd file after closing it,
> > the FreeBSD NFSv4 client is forced to delay doing Close RPCs
> > until the vnode is VOP_INACTIVE()/VOP_RECLAIM()'d. (The
> > VOP_RECLAIM() case is needed, since VOP_INACTIVE() isn't
> > guaranteed to be called.)
> >
> > Since there were about 1.5 million Opens and 1.27 million
> > Closes, it does appear that Opens are being Closed.
> > Now, I'm not sure I would have imagined 1.5 million file Opens
> > in a few days. My guess is this is the bottleneck.
> >
> > I'd suggest that you do:
> > # nfsstat -e -c
> > on each of the NFSv4 clients and see how many Opens/client
> > there are. I vaguely remember an upper limit in the client,
> > but can't remember what it is set to.
> > --> I suspect the client Open/Lock limit needs to be increased.
> > (I can't remember if the server also has a limit, but I
> > think it does.)
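[Editor's note: the Opens figure Rick asks about can be pulled out of the nfsstat output mechanically. A sketch against the server numbers quoted in this thread; the client's `nfsstat -e -c` output is assumed to use the same OpenOwner/Opens header row:]

```shell
# Hypothetical sketch: extract the Opens count from nfsstat -e style output.
# The sample text mirrors the server stats quoted in this thread.
sample='OpenOwner Opens LockOwner Locks Delegs
32335 145448 204 181 0'
# Find the header row, then read the data row that follows it:
opens=$(printf '%s\n' "$sample" | awk '/OpenOwner/ { getline; print $2 }')
echo "Opens: $opens"
```

On a real client you would pipe `nfsstat -e -c` into the same awk filter instead of using the canned sample.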
> > Then the size of the hash tables used to search the Opens
> > may also need to be increased a lot.
> >
> > Also, I'd suggest you take a look at whatever apps are
> > running on the client(s) and try to figure out why they
> > are Opening so many files?
> >
> > My guess is that the client(s) are getting bogged down by all
> > these Opens.
> >
> >> Server Cache Stats:
> >> Inprog Idem Non-idem Misses CacheSize TCPPeak
> >> 0 0 1 15082947 60 16522
> >>
> >> Only GetAttr and Lookup increase, and it's only every 4-5 seconds,
> >> and only +2 to +5 in these values.
> >>
> >> Now on the client, if I take four process stacks I get:
> >>
> >> PID TID COMM TDNAME KSTACK
> >> 63170 102547 mv - mi_switch+0xe1
> >> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
> >> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
> >> vn_open_cred+0x21d kern_openat+0x26f amd64_syscall+0x351
> >> Xfast_syscall+0xfb
> >>
> >> Another mv:
> >> 63140 101738 mv - mi_switch+0xe1
> >> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
> >> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
> >> kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351
> >> Xfast_syscall+0xfb
> >>
> >> 62070 102170 sendmail - mi_switch+0xe1
> >> sleepq_timedwait+0x3a _sleep+0x26e clnt_vc_call+0x666
> >> clnt_reconnect_call+0x4fa newnfs_request+0xa8c nfscl_request+0x72
> >> nfsrpc_lookup+0x1fb nfs_lookup+0x508 VOP_LOOKUP_APV+0xa1
> >> lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30
> >> amd64_syscall+0x351 Xfast_syscall+0xfb
> >>
> >> 63200 100930 mv - mi_switch+0xe1
> >> turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65
> >> nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4
> >> kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351
> >> Xfast_syscall+0xfb
> >
> > The above simply says that thread 102170 (the sendmail one) is waiting
> > for a Lookup reply from the server, and the other 3 are waiting for the
> > mutex that protects the state structures in the client. (I suspect
> > some other thread in the client is wading through the Open list,
> > if a single client has a lot of these 145K Opens.)
> >
> >> When the client is in this state, the server was doing nothing special
> >> (procstat -kk):
> >>
> >> PID TID COMM TDNAME KSTACK
> >> 895 100538 nfsd nfsd: master mi_switch+0xe1
> >> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
> >> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de
> >> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> >> amd64_syscall+0x351 Xfast_syscall+0xfb
> >> 895 100568 nfsd nfsd: service mi_switch+0xe1
> >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> >> fork_trampoline+0xe
> >> [the remaining nfsd service threads (TIDs 100569 through 100606) all
> >> show the same idle stack]
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100607 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100608 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100609 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100610 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100611 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100612 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100613 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100614 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100615 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100617 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100618 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100619 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100621 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100622 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100623 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100624 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100625 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100626 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100627 nfsd nfsd: service mi_switch+0xe1 > >> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100628 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100629 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100630 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100631 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100632 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100633 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100634 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100635 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100636 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100638 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100639 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100640 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100641 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100642 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100643 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100644 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100645 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100646 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100647 nfsd nfsd: service mi_switch+0xe1 > >> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100648 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100649 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100651 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100652 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100653 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100654 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100655 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100656 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100657 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100658 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100659 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100661 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100662 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100684 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100685 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100686 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100797 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100798 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100799 nfsd nfsd: service mi_switch+0xe1 > >> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100800 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> 895 100801 nfsd nfsd: service mi_switch+0xe1 > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >> fork_trampoline+0xe > >> > >> I really think it's a client-side problem, maybe a lookup problem. > >> > >> Regards, > >> > >> Loïc Blot, > >> UNIX Systems, Network and Security Engineer > >> http://www.unix-experience.fr > >> > >> 5 January 2015 14:35 "Rick Macklem" > >> wrote: > >>> Loic Blot wrote: > >>> > >>>> Hi, > >>>> happy new year Rick and @freebsd-fs. > >>>> > >>>> After some days, I looked at my NFSv4.1 mount. At server start it > >>>> was > >>>> calm, but after 4 days, here is the top stat... > >>>> > >>>> CPU: 0.0% user, 0.0% nice, 100% system, 0.0% interrupt, 0.0% > >>>> idle > >>>> > >>>> I definitely think it's a problem on the client side. What can I > >>>> look at > >>>> in the running kernel to resolve this issue? > >>> > >>> Well, I'd start with: > >>> # nfsstat -e -s > >>> - run repeatedly on the server (once every N seconds in a loop). > >>> Then look at the output, comparing the counts and see which RPCs > >>> are being performed by the client(s). You are looking for which > >>> RPCs are being done a lot. (If one RPC is almost 100% of the > >>> load, > >>> then it might be a client/caching issue for whatever that RPC is > >>> doing.) > >>> > >>> Also look at the Open/Lock counts near the end of the output. > >>> If the # of Opens/Locks is large, it may be possible to reduce > >>> the > >>> CPU overheads by using larger hash tables.
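[Editor's note: Rick's "run nfsstat -e -s every N seconds and compare the counts" suggestion can be sketched as a small shell helper. The delta() function, the temp-file names, and the sample counter values below are illustrative, not part of the original exchange; real input would be fields extracted from nfsstat output.]

```shell
# Sketch: diff two "RpcName count" snapshots (as pulled from nfsstat -e -s
# output taken N seconds apart) and sort so the dominant RPC stands out.
# delta(), the file names, and the numbers are hypothetical examples.
delta() {
    # first file seeds prev[]; second file prints the per-RPC increase
    awk 'NR==FNR { prev[$1] = $2; next } { print $1, $2 - prev[$1] }' "$1" "$2" |
        sort -k2,2rn
}

# Fake samples standing in for two nfsstat runs:
printf 'Getattr 12600652\nLookup 2501097\nRead 1386423\n' > /tmp/nfs.t0
printf 'Getattr 12700652\nLookup 2501597\nRead 1386923\n' > /tmp/nfs.t1
delta /tmp/nfs.t0 /tmp/nfs.t1   # Getattr tops the list with a delta of 100000
```

A large skew toward one operation (here Getattr) is the client/caching signature Rick describes.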
> >>> > >>> Then you need to profile the server kernel to see where the CPU > >>> is being used. > >>> Hopefully someone else can fill you in on how to do that, because > >>> I'll admit I don't know how to. > >>> Basically you are looking to see if the CPU is being used in > >>> the NFS server code or ZFS. > >>> > >>> Good luck with it, rick > >>> > >>>> Regards, > >>>> > >>>> Loïc Blot, > >>>> UNIX Systems, Network and Security Engineer > >>>> http://www.unix-experience.fr > >>>> > >>>> 30 December 2014 16:16 "Loïc Blot" > >>>> > >>>> wrote: > >>>>> Hi Rick, > >>>>> I upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFS > >>>>> v4.1 > >>>>> (mount options: > >>>>> rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1) > >>>>> > >>>>> Performance is quite stable but it's slow. Not as slow as > >>>>> before, > >>>>> but slow... services were launched > >>>>> but no clients are using them and system CPU was 10-50%. > >>>>> > >>>>> I don't see anything on the NFSv4.1 server, it's perfectly stable > >>>>> and > >>>>> functional. > >>>>> > >>>>> Regards, > >>>>> > >>>>> Loïc Blot, > >>>>> UNIX Systems, Network and Security Engineer > >>>>> http://www.unix-experience.fr > >>>>> > >>>>> 23 December 2014 00:20 "Rick Macklem" wrote: > >>>>> > >>>>>> Loic Blot wrote: > >>>>>> > >>>>>>> Hi, > >>>>>>> > >>>>>>> To clarify because of our exchanges. 
Here are the current > >>>>>>> sysctl > >>>>>>> options for the server: > >>>>>>> > >>>>>>> vfs.nfsd.enable_nobodycheck=0 > >>>>>>> vfs.nfsd.enable_nogroupcheck=0 > >>>>>>> > >>>>>>> vfs.nfsd.maxthreads=200 > >>>>>>> vfs.nfsd.tcphighwater=10000 > >>>>>>> vfs.nfsd.tcpcachetimeo=300 > >>>>>>> vfs.nfsd.server_min_nfsvers=4 > >>>>>>> > >>>>>>> kern.maxvnodes=10000000 > >>>>>>> kern.ipc.maxsockbuf=4194304 > >>>>>>> net.inet.tcp.sendbuf_max=4194304 > >>>>>>> net.inet.tcp.recvbuf_max=4194304 > >>>>>>> > >>>>>>> vfs.lookup_shared=0 > >>>>>>> > >>>>>>> Regards, > >>>>>>> > >>>>>>> Loïc Blot, > >>>>>>> UNIX Systems, Network and Security Engineer > >>>>>>> http://www.unix-experience.fr > >>>>>>> > >>>>>>> 22 December 2014 09:42 "Loïc Blot" > >>>>>>> > >>>>>>> wrote: > >>>>>>> > >>>>>>> Hi Rick, > >>>>>>> my 5 jails ran this weekend and now I have some stats on > >>>>>>> this > >>>>>>> Monday. > >>>>>>> > >>>>>>> Hopefully the deadlock was fixed, yeah, but everything isn't good > >>>>>>> :( > >>>>>>> > >>>>>>> On the NFSv4 server (FreeBSD 10.1) the system uses 35% CPU > >>>>>>> > >>>>>>> As I can see, this is because of nfsd: > >>>>>>> > >>>>>>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H > >>>>>>> 273.68% nfsd: server (nfsd) > >>>>>>> > >>>>>>> If I look at dmesg I see: > >>>>>>> nfsd server cache flooded, try increasing > >>>>>>> vfs.nfsd.tcphighwater > >>>>>> > >>>>>> Well, you have a couple of choices: > >>>>>> 1 - Use NFSv4.1 (add "minorversion=1" to your mount options). > >>>>>> (NFSv4.1 avoids use of the DRC and instead uses something > >>>>>> called sessions. See below.) > >>>>>> OR > >>>>>> > >>>>>>> vfs.nfsd.tcphighwater was set to 10000, I increased it to > >>>>>>> 15000 > >>>>>> > >>>>>> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see > >>>>>> "nfs server cache flooded" messages. (I think Garrett Wollman > >>>>>> uses > >>>>>> 100000.) 
(You may still see quite a bit of CPU overhead.) > >>>>>> > >>>>>> OR > >>>>>> > >>>>>> 3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets > >>>>>> rid > >>>>>> of the CPU overheads). However, there is a risk of data > >>>>>> corruption > >>>>>> if you have a client->server network partitioning of a > >>>>>> moderate > >>>>>> duration, because a non-idempotent RPC may get redone, because > >>>>>> the client times out waiting for a reply. If a non-idempotent > >>>>>> RPC gets done twice on the server, data corruption can happen. > >>>>>> (The DRC provides improved correctness, but does add > >>>>>> overhead.) > >>>>>> > >>>>>> If #1 works for you, it is the preferred solution, since > >>>>>> sessions > >>>>>> in NFSv4.1 solve the correctness problem in a good, space-bound > >>>>>> way. A session basically has N (usually 32 or 64) slots and > >>>>>> only > >>>>>> allows one outstanding RPC/slot. As such, it can cache the > >>>>>> previous > >>>>>> reply for each slot (32 or 64 of them) and guarantee "exactly > >>>>>> once" > >>>>>> RPC semantics. 
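[Editor's note: for reference, the three remedies Rick lists map onto the commands below. The server name, export path, and mount point are placeholders; the tcphighwater value of 100000 is the figure Rick attributes to Garrett Wollman. A sketch only, not a recommendation of one option over another beyond what the thread itself says.]

```shell
# Option 1 (preferred): mount with NFSv4.1 so sessions replace the DRC.
# "nfs-server:/export" and "/mnt" are placeholders.
mount -t nfs -o rw,tcp,nfsv4,minorversion=1 nfs-server:/export /mnt

# Option 2: raise the DRC high-water mark until the
# "nfsd server cache flooded" messages stop appearing in dmesg.
sysctl vfs.nfsd.tcphighwater=100000

# Option 3: disable the DRC for TCP entirely. Cheapest in CPU, but a
# non-idempotent RPC may be redone after a network partition,
# risking data corruption.
sysctl vfs.nfsd.cachetcp=0
```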
> >>>>>> > >>>>>> rick > >>>>>> > >>>>>>> Here is 'nfsstat -s' output: > >>>>>>> > >>>>>>> Server Info: > >>>>>>> Getattr Setattr Lookup Readlink Read Write Create > >>>>>>> Remove > >>>>>>> 12600652 1812 2501097 156 1386423 1983729 123 > >>>>>>> 162067 > >>>>>>> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus > >>>>>>> Access > >>>>>>> 36762 9 0 0 0 3147 0 > >>>>>>> 623524 > >>>>>>> Mknod Fsstat Fsinfo PathConf Commit > >>>>>>> 0 0 0 0 328117 > >>>>>>> Server Ret-Failed > >>>>>>> 0 > >>>>>>> Server Faults > >>>>>>> 0 > >>>>>>> Server Cache Stats: > >>>>>>> Inprog Idem Non-idem Misses > >>>>>>> 0 0 0 12635512 > >>>>>>> Server Write Gathering: > >>>>>>> WriteOps WriteRPC Opsaved > >>>>>>> 1983729 1983729 0 > >>>>>>> > >>>>>>> And here is 'procstat -kk' for nfsd (server) > >>>>>>> > >>>>>>> 918 100528 nfsd nfsd: master mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10 > >>>>>>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 > >>>>>>> svc_run+0x1de > >>>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c > >>>>>>> amd64_syscall+0x351 Xfast_syscall+0xfb > >>>>>>> 918 100568 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100569 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100570 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100571 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> 
svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100572 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100573 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100574 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100575 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100576 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100577 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100578 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100579 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> 
fork_trampoline+0xe > >>>>>>> 918 100580 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100581 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100582 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100583 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100584 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100585 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100586 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100587 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 918 100588 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> >>>>>>> _cv_wait_sig+0x16a
> >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> >>>>>>> fork_trampoline+0xe
> >>>>>>> 918 100589 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> >>>>>>> fork_trampoline+0xe
> >>>>>>> [... the identical idle stack repeats for service TIDs 100590 through 100662; duplicates elided ...]
> >>>>>>> ---
> >>>>>>>
> >>>>>>> Now if we look at the client
(FreeBSD 9.3)
> >>>>>>>
> >>>>>>> We see the system was very busy and taking many, many interrupts:
> >>>>>>>
> >>>>>>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0% idle
> >>>>>>>
> >>>>>>> A look at the process list shows many sendmail processes in
> >>>>>>> state nfstry:
> >>>>>>>
> >>>>>>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for
> >>>>>>> /var/spool/clientm
> >>>>>>>
> >>>>>>> Here is the 'nfsstat -c' output:
> >>>>>>>
> >>>>>>> Client Info:
> >>>>>>> Rpc Counts:
> >>>>>>> Getattr    Setattr    Lookup     Readlink   Read       Write      Create     Remove
> >>>>>>> 1051347    1724       2494481    118        903902     1901285    162676     161899
> >>>>>>> Rename     Link       Symlink    Mkdir      Rmdir      Readdir    RdirPlus   Access
> >>>>>>> 36744      2          0          114        40         3131       0          544136
> >>>>>>> Mknod      Fsstat     Fsinfo     PathConf   Commit
> >>>>>>> 9          0          0          0          245821
> >>>>>>> Rpc Info:
> >>>>>>> TimedOut   Invalid    X Replies  Retries    Requests
> >>>>>>> 0          0          0          0          8356557
> >>>>>>> Cache Info:
> >>>>>>> Attr Hits  Misses     Lkup Hits  Misses     BioR Hits  Misses     BioW Hits  Misses
> >>>>>>> 108754455  491475     54229224   2437229    46814561   821723     5132123    1871871
> >>>>>>> BioRLHits  Misses     BioD Hits  Misses     DirE Hits  Misses     Accs Hits  Misses
> >>>>>>> 144035     118        53736      2753       27813      1          57238839   544205
> >>>>>>>
> >>>>>>> If you need anything more, tell me; I'll leave the PoC in this state.
> >>>>>>>
> >>>>>>> Thanks
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Loïc Blot,
> >>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>>
> >>>>>>> On 21 December 2014 at 01:33, "Rick Macklem" wrote:
> >>>>>>>
> >>>>>>> Loic Blot wrote:
> >>>>>>>
> >>>>>>>> Hi Rick,
> >>>>>>>> OK, I don't need local locks; I hadn't understood that the
> >>>>>>>> option was for that usage, so I removed it.
> >>>>>>>> I'll do more tests on Monday.
> >>>>>>>> Thanks for the deadlock fix, for other people :)
> >>>>>>>
> >>>>>>> Good. Please let us know if running with
> >>>>>>> vfs.nfsd.enable_locallocks=0
> >>>>>>> gets rid of the deadlocks. (I think it fixes the one you saw.)
> >>>>>>>
> >>>>>>> On the performance side, you might also want to try different
> >>>>>>> values of readahead, if the Linux client has such a mount
> >>>>>>> option. (With the NFSv4-ZFS sequential vs random I/O heuristic,
> >>>>>>> I have no idea what the optimal readahead value would be.)
> >>>>>>>
> >>>>>>> Good luck with it and please let us know how it goes, rick
> >>>>>>> ps: I now have a patch to fix the deadlock when
> >>>>>>> vfs.nfsd.enable_locallocks=1
> >>>>>>> is set. I'll post it for anyone who is interested after I put
> >>>>>>> it through some testing.
> >>>>>>>
> >>>>>>> --
> >>>>>>> Best regards,
> >>>>>>> Loïc BLOT,
> >>>>>>> UNIX systems, security and network engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>>
> >>>>>>> On Thursday 18 December 2014 at 19:46 -0500, Rick Macklem wrote:
> >>>>>>>
> >>>>>>> Loic Blot wrote:
> >>>>>>>> Hi Rick,
> >>>>>>>> I tried to start an LXC container on Debian Squeeze from my
> >>>>>>>> FreeBSD ZFS+NFSv4 server and I also get a deadlock on nfsd
> >>>>>>>> (vfs.lookup_shared=0). nfsd deadlocks each time I launch a
> >>>>>>>> Squeeze container, it seems (3 tries, 3 fails).
> >>>>>>>
> >>>>>>> Well, I'll take a look at this 'procstat -kk', but the only
> >>>>>>> thing I've seen posted w.r.t. avoiding deadlocks in ZFS is to
> >>>>>>> not use nullfs. (I have no idea if you are using any nullfs
> >>>>>>> mounts, but if so, try getting rid of them.)
> >>>>>>>
> >>>>>>> Here's a high-level post about the ZFS and vnode locking
> >>>>>>> problem, but there is no patch available, as far as I know.
> >>>>>>>
> >>>>>>> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
> >>>>>>>
> >>>>>>> rick
> >>>>>>>
> >>>>>>> 921 - D 0:00.02 nfsd: server (nfsd)
> >>>>>>>
> >>>>>>> Here is the procstat -kk
> >>>>>>>
> >>>>>>> PID TID COMM TDNAME KSTACK
> >>>>>>> 921 100538 nfsd nfsd: master mi_switch+0xe1
> >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
> >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> >>>>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
> >>>>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
> >>>>>>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
> >>>>>>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> >>>>>>> 921 100572 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> >>>>>>> fork_trampoline+0xe
> >>>>>>> [... the identical idle stack repeats for service TIDs 100573 through 100615; duplicates elided ...]
> >>>>>>> 921 100616 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
> >>>>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>>> 921 100617 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> >>>>>>> fork_trampoline+0xe
> >>>>>>> 921 100618 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>>> [... the identical idle stack repeats for service TIDs 100619 through at least 100651; duplicates elided ...]
fork_trampoline+0xe > >>>>>>> 921 100652 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100653 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100654 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100655 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100656 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100657 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100658 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100659 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100660 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100661 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100662 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100663 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100664 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100665 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > >>>>>>> _cv_wait_sig+0x16a > >>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > >>>>>>> fork_trampoline+0xe > >>>>>>> 921 100666 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > >>>>>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 > >>>>>>> nfsrvd_dorpc+0xc76 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>>=20 > >>>>>>> Regards, > >>>>>>>=20 > >>>>>>> Lo=C3=AFc Blot, > >>>>>>> UNIX Systems, Network and Security Engineer > >>>>>>> http://www.unix-experience.fr > >>>>>>>=20 > >>>>>>> 15 d=C3=A9cembre 2014 15:18 "Rick Macklem" > >>>>>>> a > >>>>>>> =C3=A9crit: > >>>>>>>=20 > >>>>>>> 
Loic Blot wrote:
> >>>>>>>
> >>>>>>>> For more information, here is procstat -kk on nfsd. If you
> >>>>>>>> need more hot data, tell me.
> >>>>>>>>
> >>>>>>>> Regards, PID TID COMM TDNAME KSTACK
> >>>>>>>> 918 100529 nfsd nfsd: master mi_switch+0xe1
> >>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
> >>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> >>>>>>>> zfs_fhtovp+0x38d
> >>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
> >>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de
> >>>>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> >>>>>>>> amd64_syscall+0x351
> >>>>>>>
> >>>>>>> Well, most of the threads are stuck like this one, waiting for
> >>>>>>> a vnode lock in ZFS. All of them appear to be in zfs_fhtovp().
> >>>>>>> I'm not a ZFS guy, so I can't help much. I'll try changing the
> >>>>>>> subject line to include ZFS vnode lock, so maybe the ZFS guys
> >>>>>>> will take a look.
> >>>>>>>
> >>>>>>> The only thing I've seen suggested is trying:
> >>>>>>> sysctl vfs.lookup_shared=0
> >>>>>>> to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't
> >>>>>>> obey the vnode locking rules for lookup and rename, according
> >>>>>>> to the posting I saw.
> >>>>>>>
> >>>>>>> I've added a couple of comments about the other threads below,
> >>>>>>> but they are all either waiting for an RPC request or waiting
> >>>>>>> for the threads stuck on the ZFS vnode lock to complete.
> >>>>>>>
> >>>>>>> rick
> >>>>>>>
> >>>>>>>> 918 100564 nfsd nfsd: service mi_switch+0xe1
> >>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
> >>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> >>>>>>>> fork_trampoline+0xe
> >>>>>>>
> >>>>>>> Fyi, this thread is just waiting for an RPC to arrive. (Normal)
> >>>>>>>
> >>>>>>>> [threads 100565 through 100570 omitted: same idle stack as
> >>>>>>>> 100564]
> >>>>>>>> 918 100571 nfsd nfsd: service mi_switch+0xe1
> >>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>>>> 918 100572 nfsd nfsd: service mi_switch+0xe1
> >>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>>>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
> >>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
> >>>>>>>> fork_exit+0x9a fork_trampoline+0xe
> >>>>>>>
> >>>>>>> This one (and a few others) are waiting for the nfsv4_lock.
> >>>>>>> This happens because other threads are stuck with RPCs in
> >>>>>>> progress (ie. the ones waiting on the vnode lock in
> >>>>>>> zfs_fhtovp()). For these, the RPC needs to lock out other
> >>>>>>> threads to do the operation, so it waits for the nfsv4_lock(),
> >>>>>>> which can exclusively lock the NFSv4 data structures once all
> >>>>>>> other nfsd threads complete their RPCs in progress.
> >>>>>>>
> >>>>>>>> 918 100573 nfsd nfsd: service mi_switch+0xe1
> >>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>>>
> >>>>>>> Same as above.
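[Editor's note: dumps like the ones quoted here are far easier to triage
when aggregated by unique kernel stack instead of read thread by thread.
A minimal sketch of such an aggregation follows; the sample input lines
are hypothetical, abbreviated stand-ins for real `procstat -kk` output,
and the field positions assume its usual PID/TID/COMM/TDNAME columns.]

```shell
# Aggregate procstat -kk style output by kernel stack: blank out the
# per-thread columns (PID, TID, COMM and the two-word TDNAME) so that
# threads sharing a stack collapse into one counted line.
# On a live system, replace the printf with: procstat -kk $(pgrep -x nfsd)
printf '%s\n' \
  '918 100574 nfsd nfsd: service mi_switch zfs_fhtovp nfsrvd_dorpc' \
  '918 100575 nfsd nfsd: service mi_switch zfs_fhtovp nfsrvd_dorpc' \
  '918 100611 nfsd nfsd: service mi_switch nfsv4_lock nfsrvd_dorpc' |
awk '{ $1 = $2 = $3 = $4 = $5 = ""; count[$0]++ }
     END { for (s in count) printf "%4d%s\n", count[s], s }' |
sort -rn
# The most common stack (here the zfs_fhtovp one, with count 2) sorts first.
```

With hundreds of nfsd threads, the dominant stack in this output points
straight at the bottleneck (here, the vnode lock in zfs_fhtovp()).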
> >>>>>>>
> >>>>>>>> 918 100574 nfsd nfsd: service mi_switch+0xe1
> >>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
> >>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> >>>>>>>> zfs_fhtovp+0x38d
> >>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
> >>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
> >>>>>>>> fork_exit+0x9a fork_trampoline+0xe
> >>>>>>>> [threads 100575 through 100607 omitted: all show this same
> >>>>>>>> stack, blocked on the vnode lock in zfs_fhtovp()]
> >>>>>>>
> >>>>>>> Lots more waiting for the ZFS vnode
lock in zfs_fhtovp().
> >>>>>>>
> >>>>>>> 918 100608 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
> >>>>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>>> 918 100609 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
> >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> >>>>>>> zfs_fhtovp+0x38d
> >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
> >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
> >>>>>>> fork_exit+0x9a fork_trampoline+0xe
> >>>>>>> 918 100610 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
> >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
> >>>>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
> >>>>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
> >>>>>>> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a
> >>>>>>> fork_trampoline+0xe
> >>>>>>> 918 100611 nfsd nfsd: service mi_switch+0xe1
> >>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> >>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
> >>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> >>>>>>> [threads 100612 through 100618, 100621 and 100623 omitted: same
> >>>>>>> nfsv4_lock stack as 100611; threads 100619, 100620, 100622 and
> >>>>>>> 100624 through 100635 omitted: same zfs_fhtovp stack as 100609]
> >>>>>>> 918 100636 nfsd nfsd: service
mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100637 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100638 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100639 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100640 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100641 nfsd nfsd: service 
mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100642 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100643 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100644 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100645 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100646 nfsd nfsd: service 
mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100647 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100648 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100649 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100650 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100651 nfsd nfsd: service 
mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100652 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100653 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100654 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100655 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100656 nfsd nfsd: service 
mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100657 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>> 918 100658 nfsd nfsd: service mi_switch+0xe1 > >>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > >>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > >>>>>>> zfs_fhtovp+0x38d > >>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > >>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 > >>>>>>> svc_thread_start+0xb > >>>>>>> fork_exit+0x9a fork_trampoline+0xe > >>>>>>>=20 > >>>>>>> Lo=C3=AFc Blot, > >>>>>>> UNIX Systems, Network and Security Engineer > >>>>>>> http://www.unix-experience.fr > >>>>>>>=20 > >>>>>>> 15 d=C3=A9cembre 2014 13:29 "Lo=C3=AFc Blot" > >>>>>>> > >>>>>>> a > >>>>>>> =C3=A9crit: > >>>>>>>=20 > >>>>>>> Hmmm... > >>>>>>> now i'm experiencing a deadlock. > >>>>>>>=20 > >>>>>>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server > >>>>>>> (nfsd) > >>>>>>>=20 > >>>>>>> the only issue was to reboot the server, but after rebooting > >>>>>>> deadlock arrives a second time when i > >>>>>>> start my jails over NFS. 
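A dump like the one above is far easier to read once threads with identical stacks are folded together. The sketch below is a minimal, hypothetical helper for this one-line-per-thread `procstat -kk`-style layout (the function name `group_stacks` and the parsing rules are mine, not anything from the thread; it is not a general procstat parser). It groups threads by their stack signature, with frame offsets stripped:

```python
# Minimal sketch: fold "procstat -kk"-style thread dumps (one line per
# thread, as in the nfsd output above) into groups sharing the same
# kernel stack. Assumes frames look like "func+0xoffset"; this is an
# illustrative helper, not a general procstat parser.

def group_stacks(lines):
    """Return {stack-signature tuple: [thread ids]}."""
    groups = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 3 or not fields[1].isdigit():
            continue  # not a "pid tid comm ..." thread line
        tid = fields[1]
        # Keep only stack frames, dropping the +0x... offsets.
        frames = tuple(f.split("+")[0] for f in fields if "+0x" in f)
        groups.setdefault(frames, []).append(tid)
    return groups

# Shortened sample lines taken from the nfsd dump quoted above.
sample = [
    "918 100621 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a"
    " _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316",
    "918 100622 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a"
    " sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c zfs_fhtovp+0x38d",
    "918 100623 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a"
    " sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c zfs_fhtovp+0x38d",
]

for frames, tids in group_stacks(sample).items():
    print(f"{len(tids)} thread(s) blocked at {' -> '.join(frames[-3:])}")
```

On the full dump this immediately shows the picture: a handful of threads waiting in nfsv4_lock, and dozens queued on the vnode lock inside zfs_fhtovp.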
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Loïc Blot,
> >>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>>
> >>>>>>> 15 décembre 2014 10:07 "Loïc Blot" wrote:
> >>>>>>>
> >>>>>>> Hi Rick,
> >>>>>>> After talking with my N+1, NFSv4 is required on our
> >>>>>>> infrastructure. I tried to upgrade the NFSv4+ZFS server from
> >>>>>>> 9.3 to 10.1; I hope this will resolve some issues...
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Loïc Blot,
> >>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>>
> >>>>>>> 10 décembre 2014 15:36 "Loïc Blot" wrote:
> >>>>>>>
> >>>>>>> Hi Rick,
> >>>>>>> Thanks for your suggestion.
> >>>>>>> For my locking bug, rpc.lockd is stuck in the rpcrecv state on
> >>>>>>> the server. kill -9 doesn't affect the process; it's blocked
> >>>>>>> (state: Ds).
> >>>>>>>
> >>>>>>> For the performance:
> >>>>>>>
> >>>>>>> NFSv3: 60Mbps
> >>>>>>> NFSv4: 45Mbps
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Loïc Blot,
> >>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>>
> >>>>>>> 10 décembre 2014 13:56 "Rick Macklem" wrote:
> >>>>>>>
> >>>>>>> Loic Blot wrote:
> >>>>>>>
> >>>>>>>> Hi Rick,
> >>>>>>>> I'm trying NFSv3.
> >>>>>>>> Some jails start very well, but now I have an issue with
> >>>>>>>> lockd after some minutes:
> >>>>>>>>
> >>>>>>>> nfs server 10.10.X.8:/jails: lockd not responding
> >>>>>>>> nfs server 10.10.X.8:/jails lockd is alive again
> >>>>>>>>
> >>>>>>>> I looked at mbufs, but it seems there is no problem.
> >>>>>>>
> >>>>>>> Well, if you need locks to be visible across multiple clients,
> >>>>>>> then I'm afraid you are stuck with using NFSv4 and the
> >>>>>>> performance you get from it. (There is no way to do file handle
> >>>>>>> affinity for NFSv4 because the read and write ops are buried in
> >>>>>>> the compound RPC and not easily recognized.)
> >>>>>>>
> >>>>>>> If the locks don't need to be visible across multiple clients,
> >>>>>>> I'd suggest trying the "nolockd" option with nfsv3.
> >>>>>>>
> >>>>>>>> Here is my rc.conf on the server:
> >>>>>>>>
> >>>>>>>> nfs_server_enable="YES"
> >>>>>>>> nfsv4_server_enable="YES"
> >>>>>>>> nfsuserd_enable="YES"
> >>>>>>>> nfsd_server_flags="-u -t -n 256"
> >>>>>>>> mountd_enable="YES"
> >>>>>>>> mountd_flags="-r"
> >>>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>>>>> rpcbind_enable="YES"
> >>>>>>>> rpc_lockd_enable="YES"
> >>>>>>>> rpc_statd_enable="YES"
> >>>>>>>>
> >>>>>>>> Here is the client:
> >>>>>>>>
> >>>>>>>> nfsuserd_enable="YES"
> >>>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
> >>>>>>>> nfscbd_enable="YES"
> >>>>>>>> rpc_lockd_enable="YES"
> >>>>>>>> rpc_statd_enable="YES"
> >>>>>>>>
> >>>>>>>> Do you have any idea?
> >>>>>>>>
> >>>>>>>> Regards,
> >>>>>>>>
> >>>>>>>> Loïc Blot,
> >>>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>>> http://www.unix-experience.fr
> >>>>>>>>
> >>>>>>>> 9 décembre 2014 04:31 "Rick Macklem" wrote:
> >>>>>>>>> Loic Blot wrote:
> >>>>>>>>>
> >>>>>>>>>> Hi Rick,
> >>>>>>>>>>
> >>>>>>>>>> I waited 3 hours (no lag at jail launch) and now I do:
> >>>>>>>>>> sysrc memcached_flags="-v -m 512"
> >>>>>>>>>> The command was very, very slow...
> >>>>>>>>>>
> >>>>>>>>>> Here is a dd over NFS:
> >>>>>>>>>>
> >>>>>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579
> >>>>>>>>>> bytes/sec)
> >>>>>>>>>
> >>>>>>>>> Can you try the same read using an NFSv3 mount?
> >>>>>>>>> (If it runs much faster, you have probably been bitten by the
> >>>>>>>>> ZFS "sequential vs random" read heuristic, which I've been
> >>>>>>>>> told thinks NFS is doing "random" reads without file handle
> >>>>>>>>> affinity. File handle affinity is very hard to do for NFSv4,
> >>>>>>>>> so it isn't done.)
> >>>>>>>
> >>>>>>> I was actually suggesting that you try the "dd" over nfsv3 to
> >>>>>>> see how the performance compares with nfsv4. If you do that,
> >>>>>>> please post the comparable results.
> >>>>>>>
> >>>>>>> Someday I would like to try and get ZFS's sequential vs random
> >>>>>>> read heuristic modified, and any info on what difference in
> >>>>>>> performance that might make for NFS would be useful.
> >>>>>>>
> >>>>>>> rick
> >>>>>>>
> >>>>>>> This is quite slow...
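As a sanity check on the dd figure just quoted, the arithmetic can be done directly. The MiB/s conversion and the comparison against the ~450MBps local write figure mentioned elsewhere in this thread are mine, not part of the original mail:

```python
# Convert the dd result quoted above (601062912 bytes in 21.060679 s)
# into human units: roughly 27 MiB/s over the NFSv4 mount, far below
# what the same ZFS pool reportedly delivers locally (~450 MB/s
# according to a later message in this thread).
nbytes = 601062912
secs = 21.060679

rate = nbytes / secs
print(round(rate))             # dd itself reported 28539579 bytes/sec
print(round(rate / 2**20, 1))  # the same rate in MiB/s
```

That is an order of magnitude below both the local disk figure and what a single gigabit igb port can carry, which supports Rick's suspicion that the bottleneck is in the NFS path rather than the storage.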
> >>>>>>>
> >>>>>>> You can find some nfsstat output below (the command hasn't
> >>>>>>> finished yet):
> >>>>>>>
> >>>>>>> nfsstat -c -w 1
> >>>>>>>
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 0 0 0 0 0 16 0
> >>>>>>> 2 0 0 0 0 0 17 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 4 0 0 0 0 4 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 0 0 0 0 0 3 0
> >>>>>>> 0 0 0 0 0 0 3 0
> >>>>>>> 37 10 0 8 0 0 14 1
> >>>>>>> 18 16 0 4 1 2 4 0
> >>>>>>> 78 91 0 82 6 12 30 0
> >>>>>>> 19 18 0 2 2 4 2 0
> >>>>>>> 0 0 0 0 2 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 1 0 0 0 0 1 0
> >>>>>>> 4 6 0 0 6 0 3 0
> >>>>>>> 2 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 1 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 1 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 6 108 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 98 54 0 86 11 0 25 0
> >>>>>>> 36 24 0 39 25 0 10 1
> >>>>>>> 67 8 0 63 63 0 41 0
> >>>>>>> 34 0 0 35 34 0 0 0
> >>>>>>> 75 0 0 75 77 0 0 0
> >>>>>>> 34 0 0 35 35 0 0 0
> >>>>>>> 75 0 0 74 76 0 0 0
> >>>>>>> 33 0 0 34 33 0 0 0
> >>>>>>> 0 0 0 0 5 0 0 0
> >>>>>>> 0 0 0 0 0 0 6 0
> >>>>>>> 11 0 0 0 0 0 11 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 17 0 0 0 0 1 0
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 4 5 0 0 0 0 12 0
> >>>>>>> 2 0 0 0 0 0 26 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 4 0 0 0 0 4 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 0 0 0 0 0 2 0
> >>>>>>> 2 0 0 0 0 0 24 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 0 0 0 0 0 7 0
> >>>>>>> 2 1 0 0 0 0 1 0
> >>>>>>> 0 0 0 0 2 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 6 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 6 0 0 0 0 3 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 2 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 71 0 0 0 0 0 0
> >>>>>>> 0 1 0 0 0 0 0 0
> >>>>>>> 2 36 0 0 0 0 1 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 1 0 0 0 0 0 1 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 79 6 0 79 79 0 2 0
> >>>>>>> 25 0 0 25 26 0 6 0
> >>>>>>> 43 18 0 39 46 0 23 0
> >>>>>>> 36 0 0 36 36 0 31 0
> >>>>>>> 68 1 0 66 68 0 0 0
> >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
> >>>>>>> 36 0 0 36 36 0 0 0
> >>>>>>> 48 0 0 48 49 0 0 0
> >>>>>>> 20 0 0 20 20 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 3 14 0 1 0 0 11 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 0 4 0 0 0 0 4 0
> >>>>>>> 0 0 0 0 0 0 0 0
> >>>>>>> 4 22 0 0 0 0 16 0
> >>>>>>> 2 0 0 0 0 0 23 0
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Loïc Blot,
> >>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>>
> >>>>>>> 8 décembre 2014 09:36 "Loïc Blot" wrote:
> >>>>>>>> Hi Rick,
> >>>>>>>> I stopped the jails this week-end and started them this
> >>>>>>>> morning; I'll give you some stats this week.
> >>>>>>>>
> >>>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks):
> >>>>>>>>
> >>>>>>>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,
> >>>>>>>> acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,
> >>>>>>>> rsize=32768,wsize=32768,readdirsize=32768,readahead=1,
> >>>>>>>> wcommitsize=773136,timeout=120,retrans=2147483647
> >>>>>>>>
> >>>>>>>> On the server side my disks are behind a RAID controller which
> >>>>>>>> shows a 512b volume, and write performance is very honest
> >>>>>>>> (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000
> >>>>>>>> => 450MBps)
> >>>>>>>>
> >>>>>>>> Regards,
> >>>>>>>>
> >>>>>>>> Loïc Blot,
> >>>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>>> http://www.unix-experience.fr
> >>>>>>>>
> >>>>>>>> 5 décembre 2014 15:14 "Rick Macklem" wrote:
> >>>>>>>
> >>>>>>> Loic Blot wrote:
> >>>>>>>
> >>>>>>> Hi,
> >>>>>>> I'm trying to create a virtualisation environment based on
> >>>>>>> jails. The jails are stored in a big ZFS pool on a FreeBSD 9.3
> >>>>>>> server which exports an NFSv4 volume.
This NFSv4 volume was mounted on a
> >>>>>>> big hypervisor (2 Xeon E5v3 + 128GB of memory and 8 network
> >>>>>>> ports, only 1 of which was in use at the time).
> >>>>>>>
> >>>>>>> The problem is simple: my hypervisor runs 6 jails (using about
> >>>>>>> 1% CPU, 10GB of RAM and less than 1MB of bandwidth) and works
> >>>>>>> fine at start, but the system slows down and becomes unusable
> >>>>>>> after 2-3 days. When I look at top I see 80-100% system CPU,
> >>>>>>> and commands are very, very slow. Many processes are tagged
> >>>>>>> with nfs_cl*.
> >>>>>>>
> >>>>>>> To be honest, I would expect the slowness to be because of slow
> >>>>>>> response from the NFSv4 server, but if you do:
> >>>>>>> # ps axHl
> >>>>>>> on a client when it is slow and post that, it would give us
> >>>>>>> some more information on where the client side processes are
> >>>>>>> sitting.
> >>>>>>> If you also do something like:
> >>>>>>> # nfsstat -c -w 1
> >>>>>>> and let it run for a while, that should show you how many RPCs
> >>>>>>> are being done and which ones.
> >>>>>>>
> >>>>>>> # nfsstat -m
> >>>>>>> will show you what your mount is actually using.
> >>>>>>> The only mount option I can suggest trying is
> >>>>>>> "rsize=32768,wsize=32768", since some network environments have
> >>>>>>> difficulties with 64K.
> >>>>>>>
> >>>>>>> There are a few things you can try on the NFSv4 server side, if
> >>>>>>> it appears that the clients are generating a large RPC load:
> >>>>>>> - disabling the DRC cache for TCP by setting
> >>>>>>> vfs.nfsd.cachetcp=0
> >>>>>>> - if the server is seeing a large write RPC load, then
> >>>>>>> "sync=disabled" might help, although it does run a risk of
> >>>>>>> data loss when the server crashes.
> >>>>>>> Then there are a couple of other ZFS-related things (I'm not a
> >>>>>>> ZFS guy, but these have shown up on the mailing lists):
> >>>>>>> - make sure your volumes are 4K aligned and ashift=12 (in case
> >>>>>>> a drive that uses 4K sectors is pretending to be 512-byte
> >>>>>>> sectored)
> >>>>>>> - never run over 70-80% full if write performance is an issue
> >>>>>>> - use a ZIL on an SSD with good write performance
> >>>>>>>
> >>>>>>> The only NFSv4 thing I can tell you is that it is known that
> >>>>>>> ZFS's algorithm for determining sequential vs random I/O fails
> >>>>>>> for NFSv4 during writing, and this can be a performance hit.
> >>>>>>> The only workaround is to use NFSv3 mounts, since file handle
> >>>>>>> affinity apparently fixes the problem, and this is only done
> >>>>>>> for NFSv3.
> >>>>>>>
> >>>>>>> rick
> >>>>>>>
> >>>>>>> I saw that there are TSO issues with igb, so I tried to disable
> >>>>>>> TSO with sysctl, but that didn't solve the situation.
> >>>>>>>
> >>>>>>> Does someone have ideas? I can give you more information if you
> >>>>>>> need it.
> >>>>>>>
> >>>>>>> Thanks in advance.
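For reference, the server-side knobs discussed in this thread can be collected in one place. This is a sketch only, not something posted in the thread: the dataset name `tank/jails` and the interface `igb0` are placeholders, each setting should be verified against your own setup, and `sync=disabled` trades crash safety for write latency as Rick notes above.

```sh
# Hypothetical FreeBSD 10.x NFS/ZFS server tuning pass, collecting the
# suggestions from this thread. "tank/jails" and "igb0" are placeholders.

# Disable the NFS duplicate request cache for TCP:
sysctl vfs.nfsd.cachetcp=0

# If the write RPC load is high, trade crash safety for write latency:
zfs set sync=disabled tank/jails

# Check the pool is 4K-aligned; "ashift: 12" is what you want to see:
zdb -C tank | grep ashift

# Keep occupancy below roughly 80% if write performance matters:
zpool list -o name,capacity tank

# Disable TSO on the igb interface if the known TSO issues are suspected:
ifconfig igb0 -tso
```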
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Loïc Blot,
> >>>>>>> UNIX Systems, Network and Security Engineer
> >>>>>>> http://www.unix-experience.fr
> >>>>>>> _______________________________________________
> >>>>>>> freebsd-fs@freebsd.org mailing list
> >>>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> >>>>>>> To unsubscribe, send any mail to
> >>>>>>> "freebsd-fs-unsubscribe@freebsd.org"
> >>>>>
> >>>>> _______________________________________________
> >>>>> freebsd-fs@freebsd.org mailing list
> >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> >>>>> To unsubscribe, send any mail to
> >>>>>
"freebsd-fs-unsubscribe@freebsd.org"