Date:      Sun, 15 Oct 2017 21:25:18 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Fabian Freyer <fabian.freyer@physik.tu-berlin.de>, "freebsd-stable@freebsd.org" <freebsd-stable@freebsd.org>
Cc:        "lists@searchy.net" <lists@searchy.net>, "zeha@physik.tu-berlin.de" <zeha@physik.tu-berlin.de>, "steff@physik.tu-berlin.de" <steff@physik.tu-berlin.de>
Subject:   Re: kernel process [nfscl] high cpu
Message-ID:  <YTOPR0101MB21723273008339C23593A2CFDD4E0@YTOPR0101MB2172.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <39012cc6-a527-eb8e-3a6f-7e24cd9d98d8@physik.tu-berlin.de>
References:  <1031959302.30289198.1430594914473.JavaMail.root@uoguelph.ca> <5603AE3D.5090407@searchy.net> <1887696626.8730412.1443097925392.JavaMail.zimbra@uoguelph.ca> <5603FC85.8070105@searchy.net> <924567664.10016007.1443129423733.JavaMail.zimbra@uoguelph.ca>, <39012cc6-a527-eb8e-3a6f-7e24cd9d98d8@physik.tu-berlin.de>

Well, a couple of comments:
1 - I have no idea if NFSv4 mounts (or any NFS mount for that matter)
     will work correctly within jails. (I don't do jails and know nothing
     about them. Or, at least very little about them.)
2 - I do know that the "nfsuserd" daemon is badly broken when
     jails are in use.
     - There are two things you can do about this.
    1 - Set vfs.nfsd.enable_stringtouid=1 on the server, plus do not run
          the nfsuserd daemon on either the client or server. (This will
          require a small change to /etc/rc.d/nfsd to avoid it being started
          at boot. See the /etc/rc.d/nfsd script in head/current. The change
          wasn't MFC'd because it was considered a POLA violation.)
       --> This causes the user/group strings to be just the numbers on the
             wire and nfsuserd is no longer needed.
            (The Linux client will do this by default.)
         If it exists on the client, also set vfs.nfs.enable_uidtostring=1.
         (I think this is in stable/11, but not 11.1.)
    OR
    2 - Carry the patches in head/current as r320698, r320757 and
         r320758 over to the 11.1 sources and build the kernel plus
         nfsuserd from these patched sources. (These patches make
         the nfsuserd daemon use an AF_LOCAL socket, which allows it
         to work within a jail.)
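
In case it helps, option 1 boils down to something like the following
(a sketch only; vfs.nfs.enable_uidtostring may not exist in your 11.1
client sources, and without the head/current rc.d change you still need
to keep nfsuserd from being started at boot):

```sh
# On the server: put numeric uid/gid strings on the wire, so that
# nfsuserd is no longer needed.
sysctl vfs.nfsd.enable_stringtouid=1
echo 'vfs.nfsd.enable_stringtouid=1' >> /etc/sysctl.conf

# On the client, if your sources have this sysctl:
sysctl vfs.nfs.enable_uidtostring=1

# On both sides, make sure nfsuserd is not enabled:
sysrc nfsuserd_enable="NO"
```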

As noted at the beginning, I know that nfsuserd breaks when jails
are in use, but I do not know what other NFSv4 related things break
within jails, so fixing the nfsuserd situation may not resolve your problems.

rick

________________________________________
From: Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
Sent: Sunday, October 15, 2017 4:45:00 PM
To: freebsd-stable@freebsd.org
Cc: Rick Macklem; lists@searchy.net; zeha@physik.tu-berlin.de; steff@physik.tu-berlin.de
Subject: Re: kernel process [nfscl] high cpu

Hi,

(I'm not on this list, please CC me in future replies.
My apologies to those who get this message twice, I had a typo in the
To: header, and have to re-send it to the list.)

sorry for reviving such an old thread, but I've run into this problem
lately as well, on a 11.1-RELEASE-p1 jailhost mounting NFSv4.0 mounts
into jails.

On 24.09.2015 23:17, Rick Macklem wrote:
> Frank de Bot wrote:
>> Rick Macklem wrote:
>>> Frank de Bot wrote:
>>>> Rick Macklem wrote:
>>>>> Frank de Bot wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On a 10.1-RELEASE-p9 server I have several NFS mounts used for a
>>>>>> jail.
>>>>>> Because it's a server only to test, there is a low load. But the
>>>>>> [nfscl]
>>>>>> process is hogging a CPU after a while. This happens pretty fast,
>>>>>> within
>>>>>> 1 or 2 days. I'm noticing the high CPU of the process when I want to
>>>>>> do
>>>>>> some test after a little while (those 1 or 2 days).

Here's my ps ax | grep nfscl:
# ps ax | grep nfscl
11111  -  DL      932:08.74 [nfscl]
11572  -  DL      442:27.42 [nfscl]
30396  -  DL      933:44.13 [nfscl]
35902  -  DL      442:08.70 [nfscl]
40881  -  DL      938:56.04 [nfscl]
43276  -  DL      932:38.88 [nfscl]
49178  -  DL      934:24.77 [nfscl]
56314  -  DL      935:21.55 [nfscl]
60085  -  DL      936:37.11 [nfscl]
71788  -  DL      933:10.96 [nfscl]
82001  -  DL      934:45.76 [nfscl]
86222  -  DL      931:42.94 [nfscl]
92353  -  DL     1186:53.38 [nfscl]
21105 20  S+        0:00.00 grep nfscl

And this is on a 12-core with Hyperthreading:
# uptime
7:28PM  up 11 days,  4:50, 4 users, load averages: 25.49, 21.91, 20.25

Most of this load is being generated by the nfscl threads.

>>>>>> My jail.conf look like:
>>>>>>
>>>>>> exec.start = "/bin/sh /etc/rc";
>>>>>> exec.stop = "/bin/sh /etc/rc.shutdown";
>>>>>> exec.clean;
>>>>>> mount.devfs;
>>>>>> exec.consolelog = "/var/log/jail.$name.log";
>>>>>> #mount.fstab = "/usr/local/etc/jail.fstab.$name";
>>>>>>
>>>>>> test01 {
>>>>>>         host.hostname = "test01_hosting";
>>>>>>         ip4.addr = somepublicaddress;
>>>>>>         ip4.addr += someprivateaddress;
>>>>>>
>>>>>>         mount = "10.13.37.2:/tank/hostingbase      /opt/jails/test01
>>>>>>    nfs     nfsv4,minorversion=1,pnfs,ro,noatime        0       0";
>>>>>>         mount +=  "10.13.37.2:/tank/hosting/test
>>>>>> /opt/jails/test01/opt       nfs     nfsv4,minorversion=1,pnfs,noatime
>>>>>>      0       0";
>>>>>>
>>>>>>         path = "/opt/jails/test01";
>>>>>> }
>>>>>>
>>>>>> Last test was with NFS 4.1, I also worked with NFS 4.(0) with the
>>>>>> same
>>>>>> result. In the readonly nfs share there are symbolic links point to
>>>>>> the
>>>>>> read-write share for logging, storing .run files, etc. When I monitor
>>>>>> my
>>>>>> network interface with tcpdump, there is little nfs traffic, only
>>>>>> when I
>>>>>> do try to access the shares there is activity.
>>>>>>
>>>>>> What is causing nfscl to run around in circles, hogging the CPU (it
>>>>>> makes the system slow to respond too) or how can I found out what's
>>>>>> the
>>>>>> cause?
>>>>>>
>>>>> Well, the nfscl does server->client RPCs referred to as callbacks. I
>>>>> have no idea what the implications of running it in a jail are, but I'd
>>>>> guess that these server->client RPCs get blocked somehow, etc...
>>>>> (The NFSv4.0 mechanism requires a separate IP address that the server
>>>>>  can connect to on the client. For NFSv4.1, it should use the same
>>>>>  TCP connection as is used for the client->server RPCs. The latter
>>>>>  seems like it should work, but there is probably some glitch.)
>>>>>
>>>>> ** Just run without the nfscl daemon (it is only needed for delegations
>>>>> or pNFS).
>>>>
>>>> How can I disable the nfscl daemon?
>>>>
>>> Well, the daemon for the callbacks is called nfscbd.
>>> You should check via "ps ax", to see if you have it running.
>>> (For NFSv4.0 you probably don't want it running, but for NFSv4.1 you
>>>  do need it. pNFS won't work at all without it, but unless you have a
>>>  server that supports pNFS, it won't work anyhow. Unless your server is
>>>  a clustered Netapp Filer, you should probably not have the "pnfs" option.)
>>>
>>> To run the "nfscbd" daemon, setting:
>>> nfscbd_enable="TRUE"
>>> in your /etc/rc.conf will start it on boot.
>>> Alternately, just type "nfscbd" as root.
>>>
>>> The "nfscl" thread is always started when an NFSv4 mount is done. It does
>>> an assortment of housekeeping things, including a Renew op to make sure the
>>> lease doesn't expire. If for some reason the jail blocks these Renew RPCs,
>>> it will try to do them over and over and ... because having the lease
>>> expire is bad news for NFSv4. How could you tell?
>>> Well, capturing packets between the client and server, then looking at them
>>> in wireshark is probably the only way. (Or maybe a large count for Renew
>>> in the output from "nfsstat -e".)
>>>
>>> "nfscbd" is optional for NFSv4.0. Without it, you simply don't do
>>> callbacks/delegations.
>>> For NFSv4.1 it is pretty much required, but doesn't need a separate
>>> server->client TCP
>>> connection.
>>> --> I'd enable it for NFSv4.1, but disable it for NFSv4.0 at least as a
>>> starting point.
>>>
>>> And as I said before, none of this is tested within jails, so I have no
>>> idea what effect the jails have. Someone who understands jails might have
>>> some insight w.r.t. this?
>>>
>>> rick
>>>
>>
>> Since last time I haven't tried to use pnfs and just stuck with
>> nfsv4.0. nfscbd is not running. The server is now running 10.2. The
>> number of renews is not very high (56k; getattr, for example, is 283M).
>> Viewed in wireshark, the renew calls look good, the nfs status is ok.
>>
>> Is there a way to know what [nfscl] is active with?
>>
> Not that I can think of. When I do "ps axHl" I see it in DL state and not
> doing much of anything. (You could try setting "sysctl vfs.nfs.debuglevel=4",
> but I don't think you'll see anything syslog'd that is useful?)
> This is what I'd expect for an NFSv4.0 mount without the nfscbd running.

nfscbd is not running.

I've done a bit of digging with kgdb, by attaching to a thread of one
of the nfscl's, and grabbed a backtrace:

--8<---- snip
(kgdb) info threads
[...]
  678 Thread 101151 (PID=82001: nfscl)  0xffffffff80a9780a in
sched_switch ()
[...]
(kgdb) thread 678
[Switching to thread 678 (Thread 101151)]#0  0xffffffff80a9780a in
sched_switch ()
(kgdb) bt
#0  0xffffffff80a9780a in sched_switch ()
#1  0xffffffff80a75a35 in mi_switch ()
#2  0xffffffff80abaa82 in sleepq_timedwait ()
#3  0xffffffff80a75435 in _sleep ()
#4  0xffffffff8095eb7d in nfscl_renewthread ()
#5  0xffffffff80985204 in start_nfscl ()
#6  0xffffffff80a2f815 in fork_exit ()
#7  0xffffffff80ec3b7e in fork_trampoline ()
#8  0x0000000000000000 in ?? ()
--8<---- snap

So as you said, the nfscl thread is mostly hanging in a sleep. However,
every approx. 30-50 seconds (this corresponds quite well with the timing
of the renew calls on the wire) the CPU usage spikes to 100% on all 24
threads.

dtrace(1)'ing during those spikes shows a huge number (on the order of 10^5)
of calls to nfscl_procdoesntexist and nfscl_cleanup_common:

Calls

  nfscl_postop_attr                                                12
  nfscl_loadattrcache                                              17
  nfscl_request                                                    22
  nfscl_deleggetmodtime                                            26
  nfscl_reqstart                                                   26
  nfscl_mustflush                                                  30
  nfscl_nodeleg                                                    34
  nfscl_cleanup_common                                          12005
  nfscl_procdoesntexist                                         12315

Times (ns)

  nfscl_loadattrcache                                           41949
  nfscl_deleggetmodtime                                         63014
  nfscl_postop_attr                                             65046
  nfscl_reqstart                                                77851
  nfscl_procdoesntexist                                      30187728
  nfscl_request                                             423855128
  nfscl_nodeleg                                             841033772
  nfscl_mustflush                                          4838119753
  nfscl_cleanup_common                                     6436207841

For reference, here's the dtrace(1) script I'm using:

--8<---- snip
#!/usr/sbin/dtrace -Cs

#pragma D option quiet

fbt::nfscl_*:entry {
  @entries[probefunc] = count();
  traceme[probefunc] = 1;
  ts[probefunc] = timestamp;
}

fbt::nfscl_*:return
/traceme[probefunc]/
{
  @times[probefunc] = sum(timestamp - ts[probefunc]);
}

tick-5sec
{
  printa(@times);
  clear(@times);
  printa(@entries);
  clear(@entries);
}
--8<---- snap

For completeness, I guess I *should* synchronize this with the for(;;)
loop in nfscl_renewthread and count the actual number of process
terminations, but OTOH, I am extremely sure (just by observing the
number of running processes and PIDs) that it's nowhere near those counts.

Digging a little deeper, I see pfind(9), or more precisely pfind_locked,
is used in nfscl_procdoesntexist. What I do not understand yet is how
pfind(9) interacts with jails. Maybe someone(TM) could shed some light
on this?
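
To check where those pfind_locked calls actually come from, a DTrace
sketch along these lines (run as root; fbt probe availability depends on
the kernel build) can aggregate the kernel stacks at each entry:

```
#!/usr/sbin/dtrace -s
/*
 * Count kernel stacks leading into pfind_locked for one minute.
 * If the nfscl renew threads are responsible, nfscl_cleanup_common
 * and nfscl_procdoesntexist should dominate the output.
 */
fbt::pfind_locked:entry
{
        @stacks[stack()] = count();
}

tick-60sec
{
        exit(0);
}
```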

> Basically, when the nfscbd isn't running the server shouldn't issue any
> delegations, because it shouldn't see a callback path (server->client TCP
> connection). Also, if you are using a FreeBSD NFS server, it won't issue
> delegations unless you've enabled that, which isn't the default.
>
> Check to see your Delegs in "nfsstat -e" is 0.

It is.

> If it is, then all the nfscl should be doing is waking up once per second
> and doing very little except a Renew RPC once every 30-60sec. (A fraction of
> what the server's lease duration is.)

A tcpdump shows at least one renew call/reply pair per minute.
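
I'm capturing that traffic with something like the following (the
interface name em0 is just an example here; 10.13.37.2 is the server
address from the jail.conf quoted above):

```sh
# Capture NFS traffic (port 2049) to/from the server into a pcap
# file for later inspection in wireshark.
tcpdump -i em0 -s 0 -w /tmp/nfs-renew.pcap host 10.13.37.2 and port 2049
```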

> The only thing I can think of that might cause it to run a lot would be some
> weirdness related to the TOD clock. It msleep()s for hz and also checks for
> time_uptime (which should be in resolution of seconds) != the previous time.
> (If the msleep()s were waking up too frequently, then it would loop around
>  doing not much of anything, over and over and over again...)
>
>
>> I do understand nfs + jails could have issues, but I like to understand
>> them.
>>

Fabian

[1]
https://lists.freebsd.org/pipermail/freebsd-hackers/2017-June/thread.html#51200



