Date: Wed, 17 Mar 2021 18:17:14 -0400
From: Jason Breitman <jbreitman@tildenparkcapital.com>
To: Rick Macklem <rmacklem@uoguelph.ca>, Alan Somers <asomers@freebsd.org>,
 "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject: Re: NFS Mount Hangs
Message-ID: <789BCFA9-D6BC-4C5A-AEA2-E6F7C6E26CB5@tildenparkcapital.com>
In-Reply-To: <YQXPR0101MB09681291684FC684A3319D2ADD6A9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
References: <C643BB9C-6B61-4DAC-8CF9-CE04EA7292D0@tildenparkcapital.com>
 <3750001D-3F1C-4D9A-A9D9-98BCA6CA65A4@tildenparkcapital.com>
 <33693DE3-7FF8-4FAB-9A75-75576B88A566@tildenparkcapital.com>
 <YQXPR0101MB0968DC18E00833DE2969C636DD6A9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
 <CAOtMX2gQFMWbGKBzLcPW4zOBpQ3YR5=9DRpTyTDi2SC+hE8Ehw@mail.gmail.com>
 <YQXPR0101MB09681291684FC684A3319D2ADD6A9@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
Thank you for the responses.

The NFS client does properly negotiate down to 128K for the rsize and wsize.
The client port should be changing, as we are using the noresvport option.

On the NFS client:

# cat /proc/mounts
nfs-server.domain.com:/data /mnt/data nfs4 rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=krb5,clientaddr=NFS.Client.IP.X,lookupcache=pos,local_lock=none,addr=NFS.Server.IP.X 0 0

When the issue occurs, this is what I see on the NFS server:

tcp4       0      0 NFS.Server.IP.X.2049      NFS.Client.IP.X.51550     CLOSE_WAIT

Capturing packets right before the issue is a great idea, but I am concerned
about running tcpdump for such an extended period of time on an active server.
I have gone 9 days with no issue, so that would mean a lot of data and overhead.

I will look into disabling the TSO and LRO options and let the group know how
it goes. Below are the current options on the NFS server.

lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=e507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>

Please share other ideas if you have them.

Jason Breitman


On Mar 17, 2021, at 5:58 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:

Alan Somers wrote:
[stuff snipped]
>Is the 128K limit related to MAXPHYS? If so, it should be greater in 13.0.
For the client, yes. For the server, no.
For the server, it is just a compile-time constant, NFS_SRVMAXIO.
It's mainly related to the fact that I haven't gotten around to testing
larger sizes yet.
- kern.ipc.maxsockbuf needs to be several times the limit, which means it
  would have to be increased for 1 Mbyte.
- The session code must negotiate a maximum RPC size > 1 Mbyte.
  (I think the server code does do this, but it needs to be tested.)
And, yes, the client is limited to MAXPHYS.

Doing this is on my todo list, rick

The client should acquire the attributes that indicate that and set
rsize/wsize to that. "# nfsstat -m" on the client should show you what the
client is actually using. If it is larger than 128K, set both rsize and
wsize to 128K.

>Output from the NFS Client when the issue occurs
># netstat -an | grep NFS.Server.IP.X
>tcp        0      0 NFS.Client.IP.X:46896      NFS.Server.IP.X:2049       FIN_WAIT2
I'm no TCP guy. Hopefully others might know why the client would be stuck
in FIN_WAIT2 (I vaguely recall this means it is waiting for a FIN/ACK, but
I could be wrong).

># cat /sys/kernel/debug/sunrpc/rpc_xprt/*/info
>netid: tcp
>addr: NFS.Server.IP.X
>port: 2049
>state: 0x51
>
>syslog
>Mar 4 10:29:27 hostname kernel: [437414.131978] -pid- flgs status -client- --rqstp- ->timeout ---ops--
>Mar 4 10:29:27 hostname kernel: [437414.133158] 57419 40a1 0 9b723c73 143cfadf 30000 4ca953b5 nfsv4 OPEN_NOATTR a:call_connect_status [sunrpc] q:xprt_pending
I don't know what OPEN_NOATTR means, but I assume it is some variant of the
NFSv4 Open operation.
[stuff snipped]
>Mar 4 10:29:30 hostname kernel: [437417.110517] RPC: 57419 xprt_connect_status: connect attempt timed out
>Mar 4 10:29:30 hostname kernel: [437417.112172] RPC: 57419 call_connect_status (status -110)
I have no idea what status -110 means.
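
For what it's worth, the status values in these sunrpc debug messages are
negative errno codes, and errno 110 on Linux is ETIMEDOUT ("Connection timed
out"), which matches the "connect attempt timed out" message just above, i.e.
the client's reconnect attempts are timing out rather than being refused. A
quick way to confirm the errno mapping on the Linux client, assuming the
kernel headers are installed (the header path can vary by distribution):

# grep -w ETIMEDOUT /usr/include/asm-generic/errno.h
#define ETIMEDOUT	110	/* Connection timed out */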
>Mar 4 10:29:30 hostname kernel: [437417.113337] RPC: 57419 call_timeout (major)
>Mar 4 10:29:30 hostname kernel: [437417.114385] RPC: 57419 call_bind (status 0)
>Mar 4 10:29:30 hostname kernel: [437417.115402] RPC: 57419 call_connect xprt 00000000e061831b is not connected
>Mar 4 10:29:30 hostname kernel: [437417.116547] RPC: 57419 xprt_connect xprt 00000000e061831b is not connected
>Mar 4 10:30:31 hostname kernel: [437478.551090] RPC: 57419 xprt_connect_status: connect attempt timed out
>Mar 4 10:30:31 hostname kernel: [437478.552396] RPC: 57419 call_connect_status (status -110)
>Mar 4 10:30:31 hostname kernel: [437478.553417] RPC: 57419 call_timeout (minor)
>Mar 4 10:30:31 hostname kernel: [437478.554327] RPC: 57419 call_bind (status 0)
>Mar 4 10:30:31 hostname kernel: [437478.555220] RPC: 57419 call_connect xprt 00000000e061831b is not connected
>Mar 4 10:30:31 hostname kernel: [437478.556254] RPC: 57419 xprt_connect xprt 00000000e061831b is not connected
Is it possible that the client is trying to (re)connect using the same client
port#? I would normally expect the client to create a new TCP connection
using a different client port# and then retry the outstanding RPCs.
--> Capturing packets when this happens would show us what is going on.

If there is a problem on the FreeBSD end, it is most likely a broken
network device driver.
--> Try disabling TSO, LRO.
--> Try a different driver for the net hardware on the server.
--> Try a different net chip on the server.

If you can capture packets when (not after) the hang occurs, then you can
look at them in wireshark and see what is actually happening. (Ideally on
both client and server, to check that your network hasn't dropped anything.)
--> I know, if the hangs aren't easily reproducible, this isn't easily done.

--> Try a newer Linux kernel and see if the problem persists. The Linux
folk will get more interested if you can reproduce the problem on 5.12.
(Recent bakeathon testing of the 5.12 kernel against the FreeBSD server
did not find any issues.)

Hopefully the network folk have some insight w.r.t. why the TCP connection
is sitting in FIN_WAIT2.

rick

Jason Breitman

_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
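
As a concrete starting point for the packet-capture and TSO/LRO suggestions
above, a minimal sketch for the FreeBSD server; the interface name lagg0 is
taken from the ifconfig output earlier in the thread, the capture path and
file sizes are arbitrary, and on a lagg the offload flags may also need to
be cleared on the member ports:

# tcpdump -i lagg0 -s 0 -C 100 -W 10 -w /var/tmp/nfs-hang.pcap port 2049
(rotating ring-buffer capture of NFS traffic: ten files of roughly 100 MB
each, overwritten in a ring, so a multi-day capture stays bounded)

# ifconfig lagg0 -tso -lro
(disables TSO for IPv4/IPv6 and LRO on the interface)

Running the same kind of rotating capture on the Linux client as well would
make it possible to compare both ends in wireshark when the hang occurs.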