From: Rick Macklem <rick.macklem@gmail.com>
To: Garrett Wollman
Cc: stable@freebsd.org, rmacklem@freebsd.org
Date: Thu, 29 Feb 2024 15:30:42 -0800
Subject: Re: 13-stable NFS server hang
List-Id: Production branch of FreeBSD source code
List-Archive: https://lists.freebsd.org/archives/freebsd-stable
On Wed, Feb 28, 2024 at 4:04 PM Rick Macklem wrote:
>
> On Tue, Feb 27, 2024 at 9:30 PM Garrett Wollman wrote:
> >
> > Hi, all,
> >
> > We've had some complaints of NFS hanging at unpredictable intervals.
> > Our NFS servers are running a 13-stable from last December, and
> > tonight I sat in front of the monitor watching `nfsstat -dW`. I was
> > able to clearly see that there were periods when NFS activity would
> > drop *instantly* from 30,000 ops/s to flat zero, which would last
> > for about 25 seconds before resuming exactly as it was before.
> >
> > I wrote a little awk script to watch for this happening and run
> > `procstat -k` on the nfsd process, and I saw that all but two of the
> > service threads were idle.
The three nfsd threads that had non-idle
> > kstacks were:
> >
> >  PID    TID COMM  TDNAME         KSTACK
> >  997 108481 nfsd  nfsd: master   mi_switch sleepq_timedwait _sleep nfsv4_lock nfsrvd_dorpc nfssvc_program svc_run_internal svc_run nfsrvd_nfsd nfssvc_nfsd sys_nfssvc amd64_syscall fast_syscall_common
> >  997 960918 nfsd  nfsd: service  mi_switch sleepq_timedwait _sleep nfsv4_lock nfsrv_setclient nfsrvd_exchangeid nfsrvd_dorpc nfssvc_program svc_run_internal svc_thread_start fork_exit fork_trampoline
> >  997 962232 nfsd  nfsd: service  mi_switch _cv_wait txg_wait_synced_impl txg_wait_synced dmu_offset_next zfs_holey zfs_freebsd_ioctl vn_generic_copy_file_range vop_stdcopy_file_range VOP_COPY_FILE_RANGE vn_copy_file_range nfsrvd_copy_file_range nfsrvd_dorpc nfssvc_program svc_run_internal svc_thread_start fork_exit fork_trampoline
> >
> > I'm suspicious of two things: first, the copy_file_range RPC; second,
> > the "master" nfsd thread is actually servicing an RPC which requires
> > obtaining a lock. The "master" getting stuck while performing client
> > RPCs is, I believe, the reason NFS service grinds to a halt when a
> > client tries to write into a near-full filesystem, so this problem
> > would be more evidence that the dispatching function should not be
> > mixed with actual operations. I don't know what the clients are
> > doing, but is it possible that nfsrvd_copy_file_range is holding a
> > lock that is needed by one or both of the other two threads?
> >
> > Near-term I could change nfsrvd_copy_file_range to just
> > unconditionally return NFSERR_NOTSUP and force the clients to fall
> > back, but I figured I would ask if anyone else has seen this.
> I have attached a little patch that should limit the server's Copy size
> to vfs.nfsd.maxcopyrange (default of 10Mbytes).
> Hopefully this makes sure that the Copy does not take too long.
>
> You could try this instead of disabling Copy. It would be nice to know
> whether this is sufficient.
(If not, I'll probably add a sysctl to disable Copy.)

I did a quick test without/with this patch, where I copied a 1Gbyte file.

Without the patch, the Copy RPCs mostly replied in just under 1sec (which
is what the flag requests), but one of the Copy operations took over 4sec.
This implies that one Read/Write of 1Mbyte on the server took over 3
seconds. I noticed the first Copy did over 600Mbytes, but the rest did
about 100Mbytes each, and it was one of these 100Mbyte Copy operations
that took over 4sec.

With the patch, there were a lot more Copy RPCs (as expected) of 10Mbytes
each, and they took a consistent 0.25-0.3sec to reply. (This is a test of
a local mount on an old laptop, so nowhere near a server hardware config.)

So, the patch might be sufficient? It would be nice to avoid disabling
Copy, since Copy avoids reading the data into the client and then writing
it back to the server.

I will probably commit both patches (the 10Mbyte clip of the Copy size
and disabling Copy) to main soon, since I cannot say whether clipping the
size of the Copy will always be sufficient.

Please let us know how trying these patches goes, rick

>
> rick
>
> >
> > -GAWollman