Date: Wed, 17 Mar 2021 20:53:52 +0100
From: Yamagi Burmeister <lists@yamagi.org>
To: mjguzik@gmail.com
Cc: freebsd-current@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: 13.0-RC2 / 14-CURRENT: Processes getting stuck in vlruwk state
Message-Id: <20210317205352.4ea8384e3ca1e6660dbbb06a@yamagi.org>

This time poudriere came to an end:

% sysctl vfs.highest_numvnodes
vfs.highest_numvnodes: 500976

On Wed, 17 Mar 2021 18:55:43 +0100
Mateusz Guzik <mjguzik@gmail.com> wrote:

> Thanks, I'm going to have to ponder a little bit.
>
> In the meantime can you apply this:
> https://people.freebsd.org/~mjg/maxvnodes.diff
>
> Once you boot, tweak maxvnodes:
> sysctl kern.maxvnodes=1049226
>
> Run poudriere. Once it finishes, inspect sysctl vfs.highest_numvnodes
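A loop along these lines could additionally log the counters once a second
during the next run, in case the peak value alone is not enough. Just an
untested sketch in plain /bin/sh; vfs.numvnodes and vfs.freevnodes are the
stock counters, vfs.highest_numvnodes is only there with your patch applied:

  #!/bin/sh
  # Log the vnode counters once a second; stop with ctrl-c.
  while :; do
      date -u '+%Y-%m-%d %H:%M:%S'
      sysctl kern.maxvnodes vfs.numvnodes vfs.freevnodes
      sysctl vfs.highest_numvnodes 2>/dev/null
      sleep 1
  done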
>
> On 3/17/21, Yamagi <lists@yamagi.org> wrote:
> > Hi Mateusz,
> > the sysctl output from about 10 minutes into the problem is attached.
> > In case it is stripped by Mailman, a copy can be found here:
> > https://deponie.yamagi.org/temp/sysctl_vlruwk.txt.xz
> >
> > Regards,
> > Yamagi
> >
> > On Wed, 17 Mar 2021 15:57:59 +0100
> > Mateusz Guzik <mjguzik@gmail.com> wrote:
> >
> >> Can you reproduce the problem and obtain "sysctl -a"?
> >>
> >> In general, there is a vnode limit which is probably too small. The
> >> reclamation mechanism is deficient in that it will eventually inject
> >> an arbitrary pause.
> >>
> >> On 3/17/21, Yamagi <lists@yamagi.org> wrote:
> >> > Hi,
> >> > some other users in the ##bsdforen.de IRC channel and I have the
> >> > problem that during poudriere runs processes get stuck in the
> >> > 'vlruwk' state.
> >> >
> >> > For me it's fairly reproducible. The problems begin about 20 to 25
> >> > minutes after I've started poudriere. At first only some ccache
> >> > processes hang in the 'vlruwk' state; after another 2 to 3 minutes
> >> > nearly everything hangs and the total CPU load drops to about 5%.
> >> > When I stop poudriere with ctrl-c it takes another 3 to 5 minutes
> >> > until the system recovers.
> >> >
> >> > First the setup:
> >> > * poudriere runs in a bhyve VM on a zvol. The host is 12.2-RELEASE-p2.
> >> >   The zvol has an 8k blocksize and the guest's partitions are aligned
> >> >   to 8k. The guest has only a single zpool; the pool was created with
> >> >   ashift=13. The VM has 16 E5-2620 vCPUs and 16 gigabytes of RAM
> >> >   assigned to it.
> >> > * poudriere is configured with ccache and ALLOW_MAKE_JOBS=yes.
> >> >   Removing either of these options significantly lowers the
> >> >   probability of the problem showing up.
> >> >
> >> > I've tried several git revisions, starting with 14-CURRENT at
> >> > 54ac6f721efccdba5a09aa9f38be0a1c4ef6cf14, in the hope of finding at
> >> > least one known-good revision. No luck; even a kernel built from
> >> > 0932ee9fa0d82b2998993b649f9fa4cc95ba77d6 (Wed Sep 2 19:18:27 2020
> >> > +0000) has the problem. The problem isn't reproducible with
> >> > 12.2-RELEASE.
> >> >
> >> > The kernel stack ('procstat -kk') of a hanging process is:
> >> > mi_switch+0x155 sleepq_switch+0x109 sleepq_catch_signals+0x3f1
> >> > sleepq_wait_sig+0x9 _sleep+0x2aa kern_wait6+0x482 sys_wait4+0x7d
> >> > amd64_syscall+0x140 fast_syscall_common+0xf8
> >> >
> >> > The kernel stack of vnlru keeps changing, even while the processes
> >> > are hanging:
> >> > * mi_switch+0x155 sleepq_switch+0x109 sleepq_timedwait+0x4b
> >> >   _sleep+0x29b vnlru_proc+0xa05 fork_exit+0x80 fork_trampoline+0xe
> >> > * fork_exit+0x80 fork_trampoline+0xe
> >> >
> >> > Since vnlru is accumulating CPU time, it looks like it's doing at
> >> > least something. As an educated guess I would say that
> >> > vn_alloc_hard() is waiting a long time, or even forever, to allocate
> >> > new vnodes.
> >> >
> >> > I can provide more information, I just need to know what.
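If it wedges again I can also grab a one-shot snapshot along these lines, so
everything is captured at the same moment. Again only an untested sketch; the
/var/tmp path and file names are arbitrary:

  #!/bin/sh
  # One-shot snapshot to collect while processes sit in vlruwk.
  out=/var/tmp/vlruwk-$(date -u +%Y%m%d%H%M%S)
  mkdir -p "$out"
  sysctl -a > "$out/sysctl-a.txt"
  # Kernel stacks of all processes, vnlru and the stuck jobs included.
  procstat -kka > "$out/procstat-kka.txt"
  ps -axlww > "$out/ps.txt"
  # UMA zone usage; vnodes come from the VNODE zone.
  vmstat -z > "$out/vmstat-z.txt"
  echo "snapshot written to $out"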
> >> >
> >> > Regards,
> >> > Yamagi
> >> >
> >> > --
> >> > Homepage: https://www.yamagi.org
> >> > Github: https://github.com/yamagi
> >> > GPG: 0x1D502515
> >>
> >> --
> >> Mateusz Guzik
> >
> > --
> > Homepage: https://www.yamagi.org
> > Github: https://github.com/yamagi
> > GPG: 0x1D502515
>
> --
> Mateusz Guzik

--
Homepage: https://www.yamagi.org
Github: https://github.com/yamagi
GPG: 0x1D502515