Date:      Wed, 17 Mar 2021 20:53:52 +0100
From:      Yamagi Burmeister <lists@yamagi.org>
To:        mjguzik@gmail.com
Cc:        freebsd-current@freebsd.org, freebsd-stable@freebsd.org
Subject:   Re: 13.0-RC2 / 14-CURRENT: Processes getting stuck in vlruwk state
Message-ID:  <20210317205352.4ea8384e3ca1e6660dbbb06a@yamagi.org>
In-Reply-To: <CAGudoHE0tNFK5=Xwa52bd2YDAMOp0BXLjyxMPVQ-w09RHoSGBg@mail.gmail.com>
References:  <20210317143307.20beb5fca0814246f2a91e9a@yamagi.org> <CAGudoHG5emBBEMS_aZUH7jqfnEXWELsAhu-iCXhV=NkQ1g4QMQ@mail.gmail.com> <20210317164821.a65559ba0df6645085466484@yamagi.org> <CAGudoHE0tNFK5=Xwa52bd2YDAMOp0BXLjyxMPVQ-w09RHoSGBg@mail.gmail.com>

This time poudriere came to an end:

  % sysctl vfs.highest_numvnodes
  vfs.highest_numvnodes: 500976
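
For reference, a rough back-of-the-envelope sketch (not part of the original mail) of how close that peak came to the kern.maxvnodes value Mateusz suggested. The numbers are hard-coded from this thread; on a live FreeBSD system they would come from `sysctl -n kern.maxvnodes` and `sysctl -n vfs.highest_numvnodes` (the latter counter is assumed to be the one added by maxvnodes.diff):

```shell
# Sketch only: compare the observed vnode peak against the suggested limit.
# Hard-coded values taken from this thread.
maxvnodes=1049226    # suggested kern.maxvnodes
highest=500976       # reported vfs.highest_numvnodes
pct=$((highest * 100 / maxvnodes))
echo "peak numvnodes used ${pct}% of kern.maxvnodes"
# prints: peak numvnodes used 47% of kern.maxvnodes
```

So with the raised limit the run peaked at well under half of kern.maxvnodes, which fits the observation that the stalls no longer occurred.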

On Wed, 17 Mar 2021 18:55:43 +0100
Mateusz Guzik <mjguzik@gmail.com> wrote:

> Thanks, I'm going to have to ponder a little bit.
>
> In the meantime can you apply this:
> https://people.freebsd.org/~mjg/maxvnodes.diff
>
> Once you boot, tweak maxvnodes:
> sysctl kern.maxvnodes=1049226
>
> Run poudriere. Once it finishes, inspect sysctl vfs.highest_numvnodes
>
> On 3/17/21, Yamagi <lists@yamagi.org> wrote:
> > Hi Mateusz,
> > the sysctl output after about 10 minutes into the problem is attached.
> > In case it's stripped by Mailman, a copy can be found here:
> > https://deponie.yamagi.org/temp/sysctl_vlruwk.txt.xz
> >
> > Regards,
> > Yamagi
> >
> > On Wed, 17 Mar 2021 15:57:59 +0100
> > Mateusz Guzik <mjguzik@gmail.com> wrote:
> >
> >> Can you reproduce the problem and obtain "sysctl -a"?
> >>
> >> In general, there is a vnode limit which is probably too small. The
> >> reclamation mechanism is deficient in that it will eventually inject
> >> an arbitrary pause.
> >>
> >> On 3/17/21, Yamagi <lists@yamagi.org> wrote:
> >> > Hi,
> >> > some other users in the ##bsdforen.de IRC channel and I have the
> >> > problem that during poudriere runs processes get stuck in the
> >> > 'vlruwk' state.
> >> >
> >> > For me it's fairly reproducible. The problems begin about 20 to 25
> >> > minutes after I've started poudriere. At first only some ccache
> >> > processes hang in the 'vlruwk' state, after another 2 to 3 minutes
> >> > nearly everything hangs and the total CPU load drops to about 5%.
> >> > When I stop poudriere with ctrl-c it takes another 3 to 5 minutes
> >> > until the system recovers.
> >> >
> >> > First the setup:
> >> > * poudriere runs in a bhyve vm on a zvol. The host is 12.2-RELEASE-p2.
> >> >   The zvol has an 8k blocksize, the guest's partitions are aligned to
> >> >   8k. The guest has only one zpool, created with ashift=13. The vm has
> >> >   16 E5-2620 cores and 16 gigabytes RAM assigned to it.
> >> > * poudriere is configured with ccache and ALLOW_MAKE_JOBS=yes. Removing
> >> >   either of these options significantly lowers the probability of the
> >> >   problem showing up.
> >> >
> >> > I've tried several git revisions starting with 14-CURRENT at
> >> > 54ac6f721efccdba5a09aa9f38be0a1c4ef6cf14 in the hope that I can find at
> >> > least one known-to-be-good revision. No chance, even a kernel built
> >> > from 0932ee9fa0d82b2998993b649f9fa4cc95ba77d6 (Wed Sep 2 19:18:27 2020
> >> > +0000) has the problem. The problem isn't reproducible with
> >> > 12.2-RELEASE.
> >> >
> >> > The kernel stack ('procstat -kk') of a hanging process is:
> >> > mi_switch+0x155 sleepq_switch+0x109 sleepq_catch_signals+0x3f1
> >> > sleepq_wait_sig+0x9 _sleep+0x2aa kern_wait6+0x482 sys_wait4+0x7d
> >> > amd64_syscall+0x140 fast_syscall_common+0xf8
> >> >
> >> > The kernel stack of vnlru is changing, even while the processes are
> >> > hanging:
> >> > * mi_switch+0x155 sleepq_switch+0x109 sleepq_timedwait+0x4b
> >> > _sleep+0x29b vnlru_proc+0xa05 fork_exit+0x80 fork_trampoline+0xe
> >> > * fork_exit+0x80 fork_trampoline+0xe
> >> >
> >> > Since vnlru is accumulating CPU time it looks like it's doing at least
> >> > something. As an educated guess I would say that vn_alloc_hard() is
> >> > waiting a long time or even forever to allocate new vnodes.
> >> >
> >> > I can provide more information, I just need to know what.
> >> >
> >> >
> >> > Regards,
> >> > Yamagi
> >> >
> >> > --
> >> > Homepage: https://www.yamagi.org
> >> > Github:   https://github.com/yamagi
> >> > GPG:      0x1D502515
> >> >
> >>
> >>
> >> --
> >> Mateusz Guzik <mjguzik gmail.com>
> >
> >
> > --
> > Homepage: https://www.yamagi.org
> > Github:   https://github.com/yamagi
> > GPG:      0x1D502515
> >
>
>
> --
> Mateusz Guzik <mjguzik gmail.com>


--
Homepage: https://www.yamagi.org
Github:   https://github.com/yamagi
GPG:      0x1D502515
