From: bugzilla-noreply@freebsd.org
To: virtualization@FreeBSD.org
Subject: [Bug 263062] tcp_inpcb leaking in VM environment
Date: Mon, 07 Aug 2023 22:09:06 +0000
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 13.1-STABLE
X-Bugzilla-Who: fjoe@FreeBSD.org
X-Bugzilla-Status: Open
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/
List-Archive: https://lists.freebsd.org/archives/freebsd-virtualization
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=263062

--- Comment #6 from Max Khon ---

I can confirm that switching the Hetzner VM to i440fx (rescale to an Intel plan, then ask Hetzner support to switch to i440fx, as some Intel VMs are also provisioned with the Q35 chipset) solves the issue (on the same 13.2-RELEASE kernel):

--- cut here ---
ITEM            SIZE   LIMIT    USED   FREE     REQ  FAIL  SLEEP  XDOMAIN
udp_inpcb:       496, 510927,     12,  1516,  70257,    0,     0,       0
tcp_inpcb:       496, 510927,    649,  1383, 111267,    0,     0,       0
udplite_inpcb:   496, 510927,      0,     0,      0,    0,     0,       0
--- cut here ---

The difference between i440fx and Q35 is that the latter provides "modern" virtio devices, while i440fx provides "legacy" virtio devices. I suspect the problem is somewhere in the "modern" virtqueue or the modern vtnet implementation (which was added in FreeBSD 13). FreeBSD 12 does not even boot on the Q35 chipset because it lacks "modern" virtio support.

I would suggest not MFCing the "modern" virtio support until this issue is fixed.

On a side note: I have reproduced this issue with the Q35 chipset ("modern" virtio) on a plain 13.2-RELEASE in a Hetzner Q35 VM (any AMD plan), with just nginx serving static content (the default nginx page) and "ab -c 100 -n 1000000000 http://x.y.z.w/" running in a loop:

--- cut here ---
ITEM            SIZE   LIMIT    USED   FREE      REQ  FAIL  SLEEP  XDOMAIN
udp_inpcb:       496, 126863,  12187,   261,   12475,    0,     0,       0
tcp_inpcb:       496, 126863,  29245,   219, 1204697,    0,     0,       0
udplite_inpcb:   496, 126863,      0,     0,       0,    0,     0,       0
--- cut here ---

Also, I noticed that the nginx process becomes unkillable (even with SIGKILL), and "ps axl | grep nginx" output is as follows:

--- cut here ---
  0 848   1 1 20  0 20024 7624 pause Is  -  0:00.00 nginx: master process /usr/local/sbin/nginx
 80 898 848 0 33  0 20024 8480 -     R   -  1:44.88 nginx: worker process (nginx)
--- cut here ---

Notice that the nginx worker process does not have an MWCHAN.
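As a side note on tooling: the USED counters quoted above can be tracked over time by sampling "vmstat -z" in a loop. A minimal parser sketch (the function name and regex are mine, not anything from the base system; it only pulls the USED column for the *_inpcb UMA zones):

```python
import re

def inpcb_used(vmstat_z_output: str) -> dict:
    """Extract the USED column for the *_inpcb UMA zones from
    `vmstat -z`-style output (as quoted in this report)."""
    used = {}
    for line in vmstat_z_output.splitlines():
        # Zone lines look like: "tcp_inpcb: 496, 126863, 29245, 219, 1204697, ..."
        m = re.match(r"\s*([a-z]+_inpcb):\s*(\d+),\s*(\d+),\s*(\d+),", line)
        if m:
            # group(4) is USED (SIZE and LIMIT come first)
            used[m.group(1)] = int(m.group(4))
    return used

sample = """\
ITEM            SIZE   LIMIT    USED   FREE      REQ  FAIL  SLEEP  XDOMAIN
udp_inpcb:       496, 126863,  12187,   261,   12475,    0,     0,       0
tcp_inpcb:       496, 126863,  29245,   219, 1204697,    0,     0,       0
"""
print(inpcb_used(sample))  # {'udp_inpcb': 12187, 'tcp_inpcb': 29245}
```

Diffing two such samples taken a few minutes apart makes the leak obvious: on the Q35 VM the tcp_inpcb USED value only ever grows.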
Also, trying to ktrace/truss or attach gdb to the nginx process just hangs.

Additionally, adding a simple Django application (just a default empty Django application, run as "manage.py runserver") behind nginx increases the probability of the inpcb leak (the USED counter grows faster). I use simple reverse proxying like this:

--- cut here ---
location / {
    proxy_pass http://localhost:8000;
}
--- cut here ---

-- 
You are receiving this mail because:
You are the assignee for the bug.