From: Michael Tuexen
To: Ben Hutton
Cc: freebsd-net@freebsd.org
Subject: Re: Network Tuning - mbuf
Date: Fri, 27 Jun 2025 10:38:18 +0200
Message-Id: <3964B087-54A2-4ABD-B69E-F93DDFF763C7@lurchi.franken.de>
In-Reply-To: <9fe64741-9f42-4e7e-9671-345221676136@benhutton.com.au>
References: <8255b0b9-c9df-4af9-bbb2-94140edf189c@benhutton.com.au> <1B2AEE29-C71B-4EF7-9DDC-F45A13B0DC5F@lurchi.franken.de> <9fe64741-9f42-4e7e-9671-345221676136@benhutton.com.au>
List-Id: Networking and TCP/IP with FreeBSD
List-Archive: https://lists.freebsd.org/archives/freebsd-net
> On 27. Jun 2025, at 07:53, Ben Hutton wrote:
>
> Hi Michael,
> 8G
> $ netstat -m
> 1590074/4231/1594305 mbufs in use (current/cache/total)
> 797974/2592/800566/1800796 mbuf clusters in use (current/cache/total/max)
> 797974/790 mbuf+clusters out of packet secondary zone in use (current/cache)
> 644657/1542/646199/1550398 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/74192 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/41733 16k jumbo clusters in use (current/cache/total/max)
> 4572094K/12409K/4584504K bytes allocated to network (current/cache/total)
> 0/8507/8489 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/30432/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> 485354407/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 2 sendfile syscalls
> 2 sendfile syscalls completed without I/O request
> 0 requests for I/O initiated by sendfile
> 0 pages read by sendfile as part of a request
> 2 pages were valid at time of a sendfile request
> 0 pages were valid and substituted to bogus page
> 0 pages were requested for read ahead by applications
> 0 pages were read ahead by sendfile
> 0 times sendfile encountered an already busy page
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed

OK. You can use netstat -x to see how full the send and receive buffers of
your TCP connections are. The output contains the IP addresses of your
current connections, so you might not want to post it here. But you can
remove the IP addresses from it and send it to me...

Best regards
Michael

> Kind regards
> Ben
> On 27/06/2025 12:52, Michael Tuexen wrote:
>>> On 27. Jun 2025, at 04:17, Ben Hutton wrote:
>>>
>>> Hi,
>>> I'm currently having an issue with a spring-boot application (with nginx
>>> in front on the same instance) running on FreeBSD 14.1 in AWS. Two of our
>>> instances have so far had the application go offline, with the following
>>> appearing in /var/log/messages:
>>> Jun 26 07:57:47 freebsd kernel: [zone: mbuf_jumbo_page] kern.ipc.nmbjumbop limit reached
>>> Jun 26 07:57:47 freebsd kernel: [zone: mbuf_cluster] kern.ipc.nmbclusters limit reached
>>> Jun 26 07:59:34 freebsd kernel: sonewconn: pcb 0xfffff8021bd74000 (0.0.0.0:443 (proto 6)): Listen queue overflow: 193 already in queue awaiting acceptance (104 occurrences), euid 0, rgid 0, jail 0
>>> Jun 26 08:01:51 freebsd kernel: sonewconn: pcb 0xfffff8021bd74000 (0.0.0.0:443 (proto 6)): Listen queue overflow: 193 already in queue awaiting acceptance (13 occurrences), euid 0, rgid 0, jail 0
>>>
>>> Each time this has occurred I have increased the nmbjumbop and
>>> nmbclusters values, the last time by a huge amount to see if we can
>>> mitigate the issue. Once I adjust the values the application starts
>>> responding to requests again.
>>> My question is: is just increasing these limits the correct course of
>>> action, or should I be investigating something else, or adjusting other
>>> settings accordingly? Also, if this is due to an underlying issue and not
>>> just network load, how would I get to the root cause? Note the
>>> application streams a lot of files in rapid succession, which I suspect
>>> is what is causing the issue.
>>>
>> Hi Ben,
>>
>> how much memory does your VM have? What is the output of
>> netstat -m
>> when the system is in operation?
>>
>> Best regards
>> Michael
>>
>>> Thanks
>>> Ben
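[Editor's note] The suggestion above, to remove the IP addresses from the
netstat -x output before posting it, can be scripted. Below is a minimal
sketch using sed; the sample here-document stands in for real netstat -x
output (the addresses in it are made up), and on a live FreeBSD system you
would pipe `netstat -x` into the same filter. The two patterns are rough
approximations: they catch common dotted-quad and colon-hex forms, not every
legal IPv6 spelling. FreeBSD's netstat separates the port with a dot, so the
port number survives redaction.

```shell
# Redact IPv4 and IPv6 addresses from netstat-style output before sharing
# it publicly. The here-document is illustrative sample data, not real
# netstat -x output; replace it with `netstat -x |` on a live system.
sed -E \
    -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/IPV4-REDACTED/g' \
    -e 's/([0-9a-fA-F]{1,4}:)+(:?[0-9a-fA-F]{1,4})+/IPV6-REDACTED/g' \
<<'EOF'
tcp4 0 0 192.0.2.10.443 198.51.100.7.50123 ESTABLISHED
tcp6 0 0 2001:db8::1.443 2001:db8::2.50124 ESTABLISHED
EOF
```

This keeps the per-connection buffer occupancy columns intact while hiding
who the peers are, which is the point of the exercise.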