Date:      Fri, 27 Jun 2025 13:53:02 +0800
From:      Ben Hutton <ben@benhutton.com.au>
To:        Michael Tuexen <michael.tuexen@lurchi.franken.de>
Cc:        freebsd-net@freebsd.org
Subject:   Re: Network Tuning - mbuf
Message-ID:  <9fe64741-9f42-4e7e-9671-345221676136@benhutton.com.au>
In-Reply-To: <1B2AEE29-C71B-4EF7-9DDC-F45A13B0DC5F@lurchi.franken.de>
References:  <8255b0b9-c9df-4af9-bbb2-94140edf189c@benhutton.com.au> <1B2AEE29-C71B-4EF7-9DDC-F45A13B0DC5F@lurchi.franken.de>

Hi Michael,

The VM has 8 GB of memory.

$ netstat -m
1590074/4231/1594305 mbufs in use (current/cache/total)
797974/2592/800566/1800796 mbuf clusters in use (current/cache/total/max)
797974/790 mbuf+clusters out of packet secondary zone in use (current/cache)
644657/1542/646199/1550398 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/74192 9k jumbo clusters in use (current/cache/total/max)
0/0/0/41733 16k jumbo clusters in use (current/cache/total/max)
4572094K/12409K/4584504K bytes allocated to network (current/cache/total)
0/8507/8489 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/30432/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
485354407/0/0 requests for jumbo clusters denied (4k/9k/16k)
2 sendfile syscalls
2 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
2 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed
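
In case it helps, this is roughly how I have been watching the zones between the
incidents (just a sketch; as far as I understand the output, the FAIL column in
vmstat -z is where denied allocations are counted):

$ netstat -m                                       # the summary pasted above
$ vmstat -z | grep -E 'ITEM|mbuf'                  # per-zone SIZE/LIMIT/USED/FAIL counters
$ sysctl kern.ipc.nmbclusters kern.ipc.nmbjumbop   # currently configured limits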

Kind regards
Ben

On 27/06/2025 12:52, Michael Tuexen wrote:
>> On 27. Jun 2025, at 04:17, Ben Hutton <ben@benhutton.com.au> wrote:
>>
>> Hi,
>> I'm currently having an issue with a Spring Boot application (with nginx in front on the same instance) running on FreeBSD 14.1 in AWS. So far, two of our instances have had the application go offline, with the following appearing in /var/log/messages:
>> Jun 26 07:57:47 freebsd kernel: [zone: mbuf_jumbo_page] kern.ipc.nmbjumbop limit reached
>> Jun 26 07:57:47 freebsd kernel: [zone: mbuf_cluster] kern.ipc.nmbclusters limit reached
>> Jun 26 07:59:34 freebsd kernel: sonewconn: pcb 0xfffff8021bd74000 (0.0.0.0:443 (proto 6)): Listen queue overflow: 193 already in queue awaiting acceptance (104 occurrences), euid 0, rgid 0, jail 0
>> Jun 26 08:01:51 freebsd kernel: sonewconn: pcb 0xfffff8021bd74000 (0.0.0.0:443 (proto 6)): Listen queue overflow: 193 already in queue awaiting acceptance (13 occurrences), euid 0, rgid 0, jail 0
>>
>> Each time this has occurred I have increased the nmbjumbop and nmbclusters values, the last time by a huge amount to see if we can mitigate the issue. Once I adjust the values, the application starts responding to requests again.
>> My question is: is just increasing these values the correct course of action, or should I be investigating something else or adjusting other settings accordingly? Also, if this is due to an underlying issue rather than just network load, how would I get to the root cause? Note that the application streams a lot of files in rapid succession, which I suspect is what is causing the issue.
> Hi Ben,
>
> how much memory does your VM have? What is the output of
> netstat -m
> when the system is in operation?
>
> Best regards
> Michael
>> Thanks
>> Ben
>>
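
For reference, raising the two limits named in the log messages can be done along
these lines (the values below are simply the current maxima from the netstat -m
output above, not a recommendation derived from any sizing calculation):

$ sudo sysctl kern.ipc.nmbclusters=1800796   # as far as I know this can only be raised, not lowered, at runtime
$ sudo sysctl kern.ipc.nmbjumbop=1550398
# persisted across reboots via /boot/loader.conf:
kern.ipc.nmbclusters="1800796"
kern.ipc.nmbjumbop="1550398"

The sonewconn "Listen queue overflow" messages look to me like a separate limit (the
accept backlog rather than mbufs); checking that would be something like:

$ netstat -Lan | grep '\.443'                # qlen/incqlen/maxqlen for the nginx listener
$ sysctl kern.ipc.soacceptqueue              # system-wide cap on the listen backlog

and nginx also has its own backlog= option on the listen directive, if that turns out
to be the relevant knob.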