Date:      Sat, 04 May 2019 16:17:41 +0000
From:      bugzilla-noreply@freebsd.org
To:        net@FreeBSD.org
Subject:   [Bug 237720] tcpip network stack seized for six hours after large high-throughput file transfer
Message-ID:  <bug-237720-7501-GaaTgZ2ucJ@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-237720-7501@https.bugs.freebsd.org/bugzilla/>
References:  <bug-237720-7501@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237720

Rick Macklem <rmacklem@FreeBSD.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rmacklem@FreeBSD.org

--- Comment #4 from Rick Macklem <rmacklem@FreeBSD.org> ---
Since no one else has mentioned this yet...
The stats suggest to me that you've fragmented the mbuf cluster memory pool.
9K mbuf clusters are known to be a serious problem, see this recent post:
http://docs.FreeBSD.org/cgi/mid.cgi?23756.39015.553779.526064
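For anyone wanting to reproduce the kind of stats being discussed here, a quick sketch of the standard FreeBSD commands for inspecting the mbuf cluster pools (the exact zone names in the output can vary by release):

```shell
# Show mbuf and cluster usage; non-zero "requests denied" or
# "delayed" counts on the 9k jumbo zone are a sign the cluster
# pool is under pressure or fragmented.
netstat -m

# The same counters broken out per UMA zone:
vmstat -z | egrep 'mbuf|jumbo'
```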

Some net interfaces have a setting that tells them to not use 9K mbuf clusters
even if the interface is using 9K jumbo packets.
If that exists for this net driver, I'd suggest you try it.
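As one concrete example of such a setting (this bug's driver may use a different knob, or none at all), the cxgbe(4) driver documents a loader tunable that caps the receive buffer cluster size, so the driver builds jumbo frames out of smaller clusters instead of allocating 9K ones:

```shell
# Sketch for cxgbe(4) only; check your driver's man page for an
# equivalent tunable. Caps receive clusters at 4K even with a
# jumbo MTU configured; takes effect after a reboot.
echo 'hw.cxgbe.largest_rx_cluster="4096"' >> /boot/loader.conf
```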

For some reason, the stats show large numbers of both 9K and 4K mbuf clusters.
(The 4K mbuf clusters aren't nearly as bad w.r.t. fragmentation, but mixing
 them with the 9K ones seems likely to cause fragmentation.)

Alternatively, I'd suggest you turn off jumbo packets and try it with ordinary
1500 byte ethernet packets.
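A sketch of what that looks like (the interface name `em0` and the address below are placeholders; substitute your actual interface and configuration):

```shell
# Drop back to the standard ethernet MTU immediately:
ifconfig em0 mtu 1500

# To make it persistent across reboots, set the MTU in /etc/rc.conf, e.g.:
# ifconfig_em0="inet 192.0.2.10/24 mtu 1500"
```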

Hopefully others more conversant with this net driver and the mbuf stats will
comment.

-- 
You are receiving this mail because:
You are the assignee for the bug.


