Date: Tue, 29 May 2012 08:12:40 -0400
From: John Baldwin <jhb@freebsd.org>
To: freebsd-amd64@freebsd.org, Ziyan Maraikar <ziyanm@gmail.com>
Cc: FreeBSD-gnats-submit@freebsd.org, Darshana Jayasinghe <darshana.jayasinghe@gmail.com>
Subject: Re: amd64/168342: mbuf exhaustion hangs all daemons in keglimit state
Message-ID: <201205290812.40093.jhb@freebsd.org>
In-Reply-To: <201205252034.q4PKYKcB038870@nanuoya.pdn.ac.lk>
References: <201205252034.q4PKYKcB038870@nanuoya.pdn.ac.lk>
On Friday, May 25, 2012 4:34:20 pm Ziyan Maraikar wrote:
> >Number:         168342
> >Category:       amd64
> >Synopsis:       mbuf exhaustion hangs all daemons in keglimit state
> >Confidential:   no
> >Severity:       serious
> >Priority:       medium
> >Responsible:    freebsd-amd64
> >State:          open
> >Quarter:
> >Keywords:
> >Date-Required:
> >Class:          sw-bug
> >Submitter-Id:   current-users
> >Arrival-Date:   Fri May 25 20:40:01 UTC 2012
> >Closed-Date:
> >Last-Modified:
> >Originator:     Ziyan Maraikar
> >Release:        FreeBSD 9.0-RELEASE amd64
> >Organization:
> Department of Computer Engineering, University of Peradeniya
> >Environment:
> System: FreeBSD nanuoya.pdn.ac.lk 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan 3 07:46:30 UTC 2012 root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
> HP ProLiant DL165, 4-core, 8 GB RAM
> 4x igb NICs -- 1 interface assigned 6 IPv4 aliases.
> 3x 1 TB SATA ZFS RAID-Z pool (ZFS boot)
>
> >Description:
> This machine has been running DHCP, BIND, NFS, and OpenLDAP, serving a lab of about 40 machines. The machine recently began to experience very frequent lockups in all network services, including ssh. The services all hang in state keglimit, even under very light load. I have tried disabling TSO and hardware checksum on igb as suggested in related mailing-list posts, but it has no effect.
>
> >How-To-Repeat:
> Several ssh attempts after boot are enough to make all daemons hang in keglimit.
> # netstat -m
> 25034/1602/26636 mbufs in use (current/cache/total)
> 24892/708/25600/25600 mbuf clusters in use (current/cache/total/max)
> 24642/708 mbuf+clusters out of packet secondary zone in use (current/cache)
> 0/9/9/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
> 56053K/1852K/57905K bytes allocated to network (current/cache/total)
> 0/1697/1209 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0/0/0 sfbufs in use (current/peak/max)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 0 calls to protocol drain routines

Have you tried increasing kern.ipc.nmbclusters?  Alternatively, have you
tried restricting igb to only using 1 queue?  It sounds like all your igb
interfaces are allocating all of your mbuf clusters for their receive
rings.

-- 
John Baldwin
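[Editor's note: the two knobs John suggests can be applied roughly as
sketched below. The cluster count (131072) and the per-queue descriptor
arithmetic in the comments are illustrative assumptions, not figures from
this thread; exact igb(4) defaults vary by driver version.]

```shell
# Rough arithmetic behind the diagnosis: with multiple RX queues per
# interface, e.g. 4 interfaces x 8 queues x ~1024 RX descriptors would
# pin ~32768 clusters for receive rings alone, exceeding the
# 25600-cluster limit shown by netstat -m above.

# Raise the mbuf cluster limit at runtime:
sysctl kern.ipc.nmbclusters=131072

# ...or make it persistent across reboots by adding to /boot/loader.conf:
#   kern.ipc.nmbclusters="131072"

# Restrict igb(4) to a single queue per interface (loader tunable,
# takes effect after reboot) by adding to /boot/loader.conf:
#   hw.igb.num_queues="1"
```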