From: Adam McDougall
Date: Wed, 08 Jan 2014 15:32:59 -0500
To: freebsd-stable@freebsd.org
Subject: Re: 10.0-RC1: bad mbuf leak?
Message-ID: <52CDB5FB.90108@egr.msu.edu>
In-Reply-To: <1389033148.5084.67285353.3B31094A@webmail.messagingengine.com>

On 01/06/2014 13:32, Mark Felder wrote:
> It's not looking promising. mbuf usage is really high again. I haven't
> hit the point where the system is unavailable on the network but it
> appears to be approaching.
>
> root@skeletor:/usr/home/feld # netstat -m
> 4093391/3109/4096500 mbufs in use (current/cache/total)
> 1025/1725/2750/1017354 mbuf clusters in use (current/cache/total/max)
> 1025/1725 mbuf+clusters out of packet secondary zone in use
> (current/cache)
> 0/492/492/508677 4k (page size) jumbo clusters in use
> (current/cache/total/max)
> 0/0/0/150719 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/84779 16k jumbo clusters in use (current/cache/total/max)
> 1025397K/6195K/1031593K bytes allocated to network (current/cache/total)
> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
>
> root@skeletor:/usr/home/feld # vmstat -z | grep mbuf
> mbuf_packet:            256, 6511065,    1025,    1725,  9153363,   0,   0
> mbuf:                   256, 6511065, 4092367,    1383, 74246554,   0,   0
> mbuf_cluster:          2048, 1017354,    2750,       0,     2750,   0,   0
> mbuf_jumbo_page:       4096,  508677,       0,     492,  2655317,   0,   0
> mbuf_jumbo_9k:         9216,  150719,       0,       0,        0,   0,   0
> mbuf_jumbo_16k:       16384,   84779,       0,       0,        0,   0,   0
> mbuf_ext_refcnt:          4,       0,       0,       0,        0,   0,   0
>
> root@skeletor:/usr/home/feld # uptime
> 12:30PM up 15:05, 1 user, load averages: 0.24, 0.23, 0.27
>
> root@skeletor:/usr/home/feld # uname -a
> FreeBSD skeletor.feld.me 10.0-PRERELEASE FreeBSD 10.0-PRERELEASE #17
> r260339M: Sun Jan 5 21:23:10 CST 2014
>

Can you try your NFS mounts directly from within the jails, or stop one
or more jails for a night and see if the system becomes stable? Is
anything else unusual in play besides the jails/nullfs, such as pf,
ipfw, NAT, or vimages? My systems running 10 seem fine, including one
running poudriere builds, which uses jails and (I think) nullfs, but not
NFS. Do mbufs go up when you generate NFS traffic?
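
If it would help correlate the two, something like this rough sh loop
(an untested sketch; /mnt/nfs is just a placeholder for one of your real
NFS mount points) could sample the mbuf counters while generating NFS
read traffic:

  #!/bin/sh
  # Untested sketch: sample mbuf usage once a second while reading from
  # an NFS mount, so any growth can be lined up against the traffic
  # that caused it.
  # NOTE: /mnt/nfs is a placeholder; substitute an actual mount point.
  while :; do
      date
      netstat -m | head -3                     # mbuf/cluster usage summary
      vmstat -z | grep '^mbuf:'                # the zone growing in your output
      tar cf /dev/null /mnt/nfs 2>/dev/null    # generate NFS reads
      sleep 1
  done

If the mbuf numbers only climb while the tar is reading from NFS, that
would point at the NFS client path rather than the jails or nullfs.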