From owner-freebsd-stable@FreeBSD.ORG Thu Jan  9 13:53:33 2014
From: Mark Felder
To: freebsd-stable@freebsd.org
Subject: Re: 10.0-RC1: bad mbuf leak?
Date: Thu, 09 Jan 2014 07:53:29 -0600
Message-Id: <1389275609.22759.68597369.0D9EAE0F@webmail.messagingengine.com>
In-Reply-To: <52CDBB9C.6080406@egr.msu.edu>

On Wed, Jan 8, 2014, at 14:57, Adam McDougall wrote:
> On 01/08/2014 15:45, Mark Felder wrote:
> > On Wed, Jan 8, 2014, at 14:32, Adam McDougall wrote:
> >> On 01/06/2014 13:32, Mark Felder wrote:
> >>> It's not looking promising. mbuf usage is really high again. I haven't
> >>> hit the point where the system is unavailable on the network, but it
> >>> appears to be approaching.
> >>>
> >>> root@skeletor:/usr/home/feld # netstat -m
> >>> 4093391/3109/4096500 mbufs in use (current/cache/total)
> >>> 1025/1725/2750/1017354 mbuf clusters in use (current/cache/total/max)
> >>> 1025/1725 mbuf+clusters out of packet secondary zone in use
> >>> (current/cache)
> >>> 0/492/492/508677 4k (page size) jumbo clusters in use
> >>> (current/cache/total/max)
> >>> 0/0/0/150719 9k jumbo clusters in use (current/cache/total/max)
> >>> 0/0/0/84779 16k jumbo clusters in use (current/cache/total/max)
> >>> 1025397K/6195K/1031593K bytes allocated to network (current/cache/total)
> >>> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> >>> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> >>> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> >>> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> >>> 0 requests for sfbufs denied
> >>> 0 requests for sfbufs delayed
> >>> 0 requests for I/O initiated by sendfile
> >>>
> >>> root@skeletor:/usr/home/feld # vmstat -z | grep mbuf
> >>> mbuf_packet:      256, 6511065,    1025,  1725,  9153363,   0,   0
> >>> mbuf:             256, 6511065, 4092367,  1383, 74246554,   0,   0
> >>> mbuf_cluster:    2048, 1017354,    2750,     0,     2750,   0,   0
> >>> mbuf_jumbo_page: 4096,  508677,       0,   492,  2655317,   0,   0
> >>> mbuf_jumbo_9k:   9216,  150719,       0,     0,        0,   0,   0
> >>> mbuf_jumbo_16k: 16384,   84779,       0,     0,        0,   0,   0
> >>> mbuf_ext_refcnt:    4,       0,       0,     0,        0,   0,   0
> >>>
> >>> root@skeletor:/usr/home/feld # uptime
> >>> 12:30PM up 15:05, 1 user, load averages: 0.24, 0.23, 0.27
> >>>
> >>> root@skeletor:/usr/home/feld # uname -a
> >>> FreeBSD skeletor.feld.me 10.0-PRERELEASE FreeBSD 10.0-PRERELEASE #17
> >>> r260339M: Sun Jan  5 21:23:10 CST 2014
> >>>
> >>
> >> Can you try your NFS mounts from directly within the jails, or stop one
> >> or more jails for a night and see if it becomes stable? Anything else
> >> unusual besides the jails/nullfs, such as pf, ipfw, nat, or vimages? My
> >> systems running 10 seem fine, including the one running poudriere
> >> builds, which uses jails and I think nullfs, but not nfs. Do mbufs go
> >> up when you cause nfs traffic?
> >>
> >
> > You can't do NFS mounts from within a jail, which is why I have to do it
> > this way.
> >
> > Nothing else unusual. Very few services running. The box sits mostly
> > idle and the traffic is light -- watching some TV shows (the jail runs
> > Plex Media Server). I haven't been able to locate a reason for the mbufs
> > to go up, but often I wake up in the morning, after the box has been
> > doing nothing all night, and see it has made a large jump in mbufs used.
> > When I'm running an 11-CURRENT kernel these problems do not exist.
>
> Can you have a script run some stats like netstat -m every few minutes
> during the night to see if it happens at a particular time? I'm
> wondering if the system scripts are crawling the mountpoints and causing
> this. Alternately, as far as NFS mounts and jails go, could you, with a
> reasonable amount of work, replace the nullfs/nfs usage with temporary
> NFS mounts outside of the jails but mounted in the jail root fs?
>

Yes, that won't be difficult.
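The overnight logging is trivial to set up, too. Something along these
lines is what I'd run from cron (just a sketch -- the log path, script
location, and five-minute interval are arbitrary choices on my part):

    #!/bin/sh
    # mbuf-log.sh -- append a timestamped mbuf snapshot on every run
    LOG=/var/log/mbuf.log
    {
        date
        netstat -m
        vmstat -z | grep mbuf
        echo "----"
    } >> "$LOG"

    # and in /etc/crontab, run it every five minutes overnight:
    */5  *  *  *  *  root  /usr/local/sbin/mbuf-log.sh

Diffing the timestamps against the jumps should show whether it's the
nightly periodic runs walking the mounts.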
I only use nullfs because it's convenient to be able to access the
mounts without having to be in the jail :-)

I also just realized that there will be some filesystem activity when
I'm not using the system: when a download completes, my NAS signals the
Plex program to rescan for new media files, so it does walk the entire
NFS mount occasionally.
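For anyone following along, the rearrangement Adam is suggesting would
amount to something like this (hypothetical paths -- "nas:/media" and
"/jails/plex" stand in for the real NAS export and jail root):

    # today: NFS mounted on the host, then nullfs'd into the jail
    mount -t nfs nas:/media /mnt/media
    mount -t nullfs /mnt/media /jails/plex/media

    # proposed: mount the NFS share directly under the jail's root,
    # taking nullfs out of the picture entirely
    mount -t nfs nas:/media /jails/plex/media

If the leak disappears with the second form, that would point at the
nullfs-over-NFS stacking rather than NFS itself.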