From: "Grant Peel"
To: freebsd-questions@freebsd.org
Date: Fri, 2 Oct 2009 10:23:35 -0400
Message-ID: <88CD8D18ADF4405DAA81B43F941698A5@GRANT>
Subject: "Out of mbuf address space!"

Hi all,

I have an older RAID 5 machine running FreeBSD 5.2.1 and am using it as a backup storage unit.

Yesterday morning, we noticed that the NFS mounts on the clients to this machine were not available, which sent a bunch of cron jobs spinning out of control, etc. We also became unable to connect via ssh.

Once at the console, we noted several dozen entries in messages.log:

Oct  1 08:32:13 enterprise kernel: Out of mbuf address space!
Oct  1 08:32:13 enterprise kernel: Consider increasing NMBCLUSTERS
Oct  1 08:32:13 enterprise kernel: All mbufs or mbuf clusters exhausted, please see tuning(7).

After rebooting the machine and getting the clients under control, I started investigating tuning(7) in the man pages. I am confused, however. I have increased kern.ipc.nmbclusters to 2048 in /boot/loader.conf, but when I checked netstat -m, it appears that there are fewer buffers available than there were when the problem happened.

enterprise# netstat -m
mbuf usage:
        GEN cache:      0/64 (in use/in pool)
        CPU #0 cache:   145/640 (in use/in pool)
        Total:          145/704 (in use/in pool)
        Mbuf cache high watermark: 512
        Maximum possible: 4096
        Allocated mbuf types:
          144 mbufs allocated to data
          1 mbufs allocated to packet headers
        17% of mbuf map consumed
mbuf cluster usage:
        GEN cache:      0/232 (in use/in pool)
        CPU #0 cache:   135/232 (in use/in pool)
        Total:          135/464 (in use/in pool)
        Cluster cache high watermark: 128
        Maximum possible: 2048          <-- this number was much higher
        22% of cluster map consumed     <-- this number was much lower
1104 KBytes of wired memory reserved (27% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

This particular machine has 512 MB of RAM.

Any suggestions what an NFS-intensive machine with 512 MB of RAM should have kern.ipc.nmbclusters set to? Are there any other tunables I should be looking at?

-Grant
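[Editor's note: a minimal sketch of the loader.conf change discussed above, plus the back-of-the-envelope memory math. The value 32768 is purely illustrative, not a recommendation for this machine; each mbuf cluster is 2 KB of wired kernel memory, so it is worth budgeting that cost against the 512 MB of RAM before raising the tunable.]

```shell
# Illustrative only: on FreeBSD 5.x, kern.ipc.nmbclusters is a boot-time
# tunable set in /boot/loader.conf (takes effect after reboot), e.g.:
#
#   kern.ipc.nmbclusters="32768"
#
# Rough wired-memory cost of a candidate setting: clusters are 2048 bytes
# each, so 32768 clusters pin about 64 MB of kernel memory.
clusters=32768
echo "$(( clusters * 2048 / 1024 / 1024 )) MB"   # -> 64 MB
```

After rebooting with the new value, `netstat -m` (or, on later releases, `sysctl kern.ipc.nmbclusters`) can confirm the setting took effect.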