From owner-freebsd-questions@FreeBSD.ORG Tue Jun 17 09:39:08 2003
Date: Tue, 17 Jun 2003 12:37:49 -0400 (EDT)
From: jaime@snowmoon.com
To: Bill Moran
Cc: freebsd-questions@freebsd.org
Subject: Re: ping: sendto: No buffer space available
In-Reply-To: <3EEF2108.1010802@potentialtech.com>
Message-ID: <20030617122628.N96282@malkav.snowmoon.com>
References: <20030617075240.L94567@malkav.snowmoon.com> <3EEF1302.8060908@potentialtech.com> <3EEF2108.1010802@potentialtech.com>

On Tue, 17 Jun 2003, Bill Moran wrote:
> > I think that the NIC is on the logic board.  I can try to install
> > a PCI card and use that in its place to see if the problem goes away.
> > Should I bother?
>
> I would.  There are two possibilities that I would consider here:
> a) The NIC has gone flaky with age
> b) Newer drivers don't talk to that particular NIC as well as the old
>
> Did you notice this starting to happen after a particular upgrade?  You
> may be able to correlate this with a particular update to the driver by
> looking at dates in the cvs logs.

Nope.  The problem is only a few days old and the OS is 4.7-STABLE.  I
think that the last update was in February or so.

> This is hearsay, and I have no personal experience with it, but I've
> seen lots of complaints across the lists about "onboard" cards that
> use the fxp driver not being very good.  I've never had (nor heard of)
> any problems with the PCI versions.

Hrm....  An interesting thought....

> Another possibility is hardware ... have you added any hardware or
> changed any BIOS settings?  There's the possibility of interrupt
> problems.

No.  The system was up for more than 2 months before the problems began.

> I'm just shooting out ideas for you to work with.  Please distill
> everything I've said through your own experience.  i.e. take it with
> a grain of salt, as I don't _know_ what your problem is.

I always try to take email list advice this way.  :)

> Never helped for me either.  You may want to check, but in my experience
> the output of 'netstat -m' will also tell you that you have plenty of
> network buffers available.

bash-2.05b$ netstat -m
144/768/26624 mbufs in use (current/peak/max):
        139 mbufs allocated to data
        5 mbufs allocated to packet headers
138/572/6656 mbuf clusters in use (current/peak/max)
1336 Kbytes allocated to network (6% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

That was during normal operation.
The following are at the tail end of one of the outages:

bash-2.05b$ netstat -m
477/768/26624 mbufs in use (current/peak/max):
        386 mbufs allocated to data
        91 mbufs allocated to packet headers
384/572/6656 mbuf clusters in use (current/peak/max)
1336 Kbytes allocated to network (6% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
bash-2.05b$ netstat -m
476/768/26624 mbufs in use (current/peak/max):
        387 mbufs allocated to data
        89 mbufs allocated to packet headers
385/572/6656 mbuf clusters in use (current/peak/max)
1336 Kbytes allocated to network (6% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
bash-2.05b$ netstat -m
182/768/26624 mbufs in use (current/peak/max):
        149 mbufs allocated to data
        33 mbufs allocated to packet headers
147/572/6656 mbuf clusters in use (current/peak/max)
1336 Kbytes allocated to network (6% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
bash-2.05b$ netstat -m
156/768/26624 mbufs in use (current/peak/max):
        153 mbufs allocated to data
        3 mbufs allocated to packet headers
151/572/6656 mbuf clusters in use (current/peak/max)
1336 Kbytes allocated to network (6% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
bash-2.05b$ netstat -m
135/768/26624 mbufs in use (current/peak/max):
        134 mbufs allocated to data
        1 mbufs allocated to packet headers
132/572/6656 mbuf clusters in use (current/peak/max)
1336 Kbytes allocated to network (6% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
bash-2.05b$ netstat -m
144/768/26624 mbufs in use (current/peak/max):
        139 mbufs allocated to data
        5 mbufs allocated to packet headers
136/572/6656 mbuf clusters in use (current/peak/max)
1336 Kbytes allocated to network (6% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

It looks like something is causing it to pile up packets in the buffers
temporarily.  Any thoughts?

In the meantime, I will see if I can dig up a PCI ethernet card.

Thanks,
Jaime
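P.S.  In case it helps with lining the spikes up against the outage times,
here is a minimal sketch of a watcher script I could leave running.  The
log path and the 5-second interval are just placeholders, not anything
from the thread:

#!/bin/sh
# Minimal mbuf watcher: append a timestamped "netstat -m" snapshot to a
# log every few seconds so buffer spikes can be matched to outage times.
LOG=/var/tmp/mbuf-watch.log     # placeholder path

while true; do
        date >> "$LOG"
        netstat -m >> "$LOG"
        echo "----" >> "$LOG"
        sleep 5
done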