From owner-freebsd-stable@FreeBSD.ORG Fri Dec 28 05:23:52 2007
Return-Path:
Delivered-To: freebsd-stable@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 1C3BC16A417;
	Fri, 28 Dec 2007 05:23:52 +0000 (UTC)
	(envelope-from brde@optusnet.com.au)
Received: from mail02.syd.optusnet.com.au (mail02.syd.optusnet.com.au [211.29.132.183])
	by mx1.freebsd.org (Postfix) with ESMTP id A7B7A13C467;
	Fri, 28 Dec 2007 05:23:51 +0000 (UTC)
	(envelope-from brde@optusnet.com.au)
Received: from besplex.bde.org (c211-30-219-213.carlnfd3.nsw.optusnet.com.au [211.30.219.213])
	by mail02.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id lBS5NfSv022528
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Dec 2007 16:23:44 +1100
Date: Fri, 28 Dec 2007 16:23:40 +1100 (EST)
From: Bruce Evans
X-X-Sender: bde@besplex.bde.org
To: Bruce Evans
In-Reply-To: <20071228143411.C3587@besplex.bde.org>
Message-ID: <20071228155323.X3858@besplex.bde.org>
References: <20071221234347.GS25053@tnn.dglawrence.com>
	<20071222050743.GP57756@deviant.kiev.zoral.com.ua>
	<20071223032944.G48303@delplex.bde.org>
	<985A3F99-B3F4-451E-BD77-E2EB4351E323@eng.oar.net>
	<20071228143411.C3587@besplex.bde.org>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Cc: Kostik Belousov, freebsd-stable@FreeBSD.org, freebsd-net@FreeBSD.org
Subject: Re: Packet loss every 30.999 seconds
X-BeenThere: freebsd-stable@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Production branch of FreeBSD source code
X-List-Received-Date: Fri, 28 Dec 2007 05:23:52 -0000

On Fri, 28 Dec 2007, Bruce Evans wrote:

> In previous mail, you (Mark) wrote:
>
> # With FreeBSD 4 I was able to run a UDP data collector with rtprio set,
> # kern.ipc.maxsockbuf=20480000, then use setsockopt() with SO_RCVBUF
> # in the application.
> # If packets were dropped they would show up
> # with netstat -s as "dropped due to full socket buffers".
> #
> # Since the packet never makes it to ip_input() I no longer have
> # any way to count drops.  There will always be corner cases where
> # interrupts are lost and drops not accounted for if the adapter
> # hardware can't report them, but right now I've got no way to
> # estimate any loss.
>
> I tried using SO_RCVBUF in ttcp (it's an old version of ttcp that doesn't
> have an option for this).  With the default kern.ipc.maxsockbuf of 256K,
> this didn't seem to help.  20MB should work better :-) but I didn't try
> that.

I've now tried this.  With kern.ipc.maxsockbuf=20480000 (~20MB) and an
SO_RCVBUF of 0x1000000 (16MB), the "socket buffer full" lossage increases
from ~300 kpps (~47%) to ~450 kpps (~70%) with tiny packets.  I think this
is caused by most accesses to the larger buffer being cache misses: since
the system can't keep up, cache misses make it worse.  However, with
1500-byte packets, the larger buffer reduces the lossage from 1 kpps in
76 kpps to precisely zero pps, at a cost of only a small percentage of
system overhead (~20% idle to ~18% idle).

The above is with net.isr.direct=1.  With net.isr.direct=0, the loss is
too small to be obvious and is reported as 0, but I don't trust the
report.  ttcp's packet counts indicate losses of a few per million with
direct=0 but none with direct=1.  "while :; do sync; sleep 0.1; done"
running in the background causes a loss of about 100 pps with direct=0
and a smaller loss with direct=1.  Running the ttcp receiver at rtprio 0
doesn't make much difference to the losses.

Bruce