From: Eugene Grosbein <egrosbein@rdtc.ru>
Date: Thu, 08 Mar 2012 13:36:55 +0700
To: pyunyh@gmail.com
Cc: marius@freebsd.org, yongari@freebsd.org, net@freebsd.org
Subject: Re: suboptimal bge(4) BCM5704 performance in RELENG_8

On 09.03.2012 06:23, YongHyeon PYUN wrote:
>> Btw, I still think these errors are pretty seldom and cannot explain
>> why I can't get full output gigabit speed. And what do these
>
> Right.
>
>> DmaWriteQueueFull/DmaReadQueueFull mean? Will it help to increase
>
> The state machine in the controller adds DMA descriptors to the DMA
> engine whenever the controller sends/receives frames. These numbers
> indicate how many times the state machine saw the DMA write/read queue
> full, and it has to retry the operation once it sees a queue full.
>
>> interface FIFO queue to eliminate output drops?
>
> These queues reside in the internal RISC processors and I don't think
> there is an interface that changes the queue length. It's not a normal
> FIFO which is used to send/receive a frame.

Every ethernet network interface in FreeBSD has its own interface FIFO
queue used by the higher levels of our TCP/IP stack to place outgoing
packets into. From if_bge.c:

    ifp->if_snd.ifq_drv_maxlen = BGE_TX_RING_CNT - 1;
    IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen);

> I don't see any abnormal DMA configuration for the PCI-X 5704, so I'm
> still interested in knowing the netperf benchmark result.

Performing a netperf benchmark would be a bit problematic in this case
because the box lives in the hoster's datacenter and netperf needs a peer
to work with... But I'll try; it's only a matter of time.

Meanwhile, I have set up a dummynet pipe for outgoing traffic with
875 Mbit/s of bandwidth and 72916 slots, so it can hold up to one second
of traffic. I hope this will help absorb traffic spikes and eliminate the
mentioned FIFO overflows and packet drops, at the cost of small extra
delays.
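For reference, the commands for such a pipe look roughly like this (a
sketch only: the rule number and the bge0 interface name are assumptions
on my side, and on a default system the dummynet slot limit usually has
to be raised before a queue this long is accepted):

    # raise the cap on per-pipe queue length (in slots), if needed
    sysctl net.inet.ip.dummynet.pipe_slot_limit=72916

    # 875 Mbit/s pipe with a 72916-slot queue, roughly one second of traffic
    ipfw pipe 1 config bw 875Mbit/s queue 72916

    # push all outgoing packets on bge0 through the pipe
    ipfw add 1000 pipe 1 ip from any to any out xmit bge0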
For TCP, drops are much worse than delays: they make TCP flow control
think the channel is overloaded when it is not, and many TCP peers do not
use SACK. Delays, on the other hand, may be compensated for with larger
TCP windows and, for icecast2 audio over TCP, with some extra buffering
on the client side.

Eugene Grosbein