From owner-freebsd-net@FreeBSD.ORG Thu Mar 8 07:45:08 2012
From: YongHyeon PYUN
Date: Thu, 8 Mar 2012 16:45:01 -0800
To: Eugene Grosbein
Cc: marius@freebsd.org, yongari@freebsd.org, "net@freebsd.org"
Subject: Re: suboptimal bge(4) BCM5704 performance in RELENG_8
Message-ID: <20120309004501.GB15604@michelle.cdnetworks.com>
In-Reply-To: <4F585387.7010706@rdtc.ru>
References: <4F5608EA.6080705@rdtc.ru> <20120307202914.GB9436@michelle.cdnetworks.com> <4F571870.3090902@rdtc.ru> <20120308034345.GD9436@michelle.cdnetworks.com> <4F578FE1.1000808@rdtc.ru> <20120308190628.GB13138@michelle.cdnetworks.com> <4F584896.5010807@rdtc.ru> <20120308232346.GA15604@michelle.cdnetworks.com> <4F585387.7010706@rdtc.ru>
Reply-To: pyunyh@gmail.com
List-Id: Networking and TCP/IP with FreeBSD

On Thu, Mar 08, 2012 at 01:36:55PM +0700, Eugene Grosbein wrote:
> On 09.03.2012 06:23, YongHyeon PYUN wrote:
>
> >> Btw, I still think these errors are pretty seldom and cannot explain
> >> why I can't get full output gigabit speed. And what do these
> >
> > Right.
> >
> >> DmaWriteQueueFull/DmaReadQueueFull mean? Will it help to increase
> >
> > The state machine in the controller adds DMA descriptors to the DMA
> > engine whenever the controller sends or receives frames. These
> > numbers indicate how many times the state machine saw the DMA
> > write/read queue full. The state machine has to retry the operation
> > once it sees a queue full.
> >
> >> interface FIFO queue to eliminate output drops?
> >>
> >
> > These queues reside in internal RISC processors and I don't think
> > there is an interface that changes the queue length. It's not a
> > normal FIFO which is used to send/receive a frame.
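Those counters, by the way, are exported through the per-device
statistics sysctl tree, so you can watch them under load with
something like the following (node names quoted from memory of
if_bge.c's stats children, so double-check them on your box, and
adjust the unit number):

  # sysctl dev.bge.0.stats.DmaWriteQueueFull
  # sysctl dev.bge.0.stats.DmaReadQueueFull

If they keep climbing while you push traffic, the DMA engine is
frequently saturated.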
> Every Ethernet network interface in FreeBSD has its own interface
> FIFO queue used by the higher levels of our TCP/IP stack to place
> outgoing packets into. From if_bge.c:
>
> ifp->if_snd.ifq_drv_maxlen = BGE_TX_RING_CNT - 1;
> IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen);

The FIFO I was talking about above is the controller's internal DMA
request queue, not that if_snd queue.

> > I don't see any abnormal DMA configuration for PCI-X 5704 so I'm
> > still interested in knowing the netperf benchmark result.
>
> Performing a netperf benchmark would be a bit problematic in this
> case because the box lives in the hoster's datacenter and netperf
> needs a peer to work with... But I'll try; it's only a matter of
> time.

Ok, thanks.

> Meantime I have set up a dummynet pipe for outgoing traffic with
> 875 Mbit/s of bandwidth and 72916 slots, so it can hold up to one
> second of traffic.
>
> I hope this will help to deal with traffic spikes and eliminate the
> mentioned FIFO overflows and packet drops, at the cost of small
> extra delays. For TCP, drops are much worse than delays. Delays can
> be compensated for with larger TCP windows and, for icecast2 audio
> over TCP, with some extra buffering on the client side. Drops make
> TCP flow control think the channel is overloaded when it's not. And
> many TCP peers do not use SACK.

Before digging deeper into the TCP stack it would be easier to know
which component is the bottleneck (i.e. controller, driver, etc.),
and a simple netperf benchmark in the same network segment can tell
whether bge(4) is the bottleneck or not.

>
> Eugene Grosbein
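A minimal run, assuming netserver is already running on some peer
host in the same segment (the address below is only a placeholder),
would be something like:

  # netperf -H 192.0.2.1 -t TCP_STREAM -l 30

If that gets close to wire speed, the controller and bge(4) are fine
and the problem is higher up the stack; if not, we know where to
look first.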
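BTW, for the archives: the pipe described above should correspond to
something like the following (untested sketch; the pipe number is
arbitrary, and dummynet limits the per-pipe slot count by default,
so the slot limit sysctl may have to be raised first):

  # sysctl net.inet.ip.dummynet.pipe_slot_limit=72916
  # ipfw pipe 1 config bw 875Mbit/s queue 72916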