Date:      Thu, 08 Mar 2012 13:36:55 +0700
From:      Eugene Grosbein <egrosbein@rdtc.ru>
To:        pyunyh@gmail.com
Cc:        marius@freebsd.org, yongari@freebsd.org, "net@freebsd.org" <net@freebsd.org>
Subject:   Re: suboptimal bge(4) BCM5704 performance in RELENG_8
Message-ID:  <4F585387.7010706@rdtc.ru>
In-Reply-To: <20120308232346.GA15604@michelle.cdnetworks.com>
References:  <4F5608EA.6080705@rdtc.ru> <20120307202914.GB9436@michelle.cdnetworks.com> <4F571870.3090902@rdtc.ru> <20120308034345.GD9436@michelle.cdnetworks.com> <4F578FE1.1000808@rdtc.ru> <20120308190628.GB13138@michelle.cdnetworks.com> <4F584896.5010807@rdtc.ru> <20120308232346.GA15604@michelle.cdnetworks.com>

On 09.03.2012 06:23, YongHyeon PYUN wrote:

>> Btw, I still think these errors are pretty rare and cannot explain
>> why I can't get full output gigabit speed. And what do these
> 
> Right.
> 
>> DmaWriteQueueFull/DmaReadQueueFull mean? Will it help to increase
> 
> The state machine in the controller adds DMA descriptors to the DMA
> engine whenever the controller sends/receives frames.  These numbers
> indicate how many times the state machine saw the DMA write/read
> queue full.  The state machine has to retry the operation once it
> sees a queue full.
> 
>> the interface FIFO queue to eliminate output drops?
>>
> 
> These queues reside in the internal RISC processors and I don't think
> there is an interface that changes the queue length. It's not a
> normal FIFO used to send/receive a frame.
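
In other words (this is just a conceptual sketch with made-up names;
the real logic lives in the controller's internal RISC firmware and is
not visible to us), the hand-off described above amounts to:

	/* Hypothetical model of the firmware's descriptor hand-off. */
	if (dma_queue_full(q)) {
		stats->dma_write_queue_full++;	/* the counter we observe */
		retry_post(q, desc);		/* state machine retries */
	} else
		dma_queue_post(q, desc);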

Every ethernet network interface in FreeBSD has its own interface FIFO queue
where higher levels of our TCP/IP stack place outgoing packets.
From if_bge.c:

	ifp->if_snd.ifq_drv_maxlen = BGE_TX_RING_CNT - 1;
	IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen);
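
To illustrate the mechanism (a simplified sketch of the usual RELENG_8
driver pattern, not a verbatim quote of bge_start_locked; the xx_ names
are placeholders): the driver's start routine drains this software queue
into the hardware TX ring, and when the software queue itself is full,
newly enqueued packets are dropped with ENOBUFS, which is what shows up
as output drops.

	/* Sketch of a start routine draining if_snd into the TX ring. */
	static void
	xx_start_locked(struct ifnet *ifp)
	{
		struct xx_softc *sc = ifp->if_softc;
		struct mbuf *m_head;

		while (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) {
			IFQ_DRV_DEQUEUE(&ifp->if_snd, m_head);
			if (m_head == NULL)
				break;
			/*
			 * No free TX descriptors: put the packet back
			 * and stall until the ring drains.
			 */
			if (xx_encap(sc, &m_head) != 0) {
				if (m_head != NULL)
					IFQ_DRV_PREPEND(&ifp->if_snd, m_head);
				ifp->if_drv_flags |= IFF_DRV_OACTIVE;
				break;
			}
			ETHER_BPF_MTAP(ifp, m_head);
		}
	}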

> 
> I don't see any abnormal DMA configuration for the PCI-X 5704, so I'm
> still interested in seeing the netperf benchmark result.

Performing a netperf benchmark would be a bit problematic
in this case because the box lives in the hoster's datacenter and
netperf needs a peer to work with... But I'll try; it's only a matter of time.

Meanwhile I have set up a dummynet pipe for outgoing traffic with 875Mbit/s
of bandwidth and 72916 slots, so it can hold up to 1 second of traffic
(at 875Mbit/s, one second is about 72917 full-sized 1500-byte frames).
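
For anyone who wants to reproduce this, the setup was along these lines
(the interface name and rule number are my guesses here; note that
net.inet.ip.dummynet.pipe_slot_limit has to be raised first, since its
default is only 100 slots):

	sysctl net.inet.ip.dummynet.pipe_slot_limit=72916
	ipfw pipe 1 config bw 875Mbit/s queue 72916
	ipfw add 100 pipe 1 ip from any to any out xmit bge0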

I hope this will help deal with traffic spikes and eliminate the mentioned FIFO
overflows and packet drops at the cost of small extra delays. For TCP, drops are
much worse than delays: delays can be compensated for with increased TCP windows
(and, for icecast2 audio over TCP, with some extra buffering on the client side),
but drops make TCP flow control think the channel is overloaded when it's not.
And many TCP peers do not use SACK.

Eugene Grosbein


