Date: Mon, 12 May 2008 15:57:59 +0200
From: Andre Oppermann <andre@freebsd.org>
To: Tim Gebbett <tim@gebbettco.com>
Cc: freebsd-net@freebsd.org, Deng XueFeng <dengxf@gmail.com>, Mark Hills <mark@pogo.org.uk>
Subject: Re: read() returns ETIMEDOUT on steady TCP connection
Message-ID: <48284CE7.2020707@freebsd.org>
In-Reply-To: <482561F3.6080701@gebbettco.com>
References: <4822BABB.4020407@freebsd.org> <f0ded2a94e286433785a3e78d40fc2ea@193.189.140.95> <4824211C.9090105@freebsd.org> <482561F3.6080701@gebbettco.com>
Tim Gebbett wrote:
> Hi Andre, did some careful testing yesterday and last night. I seem to
> still be hitting an unknown buffer, although the problem is much alleviated.
> The system achieved a 7-hour run at 500 Mbit/s before ETIMEDOUT occurred.
> I was feeding 11 other streams to the server, whose counters show an
> uninterrupted eleven hours. The feeder streams are from the same source,
> so it is unlikely that the one feeding the test could have had a problem
> without affecting the counters of the others.
>
> sysctls are:
>
> (loader.conf) hw.em.txd=4096
> net.inet.tcp.sendspace=78840
> net.inet.tcp.recvspace=78840
>
> kern.ipc.nmbjumbop=51200
> kern.ipc.nmbclusters=78840
> kern.maxfiles=50000
>
> IP stats are miraculously improved, going from 10% packet loss within
> the stack (output drops) to a consistent zero at peaks of 80000 pps. I
> believe the problem is now being shunted to the NIC, judging from the
> following output:
>
> dev.em.0.debug=1
>
> < em0: Adapter hardware address = 0xc520b224
> < em0: CTRL = 0x48f00249 RCTL = 0x8002
> < em0: Packet buffer = Tx=16k Rx=48k
> < em0: Flow control watermarks high = 47104 low = 45604
> < em0: tx_int_delay = 66, tx_abs_int_delay = 66
> < em0: rx_int_delay = 0, rx_abs_int_delay = 66
> < em0: fifo workaround = 0, fifo_reset_count = 0
> < em0: hw tdh = 3285, hw tdt = 3285
> < em0: hw rdh = 201, hw rdt = 200
> < em0: Num Tx descriptors avail = 4096
> < em0: Tx Descriptors not avail1 = 4591225
> < em0: Tx Descriptors not avail2 = 0
> < em0: Std mbuf failed = 0
> < em0: Std mbuf cluster failed = 0
> < em0: Driver dropped packets = 0
> < em0: Driver tx dma failure in encap = 0
>
> dev.em.0.stats=1
>
> < em0: Excessive collisions = 0
> < em0: Sequence errors = 0
> < em0: Defer count = 0
> < em0: Missed Packets = 16581181
> < em0: Receive No Buffers = 74605555
> < em0: Receive Length Errors = 0
> < em0: Receive errors = 0
> < em0: Crc errors = 0
> < em0: Alignment errors = 0
> < em0: Collision/Carrier extension errors = 0
> < em0: RX overruns = 289717
> < em0: watchdog timeouts = 0
> < em0: XON Rcvd = 0
> < em0: XON Xmtd = 0
> < em0: XOFF Rcvd = 0
> < em0: XOFF Xmtd = 0
> < em0: Good Packets Rcvd = 848158221
> < em0: Good Packets Xmtd = 1080368640
> < em0: TSO Contexts Xmtd = 0
> < em0: TSO Contexts Failed = 0
>
> Does the counter 'Tx Descriptors not avail1' indicate a lack of
> available descriptors at the time, and would this be symptomatic of
> something Mark suggested:
> "(the stack) needs to handle local buffer fills not as a failed attempt
> at transmission that increments the retry counter; a better backoff
> strategy may be required when the hardware buffer is full?"

Indeed. We have to rethink a couple of assumptions the code currently
makes and has made for the longest time. Additionally, the defaults for
the network hardware need to be better tuned for workloads like yours.

I'm on my way to BSDCan'08 soon and I will discuss these issues at the
Developer Summit.

-- 
Andre
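[Editor's note: for readers reproducing Tim's setup, a sketch of where the
quoted tunables would normally live on FreeBSD. hw.em.txd is a loader
tunable and must go in loader.conf; the others can be set at runtime or in
sysctl.conf. The values are the ones from the message above, not
recommendations.]

```
# /boot/loader.conf -- loader tunables, read before the kernel boots
hw.em.txd=4096

# /etc/sysctl.conf -- applied early at boot by rc(8)
net.inet.tcp.sendspace=78840
net.inet.tcp.recvspace=78840
kern.ipc.nmbjumbop=51200
kern.ipc.nmbclusters=78840
kern.maxfiles=50000
```

The same runtime values can be applied immediately with
`sysctl net.inet.tcp.sendspace=78840` and so on; only the hw.em.txd
change requires a reboot.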
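[Editor's note: the thread's subject, read() failing with ETIMEDOUT, has a
fixed meaning at the application level: the kernel exhausted its TCP
retransmission attempts and dropped the connection, so retrying the read()
on the same descriptor cannot succeed. The helper below is an illustrative
sketch (not code from the thread) showing the distinction between a
transient error worth retrying and ETIMEDOUT, after which the caller must
reconnect.]

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Read from a blocking TCP socket.  EINTR is transient and simply
 * retried.  ETIMEDOUT means the peer stopped acknowledging and the
 * kernel gave up retransmitting: the connection is dead, so we report
 * it and return -1 -- the caller's only recovery is to reconnect. */
ssize_t tcp_read(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;               /* data, or 0 on orderly close */
        if (errno == EINTR)
            continue;               /* interrupted by a signal: retry */
        if (errno == ETIMEDOUT)
            fprintf(stderr, "tcp_read: connection timed out, "
                            "reconnect required\n");
        return -1;                  /* fatal: surface to the caller */
    }
}
```

This matches the symptom Tim describes: one stream dies with ETIMEDOUT
while parallel streams from the same source keep running, i.e. the
failure is per-connection state in the kernel, not a link-wide outage.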