From: Tim Kientzle <tim@kientzle.com>
To: Paul Mather
Cc: freebsd-arm <freebsd-arm@freebsd.org>
Subject: Re: BeagleBone slow inbound net I/O
Date: Sat, 14 Mar 2015 11:12:28 -0700

Paul’s data looks more like what I expect from a healthy network; a few explanations below.

> On Mar 14, 2015, at 8:42 AM, Paul Mather wrote:
>
> Here is another data point from my BBB:
>
> pmather@beaglebone:~ % sysctl dev.cpsw
> dev.cpsw.0.stats.GoodRxFrames: 4200799
> dev.cpsw.0.stats.RxStartOfFrameOverruns: 1708

In Paul’s case, the only non-zero “error” count was RxStartOfFrameOverruns, which affected only about 0.04% of all RX frames (1708 of 4,200,799). That is comparable to what I see on my own network.

> dev.cpsw.0.queue.tx.totalBuffers: 128
> dev.cpsw.0.queue.tx.maxActiveBuffers: 7
> dev.cpsw.0.queue.tx.longestChain: 4

Paul’s stress tests managed to get at most 7 mbufs onto the hardware TX queue at the same time, out of the 128 slots reserved for it. At some point, a single TX packet required a chain of 4 mbufs.

> dev.cpsw.0.queue.rx.totalBuffers: 384
> dev.cpsw.0.queue.rx.maxAvailBuffers: 55

Paul managed to stress the RX side a little harder: at one point there were 55 unprocessed mbufs on the hardware RX queue, out of 384. If you managed to saturate the RX queue, that could also lead to packet loss, though TCP should adapt to it automatically; I wouldn’t expect a saturated queue to cause the kind of throughput degradation you would get from more random errors.

Tim
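
P.S. If anyone wants to compute the same error ratio on their own board, here is a rough one-liner. It assumes the stock cpsw sysctl names quoted above and uses awk(1) from the base system:

    % sysctl -n dev.cpsw.0.stats.RxStartOfFrameOverruns \
             dev.cpsw.0.stats.GoodRxFrames | \
      awk 'NR == 1 { overruns = $1 }
           NR == 2 { printf "%.4f%% of RX frames overrun\n", 100 * overruns / $1 }'

With Paul’s numbers above, that prints 0.0407%.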
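Similarly, if you want a rough sense of whether the RX queue high-water mark ever approaches the 384-slot limit while the slowdown is happening, you could watch it in a loop (Bourne-shell syntax; adjust the interval to taste):

    while :; do
        # maxAvailBuffers is a high-water mark; if it climbs toward
        # totalBuffers (384) during a transfer, the RX queue saturated.
        sysctl dev.cpsw.0.queue.rx.maxAvailBuffers
        sleep 1
    done

If it stays low during your tests, the RX queue isn’t the bottleneck and the error counters are the more likely suspects.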