Date:      Sat, 14 Mar 2015 11:12:28 -0700
From:      Tim Kientzle <tim@kientzle.com>
To:        Paul Mather <paul@gromit.dlib.vt.edu>
Cc:        freebsd-arm <freebsd-arm@freebsd.org>
Subject:   Re: BeagleBone slow inbound net I/O
Message-ID:  <8EAD1C86-B7FD-4B30-A390-8E60D378224F@kientzle.com>
In-Reply-To: <807E4289-EC2E-49F9-A909-4D2A2A149302@gromit.dlib.vt.edu>
References:  <20150311165115.32327c5a@ivory.wynn.com> <89CEBFCA-6B94-4F48-8DFD-790E4667632D@kientzle.com> <20150314031542.439cdee3@ivory.wynn.com> <1426339400.52318.3.camel@freebsd.org> <807E4289-EC2E-49F9-A909-4D2A2A149302@gromit.dlib.vt.edu>

Paul's data looks more like what I'd expect from a healthy network;
a few explanations below:

> On Mar 14, 2015, at 8:42 AM, Paul Mather <paul@gromit.dlib.vt.edu> wrote:
>
> Here is another data point from my BBB:
>
> pmather@beaglebone:~ % sysctl dev.cpsw
> dev.cpsw.0.stats.GoodRxFrames: 4200799
> dev.cpsw.0.stats.RxStartOfFrameOverruns: 1708

In Paul's case, the only non-zero "error" count was the
RxStartOfFrameOverruns, which impacted only
0.04% of all RX frames.

This is comparable to what I see on my network.
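
For reference, that 0.04% is just the ratio of the two counters
above.  If you want to recompute it on the BBB itself, something
like this (untested) sh snippet should do it:

    overruns=$(sysctl -n dev.cpsw.0.stats.RxStartOfFrameOverruns)
    good=$(sysctl -n dev.cpsw.0.stats.GoodRxFrames)
    # e.g. 1708 / 4200799 * 100 ~= 0.04%
    echo "scale=4; 100 * $overruns / $good" | bc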

> dev.cpsw.0.queue.tx.totalBuffers: 128
> dev.cpsw.0.queue.tx.maxActiveBuffers: 7
> dev.cpsw.0.queue.tx.longestChain: 4

Paul's stress tests managed to get 7 mbufs onto
the hardware TX queue at the same time (out of 128
slots reserved for the hardware TX queue).  At
some point, there was a single TX packet that required
4 mbufs.
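
If you want to see how much headroom that leaves, a quick
(untested) comparison of the two counters quoted above:

    used=$(sysctl -n dev.cpsw.0.queue.tx.maxActiveBuffers)
    total=$(sysctl -n dev.cpsw.0.queue.tx.totalBuffers)
    echo "TX high-water mark: $used of $total buffers"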


> dev.cpsw.0.queue.rx.totalBuffers: 384
> dev.cpsw.0.queue.rx.maxAvailBuffers: 55

Paul managed to stress the RX side a little harder:
At one point, there were 55 unprocessed mbufs
on the hardware RX queue.

If you managed to saturate the RX queue, that could
also lead to packet loss, though TCP should adapt
automatically; I wouldn't expect a saturated queue
to cause the kind of throughput degradation you
would get from more random errors.
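
If you want to check how close you get, something along these
lines (untested; it assumes maxAvailBuffers is a running
high-water mark, as the name suggests) would let you watch the
RX queue against its size while a transfer is running:

    while :; do
        sysctl dev.cpsw.0.queue.rx.maxAvailBuffers \
            dev.cpsw.0.queue.rx.totalBuffers
        sleep 1
    done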

Tim



