Date:      Wed, 3 Oct 2018 14:45:14 -0400
From:      Randall Stewart <rrs@netflix.com>
To:        hiren panchasara <hiren@strugglingcoder.info>
Cc:        Chenyang Zhong <zhongcy95@gmail.com>, FreeBSD Transports <transport@freebsd.org>
Subject:   Re: TCP RACK performance
Message-ID:  <F77065A6-7081-4DC8-8E91-2A7C52CC055B@netflix.com>
In-Reply-To: <20181001231900.GA23735@strugglingcoder.info>
References:  <CAKS6SJyRk3T5YL72TWC1TtTvOktws9OhHxM7cdzA4RNkcs+uVQ@mail.gmail.com> <20181001231900.GA23735@strugglingcoder.info>

Chenyang:

Interesting.

We don't usually use iperf in any of our testing; instead we use
sophisticated metrics with NF traffic.

Now, we do have a lab, and I will have to look into setting up a test
like the one below.

I would imagine the reason your "performance" dropped is the loss: you
were getting loss, which then caused the drop in b/w, i.e. the cwnd gets
halved on each loss event and throughput only recovers as it grows back.
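
One quick way to confirm that (assuming a FreeBSD sender; the counter
names differ per OS) is to compare the TCP retransmit counters before
and after a run:

# netstat -s -p tcp | grep -i retrans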

When I have a chance I will circle back and take a look. What we have is
a bit different from what is in FreeBSD (we have some changes that still
need to be upstreamed...).

R


> On Oct 1, 2018, at 7:19 PM, hiren panchasara <hiren@strugglingcoder.info> wrote:
> 
> Unsure if your questions got answered, but this is a more appropriate
> list for such questions.
> 
> Interesting results. People working on or testing RACK don't really use
> plain NewReno as the congestion control, AFAIK, which might be why they
> didn't notice this; that is my speculation.
> 
> Default Linux uses Cubic as the CC. See if switching to cubic helps on
> FreeBSD too; something like the commands below should do it.
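> 
> (A rough sketch, untested; cc_cubic ships as a loadable module in the
> stock kernel:)
> 
> # kldload cc_cubic
> # sysctl net.inet.tcp.cc.available
> # sysctl net.inet.tcp.cc.algorithm=cubic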
> 
> Cheers,
> Hiren
> 
> On 09/11/18 at 05:41P, Chenyang Zhong wrote:
>> Hi,
>> 
>> I am really excited to see that @rrs from Netflix is adding TCP RACK
>> and the High Precision Timer System to the kernel, so I built a kernel
>> (r338543) and ran some tests.
>> 
>> I used the following kernel config, as suggested in commit rS334804.
>> 
>> makeoptions WITH_EXTRA_TCP_STACKS=1
>> options TCPHPTS
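>> 
>> (For reference, rebuilding with these options follows the usual kernel
>> build drill; MYKERNEL here is a stand-in for your kernel config name:)
>> # cd /usr/src
>> # make buildkernel KERNCONF=MYKERNEL
>> # make installkernel KERNCONF=MYKERNEL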
>> 
>> After booting the new kernel, I loaded tcp_rack.ko:
>> # kldload tcp_rack
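>> 
>> (To load it automatically at boot, the standard module convention should
>> apply, though I have not verified it: add tcp_rack_load="YES" to
>> /boot/loader.conf.)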
>> 
>> Then I checked the sysctl to make sure rack is there:
>> # sysctl net.inet.tcp.functions_available
>> net.inet.tcp.functions_available:
>> Stack                           D Alias                            PCB count
>> freebsd                         * freebsd                          3
>> rack                              rack                             0
>> 
>> I ran the first test with the default stack. I was running iperf3 over
>> a wireless network where the rtt was fluctuating but there was no packet
>> loss. Here is a ping result summary; the average and stddev of the rtt
>> are relatively high.
>> 
>> 39 packets transmitted, 39 packets received, 0.0% packet loss
>> round-trip min/avg/max/stddev = 1.920/40.206/124.094/39.093 ms
>> 
>> Here is the iperf3 result of a 30-second benchmark.
>> 
>> [ ID] Interval           Transfer     Bitrate         Retr
>> [  5]   0.00-30.00  sec   328 MBytes  91.8 Mbits/sec   62             sender
>> [  5]   0.00-30.31  sec   328 MBytes  90.9 Mbits/sec                  receiver
>> 
>> Then I switched to the new RACK stack.
>> # sysctl net.inet.tcp.functions_default=rack
>> net.inet.tcp.functions_default: freebsd -> rack
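>> 
>> As far as I understand, this only affects connections created after the
>> change; existing sockets keep the stack they were opened with. Switching
>> back is the same knob:
>> # sysctl net.inet.tcp.functions_default=freebsd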
>> 
>> There was a 10-15% performance loss after running the same iperf3
>> benchmark. Also, the number of retransmissions increased dramatically.
>> 
>> [ ID] Interval           Transfer     Bitrate         Retr
>> [  5]   0.00-30.00  sec   286 MBytes  79.9 Mbits/sec  271             sender
>> [  5]   0.00-30.30  sec   286 MBytes  79.0 Mbits/sec                  receiver
>> 
>> I then ran iperf3 on a Linux machine with kernel 4.15, which uses RACK
>> by default. I verified that through sysctl (net.ipv4.tcp_recovery is a
>> bit field; bit 0, value 1, enables RACK loss detection):
>> 
>> # sysctl net.ipv4.tcp_recovery
>> net.ipv4.tcp_recovery = 1
>> 
>> The iperf3 result showed the same speed as the default freebsd
>> stack, and the number of retransmissions matched the RACK stack on
>> freebsd.
>> 
>> [ ID] Interval           Transfer     Bandwidth       Retr
>> [  4]   0.00-30.00  sec   330 MBytes  92.3 Mbits/sec  270             sender
>> [  4]   0.00-30.00  sec   329 MBytes  92.1 Mbits/sec                  receiver
>> 
>> I am not sure whether the performance issue is related to my
>> configuration or to the new implementation of RACK on FreeBSD. I am
>> glad to offer more information if anyone is interested. Thanks again
>> for all the hard work. I cannot wait to see TCP BBR on FreeBSD.
>> 
>> Best,
>> Chenyang

--------
Randall Stewart
rrs@netflix.com
803-317-4952