Date:      Sun, 1 Apr 2007 14:02:29 -0700
From:      "Kip Macy" <kip.macy@gmail.com>
To:        "Andre Oppermann" <andre@freebsd.org>
Cc:        Perforce Change Reviews <perforce@freebsd.org>, Kip Macy <kmacy@freebsd.org>
Subject:   Re: PERFORCE change 117123 for review
Message-ID:  <b1fa29170704011402s734eae0ek4266382b4d6f1c14@mail.gmail.com>
In-Reply-To: <46101C26.5030306@freebsd.org>
References:  <200704012020.l31KKr0O097740@repoman.freebsd.org> <46101C26.5030306@freebsd.org>

> Thanks for using it.  This was the idea behind providing this interface.
>  From a cache busting point of view attaching the mbuf after the cluster
> has been filled is very good.  The Sandvine guys found that out a long
> time ago and it indeed makes a lot of sense.  When allocating whole
> clusters the mbuf gets touched twice, once at allocation and once when
> the driver fills in the information from the RX ring.  This way it only
> gets touched in the latter case and the former cache pollution is skipped
> over.

Yup. This is actually only the initial part of what I'm working on; I
was expecting a < 3% improvement.

However, the before and after numbers look more like this (netperf
columns with -P0: recv/send socket size, send message size, elapsed
secs, throughput in 10^6 bits/s, local/remote CPU%, local/remote
service demand):

before:
chaos# netperf -H 10.0.0.150 -tTCP_SENDFILE -F /var/tmp/bigfile -Cc -P0 -l 5
 65536  32768  32768    5.00       7682.06   25.36    36.07    1.082   1.539
 65536  32768  32768    5.00       7713.27   24.55    36.97    1.043   1.571
 65536  32768  32768    5.00       7755.67   26.25    40.62    1.109   1.716
 65536  32768  32768    5.00       7593.98   21.03    34.79    0.908   1.501

after:
chaos# netperf -H 10.0.0.150 -tTCP_SENDFILE -F /var/tmp/bigfile -Cc -P0 -l 5
 65536  32768  32768    5.00       8109.80   33.65    33.22    1.360   1.342
 65536  32768  32768    5.00       8649.49   32.89    45.29    1.246   1.716
 65536  32768  32768    5.00       8211.80   26.35    34.70    1.051   1.385
 65536  32768  32768    5.00       8538.48   29.55    44.05    1.134   1.691


A couple of weeks ago I was getting 8.8 - 9.6 Gbps.

>
> >       This change alleviates a good portion of the recent (last 2 weeks) 18% performance drop
> >       in peak TCP throughput
>
> Can you attribute any specific change to the drop in performance?

Unfortunately, I haven't tracked HEAD for the last 2 weeks or so.


          -Kip


