Date:      Fri, 1 Dec 2006 20:54:18 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Bruce Evans <bde@zeta.org.au>
Cc:        Robert Watson <rwatson@freebsd.org>, Ivan Voras <ivoras@fer.hr>, freebsd-arch@freebsd.org
Subject:   Re: What is the PREEMPTION option good for?
Message-ID:  <200612020454.kB24sIpq071255@apollo.backplane.com>
References:  <20061119041421.I16763@delplex.bde.org> <ejnvfo$tv2$1@sea.gmane.org> <ek4gc8$492$1@sea.gmane.org> <20061126174041.V83346@fledge.watson.org> <ekckpt$4h6$1@sea.gmane.org> <20061128142218.P44465@fledge.watson.org> <45701A49.5020809@fer.hr> <20061202094431.O16375@delplex.bde.org>

:...
:the client.  The difference is entirely due to dead time somewhere in
:nfs.  Unfortunately, turning on PREEMPTION and IPI_PREEMPTION didn't
:recover all the lost performance.  This is despite the ~current kernel
:having slightly lower latency for flood pings and similar optimizations
:for nfs that reduce the RPC count by a factor of 4 and the ping latency
:by a factor of 2.

    The single biggest NFS client performance issue I have encountered
    in an environment where most of the data can be cached from earlier
    runs is with negative name lookups.  Due to the large number of -I
    options used in builds, the include search path is fairly long and
    this usually results in a large number of negative lookups, all of
    which introduce synchronous dead times while the stat() or open() 
    waits for the over-the-wire transaction to complete.
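
    To make the failure mode concrete, here is a user-level sketch of
    what the include search looks like to the filesystem.  The paths
    are hypothetical, not taken from the original build:

	/*
	 * Sketch of how a compiler probes a long -I search path.  Every
	 * directory that does not contain the header is a negative
	 * lookup, and without a negative namecache each one is a
	 * synchronous over-the-wire RPC.
	 */
	#include <stdio.h>
	#include <sys/stat.h>

	int
	main(void)
	{
		static const char *incdirs[] = {
			"/usr/obj/include", "/usr/src/include",
			"/usr/local/include", "/usr/include", NULL
		};
		struct stat st;
		char path[1024];
		int i;

		for (i = 0; incdirs[i] != NULL; ++i) {
			snprintf(path, sizeof(path), "%s/stdio.h",
			    incdirs[i]);
			if (stat(path, &st) == 0) {
				/* hit: the search stops here */
				printf("found %s\n", path);
				break;
			}
			/* miss: dead time while the NFS lookup completes */
		}
		return (0);
	}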

    The #1 solution is to cache negative namecache hits for NFS clients.
    You don't have to cache them for long... just 3 seconds is usually
    enough to remove most of the dead time.  Also make sure your access
    cache timeout is something reasonable.
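
    A minimal user-level sketch of the timed negative-cache idea
    (made-up structure and function names, not the actual kernel
    namecache code):

	#include <stdio.h>
	#include <time.h>

	#define NEG_CACHE_TIMEOUT 3	/* seconds, per the above */

	struct neg_entry {
		time_t	ne_whenneg;	/* when the ENOENT was cached */
	};

	/*
	 * Return 1 if the cached ENOENT is still trustworthy, 0 if it
	 * has expired and the lookup must go over the wire again.
	 */
	static int
	neg_entry_valid(const struct neg_entry *ne, time_t now)
	{
		return (now - ne->ne_whenneg < NEG_CACHE_TIMEOUT);
	}

	int
	main(void)
	{
		struct neg_entry ne;

		ne.ne_whenneg = time(NULL);
		printf("fresh entry valid: %d\n",
		    neg_entry_valid(&ne, time(NULL)));
		printf("entry 5 seconds later valid: %d\n",
		    neg_entry_valid(&ne, time(NULL) + 5));
		return (0);
	}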

    It is possible to reduce the number of over-the-wire transactions to
    zero but it requires seriously nerfing the access and negative cache
    timeouts.  It isn't usually worth doing.

    Here are some test results:

    make buildkernel on DragonFly, /usr/src mounted via NFS, 10 second
    access cache timeout, multiple runs to pre-cache data, and tcpdump
    used to verify that only access RPCs were being sent over the wire
    for all tests:

	No negative cache	    - 440 seconds real
	 3 second neg cache timeout - 411 seconds real
	10 second neg cache timeout - 410 seconds real (~7% improvement)
	30 second neg cache timeout - 409 seconds real

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>


