Date:      Sat, 01 Mar 1997 17:42:33 -0500
From:      Bakul Shah <bakul@chai.plexuscom.com>
To:        dennis <dennis@etinc.com>
Cc:        dg@root.com, Julian Assange <proff@iq.org>, hackers@freebsd.org
Subject:   Re: optimised ip_input 
Message-ID:  <199703012242.RAA04265@chai.plexuscom.com>
In-Reply-To: Your message of "Sat, 01 Mar 1997 12:27:13 EST." <3.0.32.19970301122709.00b1f390@etinc.com> 

> A better way of handling the overhead and the complexity issue is to
> keep local addresses in the routing table with a LOCAL flag.  This
> eliminates the overhead altogether and simplifies the process:

> route=routelookup()
> if (route.flags & localflag)
> 	deal_withit_locally()
> else
> 	forward()

> You can argue that in some cases this is more overhead for locally
> destined packets, but in today's world (where routing speed is a
> primary concern) this is faster most of the time and makes life
> easier as it centralizes the structure dependencies.
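
To make the quoted idea concrete, here is a minimal, self-contained C
sketch of that dispatch.  RTF_LOCAL and the structures below are
invented for the sketch, not the real kernel definitions; only the
shape of the control flow matters.

#include <stdio.h>

/*
 * Toy model of the flagged-route dispatch quoted above.  RTF_LOCAL
 * is an invented flag and struct rt_model a stand-in for a routing
 * table entry.
 */
#define RTF_LOCAL 0x1

struct rt_model {
        int flags;                      /* route flags, incl. RTF_LOCAL */
        /* ... gateway, interface, metrics ... */
};

static void deliver_locally(void) { printf("to protocol input\n"); }
static void forward(void)         { printf("to ip_forward()\n"); }

static void
ip_dispatch(const struct rt_model *rt)
{
        if (rt->flags & RTF_LOCAL)
                deliver_locally();      /* destined to one of our addresses */
        else
                forward();              /* normal forwarding path */
}

int
main(void)
{
        struct rt_model local = { RTF_LOCAL }, remote = { 0 };

        ip_dispatch(&local);
        ip_dispatch(&remote);
        return 0;
}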

The vast majority of hosts have one or two interfaces, so what you
suggest above is likely to be a `pessimization' -- route lookup is
not cheap.  Perhaps a different version of ip_input() can be used
depending on whether the host is acting as a router or not.  The
host version can use more efficient match code.
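
To be concrete about the host case: checking the destination against
the short list of configured local addresses is the sort of cheaper
match the host version could use instead of a route lookup.  A
standalone sketch, with invented names and made-up addresses, not the
actual ip_input() code:

#include <stdio.h>

/*
 * Toy model: with only one or two configured addresses, a linear
 * scan of them is cheaper than a full route lookup.
 */
struct ifaddr_model {
        unsigned long addr;             /* local IPv4 address, host order */
        struct ifaddr_model *next;
};

/* Return 1 if 'dst' is one of our own addresses, else 0. */
static int
is_for_us(const struct ifaddr_model *list, unsigned long dst)
{
        const struct ifaddr_model *ia;

        for (ia = list; ia != NULL; ia = ia->next)
                if (ia->addr == dst)
                        return 1;       /* deliver locally */
        return 0;                       /* hand off to the forwarding path */
}

int
main(void)
{
        struct ifaddr_model lo = { 0x7f000001UL, NULL };  /* 127.0.0.1 */
        struct ifaddr_model en = { 0xc0a80102UL, &lo };   /* 192.168.1.2 */

        printf("%d\n", is_for_us(&en, 0xc0a80102UL));   /* 1: local */
        printf("%d\n", is_for_us(&en, 0x0a000001UL));   /* 0: would be forwarded */
        return 0;
}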

Some comments on the topic of networking code optimization:
1) There are a number of areas where things are being done
   suboptimally in the networking code:
   - things are checked multiple times
   - mbuf manipulation is expensive (see the sketch after this list)
   - data is copied multiple times
   - things are `handed over' via queues more than once
   - the cost of manipulating certain data structures can
     scale more than linearly with increased use
   - certain things are done in mainline code when they should
     be done offline or via specialized functions
   - new code is bolted on top of the existing structure where
     it should have been added by simplifying/generalizing
     the existing structure
   [Too general?  Well, start with any 200 lines worth of functions
    and see how many things you can think of optimizing!]
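
To put the mbuf point in concrete terms, here is a standalone toy
model (not the real mbuf code) of what m_pullup() has to do when a
header straddles two chain segments: the bytes must first be copied
into contiguous storage before they can be examined at all.

#include <stdio.h>
#include <string.h>

/*
 * Toy model of a two-segment buffer chain.  Gathering a header that
 * straddles the segments forces a copy -- one of the hidden costs of
 * mbuf manipulation.  Structure and sizes are illustrative only.
 */
struct seg_model {
        const char *data;
        size_t len;
        struct seg_model *next;
};

/* Copy the first 'need' bytes of the chain into 'buf'; return count. */
static size_t
pullup_model(const struct seg_model *m, char *buf, size_t need)
{
        size_t got = 0;
        size_t n;

        for (; m != NULL && got < need; m = m->next) {
                n = m->len < need - got ? m->len : need - got;
                memcpy(buf + got, m->data, n);  /* the hidden copy */
                got += n;
        }
        return got;
}

int
main(void)
{
        /* A 20-byte "header" split 7 + 13 across two segments. */
        struct seg_model tail = { "HEADER-PART-2", 13, NULL };
        struct seg_model head = { "HDR-1..", 7, &tail };
        char hdr[20];

        if (pullup_model(&head, hdr, sizeof(hdr)) == sizeof(hdr))
                printf("%.20s\n", hdr);         /* header now contiguous */
        return 0;
}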

2) Profiling the networking code in a production environment ought
   to point out the `hot spots' where much of the time is spent
   under various conditions.  Measure various costs by changing data
   structure sizes and feeding similar traffic.  Find out *why* the
   hot spots are the way they are; gain a deeper understanding of
   the structure and behavior of the networking code.  Next optimize
   the hell out of the top N offenders.  Without this sort of
   _prioritization_ you are likely to spend much time on optimizing
   less travelled roads.  If and when such profiling is done,
   publish the results (and conditions under which the numbers were
   obtained).  Brainstorm about various solutions & analyze how they
   will stand up under load and in corner cases.

3) There is also the danger of destabilizing working code by making
   too many changes, which is another reason for making only those
   changes that yield substantial improvement and only after peer
   review.

-- bakul


