From: Andrew Gallatin <gallatin@cs.duke.edu>
Date: Tue, 20 Aug 2002 13:56:26 -0400 (EDT)
To: Luigi Rizzo
Cc: freebsd-net@FreeBSD.ORG
Subject: Re: m_getcl and end-to-end performance
Message-ID: <15714.33482.820805.887447@grasshopper.cs.duke.edu>
In-Reply-To: <20020820093939.B48541@iguana.icir.org>
References: <15714.27671.533860.408996@grasshopper.cs.duke.edu> <20020820093939.B48541@iguana.icir.org>

Luigi Rizzo writes:
 > On Tue, Aug 20, 2002 at 12:19:35PM -0400, Andrew Gallatin wrote:
 > >
 > > The current code for stocking the mcl_pool is located in m_freem().
 > > This is fine for forwarding; however, the most commonly used receive
 > > path in soreceive() frees mbufs via m_free() (uipc_socket.c:868 in
 > > today's -stable).  This means that on a machine which is an endpoint,
 > > rather than a forwarder, the mcl_pool will spend much of its time
 > > empty.
 > >
 > > Is there any reason why the mcl_pool is not stocked in m_free()
 > > rather than m_freem()?
 >
 > a couple, both of which are probably rather weak:
 >
 > #1 my (mis)assumption that m_free() was mostly unused;
 > #2 the assumption (this one possibly more correct) that the mbufs
 >    freed by the socket layer do not have M_PKTHDR set, so when it
 >    comes to initialize the mcl_pool from these ones you have more
 >    work to do.

At least on the recv side, M_PKTHDR will still be set by the time that
m_free{,m}() is called.

Speaking of M_PKTHDR: why is the pool optimization restricted to
pkthdr mbufs?  A legitimate way to allocate a jumbo frame is to
allocate 4 clusters, only the first of which will have M_PKTHDR set.

It seems like not limiting it to M_PKTHDR would be just as efficient,
as you could avoid a compare in the critical path at the cost of
changing

	mp->m_flags = M_PKTHDR|M_EXT;

to

	mp->m_flags = flags|M_EXT;

On the free side, you add the compare back, though.

 > My impression is that it might be useful to do the following:
 > + expand MFREE() in the body of m_freem(), thus saving the extra
 >   function call at each iteration of m_freem() (which is a cost
 >   paid by all drivers);

This makes a lot of sense.

 > + rewrite m_free() in terms of m_freem(), either as a function or
 >   maybe a macro (for critical paths -- not sure how often it is
 >   used in critical paths);

I'm missing something here.  Isn't m_freem() implemented in terms of
m_free() now?

 > now if you have patches i'll be happy to have a look at them.

Not yet.
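To make the idea concrete, here is roughly what I have in mind: a
completely untested sketch, only meant to illustrate the point.  The
mcl_pool / mcl_pool_count / mcl_pool_max names are the ones I remember
from -stable's uipc_mbuf.c, and mcl_recyclable() is just a placeholder
for whatever test m_freem() already applies before it stocks the pool
(plain cluster, single reference, no ext_free routine); locking,
statistics, and pkthdr cleanup are left out:

    struct mbuf *
    m_free(struct mbuf *m)
    {
            struct mbuf *n;

            if ((m->m_flags & M_EXT) && mcl_recyclable(m) &&
                mcl_pool_count < mcl_pool_max) {
                    /*
                     * Keep the cluster attached and park the pair in
                     * the pool.  Note there is no M_PKTHDR test here;
                     * m_getcl() reinitializes the header on the way
                     * out and would set m_flags = flags|M_EXT as
                     * described above.  Whatever pkthdr cleanup
                     * m_free() does today would still have to happen
                     * before parking the mbuf.
                     */
                    n = m->m_next;
                    m->m_next = NULL;
                    m->m_nextpkt = mcl_pool;
                    mcl_pool = m;
                    mcl_pool_count++;
            } else {
                    MFREE(m, n);    /* the usual path */
            }
            return (n);
    }

The compare this adds to the free path is the one saved in m_getcl(),
as described above.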
I'm still fighting the pkthdr issue before I can see how much (or even
if) it helps.

BTW, I'm glad somebody else still cares about performance ;)

Drew