From owner-freebsd-arch  Tue Mar  4 15:27:55 2003
Date: Tue, 4 Mar 2003 18:27:42 -0500 (EST)
From: Andrew Gallatin <gallatin@cs.duke.edu>
To: arch@freebsd.org
Cc: Sean Chittenden
Subject: Re: Should sendfile() to return ENOBUFS?
In-Reply-To: <3E64FEA0.CCA21C7@imimic.com>
References: <3E64FEA0.CCA21C7@imimic.com>
Message-ID: <15973.13934.694598.353417@grasshopper.cs.duke.edu>
Sender: owner-freebsd-arch@FreeBSD.ORG

Alan L. Cox writes:
 > Sean,
 >
 > The current sf_buf implementation has a simple problem that could
 > account for your frequent blocking.  Let me describe an extreme
 > example that will make it clear.  Suppose you have a web server that
 > delivers nothing but a single file of 8 pages, or 32K bytes of data,
 > to its clients.  Here's the punchline: if you had 1,000 concurrent
 > requests, you could wind up allocating 8,000 sf_bufs.
 > Given that the main purpose of the sf_buf is simply to provide an
 > in-kernel virtual address for the page, one sf_buf per page should
 > suffice.  Sf_bufs are already reference counted.  So, the principal
 > change would be to add a directory data structure that could answer
 > the question "Does this page already have an allocated sf_buf?"

In a reply I previously sent privately to Alan, I suggested:

One off-the-cuff idea would be to trade the u_int cow field of a
vm_page for a struct sf_buf *sf_buf pointer, and to move the cow field
into the sf_buf.  That way, the sendfile and zero-copy code could find
the relevant sf_buf without doing any hashing beyond what they already
do to find the page.  If page->sf_buf == NULL, an sf_buf is alloc'ed
off the free list and page->sf_buf = new_sfbuf; otherwise, its
refcount is incremented.  The vm_fault() code would change so that it
first checked for a non-NULL sf_buf, then checked the cow count in the
sf_buf.

This increases the size of a vm_page by 4 bytes on a 64-bit platform
(or maybe 8, depending on padding), but should not affect the 32-bit
platforms.  There'd also be a 4-byte size increase per sf_buf, but the
decrease in the number of sf_bufs in flight should more than make up
for the bloat.

Alan suggested that once this was done:

alc> the next step would be to manage sf_buf's as a sort of "mapping
alc> cache".  This could reduce the number of TLB shootdowns on SMPs;
alc> and on 64-bit architectures we should be using the "mapping of
alc> all RAM".

Unfortunately, neither Alan nor I have any time to implement this.  Is
there any interest in this idea?  Anybody like it enough to implement
it?

Drew
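For readers skimming the archive, here is a minimal user-space sketch of the scheme proposed above.  All names are hypothetical stand-ins (a toy fixed-size pool, simplified structs), not the real FreeBSD vm_page/sf_buf definitions or locking: if the page already has an sf_buf, bump its refcount; otherwise take one off the free list and attach it to the page.

```c
/* Sketch only: hypothetical, simplified user-space model of the
 * "one sf_buf per page" idea.  No locking, no KVA management. */
#include <assert.h>
#include <stddef.h>

struct sf_buf;

struct vm_page {
	struct sf_buf	*sf_buf;	/* replaces the old u_int cow field */
};

struct sf_buf {
	struct vm_page	*page;		/* page this buffer maps */
	unsigned int	 refcnt;	/* shared by all users of the mapping */
	unsigned int	 cow;		/* cow count, moved here from vm_page */
};

static struct sf_buf sf_buf_pool[8];	/* toy stand-in for the free list */
static size_t sf_buf_pool_next;

/*
 * Return the page's existing sf_buf, or allocate a fresh one.  No
 * directory hashing is needed: the caller already holds the vm_page,
 * and the pointer in the page leads straight to the mapping.
 */
struct sf_buf *
sf_buf_alloc(struct vm_page *m)
{
	struct sf_buf *sf = m->sf_buf;

	if (sf == NULL) {
		sf = &sf_buf_pool[sf_buf_pool_next++];	/* off the free list */
		sf->page = m;
		sf->refcnt = 0;
		sf->cow = 0;
		m->sf_buf = sf;
	}
	sf->refcnt++;
	return (sf);
}

void
sf_buf_free(struct sf_buf *sf)
{
	if (--sf->refcnt == 0) {
		/* Last user; detach.  (A "mapping cache" would instead
		 * keep the mapping around for later reuse.) */
		sf->page->sf_buf = NULL;
		sf->page = NULL;
	}
}
```

Under this sketch, Alan's extreme example needs only 8 sf_bufs for the 8-page file no matter how many of the 1,000 requests are in flight, since every request past the first just increments the per-page refcount.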