From: Andrew Gallatin <gallatin@cs.duke.edu>
Date: Fri, 2 Mar 2007 11:14:51 -0500 (EST)
To: Andre Oppermann
Cc: freebsd-net@freebsd.org, freebsd-current@freebsd.org, rwatson@freebsd.org, kmacy@freebsd.org
Subject: Re: New optimized soreceive_stream() for TCP sockets, proof of concept
Message-ID: <17896.19835.258246.284397@grasshopper.cs.duke.edu>
In-Reply-To: <45E8276D.60105@freebsd.org>

Andre Oppermann writes:
> Instead of the unlock-lock dance, soreceive_stream() pulls a properly
> sized chunk (relative to the receive system call's buffer space) from
> the socket buffer, drops the lock, and gives copyout as much time as
> it needs.  In the meantime the lower half can happily add as many new
> packets as it wants without having to wait for the lock.  It also
> allows the upper and lower halves to run on different CPUs without
> much interference.  There is an unsolved nasty race condition in the
> patch, though.

Excellent.  This sounds very exciting!

> Any testing, especially on 10Gig cards, and feedback appreciated.

I'll try to test sometime soon, but possibly not until next week.
Is there any particular config you're interested in?  If not, I'll
just compare the pre/post-patch performance of a fast (Linux) sender
to an SMP (FreeBSD) receiver, using the default "out of the box"
settings for both jumbo and standard MTUs.

Drew
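
[Editorial note: below is a minimal user-space sketch of the locking
pattern Andre describes, for readers following the thread.  It is not
the actual patch; the names, buffer sizes, and pthread primitives are
illustrative stand-ins for the kernel's socket-buffer lock and
copyout(), and the race condition Andre mentions is not modeled.  The
point of the pattern: the consumer detaches a chunk from the shared
buffer while holding the lock, then performs the slow copy with the
lock released, so the producer never blocks behind the copy.]

/* cc -o sorecv_sketch sorecv_sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BUF_MAX 65536

static pthread_mutex_t sb_lock = PTHREAD_MUTEX_INITIALIZER;
static char   sb_data[BUF_MAX];   /* stand-in for the socket buffer */
static size_t sb_len;             /* bytes currently queued */
static int    done;

/* Lower half: appends MSS-sized "segments" under the lock. */
static void *
producer(void *arg)
{
        (void)arg;
        for (int i = 0; i < 1000; i++) {
                pthread_mutex_lock(&sb_lock);
                size_t n = sizeof(sb_data) - sb_len;
                if (n > 1460)
                        n = 1460;
                /* If the buffer is full, the segment is silently
                   truncated or dropped; good enough for a sketch. */
                memset(sb_data + sb_len, 'x', n);
                sb_len += n;
                pthread_mutex_unlock(&sb_lock);
        }
        pthread_mutex_lock(&sb_lock);
        done = 1;
        pthread_mutex_unlock(&sb_lock);
        return (NULL);
}

/* Upper half: detach a chunk under the lock, copy it out unlocked. */
static void *
consumer(void *arg)
{
        static char chunk[BUF_MAX], userbuf[BUF_MAX];
        size_t total = 0;

        (void)arg;
        for (;;) {
                pthread_mutex_lock(&sb_lock);
                if (sb_len == 0 && done) {
                        pthread_mutex_unlock(&sb_lock);
                        break;
                }
                size_t n = sb_len;        /* take everything queued */
                memcpy(chunk, sb_data, n);
                sb_len = 0;
                pthread_mutex_unlock(&sb_lock);

                /* The slow copy (the copyout stand-in) runs with the
                   lock released, so the producer keeps appending.  A
                   real implementation would sleep when the buffer is
                   empty instead of busy-polling as this loop does. */
                memcpy(userbuf, chunk, n);
                total += n;
        }
        printf("consumed %zu bytes\n", total);
        return (NULL);
}

int
main(void)
{
        pthread_t p, c;

        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return (0);
}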