From: Mike Meyer <mwm@mired.org>
To: Daniel Taylor
Cc: freebsd-hackers@freebsd.org
Subject: Re: tcp connection splitter
Date: Thu, 12 Apr 2007 16:00:26 -0400
Message-ID: <17950.36826.926845.213901@bhuda.mired.org>
In-Reply-To: <20070412190849.63355.qmail@web27705.mail.ukl.yahoo.com>

In <20070412190849.63355.qmail@web27705.mail.ukl.yahoo.com>, Daniel
Taylor typed:
> data/second), a lot of memcpy()s, and doesn't scale
> very well. Also, adding a packet to N queues is
> expensive because it needs to acquire and release
> N mutex locks (one for each client queue.)

You can't escape that with this architecture. In particular:

> Each enqueue bumps the refcount, each dequeue decreases it;
> when the refcount drops to 0, the packet is free()'d
> (by whoever happened to dequeue it last).

These operations have to be locked as well, so adding a packet to N
queues costs you N+1 lock/unlock cycles: one around the refcount
update, plus one for each client queue's mutex. There's a sketch of
the pattern in the first code fragment after my sig.

The FSM model already suggested works well, though I tend to call it
the async I/O model, because all your I/O is done asynchronously: you
track the state of each socket, and events on the socket trigger
state transitions for that socket. The programming for a single
execution path is a bit more complicated, because the state has to be
tracked explicitly instead of being implicit in the program counter,
but *all* the concurrency issues go away, so overall it's a win. The
second fragment after my sig is a bare-bones skeleton of that model.

--
Mike Meyer <mwm@mired.org>		http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
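
Here is the N+1 locking pattern as a minimal, compilable sketch. All
of the names (struct packet, struct queue, broadcast(),
packet_release()) are invented for illustration; this isn't Daniel's
code, and error handling is only roughed in:

    #include <pthread.h>
    #include <stdlib.h>

    struct packet {
        pthread_mutex_t lock;       /* guards refcnt */
        int refcnt;
        void *data;
    };

    struct qnode {                  /* one entry on one client's queue */
        struct qnode *next;
        struct packet *pkt;
    };

    struct queue {
        pthread_mutex_t lock;       /* guards head/tail */
        struct qnode *head, *tail;
    };

    /* Allocate a packet with its refcount mutex initialized. */
    struct packet *
    packet_alloc(void *data)
    {
        struct packet *p = malloc(sizeof(*p));

        if (p != NULL) {
            pthread_mutex_init(&p->lock, NULL);
            p->refcnt = 0;
            p->data = data;
        }
        return (p);
    }

    /* Drop one reference; whoever drops the last one frees the packet. */
    void
    packet_release(struct packet *p)
    {
        int last;

        pthread_mutex_lock(&p->lock);
        last = (--p->refcnt == 0);
        pthread_mutex_unlock(&p->lock);
        if (last) {
            pthread_mutex_destroy(&p->lock);
            free(p->data);
            free(p);
        }
    }

    /* Add one packet to n client queues: n+1 lock/unlock cycles. */
    void
    broadcast(struct packet *p, struct queue *clients, int n)
    {
        int i;

        pthread_mutex_lock(&p->lock);   /* the "+1": one refcount bump */
        p->refcnt += n;
        pthread_mutex_unlock(&p->lock);

        for (i = 0; i < n; i++) {       /* the "n": one lock per queue */
            struct qnode *qn = malloc(sizeof(*qn));

            if (qn == NULL) {
                packet_release(p);      /* give back this queue's ref */
                continue;
            }
            qn->next = NULL;
            qn->pkt = p;
            pthread_mutex_lock(&clients[i].lock);
            if (clients[i].tail != NULL)
                clients[i].tail->next = qn;
            else
                clients[i].head = qn;
            clients[i].tail = qn;
            pthread_mutex_unlock(&clients[i].lock);
        }
    }

Note that broadcast() bumps the refcount once by n rather than once
per enqueue; bumping per-enqueue, as described upthread, would make
the add path 2N lock cycles instead of N+1. Either way, every
consumer hits the refcount mutex again on dequeue, so the contention
doesn't go away.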
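
And here's the async I/O / FSM version, as a one-thread skeleton
around kqueue(2), since this is FreeBSD. Again, the state enum,
struct conn, handle_event() and make_conn() are hypothetical names,
and accept() handling, buffering, and the actual protocol are elided:

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <err.h>
    #include <stdlib.h>
    #include <unistd.h>

    enum conn_state { CS_READ_HEADER, CS_READ_BODY, CS_WRITE_REPLY };

    struct conn {
        int fd;
        enum conn_state state;  /* explicit, not implicit in the PC */
        /* per-connection buffers and offsets would live here */
    };

    /* One readiness event drives one state-machine step for one socket. */
    static void
    handle_event(struct conn *c, struct kevent *ev)
    {
        if (ev->flags & EV_EOF) {       /* peer went away: tear down */
            close(c->fd);
            free(c);
            return;
        }
        switch (c->state) {
        case CS_READ_HEADER:
            /* read(2) what's there; once the header is complete,
               set c->state = CS_READ_BODY */
            break;
        case CS_READ_BODY:
            /* when the body is complete, switch the filter to
               EVFILT_WRITE and set c->state = CS_WRITE_REPLY */
            break;
        case CS_WRITE_REPLY:
            /* write(2) as much as the socket will take */
            break;
        }
    }

    int
    main(void)
    {
        struct kevent ev[64];
        int kq, nev, i;

        if ((kq = kqueue()) == -1)
            err(1, "kqueue");

        /*
         * For each new socket you'd do something like:
         *
         *    struct kevent ch;
         *    struct conn *c = make_conn(fd);     (hypothetical)
         *    EV_SET(&ch, fd, EVFILT_READ, EV_ADD, 0, 0, c);
         *    kevent(kq, &ch, 1, NULL, 0, NULL);
         */
        for (;;) {
            nev = kevent(kq, NULL, 0, ev, 64, NULL);
            if (nev == -1)
                err(1, "kevent");
            for (i = 0; i < nev; i++)
                handle_event(ev[i].udata, &ev[i]);
        }
    }

The point of the exercise: there isn't a single lock in it, because
only one thing is ever running. What you pay is that every place a
threaded version would block becomes an explicit state.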