From owner-freebsd-net@FreeBSD.ORG Sun Mar 30 10:34:21 2008
Date: Sun, 30 Mar 2008 11:34:21 +0100 (BST)
From: Robert Watson <rwatson@FreeBSD.org>
To: Alexander Motin
Cc: freebsd-hackers@freebsd.org, FreeBSD Net
Subject: Re: Multiple netgraph threads
In-Reply-To: <47EF4F18.502@FreeBSD.org>
Message-ID: <20080330112846.Y5921@fledge.watson.org>
References: <47EF4F18.502@FreeBSD.org>
List-Id: Networking and TCP/IP with FreeBSD

On Sun, 30 Mar 2008, Alexander Motin wrote:

> I have implemented a patch (for HEAD) making netgraph use several of its
> own threads for event queue processing, instead of using the single
> swinet thread.  It should significantly improve netgraph SMP scalability
> on complicated workloads that require queueing, whether due to the
> implementation (PPTP/L2TP) or to stack size limitations.  It works
> perfectly on my UP system, showing results close to the original or even
> a bit better.  I have no real SMP test server to measure real
> scalability, but a test on an HTT CPU works fine, utilizing both virtual
> cores at a predictable level.  Reviews and feedback are welcome.
> URL: http://people.freebsd.org/~mav/netgraph.threads.patch

FYI, you might be interested in some similar work I've been doing in the
rwatson_netisr branch in Perforce, which:

(1) Adds per-CPU netisr threads.

(2) Introduces inpcb affinity, assigned using a hash on the tuple (similar
    to RSS) to load balance.

(3) Moves to rwlock use on the inpcb and pcbinfo locks, used extensively
    in UDP and somewhat in TCP.

My initial leaning would be that we want to avoid adding too many more
threads that do per-packet work, as that leads to excessive context
switching.  I have similar worries regarding ithreads, and I suspect
(hope?) we will have an active discussion of this at the BSDCan developer
summit.

BTW, I'd be careful with the term "should" and SMP -- often, scalability
to multiple cores involves not just introducing the opportunity for
parallelism, but also significantly refining or changing the data model so
that the parallelism can be used efficiently.  Right now, despite loopback
performance being a bottleneck with a single netisr thread, we're not
seeing a performance improvement for database workloads over loopback with
multiple netisr threads.  We're still diagnosing this -- initially it
appeared to be tcbinfo lock contention (not surprising), but moving to
read locking on tcbinfo didn't appear to help (except that reduced
contention led to more idle time rather than more progress).  The current
theory is that something about the approach isn't interacting well with
the scheduler, but we need to analyze the scheduler traces further.

My recommendation would be that you do a fairly thorough performance
evaluation before committing.

Robert N M Watson
Computer Laboratory
University of Cambridge
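[Editor's note: to illustrate point (2) above, here is a minimal userspace
sketch of tuple-based flow affinity.  All names (`flow_tuple`,
`netisr_select`, `NETISR_NTHREADS`) are hypothetical, and a simple FNV-1a
hash stands in for the keyed Toeplitz hash that real RSS uses; the point
is only that the same 4-tuple always maps to the same thread, preserving
per-flow packet ordering while spreading flows across threads.]

```c
#include <stddef.h>
#include <stdint.h>

#define NETISR_NTHREADS 4  /* hypothetical number of netisr threads */

/* Hypothetical 4-tuple; fields are illustrative, not the actual
 * struct inpcb layout. */
struct flow_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

/* FNV-1a over the tuple bytes.  Real RSS uses a Toeplitz hash keyed
 * with a random secret to resist hash-collision attacks. */
static uint32_t
tuple_hash(const struct flow_tuple *t)
{
    uint32_t h = 2166136261u;           /* FNV offset basis */
    const uint8_t *p = (const uint8_t *)t;

    for (size_t i = 0; i < sizeof(*t); i++) {
        h ^= p[i];
        h *= 16777619u;                 /* FNV prime */
    }
    return (h);
}

/* Pick a thread for a flow: identical tuples always land on the same
 * thread, so a connection's packets are processed in order. */
static int
netisr_select(const struct flow_tuple *t)
{
    return ((int)(tuple_hash(t) % NETISR_NTHREADS));
}
```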
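[Editor's note: point (3) above, the move from mutexes to rwlocks on the
inpcb/pcbinfo locks, can be sketched as follows.  This is a userspace
analogy: `pthread_rwlock_t` stands in for FreeBSD's kernel rw(9) lock, and
`conn_count` is a placeholder for the shared PCB state; the names are
illustrative.  The scalability argument is that lookups (the common case
on the packet path) can proceed in parallel under the read lock, while
only connection setup/teardown needs the exclusive write lock.]

```c
#include <pthread.h>

/* Stand-in for the global pcbinfo lock and the state it protects. */
static pthread_rwlock_t pcbinfo_lock = PTHREAD_RWLOCK_INITIALIZER;
static int conn_count;

/* Lookup path: many threads may hold the read lock concurrently,
 * which is where the parallelism win comes from. */
static int
pcb_lookup(void)
{
    int n;

    pthread_rwlock_rdlock(&pcbinfo_lock);
    n = conn_count;                 /* read-only access to PCB state */
    pthread_rwlock_unlock(&pcbinfo_lock);
    return (n);
}

/* Connection setup/teardown: exclusive write lock, serializing
 * against all readers. */
static void
pcb_insert(void)
{
    pthread_rwlock_wrlock(&pcbinfo_lock);
    conn_count++;                   /* mutate shared PCB state */
    pthread_rwlock_unlock(&pcbinfo_lock);
}
```

[As the message notes, read locking alone did not help the database
workload: removing contention only exposed idle time, suggesting the
bottleneck had moved elsewhere, e.g. into scheduler behavior.]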