From owner-freebsd-net@FreeBSD.ORG Sat Feb 11 17:45:15 2006
Date: Sat, 11 Feb 2006 17:48:04 +0000 (GMT)
From: Robert Watson <rwatson@FreeBSD.org>
To: dima <_pppp@mail.ru>
Cc: Marcos Bedinelli, freebsd-net@freebsd.org
Message-ID: <20060211174648.X90460@fledge.watson.org>
Subject: Re: Network performance in a dual CPU system

On Sat, 11 Feb 2006, dima wrote:

>> The system is mainly being used as a dedicated router. It runs OSPF,
>> BGP and IPFW (around 150 rules). OSPF and BGP are managed by Quagga.
>> The box has 2 gigabit interfaces that handle on average 200Mbps -
>> 50K packets/s (inbound and outbound combined), each one of them.
>
> The second CPU wouldn't help you for sure. There's only one [swi1: net]
> kernel thread which deals with all the kernel traffic. The option of
> per-CPU [swi: net] threads was discussed on freebsd-arch@ several
> months ago, but it won't be implemented soon. So, the only hardware
> option is installing the fastest CPU possible.
>
> There are several software (FreeBSD-specific) options though:

If you set net.isr.direct=1, the netisr workload is moved from the netisr
thread to the thread performing the dispatch -- typically, the ithread. If
you have multiple interfaces and they are assigned different ithreads, then
the work can occur in parallel. However, there are some other properties of
this setting that are important, so it affects different workloads in
different ways.

Robert N M Watson

> 1. You should surely try polling(4). 50kpps means 50000 interrupts and
> the same number of context switches, which are quite expensive.
>
> 2. FastForwarding. It's the most suitable option for you. As far as I
> know, Quagga inserts its dynamic routes into the system routing table,
> and FastForwarding is aware of the routing table and firewall rules.
> Best of all, you can switch it on/off without a reboot:
>
> # sysctl net.inet.ip.fastforwarding=1
>
> The only limitation is that it applies to IPv4 unicast traffic only.
> There's no documentation on this feature as far as I know (am I wrong,
> or should I report this as a documentation bug?), but you can look at
> the comments at the beginning of /sys/netinet/ip_fastfwd.c
>
> The authors reported up to 1Mpps (see page 10 at
> http://people.freebsd.org/~andre/FreeBSD-5.3-Networking.pdf)
>
>> Some of you have asked for the following information:
>>
>> - As I indicated before, polling is currently disabled.
>>
>> - Hyperthreading (HTT) is disabled.
>>
>> mull [~]$vmstat -i
>> interrupt                          total       rate
>> irq1: atkbd0                        3466          0
>> irq6: fdc0                            10          0
>> irq13: npx0                            1          0
>> irq14: ata0                           47          0
>> irq21: fxp1                     20462527          8
>> irq28: bge0                   3511765157       1444
>> irq29: bge1                   3633124373       1494
>> irq30: aac0                      1842472          0
>> cpu0: timer                    566751007        233
>> Total                         7733949060       3181
>>
>> mull [~]$netstat -m
>> 644/646/1290 mbufs in use (current/cache/total)
>> 643/407/1050/17088 mbuf clusters in use (current/cache/total/max)
>> 0/5/4528 sfbufs in use (current/peak/max)
>> 1447K/975K/2422K bytes allocated to network (current/cache/total)
>> 0 requests for sfbufs denied
>> 0 requests for sfbufs delayed
>> 0 requests for I/O initiated by sendfile
>> 0 calls to protocol drain routines
>>
>> Thank you,
>>
>> --
>> Marcos
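
For anyone wanting to experiment with the net.isr.direct setting described
above, a minimal sketch, assuming a 5.x/6.x-era kernel where the sysctl is
present under that name (the netisr knobs were reorganized in later
releases, so check "sysctl net.isr" on your release first):

  # sysctl net.isr.direct=1

and, to keep it across reboots, the same assignment (without the prompt)
can go in /etc/sysctl.conf:

  net.isr.direct=1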
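
For dima's polling(4) suggestion, a sketch of the usual setup from that
era, assuming DEVICE_POLLING is not already compiled in and that the
bge(4) driver on the release in use supports polling (driver coverage
varied at the time; see polling(4)). Add to the kernel configuration,
then rebuild and reboot:

  options DEVICE_POLLING
  options HZ=1000

On 6.x and later, polling is then enabled per interface:

  # ifconfig bge0 polling
  # ifconfig bge1 polling

while older releases used a global sysctl instead:

  # sysctl kern.polling.enable=1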
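
If the fastforwarding experiment works out, a sketch of how it is usually
made permanent, assuming the stock rc framework and nothing specific to
this box: gateway_enable="YES" in /etc/rc.conf already covers
net.inet.ip.forwarding for a router, and the fastforwarding sysctl can be
set at boot from /etc/sysctl.conf:

  net.inet.ip.fastforwarding=1

It can still be switched back off at runtime with
"sysctl net.inet.ip.fastforwarding=0" if anything misbehaves.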