From owner-freebsd-hardware  Thu Oct 26 10:35:16 2000
Delivered-To: freebsd-hardware@freebsd.org
Received: from aurora.sol.net (aurora.sol.net [206.55.65.76])
	by hub.freebsd.org (Postfix) with ESMTP id E7EBA37B479
	for ; Thu, 26 Oct 2000 10:35:13 -0700 (PDT)
Received: (from jgreco@localhost)
	by aurora.sol.net (8.9.3/8.9.2/SNNS-1.02) id MAA34054;
	Thu, 26 Oct 2000 12:35:12 -0500 (CDT)
From: Joe Greco
Message-Id: <200010261735.MAA34054@aurora.sol.net>
Subject: Re: Multiple PCI busses?
To: dmiller@search.sparks.net (David Miller)
Date: Thu, 26 Oct 2000 12:35:12 -0500 (CDT)
Cc: freebsd-hardware@freebsd.org, peter.jeremy@alcatel.com.au
In-Reply-To: from "David Miller" at Oct 26, 2000 11:57:37 AM
X-Mailer: ELM [version 2.5 PL3]
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-hardware@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

> > Why are you concerned about full 'net BGP tables? Are you really sending
> > data to all ~90,000 advertised routes out there simultaneously? Or is it
> > more likely that you're actively sending many packets to a few hundred?
>
> The box in question is intended for application at a NAP, feeding some
> packets (maybe a few thousand/sec) out for a local site. Chances are that
> over any small amount of time most of the packets heading through the box
> will be from a small set of networks.
>
> > With an average routetbl entry of ~136 bytes, that's very likely to at
> > least mostly make it into cache. A nice large cache should minimally
> > make a very large dent in main memory thrashing.
>
> You've probably got me here: I'd assume that the routing routines would
> have to do a tree search through the table to get the appropriate
> interface. Perhaps the significant nodes of the tree would be cached?
> Does FreeBSD support a route cache like Cisco does?

I'm not sure a "route cache" is all that meaningful. It's sort of a
band-aid fix for a slow CPU, slow RAM, and poor algorithms. (There's a
rough sketch of what the lookup actually involves at the end of this
message.)

Anyway, for comparison's sake, here's one of my FreeBSD BGP speakers.

# netstat 60
            input        (Total)           output
   packets  errs      bytes    packets  errs      bytes colls
    865576     3  509144121     863780     5  508110716     0

I pumped a little extra traffic through it :-)  The 79 IPFW rules that
mostly have to be parsed for each packet (yes, 79) push the interrupt
load to ~60% on the box. That's pretty respectable, IMO: 14426 pps,
84 Mbit/s. I do know for certain that if I remove the IPFW rules, this
thing pumps it out much faster. (The second sketch at the end of the
message shows why the rule count matters.)

The box itself is a K6-III-400 with 128MB RAM on an ASUS P/I-P55T2P4 3.1
with 512K of cache on the board, which provides an additional layer of
cache beyond what's on the CPU itself. There's a tag RAM too, of course.
:-)

# /usr/bin/time netstat -rn | wc -l
        9.24 real         3.25 user         3.52 sys
   93534

# ps agxuww | grep gated
root 96691 0.1 37.4 65232 48000 p2- S 12Oct00 160:07.15 /usr/local/sbin/gated -N

It's taking two IBGP sessions with other BGP speakers, and one BGP session
to a peer. Things get lively when something flaps :-)

Incidentally, the box has two 100Mbps Ethernet interfaces, which connect
it into my redundant OSPF-based network at this data center, and an OC3c
ATM connection for WAN links to other data centers and for the BGP peer
(ATM DS3).

This is arguably a taxed setup. With over a third of the machine's memory
"wired", I should probably have more like 192MB in it. But I'm cheap,
disk is cheap, and it doesn't matter much to me if it takes a minute for
routing to stabilize after a flap.
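To make the "tree search" a little more concrete: the kernel keeps the
routing table in a radix (PATRICIA) trie -- that's sys/net/radix.c -- and
every forwarded packet does a longest-prefix lookup in it. Below is a
deliberately dumbed-down sketch of that kind of lookup. It is not the
kernel code: it tests one bit per level instead of path-compressing, and
the structure and interface names are made up for illustration. The point
is that the top of the trie and the handful of "hot" leaves get touched on
nearly every lookup, so they tend to stay cache-warm on their own, without
any explicit route cache.

/*
 * Longest-prefix match in a plain binary trie -- a toy stand-in for the
 * path-compressed radix code the kernel really uses.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct rtnode {
	struct rtnode	*child[2];	/* next bit is 0 / 1 */
	const char	*ifname;	/* non-NULL if a prefix ends here */
};

static struct rtnode *
node_alloc(void)
{
	return calloc(1, sizeof(struct rtnode));
}

/* Insert a prefix (net/len) pointing at an interface name. */
static void
rt_insert(struct rtnode *root, uint32_t net, int len, const char *ifname)
{
	struct rtnode *n = root;

	for (int i = 0; i < len; i++) {
		int bit = (net >> (31 - i)) & 1;
		if (n->child[bit] == NULL)
			n->child[bit] = node_alloc();
		n = n->child[bit];
	}
	n->ifname = ifname;
}

/* Walk the trie, remembering the deepest node that carried a route. */
static const char *
rt_lookup(struct rtnode *root, uint32_t dst)
{
	struct rtnode *n = root;
	const char *best = root->ifname;	/* default route, if any */

	for (int i = 0; i < 32 && n != NULL; i++) {
		if (n->ifname != NULL)
			best = n->ifname;
		n = n->child[(dst >> (31 - i)) & 1];
	}
	if (n != NULL && n->ifname != NULL)
		best = n->ifname;		/* host route */
	return best;
}

int
main(void)
{
	struct rtnode *root = node_alloc();

	rt_insert(root, 0x00000000,  0, "atm0");	/* 0.0.0.0/0 default */
	rt_insert(root, 0xC0A80000, 16, "fxp0");	/* 192.168.0.0/16 */
	rt_insert(root, 0xC0A80100, 24, "fxp1");	/* 192.168.1.0/24 */

	printf("%s\n", rt_lookup(root, 0xC0A80105));	/* 192.168.1.5   -> fxp1 */
	printf("%s\n", rt_lookup(root, 0xC0A8FF01));	/* 192.168.255.1 -> fxp0 */
	printf("%s\n", rt_lookup(root, 0x08080808));	/* 8.8.8.8       -> atm0 */
	return (0);
}

Build it with cc and it prints fxp1, fxp0, atm0 for the three test
addresses -- the most specific matching prefix wins each time.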
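As for why 79 rules hurt: ipfw evaluates the rule list in order, in the
packet-processing (interrupt) path, until some rule decides the packet's
fate, so every packet pays for however many rules sit in front of the one
that finally matches it. Another toy sketch -- again not the real ip_fw
code, and the field names are invented -- just to show the shape of the
algorithm:

/*
 * First-match-wins packet filtering over a linear rule list.  The cost
 * per packet is the number of rules walked before a match.
 */
#include <stdio.h>
#include <stdint.h>

struct pkt {
	uint32_t	dst;		/* destination address, host order */
	uint16_t	dport;		/* destination port */
};

struct rule {
	uint32_t	net, mask;	/* match on destination network */
	uint16_t	dport;		/* 0 = any port */
	int		action;		/* 1 = pass, 0 = deny */
};

static int
fw_check(const struct rule *rules, int nrules, const struct pkt *p,
    int *rules_walked)
{
	for (int i = 0; i < nrules; i++) {
		(*rules_walked)++;
		if ((p->dst & rules[i].mask) != rules[i].net)
			continue;
		if (rules[i].dport != 0 && rules[i].dport != p->dport)
			continue;
		return (rules[i].action);	/* first match wins */
	}
	return (0);				/* implicit deny at the end */
}

int
main(void)
{
	/* 78 rules that won't match this packet, then rule 79 passes it. */
	struct rule rules[79];

	for (int i = 0; i < 78; i++)
		rules[i] = (struct rule){ 0x0A000000 + (i << 8), 0xFFFFFF00, 0, 0 };
	rules[78] = (struct rule){ 0x00000000, 0x00000000, 0, 1 };

	struct pkt p = { .dst = 0xC6336401, .dport = 80 };	/* 198.51.100.1:80 */
	int walked = 0;
	int verdict = fw_check(rules, 79, &p, &walked);

	printf("verdict %d after walking %d rules\n", verdict, walked);
	return (0);
}

In real life you can usually claw a lot of that back by putting a cheap
pass rule for established TCP near the top of the list, or by using skipto
so most packets hop over rule groups that can't apply to them; the fewer
rules the average packet has to walk, the less of this shows up as
interrupt load.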
-- 
... Joe
-------------------------------------------------------------------------------
Joe Greco - Systems Administrator                            jgreco@ns.sol.net
Solaria Public Access UNIX - Milwaukee, WI                        414/342-4847

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hardware" in the body of the message