Date:      Sat, 01 Mar 2008 12:31:23 +0100
From:      Willem Jan Withagen <wjw@digiware.nl>
To:        Ingo Flaschberger <if@xip.at>
Cc:        "Daniel Dias Gonçalves" <daniel@dgnetwork.com.br>, freebsd-net@freebsd.org, freebsd-performance@freebsd.org, Kevin Oberman <oberman@es.net>
Subject:   Re: FBSD 1GBit router?
Message-ID:  <47C93E8B.3010609@digiware.nl>
In-Reply-To: <alpine.LFD.1.00.0803010137041.13659@filebunker.xip.at>
References:  <20080226003107.54CD94500E@ptavv.es.net> <alpine.LFD.1.00.0802260132240.9719@filebunker.xip.at> <47C8964C.9080309@digiware.nl> <alpine.LFD.1.00.0803010137041.13659@filebunker.xip.at>
Ingo Flaschberger wrote:
>>> I have a 1.2GHz Pentium-M appliance with 4x 32-bit, 33MHz PCI
>>> Intel e1000 cards. With maximum tuning I can "route" ~400mbps
>>> with big packets and ~80mbps with 64-byte packets, around 100kpps,
>>> which is not bad for a PCI architecture.
>>>
>>> To reach higher bandwidths, better busses are needed. PCI-Express
>>> cards are currently the best choice. One dedicated PCI-Express
>>> lane (1.25gbps) has more bandwidth than a whole 32-bit, 33MHz
>>> PCI bus.
>>
>> Like you say, routing 400Mb/s is close to the max of the PCI bus,
>> which has a theoretical max of 33*4*8 ~ 1Gbps. Routed traffic
>> crosses the bus twice, so the ceiling is roughly 500Mb/s in,
>> 500Mb/s out. At 400Mb/s you are within 80% of the bus max, not
>> counting memory access and other overhead.
>
> yes.
>
>> PCI-Express will give you a bus per PCI-E device into a central
>> hub, thus upping the limit to the speed of the FrontSideBus in
>> Intel architectures, which at the moment is a lot higher than what
>> a single PCI bus does.
>
> That's why my next router will be based on this box:
> http://www.axiomtek.com/products/ViewProduct.asp?view=429

Nice piece of hardware. I don't like the 2.5", single-disk option
though, and I'm not sure what to think of:
   "Seven 10/100/1000Mbps (through PCI-E by one interface) ports (RJ-45)"
which seems to suggest everything comes in through one PCI-E interface.
That had better have 8 or 16 lanes then.

> Hopefully there will be direct memory-bus-connected NICs in the future
> (HyperTransport-connected NICs).

Well, that is going to be an AMD-only solution, and I'm not even sure
that AMD would like to have anything other than CPUs on that bus.

>> What it does not explain is why you can only get 80Mb/s with 64-byte
>> packets, which would suggest other bottlenecks than just the bus.
>
> Perhaps something with interrupts:
> http://books.google.at/books?id=pr4fspaQqZkC&pg=PA144&lpg=PA144&dq=pci+interrupt+delay&source=web&ots=zbvVU2CgVx&sig=APe9YjdtK35ccnow7BDI2hzie7s&hl=de#PPA144,M1
>
> MSI (Message Signaled Interrupts) is not very common on the PCI
> architecture; PCI-E uses only MSI.
>
> The kpps always stayed around 100, regardless of whether I used
> fast-forwarding, fast interrupts, or HZ values higher than 1000.

MSI is not used on regular PCI busses; it could be that PCI-E does use
it, I'll take your word on that. But even then I'd like to know where
the bottleneck is behind the 100kpps limit with 64-byte packets.

> But 100kpps is great for router hardware of about 600 EUR.

I've seen routers ten times that expensive that are not able to do that.

--WjW
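For reference, a small back-of-the-envelope sketch in C of the numbers
discussed above: the theoretical 32-bit/33MHz PCI bandwidth, the bus load
when routing 400Mb/s, and what ~100kpps of 64-byte frames amounts to on the
wire. The 400Mb/s and ~100kpps figures are taken from the thread; everything
else is plain arithmetic, not measured data.

	/*
	 * pcibw.c -- back-of-the-envelope numbers for the 32-bit/33MHz
	 * PCI discussion above.  The 400Mb/s and ~100kpps figures come
	 * from the thread; the rest is simple arithmetic.
	 */
	#include <stdio.h>

	int main(void)
	{
		/* Theoretical peak of a shared 32-bit, 33MHz PCI bus. */
		double pci_mbps = 33.0 * 32.0;		/* ~1056 Mbit/s */

		/* Routed traffic crosses the bus twice:
		 * NIC -> memory, then memory -> NIC. */
		double routed_mbps = 400.0;
		double bus_load = 2.0 * routed_mbps / pci_mbps;

		/* A 64-byte frame occupies 84 bytes of wire time
		 * (frame + preamble + inter-frame gap). */
		double frame_bits = 84.0 * 8.0;
		double observed_pps = 100000.0;
		double observed_mbps = observed_pps * frame_bits / 1e6;
		double bus_limit_pps = (pci_mbps / 2.0) * 1e6 / frame_bits;

		printf("PCI peak:                   %.0f Mbit/s\n", pci_mbps);
		printf("Bus load at 400Mb/s routed: %.0f%%\n",
		    bus_load * 100.0);
		printf("100kpps of 64-byte frames:  %.1f Mbit/s on the wire\n",
		    observed_mbps);
		printf("Bus-limited small-packet rate: ~%.0f kpps\n",
		    bus_limit_pps / 1000.0);
		return 0;
	}

With these assumptions the bus alone would allow several hundred kpps of
64-byte packets, which is consistent with the point made above that the
100kpps ceiling points at per-packet overhead (interrupt handling) rather
than raw bus bandwidth.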