Date:      Fri, 25 Jul 1997 14:24:34 +0100 (BST)
From:      Stephen Roome <steve@visint.co.uk>
To:        Robert Shady <rls@mail.id.net>
Cc:        freebsd-isp@freebsd.org
Subject:   Re: FreeBSD Router
Message-ID:  <Pine.BSF.3.95.970725135536.2761K-100000@dylan.visint.co.uk>
In-Reply-To: <199707251244.IAA28861@server.id.net>


Warning: some guesswork/cheesy estimations contained ...

On Fri, 25 Jul 1997, Robert Shady wrote:
> My guess is that 32MB is not enough to hold a full routing table, or maybe
> if I was running NOTHING else...  From my calculations, 48MB would probably
> be okay, but memory is cheap as dirt right now, so...

Fair enough, it's not exactly a small table, is it!

> > Although you haven't said what rate of traffic you intend to route, and
> > I'm assuming something like one/two T1's through the SDL card. (is that
> > an N2/WANic card or something else?)
> 
> No, it's the old N2/ISA card.  I obviously would love to get full throughput
> on all of the ports if it's possible.. I am noticing a MAX of about ~250KBytes
> a second from ethernet -> ethernet right now...

I did use an N2/ISA for a while in a box here, but then moved to the
WANic; it's driving one E1 (2Mbit) line without problems, only one netcard
though... and our routing table is several thousand times smaller than
yours! The load never gets much higher than 0.0 ( :) ), and pushing as much
as I can through the line (e.g. with tcpblast to the other end) I get
about an extra 100 or so interrupts a second and about a 10-15% system
load. About 5% Intr shown in the vmstat display...

Don't know how well the ISA cards perform any more...

> 
> > I'm not sure why you'd want to go for two different sorts of cards either,
> > does the Ultra 16 have 10base2 or 5 or something..
> 
> Um, well.. There is only 2 PCI, 2 ISA, 1 VLB, and 1 shared/ISA-PCI slot..
> ISA(4) = 1 Video, 1 N2 card, 2 SMC Elita Ultra cards
> 
> Which leaves 2 usable PCI slots = 2 Intel ethernet cards... ;)

This goes back to the "get yourself a Pentium" argument again...

> 
> A little more info, here is the basic outline right now...  What is needed
> to get more interrupts/second? How can I tell if I'm maxing them out?
> 
>  8:43AM  up 1 day,  5:06, 1 user, load averages: 0.11, 0.09, 0.06

I'm not sure how to tell if you're maxing them out, but as I said, with
tcpblast I'm only getting an extra 100 or so interrupts per second. On my
P166 (this machine), running X, I can go from 0 to 500 interrupts per
second on sio0 just by shaking my mouse about (a lot!).

You'd have to try some stuff on your machine, but I've just had a remote
machine tcpblast this machine constantly while at the same time
tcpblasting localhost on this machine constantly, shaking the mouse, etc.,
and I got up to 1770 interrupts per second. (Yes, there's got to be a
slightly more experimental method of doing this, but this is quick and
easy!) That got me bursts of CPU load up to about 70% usage, which isn't
much really.
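
For what it's worth, a slightly less cheesy way of getting the figure might
be a little loop that diffs the Total counter from vmstat -i over a fixed
interval. Just a sketch, untested, and it assumes your vmstat -i output has
a "Total" line like the one further down:

    #!/bin/sh
    # Crude interrupts-per-second sampler: diff the "Total" counter from
    # vmstat -i every INTERVAL seconds, so you get the current rate rather
    # than the average since boot.  Run it while you generate load.
    INTERVAL=5
    prev=`vmstat -i | awk '/^Total/ {print $2}'`
    while :; do
            sleep $INTERVAL
            cur=`vmstat -i | awk '/^Total/ {print $2}'`
            rate=`expr \( $cur - $prev \) / $INTERVAL`
            echo "`date '+%H:%M:%S'`  $rate intr/sec"
            prev=$cur
    done

Leave that running in one window while you tcpblast from another machine
and you should catch the peak.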

God knows if I'm reaching some limit, but the tcpblast to localhost was
still getting 3.6MB/s, and the blast from the other machine a fairly
standard 1.0MB/s... (it's got a cheap ethernet card).

I don't think interrupts are likely to be a problem for you yet then,
although this is a P166 (32MB, but no big routing tables).

> last pid:  5296;  load averages:  0.20,  0.10,  0.06                   08:42:16
> 26 processes:  1 running, 25 sleeping
> CPU states:  1.2% user,  0.0% nice,  0.4% system, 14.3% interrupt, 84.2% idle
> Mem: 20M Active, 2944K Inact, 25M Wired, 11M Cache, 7097K Buf, 2760K Free
> Swap: 128M Total, 128K Used, 128M Free
> 
>   PID USERNAME PRI NICE SIZE    RES STATE    TIME   WCPU    CPU COMMAND
>  5296 root     28   0   640K   828K RUN      0:00  2.40%  1.37% top
>  5293 root     18   0   644K   900K pause    0:00  1.88%  1.22% tcsh
>  5292 root      2   0   200K   608K select   0:00  0.74%  0.50% telnetd
>   198 root     18   0   536K   636K pause    1:12  0.04%  0.04% httpd
>   841 root      2   0 14440K 14408K select   5:55  0.00%  0.00% gated
>   161 root     18   0   364K   400K pause    0:07  0.00%  0.00% cron
>    23 root     18   0   200K    56K pause    0:00  0.00%  0.00% adjkerntz
>     1 root     10   0   472K   164K wait     0:00  0.00%  0.00% init
>  3278 root      3   0   176K   536K ttyin    0:00  0.00%  0.00% getty
>   119 root      2   0   828K  1112K select   9:17  0.00%  0.00% ypserv
>   622 nobody    2   0   580K   820K select   0:00  0.00%  0.00% httpd
>   623 nobody    2   0   580K   808K select   0:00  0.00%  0.00% httpd
>   106 root      2   0   560K   704K select   0:00  0.00%  0.00% named
>  3438 root      2   0   260K   620K select   0:00  0.00%  0.00% radiusd.ascend
>   204 root      2   0   476K   596K select   1:03  0.00%  0.00% snmpd
>  3437 root      2   0   244K   584K select   0:00  0.00%  0.00% radiusd.ascend
>  1025 root      2   0   304K   520K select   1:43  0.00%  0.00% mrouted
>   101 root      2   0   200K   508K select   0:07  0.00%  0.00% syslogd
>   227 root      2   0   212K   496K select   1:04  0.00%  0.00% radiusd.living
>   113 root      2   0   480K   448K select   0:01  0.00%  0.00% timed
>   116 daemon    2   0   180K   448K select   0:01  0.00%  0.00% portmap
>   144 daemon    2   0   208K   440K sbwait   0:10  0.00%  0.00% rwhod
>   124 root      2   0   224K   424K select   0:00  0.00%  0.00% rpc.yppasswdd
> 
> # netstat -nr|wc -l
>    45189

Yup, that's large!
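
As a very rough sanity check on the memory side (guesswork again, but both
numbers come straight out of your own output: gated's 14440K resident set
from top, and the 45189 routes above):

    # Back-of-envelope cost per route in gated alone:
    awk 'BEGIN { printf "%d bytes/route\n", 14440 * 1024 / 45189 }'
    # prints roughly 327 bytes/route

and the kernel keeps its own copy of the table on top of that, so the real
per-route cost is a fair bit higher; presumably that's why 32MB starts to
look tight.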

> # vmstat -i
> interrupt      total      rate
> clk0 irq0    10462882       99
> rtc0 irq8    13388084      127
> pci irq9     13135083      125
> pci irq10        7100        0
> fdc0 irq6           1        0
> wdc0 irq14      49514        0
> sc0 irq1         1761        0
> ed0 irq5        29405        0
> ed1 irq7      4124821       39
> sr0 irq11    12338039      117
> Total        53536690      511

I could be wrong, but I believe this is the averaged-out figure since
boot, so a peak figure would be handy.
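
If you just want to watch it live rather than script it, these should give
the current rate instead of the since-boot average (column names from
memory, so forgive me if they're slightly off):

    vmstat 5          # the "in" column under "faults" is device interrupts
    systat -vmstat 5  # per-device interrupt rates plus the CPU "Intr" split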

> 
> # netstat -m
> 80 mbufs in use:
>         65 mbufs allocated to data
>         2 mbufs allocated to packet headers
>         10 mbufs allocated to protocol control blocks
>         3 mbufs allocated to socket names and addresses
> 64/208 mbuf clusters in use
> 426 Kbytes allocated to network (32% in use)
> 0 requests for memory denied
> 0 requests for memory delayed
> 0 calls to protocol drain routines

You could always up the amounts in the kernel... but this isn't looking
like it's about to cause you problems. On average the system will be fine,
but I can't tell whether it's going to handle the peak traffic; someone
here will try to do the maths, but that never works, as someone is bound
to lose a k or an M somewhere in the calculations again =)
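
If you did want to bump the pools, it's the usual kernel config business,
something along these lines (option names from memory, so check LINT before
trusting me):

    # in the kernel config file:
    maxusers        64                   # network table defaults scale with this
    options         "NMBCLUSTERS=2048"   # override the mbuf cluster count directly
    # then config, make depend, make, and install the new kernel as usual

but as above, your netstat -m numbers don't suggest you need it yet.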

--
Steve Roome - Vision Interactive Ltd.
Tel:+44(0)117 9730597 Home:+44(0)976 241342
WWW: http://dylan.visint.co.uk/



