From: Andrew Pantyukhin <infofarmer@gmail.com>
To: FreeBSD Questions <freebsd-questions@FreeBSD.ORG>
Date: Sat, 4 Feb 2006 22:48:48 +0300
Subject: Trouble with resources under network load

I've got a P4 box with 256 MB of RAM. I want it to be able to forward
5 Mbit/s between 500 PPTP clients (no crypto/compression) and our ISP.
I understand we should probably get a Cisco for this, or at least a
higher-spec box, but I just want this setup to be a kind of proof of
concept.
Complicated things can be done using cheap hardware and a good OS.
Can't they?

=================================================================

It happens that I run named, smbd/nmbd and dhcpd (serving only 50
clients) until we set up additional boxes. The load is pretty mild
(I cut getty out):

last pid: 33780;  load averages: 0.01, 0.01, 0.00   up 3+20:30:42  22:38:17
28 processes:  1 running, 27 sleeping
CPU states:  0.4% user, 0.0% nice, 0.4% system, 0.7% interrupt, 98.5% idle
Mem: 33M Active, 119M Inact, 65M Wired, 8304K Cache, 33M Buf, 1456K Free
Swap: 453M Total, 8K Used, 453M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
  265 root        1  96    0  2048K  1472K select  23:45  0.00% natd
24733 root        1  96    0  7196K  5332K select   3:06  0.00% mpd
  585 root        1  96    0  3640K  1464K select   1:03  0.00% nmbd
  400 bind        1  96    0 10764K  9392K select   0:53  0.00% named
  806 root        1  96    0  2944K  2568K select   0:34  0.00% bsnmpd
  391 root        1  96    0  1352K   800K select   0:12  0.00% syslogd
  501 root        1  96    0  3052K  1436K select   0:10  0.00% ntpd
  563 dhcpd       1  96    0  2960K  2328K select   0:03  0.00% dhcpd
  531 root        1   8    0  1360K   948K nanslp   0:01  0.00% cron
  589 root        1  96    0  5800K  2448K select   0:00  0.00% smbd
  518 root        1  96    0  3552K  2048K select   0:00  0.00% sshd
33750 root        1   4    0  6300K  2576K sbwait   0:00  0.00% sshd
33757 root        1  20    0  3996K  2500K pause    0:00  0.00% csh
33752 sat         1  96    0  6296K  2892K select   0:00  0.00% sshd
33753 sat         1  20    0  3736K  2444K pause    0:00  0.00% tcsh
33756 sat         1   8    0  1656K  1184K wait     0:00  0.00% su
33780 root        1  96    0  2280K  1344K RUN      0:00  0.00% top
  358 root        1  97    0   508K   264K select   0:00  0.00% devd
  604 root        1  20    0  5800K  2448K pause    0:00  0.00% smbd
  163 root        1  20    0  1216K   576K pause    0:00  0.00% adjkerntz

=================================================================

I'm constantly stumbling upon some out-of-resources problems.
Just to name a couple:

named[400]: client 10.32.23.92#1714: error sending response: not enough free resources
snmpd[806]: sysctl get: Cannot allocate memory

=================================================================

I have these in loader.conf and sysctl.conf:

kern.maxfiles=65536
kern.maxfilesperproc=65536
net.graph.maxdgram=65536
net.graph.recvspace=65536
kern.maxusers=512
kern.ipc.maxpipekva=268435456
net.graph.maxalloc=65536

=================================================================

I get these when trying to diagnose:

gw# uname -a
FreeBSD gw.campus.gubkin.ru 6.0-RELEASE-p4 FreeBSD 6.0-RELEASE-p4 #4:
Wed Feb  1 01:13:45 MSK 2006
sat@gw.campus.gubkin.ru:/usr/obj/usr/src/sys/CAMPUS-GW  i386

gw# netstat -m
67/1178/1245 mbufs in use (current/cache/total)
64/134/198/33792 mbuf clusters in use (current/cache/total/max)
0/4/8704 sfbufs in use (current/peak/max)
144K/562K/707K bytes allocated to network (current/cache/total)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
82 calls to protocol drain routines

gw# netstat -s
0 output packets dropped due to no bufs, etc.

gw# sysctl -a | grep socket
kern.ipc.numopensockets: 691
kern.ipc.maxsockets: 33792

=================================================================

What's wrong?
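In case the placement of those tunables matters, this is roughly how I understand the split between the two files; I'm not completely sure it's right, so corrections welcome. As far as I can tell, kern.maxusers, kern.ipc.maxpipekva and net.graph.maxalloc are boot-time tunables that only take effect from loader.conf, while the rest are ordinary sysctls that can go in sysctl.conf:

```
# /boot/loader.conf -- boot-time tunables (fixed once the kernel is up)
kern.maxusers=512              # scales many kern.ipc.* limits (e.g. maxsockets)
kern.ipc.maxpipekva=268435456  # kernel VA reserved for pipes
net.graph.maxalloc=65536       # max netgraph items

# /etc/sysctl.conf -- settable at runtime
kern.maxfiles=65536
kern.maxfilesperproc=65536
net.graph.maxdgram=65536       # netgraph socket max datagram size
net.graph.recvspace=65536      # netgraph socket receive space
```

If a loader-only tunable sits in sysctl.conf it is silently ignored at boot, which would explain limits not actually going up.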