From owner-freebsd-fs Tue Mar 20 10:12:02 2001
Delivered-To: freebsd-fs@freebsd.org
Received: from fw.wintelcom.net (ns1.wintelcom.net [209.1.153.20])
	by hub.freebsd.org (Postfix) with ESMTP id 0F33237B718;
	Tue, 20 Mar 2001 10:11:55 -0800 (PST)
	(envelope-from bright@fw.wintelcom.net)
Received: (from bright@localhost)
	by fw.wintelcom.net (8.10.0/8.10.0) id f2KI99W22661;
	Tue, 20 Mar 2001 10:09:09 -0800 (PST)
Date: Tue, 20 Mar 2001 10:09:09 -0800
From: Alfred Perlstein
To: "Michael C . Wu"
Cc: izero@ms26.hinet.net, cross@math.psu.edu, "Michael C . Wu",
	dillon@FreeBSD.ORG, grog@FreeBSD.ORG, fs@FreeBSD.ORG,
	hackers@FreeBSD.ORG
Subject: Re: tuning a VERY heavily (30.0) loaded server
Message-ID: <20010320100909.T29888@fw.wintelcom.net>
References: <20010320111144.A51924@peorth.iteration.net>
	<20010320092717.R29888@fw.wintelcom.net>
	<20010320113818.B52586@peorth.iteration.net>
	<20010320120112.C52586@peorth.iteration.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5i
In-Reply-To: <20010320120112.C52586@peorth.iteration.net>; from
	keichii@iteration.net on Tue, Mar 20, 2001 at 12:01:12PM -0600
X-all-your-base: are belong to us.
Sender: owner-freebsd-fs@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

* Michael C . Wu [010320 10:01] wrote:
> MRTG graph at
> http://zoonews.ee.ntu.edu.tw/mrtg/zoo.html
>
> | FreeBSD zoo.ee.ntu.edu.tw 4.2-STABLE FreeBSD 4.2-STABLE
> | #0: Tue Mar 20 11:10:46 CST 2001 root@:/usr/src/sys/compile/SimFarm i386
> |
> | System stats at
> | http://zoo.ee.ntu.edu.tw/~keichii/
> |
> | md0/MFS is used for caching the articles that BBS users read.
> | They often read the same articles over and over again, and we
> | find that a 128MB MFS/md0 gets about a 70% hit rate.
> |
> | When our MFS/md0 fills up after long use, the box easily dies.
> | (We clean the MFS from cron, but sometimes the load shoots up
> | for no reason and the cleanup can't run in time.) If we don't
> | do this cache, the data for the bulletin boards
>
> Another problem is that we have around 4000+ processes accessing
> lots of SHM at the same time.

How much SHM?  That is, what's the combined size of all the segments
on the system?  (See the ipcs sketch appended below.)

You can make SHM non-pageable, which saves a lot of memory across the
attached processes.  Your sources need to be newer than this date and
include this revision:

    Revision 1.3.2.3 / (download) - annotate - [select for diffs],
    Sun Dec 17 02:05:41 2000 UTC (3 months ago) by alfred
    Branch: RELENG_4
    Changes since 1.3.2.2: +37 -32 lines
    Diff to previous 1.3.2.2 (colored) to branchpoint 1.3 (colored)
    next main 1.4 (colored)

    MFC: phys_pager fix for multiple segments

Then set kern.ipc.shm_use_phys=1 (sketch below).

> The *UGLY* source code for the BBS is at
> http://zoo.ee.ntu.edu.tw/~keichii/zoo_bbsd_src.tgz

'Tis ok, maybe I'll look at it later, though. :)

> We can only provide crash dumps to trusted people because of the
> thousands of passwords in the dump.

Heh. :)

-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message
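
To put a number on the "how much SHM" question above, the SysV shared
memory segments can be listed with ipcs(1) and their sizes summed by
hand.  A sketch only, assuming the stock FreeBSD ipcs flags and
kern.ipc sysctl names; the exact output layout can differ between
versions:

    # List SysV shared memory segments; -b adds each segment's size.
    ipcs -m -b

    # Compare the total against the kernel's configured SHM limits.
    sysctl kern.ipc.shmmax kern.ipc.shmmni kern.ipc.shmall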
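
Enabling the non-pageable SHM discussed above is a one-line sysctl
once the phys_pager fix is in the kernel.  A minimal sketch; the
assumption is that the setting only affects segments created after it
is flipped, so long-running processes would need to recreate their
segments to benefit:

    # Wire SysV shared memory (no paging) for newly created segments.
    sysctl -w kern.ipc.shm_use_phys=1

    # /etc/sysctl.conf -- make the setting stick across reboots.
    kern.ipc.shm_use_phys=1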
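
For the 128MB MFS article cache and its cron cleanup described in the
quoted mail, something like the following would match that setup on
4.x.  A sketch under stated assumptions: the swap device, mount point,
and expiry policy are all made up here, and mount_mfs takes -s in
512-byte sectors (262144 * 512 = 128MB):

    # /etc/fstab -- 128MB swap-backed MFS for the article cache.
    /dev/da0s1b  /var/bbs/cache  mfs  rw,-s=262144  0  0

    # root's crontab -- hourly, expire articles not read for a day,
    # so the MFS is less likely to fill up under load.
    0 * * * *  find /var/bbs/cache -type f -atime +1 -exec rm -f {} \;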