From owner-freebsd-current Fri Jan 30 19:14:11 1998
Return-Path:
Received: (from majordom@localhost) by hub.freebsd.org (8.8.8/8.8.8) id TAA01157 for current-outgoing; Fri, 30 Jan 1998 19:14:11 -0800 (PST) (envelope-from owner-freebsd-current@FreeBSD.ORG)
Received: from lamb.sas.com (root@lamb.sas.com [192.35.83.8]) by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id TAA01145 for ; Fri, 30 Jan 1998 19:14:07 -0800 (PST) (envelope-from jwd@unx.sas.com)
Received: from mozart (markham.southpeak.com [192.58.185.8]) by lamb.sas.com (8.8.7/8.8.7) with SMTP id WAA08729 for ; Fri, 30 Jan 1998 22:14:04 -0500 (EST)
Received: from iluvatar.unx.sas.com by mozart (5.65c/SAS/Domains/5-6-90) id AA09181; Fri, 30 Jan 1998 22:14:04 -0500
From: "John W. DeBoskey"
Received: by iluvatar.unx.sas.com (5.65c/SAS/Generic 9.01/3-26-93) id AA16700; Fri, 30 Jan 1998 22:14:03 -0500
Message-Id: <199801310314.AA16700@iluvatar.unx.sas.com>
Subject: NFS v3 3.0-Current performance questions
To: freebsd-current@FreeBSD.ORG
Date: Fri, 30 Jan 1998 22:14:02 -0500 (EST)
Cc: jwd@unx.sas.com (John W. DeBoskey)
X-Mailer: ELM [version 2.4 PL23]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-current@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG
X-To-Unsubscribe: mail to majordomo@FreeBSD.org "unsubscribe current"

Hello,

   I have a series of NFS v3 performance-related questions which I would
like to present.

The setup: six 266 MHz Pentium II machines with 128 MB of memory each,
connected to a Network Appliance file server via NFS v3. Running
3.0-980128-SNAP.

Problem: This system runs a distributed make process which accesses (at
minimum) 2564 header (.h) files spread across 20+ directories located on
the file server. I would like to buffer as much as possible (if not all)
of the directory information for these files, along with the file
contents themselves.

Questions:

   Is it possible to tune the amount of cached directory information for
the NFS v3 protocol?

   Is it possible to tune the amount of file data block content which is
cached?

   From nfsstat, note the number of BioR hits and misses from simply
running a job which continuously cats the files (of course, I may be
misinterpreting the output):

Cache Info:
Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
    69997      9172     43654      8700     10006     12674         0         0

fyi: The time to cat all the files to /dev/null, 3 iterations:

        6.78s real     0.05s user     1.42s system
        6.82s real     0.03s user     1.41s system
        6.78s real     0.01s user     1.46s system

   ( I'd like to cut this by at least 50% :-)

   Using the default (NBUF=0) causes nbuf to acquire the value 3078 on
each system. I have set NBUF=8196 with no real performance gain, so I
don't think this is the right direction. Comments?

   I believe I can get a big gain if I can simply reduce the number of
BioR misses. Again, comments?

   From a performance-testing standpoint, it would be nice if we could
add a 'clear the counters' option to nfsstat so that root could reset
the statistics to zero. Comments?
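   Something along the following lines is what I have in mind. This is a
completely untested sketch: it assumes the statistics still live in the
kernel as a single struct nfsstats reachable through the _nfsstats
symbol (the same way nfsstat itself finds them), and the exact header
list may need adjusting for -current.

/*
 * zero_nfsstats.c -- rough, untested sketch of a "clear the counters"
 * helper for nfsstat.  Assumption: the NFS statistics are kept in one
 * struct nfsstats in the kernel, exported as the symbol "_nfsstats",
 * and root may write it back through libkvm.  Header names may differ
 * slightly between releases.
 */
#include <sys/param.h>
#include <sys/mount.h>

#include <nfs/rpcv2.h>
#include <nfs/nfsproto.h>
#include <nfs/nfs.h>

#include <fcntl.h>
#include <kvm.h>
#include <limits.h>
#include <nlist.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static struct nlist nl[] = {
	{ "_nfsstats" },	/* assumed kernel symbol, as used by nfsstat */
	{ NULL }
};

int
main(void)
{
	kvm_t *kd;
	struct nfsstats zero;
	char errbuf[_POSIX2_LINE_MAX];

	/* Open the running kernel read/write; requires root. */
	kd = kvm_openfiles(NULL, NULL, NULL, O_RDWR, errbuf);
	if (kd == NULL) {
		fprintf(stderr, "kvm_openfiles: %s\n", errbuf);
		exit(1);
	}

	/* Locate the statistics structure in the kernel. */
	if (kvm_nlist(kd, nl) != 0 || nl[0].n_value == 0) {
		fprintf(stderr, "cannot find _nfsstats in the kernel\n");
		exit(1);
	}

	/* Overwrite the whole statistics structure with zeroes. */
	memset(&zero, 0, sizeof(zero));
	if (kvm_write(kd, nl[0].n_value, &zero, sizeof(zero)) != sizeof(zero)) {
		fprintf(stderr, "kvm_write: %s\n", kvm_geterr(kd));
		exit(1);
	}

	kvm_close(kd);
	return (0);
}

   The same kvm_write wired into nfsstat itself as a flag would let us
take clean before/after snapshots around each make run.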
   I do not believe I need to add memory to these boxes either. Note the
number of free VM pages in the following vmstat output.

$ vmstat -s
   127855 cpu context switches
   877149 device interrupts
    52881 software interrupts
    11625 traps
    91484 system calls
        0 swap pager pageins
        0 swap pager pages paged in
        0 swap pager pageouts
        0 swap pager pages paged out
      243 vnode pager pageins
     1165 vnode pager pages paged in
        0 vnode pager pageouts
        0 vnode pager pages paged out
        0 page daemon wakeups
        0 pages examined by the page daemon
        0 pages reactivated
     4287 copy-on-write faults
     3173 zero fill pages zeroed
        5 intransit blocking page faults
    12564 total VM faults taken
    20366 pages freed
        0 pages freed by daemon
     5237 pages freed by exiting processes
      296 pages active
     1657 pages inactive
     1243 pages in VM cache
     3427 pages wired down
    25252 pages free
     4096 bytes per page
    79741 total name lookups
          cache hits (81% pos + 0% neg) system 0% per-directory
          deletions 0%, falsehits 0%, toolong 0%
$

   Any and all comments are welcome.

Thanks,
John

--
jwd@unx.sas.com    (w) John W. De Boskey    (919) 677-8000 x6915