Date: Mon, 10 Apr 2006 20:19:33 +0200
From: Nicolas KOWALSKI <Nicolas.Kowalski@imag.fr>
To: freebsd-fs@FreeBSD.org
Subject: Re: [patch] giant-less quotas for UFS
Message-ID: <vqoacate1je.fsf@corbeau.imag.fr>
In-Reply-To: <443A97F9.8090601@centtech.com>
References: <20060329152608.GB1375@deviant.kiev.zoral.com.ua> <vqoy7ydv7lw.fsf@corbeau.imag.fr> <20060410144904.GC1408@deviant.kiev.zoral.com.ua> <vqou091v3vt.fsf@corbeau.imag.fr> <443A7C8E.4020203@centtech.com> <vqopsjpv2ci.fsf@corbeau.imag.fr> <443A8842.6060802@centtech.com> <vqolkudv09k.fsf@corbeau.imag.fr> <443A97F9.8090601@centtech.com>
Eric Anderson <anderson@centtech.com> writes:

> Nicolas KOWALSKI wrote:
>> Eric Anderson <anderson@centtech.com> writes:
>>
>>> Nicolas KOWALSKI wrote:
>>>> Yes, this is exactly what is happening. To add some precision, some
>>>> students here use calculation applications that allocate a lot of
>>>> disk space, usually more than their allowed home quotas; when by
>>>> mistake they launch these apps in their home directories, instead
>>>> of their workstation's dedicated space, it brings the server to its
>>>> knees on the NFS client side.
>>>
>>> When you say 'to its knees' - what do you mean exactly? How many
>>> clients do you have, how much memory is on the server, and how many
>>> nfsd threads are you using? What kind of load average do you see
>>> during this (on the server)?
>>
>> Sorry for the imprecision.
>>
>> The server is a dual-Xeon 2.8 GHz with 2 GB of RAM, using SCSI3
>> Ultra320 76 GB disks and a matching controller. It is accessed over
>> NFS by ~100 Unix (Linux, Solaris) clients, and over Samba by ~15
>> Windows XP machines. The network connection is gigabit Ethernet.
>>
>> During slowdowns, the server appears unresponsive only from the NFS
>> client side. For example, a simple 'ls' in my home directory is
>> normally almost immediate, but during a slowdown it can take up to
>> 2 minutes. On the server, the load average rises to 0.5, compared to
>> a usual 0.15-0.20. top shows the nfsd processes in the "biowr"
>> state, but nothing is actually being written, because the quota
>> system blocks any further writes by the user exceeding his or her
>> quota.
>
> In this case (which is what I suspected), try bumping up your nfsd
> threads to 128. I set mine very high (I have around 1000 clients),
> and I can say there aren't really ill effects besides a bit of memory
> usage (which you have plenty of). I suspect increasing the threads
> will neutralize this problem for you.

Thanks for your suggestion. However, I am apparently not able to
change the default.
I stopped the nfsd master process (kill -USR1, as described in the
manpage), then started it again:

pave# nfsd -t -u -n 128
nfsd: nfsd count 128; reset to 4

What am I forgetting here?

Thanks,
--
Nicolas
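[A brief sketch of how the thread count is usually configured on FreeBSD: rather than restarting nfsd by hand, the flags are normally set in /etc/rc.conf so they survive reboots. The file path below is a stand-in used so the snippet can run anywhere; on a real server the lines would go in /etc/rc.conf, and the accepted range for -n should be checked against nfsd(8) on the installed release.]

```shell
# Sketch: persist the nfsd flags the FreeBSD way, via rc.conf.
# /tmp/rc.conf.example stands in for /etc/rc.conf in this illustration.
RC_CONF=/tmp/rc.conf.example

cat >> "$RC_CONF" <<'EOF'
nfs_server_enable="YES"
nfs_server_flags="-t -u -n 128"
EOF

# Show what was configured.
grep '^nfs_server' "$RC_CONF"
```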