From: Graham Allan <allan@physics.umn.edu>
Date: Wed, 03 Apr 2013 23:33:11 -0500
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs home directories best practice
Message-ID: <515D0287.2060704@physics.umn.edu>
In-Reply-To: <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca>

On 4/3/2013 6:56 PM, Rick Macklem wrote:
>>
> Well, there isn't any limit to the # of exported file systems afaik,
> but updating a large /etc/exports file takes quite a bit of time, and
> when you use mountd (the default) for this, you can have problems.
> (You either have a period of time when no client can get a response
> from the server, or a period of time when I/O fails because the
> file system isn't re-exported yet.)
>
> If you choose this approach, you should look seriously at using
> nfse (on SourceForge) instead of mountd.

That's an interesting-looking project, though I'm beginning to think
that unless there's some serious downside to the "one big filesystem"
approach, I should just defer per-user filesystems to the system after
this one. As you remind me below, I'll probably have other issues to
chase down besides that one (performance, as well as making the jump to
NFSv4...).

> You might also want to contact Garrett Wollman w.r.t. the NFS
> server patch(es) and setup he is using, since he has been
> working through performance issues (relatively successfully
> now, as I understand it) for a fairly large NFS/ZFS server.
> You should be able to find a thread discussing this on
> freebsd-fs or freebsd-current.

I found the thread "NFS server bottlenecks" on freebsd-hackers, which
has a lot of interesting reading, and then also "NFS DRC size" on
freebsd-fs. We might dig into some of that material (e.g. the
DRC-related patches), though I probably need to spend more time on the
basics first (kernel parameters, number of nfsd threads, etc.).

Thanks for the pointers,

Graham
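
[Editor's note: a minimal sketch of the two layouts being weighed above,
assuming a hypothetical pool named "tank" mounted at /home and example
share options; it is not taken from the thread. The point it illustrates
is that with per-user datasets each one becomes its own NFS export, so
the export list (and every mountd/nfse reload) grows with the number of
accounts, whereas one big filesystem stays a single export.]

    # One big filesystem: a single dataset, a single export.
    zfs create -o mountpoint=/home tank/home
    zfs set sharenfs="-maproot=root -network 192.168.0.0/24" tank/home

    # Per-user filesystems: child datasets inherit sharenfs from
    # tank/home, so every account added or removed changes the export
    # list and forces the exports to be reloaded.
    for u in alice bob carol; do
        zfs create tank/home/$u
        zfs set quota=20G tank/home/$u
    done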
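
[Editor's note: likewise, a rough sketch of the "basics first" tuning
mentioned at the end, with placeholder values. The nfsd thread count is
set via nfsd's -n flag in rc.conf; the sysctls are the duplicate request
cache (DRC) knobs discussed in the "NFS DRC size" thread and exist only
on kernels with the new NFS server.]

    # /etc/rc.conf -- serve NFS over UDP and TCP, with the nfsd thread
    # count raised from the default (128 is only an example).
    rpcbind_enable="YES"
    mountd_enable="YES"
    nfs_server_enable="YES"
    nfsv4_server_enable="YES"
    nfsuserd_enable="YES"
    nfs_server_flags="-u -t -n 128"

    # /etc/sysctl.conf -- raising tcphighwater lets the DRC grow larger
    # before being trimmed, trading memory for less CPU spent pruning;
    # tcpcachetimeo shortens how long cached replies are kept (seconds).
    vfs.nfsd.tcphighwater=10000
    vfs.nfsd.tcpcachetimeo=300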