From owner-freebsd-current  Fri Aug  2 02:36:49 1996
Return-Path: owner-current
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3) id CAA29613 for current-outgoing; Fri, 2 Aug 1996 02:36:49 -0700 (PDT)
Received: from minnow.render.com (render.demon.co.uk [158.152.30.118]) by freefall.freebsd.org (8.7.5/8.7.3) with SMTP id CAA29608 for ; Fri, 2 Aug 1996 02:36:43 -0700 (PDT)
Received: from minnow.render.com (minnow.render.com [193.195.178.1]) by minnow.render.com (8.6.12/8.6.9) with SMTP id KAA25179; Fri, 2 Aug 1996 10:33:26 +0100
Date: Fri, 2 Aug 1996 10:33:24 +0100 (BST)
From: Doug Rabson
To: Terry Lambert
cc: jkh@time.cdrom.com, tony@fit.qut.edu.au, freebsd-current@FreeBSD.ORG
Subject: Re: NFS Diskless Dispare...
In-Reply-To: <199608011802.LAA04239@phaeton.artisoft.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-current@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

On Thu, 1 Aug 1996, Terry Lambert wrote:

> Mountd is far from being concurrent enough.  At one time, back in
> the 1.1.5.1 days, I had it hacked up sufficiently to allow NFS
> access by 20 or so X terminals, all at the same time.  I think
> this is kludgable by hacking the timeout up for now.  Mountd wants
> a bit of a rewrite once real threading is generally available.

Wouldn't that need to wait until a threaded rpc library was available...

> > > I think that this is a more generic NFS bug in -current.  I can
> > > reproduce this, even causing mountd to silently exit (no core, no
> > > syslog msg) with just one client and some fierce AMD-assisted pounding
> > > on a 2.2-current NFS server.
>
> I think the exit is a separate problem.  I'd be curious about what
> you could find out from a trace of the process started before it dies.
>
> > > > 2. File permissions are read incorrectly.  Files that should be able to
> > > > be executed are giving "permission denied" messages..
> > > > Sometimes even the kernel can't be loaded by netboot.com but if you
> > > > persist by typing "autoboot" it will magically start to work.
> > > > Machines fail to boot correctly as programs called in /etc/rc don't
> > > > start (permission denied).
> > >
> > > Probably more NFS bogosity.
>
> [ ... ]
>
> > I think for diskless root filesystems, you must export the fs with
> > -root=0, otherwise lots of stuff will break.
>
> [ this is true, but it's not the cause ]
>
> > > > 3. Paging in of binaries causes the system to panic.  Vnode_pager does
> > > > not seem to like it when it can't page in executables, even when the
> > >
> > > See #2.  :-)
> >
> > Probably paging from a file which root can't access (see above).
>
> Actually, I think it's the problem in vop_bmap for nfs that David noted
> the other day.

Which problem is this?  I have always been slightly worried about the
hacky nature of the nfs_bmap code (basically just multiplies b_lblkno
by 16 or so, depending on the fs blocksize).  The higher level fs code
seems to try to figure out whether to call VOP_BMAP by comparing
b_blkno to b_lblkno and mapping if they are equal.  For NFS, they will
always be equal for the first block of the file.  I didn't think it
would be a problem since it would just call nfs_bmap a bit more often
for that block.

> > > 2.1.5?  Its NFS is still unstable, but I don't believe anywhere near
> > > the state it's in with -current.
> >
> > I think some of the stability problems with NFS are due to its lack of
> > vnode locking primitives.  This might be addressed by the lite2 fs work
> > but if not, I will try to get something in after that work is merged.
>
> The NFS, procfs, and several other non-boot-critical FS's didn't have
> the new primitives in the patch sets we've seen so far.  I don't think
> they will have much positive effect on this problem, but there are three
> or four other problems that will clear up (mostly two client race
> conditions).
I think the worst races would be between VOP_READ or VOP_WRITE and
vclean.  I think that you could cause real damage with one of those :-).

--
Doug Rabson, Microsoft RenderMorphics Ltd.	Mail:  dfr@render.com
						Phone: +44 171 251 4411
						FAX:   +44 171 251 0939