From owner-freebsd-hackers Fri Sep 26 01:01:20 1997
Return-Path:
Received: (from root@localhost) by hub.freebsd.org (8.8.7/8.8.7) id BAA18995 for hackers-outgoing; Fri, 26 Sep 1997 01:01:20 -0700 (PDT)
Received: from usr04.primenet.com (tlambert@usr04.primenet.com [206.165.6.204]) by hub.freebsd.org (8.8.7/8.8.7) with ESMTP id BAA18990 for ; Fri, 26 Sep 1997 01:01:18 -0700 (PDT)
Received: (from tlambert@localhost) by usr04.primenet.com (8.8.5/8.8.5) id BAA17357; Fri, 26 Sep 1997 01:01:10 -0700 (MST)
From: Terry Lambert
Message-Id: <199709260801.BAA17357@usr04.primenet.com>
Subject: Re: problem compiling for linux under compat_linux
To: mike@smith.net.au (Mike Smith)
Date: Fri, 26 Sep 1997 08:01:09 +0000 (GMT)
Cc: bartol@salk.edu, freebsd-hackers@FreeBSD.ORG
In-Reply-To: <199709260412.NAA00737@word.smith.net.au> from "Mike Smith" at Sep 26, 97 01:42:41 pm
X-Mailer: ELM [version 2.4 PL23]
Content-Type: text
Sender: owner-freebsd-hackers@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

> > If I use /compat/linux/usr/bin/gcc to compile anything other than very
> > trivial C program sources located on an NFS-mounted filesystem, I get
> > broken executables which seg fault. This same source code compiles
> > correctly when located on a local filesystem. This problem does not
> > occur when compiling trivial sources such as "Hello World".

How do you make the compiler invoke the right linker in this case?
Do you hack your path?

I've never thought running a Linux compiler in a compat directory
would work, because it would invoke FreeBSD pieces for its hidden
components. Are they really all referenced off the user's path?!?
Are they all branded Linux executables, so they will go through
compat first on their path lookups?

> It's not clear to me how this could be an emulation-related problem
> just yet. (Possibly an mmap() incompatibility?)

Maybe. Do they mmap() the objects in ld now?
Editorial on cache thrashing by mmap() in ld:

This was a big screwup when USL first did it, and if GNU ld now does
it, it's a big screwup for them, too. On UnixWare, it thrashed the
buffer cache to death, forcing all other inodes' buffers out. The X
server went to hell. It's a pretty trivial denial-of-service attack.

The fix is pretty trivial, too: set a per-file "working set quota",
and when you go over the quota on a particular vnode, you take pages
from it (they have to be LRU'ed off the vnode) instead of getting
them from the system.

This could be a big pessimization, though, for, say, a standalone
news server that isn't competing with other processes for cache, or
a big executable with poor locality (like, oh, an X server?). A
kludge for the executable case is to not do it for programs with
VEXEC set. A kludge for the news server is to enforce the working
set restriction based on per-process settings (a la login.conf).

This doesn't fall down (far, anyway) if one process with a big quota
and one with a small quota are hitting the same file, since you only
take the last page on the LRU (neither process is likely to ask for
it), and you only take it when the process wants a new page. So the
in-core working set size will grow to the highest process quota, and
the lowest-quota process will still recycle pages. I suppose you
could have a low-quota process that buzzes pages still, which would
LRU off the higher quota's pages. The only way to really fix that is
to chain the processes using the vnode off the vnode, and give the
vnode a quota field it inherits from the highest-quota'ed process.
They wouldn't let me add another pointer to the proc struct in
UnixWare.

It's still susceptible to the problem, since my working set code was
turned down in favor of giving the X server a new scheduling class
("fixed") and letting it swap pages back in when it needs them. Bah
Humbug. 8-(.
Terry Lambert
terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.