From owner-freebsd-arch Wed May 16 15:11:12 2001
Delivered-To: freebsd-arch@freebsd.org
Received: from midten.fast.no (midten.fast.no [213.188.8.11])
	by hub.freebsd.org (Postfix) with ESMTP id B1B4037B42C
	for ; Wed, 16 May 2001 15:11:09 -0700 (PDT)
	(envelope-from Tor.Egge@fast.no)
Received: from fast.no (IDENT:tegge@midten.fast.no [213.188.8.11])
	by midten.fast.no (8.9.3/8.9.3) with ESMTP id AAA02889;
	Thu, 17 May 2001 00:11:06 +0200 (CEST)
Message-Id: <200105162211.AAA02889@midten.fast.no>
To: dillon@earth.backplane.com
Cc: arch@FreeBSD.ORG
Subject: Re: on load control / process swapping
From: Tor.Egge@fast.no
In-Reply-To: Your message of "Wed, 16 May 2001 14:35:39 -0700 (PDT)"
References: <200105162135.f4GLZdo78984@earth.backplane.com>
X-Mailer: Mew version 1.70 on Emacs 19.34.1
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Date: Thu, 17 May 2001 00:11:05 +0200
Sender: owner-freebsd-arch@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

> Question number 2.  You have this:
>
>         error = cluster_read(vp, ip->i_size, lbn,
> -                   size, NOCRED, uio->uio_resid, seqcount, &bp);
> +                   size, NOCRED, blkoffset + uio->uio_resid, seqcount, &bp);
>
> What is the blkoffset adjustment for?  Is that a bug fix for
> something else?

lbn doesn't reflect the least significant bits of uio->uio_offset,
causing too small a readahead.  Adding blkoffset to uio->uio_resid
compensates for that.

> In any case, in regards to the main patch.  Why don't I commit the
> header file support pieces from your patch with some minor alignment
> cleanups to struct file, but leave your rawread/rawwrite out until
> we can make it work properly.

Fine.

> Then I can use IO_NOBUFFER to cause the underlying VM pages to be
> freed (the underlying struct buf is already released in the existing
> code).  The result will be the same low-VM-page-cache impact as your
> rawread/rawwrite code, except for the extra buffer copy.
> I think I can reach about 90% of the performance you get simply by
> freeing the underlying VM pages because this will allow them to be
> reused in the next read(), and they will already be in the L2 cache.
> If I don't free the underlying VM pages the sequential read will
> force the L2 cache to cycle, and I'll bet that is why you get such
> drastically different idle times.

Avoiding that copyout() is the major reason for the increased idle
time.  The L2 cache will still cycle a lot with your suggested
implementation for the load I used, since the normal amount of
outstanding IO is 25 MB (256 KB x 100).  The L2 cache is a lot
smaller than 25 MB.

- Tor Egge