From owner-freebsd-arch Wed Jan 31 14: 8:49 2001
Delivered-To: freebsd-arch@freebsd.org
Received: from earth.backplane.com (earth-nat-cw.backplane.com [208.161.114.67])
	by hub.freebsd.org (Postfix) with ESMTP id 4DEB737B698;
	Wed, 31 Jan 2001 14:08:30 -0800 (PST)
Received: (from dillon@localhost) by earth.backplane.com (8.11.1/8.9.3)
	id f0VM8Tm17958; Wed, 31 Jan 2001 14:08:29 -0800 (PST)
	(envelope-from dillon)
Date: Wed, 31 Jan 2001 14:08:29 -0800 (PST)
From: Matt Dillon
Message-Id: <200101312208.f0VM8Tm17958@earth.backplane.com>
To: Mike Smith
Cc: Dag-Erling Smorgrav, Dan Nelson, Seigo Tanimura, arch@FreeBSD.ORG
Subject: Re: Bumping up {MAX,DFLT}*PHYS (was Re: Bumping up {MAX,DFL}*SIZ in i386)
References: <200101312022.f0VKMDW00902@mass.dis.org>
Sender: owner-freebsd-arch@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

:> Dan Nelson writes:
:> > On a similar note, is there any reason for us to have DFLTPHYS at 64k
:> > anymore?  With the insane interface speeds of SCSI and ATA devices
:> > nowadays, you can easily hit 600 I/Os per second on sequential reads
:> > (40MB/sec, 64K per I/O).  Would anything break if MAXPHYS/DFLTPHYS was
:> > bumped to say, 1mb?
:>
:> I think so; we can't do DMA transfers larger than 64k (128k in word
:> mode) - at least for ISA devices, I don't know much about PCI.
:
:It's 128K right now, actually.  The problem is that a lot of older
:devices have limits which cap them at 64K.  (Typically, 16-bit bytecount
:registers, or 16- or 17-slot scatter/gather tables.)
:

    There are a number of places in the kernel where increasing DFLTPHYS
    will create stack-bloat problems, reserve too much KVM, hit certain
    inefficiencies in the buffer cache, and cause other sundry problems.

    Also, increasing DFLTPHYS is not going to make one iota of difference
    insofar as DMA goes.  Most of the high-bandwidth devices already
    support DMA chaining, and since we have to translate virtual to
    physical addresses anyway, the chaining granularity is often 4K no
    matter what you do.

    And, finally, while large I/Os may seem to be a good idea, they can
    actually interfere with the time-share mechanisms that smooth system
    operation.  If you queue a 1 MByte I/O to a disk device, that disk
    device is locked up doing that one I/O for a long time (in cpu-time
    terms).  Having a large number of bytes queued for I/O on one device
    can interfere with the performance of another device.  In short, your
    performance is not going to get better and could very well get worse.

    What is important to system performance is more the ability to
    maintain an I/O pipeline and less the cpu overhead required to keep
    the pipeline full.  I test SCSI performance every year or so and,
    frankly, once you get above a DMA size of 4K all you gain is a few
    cpu cycles of saved overhead.  You gain nothing in transfer rate or
    overall system performance.

    So my recommendation:  don't do it.

						-Matt

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message
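
    [A rough back-of-the-envelope sketch of the two points above: the 4K
    chaining granularity and the time a large request monopolizes a
    device.  The small userland program below counts page-sized
    scatter/gather entries for a request and estimates how long the
    request occupies the device at the ~40 MB/s rate Dan quoted.  The 4K
    granularity and 40 MB/s figure come from the discussion; the program
    itself, its names, and the exact numbers are illustrative assumptions,
    not anything taken from the kernel sources.]

#include <stdio.h>

#define PAGE_BYTES	4096UL			/* assumed 4K chaining granularity */
#define BUS_BPS		(40UL * 1024 * 1024)	/* assumed ~40 MB/s sequential rate */

int
main(void)
{
	unsigned long sizes[] = { 64 * 1024, 128 * 1024, 1024 * 1024 };
	size_t i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned long sz = sizes[i];
		/* one scatter/gather entry per 4K page of the buffer */
		unsigned long sg = (sz + PAGE_BYTES - 1) / PAGE_BYTES;
		/* time the device spends servicing this single request */
		double ms = (double)sz / BUS_BPS * 1000.0;

		printf("%4luK I/O: %3lu s/g entries, ~%4.1f ms per request\n",
		    sz / 1024, sg, ms);
	}
	return (0);
}

    [Under those assumptions a 64K request fits the 16- or 17-slot
    scatter/gather tables Mike mentions and occupies the device for
    roughly 1.6 ms, while a 1 MByte request needs about 256 entries and
    ties the device up for roughly 25 ms.]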