Date:      Wed, 31 Jan 2001 14:08:29 -0800 (PST)
From:      Matt Dillon <dillon@earth.backplane.com>
To:        Mike Smith <msmith@FreeBSD.ORG>
Cc:        Dag-Erling Smorgrav <des@ofug.org>, Dan Nelson <dnelson@emsphone.com>, Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>, arch@FreeBSD.ORG
Subject:   Re: Bumping up {MAX,DFLT}*PHYS (was Re: Bumping up {MAX,DFL}*SIZ in i386) 
Message-ID:  <200101312208.f0VM8Tm17958@earth.backplane.com>
References:   <200101312022.f0VKMDW00902@mass.dis.org>


:> Dan Nelson <dnelson@emsphone.com> writes:
:> > On a similar note, is there any reason for us to have DFLTPHYS at 64k
:> > anymore?  With the insane interface speeds of SCSI and ATA devices
:> > nowadays, you can easily hit 600 I/Os per second on sequential reads
:> > (40MB/sec, 64K per I/O).  Would anything break if MAXPHYS/DFLTPHYS was
:> > bumped to say, 1mb?
:> 
:> I think so; we can't do DMA transfers larger than 64k (128k in word
:> mode) - at least for ISA devices, I don't know much about PCI.
:
:It's 128K right now, actually.  The problem is that a lot of older 
:devices have limits which cap them at 64K.  (Typically, 16-bit bytecount 
:registers, or 16- or 17-slot scatter/gather tables.)
:
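
    (As an aside, those caps fall straight out of the counter width: a
    16-bit bytecount register holds at most 2^16 = 65536 counts, which
    is 64K when counting bytes and 2 * 64K = 128K when counting 16-bit
    words; hence the two limits quoted above.)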

    There are a number of places in the kernel where increasing DFLTPHYS
    will create stack-bloat problems, reserve too much KVM, hit certain
    inefficiencies in the buffer cache, and cause other sundry problems.
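
    To put a rough number on the KVM side (a userland back-of-the-envelope
    sketch; the pbuf count of 256 is an assumed value for illustration,
    not the real kernel constant):

#include <stdio.h>

#define NSWBUF  256     /* assumed number of physio/pbuf buffers */

int main(void)
{
    /* Each pbuf reserves MAXPHYS bytes of kernel virtual memory. */
    unsigned long sizes[] = { 64UL << 10, 128UL << 10, 1UL << 20 };

    for (int i = 0; i < 3; i++)
        printf("MAXPHYS = %5luK -> %4luMB of KVM reserved\n",
               sizes[i] >> 10, (sizes[i] * NSWBUF) >> 20);
    return 0;
}

    At 64K that is 16MB of KVM; at 1MB it balloons to 256MB, which is a
    very large bite out of a 32-bit kernel's address space.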

    Also, increasing DFLTPHYS is not going to make one iota of difference
    insofar as DMA goes.  Most of the high-bandwidth devices already
    support DMA chaining, and since we have to translate virtual to
    physical addresses anyway, the chaining granularity is often 4K no
    matter what you do.
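
    To make the chaining point concrete, here is a minimal userland
    sketch (the sg_entry layout and the identity-mapped vtophys_stub are
    made up for illustration; real controllers and the real pmap code
    differ).  A virtually contiguous buffer still breaks into roughly
    one scatter/gather entry per 4K page, because the underlying
    physical pages are generally not contiguous:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>

#define PGSIZE  4096

/* Hypothetical S/G descriptor; every controller defines its own. */
struct sg_entry {
    uint64_t paddr;     /* physical address of the chunk */
    uint32_t len;       /* chunk length, at most one page here */
};

/* Stand-in for a real VA->PA lookup; identity map for illustration. */
static uint64_t
vtophys_stub(const void *va)
{
    return (uint64_t)(uintptr_t)va;
}

/* Emit one S/G entry per 4K page spanned by the buffer. */
static size_t
build_sg(const void *buf, size_t len, struct sg_entry *sg, size_t maxseg)
{
    const char *va = buf;
    size_t n = 0;

    while (len > 0 && n < maxseg) {
        /* Clip each chunk at the next page boundary. */
        size_t chunk = PGSIZE - ((uintptr_t)va & (PGSIZE - 1));
        if (chunk > len)
            chunk = len;
        sg[n].paddr = vtophys_stub(va);
        sg[n].len = (uint32_t)chunk;
        va += chunk;
        len -= chunk;
        n++;
    }
    return n;
}

int main(void)
{
    enum { MAXSEG = (1 << 20) / PGSIZE + 1 };   /* worst case for 1MB */
    static struct sg_entry sg[MAXSEG];
    size_t xfers[] = { 64UL << 10, 1UL << 20 };
    char *buf = malloc(1 << 20);

    if (buf == NULL)
        return 1;
    for (int i = 0; i < 2; i++)
        printf("%5zuK transfer -> %zu S/G entries\n",
               xfers[i] >> 10, build_sg(buf, xfers[i], sg, MAXSEG));
    free(buf);
    return 0;
}

    A 1MB transfer needs ~256 entries no matter what DFLTPHYS is set to,
    so the per-transfer mapping work scales with size either way.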

    And, finally, while large I/Os may seem to be a good idea, they can
    actually interfere with the time-share mechanisms that smooth system
    operation.  If you queue a 1 MByte I/O to a disk device, that disk
    device is locked up doing that one I/O for a long time (in cpu-time
    terms).  Having a large number of bytes queued for I/O on one device
    can interfere with the performance of another device.  In short,
    your performance is not going to get better and could very well get
    worse.
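
    Back-of-the-envelope, using the 40MB/sec figure quoted above (an
    assumption for illustration, not a measurement of any particular
    drive):

#include <stdio.h>

int main(void)
{
    double rate = 40.0 * 1024 * 1024;   /* assumed sequential rate, B/s */
    unsigned long io[] = { 64UL << 10, 1UL << 20 };

    /* Service time: how long one queued I/O monopolizes the device. */
    for (int i = 0; i < 2; i++)
        printf("%5luK I/O ties up the device for ~%.2f ms\n",
               io[i] >> 10, (double)io[i] / rate * 1000.0);
    return 0;
}

    A 64K I/O occupies the device for about 1.6ms; a 1MB I/O occupies it
    for 25ms, and every other request queued to that device simply waits.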

    What is important to system performance is the ability to maintain
    an I/O pipeline, not the cpu overhead required to keep the pipeline
    full.  I test SCSI performance every year or so and, frankly, once
    you get above a DMA size of 4K all you gain is a few spare cpu cycles.
    You gain nothing in transfer rate or overall system performance.
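
    The cpu-cycle argument is easy to quantify too (same assumed
    40MB/sec rate; the per-transfer cost is the interrupt and queueing
    overhead paid once per completed I/O):

#include <stdio.h>

int main(void)
{
    double rate = 40.0 * 1024 * 1024;   /* assumed sustained rate, B/s */
    unsigned long xfer[] = { 4UL << 10, 64UL << 10, 1UL << 20 };

    /* Completions/sec needed to sustain the rate at each size. */
    for (int i = 0; i < 3; i++)
        printf("%5luK transfers: %7.0f completions/sec\n",
               xfer[i] >> 10, rate / (double)xfer[i]);
    return 0;
}

    Going from 4K to 64K cuts the completion rate from ~10240/sec to
    640/sec, a real savings; going from 64K to 1MB only trims 640/sec
    down to 40/sec, which is noise on a modern cpu.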

    So my recommendation: don't do it.

						-Matt






