Date: Tue, 21 Mar 2000 09:26:42 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Matthew Jacob <mjacob@feral.com>
Cc: wilko@FreeBSD.ORG, Poul-Henning Kamp <phk@critter.freebsd.dk>, Alfred Perlstein <bright@wintelcom.net>, current@FreeBSD.ORG
Subject: Re: patches for test / review
Message-ID: <200003211726.JAA81137@apollo.backplane.com>
References: <Pine.BSF.4.10.10003202353460.8524-100000@beppo.feral.com>
:> Hm. But I'd think that even with modern drives a smaller number of bigger
:> I/Os is preferable over lots of very small I/Os.
:
:Not necessarily. It depends upon overhead costs per-i/o. With larger I/Os, you
:do pay in interference costs (you can't transfer data for request N because
:the 256Kbytes of request M is still in the pipe).

    This problem has scaled over the last few years.  With 5 MB/sec SCSI
    busses it was a problem.  With 40, 80, and 160 MB/sec it isn't as big
    an issue any more.

	256K @ 40 MBytes/sec = 6.25 ms.
	256K @ 80 MBytes/sec = 3.125 ms.

    When you add in write-decoupling (take softupdates, for example), the
    issue becomes even less of a problem.  The biggest single item that
    does not scale well is command/response overhead.

    I think it has been successfully argued (but I forgot who made the
    point) that 64K is not quite in the sweet spot - that 256K is closer
    to the mark.  But one has to be careful to issue large requests only
    for data that is actually going to be used.  If you read 256K but only
    use 8K of it, you just wasted a whole lot of CPU and bus bandwidth.

					-Matt
					Matthew Dillon
					<dillon@backplane.com>
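A rough sketch of the transfer-time arithmetic discussed above: it computes raw transfer time for a request at a given bus speed, adds a fixed per-command cost, and reports the effective throughput counting only the data actually used. The 0.5 ms command/response overhead is a made-up placeholder, not a measured figure for any real controller; the 40/80 MB/sec bus speeds and the 256K/64K/8K sizes come from the message itself.

    /*
     * Back-of-the-envelope model: transfer time plus a fixed
     * per-command overhead, and the effective throughput when only
     * part of the request is actually used.
     */
    #include <stdio.h>

    #define MB              (1024.0 * 1024.0)
    #define KB              1024.0
    #define CMD_OVERHEAD_MS 0.5     /* hypothetical fixed cost per command */

    static void
    show(double bus_mb_per_sec, double request_kb, double useful_kb)
    {
        double xfer_ms  = (request_kb * KB) / (bus_mb_per_sec * MB) * 1000.0;
        double total_ms = xfer_ms + CMD_OVERHEAD_MS;
        /* effective throughput counting only the data actually used */
        double eff_mb_s = (useful_kb * KB / MB) / (total_ms / 1000.0);

        printf("%3.0f MB/s bus, %4.0fK request (%4.0fK used): "
               "%6.3f ms xfer, %6.3f ms total, %6.2f MB/s effective\n",
               bus_mb_per_sec, request_kb, useful_kb,
               xfer_ms, total_ms, eff_mb_s);
    }

    int
    main(void)
    {
        show(40.0, 256.0, 256.0);   /* 256K fully used on a 40 MB/s bus  */
        show(80.0, 256.0, 256.0);   /* 256K fully used on an 80 MB/s bus */
        show(80.0,  64.0,  64.0);   /* smaller request, same overhead    */
        show(80.0, 256.0,   8.0);   /* 256K read, only 8K actually used  */
        return 0;
    }

The first two cases reproduce the 6.25 ms and 3.125 ms figures above; the last case shows how effective throughput collapses when most of a large request goes unused.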