Date:      Sat, 22 Apr 2000 11:41:59 -0700
From:      Kent Stewart <kstewart@3-cities.com>
To:        Matthew Dillon <dillon@apollo.backplane.com>
Cc:        Michael Bacarella <mbac@nyct.net>, Alfred Perlstein <bright@wintelcom.net>, Kevin Day <toasty@dragondata.com>, hackers@FreeBSD.ORG
Subject:   Re: Double buffered cp(1)
Message-ID:  <3901F277.66DDDDAF@3-cities.com>
References:  <Pine.BSF.4.21.0004221320250.38433-100000@bsd1.nyct.net> <200004221736.KAA55484@apollo.backplane.com>



Matthew Dillon wrote:
> 
> :
> :
> :> :extend (using truncate) and then mmap() the destination file, then
> :> :read() directly into the mmap()'d portion.
> :> :
> :> :I'd like to see what numbers you get. :)
> :
> :>     read + write is a better way to do it.  It is still possible to
> :>     double buffer.  In this case simply create a small anonymous shared
> :>     mmap that fits in the L2 cache (like 128K), setup a pipe, fork, and
> :>     have one process read() from the source while the other write()s to the
> :>     destination.  The added overhead is actually less than 'one buffer copy'
> :>     worth if the added buffering fits in the L1 or L2 cache.
> :
> :It seems silly to implement something as trivial and straightforward as
> :copying a file in userland. The process designated to copy a file just
> :sits in a tight loop invoking the read()/write() syscalls
> :repeatedly. Since this operation is already system bound and very simple,
> :what's the argument against absorbing it into the kernel?
> :
> :-MB
> 
>     I don't think anyone has suggested that it be absorbed into the kernel.
>     We are talking about userland code here.
> 
>     The argument for double-buffering is a simple one - it allows the
>     process read()ing from the source file to block without stalling the
>     process write()ing to the destination file.
> 
>     I think the reality, though, is that at least insofar as copying a
>     single large file the source is going to be relatively contiguous on
>     the disk and thus will tend not to block.  More specifically, the
>     disk itself is probably the bottleneck.  Disk writes tend to be
>     somewhat slower than disk reads, and the seeking alone (between source
>     file and destination file), even when using a large block size,
>     will reduce performance drastically versus simply reading or writing
>     a single file linearly.  Double buffering may help a disk-to-disk
>     file copy, but I doubt it will help a disk-to-same-disk file copy.

I ran some tests on my FreeBSD machine. In my experience, double
buffering only helps if you have concurrent I/O capability, and you
only have that when each I/O device (HD) is reachable over a separate
data channel. We don't have that capability on PCs: the typical drives
we buy have only one data path, i.e., the ribbon cable.

I tested buildworlds with /usr/obj on one controller and /usr/src on
a different one. Buildworlds are pretty much I/O bound; they ran
faster with separate controllers, but not by much. I also have an IBM
UW SCSI drive on that system, but I haven't tried a buildworld using
it for one of the file systems. The benchmark I like is iozone,
because it exercises everything I normally use. I tell it to test
with 160MB, which is 20+ times the available cache reported by top;
Rawio doesn't mean anything when everything is cached. The tests
showed the SCSI system was slower than the UDMA-33 drives in
everything except random I/O, where it was much faster. That may mean
the buffering logic on the SCSI drive and controller was smart enough
that everything stayed in cache.

It is easy enough to split your obj and src file systems onto
different controllers and test whether cached I/O between controllers
helps on your system. Even cp goes through the buffer cache on a
normal system, which means you already have "n" buffers available for
reading and "N" buffers available for writing. Making cp more
complicated won't help, because copying a file will still take
roughly twice the time of just writing the file, plus however many
times you had to wait for a full revolution of the disk before the
next scheduled read or write could be issued.
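For contrast, the single-process loop Kent is describing can be sketched as below: one read() and one write() per block, with the kernel's buffer cache supplying the read-ahead and write-behind. The `plaincopy()` name and buffer size are invented for the example; the real cp(1) is more elaborate.

```c
/*
 * Sketch of a plain buffered copy loop: the buffer cache already
 * provides read-ahead and write-behind, so userland stays simple.
 * Illustrative only; not the actual cp(1) source.
 */
#include <fcntl.h>
#include <unistd.h>

#define COPYBUF (64 * 1024)

int
plaincopy(const char *src, const char *dst)
{
    char buf[COPYBUF];
    ssize_t n;
    int sfd, dfd, rv;

    if ((sfd = open(src, O_RDONLY)) < 0)
        return (-1);
    if ((dfd = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644)) < 0) {
        close(sfd);
        return (-1);
    }
    rv = 0;
    while ((n = read(sfd, buf, sizeof(buf))) > 0)
        if (write(dfd, buf, n) != n) {      /* short write = error here */
            rv = -1;
            break;
        }
    if (n < 0)                              /* read error */
        rv = -1;
    close(sfd);
    close(dfd);
    return (rv);
}
```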

Kent

> 
>                                         -Matt
>                                         Matthew Dillon
>                                         <dillon@backplane.com>
> 
> To Unsubscribe: send mail to majordomo@FreeBSD.org
> with "unsubscribe freebsd-hackers" in the body of the message

-- 
Kent Stewart
Richland, WA

mailto:kstewart@3-cities.com
http://www.3-cities.com/~kstewart/index.html
FreeBSD News http://daily.daemonnews.org/

SETI(Search for Extraterrestrial Intelligence) @ HOME
http://setiathome.ssl.berkeley.edu/

