Date:      Sat, 22 Apr 2000 10:36:22 -0700 (PDT)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Michael Bacarella <mbac@nyct.net>
Cc:        Alfred Perlstein <bright@wintelcom.net>, Kevin Day <toasty@dragondata.com>, hackers@FreeBSD.ORG
Subject:   Re: Double buffered cp(1)
Message-ID:  <200004221736.KAA55484@apollo.backplane.com>
References:   <Pine.BSF.4.21.0004221320250.38433-100000@bsd1.nyct.net>


:
:
:> :extend (using truncate) and then mmap() the destination file, then
:> :read() directly into the mmap()'d portion.
:> :
:> :I'd like to see what numbers you get. :)
:
:>     read + write is a better way to do it.  It is still possible to
:>     double buffer.  In this case simply create a small anonymous shared
:>     mmap that fits in the L2 cache (like 128K), set up a pipe, fork, and
:>     have one process read() from the source while the other write()s to the
:>     destination.  The added overhead is actually less than 'one buffer copy'
:>     worth if the added buffering fits in the L1 or L2 cache.
:
:It seems silly to implement something as trivial and straightforward as
:copying a file in userland. The process designated to copy a file just
:sits in a tight loop invoking the read()/write() syscalls
:repeatedly. Since this operation is already system bound and very simple,
:what's the argument against absorbing it into the kernel?
:
:-MB

    I don't think anyone has suggested that it be absorbed into the kernel.
    We are talking about userland code here.

    The argument for double-buffering is a simple one - it allows the
    process read()ing from the source file to block without stalling the
    process write()ing to the destination file.

    I think the reality, though, is that at least insofar as copying a
    single large file goes, the source is going to be relatively contiguous
    on the disk and thus will tend not to block.  More specifically, the
    disk itself is probably the bottleneck.  Disk writes tend to be
    somewhat slower than disk reads, and the seeking alone (between source
    file and destination file), even when using a large block size,
    will reduce performance drastically versus simply reading or writing
    a single file linearly.  Double buffering may help a disk-to-disk
    file copy, but I doubt it will help a disk-to-same-disk file copy.

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message



