Date:      Wed, 16 May 2001 14:35:39 -0700 (PDT)
From:      Matt Dillon <dillon@earth.backplane.com>
To:        Tor.Egge@fast.no
Cc:        arch@FreeBSD.ORG
Subject:   Re: on load control / process swapping
Message-ID:  <200105162135.f4GLZdo78984@earth.backplane.com>
References:  <200105162031.f4GKVkd77205@earth.backplane.com> <200105162050.WAA01047@midten.fast.no>

:
:>     Ok, I've done a quick once-over of the patch and I have a question:
:>     What happens if you've just written that file normally and there are
:>     still some uncommitted dirty buffers associated with it, and you then
:>     do an O_DIRECT read of the file?  Do you get the old data or the new
:>     data?
:
:Currently, you get the old data.  That's both semantically incorrect
:and a security hole.  Some check for dirty buffers should be made if
:the OBJ_MIGHTBEDIRTY flag is set on the vm object.
:
:- Tor Egge
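
    For concreteness, the check being suggested might look something
    like this at the top of the rawread path.  This is only a sketch,
    untested -- it assumes the dirty state is visible through the
    vnode's VM object and uses the 4.x VOP_FSYNC signature:

        if (vp->v_object != NULL &&
            (vp->v_object->flags & OBJ_MIGHTBEDIRTY)) {
                /*
                 * Dirty buffers may still be shadowing the disk
                 * blocks we are about to read raw; sync them out
                 * first so the raw read returns current data.
                 */
                error = VOP_FSYNC(vp, cred, MNT_WAIT, p);
                if (error)
                        return (error);
        }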

    Question number 2.  You have this:

    error = cluster_read(vp, ip->i_size, lbn,
-                   size, NOCRED, uio->uio_resid, seqcount, &bp);
+                   size, NOCRED, blkoffset + uio->uio_resid, seqcount, &bp);


    What is the blkoffset adjustment for?  Is that a bug fix for something
    else?
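
    (For reference, if memory serves the prototype in sys/buf.h is:

        int cluster_read(struct vnode *vp, u_quad_t filesize,
            daddr_t lblkno, long size, struct ucred *cred,
            long totread, int seqcount, struct buf **bpp);

    so the argument being adjusted is totread, the total-transfer
    hint used to size the read-ahead.)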

    --

    In any case, regarding the main patch: why don't I commit the
    header file support pieces from your patch with some minor 
    alignment cleanups to the struct file, but leave your rawread/rawwrite
    out until we can make it work properly.  Then I can use IO_NOBUFFER to 
    cause the underlying VM pages to be freed (the underlying struct buf
    is already released in the existing code).  The result will be the 
    same low-VM-page-cache impact as your rawread/rawwrite code except for
    the extra buffer copy.  I think I can reach about 90% of the performance
    you get simply by freeing the underlying VM pages because this will allow
    them to be reused in the next read(), and they will already be in the L2
    cache.  If I don't free the underlying VM pages the sequential read will
    force the L2 cache to cycle, and I'll bet that is why you get such
    drastically different idle times.
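
    A sketch of what I have in mind, reusing the existing B_RELBUF
    mechanism -- IO_NOBUFFER is only the proposed flag name, and the
    exact placement would be at the end of the per-block loop in
    ffs_read(), after the uiomove():

        /*
         * The caller asked us not to cache this data: throw away
         * the buffer and its underlying VM pages instead of queueing
         * them for reuse, so a large sequential scan does not cycle
         * the L2 cache.
         */
        if (ioflag & IO_NOBUFFER) {
                bp->b_flags |= B_RELBUF;
                brelse(bp);
        } else {
                bqrelse(bp);
        }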

						-Matt

