From owner-freebsd-fs@FreeBSD.ORG Wed Sep 8 08:53:26 2010
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 0D3EE10656E0;
	Wed, 8 Sep 2010 08:53:26 +0000 (UTC) (envelope-from avg@freebsd.org)
Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140])
	by mx1.freebsd.org (Postfix) with ESMTP id 037D18FC13;
	Wed, 8 Sep 2010 08:53:24 +0000 (UTC)
Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100])
	by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA08721;
	Wed, 08 Sep 2010 11:53:21 +0300 (EEST) (envelope-from avg@freebsd.org)
Received: from localhost.topspin.kiev.ua ([127.0.0.1])
	by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD))
	id 1OtGPA-0000WR-Ug; Wed, 08 Sep 2010 11:53:20 +0300
Message-ID: <4C874F00.3050605@freebsd.org>
Date: Wed, 08 Sep 2010 11:53:20 +0300
From: Andriy Gapon
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.8) Gecko/20100822 Lightning/1.0b2 Thunderbird/3.1.2
MIME-Version: 1.0
To: Kostik Belousov
References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk> <4C85E91E.1010602@icyb.net.ua> <4C873914.40404@freebsd.org> <20100908084855.GF2465@deviant.kiev.zoral.com.ua>
In-Reply-To: <20100908084855.GF2465@deviant.kiev.zoral.com.ua>
X-Enigmail-Version: 1.1.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek
Subject: Re: zfs very poor performance compared to ufs due to lack of cache?
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
X-List-Received-Date: Wed, 08 Sep 2010 08:53:26 -0000

on 08/09/2010 11:48 Kostik Belousov said the following:
> On Wed, Sep 08, 2010 at 10:19:48AM +0300, Andriy Gapon wrote:
>> on 07/09/2010 10:26 Andriy Gapon said the following:
>>> Interesting.
>>> I briefly looked at the code in mappedread(), zfs_vnops.c, and I
>>> have a VM question.
>>> Shouldn't we mark the corresponding page bits as valid after reading data into
>>> the page?
>>> I specifically speak of the block that starts with the following line:
>>> } else if (m != NULL && uio->uio_segflg == UIO_NOCOPY) {
>>> I am taking mdstart_swap as an example and it does m->valid = VM_PAGE_BITS_ALL.
>>>
>>
>> I've chatted with and conclusion seems to be that vm_page_set_validclean() call
>> should be added at the end of the block.
>>
>> Perhaps, something like this:
>>
>> --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
>> +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
>> @@ -500,6 +500,7 @@ again:
>>  			sched_unpin();
>>  		}
>>  		VM_OBJECT_LOCK(obj);
>> +		vm_page_set_validclean(m, off, bytes);
> Only if error == 0, perhaps ?

Yes, I agree, thanks!

-- 
Andriy Gapon