From owner-freebsd-fs@FreeBSD.ORG Tue Sep 7 07:26:28 2010
Message-ID: <4C85E91E.1010602@icyb.net.ua>
Date: Tue, 07 Sep 2010 10:26:22 +0300
From: Andriy Gapon <avg@icyb.net.ua>
To: Wiktor Niesiobedzki, Pawel Jakub Dawidek, Konstantin Belousov
Cc: freebsd-fs@freebsd.org
References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk>
Subject: Re: zfs very poor performance compared to ufs due to lack of cache?

on 06/09/2010 21:34 Wiktor Niesiobedzki said the following:
> As far as I have checked recently, nginx is using sendfile by default.
> There is already a reported bug against ZFS+sendfile
> (http://www.freebsd.org/cgi/query-pr.cgi?pr=141305&cat=) which results
> in bad performance.
>
> The quickest workaround is to set:
>
>     sendfile off;
>
> in the http {} section of nginx.conf.

Well, there is a patch for this, but that's beside the point of the
sendfile issue.

> What I personally have observed is that memory that is used by
> sendfile, once freed, lands in the Inact group, and ARC is not able to
> force freeing of this memory.
>
> In my case, where I have 1G of ARC, after sending a 2G file my ARC is
> barely at its minimum level, and my ARC hit ratio drops to ~50%.
>
> If I remove the file that was sent through sendfile, memory is moved
> from Inact to free, from where ARC happily grabs what it wants, and the
> ARC hit ratio comes back to normal (~99%).

Interesting.
I briefly looked at the code in mappedread(), zfs_vnops.c, and I have a
VM question.  Shouldn't we mark the corresponding page bits as valid
after reading data into the page?
I specifically speak of the block that starts with the following line:
	} else if (m != NULL && uio->uio_segflg == UIO_NOCOPY) {
I am taking mdstart_swap() as an example and it does
m->valid = VM_PAGE_BITS_ALL.

-- 
Andriy Gapon
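For concreteness, the workaround Wiktor quotes would look like this in
nginx.conf (the surrounding server directives are illustrative, not from
the original message):

```nginx
http {
    # Work around the ZFS+sendfile problem reported in kern/141305:
    # make nginx fall back to ordinary read()/write() I/O.
    sendfile off;

    server {
        listen 80;
        root   /var/www;
    }
}
```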