From owner-freebsd-fs@FreeBSD.ORG Sun Jan 9 11:52:59 2011
From: Attila Nagy <bra@fsn.hu>
Date: Sun, 09 Jan 2011 12:52:56 +0100
To: Artem Belevich
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: New ZFSv28 patchset for 8-STABLE
Message-ID: <4D29A198.4070107@fsn.hu>
References: <4D0A09AF.3040005@FreeBSD.org> <4D1F7008.3050506@fsn.hu>

On 01/01/2011 08:09 PM, Artem Belevich wrote:
> On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote:
>> What I see:
>> - increased CPU load
>> - decreased L2ARC hit rate and decreased SSD (ad[46]) traffic, and
>> therefore increased hard disk load (IOPS graph)
> ...
>> Any ideas on what could cause these? I haven't upgraded the pool
>> version, and nothing was changed in the pool or in the file system.
>
> The fact that the L2ARC is full does not mean that it contains the
> right data. The initial L2ARC warm-up happens at a much higher rate
> than the rate at which the L2ARC is updated once it has been filled.
> Even the accelerated warm-up took almost a day in your case. For the
> L2ARC to warm up properly you may have to wait quite a bit longer. My
> guess is that it should slowly improve over the next few days, as data
> flows through the L2ARC and the blocks that are hit more often take up
> residence there. The larger your data set, the longer it will take the
> L2ARC to catch the right data.
>
> Do you have similar graphs from the pre-patch system just after a
> reboot? I suspect it may show similarly abysmal L2ARC hit rates
> initially, too.
>

I've finally found the time to read the v28 patch and figured out the
problem: vfs.zfs.l2arc_noprefetch was changed to 1, so prefetched data
is no longer used on the L2ARC devices. That is a major hit in my case.
Setting it back to 0 restored the previous hit rates and lowered the
load on the hard disks significantly.
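
For anyone who wants to flip this back, here is roughly the procedure.
The tunable name comes straight from the patch; the commands are just
the usual sysctl/loader.conf way of setting it:

   # check the current value; 1 means prefetched data is not cached on L2ARC
   sysctl vfs.zfs.l2arc_noprefetch

   # restore the pre-v28 behaviour at runtime
   sysctl vfs.zfs.l2arc_noprefetch=0

To make it persist across reboots, the same setting goes into
/boot/loader.conf:

   vfs.zfs.l2arc_noprefetch="0"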
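
To see whether the change actually helps, the L2ARC hit rate can be
derived from the arcstats kstat counters (the names below are the ones
I believe 8-STABLE exports, so treat the exact list as approximate):

   # sample the raw L2ARC counters twice, a few minutes apart
   sysctl kstat.zfs.misc.arcstats.l2_hits \
          kstat.zfs.misc.arcstats.l2_misses \
          kstat.zfs.misc.arcstats.l2_size

The hit rate over the sampling interval is then
delta(l2_hits) / (delta(l2_hits) + delta(l2_misses)).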