From owner-freebsd-current@freebsd.org Sat Sep 8 11:56:10 2018
Subject: Re: ZFS performance regression in FreeBSD 12
ALPHA3->ALPHA4
From: Jakob Alvermark <jakob@alvermark.net>
To: Mark Johnston
Cc: Subbsd, allanjude@freebsd.org, freebsd-current
Date: Sat, 8 Sep 2018 13:56:06 +0200
Message-ID: <79f45d61-da93-af24-1b29-9c5db92b5b85@alvermark.net>
In-Reply-To: <20180907160654.GD63224@raichu>

On 9/7/18 6:06 PM, Mark Johnston wrote:
> On Fri, Sep 07, 2018 at 03:40:52PM +0200, Jakob Alvermark wrote:
>> On 9/6/18 2:28 AM, Mark Johnston wrote:
>>> On Wed, Sep 05, 2018 at 11:15:03PM +0300, Subbsd wrote:
>>>> On Wed, Sep 5, 2018 at 5:58 PM Allan Jude wrote:
>>>>> On 2018-09-05 10:04, Subbsd wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I'm seeing a huge loss in ZFS performance after upgrading FreeBSD 12
>>>>>> to the latest revision (r338466 at the moment), and it is related to
>>>>>> the ARC.
>>>>>>
>>>>>> I can not say which revision I was on before, except that newvers.sh
>>>>>> pointed to ALPHA3.
>>>>>>
>>>>>> The problem is observed if you try to limit the ARC. In my case:
>>>>>>
>>>>>> vfs.zfs.arc_max="128M"
>>>>>>
>>>>>> I know that this is very small. However, it has worked without
>>>>>> problems like this for two years.
>>>>>>
>>>>>> When I send SIGINFO to a process which is currently working with ZFS,
>>>>>> I see "arc_reclaim_waiters_cv".
>>>>>>
>>>>>> E.g. when I type:
>>>>>>
>>>>>> /bin/csh
>>>>>>
>>>>>> I have time (~5 seconds) to press 'ctrl+t' several times before csh
>>>>>> is executed:
>>>>>>
>>>>>> load: 0.70 cmd: csh 5935 [arc_reclaim_waiters_cv] 1.41r 0.00u 0.00s 0% 3512k
>>>>>> load: 0.70 cmd: csh 5935 [zio->io_cv] 1.69r 0.00u 0.00s 0% 3512k
>>>>>> load: 0.70 cmd: csh 5935 [arc_reclaim_waiters_cv] 1.98r 0.00u 0.01s 0% 3512k
>>>>>> load: 0.73 cmd: csh 5935 [arc_reclaim_waiters_cv] 2.19r 0.00u 0.01s 0% 4156k
>>>>>>
>>>>>> The same story with find or any other command:
>>>>>>
>>>>>> load: 0.34 cmd: find 5993 [zio->io_cv] 0.99r 0.00u 0.00s 0% 2676k
>>>>>> load: 0.34 cmd: find 5993 [arc_reclaim_waiters_cv] 1.13r 0.00u 0.00s 0% 2676k
>>>>>> load: 0.34 cmd: find 5993 [arc_reclaim_waiters_cv] 1.25r 0.00u 0.00s 0% 2680k
>>>>>> load: 0.34 cmd: find 5993 [arc_reclaim_waiters_cv] 1.38r 0.00u 0.00s 0% 2684k
>>>>>> load: 0.34 cmd: find 5993 [arc_reclaim_waiters_cv] 1.51r 0.00u 0.00s 0% 2704k
>>>>>> load: 0.34 cmd: find 5993 [arc_reclaim_waiters_cv] 1.64r 0.00u 0.00s 0% 2716k
>>>>>> load: 0.34 cmd: find 5993 [arc_reclaim_waiters_cv] 1.78r 0.00u 0.00s 0% 2760k
>>>>>>
>>>>>> This problem goes away after increasing vfs.zfs.arc_max.
>>>>>>
>>>>> Previously, ZFS was not actually able to evict enough dnodes to keep
>>>>> your arc_max under 128MB; it would have been much higher based on the
>>>>> number of open files you had.
A recent improvement from upstream ZFS
>>>>> (r337653 and r337660) was pulled in that fixed this, so setting an
>>>>> arc_max of 128MB is much more effective now. The side effect is that
>>>>> ZFS is "actually doing what you asked it to do", and in this case what
>>>>> you are asking for is a bit silly: if you have a working set that is
>>>>> greater than 128MB and you ask ZFS to use less than that, it will have
>>>>> to constantly try to reclaim memory to stay under that very low bar.
>>>>>
>>>> Thanks for the comments. Mark was right when he pointed to r338416 (
>>>> https://svnweb.freebsd.org/base/head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c?r1=338416&r2=338415&pathrev=338416
>>>> ). Commenting out the aggsum_value calls restores normal speed,
>>>> regardless of the rest of the new code from upstream.
>>>> I would like to repeat that the speed with these two lines is not just
>>>> slow, but _INCREDIBLY_ slow! This should probably be noted in the
>>>> relevant documentation for FreeBSD 12+.
>>
>> Hi,
>>
>> I am experiencing the same slowness when there is a bit of load on the
>> system (a buildworld, for example), which I haven't seen before.
>
> Is it a regression following a recent kernel update?

Yes.

>> I have vfs.zfs.arc_max=2G.
>>
>> Top is reporting
>>
>> ARC: 607M Total, 140M MFU, 245M MRU, 1060K Anon, 4592K Header, 217M Other
>>      105M Compressed, 281M Uncompressed, 2.67:1 Ratio
>>
>> Should I test the patch?
>
> I would be interested in the results, assuming it is indeed a
> regression.

This gets more interesting. Kernel + world was at r338465.

I was going to test the patch, but since I had updated the src tree to
r338499 I built that first, without your patch.

Now, at r338499, without the patch, it doesn't seem to hit the
performance problem.

vfs.zfs.arc_max is still set to 2G.

The ARC display in top is around 1000M total; I haven't seen it go above
about 1200M, even if I stress it.
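For anyone following along who wants to watch the ARC against the configured cap without keeping top open, the same numbers are exposed via sysctl on FreeBSD (kstat.zfs.misc.arcstats.size for the current ARC size, vfs.zfs.arc_max for the cap). The sketch below is not from the thread, just a minimal illustration; the arc_report helper name is made up, and only the two sysctl OIDs in the comment are assumed to be the stock ones.

```shell
#!/bin/sh
# Sketch: report ZFS ARC size against the configured cap.
# The helper takes two byte counts so it can be shown with sample
# numbers; on a live FreeBSD system you would feed it the real
# counters, e.g.:
#   arc_report "$(sysctl -n kstat.zfs.misc.arcstats.size)" \
#              "$(sysctl -n vfs.zfs.arc_max)"

# Print "<size>M / <max>M (<pct>%)" given size and max in bytes.
arc_report() {
    size_bytes=$1
    max_bytes=$2
    size_mb=$((size_bytes / 1048576))      # bytes -> MiB, truncated
    max_mb=$((max_bytes / 1048576))
    pct=$((size_bytes * 100 / max_bytes))  # integer percentage
    echo "${size_mb}M / ${max_mb}M (${pct}%)"
}

# Example with numbers resembling the thread (~1000M of a 2G cap):
arc_report 1048576000 2147483648
# prints: 1000M / 2048M (48%)
```

Looping that with `sleep` would show whether the ARC is being held near arc_max (constant reclaim pressure, as with the 128M cap above) or settling well below it, as in the r338499 result.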