Date: Fri, 13 Jul 2018 14:49:56 -0700
From: Jim Long
To: Mike Tancsa
Cc: freebsd-questions@freebsd.org
Subject: Re: Disk/ZFS activity crash on 11.2-STABLE [SOLVED]
Message-ID: <20180713214956.GA20628@g5.umpquanet.com>
References: <20180711212959.GA81029@g5.umpquanet.com> <5ebd8573-1363-06c7-cbb2-8298b0894319@sentex.net> <20180712183512.GA75020@g5.umpquanet.com> <20180712214248.GA98578@g5.umpquanet.com> <20180713191050.GA98371@g5.umpquanet.com>

On Fri, Jul 13, 2018 at 03:22:39PM -0400, Mike Tancsa wrote:
>
> If you ever have a system with a LOT of small files and directories, a
> handy value to tune / keep an eye on is the mix allocated to metadata
> vs. file data: vfs.zfs.arc_meta_limit.  You can tell when doing things
> like "ls" in a directory take a LONG time to list files.  In my case,
> I had many directories with 50,000+ files.
>
> Also, things like 'zfs list -t snapshot' start to take a long time.

I think I already have that symptom on another new server, a backup
retention server.  It's slow (CPU) and fat (disk):

# time zfs list -Hrt snap | wc -l
   27365

real    2m47.811s
user    0m1.757s
sys     0m20.828s

Almost three minutes to list all the snapshots found.

So when that symptomatic slowness appears, is the tweak to *raise*
arc_meta_limit?  I don't immediately see how to tell what the arc_meta
usage is, and thus how close it is to the limit.
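If I'm reading things right, the usage counters live under the arcstats
kstats -- assuming those kstat names are correct for 11.x, something
like this should show usage against the limit:

# sysctl kstat.zfs.misc.arcstats.arc_meta_used \
         kstat.zfs.misc.arcstats.arc_meta_limit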
From that storage server ("electron"):

ARC Efficiency:                                 20.18m
        Cache Hit Ratio:                91.20%  18.40m
        Cache Miss Ratio:                8.80%   1.78m
        Actual Hit Ratio:               91.18%  18.40m

        Data Demand Efficiency:         87.10%  11.95k

        CACHE HITS BY CACHE LIST:
          Anonymously Used:              0.02%   3.15k
          Most Recently Used:            0.30%  55.95k
          Most Frequently Used:         99.68%  18.34m
          Most Recently Used Ghost:      0.00%       0
          Most Frequently Used Ghost:    0.00%       0

        CACHE HITS BY DATA TYPE:
          Demand Data:                   0.06%  10.41k
          Prefetch Data:                 0.00%       0
          Demand Metadata:              99.93%  18.39m
          Prefetch Metadata:             0.02%   3.15k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                   0.09%   1.54k
          Prefetch Data:                 0.00%       0
          Demand Metadata:              99.73%   1.77m
          Prefetch Metadata:             0.19%   3.30k

# sysctl -a | grep arc | grep ^vfs.zfs
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 1
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608
vfs.zfs.arc_meta_limit: 16432737280
vfs.zfs.arc_free_target: 113124
vfs.zfs.compressed_arc_enabled: 1
vfs.zfs.arc_grow_retry: 60
vfs.zfs.arc_shrink_shift: 7
vfs.zfs.arc_average_blocksize: 8192
vfs.zfs.arc_no_grow_shift: 5
vfs.zfs.arc_min: 8216368640
vfs.zfs.arc_max: 65730949120

# top | head -8
last pid:   943;  load averages:  0.14,  0.15,  0.10   up 0+00:30:43  14:42:26
22 processes:  1 running, 21 sleeping
Mem: 16M Active, 13M Inact, 1063M Wired, 61G Free
ARC: 374M Total, 214M MFU, 63M MRU, 32K Anon, 5614K Header, 91M Other
     79M Compressed, 223M Uncompressed, 2.81:1 Ratio
Swap: 8192M Total, 8192M Free

Thanks again, Mike.

Jim
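P.S. For the archives: judging from the sysctl output above,
arc_meta_limit here is exactly arc_max/4, which I believe is the
default.  If arc_meta_used turns out to sit pinned at that limit, the
presumable fix is to raise the tunable at boot -- the value below
(arc_max/2 on this box) is only an example, not a recommendation:

# echo 'vfs.zfs.arc_meta_limit="32865474560"' >> /boot/loader.conf

It also appears to be a read/write sysctl on recent 11.x, so a live
test before rebooting may work:

# sysctl vfs.zfs.arc_meta_limit=32865474560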