From: Ronald Klop <ronald-lists@klop.ws>
To: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Date: Mon, 20 Jul 2020 15:04:03 +0200 (CEST)
Subject: Re: zfs meta data slowness

Hi,

My first suggestion would be to remove a lot of snapshots, but that may
not match your business case.

Maybe you can provide more information about your setup:

Amount of RAM, CPU?
Output of "zpool status"
Output of "zfs list", if possible to share
Type of disks/SSDs?
What is the load of the system? I/O per second, etc.
Do you use dedup, GELI?
Anything else special about the setup?
Output of "top -b"

That kind of information.

Regards,
Ronald.
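
A minimal sketch, not part of the original mail, of shell commands that
would collect most of the information requested above; the pool name
"backup" is a placeholder:

    # RAM and CPU
    sysctl hw.physmem hw.ncpu hw.model
    # Pool health, vdev layout (zpool list -v also shows the dedup ratio),
    # and the dataset tree
    zpool status backup
    zpool list -v backup
    zfs list -r backup
    # GELI providers, if any are in use
    geli status
    # Load: one batch-mode snapshot of processes, then per-device I/O
    # statistics, five one-second samples
    top -b
    iostat -x 1 5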
From: mike tancsa
Date: Sunday, 19 July 2020 16:17
To: FreeBSD-STABLE Mailing List
Subject: zfs meta data slowness

> Are there any tweaks that can be done to speed up or improve zfs
> metadata performance? I have a backup server with a lot of snapshots
> (40,000) and just doing a listing can take a great deal of time. Best
> case scenario is about 24 seconds; worst case, I have seen it take up
> to 15 minutes. (FreeBSD 12.1-STABLE r363078)
>
> ARC Efficiency:                                 79.33b
>         Cache Hit Ratio:                92.81%  73.62b
>         Cache Miss Ratio:               7.19%   5.71b
>         Actual Hit Ratio:               92.78%  73.60b
>
>         Data Demand Efficiency:         96.47%  461.91m
>         Data Prefetch Efficiency:       1.00%   262.73m
>
>         CACHE HITS BY CACHE LIST:
>           Anonymously Used:             0.01%   3.86m
>           Most Recently Used:           3.91%   2.88b
>           Most Frequently Used:         96.06%  70.72b
>           Most Recently Used Ghost:     0.01%   5.31m
>           Most Frequently Used Ghost:   0.01%   10.47m
>
>         CACHE HITS BY DATA TYPE:
>           Demand Data:                  0.61%   445.60m
>           Prefetch Data:                0.00%   2.63m
>           Demand Metadata:              99.36%  73.15b
>           Prefetch Metadata:            0.03%   21.00m
>
>         CACHE MISSES BY DATA TYPE:
>           Demand Data:                  0.29%   16.31m
>           Prefetch Data:                4.56%   260.10m
>           Demand Metadata:              95.02%  5.42b
>           Prefetch Metadata:            0.14%   7.75m
>
> Other than increasing the metadata max, I haven't really changed any
> tunables.
>
> ZFS Tunables (sysctl):
>         kern.maxusers  4416
>         vm.kmem_size  66691842048
>         vm.kmem_size_scale  1
>         vm.kmem_size_min  0
>         vm.kmem_size_max  1319413950874
>         vfs.zfs.trim.max_interval  1
>         vfs.zfs.trim.timeout  30
>         vfs.zfs.trim.txg_delay  32
>         vfs.zfs.trim.enabled  1
>         vfs.zfs.vol.immediate_write_sz  32768
>         vfs.zfs.vol.unmap_sync_enabled  0
>         vfs.zfs.vol.unmap_enabled  1
>         vfs.zfs.vol.recursive  0
>         vfs.zfs.vol.mode  1
>         vfs.zfs.version.zpl  5
>         vfs.zfs.version.spa  5000
>         vfs.zfs.version.acl  1
>         vfs.zfs.version.ioctl  7
>         vfs.zfs.debug  0
>         vfs.zfs.super_owner  0
>         vfs.zfs.immediate_write_sz  32768
>         vfs.zfs.sync_pass_rewrite  2
>         vfs.zfs.sync_pass_dont_compress  5
>         vfs.zfs.sync_pass_deferred_free  2
>         vfs.zfs.zio.dva_throttle_enabled  1
>         vfs.zfs.zio.exclude_metadata  0
>         vfs.zfs.zio.use_uma  1
>         vfs.zfs.zio.taskq_batch_pct  75
>         vfs.zfs.zil_maxblocksize  131072
>         vfs.zfs.zil_slog_bulk  786432
>         vfs.zfs.zil_nocacheflush  0
>         vfs.zfs.zil_replay_disable  0
>         vfs.zfs.cache_flush_disable  0
>         vfs.zfs.standard_sm_blksz  131072
>         vfs.zfs.dtl_sm_blksz  4096
>         vfs.zfs.min_auto_ashift  9
>         vfs.zfs.max_auto_ashift  13
>         vfs.zfs.vdev.trim_max_pending  10000
>         vfs.zfs.vdev.bio_delete_disable  0
>         vfs.zfs.vdev.bio_flush_disable  0
>         vfs.zfs.vdev.def_queue_depth  32
>         vfs.zfs.vdev.queue_depth_pct  1000
>         vfs.zfs.vdev.write_gap_limit  4096
>         vfs.zfs.vdev.read_gap_limit  32768
>         vfs.zfs.vdev.aggregation_limit_non_rotating  131072
>         vfs.zfs.vdev.aggregation_limit  1048576
>         vfs.zfs.vdev.initializing_max_active  1
>         vfs.zfs.vdev.initializing_min_active  1
>         vfs.zfs.vdev.removal_max_active  2
>         vfs.zfs.vdev.removal_min_active  1
>         vfs.zfs.vdev.trim_max_active  64
>         vfs.zfs.vdev.trim_min_active  1
>         vfs.zfs.vdev.scrub_max_active  2
>         vfs.zfs.vdev.scrub_min_active  1
>         vfs.zfs.vdev.async_write_max_active  10
>         vfs.zfs.vdev.async_write_min_active  1
>         vfs.zfs.vdev.async_read_max_active  3
>         vfs.zfs.vdev.async_read_min_active  1
>         vfs.zfs.vdev.sync_write_max_active  10
>         vfs.zfs.vdev.sync_write_min_active  10
>         vfs.zfs.vdev.sync_read_max_active  10
>         vfs.zfs.vdev.sync_read_min_active  10
>         vfs.zfs.vdev.max_active  1000
>         vfs.zfs.vdev.async_write_active_max_dirty_percent  60
>         vfs.zfs.vdev.async_write_active_min_dirty_percent  30
>         vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
>         vfs.zfs.vdev.mirror.non_rotating_inc  0
>         vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
>         vfs.zfs.vdev.mirror.rotating_seek_inc  5
>         vfs.zfs.vdev.mirror.rotating_inc  0
>         vfs.zfs.vdev.trim_on_init  1
>         vfs.zfs.vdev.cache.bshift  16
>         vfs.zfs.vdev.cache.size  0
>         vfs.zfs.vdev.cache.max  16384
>         vfs.zfs.vdev.validate_skip  0
>         vfs.zfs.vdev.max_ms_shift  34
>         vfs.zfs.vdev.default_ms_shift  29
>         vfs.zfs.vdev.max_ms_count_limit  131072
>         vfs.zfs.vdev.min_ms_count  16
>         vfs.zfs.vdev.default_ms_count  200
>         vfs.zfs.txg.timeout  5
>         vfs.zfs.space_map_ibs  14
>         vfs.zfs.special_class_metadata_reserve_pct  25
>         vfs.zfs.user_indirect_is_special  1
>         vfs.zfs.ddt_data_is_special  1
>         vfs.zfs.spa_allocators  4
>         vfs.zfs.spa_min_slop  134217728
>         vfs.zfs.spa_slop_shift  5
>         vfs.zfs.spa_asize_inflation  24
>         vfs.zfs.deadman_enabled  1
>         vfs.zfs.deadman_checktime_ms  5000
>         vfs.zfs.deadman_synctime_ms  1000000
>         vfs.zfs.debugflags  0
>         vfs.zfs.recover  0
>         vfs.zfs.spa_load_verify_data  1
>         vfs.zfs.spa_load_verify_metadata  1
>         vfs.zfs.spa_load_verify_maxinflight  10000
>         vfs.zfs.max_missing_tvds_scan  0
>         vfs.zfs.max_missing_tvds_cachefile  2
>         vfs.zfs.max_missing_tvds  0
>         vfs.zfs.spa_load_print_vdev_tree  0
>         vfs.zfs.ccw_retry_interval  300
>         vfs.zfs.check_hostid  1
>         vfs.zfs.multihost_fail_intervals  10
>         vfs.zfs.multihost_import_intervals  20
>         vfs.zfs.multihost_interval  1000
>         vfs.zfs.mg_fragmentation_threshold  85
>         vfs.zfs.mg_noalloc_threshold  0
>         vfs.zfs.condense_pct  200
>         vfs.zfs.metaslab_sm_blksz  4096
>         vfs.zfs.metaslab.bias_enabled  1
>         vfs.zfs.metaslab.lba_weighting_enabled  1
>         vfs.zfs.metaslab.fragmentation_factor_enabled  1
>         vfs.zfs.metaslab.preload_enabled  1
>         vfs.zfs.metaslab.preload_limit  3
>         vfs.zfs.metaslab.unload_delay  8
>         vfs.zfs.metaslab.load_pct  50
>         vfs.zfs.metaslab.min_alloc_size  33554432
>         vfs.zfs.metaslab.df_free_pct  4
>         vfs.zfs.metaslab.df_alloc_threshold  131072
>         vfs.zfs.metaslab.debug_unload  0
>         vfs.zfs.metaslab.debug_load  0
>         vfs.zfs.metaslab.fragmentation_threshold  70
>         vfs.zfs.metaslab.force_ganging  16777217
>         vfs.zfs.free_bpobj_enabled  1
>         vfs.zfs.free_max_blocks  -1
>         vfs.zfs.zfs_scan_checkpoint_interval  7200
>         vfs.zfs.zfs_scan_legacy  0
>         vfs.zfs.no_scrub_prefetch  0
>         vfs.zfs.no_scrub_io  0
>         vfs.zfs.resilver_min_time_ms  3000
>         vfs.zfs.free_min_time_ms  1000
>         vfs.zfs.scan_min_time_ms  1000
>         vfs.zfs.scan_idle  50
>         vfs.zfs.scrub_delay  4
>         vfs.zfs.resilver_delay  2
>         vfs.zfs.zfetch.array_rd_sz  1048576
>         vfs.zfs.zfetch.max_idistance  67108864
>         vfs.zfs.zfetch.max_distance  8388608
>         vfs.zfs.zfetch.min_sec_reap  2
>         vfs.zfs.zfetch.max_streams  8
>         vfs.zfs.prefetch_disable  0
>         vfs.zfs.delay_scale  500000
>         vfs.zfs.delay_min_dirty_percent  60
>         vfs.zfs.dirty_data_sync_pct  20
>         vfs.zfs.dirty_data_max_percent  10
>         vfs.zfs.dirty_data_max_max  4294967296
>         vfs.zfs.dirty_data_max  4294967296
>         vfs.zfs.max_recordsize  1048576
>         vfs.zfs.default_ibs  17
>         vfs.zfs.default_bs  9
>         vfs.zfs.send_holes_without_birth_time  1
>         vfs.zfs.mdcomp_disable  0
>         vfs.zfs.per_txg_dirty_frees_percent  5
>         vfs.zfs.nopwrite_enabled  1
>         vfs.zfs.dedup.prefetch  1
>         vfs.zfs.dbuf_cache_lowater_pct  10
>         vfs.zfs.dbuf_cache_hiwater_pct  10
>         vfs.zfs.dbuf_metadata_cache_overflow  0
>         vfs.zfs.dbuf_metadata_cache_shift  6
>         vfs.zfs.dbuf_cache_shift  5
>         vfs.zfs.dbuf_metadata_cache_max_bytes  1025282816
>         vfs.zfs.dbuf_cache_max_bytes  2050565632
>         vfs.zfs.arc_min_prescient_prefetch_ms  6
>         vfs.zfs.arc_min_prefetch_ms  1
>         vfs.zfs.l2c_only_size  0
>         vfs.zfs.mfu_ghost_data_esize  7778263552
>         vfs.zfs.mfu_ghost_metadata_esize  16851792896
>         vfs.zfs.mfu_ghost_size  24630056448
>         vfs.zfs.mfu_data_esize  3059418112
>         vfs.zfs.mfu_metadata_esize  28641792
>         vfs.zfs.mfu_size  6399023104
>         vfs.zfs.mru_ghost_data_esize  2199812096
>         vfs.zfs.mru_ghost_metadata_esize  6289682432
>         vfs.zfs.mru_ghost_size  8489494528
>         vfs.zfs.mru_data_esize  22781456384
>         vfs.zfs.mru_metadata_esize  309155840
>         vfs.zfs.mru_size  23847875584
>         vfs.zfs.anon_data_esize  0
>         vfs.zfs.anon_metadata_esize  0
>         vfs.zfs.anon_size  8556544
>         vfs.zfs.l2arc_norw  1
>         vfs.zfs.l2arc_feed_again  1
>         vfs.zfs.l2arc_noprefetch  1
>         vfs.zfs.l2arc_feed_min_ms  200
>         vfs.zfs.l2arc_feed_secs  1
>         vfs.zfs.l2arc_headroom  2
>         vfs.zfs.l2arc_write_boost  8388608
>         vfs.zfs.l2arc_write_max  8388608
>         vfs.zfs.arc_meta_strategy  1
>         vfs.zfs.arc_meta_limit  15833624576
>         vfs.zfs.arc_free_target  346902
>         vfs.zfs.arc_kmem_cache_reap_retry_ms  1000
>         vfs.zfs.compressed_arc_enabled  1
>         vfs.zfs.arc_grow_retry  60
>         vfs.zfs.arc_shrink_shift  7
>         vfs.zfs.arc_average_blocksize  8192
>         vfs.zfs.arc_no_grow_shift  5
>         vfs.zfs.arc_min  8202262528
>         vfs.zfs.arc_max  39334498304
>         vfs.zfs.abd_chunk_size  4096
>         vfs.zfs.abd_scatter_enabled  1
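
An editorial aside, not from the thread: the cache-miss breakdown quoted
above is dominated by demand metadata (95.02%), and with ~40,000
snapshots much of the time in a plain "zfs list -t snapshot" goes to
fetching per-snapshot space-accounting properties. A sketch of two
commonly suggested first steps on FreeBSD 12.x; the arc_meta_limit value
below is purely illustrative, not a recommendation:

    # Listing names only skips the per-snapshot property lookups and is
    # usually much faster than the default multi-column output.
    zfs list -t snapshot -o name -s name

    # Allow metadata a larger share of the ARC (example value: 24 GiB).
    # If it helps, the same setting can be persisted in /etc/sysctl.conf.
    sysctl vfs.zfs.arc_meta_limit=25769803776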
From: Kyle Evans <kevans@freebsd.org>
To: James Wright
Cc: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Date: Mon, 20 Jul 2020 08:26:20 -0500
Subject: Re: ls colour (COLORTERM / CLICOLOR)

On Sat, Jul 18, 2020 at 7:51 PM James Wright wrote:
>
> Updated to 12.1-STABLE r363215 a few days ago (previous build was
> circa 1st June) but seem to have lost "ls" colour output with
> "COLORTERM=yes" set in my env.
>
> Setting "CLICOLOR=yes" seems to enable it again; however, the man page
> states that setting either should work?

Hi,

Indeed, sorry for the flip-flopping. The short version of the situation
is that I had flipped ls(1) to --color=auto by default based on a
misunderstanding of defaults elsewhere, caused by shell aliases that I
hadn't realized were in use. The ls(1) binary is historically and almost
universally configured not to color by default, even where color support
exists, so you should instead use an appropriate shell alias, such as
ls=`ls -G` or `ls --color=auto`.

I can see where the manpage could describe the differences a little
better. CLICOLOR (on FreeBSD) has historically meant that we'll enable
color if the terminal supports it, and setting it has the same effect as
the shell alias above. COLORTERM is less aggressive and won't imply any
specific --color option; you would still need --color=auto to go with it
for it to have any effect.

Thanks,

Kyle Evans
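
For reference, the setup Kyle describes would look roughly like this in
practice (a sketch; which startup file applies depends on the user's
shell):

    # POSIX sh / bash, e.g. in ~/.shrc or ~/.profile:
    export CLICOLOR=1            # FreeBSD ls: color when the terminal supports it
    alias ls='ls --color=auto'   # or: alias ls='ls -G'

    # csh / tcsh, e.g. in ~/.cshrc:
    setenv CLICOLOR 1
    alias ls 'ls --color=auto'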