Date:      Wed, 22 Jul 2020 13:24:28 +0200 (CEST)
From:      Ronald Klop <ronald-lists@klop.ws>
To:        FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: zfs meta data slowness
Message-ID:  <1473979028.2.1595417068381@localhost>
In-Reply-To: <975657af-ccac-bbd1-e22b-86270c624226@sentex.net>
References:  <b6d670e6-6e35-a1f7-7daf-76b458f26e30@sentex.net> <1949194763.1.1595250243575@localhost> <975657af-ccac-bbd1-e22b-86270c624226@sentex.net>

 
From: mike tancsa <mike@sentex.net>
Date: Tuesday, July 21, 2020 21:37
To: Ronald Klop <ronald-lists@klop.ws>, FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject: Re: zfs meta data slowness
> 
> Hi,
>     Thanks for the response. Reply in line
> 
> On 7/20/2020 9:04 AM, Ronald Klop wrote:
> > Hi,
> >
> > My first suggestion would be to remove a lot of snapshots. But that may
> > not match your business case.
> 
> As it's a backup server, it's sort of the point to have all those snapshots.
> 
> 
> > Maybe you can provide more information about your setup:
> > Amount of RAM, CPU?
> 64G, Xeon(R) CPU E3-1240 v6 @ 3.70GHz
> > output of "zpool status"
> # zpool status -x
> 
> all pools are healthy
>  

That is nice to know.
Instead of "zpool status -x", the full output of "zpool status" is very interesting, and "zpool list" as well. That gives the reader information about your setup, which helps in thinking along about the possible cause.
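For example, something like this (the "-v" on "zpool list" is optional and just adds per-vdev detail):

# zpool status
# zpool list -v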

But as somebody else mentioned, profiling the kernel might be the best thing to do. DTrace can be used for that, although I don't know those commands by heart.
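Perhaps something along these lines is a starting point (untested here; it samples kernel stacks at ~997 Hz for 60 seconds and then prints the hottest ones; adjust the duration as needed):

# dtrace -x stackframes=100 -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-60s { exit(0); }'

Or restricted to the zfs command itself (stop it with Ctrl-C to print the aggregated stacks):

# dtrace -n 'profile-997 /arg0 && execname == "zfs"/ { @[stack()] = count(); }'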

If I remember correctly, there is an optimization for "zfs list -o name". It is much faster because it does not fetch extra information from the disks.
See: https://svnweb.freebsd.org/base?view=revision&revision=230438
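So if it is mostly the snapshot names you need, a name-only listing might already be worth timing as a comparison (untested here; "-o name" restricts the output to just the name property, so the extra per-snapshot lookups should be skipped):

# time zfs list -t snapshot -o name > /dev/null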

Regards,
Ronald.

 > 
> > output of "zfs list" if possible to share
> 
> it's a big list
> 
> # zfs list | wc
>      824    4120  107511
> 
> 
> > Type of disks/ssds?
> old school Device Model:     WDC WD80EFAX-68KNBN0
> > What is the load of the system? I/O per second, etc.
> it's not CPU bound; disks are sometimes running at 100% based on gstat,
> but not always
> > Do you use dedup, GELI?
> 
> no and no
> 
> 
> > Something else special about the setup.
> > output of "top -b"
> >
> 
> ports are right now being built in a VM, but the problems (zrepl hanging
> and "zfs list -t snapshot" taking forever) happen regardless
> 
>   PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
>  4439 root         12  40   20  6167M  5762M kqread   3 535:13 200.00% bhyve
> 98783 root          2  21    0    16M  5136K hdr->b   4   0:01   1.95% zfs
> 76489 root         21  23    0   738M    54M uwait    1   2:18   0.88% zrepl
> 98784 root          1  21    0    13M  3832K piperd   3   0:01   0.59% zfs
> 99563 root          1  20    0    13M  4136K zio->i   4   0:00   0.39% zfs
> 16136 root         18  25    0   705M    56M uwait    3  29:58   0.00% zrepl-freebsd-amd64
>  1845 root          1  20    0    12M  3772K nanslp   7   5:54   0.00% ossec-syscheckd
>  1567 root          1  20    0    11M  2744K select   0   2:22   0.00% syslogd
>  1737 root         32  20    0    11M  2844K rpcsvc   6   1:40   0.00% nfsd
>  1660 root          1 -52   r0    11M    11M nanslp   5   1:18   0.00% watchdogd
>  1434 root          1  20    0  9988K   988K select   3   0:27   0.00% devd
>  2435 mdtancsa      1  20    0    20M  8008K select   0   0:21   0.00% sshd
>  1754 root          3  20    0    18M  3556K select   1   0:11   0.00% apcupsd
>  5917 root          1  20    0    11M  2672K select   2   0:06   0.00% script
>  1449 _pflogd       1  20    0    12M  3572K bpf      3   0:05   0.00% pflogd
> 
>     ---Mike
> 
> > That kind of information.
> >
> > Regards,
> > Ronald.
> >
> >
> > From: mike tancsa <mike@sentex.net>
> > Date: Sunday, July 19, 2020 16:17
> > To: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
> > Subject: zfs meta data slowness
> >>
> >> Are there any tweaks that can be done to speed up or improve zfs
> >> metadata performance ? I have a backup server with a lot of snapshots
> >> (40,000)  and just doing a listing can take a great deal of time.  Best
> >> case scenario is about 24 seconds, worst case, I have seen it up to 15
> >> minutes.  (FreeBSD 12.1-STABLE r363078)
> >>
> >>
> >> ARC Efficiency:                                 79.33b
> >>         Cache Hit Ratio:                92.81%  73.62b
> >>         Cache Miss Ratio:               7.19%   5.71b
> >>         Actual Hit Ratio:               92.78%  73.60b
> >>
> >>         Data Demand Efficiency:         96.47%  461.91m
> >>         Data Prefetch Efficiency:       1.00%   262.73m
> >>
> >>         CACHE HITS BY CACHE LIST:
> >>           Anonymously Used:             0.01%   3.86m
> >>           Most Recently Used:           3.91%   2.88b
> >>           Most Frequently Used:         96.06%  70.72b
> >>           Most Recently Used Ghost:     0.01%   5.31m
> >>           Most Frequently Used Ghost:   0.01%   10.47m
> >>
> >>         CACHE HITS BY DATA TYPE:
> >>           Demand Data:                  0.61%   445.60m
> >>           Prefetch Data:                0.00%   2.63m
> >>           Demand Metadata:              99.36%  73.15b
> >>           Prefetch Metadata:            0.03%   21.00m
> >>
> >>         CACHE MISSES BY DATA TYPE:
> >>           Demand Data:                  0.29%   16.31m
> >>           Prefetch Data:                4.56%   260.10m
> >>           Demand Metadata:              95.02%  5.42b
> >>           Prefetch Metadata:            0.14%   7.75m
> >>
> >>
> >> Other than increasing the metadata max, I haven't really changed any
> >> tunables
> >>
> >>
> >> ZFS Tunables (sysctl):
> >>         kern.maxusers                           4416
> >>         vm.kmem_size                            66691842048
> >>         vm.kmem_size_scale                      1
> >>         vm.kmem_size_min                        0
> >>         vm.kmem_size_max                        1319413950874
> >>         vfs.zfs.trim.max_interval               1
> >>         vfs.zfs.trim.timeout                    30
> >>         vfs.zfs.trim.txg_delay                  32
> >>         vfs.zfs.trim.enabled                    1
> >>         vfs.zfs.vol.immediate_write_sz          32768
> >>         vfs.zfs.vol.unmap_sync_enabled          0
> >>         vfs.zfs.vol.unmap_enabled               1
> >>         vfs.zfs.vol.recursive                   0
> >>         vfs.zfs.vol.mode                        1
> >>         vfs.zfs.version.zpl                     5
> >>         vfs.zfs.version.spa                     5000
> >>         vfs.zfs.version.acl                     1
> >>         vfs.zfs.version.ioctl                   7
> >>         vfs.zfs.debug                           0
> >>         vfs.zfs.super_owner                     0
> >>         vfs.zfs.immediate_write_sz              32768
> >>         vfs.zfs.sync_pass_rewrite               2
> >>         vfs.zfs.sync_pass_dont_compress         5
> >>         vfs.zfs.sync_pass_deferred_free         2
> >>         vfs.zfs.zio.dva_throttle_enabled        1
> >>         vfs.zfs.zio.exclude_metadata            0
> >>         vfs.zfs.zio.use_uma                     1
> >>         vfs.zfs.zio.taskq_batch_pct             75
> >>         vfs.zfs.zil_maxblocksize                131072
> >>         vfs.zfs.zil_slog_bulk                   786432
> >>         vfs.zfs.zil_nocacheflush                0
> >>         vfs.zfs.zil_replay_disable              0
> >>         vfs.zfs.cache_flush_disable             0
> >>         vfs.zfs.standard_sm_blksz               131072
> >>         vfs.zfs.dtl_sm_blksz                    4096
> >>         vfs.zfs.min_auto_ashift                 9
> >>         vfs.zfs.max_auto_ashift                 13
> >>         vfs.zfs.vdev.trim_max_pending           10000
> >>         vfs.zfs.vdev.bio_delete_disable         0
> >>         vfs.zfs.vdev.bio_flush_disable          0
> >>         vfs.zfs.vdev.def_queue_depth            32
> >>         vfs.zfs.vdev.queue_depth_pct            1000
> >>         vfs.zfs.vdev.write_gap_limit            4096
> >>         vfs.zfs.vdev.read_gap_limit             32768
> >>         vfs.zfs.vdev.aggregation_limit_non_rotating 131072
> >>         vfs.zfs.vdev.aggregation_limit          1048576
> >>         vfs.zfs.vdev.initializing_max_active    1
> >>         vfs.zfs.vdev.initializing_min_active    1
> >>         vfs.zfs.vdev.removal_max_active         2
> >>         vfs.zfs.vdev.removal_min_active         1
> >>         vfs.zfs.vdev.trim_max_active            64
> >>         vfs.zfs.vdev.trim_min_active            1
> >>         vfs.zfs.vdev.scrub_max_active           2
> >>         vfs.zfs.vdev.scrub_min_active           1
> >>         vfs.zfs.vdev.async_write_max_active     10
> >>         vfs.zfs.vdev.async_write_min_active     1
> >>         vfs.zfs.vdev.async_read_max_active      3
> >>         vfs.zfs.vdev.async_read_min_active      1
> >>         vfs.zfs.vdev.sync_write_max_active      10
> >>         vfs.zfs.vdev.sync_write_min_active      10
> >>         vfs.zfs.vdev.sync_read_max_active       10
> >>         vfs.zfs.vdev.sync_read_min_active       10
> >>         vfs.zfs.vdev.max_active                 1000
> >>         vfs.zfs.vdev.async_write_active_max_dirty_percent 60
> >>         vfs.zfs.vdev.async_write_active_min_dirty_percent 30
> >>         vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
> >>         vfs.zfs.vdev.mirror.non_rotating_inc    0
> >>         vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
> >>         vfs.zfs.vdev.mirror.rotating_seek_inc   5
> >>         vfs.zfs.vdev.mirror.rotating_inc        0
> >>         vfs.zfs.vdev.trim_on_init               1
> >>         vfs.zfs.vdev.cache.bshift               16
> >>         vfs.zfs.vdev.cache.size                 0
> >>         vfs.zfs.vdev.cache.max                  16384
> >>         vfs.zfs.vdev.validate_skip              0
> >>         vfs.zfs.vdev.max_ms_shift               34
> >>         vfs.zfs.vdev.default_ms_shift           29
> >>         vfs.zfs.vdev.max_ms_count_limit         131072
> >>         vfs.zfs.vdev.min_ms_count               16
> >>         vfs.zfs.vdev.default_ms_count           200
> >>         vfs.zfs.txg.timeout                     5
> >>         vfs.zfs.space_map_ibs                   14
> >>         vfs.zfs.special_class_metadata_reserve_pct 25
> >>         vfs.zfs.user_indirect_is_special        1
> >>         vfs.zfs.ddt_data_is_special             1
> >>         vfs.zfs.spa_allocators                  4
> >>         vfs.zfs.spa_min_slop                    134217728
> >>         vfs.zfs.spa_slop_shift                  5
> >>         vfs.zfs.spa_asize_inflation             24
> >>         vfs.zfs.deadman_enabled                 1
> >>         vfs.zfs.deadman_checktime_ms            5000
> >>         vfs.zfs.deadman_synctime_ms             1000000
> >>         vfs.zfs.debugflags                      0
> >>         vfs.zfs.recover                         0
> >>         vfs.zfs.spa_load_verify_data            1
> >>         vfs.zfs.spa_load_verify_metadata        1
> >>         vfs.zfs.spa_load_verify_maxinflight     10000
> >>         vfs.zfs.max_missing_tvds_scan           0
> >>         vfs.zfs.max_missing_tvds_cachefile      2
> >>         vfs.zfs.max_missing_tvds                0
> >>         vfs.zfs.spa_load_print_vdev_tree        0
> >>         vfs.zfs.ccw_retry_interval              300
> >>         vfs.zfs.check_hostid                    1
> >>         vfs.zfs.multihost_fail_intervals        10
> >>         vfs.zfs.multihost_import_intervals      20
> >>         vfs.zfs.multihost_interval              1000
> >>         vfs.zfs.mg_fragmentation_threshold      85
> >>         vfs.zfs.mg_noalloc_threshold            0
> >>         vfs.zfs.condense_pct                    200
> >>         vfs.zfs.metaslab_sm_blksz               4096
> >>         vfs.zfs.metaslab.bias_enabled           1
> >>         vfs.zfs.metaslab.lba_weighting_enabled  1
> >>         vfs.zfs.metaslab.fragmentation_factor_enabled 1
> >>         vfs.zfs.metaslab.preload_enabled        1
> >>         vfs.zfs.metaslab.preload_limit          3
> >>         vfs.zfs.metaslab.unload_delay           8
> >>         vfs.zfs.metaslab.load_pct               50
> >>         vfs.zfs.metaslab.min_alloc_size         33554432
> >>         vfs.zfs.metaslab.df_free_pct            4
> >>         vfs.zfs.metaslab.df_alloc_threshold     131072
> >>         vfs.zfs.metaslab.debug_unload           0
> >>         vfs.zfs.metaslab.debug_load             0
> >>         vfs.zfs.metaslab.fragmentation_threshold 70
> >>         vfs.zfs.metaslab.force_ganging          16777217
> >>         vfs.zfs.free_bpobj_enabled              1
> >>         vfs.zfs.free_max_blocks                 -1
> >>         vfs.zfs.zfs_scan_checkpoint_interval    7200
> >>         vfs.zfs.zfs_scan_legacy                 0
> >>         vfs.zfs.no_scrub_prefetch               0
> >>         vfs.zfs.no_scrub_io                     0
> >>         vfs.zfs.resilver_min_time_ms            3000
> >>         vfs.zfs.free_min_time_ms                1000
> >>         vfs.zfs.scan_min_time_ms                1000
> >>         vfs.zfs.scan_idle                       50
> >>         vfs.zfs.scrub_delay                     4
> >>         vfs.zfs.resilver_delay                  2
> >>         vfs.zfs.zfetch.array_rd_sz              1048576
> >>         vfs.zfs.zfetch.max_idistance            67108864
> >>         vfs.zfs.zfetch.max_distance             8388608
> >>         vfs.zfs.zfetch.min_sec_reap             2
> >>         vfs.zfs.zfetch.max_streams              8
> >>         vfs.zfs.prefetch_disable                0
> >>         vfs.zfs.delay_scale                     500000
> >>         vfs.zfs.delay_min_dirty_percent         60
> >>         vfs.zfs.dirty_data_sync_pct             20
> >>         vfs.zfs.dirty_data_max_percent          10
> >>         vfs.zfs.dirty_data_max_max              4294967296
> >>         vfs.zfs.dirty_data_max                  4294967296
> >>         vfs.zfs.max_recordsize                  1048576
> >>         vfs.zfs.default_ibs                     17
> >>         vfs.zfs.default_bs                      9
> >>         vfs.zfs.send_holes_without_birth_time   1
> >>         vfs.zfs.mdcomp_disable                  0
> >>         vfs.zfs.per_txg_dirty_frees_percent     5
> >>         vfs.zfs.nopwrite_enabled                1
> >>         vfs.zfs.dedup.prefetch                  1
> >>         vfs.zfs.dbuf_cache_lowater_pct          10
> >>         vfs.zfs.dbuf_cache_hiwater_pct          10
> >>         vfs.zfs.dbuf_metadata_cache_overflow    0
> >>         vfs.zfs.dbuf_metadata_cache_shift       6
> >>         vfs.zfs.dbuf_cache_shift                5
> >>         vfs.zfs.dbuf_metadata_cache_max_bytes   1025282816
> >>         vfs.zfs.dbuf_cache_max_bytes            2050565632
> >>         vfs.zfs.arc_min_prescient_prefetch_ms   6
> >>         vfs.zfs.arc_min_prefetch_ms             1
> >>         vfs.zfs.l2c_only_size                   0
> >>         vfs.zfs.mfu_ghost_data_esize            7778263552
> >>         vfs.zfs.mfu_ghost_metadata_esize        16851792896
> >>         vfs.zfs.mfu_ghost_size                  24630056448
> >>         vfs.zfs.mfu_data_esize                  3059418112
> >>         vfs.zfs.mfu_metadata_esize              28641792
> >>         vfs.zfs.mfu_size                        6399023104
> >>         vfs.zfs.mru_ghost_data_esize            2199812096
> >>         vfs.zfs.mru_ghost_metadata_esize        6289682432
> >>         vfs.zfs.mru_ghost_size                  8489494528
> >>         vfs.zfs.mru_data_esize                  22781456384
> >>         vfs.zfs.mru_metadata_esize              309155840
> >>         vfs.zfs.mru_size                        23847875584
> >>         vfs.zfs.anon_data_esize                 0
> >>         vfs.zfs.anon_metadata_esize             0
> >>         vfs.zfs.anon_size                       8556544
> >>         vfs.zfs.l2arc_norw                      1
> >>         vfs.zfs.l2arc_feed_again                1
> >>         vfs.zfs.l2arc_noprefetch                1
> >>         vfs.zfs.l2arc_feed_min_ms               200
> >>         vfs.zfs.l2arc_feed_secs                 1
> >>         vfs.zfs.l2arc_headroom                  2
> >>         vfs.zfs.l2arc_write_boost               8388608
> >>         vfs.zfs.l2arc_write_max                 8388608
> >>         vfs.zfs.arc_meta_strategy               1
> >>         vfs.zfs.arc_meta_limit                  15833624576
> >>         vfs.zfs.arc_free_target                 346902
> >>         vfs.zfs.arc_kmem_cache_reap_retry_ms    1000
> >>         vfs.zfs.compressed_arc_enabled          1
> >>         vfs.zfs.arc_grow_retry                  60
> >>         vfs.zfs.arc_shrink_shift                7
> >>         vfs.zfs.arc_average_blocksize           8192
> >>         vfs.zfs.arc_no_grow_shift               5
> >>         vfs.zfs.arc_min                         8202262528
> >>         vfs.zfs.arc_max                         39334498304
> >>         vfs.zfs.abd_chunk_size                  4096
> >>         vfs.zfs.abd_scatter_enabled             1
> >>
> >
> 

Date:      Wed, 22 Jul 2020 08:56:54 -0400
From:      mike tancsa <mike@sentex.net>
To:        Rick Macklem <rmacklem@uoguelph.ca>, Ronald Klop <ronald-lists@klop.ws>,
           FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: zfs meta data slowness
Message-ID:  <963d17ae-d3bb-6824-4ef8-dbe70dd13791@sentex.net>
In-Reply-To: <QB1PR01MB3364420AA4C45D2BDD089C0DDD790@QB1PR01MB3364.CANPRD01.PROD.OUTLOOK.COM>
References:  <b6d670e6-6e35-a1f7-7daf-76b458f26e30@sentex.net> <1949194763.1.1595250243575@localhost> <975657af-ccac-bbd1-e22b-86270c624226@sentex.net> <QB1PR01MB3364420AA4C45D2BDD089C0DDD790@QB1PR01MB3364.CANPRD01.PROD.OUTLOOK.COM>

On 7/22/2020 1:04 AM, Rick Macklem wrote:
> mike tancsa wrote:
>> Hi,
>>    Thanks for the response. Reply in line
>>
>> On 7/20/2020 9:04 AM, Ronald Klop wrote:
>>> Hi,
>>>
>>> My first suggestion would be to remove a lot of snapshots. But that may
>>> not match your business case.
>> As it's a backup server, it's sort of the point to have all those snapshots.
> I'm the last guy who should be commenting on ZFS, since I never use it.
> However, it is my understanding that ZFS "pseudo automounts" each
> snapshot when you go there, so I think that might be what is taking
> so long (ie. not really meta data).
>
Thanks Rick, in this case, it's just listing snapshots from the command line

zfs list -t snapshot

that is taking forever.

Best case scenario: after the box boots up and the command has been run once,
it will take about 25 seconds,

but when the box is receiving zfs streams, it really slows down and can
take anywhere up to 30 minutes.


0{backup4}# time zfs list -t snapshot > /tmp/snap.out
1.839u 23.211s 3:11.69 13.0%    71+178k 2504801+38io 0pf+0w
0{backup4}# time zfs list -t snapshot > /tmp/snap.out
1.817u 23.612s 0:25.47 99.8%    71+178k 2472088+38io 0pf+0w
0{backup4}# time zfs list -t snapshot > /tmp/snap.out
2.040u 23.314s 0:25.40 99.8%    71+177k 2472105+38io 0pf+0w
0{backup4}#
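
For comparison, per the "zfs list -o name" optimization mentioned earlier in the thread, a name-only listing might be worth timing as well, since it should skip the per-snapshot property lookups (the output file name below is just an example):

0{backup4}# time zfs list -t snapshot -o name > /tmp/snapnames.out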





