From owner-freebsd-stable@freebsd.org Wed Jul 22 05:05:06 2020
From: Rick Macklem <rmacklem@uoguelph.ca>
To: mike tancsa, Ronald Klop, FreeBSD-STABLE Mailing List
Subject: Re: zfs meta data slowness
Date: Wed, 22 Jul 2020 05:04:54 +0000
References: <1949194763.1.1595250243575@localhost>, <975657af-ccac-bbd1-e22b-86270c624226@sentex.net>
In-Reply-To: <975657af-ccac-bbd1-e22b-86270c624226@sentex.net>

mike tancsa wrote:
>Hi,
>    Thanks for the response.  Reply in line
>
>On 7/20/2020 9:04 AM, Ronald Klop wrote:
>> Hi,
>>
>> My first suggestion would be to remove a lot of snapshots. But that may
>> not match your business case.
>
>As it's a backup server, it's sort of the point to have all those snapshots.

I'm the last guy who should be commenting on ZFS, since I never use it.
However, it is my understanding that ZFS "pseudo automounts" each
snapshot when you go there, so I think that might be what is taking
so long (i.e. not really metadata).

Of course I have no idea what might speed that up. I would be
tempted to look in ZFS for the "snapshot mounting code", in
case I could find an obvious problem...

rick
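
To see the automount behaviour Rick describes, something like the
following should work (a sketch only; the pool, dataset and snapshot
names here are made up):

  # Listing a snapshot directory under the hidden .zfs directory of a
  # dataset forces ZFS to mount that snapshot on first access.
  ls /backup/data/.zfs/snapshot/daily-2020-07-18

  # The freshly automounted snapshot should then show up in mount(8) output.
  mount | grep daily-2020-07-18

Each first visit pays that mount cost, which adds up with tens of
thousands of snapshots.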

> Maybe you can provide more information about your setup:
> Amount of RAM, CPU?
64G, Xeon(R) CPU E3-1240 v6 @ 3.70GHz
> output of "zpool status"
# zpool status -x
all pools are healthy

> output of "zfs list" if possible to share

it's a big list

# zfs list | wc
     824    4120  107511

> Type of disks/ssds?
old school Device Model: WDC WD80EFAX-68KNBN0
> What is the load of the system? I/O per second, etc.
it's not CPU bound; disks are sometimes running at 100% based on gstat,
but not always
> Do you use dedup, GELI?

no and no

> Something else special about the setup.
> output of "top -b"
>

ports are right now being built in a VM, but the problem (zrepl hanging,
and "zfs list -t snapshot" taking forever) happens regardless

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 4439 root         12  40   20  6167M  5762M kqread   3 535:13 200.00% bhyve
98783 root          2  21    0    16M  5136K hdr->b   4   0:01   1.95% zfs
76489 root         21  23    0   738M    54M uwait    1   2:18   0.88% zrepl
98784 root          1  21    0    13M  3832K piperd   3   0:01   0.59% zfs
99563 root          1  20    0    13M  4136K zio->i   4   0:00   0.39% zfs
16136 root         18  25    0   705M    56M uwait    3  29:58   0.00% zrepl-freebsd-amd64
 1845 root          1  20    0    12M  3772K nanslp   7   5:54   0.00% ossec-syscheckd
 1567 root          1  20    0    11M  2744K select   0   2:22   0.00% syslogd
 1737 root         32  20    0    11M  2844K rpcsvc   6   1:40   0.00% nfsd
 1660 root          1 -52   r0    11M    11M nanslp   5   1:18   0.00% watchdogd
 1434 root          1  20    0  9988K   988K select   3   0:27   0.00% devd
 2435 mdtancsa      1  20    0    20M  8008K select   0   0:21   0.00% sshd
 1754 root          3  20    0    18M  3556K select   1   0:11   0.00% apcupsd
 5917 root          1  20    0    11M  2672K select   2   0:06   0.00% script
 1449 _pflogd       1  20    0    12M  3572K bpf      3   0:05   0.00% pflogd

---Mike

> That kind of information.
>
> Regards,
> Ronald.
>
>
> From: mike tancsa
> Date: Sunday, 19 July 2020 16:17
> To: FreeBSD-STABLE Mailing List
> Subject: zfs meta data slowness
>>
>> Are there any tweaks that can be done to speed up or improve zfs
>> metadata performance? I have a backup server with a lot of snapshots
>> (40,000) and just doing a listing can take a great deal of time. Best
>> case scenario is about 24 seconds; worst case, I have seen it up to 15
>> minutes. (FreeBSD 12.1-STABLE r363078)
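
One cheap experiment for the slow listing (a sketch, not a definitive
fix): by default "zfs list" gathers several properties for every
snapshot, so restricting the output to the name column usually cuts the
work substantially:

  # Print only snapshot names, sorted by name; this skips most of the
  # per-snapshot property collection done for the default columns.
  time zfs list -t snapshot -o name -s name

  # Scoping the walk to a single dataset (the dataset name here is
  # hypothetical) narrows it further.
  time zfs list -d 1 -t snapshot -o name backuppool/somedataset

If the name-only listing is still slow, the time is going into walking
the snapshots themselves rather than fetching their properties.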

>> ARC Efficiency:                                 79.33b
>>         Cache Hit Ratio:                92.81%  73.62b
>>         Cache Miss Ratio:                7.19%   5.71b
>>         Actual Hit Ratio:               92.78%  73.60b
>>
>>         Data Demand Efficiency:         96.47%  461.91m
>>         Data Prefetch Efficiency:        1.00%  262.73m
>>
>>         CACHE HITS BY CACHE LIST:
>>           Anonymously Used:              0.01%  3.86m
>>           Most Recently Used:            3.91%  2.88b
>>           Most Frequently Used:         96.06%  70.72b
>>           Most Recently Used Ghost:      0.01%  5.31m
>>           Most Frequently Used Ghost:    0.01%  10.47m
>>
>>         CACHE HITS BY DATA TYPE:
>>           Demand Data:                   0.61%  445.60m
>>           Prefetch Data:                 0.00%  2.63m
>>           Demand Metadata:              99.36%  73.15b
>>           Prefetch Metadata:             0.03%  21.00m
>>
>>         CACHE MISSES BY DATA TYPE:
>>           Demand Data:                   0.29%  16.31m
>>           Prefetch Data:                 4.56%  260.10m
>>           Demand Metadata:              95.02%  5.42b
>>           Prefetch Metadata:             0.14%  7.75m
>>
>> Other than increasing the metadata max, I haven't really changed any
>> tunables.
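
Since the metadata max has already been raised, one check worth making
before digging through the full tunables dump below is whether the ARC
is actually hitting its metadata ceiling. On a 12.x system the counters
should be visible via sysctl (a sketch; kstat names as I recall them):

  # Current ARC metadata usage versus the configured ceiling.
  sysctl kstat.zfs.misc.arcstats.arc_meta_used
  sysctl kstat.zfs.misc.arcstats.arc_meta_limit

If arc_meta_used sits pinned at arc_meta_limit, snapshot metadata is
being evicted and re-read from disk, which would match the slow listings.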

>> ZFS Tunables (sysctl):
>>     kern.maxusers                                   4416
>>     vm.kmem_size                                    66691842048
>>     vm.kmem_size_scale                              1
>>     vm.kmem_size_min                                0
>>     vm.kmem_size_max                                1319413950874
>>     vfs.zfs.trim.max_interval                       1
>>     vfs.zfs.trim.timeout                            30
>>     vfs.zfs.trim.txg_delay                          32
>>     vfs.zfs.trim.enabled                            1
>>     vfs.zfs.vol.immediate_write_sz                  32768
>>     vfs.zfs.vol.unmap_sync_enabled                  0
>>     vfs.zfs.vol.unmap_enabled                       1
>>     vfs.zfs.vol.recursive                           0
>>     vfs.zfs.vol.mode                                1
>>     vfs.zfs.version.zpl                             5
>>     vfs.zfs.version.spa                             5000
>>     vfs.zfs.version.acl                             1
>>     vfs.zfs.version.ioctl                           7
>>     vfs.zfs.debug                                   0
>>     vfs.zfs.super_owner                             0
>>     vfs.zfs.immediate_write_sz                      32768
>>     vfs.zfs.sync_pass_rewrite                       2
>>     vfs.zfs.sync_pass_dont_compress                 5
>>     vfs.zfs.sync_pass_deferred_free                 2
>>     vfs.zfs.zio.dva_throttle_enabled                1
>>     vfs.zfs.zio.exclude_metadata                    0
>>     vfs.zfs.zio.use_uma                             1
>>     vfs.zfs.zio.taskq_batch_pct                     75
>>     vfs.zfs.zil_maxblocksize                        131072
>>     vfs.zfs.zil_slog_bulk                           786432
>>     vfs.zfs.zil_nocacheflush                        0
>>     vfs.zfs.zil_replay_disable                      0
>>     vfs.zfs.cache_flush_disable                     0
>>     vfs.zfs.standard_sm_blksz                       131072
>>     vfs.zfs.dtl_sm_blksz                            4096
>>     vfs.zfs.min_auto_ashift                         9
>>     vfs.zfs.max_auto_ashift                         13
>>     vfs.zfs.vdev.trim_max_pending                   10000
>>     vfs.zfs.vdev.bio_delete_disable                 0
>>     vfs.zfs.vdev.bio_flush_disable                  0
>>     vfs.zfs.vdev.def_queue_depth                    32
>>     vfs.zfs.vdev.queue_depth_pct                    1000
>>     vfs.zfs.vdev.write_gap_limit                    4096
>>     vfs.zfs.vdev.read_gap_limit                     32768
>>     vfs.zfs.vdev.aggregation_limit_non_rotating     131072
>>     vfs.zfs.vdev.aggregation_limit                  1048576
>>     vfs.zfs.vdev.initializing_max_active            1
>>     vfs.zfs.vdev.initializing_min_active            1
>>     vfs.zfs.vdev.removal_max_active                 2
>>     vfs.zfs.vdev.removal_min_active                 1
>>     vfs.zfs.vdev.trim_max_active                    64
>>     vfs.zfs.vdev.trim_min_active                    1
>>     vfs.zfs.vdev.scrub_max_active                   2
>>     vfs.zfs.vdev.scrub_min_active                   1
>>     vfs.zfs.vdev.async_write_max_active             10
>>     vfs.zfs.vdev.async_write_min_active             1
>>     vfs.zfs.vdev.async_read_max_active              3
>>     vfs.zfs.vdev.async_read_min_active              1
>>     vfs.zfs.vdev.sync_write_max_active              10
>>     vfs.zfs.vdev.sync_write_min_active              10
>>     vfs.zfs.vdev.sync_read_max_active               10
>>     vfs.zfs.vdev.sync_read_min_active               10
>>     vfs.zfs.vdev.max_active                         1000
>>     vfs.zfs.vdev.async_write_active_max_dirty_percent 60
>>     vfs.zfs.vdev.async_write_active_min_dirty_percent 30
>>     vfs.zfs.vdev.mirror.non_rotating_seek_inc       1
>>     vfs.zfs.vdev.mirror.non_rotating_inc            0
>>     vfs.zfs.vdev.mirror.rotating_seek_offset        1048576
>>     vfs.zfs.vdev.mirror.rotating_seek_inc           5
>>     vfs.zfs.vdev.mirror.rotating_inc                0
>>     vfs.zfs.vdev.trim_on_init                       1
>>     vfs.zfs.vdev.cache.bshift                       16
>>     vfs.zfs.vdev.cache.size                         0
>>     vfs.zfs.vdev.cache.max                          16384
>>     vfs.zfs.vdev.validate_skip                      0
>>     vfs.zfs.vdev.max_ms_shift                       34
>>     vfs.zfs.vdev.default_ms_shift                   29
>>     vfs.zfs.vdev.max_ms_count_limit                 131072
>>     vfs.zfs.vdev.min_ms_count                       16
>>     vfs.zfs.vdev.default_ms_count                   200
>>     vfs.zfs.txg.timeout                             5
>>     vfs.zfs.space_map_ibs                           14
>>     vfs.zfs.special_class_metadata_reserve_pct      25
>>     vfs.zfs.user_indirect_is_special                1
>>     vfs.zfs.ddt_data_is_special                     1
>>     vfs.zfs.spa_allocators                          4
>>     vfs.zfs.spa_min_slop                            134217728
>>     vfs.zfs.spa_slop_shift                          5
>>     vfs.zfs.spa_asize_inflation                     24
>>     vfs.zfs.deadman_enabled                         1
>>     vfs.zfs.deadman_checktime_ms                    5000
>>     vfs.zfs.deadman_synctime_ms                     1000000
>>     vfs.zfs.debugflags                              0
>>     vfs.zfs.recover                                 0
>>     vfs.zfs.spa_load_verify_data                    1
>>     vfs.zfs.spa_load_verify_metadata                1
>>     vfs.zfs.spa_load_verify_maxinflight             10000
>>     vfs.zfs.max_missing_tvds_scan                   0
>>     vfs.zfs.max_missing_tvds_cachefile              2
>>     vfs.zfs.max_missing_tvds                        0
>>     vfs.zfs.spa_load_print_vdev_tree                0
>>     vfs.zfs.ccw_retry_interval                      300
>>     vfs.zfs.check_hostid                            1
>>     vfs.zfs.multihost_fail_intervals                10
>>     vfs.zfs.multihost_import_intervals              20
>>     vfs.zfs.multihost_interval                      1000
>>     vfs.zfs.mg_fragmentation_threshold              85
>>     vfs.zfs.mg_noalloc_threshold                    0
>>     vfs.zfs.condense_pct                            200
>>     vfs.zfs.metaslab_sm_blksz                       4096
>>     vfs.zfs.metaslab.bias_enabled                   1
>>     vfs.zfs.metaslab.lba_weighting_enabled          1
>>     vfs.zfs.metaslab.fragmentation_factor_enabled   1
>>     vfs.zfs.metaslab.preload_enabled                1
>>     vfs.zfs.metaslab.preload_limit                  3
>>     vfs.zfs.metaslab.unload_delay                   8
>>     vfs.zfs.metaslab.load_pct                       50
>>     vfs.zfs.metaslab.min_alloc_size                 33554432
>>     vfs.zfs.metaslab.df_free_pct                    4
>>     vfs.zfs.metaslab.df_alloc_threshold             131072
>>     vfs.zfs.metaslab.debug_unload                   0
>>     vfs.zfs.metaslab.debug_load                     0
>>     vfs.zfs.metaslab.fragmentation_threshold        70
>>     vfs.zfs.metaslab.force_ganging                  16777217
>>     vfs.zfs.free_bpobj_enabled                      1
>>     vfs.zfs.free_max_blocks                         -1
>>     vfs.zfs.zfs_scan_checkpoint_interval            7200
>>     vfs.zfs.zfs_scan_legacy                         0
>>     vfs.zfs.no_scrub_prefetch                       0
>>     vfs.zfs.no_scrub_io                             0
>>     vfs.zfs.resilver_min_time_ms                    3000
>>     vfs.zfs.free_min_time_ms                        1000
>>     vfs.zfs.scan_min_time_ms                        1000
>>     vfs.zfs.scan_idle                               50
>>     vfs.zfs.scrub_delay                             4
>>     vfs.zfs.resilver_delay                          2
>>     vfs.zfs.zfetch.array_rd_sz                      1048576
>>     vfs.zfs.zfetch.max_idistance                    67108864
>>     vfs.zfs.zfetch.max_distance                     8388608
>>     vfs.zfs.zfetch.min_sec_reap                     2
>>     vfs.zfs.zfetch.max_streams                      8
>>     vfs.zfs.prefetch_disable                        0
>>     vfs.zfs.delay_scale                             500000
>>     vfs.zfs.delay_min_dirty_percent                 60
>>     vfs.zfs.dirty_data_sync_pct                     20
>>     vfs.zfs.dirty_data_max_percent                  10
>>     vfs.zfs.dirty_data_max_max                      4294967296
>>     vfs.zfs.dirty_data_max                          4294967296
>>     vfs.zfs.max_recordsize                          1048576
>>     vfs.zfs.default_ibs                             17
>>     vfs.zfs.default_bs                              9
>>     vfs.zfs.send_holes_without_birth_time           1
>>     vfs.zfs.mdcomp_disable                          0
>>     vfs.zfs.per_txg_dirty_frees_percent             5
>>     vfs.zfs.nopwrite_enabled                        1
>>     vfs.zfs.dedup.prefetch                          1
>>     vfs.zfs.dbuf_cache_lowater_pct                  10
>>     vfs.zfs.dbuf_cache_hiwater_pct                  10
>>     vfs.zfs.dbuf_metadata_cache_overflow            0
>>     vfs.zfs.dbuf_metadata_cache_shift               6
>>     vfs.zfs.dbuf_cache_shift                        5
>>     vfs.zfs.dbuf_metadata_cache_max_bytes           1025282816
>>     vfs.zfs.dbuf_cache_max_bytes                    2050565632
>>     vfs.zfs.arc_min_prescient_prefetch_ms           6
>>     vfs.zfs.arc_min_prefetch_ms                     1
>>     vfs.zfs.l2c_only_size                           0
>>     vfs.zfs.mfu_ghost_data_esize                    7778263552
>>     vfs.zfs.mfu_ghost_metadata_esize                16851792896
>>     vfs.zfs.mfu_ghost_size                          24630056448
>>     vfs.zfs.mfu_data_esize                          3059418112
>>     vfs.zfs.mfu_metadata_esize                      28641792
>>     vfs.zfs.mfu_size                                6399023104
>>     vfs.zfs.mru_ghost_data_esize                    2199812096
>>     vfs.zfs.mru_ghost_metadata_esize                6289682432
>>     vfs.zfs.mru_ghost_size                          8489494528
>>     vfs.zfs.mru_data_esize                          22781456384
>>     vfs.zfs.mru_metadata_esize                      309155840
>>     vfs.zfs.mru_size                                23847875584
>>     vfs.zfs.anon_data_esize                         0
>>     vfs.zfs.anon_metadata_esize                     0
>>     vfs.zfs.anon_size                               8556544
>>     vfs.zfs.l2arc_norw                              1
>>     vfs.zfs.l2arc_feed_again                        1
>>     vfs.zfs.l2arc_noprefetch                        1
>>     vfs.zfs.l2arc_feed_min_ms                       200
>>     vfs.zfs.l2arc_feed_secs                         1
>>     vfs.zfs.l2arc_headroom                          2
>>     vfs.zfs.l2arc_write_boost                       8388608
>>     vfs.zfs.l2arc_write_max                         8388608
>>     vfs.zfs.arc_meta_strategy                       1
>>     vfs.zfs.arc_meta_limit                          15833624576
>>     vfs.zfs.arc_free_target                         346902
>>     vfs.zfs.arc_kmem_cache_reap_retry_ms            1000
>>     vfs.zfs.compressed_arc_enabled                  1
>>     vfs.zfs.arc_grow_retry                          60
>>     vfs.zfs.arc_shrink_shift                        7
>>     vfs.zfs.arc_average_blocksize                   8192
>>     vfs.zfs.arc_no_grow_shift                       5
>>     vfs.zfs.arc_min                                 8202262528
>>     vfs.zfs.arc_max                                 39334498304
>>     vfs.zfs.abd_chunk_size                          4096
>>     vfs.zfs.abd_scatter_enabled                     1
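
For anyone who wants to experiment with the knobs above, many of them
can be set as loader tunables, e.g. in /boot/loader.conf (a sketch; the
values are purely illustrative, not recommendations):

  # /boot/loader.conf -- illustrative values only
  vfs.zfs.arc_max="40G"           # overall ARC ceiling
  vfs.zfs.arc_meta_limit="20G"    # portion of the ARC allowed to hold metadata

Loader tunables take effect at the next boot; if memory serves,
vfs.zfs.arc_meta_limit can also be adjusted at runtime with sysctl.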