Subject: Re: zpools no longer exist after boot
Date: Thu, 28 Nov 2024 09:26:17 -0500
From: Dennis Clarke <dclarke@blastwave.org>
To: Juraj Lutter
Cc: freebsd-current@freebsd.org

On 11/28/24 09:10, Juraj Lutter wrote:
>
> Are there any differences in each pool’s properties? (zpool get all …)
>

Well, they are all different. There is a pool called leaf, which is a mirror of two disks across two SATA/SAS backplanes. There is proteus, which *was* working fine over iSCSI. Then there is the local pool t0 on the little local Samsung NVMe device, which thankfully still exists or the machine would likely not boot at all.
titan# zpool list leaf
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
leaf  18.2T   900K  18.2T        -         -     0%     0%  1.00x    ONLINE  -
titan#
titan# zpool status leaf
  pool: leaf
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        leaf        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0

errors: No known data errors
titan#
titan# zpool get all leaf
NAME  PROPERTY                       VALUE                  SOURCE
leaf  size                           18.2T                  -
leaf  capacity                       0%                     -
leaf  altroot                        -                      default
leaf  health                         ONLINE                 -
leaf  guid                           227678941907208615     -
leaf  version                        -                      default
leaf  bootfs                         -                      default
leaf  delegation                     on                     default
leaf  autoreplace                    off                    default
leaf  cachefile                      -                      default
leaf  failmode                       continue               local
leaf  listsnapshots                  off                    default
leaf  autoexpand                     off                    default
leaf  dedupratio                     1.00x                  -
leaf  free                           18.2T                  -
leaf  allocated                      900K                   -
leaf  readonly                       off                    -
leaf  ashift                         0                      default
leaf  comment                        -                      default
leaf  expandsize                     -                      -
leaf  freeing                        0                      -
leaf  fragmentation                  0%                     -
leaf  leaked                         0                      -
leaf  multihost                      off                    default
leaf  checkpoint                     -                      -
leaf  load_guid                      6926439177379939855    -
leaf  autotrim                       off                    default
leaf  compatibility                  openzfs-2.0-freebsd    local
leaf  bcloneused                     0                      -
leaf  bclonesaved                    0                      -
leaf  bcloneratio                    1.00x                  -
leaf  dedup_table_size               0                      -
leaf  dedup_table_quota              auto                   default
leaf  feature@async_destroy          enabled                local
leaf  feature@empty_bpobj            enabled                local
leaf  feature@lz4_compress           active                 local
leaf  feature@multi_vdev_crash_dump  enabled                local
leaf  feature@spacemap_histogram     active                 local
leaf  feature@enabled_txg            active                 local
leaf  feature@hole_birth             active                 local
leaf  feature@extensible_dataset     active                 local
leaf  feature@embedded_data          active                 local
leaf  feature@bookmarks              enabled                local
leaf  feature@filesystem_limits      enabled                local
leaf  feature@large_blocks           enabled                local
leaf  feature@large_dnode            enabled                local
leaf  feature@sha512                 enabled                local
leaf  feature@skein                  enabled                local
leaf  feature@edonr                  disabled               local
leaf  feature@userobj_accounting     enabled                local
leaf  feature@encryption             enabled                local
leaf  feature@project_quota          enabled                local
leaf  feature@device_removal         enabled                local
leaf  feature@obsolete_counts        enabled                local
leaf  feature@zpool_checkpoint       enabled                local
leaf  feature@spacemap_v2            active                 local
leaf  feature@allocation_classes     enabled                local
leaf  feature@resilver_defer         enabled                local
leaf  feature@bookmark_v2            enabled                local
leaf  feature@redaction_bookmarks    enabled                local
leaf  feature@redacted_datasets      enabled                local
leaf  feature@bookmark_written       enabled                local
leaf  feature@log_spacemap           active                 local
leaf  feature@livelist               enabled                local
leaf  feature@device_rebuild         enabled                local
leaf  feature@zstd_compress          active                 local
leaf  feature@draid                  disabled               local
leaf  feature@zilsaxattr             disabled               local
leaf  feature@head_errlog            disabled               local
leaf  feature@blake3                 disabled               local
leaf  feature@block_cloning          disabled               local
leaf  feature@vdev_zaps_v2           disabled               local
leaf  feature@redaction_list_spill   disabled               local
leaf  feature@raidz_expansion        disabled               local
leaf  feature@fast_dedup             disabled               local
leaf  feature@longname               disabled               local
leaf  feature@large_microzap         disabled               local
titan#

Nothing of interest there other than the blank cachefile, which I cannot set to anything. At least, it seems to reject my attempts to set it.
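For the record, what I have been attempting is roughly the following (a sketch from memory, not an exact transcript; /etc/zfs/zpool.cache is the stock FreeBSD location):

    # point the pool at the standard cache file, then read the property back
    zpool set cachefile=/etc/zfs/zpool.cache leaf
    zpool get cachefile leaf

My understanding is that after an explicit set the property should read back as /etc/zfs/zpool.cache with SOURCE local, rather than "-" with SOURCE default.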
titan#
titan# zpool status proteus
  pool: proteus
 state: ONLINE
  scan: scrub repaired 0B in 00:53:43 with 0 errors on Mon Jul  1 18:56:34 2024
config:

        NAME        STATE     READ WRITE CKSUM
        proteus     ONLINE       0     0     0
          da0p1     ONLINE       0     0     0

errors: No known data errors
titan#
titan# camcontrol devlist | grep 'FREEBSD'
                     at scbus8 target 0 lun 0 (da0,pass5)
titan#
titan# zpool get all proteus
NAME     PROPERTY                       VALUE                  SOURCE
proteus  size                           1.98T                  -
proteus  capacity                       17%                    -
proteus  altroot                        -                      default
proteus  health                         ONLINE                 -
proteus  guid                           4488185358894371950    -
proteus  version                        -                      default
proteus  bootfs                         -                      default
proteus  delegation                     on                     default
proteus  autoreplace                    on                     local
proteus  cachefile                      -                      default
proteus  failmode                       continue               local
proteus  listsnapshots                  off                    default
proteus  autoexpand                     off                    default
proteus  dedupratio                     1.00x                  -
proteus  free                           1.63T                  -
proteus  allocated                      361G                   -
proteus  readonly                       off                    -
proteus  ashift                         0                      default
proteus  comment                        -                      default
proteus  expandsize                     -                      -
proteus  freeing                        0                      -
proteus  fragmentation                  1%                     -
proteus  leaked                         0                      -
proteus  multihost                      off                    default
proteus  checkpoint                     -                      -
proteus  load_guid                      3646341449300914421    -
proteus  autotrim                       off                    default
proteus  compatibility                  openzfs-2.0-freebsd    local
proteus  bcloneused                     0                      -
proteus  bclonesaved                    0                      -
proteus  bcloneratio                    1.00x                  -
proteus  dedup_table_size               0                      -
proteus  dedup_table_quota              auto                   default
proteus  feature@async_destroy          enabled                local
proteus  feature@empty_bpobj            active                 local
proteus  feature@lz4_compress           active                 local
proteus  feature@multi_vdev_crash_dump  enabled                local
proteus  feature@spacemap_histogram     active                 local
proteus  feature@enabled_txg            active                 local
proteus  feature@hole_birth             active                 local
proteus  feature@extensible_dataset     active                 local
proteus  feature@embedded_data          active                 local
proteus  feature@bookmarks              enabled                local
proteus  feature@filesystem_limits      enabled                local
proteus  feature@large_blocks           enabled                local
proteus  feature@large_dnode            enabled                local
proteus  feature@sha512                 active                 local
proteus  feature@skein                  enabled                local
proteus  feature@edonr                  disabled               local
proteus  feature@userobj_accounting     active                 local
proteus  feature@encryption             enabled                local
proteus  feature@project_quota          active                 local
proteus  feature@device_removal         enabled                local
proteus  feature@obsolete_counts        enabled                local
proteus  feature@zpool_checkpoint       enabled                local
proteus  feature@spacemap_v2            active                 local
proteus  feature@allocation_classes     enabled                local
proteus  feature@resilver_defer         enabled                local
proteus  feature@bookmark_v2            enabled                local
proteus  feature@redaction_bookmarks    enabled                local
proteus  feature@redacted_datasets      enabled                local
proteus  feature@bookmark_written       enabled                local
proteus  feature@log_spacemap           active                 local
proteus  feature@livelist               enabled                local
proteus  feature@device_rebuild         enabled                local
proteus  feature@zstd_compress          active                 local
proteus  feature@draid                  disabled               local
proteus  feature@zilsaxattr             disabled               local
proteus  feature@head_errlog            disabled               local
proteus  feature@blake3                 disabled               local
proteus  feature@block_cloning          disabled               local
proteus  feature@vdev_zaps_v2           disabled               local
proteus  feature@redaction_list_spill   disabled               local
proteus  feature@raidz_expansion        disabled               local
proteus  feature@fast_dedup             disabled               local
proteus  feature@longname               disabled               local
proteus  feature@large_microzap         disabled               local
titan#

Again here we see cachefile is blank.
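One more thing I could look at is what the cache file itself claims to contain, for example with zdb (a sketch only, I have not run this yet):

    # dump the cached pool configurations straight from the on-disk cache file
    zdb -C -U /etc/zfs/zpool.cache

If leaf and proteus do not show up in that output, then the cache file is simply not being updated when those pools are imported, which would explain why they vanish across a reboot.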
Lastly, there is the little Samsung NVMe bootable device:

titan#
titan# zpool list t0
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
t0     444G  91.2G   353G        -         -    27%    20%  1.00x    ONLINE  -
titan#
titan# zpool status t0
  pool: t0
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:44 with 0 errors on Wed Feb  7 09:56:40 2024
config:

        NAME        STATE     READ WRITE CKSUM
        t0          ONLINE       0     0     0
          nda0p3    ONLINE       0     0     0

errors: No known data errors
titan#
titan# zpool get all t0
NAME  PROPERTY                       VALUE                  SOURCE
t0    size                           444G                   -
t0    capacity                       20%                    -
t0    altroot                        -                      default
t0    health                         ONLINE                 -
t0    guid                           2604455524152494878    -
t0    version                        -                      default
t0    bootfs                         t0/ROOT/default        local
t0    delegation                     on                     default
t0    autoreplace                    off                    default
t0    cachefile                      -                      default
t0    failmode                       wait                   default
t0    listsnapshots                  off                    default
t0    autoexpand                     off                    default
t0    dedupratio                     1.00x                  -
t0    free                           353G                   -
t0    allocated                      91.2G                  -
t0    readonly                       off                    -
t0    ashift                         0                      default
t0    comment                        -                      default
t0    expandsize                     -                      -
t0    freeing                        0                      -
t0    fragmentation                  27%                    -
t0    leaked                         0                      -
t0    multihost                      off                    default
t0    checkpoint                     -                      -
t0    load_guid                      5797689675549497497    -
t0    autotrim                       off                    default
t0    compatibility                  off                    default
t0    bcloneused                     12K                    -
t0    bclonesaved                    12K                    -
t0    bcloneratio                    2.00x                  -
t0    dedup_table_size               0                      -
t0    dedup_table_quota              auto                   default
t0    feature@async_destroy          enabled                local
t0    feature@empty_bpobj            active                 local
t0    feature@lz4_compress           active                 local
t0    feature@multi_vdev_crash_dump  enabled                local
t0    feature@spacemap_histogram     active                 local
t0    feature@enabled_txg            active                 local
t0    feature@hole_birth             active                 local
t0    feature@extensible_dataset     active                 local
t0    feature@embedded_data          active                 local
t0    feature@bookmarks              enabled                local
t0    feature@filesystem_limits      enabled                local
t0    feature@large_blocks           enabled                local
t0    feature@large_dnode            enabled                local
t0    feature@sha512                 active                 local
t0    feature@skein                  enabled                local
t0    feature@edonr                  enabled                local
t0    feature@userobj_accounting     active                 local
t0    feature@encryption             enabled                local
t0    feature@project_quota          active                 local
t0    feature@device_removal         enabled                local
t0    feature@obsolete_counts        enabled                local
t0    feature@zpool_checkpoint       enabled                local
t0    feature@spacemap_v2            active                 local
t0    feature@allocation_classes     enabled                local
t0    feature@resilver_defer         enabled                local
t0    feature@bookmark_v2            enabled                local
t0    feature@redaction_bookmarks    enabled                local
t0    feature@redacted_datasets      enabled                local
t0    feature@bookmark_written       enabled                local
t0    feature@log_spacemap           active                 local
t0    feature@livelist               enabled                local
t0    feature@device_rebuild         enabled                local
t0    feature@zstd_compress          active                 local
t0    feature@draid                  enabled                local
t0    feature@zilsaxattr             active                 local
t0    feature@head_errlog            active                 local
t0    feature@blake3                 enabled                local
t0    feature@block_cloning          active                 local
t0    feature@vdev_zaps_v2           active                 local
t0    feature@redaction_list_spill   enabled                local
t0    feature@raidz_expansion        enabled                local
t0    feature@fast_dedup             disabled               local
t0    feature@longname               disabled               local
t0    feature@large_microzap         disabled               local
titan#

There is nothing of interest in the properties other than the absent cachefile setting.
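As an aside, t0 keeps nagging about 'zpool upgrade'. I am leaving that alone for now, since it is the boot pool; as I understand it (sketch only, the partition index below is hypothetical for this box), upgrading it would also mean refreshing the boot code afterwards:

    # enable all supported features on the boot pool
    zpool upgrade t0
    # then refresh the boot blocks; for legacy (non-EFI) boot, roughly:
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i <freebsd-boot index> nda0
    # on an EFI system, copy the new /boot/loader.efi onto the ESP instead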
However, I guess I could try to delete the previous cache files referenced in /etc/rc.d/zpool:

titan# cat /etc/rc.d/zpool
#!/bin/sh
#
#

# PROVIDE: zpool
# REQUIRE: hostid disks
# BEFORE: mountcritlocal
# KEYWORD: nojail

. /etc/rc.subr

name="zpool"
desc="Import ZPOOLs"
rcvar="zfs_enable"
start_cmd="zpool_start"
required_modules="zfs"

zpool_start()
{
	local cachefile

	for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
		if [ -r $cachefile ]; then
			zpool import -c $cachefile -a -N
			if [ $? -ne 0 ]; then
				echo "Import of zpool cache ${cachefile} failed," \
				    "will retry after root mount hold release"
				root_hold_wait
				zpool import -c $cachefile -a -N
			fi
			break
		fi
	done
}

load_rc_config $name
run_rc_command "$1"
titan#
titan#
titan# ls -l /etc/zfs/zpool.cache /boot/zfs/zpool.cache
-rw-r--r--  1 root  wheel  1424 Jan 16  2024 /boot/zfs/zpool.cache
-rw-r--r--  1 root  wheel  4960 Nov 28 14:15 /etc/zfs/zpool.cache
titan#

May as well delete them. I have nothing to lose at this point.

--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken