Date:      Thu, 28 Nov 2024 08:52:45 -0600
From:      Alan Somers <asomers@freebsd.org>
To:        Dennis Clarke <dclarke@blastwave.org>
Cc:        Alan Somers <asomers@freebsd.org>, Current FreeBSD <freebsd-current@freebsd.org>
Subject:   Re: zpools no longer exist after boot
Message-ID:  <CAOtMX2gdGWRfOa%2Bm9FctMNCVwDQ9GUE=vhxaEY_gorDFOU0fHg@mail.gmail.com>
In-Reply-To: <22187e59-b6e9-4f2e-ba9b-f43944d1a37b@blastwave.org>
References:  <5798b0db-bc73-476a-908a-dd1f071bfe43@blastwave.org> <CAOtMX2hKCYrx92SBLQOtekKiBWMgBy_n93ZGQ_NVLq=6puRhOg@mail.gmail.com> <22187e59-b6e9-4f2e-ba9b-f43944d1a37b@blastwave.org>


On Thu, Nov 28, 2024, 8:45 AM Dennis Clarke <dclarke@blastwave.org> wrote:

> On 11/28/24 08:52, Alan Somers wrote:
> > On Thu, Nov 28, 2024, 7:06 AM Dennis Clarke <dclarke@blastwave.org>
> wrote:
> >
> >>
> >> This is a baffling problem wherein two zpools no longer exist after
> >> boot. This is :
> .
> .
> .
> > Do you have zfs_enable="YES" set in /etc/rc.conf? If not then nothing
> will
> > get imported.
> >
> > Regarding the cachefile property, it's expected that "zpool import" will
> > change it, unless you do "zpool import -o cachefile=whatever".
> >
>
> The rc script seems to do something slightly different with zpool import
> -c $FOOBAR thus :
>
>
> titan# cat  /etc/rc.d/zpool
> #!/bin/sh
> #
> #
>
> # PROVIDE: zpool
> # REQUIRE: hostid disks
> # BEFORE: mountcritlocal
> # KEYWORD: nojail
>
> . /etc/rc.subr
>
> name="zpool"
> desc="Import ZPOOLs"
> rcvar="zfs_enable"
> start_cmd="zpool_start"
> required_modules="zfs"
>
> zpool_start()
> {
>          local cachefile
>
>          for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
>                  if [ -r $cachefile ]; then
>                          zpool import -c $cachefile -a -N
>                          if [ $? -ne 0 ]; then
>                                  echo "Import of zpool cache ${cachefile} failed," \
>                                      "will retry after root mount hold release"
>                                  root_hold_wait
>                                  zpool import -c $cachefile -a -N
>                          fi
>                          break
>                  fi
>          done
> }
>
> load_rc_config $name
> run_rc_command "$1"
> titan#
>
>
>
> I may as well nuke the pre-existing cache file and start over :
>
>
> titan# ls -l /etc/zfs/zpool.cache /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 1424 Jan 16  2024 /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 4960 Nov 28 14:15 /etc/zfs/zpool.cache
> titan#
> titan#
> titan# rm /boot/zfs/zpool.cache
> titan# zpool set cachefile="/boot/zfs/zpool.cache" t0
> titan#
> titan# ls -l /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 1456 Nov 28 14:27 /boot/zfs/zpool.cache
> titan#
> titan# zpool set cachefile="/boot/zfs/zpool.cache" leaf
> titan#
> titan# ls -l /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 3536 Nov 28 14:28 /boot/zfs/zpool.cache
> titan#
> titan# zpool set cachefile="/boot/zfs/zpool.cache" proteus
> titan#
> titan# ls -l /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 4960 Nov 28 14:28 /boot/zfs/zpool.cache
> titan#
> titan# zpool get cachefile t0
> NAME  PROPERTY   VALUE                  SOURCE
> t0    cachefile  /boot/zfs/zpool.cache  local
> titan#
> titan# zpool get cachefile leaf
> NAME  PROPERTY   VALUE                  SOURCE
> leaf  cachefile  /boot/zfs/zpool.cache  local
> titan#
> titan# zpool get cachefile proteus
> NAME     PROPERTY   VALUE                  SOURCE
> proteus  cachefile  /boot/zfs/zpool.cache  local
> titan#
>
> titan#
> titan# reboot
> Nov 28 14:34:05 Waiting (max 60 seconds) for system process `vnlru' to
> stop... done
> Waiting (max 60 seconds) for system process `syncer' to stop...
> Syncing disks, vnodes remaining... 0 0 0 0 0 0 done
> All buffers synced.
> Uptime: 2h38m57s
> GEOM_MIRROR: Device swap: provider destroyed.
> GEOM_MIRROR: Device swap destroyed.
> uhub5: detached
> uhub1: detached
> uhub4: detached
> uhub2: detached
> uhub3: detached
> uhub6: detached
> uhub0: detached
> ix0: link state changed to DOWN
> .
> .
> .
>
> Starting iscsid.
> Starting iscsictl.
> Clearing /tmp.
> Updating /var/run/os-release done.
> Updating motd:.
> Creating and/or trimming log files.
> Starting syslogd.
> No core dumps found.
> Starting local daemons:failed to open cache file: No such file or directory
> .
> Starting ntpd.
> Starting powerd.
> Mounting late filesystems:.
> Starting cron.
> Performing sanity check on sshd configuration.
> Starting sshd.
> Starting background file system
> FreeBSD/amd64 (titan) (ttyu0)
>
> login: root
> Password:
> Nov 28 14:36:29 titan login[4162]: ROOT LOGIN (root) ON ttyu0
> Last login: Thu Nov 28 14:33:45 on ttyu0
> FreeBSD 15.0-CURRENT (GENERIC-NODEBUG) #1
> main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024
>
> Welcome to FreeBSD!
>
> Release Notes, Errata: https://www.FreeBSD.org/releases/
> Security Advisories:   https://www.FreeBSD.org/security/
> FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
> FreeBSD FAQ:           https://www.FreeBSD.org/faq/
> Questions List:        https://www.FreeBSD.org/lists/questions/
> FreeBSD Forums:        https://forums.FreeBSD.org/
>
> Documents installed with the system are in the
> /usr/local/share/doc/freebsd/
> directory, or can be installed later with:  pkg install en-freebsd-doc
> For other languages, replace "en" with a language code like de or fr.
>
> Show the version of FreeBSD installed:  freebsd-version ; uname -a
> Please include that output and any error messages when posting questions.
> Introduction to manual pages:  man man
> FreeBSD directory layout:      man hier
>
> To change this login announcement, see motd(5).
> You have new mail.
> titan#
> titan# zpool list
> NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> leaf     18.2T   984K  18.2T        -         -     0%     0%  1.00x  ONLINE  -
> proteus  1.98T   361G  1.63T        -         -     1%    17%  1.00x  ONLINE  -
> t0        444G  91.2G   353G        -         -    27%    20%  1.00x  ONLINE  -
> titan#
>
> This is progress ... however the cachefile property is wiped out again :
>
> titan# zpool get cachefile t0
> NAME  PROPERTY   VALUE      SOURCE
> t0    cachefile  -          default
> titan# zpool get cachefile leaf
> NAME  PROPERTY   VALUE      SOURCE
> leaf  cachefile  -          default
> titan# zpool get cachefile proteus
> NAME     PROPERTY   VALUE      SOURCE
> proteus  cachefile  -          default
> titan#
>
> Also, strangely, none of the filesystems in proteus are mounted:
>
> titan#
> titan# zfs list -o name,exec,checksum,canmount,mounted,mountpoint -r proteus
> NAME                EXEC  CHECKSUM   CANMOUNT  MOUNTED  MOUNTPOINT
> proteus             on    sha512     on        no       none
> proteus/bhyve       off   sha512     on        no       /bhyve
> proteus/bhyve/disk  off   sha512     on        no       /bhyve/disk
> proteus/bhyve/isos  off   sha512     on        no       /bhyve/isos
> proteus/obj         on    sha512     on        no       /usr/obj
> proteus/src         on    sha512     on        no       /usr/src
> titan#
>
> If I reboot again without doing anything will the zpools re-appear ?
>
>
> titan#
> titan# Nov 28 14:37:08 titan su[4199]: admsys to root on /dev/pts/0
>
> titan# reboot
> Nov 28 14:40:29 Waiting (max 60 seconds) for system process `vnlru' to
> stop... done
> Waiting (max 60 seconds) for system process `syncer' to stop...
> Syncing disks, vnodes remaining... 0 0 0 0 0 done
> All buffers synced.
> Uptime: 4m50s
> GEOM_MIRROR: Device swap: provider destroyed.
> GEOM_MIRROR: Device swap destroyed.
> uhub4: detached
> uhub1: detached
> uhub5: detached
> uhub0: detached
> uhub3: detached
> uhub6: detached
> uhub2: detached
> ix0: link state changed to DOWN
> .
> .
> .
> Starting iscsid.
> Starting iscsictl.
> Clearing /tmp.
> Updating /var/run/os-release done.
> Updating motd:.
> Creating and/or trimming log files.
> Starting syslogd.
> No core dumps found.
> Starting local daemons:failed to open cache file: No such file or directory
> .
> Starting ntpd.
> Starting powerd.
> Mounting late filesystems:.
> Starting cron.
> Performing sanity check on sshd configuration.
> Starting sshd.
> Starting background file system
> FreeBSD/amd64 (titan) (ttyu0)
>
> login: root
> Password:
> Nov 28 14:43:01 titan login[4146]: ROOT LOGIN (root) ON ttyu0
> Last login: Thu Nov 28 14:36:29 on ttyu0
> FreeBSD 15.0-CURRENT (GENERIC-NODEBUG) #1
> main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024
>
> Welcome to FreeBSD!
>
> Release Notes, Errata: https://www.FreeBSD.org/releases/
> Security Advisories:   https://www.FreeBSD.org/security/
> FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
> FreeBSD FAQ:           https://www.FreeBSD.org/faq/
> Questions List:        https://www.FreeBSD.org/lists/questions/
> FreeBSD Forums:        https://forums.FreeBSD.org/
>
> Documents installed with the system are in the
> /usr/local/share/doc/freebsd/
> directory, or can be installed later with:  pkg install en-freebsd-doc
> For other languages, replace "en" with a language code like de or fr.
>
> Show the version of FreeBSD installed:  freebsd-version ; uname -a
> Please include that output and any error messages when posting questions.
> Introduction to manual pages:  man man
> FreeBSD directory layout:      man hier
>
> To change this login announcement, see motd(5).
> You have new mail.
> titan#
> titan# zpool list
> NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> leaf     18.2T  1.01M  18.2T        -         -     0%     0%  1.00x  ONLINE  -
> proteus  1.98T   361G  1.63T        -         -     1%    17%  1.00x  ONLINE  -
> t0        444G  91.2G   353G        -         -    27%    20%  1.00x  ONLINE  -
> titan#
> titan# zfs list -o name,exec,checksum,canmount,mounted,mountpoint -r proteus
> NAME                EXEC  CHECKSUM   CANMOUNT  MOUNTED  MOUNTPOINT
> proteus             on    sha512     on        no       none
> proteus/bhyve       off   sha512     on        no       /bhyve
> proteus/bhyve/disk  off   sha512     on        no       /bhyve/disk
> proteus/bhyve/isos  off   sha512     on        no       /bhyve/isos
> proteus/obj         on    sha512     on        no       /usr/obj
> proteus/src         on    sha512     on        no       /usr/src
> titan#
>
> Okay, so the zpools appear to be back in spite of the strange situation
> where the cachefile property is empty everywhere.  My guess is the zpool
> rc script is bringing in this information during early boot.
>
> Why do the zfs filesystems on proteus not mount? That is a strange
> problem, but at least the zpool can be used.
>
> --
> --
> Dennis Clarke
> RISC-V/SPARC/PPC/ARM/CISC
> UNIX and Linux spoken
>

For "zpool import", the "-c" argument tells zfs which cachefile to
search for importable pools. "-o cachefile=...", on the other hand,
specifies what the cachefile property should be set to after the pool
is imported.


