Date:      Fri, 29 Nov 2024 15:49:27 +0100 (CET)
From:      Ronald Klop <ronald-lists@klop.ws>
To:        Dennis Clarke <dclarke@blastwave.org>
Cc:        Alan Somers <asomers@freebsd.org>, Current FreeBSD <freebsd-current@freebsd.org>
Subject:   Re: zpools no longer exist after boot
Message-ID:  <754754561.9245.1732891767670@localhost>
In-Reply-To: <22187e59-b6e9-4f2e-ba9b-f43944d1a37b@blastwave.org>
References:  <5798b0db-bc73-476a-908a-dd1f071bfe43@blastwave.org> <CAOtMX2hKCYrx92SBLQOtekKiBWMgBy_n93ZGQ_NVLq=6puRhOg@mail.gmail.com> <22187e59-b6e9-4f2e-ba9b-f43944d1a37b@blastwave.org>


From: Dennis Clarke <dclarke@blastwave.org>
Date: Thursday, 28 November 2024 15:45
To: Alan Somers <asomers@freebsd.org>
CC: Current FreeBSD <freebsd-current@freebsd.org>
Subject: Re: zpools no longer exist after boot
> 
> On 11/28/24 08:52, Alan Somers wrote:
> > On Thu, Nov 28, 2024, 7:06AM Dennis Clarke <dclarke@blastwave.org> wrote:
> >
> >>
> >> This is a baffling problem wherein two zpools no longer exist after
> >> boot. This is :
> .
> .
> .
> > Do you have zfs_enable="YES" set in /etc/rc.conf? If not then nothing will
> > get imported.
> >
> > Regarding the cachefile property, it's expected that "zpool import" will
> > change it, unless you do "zpool import -o cachefile=whatever".
> >
> 
> The rc script seems to do something slightly different; it uses zpool import -c $cachefile, thus :
> 
> 
> titan# cat  /etc/rc.d/zpool
> #!/bin/sh
> #
> #
> 
> # PROVIDE: zpool
> # REQUIRE: hostid disks
> # BEFORE: mountcritlocal
> # KEYWORD: nojail
> 
> . /etc/rc.subr
> 
> name="zpool"
> desc="Import ZPOOLs"
> rcvar="zfs_enable"
> start_cmd="zpool_start"
> required_modules="zfs"
> 
> zpool_start()
> {
>          local cachefile
> 
>          for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
>                  if [ -r $cachefile ]; then
>                          zpool import -c $cachefile -a -N
>                          if [ $? -ne 0 ]; then
>                                  echo "Import of zpool cache ${cachefile} failed," \
>                                      "will retry after root mount hold release"
>                                  root_hold_wait
>                                  zpool import -c $cachefile -a -N
>                          fi
>                          break
>                  fi
>          done
> }
> 
> load_rc_config $name
> run_rc_command "$1"
> titan#
> 
> 
> 
> I may as well nuke the pre-existing cache file and start over :
> 
> 
> titan# ls -l /etc/zfs/zpool.cache /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 1424 Jan 16  2024 /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 4960 Nov 28 14:15 /etc/zfs/zpool.cache
> titan#
> titan#
> titan# rm /boot/zfs/zpool.cache
> titan# zpool set cachefile="/boot/zfs/zpool.cache" t0
> titan#
> titan# ls -l /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 1456 Nov 28 14:27 /boot/zfs/zpool.cache
> titan#
> titan# zpool set cachefile="/boot/zfs/zpool.cache" leaf
> titan#
> titan# ls -l /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 3536 Nov 28 14:28 /boot/zfs/zpool.cache
> titan#
> titan# zpool set cachefile="/boot/zfs/zpool.cache" proteus
> titan#
> titan# ls -l /boot/zfs/zpool.cache
> -rw-r--r--  1 root wheel 4960 Nov 28 14:28 /boot/zfs/zpool.cache
> titan#
> titan# zpool get cachefile t0
> NAME  PROPERTY   VALUE                  SOURCE
> t0    cachefile  /boot/zfs/zpool.cache  local
> titan#
> titan# zpool get cachefile leaf
> NAME  PROPERTY   VALUE                  SOURCE
> leaf  cachefile  /boot/zfs/zpool.cache  local
> titan#
> titan# zpool get cachefile proteus
> NAME     PROPERTY   VALUE                  SOURCE
> proteus  cachefile  /boot/zfs/zpool.cache  local
> titan#
> 
> titan#
> titan# reboot
> Nov 28 14:34:05 Waiting (max 60 seconds) for system process `vnlru' to stop... done
> Waiting (max 60 seconds) for system process `syncer' to stop...
> Syncing disks, vnodes remaining... 0 0 0 0 0 0 done
> All buffers synced.
> Uptime: 2h38m57s
> GEOM_MIRROR: Device swap: provider destroyed.
> GEOM_MIRROR: Device swap destroyed.
> uhub5: detached
> uhub1: detached
> uhub4: detached
> uhub2: detached
> uhub3: detached
> uhub6: detached
> uhub0: detached
> ix0: link state changed to DOWN
> .
> .
> .
> 
> Starting iscsid.
> Starting iscsictl.
> Clearing /tmp.
> Updating /var/run/os-release done.
> Updating motd:.
> Creating and/or trimming log files.
> Starting syslogd.
> No core dumps found.
> Starting local daemons:failed to open cache file: No such file or directory
> .
> Starting ntpd.
> Starting powerd.
> Mounting late filesystems:.
> Starting cron.
> Performing sanity check on sshd configuration.
> Starting sshd.
> Starting background file system
> FreeBSD/amd64 (titan) (ttyu0)
> 
> login: root
> Password:
> Nov 28 14:36:29 titan login[4162]: ROOT LOGIN (root) ON ttyu0
> Last login: Thu Nov 28 14:33:45 on ttyu0
> FreeBSD 15.0-CURRENT (GENERIC-NODEBUG) #1 main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024
> 
> Welcome to FreeBSD!
> 
> Release Notes, Errata: https://www.FreeBSD.org/releases/
> Security Advisories:   https://www.FreeBSD.org/security/
> FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
> FreeBSD FAQ:           https://www.FreeBSD.org/faq/
> Questions List:        https://www.FreeBSD.org/lists/questions/
> FreeBSD Forums:        https://forums.FreeBSD.org/
> 
> Documents installed with the system are in the /usr/local/share/doc/freebsd/
> directory, or can be installed later with:  pkg install en-freebsd-doc
> For other languages, replace "en" with a language code like de or fr.
> 
> Show the version of FreeBSD installed:  freebsd-version ; uname -a
> Please include that output and any error messages when posting questions.
> Introduction to manual pages:  man man
> FreeBSD directory layout:      man hier
> 
> To change this login announcement, see motd(5).
> You have new mail.
> titan#
> titan# zpool list
> NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
> leaf     18.2T   984K  18.2T        -         -     0%     0%  1.00x ONLINE  -
> proteus  1.98T   361G  1.63T        -         -     1%    17%  1.00x ONLINE  -
> t0        444G  91.2G   353G        -         -    27%    20%  1.00x ONLINE  -
> titan#
> 
> This is progress ... however the cachefile property is wiped out again :
> 
> titan# zpool get cachefile t0
> NAME  PROPERTY   VALUE      SOURCE
> t0    cachefile  -          default
> titan# zpool get cachefile leaf
> NAME  PROPERTY   VALUE      SOURCE
> leaf  cachefile  -          default
> titan# zpool get cachefile proteus
> NAME     PROPERTY   VALUE      SOURCE
> proteus  cachefile  -          default
> titan#
> 
> Also, strangely, none of the filesystems in proteus are mounted :
> 
> titan#
> titan# zfs list -o name,exec,checksum,canmount,mounted,mountpoint -r proteus
> NAME                EXEC  CHECKSUM   CANMOUNT  MOUNTED  MOUNTPOINT
> proteus             on    sha512     on        no       none
> proteus/bhyve       off   sha512     on        no       /bhyve
> proteus/bhyve/disk  off   sha512     on        no       /bhyve/disk
> proteus/bhyve/isos  off   sha512     on        no       /bhyve/isos
> proteus/obj         on    sha512     on        no       /usr/obj
> proteus/src         on    sha512     on        no       /usr/src
> titan#
> 
> If I reboot again without doing anything will the zpools re-appear ?
> 
> 
> titan#
> titan# Nov 28 14:37:08 titan su[4199]: admsys to root on /dev/pts/0
> 
> titan# reboot
> Nov 28 14:40:29 Waiting (max 60 seconds) for system process `vnlru' to stop... done
> Waiting (max 60 seconds) for system process `syncer' to stop...
> Syncing disks, vnodes remaining... 0 0 0 0 0 done
> All buffers synced.
> Uptime: 4m50s
> GEOM_MIRROR: Device swap: provider destroyed.
> GEOM_MIRROR: Device swap destroyed.
> uhub4: detached
> uhub1: detached
> uhub5: detached
> uhub0: detached
> uhub3: detached
> uhub6: detached
> uhub2: detached
> ix0: link state changed to DOWN
> .
> .
> .
> Starting iscsid.
> Starting iscsictl.
> Clearing /tmp.
> Updating /var/run/os-release done.
> Updating motd:.
> Creating and/or trimming log files.
> Starting syslogd.
> No core dumps found.
> Starting local daemons:failed to open cache file: No such file or directory
> .
> Starting ntpd.
> Starting powerd.
> Mounting late filesystems:.
> Starting cron.
> Performing sanity check on sshd configuration.
> Starting sshd.
> Starting background file system
> FreeBSD/amd64 (titan) (ttyu0)
> 
> login: root
> Password:
> Nov 28 14:43:01 titan login[4146]: ROOT LOGIN (root) ON ttyu0
> Last login: Thu Nov 28 14:36:29 on ttyu0
> FreeBSD 15.0-CURRENT (GENERIC-NODEBUG) #1 main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024
> 
> Welcome to FreeBSD!
> 
> Release Notes, Errata: https://www.FreeBSD.org/releases/
> Security Advisories:   https://www.FreeBSD.org/security/
> FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
> FreeBSD FAQ:           https://www.FreeBSD.org/faq/
> Questions List:        https://www.FreeBSD.org/lists/questions/
> FreeBSD Forums:        https://forums.FreeBSD.org/
> 
> Documents installed with the system are in the /usr/local/share/doc/freebsd/
> directory, or can be installed later with:  pkg install en-freebsd-doc
> For other languages, replace "en" with a language code like de or fr.
> 
> Show the version of FreeBSD installed:  freebsd-version ; uname -a
> Please include that output and any error messages when posting questions.
> Introduction to manual pages:  man man
> FreeBSD directory layout:      man hier
> 
> To change this login announcement, see motd(5).
> You have new mail.
> titan#
> titan# zpool list
> NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
> leaf     18.2T  1.01M  18.2T        -         -     0%     0%  1.00x ONLINE  -
> proteus  1.98T   361G  1.63T        -         -     1%    17%  1.00x ONLINE  -
> t0        444G  91.2G   353G        -         -    27%    20%  1.00x ONLINE  -
> titan#
> titan# zfs list -o name,exec,checksum,canmount,mounted,mountpoint -r proteus
> NAME                EXEC  CHECKSUM   CANMOUNT  MOUNTED  MOUNTPOINT
> proteus             on    sha512     on        no       none
> proteus/bhyve       off   sha512     on        no       /bhyve
> proteus/bhyve/disk  off   sha512     on        no       /bhyve/disk
> proteus/bhyve/isos  off   sha512     on        no       /bhyve/isos
> proteus/obj         on    sha512     on        no       /usr/obj
> proteus/src         on    sha512     on        no       /usr/src
> titan#
> 
> Okay, so the zpools appear to be back, in spite of the strange situation where the cachefile
> property is empty everywhere.  My guess is the zpool rc script is bringing in that information during early boot.
> 
> Why do the zfs filesystems on proteus not mount? That is a strange problem, but at least the zpool can be used.
> 
> -- 
> Dennis Clarke
> RISC-V/SPARC/PPC/ARM/CISC
> UNIX and Linux spoken
> 
>  
> 
> 
> 


Hi,

The output you provide contains this line:
"Starting local daemons:failed to open cache file: No such file or directory"

Where does that output come from? What is in your /etc/rc.local file?
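As far as I know, the "Starting local daemons:" line is printed by /etc/rc.d/local just before it runs /etc/rc.local, so the error text most likely comes from a command in that file. One quick way to track it down is to grep the usual rc locations for the literal message (the paths below are the stock FreeBSD ones; note that if the text is emitted by a program the script starts, rather than appearing in the script itself, grep will not find it):

```shell
#!/bin/sh
# Search the stock rc locations for the literal error text.
# /etc/rc.local is run during the "Starting local daemons:" phase;
# /usr/local/etc/rc.d holds third-party rc scripts from packages.
grep -rn "failed to open cache file" \
    /etc/rc.local /etc/rc.d /usr/local/etc/rc.d 2>/dev/null || true
```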

Regards,
Ronald.
 



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?754754561.9245.1732891767670>