Date:      Fri, 1 Apr 2022 14:11:43 -0400
From:      Rick Summerhill <rrsum@summerhill.org>
To:        questions@freebsd.org
Subject:   Re: zpools disappearing on reboot with 13.1Beta3 (and now RC1)
Message-ID:  <32e33149-8013-9c5e-868b-0beef4c63889@summerhill.org>
In-Reply-To: <e0dadd98-8447-7d77-f6a0-75d26d744f5a@summerhill.org>
References:  <383af7d8-294f-bd43-9a4e-660b56261d3e@summerhill.org> <20220329210600.b4e405ae3a5006eceaaf44cb@sohara.org> <a353e5ed-2506-273f-d198-ffca99ad3657@summerhill.org> <e0dadd98-8447-7d77-f6a0-75d26d744f5a@summerhill.org>

On 4/1/22 2:01 PM, Rick Summerhill wrote:
> On 3/29/22 6:20 PM, R Richard Summerhill wrote:
>> On 3/29/22 16:06, Steve O'Hara-Smith wrote:
>>> On Tue, 29 Mar 2022 15:37:19 -0400
>>> Rick Summerhill <rrsum@summerhill.org> wrote:
>>>
>>>> I'm running a server under 13.1Beta3 and it has 2 zpools, one a mirror
>>>> and the other a raidz1.  When I reboot the server, both pools disappear
>>>> but come back when imported.  Do I have something set wrong on the 
>>>> server?
>>>     Does your /etc/rc.conf contain
>>>
>>> zfs_enable="YES"
>>
>>
>> Yes, it does.  Let me give you a bit more info.  I rebuilt 13.1Beta3 
>> with just the mirror pool.  Rebooted several times to test everything 
>> and it came back correctly each time.  Then I added one of these 5x1 
>> port multiplier arrays to an esata port (this is one of my test boxes) 
>> and rebooted.  All the disks were there (they are all labeled 
>> appropriately), and the mirror zpool (which uses the labels) was not 
>> there.  Importing it, however, brought it back.  Then I created the 
>> raidz1 pool on the multiplier array.  Both pools showed up with a good 
>> status.  On rebooting, the disks are there and the pools are not, but 
>> on importing, both come back.
>>
>> Note that the multiplier array has been working fine under 13.0-RELEASE 
>> and ZFS.
>>
>>
> 
> I upgraded to 13.1RC1 and the condition persists.  Also, dmesg shows
> 
> ZFS filesystem version: 5
> ZFS storage pool version: features support (5000)
> pid 49 (zpool), jid 0, uid 0: exited on signal 6
> 
> and the disks for the pools appear after that in dmesg.
> 
> I also tried adding the following lines to loader.conf
> 
> kern.cam.boot_delay="5000"  # Delay (in ms) of root mount for CAM bus
> kern.cam.scsi_delay="2000"  # Delay (in ms) before probing SCSI
> 
> That did not solve the problem, however.  Any thoughts?
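
In case it helps the diagnosis: the zpool process dying on signal 6 during boot suggests the pools never make it out of the boot-time cache import. A quick sketch for inspecting the cache file (the paths below are the customary FreeBSD locations; that they apply to this install is an assumption, not something confirmed in the thread):

```shell
#!/bin/sh
# Sketch: report which pool cache files exist and, where the ZFS tools
# are installed, dump the pool names they record.
# Paths are the usual FreeBSD locations (assumption).
for f in /boot/zfs/zpool.cache /etc/zfs/zpool.cache; do
    if [ -e "$f" ]; then
        echo "cache file found: $f"
        # zdb -C -U <file> prints the cached pool configuration
        command -v zdb >/dev/null 2>&1 && zdb -C -U "$f" | grep 'name:'
    else
        echo "no cache file at $f"
    fi
done
```

If the pool names show up in the cache yet the pools are still absent after boot, that points at the import step crashing rather than at a stale cache.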

One other thing I should have pointed out: the system disks are on nvd0, 
and the line right before the ZFS lines in dmesg says:

Trying to mount root from ufs:/dev/nvd0p2 [rw]...
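
If this turns out to be a timing race with the esata/port-multiplier disks, a stopgap (not a fix) is to re-import everything late in boot from /etc/rc.local; zpool import -a and the -f force flag are standard, but treating rc.local as the right place for it is my assumption:

```shell
# /etc/rc.local (stopgap, not a fix): re-import any pools the normal
# zfs rc scripts failed to bring up. rc.local runs late in boot, after
# the port-multiplier disks should have settled.
zpool import -a -f 2>/dev/null || true
```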

Rick

-- 
Rick Summerhill
Retired, Chief Technology Officer, Internet2
10233 Timberhill Rd
Manchester, MI 48158 USA

Home: 734-428-1422
Web:  http://www.rick.summerhill.org
