Date:      Fri, 30 Dec 2016 18:38:02 -0600
From:      Mike Selner <mike@tela.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: ZFS crash on mountroot after removal of slog device
Message-ID:  <20161231003801.GC43835@spider3.tela.com>
In-Reply-To: <20161222012310.GA59045@spider3.tela.com>
References:  <20161222012310.GA59045@spider3.tela.com>

Update: no replies on this so far, so I don't know if anyone has suggestions.

I checked the history on the original pool: it was set up on FreeBSD 9.3 with two mirrored devices, ada0p3 and ada1p3. A few months later I added an identically sized mirror vdev with devices ada2p3 and ada3p3.
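For reference, the layout above could have been created with something like the following sketch. The device names are from the report; the pool name "zpool" is an assumption on my part:

```shell
# Hypothetical reconstruction of the original pool layout.
zpool create zpool mirror ada0p3 ada1p3   # initial two-way mirror (on 9.3)
zpool add zpool mirror ada2p3 ada3p3      # second mirror vdev, added later
zpool history zpool                       # this is how I checked the setup history
```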

I built a new system with a similar setup, including a SLOG on an SSD, then shut it down, unplugged the SLOG device to simulate a failure, and rebooted. The system came up fine, and zpool status showed a missing log device. I was able to remove the device with "zpool remove root devicename". No crashes.
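The test sequence on the new system looked roughly like this. "root" is the pool name I used; the log device name shown is a placeholder, not the actual gptid:

```shell
# Sketch of the SLOG failure test; device name is illustrative only.
zpool status root              # log device shows up as UNAVAIL after the reboot
zpool remove root gpt/slog0    # drop the missing log device from the pool
zpool status root              # pool reports a clean, logless configuration
```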

So I'm confident that removing a failed SLOG device with "zpool remove" should work in general.

Next I added a "znew" pool to the original system (running off a memstick). I piped "zfs send -R zpool@snapshot" into "zfs recv -d znew", then made znew/ROOT/default the bootable filesystem and was able to boot and run off of znew.
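For anyone following along, the migration boiled down to something like this. The pool and dataset names match what I described; the snapshot name is whatever you choose:

```shell
# Sketch of the recovery path: replicate the old pool into znew, then
# point the bootfs property at the copied root dataset.
zfs snapshot -r zpool@snapshot                  # recursive snapshot of the old pool
zfs send -R zpool@snapshot | zfs recv -d znew   # replicate everything into znew
zpool set bootfs=znew/ROOT/default znew         # make the copied root bootable
```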

I think this tells me that the problem on the original pool is some type of corruption, but that the data was recoverable with zfs send. At no point in this adventure did I have the opportunity to do any kind of rollback, so I'm not sure what else I could have done.

Still, I'm concerned that a device failure could render a production server unusable. 

Full details at https://forums.freebsd.org/threads/59006/.

Thanks for any suggestions on recovering a zpool that crashes when mounting root.



