Date:      Mon, 22 Feb 2016 13:03:22 -0500
From:      Dan Langille <dan@langille.org>
To:        Matthew Seaman <matthew@FreeBSD.org>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: ZFS: i/o error - all block copies unavailable
Message-ID:  <C3CE892B-1E97-4FA4-ACEA-C741A643CD0A@langille.org>
In-Reply-To: <56CB4C33.2030109@FreeBSD.org>
References:  <5C208714-5117-4089-A872-85A6375856B7@langille.org> <56CB4C33.2030109@FreeBSD.org>


> On Feb 22, 2016, at 12:58 PM, Matthew Seaman <matthew@FreeBSD.org> wrote:
> 
> On 2016/02/22 17:41, Dan Langille wrote:
>> I have a FreeBSD 10.2 (with freebsd-update applied) system at home which cannot boot. The message is:
>> 
>> ZFS: i/o error - all block copies unavailable
>> ZFS: can't read MOS of pool system
>> gptzfsboot: failed to mount default pool system
> 
> This always used to indicate problems with /boot/zfs/zpool.cache being
> inconsistent.  However, my understanding is that ZFS should be able to
> cope with an inconsistent zpool.cache nowadays.
> 
> The trick there was to boot from some other media, export the pool and
> then import it again.
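
A minimal sketch of that export/import cycle from a rescue environment such
as mfsBSD (pool name 'system' as above; the -f flag and the altroot are only
needed because the pool was last used by the installed system):

    zpool import                      # confirm the pool is visible
    zpool import -f -R /mnt system    # import under an altroot
    zpool export system               # clean export rewrites the pool labels
    reboot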
> 
>> The screen shot is https://twitter.com/DLangille/status/701611716614946816
>> 
>> The zpool name is 'system'.
>> 
>> I booted the box via mfsBSD thumb drive, and was able to import the zpool: https://gist.github.com/dlangille/6da065e309301196b9cd
> 
> ... which means all the zpool.cache stuff above isn't going to help.
> 
>> I have also run: "gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 XXX" against each drive.  I did this with the files
>> provided with mfsBSD and with the files from my local 10.2 system.  Neither attempt changed the boot problem.
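
For illustration only: assuming six disks named da0 through da5 with the
freebsd-boot partition at index 1 on each (substitute the real device names
from 'gpart show'), that step is roughly:

    for disk in da0 da1 da2 da3 da4 da5; do
        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $disk
    done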
>> 
>> Ideas?  Suggestions?
> 
> Is this mirrored or RAIDZx?  If it's mirrored, you might be able to:
> 
>  - split your existing zpool (leaves it without redundancy)
>  - on the half of your drives removed from the existing zpool,
>    create a new zpool (again, without redundancy)
>  - do a zfs send | zfs receive to copy all your data into the
>    new zpool
>  - boot from the new zpool
>  - deconfigure the old zpool, and add the drives to the new zpool
>    to make it fully redundant again
>  - wait for lots of resilvering to complete
> 
> However, this really only works if the pool is mirrored throughout.
> RAIDZ users will be out of luck.
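
For the archives, a rough sketch of that mirrored-pool procedure follows; it
does not apply here, as noted below, and the pool name 'newsystem', the
device names, and the 'ROOT/default' dataset are made up for illustration:

    # detach one disk from each mirror vdev of the existing pool
    zpool detach system da1p3
    # create a new, non-redundant pool on the freed disk
    zpool create -R /mnt newsystem da1p3
    # copy every dataset across
    zfs snapshot -r system@migrate
    zfs send -R system@migrate | zfs receive -F -u -d newsystem
    # write bootcode to the new pool's disk (gpart bootcode, as above)
    # and point the loader at the new root dataset
    zpool set bootfs=newsystem/ROOT/default newsystem
    # after booting from newsystem: destroy the old pool and attach its
    # disks to re-create the mirrors, then wait for resilvering
    zpool destroy system
    zpool attach newsystem da1p3 da0p3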

It is raidz2.  There is a zpool status here: http://dan.langille.org/2013/08/18/knew/

