Date:      Mon, 22 Feb 2016 17:58:11 +0000
From:      Matthew Seaman <matthew@FreeBSD.org>
To:        freebsd-questions@freebsd.org
Subject:   Re: ZFS: i/o error - all block copies unavailable
Message-ID:  <56CB4C33.2030109@FreeBSD.org>
In-Reply-To: <5C208714-5117-4089-A872-85A6375856B7@langille.org>
References:  <5C208714-5117-4089-A872-85A6375856B7@langille.org>

On 2016/02/22 17:41, Dan Langille wrote:
> I have a FreeBSD 10.2 (with freebsd-update applied) system at home which cannot boot. The message is:
>
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool system
> gptzfsboot: failed to mount default pool system

This always used to indicate problems with /boot/zfs/zpool.cache being
inconsistent.  However, my understanding is that ZFS should be able to
cope with an inconsistent zpool.cache nowadays.

The trick there was to boot from some other media, export the pool and
then import it again.
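
If you want to try that anyway from a rescue environment, the rough
sequence would be something like the following (assuming the pool is
still named 'system'; the altroot path is just an example):

  # booted from mfsBSD or a FreeBSD install image
  zpool import -f -o altroot=/mnt system   # force-import the pool
  zpool export system                      # export it cleanly again
  # then reboot from the original disks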

> The screen shot is https://twitter.com/DLangille/status/701611716614946816
>
> The zpool name is 'system'.
>
> I booted the box via mfsBSD thumb drive, and was able to import the zpool: https://gist.github.com/dlangille/6da065e309301196b9cd

... which means all the zpool.cache stuff above isn't going to help.

> I have also run: "gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 XXX" against each drive. I did this with the files provided with mfsBSD and with the files from my local 10.2 system.  Neither solution changed the booting problem.
>
> Ideas?  Suggestions?
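
For reference, re-writing the boot blocks as you describe would look
roughly like this for each disk in the pool (ada0 is only a placeholder
for whatever your device names are, and -i 1 assumes the freebsd-boot
partition is index 1, as in your command):

  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0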

Is this mirrored or RAIDZx?  If it's mirrored, you might be able to:

  - split your existing zpool (leaves it without redundancy)
  - on the half of your drives removed from the existing zpool,
    create a new zpool (again, without redundancy)
  - do a zfs send | zfs receive to copy all your data into the
    new zpool
  - boot from the new zpool
  - deconfigure the old zpool, and add the drives to the new zpool
    to make it fully redundant again
  - wait for lots of resilvering to complete

However, this really only works if the pool is mirrored throughout.
RAIDZ users will be out of luck.
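
Purely as a sketch, and assuming a two-disk mirror named 'system' on
ada0p3/ada1p3 with a boot dataset along the lines of ROOT/default (all
of those names are placeholders for your actual layout), the steps
above might look like:

  zpool detach system ada1p3                        # drop one half of the mirror
  zpool create -f -o altroot=/mnt newsystem ada1p3  # new pool on the freed disk
  zfs snapshot -r system@migrate
  zfs send -R system@migrate | zfs recv -Fdu newsystem
  zpool set bootfs=newsystem/ROOT/default newsystem # match whatever your old bootfs was
  # re-run the gpart bootcode step, boot from the new pool, then:
  zpool destroy system
  zpool attach newsystem ada1p3 ada0p3              # restore redundancy; resilvering follows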

	Cheers,

	Matthew



