Date: Sat, 21 Feb 2015 12:21:53 +0100
From: Fabian Keil <freebsd-listen@fabiankeil.de>
To: freebsd-fs@freebsd.org
Subject: Re: panic: solaris assert: rt->rt_space == 0 (0xe000 == 0x0), file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c, line: 153
Message-ID: <48e5e0b3.6c036ece@fabiankeil.de>
In-Reply-To: <580853d0.0ab6eb7d@fabiankeil.de>
References: <04f3092d.6fdfad8a@fabiankeil.de> <580853d0.0ab6eb7d@fabiankeil.de>
Fabian Keil <freebsd-listen@fabiankeil.de> wrote:

> Fabian Keil <freebsd-listen@fabiankeil.de> wrote:
>
> > Using an 11.0-CURRENT based on r276255 I just got a panic
> > after trying to export a certain ZFS pool:
[...]
> > The export triggered the same panic again, but with a different
> > rt->rt_space value:
> >
> > panic: solaris assert: rt->rt_space == 0 (0x22800 == 0x0), file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c, line: 153
> >
> > I probably won't have time to scrub the pool and investigate this
> > further until next week.
>
> With this patch and vfs.zfs.recover=1 the pool can be exported without panic:
> https://www.fabiankeil.de/sourcecode/electrobsd/range_tree_destroy-Optionally-tolerate-non-zero-rt-r.diff
[...]
> Due to interruptions the scrubbing will probably take a couple of days.
> ZFS continues to complain about checksum errors but apparently no
> affected files have been found yet:

The results are finally in: OpenZFS found nothing to repair but continues
to complain about checksum errors, presumably in "<0xffffffffffffffff>:<0x0>",
which totally looks like a legit path that may affect my applications:

fk@r500 ~ $zogftw zpool status -v
2015-02-21 12:06:29 zogftw: Executing: zpool status -v wde4
  pool: wde4
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 134h55m with 0 errors on Sat Feb 21 12:00:47 2015
config:

	NAME              STATE     READ WRITE CKSUM
	wde4              ONLINE       0     0   795
	  label/wde4.eli  ONLINE       0     0 3.11K

errors: Permanent errors have been detected in the following files:

        <0xffffffffffffffff>:<0x0>

fk@r500 ~ $zogftw export
2015-02-21 12:07:03 zogftw: No zpool specified. Exporting all external ones: wde4
2015-02-21 12:07:03 zogftw: Exporting wde4
fk@r500 ~ $zogftw import
2015-02-21 12:07:13 zogftw: No pool name specified. Trying all unattached labels: wde4
2015-02-21 12:07:13 zogftw: Using geli keyfile /home/fk/.config/zogftw/geli/keyfiles/wde4.key
2015-02-21 12:07:25 zogftw: 'wde4' attached
2015-02-21 12:08:07 zogftw: 'wde4' imported
fk@r500 ~ $zogftw zpool status -v
2015-02-21 12:08:13 zogftw: Executing: zpool status -v wde4
  pool: wde4
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 134h55m with 0 errors on Sat Feb 21 12:00:47 2015
config:

	NAME              STATE     READ WRITE CKSUM
	wde4              ONLINE       0     0     9
	  label/wde4.eli  ONLINE       0     0    36

errors: Permanent errors have been detected in the following files:

        <0xffffffffffffffff>:<0x0>

Exporting the pool still triggers the sanity check in range_tree_destroy().

Fabian