Date: Tue, 24 Jul 2007 17:36:39 +0200
From: Pawel Jakub Dawidek <pjd@FreeBSD.org>
To: Sverre Svenningsen <ss.alert@online.no>
Cc: freebsd-current@freebsd.org
Subject: Re: (ZFS) zpool replace weirdness
Message-ID: <20070724153639.GB12473@garage.freebsd.pl>
In-Reply-To: <1736829882.20070723231753@online.no>
References: <20070719102302.R1534@rust.salford.ac.uk> <20070719135510.GE1194@garage.freebsd.pl> <20070719181313.G4923@rust.salford.ac.uk> <20070721065204.GA2044@garage.freebsd.pl> <1736829882.20070723231753@online.no>
On Mon, Jul 23, 2007 at 11:17:53PM +0200, Sverre Svenningsen wrote:
> I've been playing around with zfs for a bit, and ran into a problem
> where I corrupted an entire drive (on purpose) by way of
> dd if=/dev/urandom of=/dev/ad12 .. as expected, the zpool noticed:
>
> su-2.05b# zpool status
>   pool: array1
>  state: ONLINE
> status: One or more devices could not be used because the label is
>         missing or invalid.  Sufficient replicas exist for the pool to
>         continue functioning in a degraded state.
> action: Replace the device using 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-4J
>  scrub: resilver completed with 0 errors on Mon Jul 23 23:05:53 2007
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         array1      ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ad10    ONLINE       0     0     0
>             ad12    UNAVAIL      0     0     0  corrupted data
>             ad14    ONLINE       0     0     0
>             ad16    ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ad18    ONLINE       0     0     0
>             ad20    ONLINE       0     0     0
>             ad22    ONLINE       0     0     0
>             ad24    ONLINE       0     0     0
>
> errors: No known data errors
>
> Now I want to resilver that disk, but the problem is this:
>
> su-2.05b# zpool replace -f array1 ad12
> invalid vdev specification
> the following errors must be manually repaired:
> ad12 is in use (r1w1e1)
>
> but nothing is using that disk as far as I can tell! Has anyone
> successfully done this?

It just works here, but the version I'm using is not yet committed;
maybe there was a fix in OpenSolaris. You could try removing
/boot/zfs/zpool.cache.

> Would it be better to use slices instead of whole disks for zfs on
> FreeBSD? I want to get some experience with this so that I know
> what not to do when a disk breaks for real :)

Whole disks are fine.

--
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
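[Editor's note: the advice above can be sketched as a command sequence. This is an illustrative sketch only, not from the thread: the export/import steps are an assumption about how to release ZFS's r1w1e1 hold on ad12, and whether removing zpool.cache actually cleared the "in use" error depended on the ZFS version in use at the time.]

```shell
# Hypothetical recovery sequence (assumption, not confirmed in the thread).
# Idea: release the pool so nothing holds ad12 open, drop the stale
# cache file, re-import, then replace the overwritten disk in place.
zpool export array1            # close the pool's hold on all member disks
rm -f /boot/zfs/zpool.cache    # the cache removal suggested above
zpool import array1            # re-taste the disks and import the pool
zpool replace array1 ad12      # resilver onto the dd-corrupted disk
zpool status array1            # monitor resilver progress
```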