Date: Fri, 2 Jun 2000 14:11:26 -0400 (EDT)
From: Greg Skouby <gskouby@ns0.sitesnow.com>
To: freebsd-questions@freebsd.org
Subject: vinum help. corrupt raid 5 volume
Message-ID: <Pine.BSF.4.10.10006021407210.41106-100000@ns0.sitesnow.com>
Hello,

We have been using RAID5 on a 3.3 release system quite successfully
until today. This morning we got these messages in /var/log/messages:

Jun 2 10:10:47 mail2 /kernel: (da3:ahc0:0:4:0): Invalidating pack
Jun 2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal read I/O error
Jun 2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is crashed by force
Jun 2 10:10:47 mail2 /kernel: vinum: raid5.p0 is degraded
Jun 2 10:10:47 mail2 /kernel: d: fatal drive I/O error
Jun 2 10:10:47 mail2 /kernel: vinum: drive d is down
Jun 2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal write I/O error
Jun 2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is stale by force
Jun 2 10:10:47 mail2 /kernel: d: fatal drive I/O error
Jun 2 10:10:47 mail2 /kernel: biodone: buffer already done

Then we rebooted because the system was locked up, and 'vinum start'
gave these errors:

%vinum start
Warning: defective objects
P raid5.p0          R5 State: corrupt   Subdisks: 4     Size: 64 GB
S raid5.p0.s2          State: crashed   PO: 1024 kB     Size: 21 GB
S raid5.p0.s3          State: stale     PO: 1536 kB     Size: 21 GB
%

I searched the archives and found nothing on how to fix a corrupt
plex, apart from Mr. Lehey mentioning that 'vinum start' might
possibly fix it. Is there anything else I can try? 'vinum list'
produces:

Configuration summary

Drives:         4 (8 configured)
Volumes:        1 (4 configured)
Plexes:         1 (8 configured)
Subdisks:       4 (16 configured)

D a                 State: up       Device /dev/da0h    Avail: 0/22129 MB (0%)
D b                 State: up       Device /dev/da1h    Avail: 0/22129 MB (0%)
D c                 State: up       Device /dev/da2h    Avail: 0/22129 MB (0%)
D d                 State: up       Device /dev/da3h    Avail: 0/22129 MB (0%)

V raid5             State: up       Plexes: 1       Size: 64 GB

P raid5.p0          R5 State: corrupt   Subdisks: 4     Size: 64 GB

S raid5.p0.s0          State: up        PO: 0 B         Size: 21 GB
S raid5.p0.s1          State: up        PO: 512 kB      Size: 21 GB
S raid5.p0.s2          State: crashed   PO: 1024 kB     Size: 21 GB
S raid5.p0.s3          State: stale     PO: 1536 kB     Size: 21 GB
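In case it helps anyone answer, this is the recovery sequence I was
planning to try next, pieced together from the vinum(8) man page. I
believe 'vinum start' on a subdisk initiates a revive, but I am not
certain of that, and with two of the four subdisks bad I suspect
parity alone cannot rebuild both:

    # after checking that da3 is readable again (the "Invalidating
    # pack" message suggests the disk itself went away):
    vinum start                 # bring objects up from the on-disk config
    vinum start raid5.p0.s3     # revive the stale subdisk from parity
    vinum start raid5.p0.s2     # then attempt the crashed one
    vinum list                  # confirm the plex state afterwards

Would that order be right, or would reviving like this destroy
whatever data is still recoverable?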