From owner-freebsd-questions Fri Jun  2 11:11:36 2000
Delivered-To: freebsd-questions@freebsd.org
Received: from ns0.sitesnow.com (ns0.sitesnow.com [63.166.182.130])
	by hub.freebsd.org (Postfix) with ESMTP id 50BA037BA79
	for ; Fri, 2 Jun 2000 11:11:33 -0700 (PDT)
	(envelope-from gskouby@ns0.sitesnow.com)
Received: from gskouby (helo=localhost)
	by ns0.sitesnow.com with local-esmtp (Exim 3.13 #1)
	id 12xvup-000AnF-00
	for freebsd-questions@freebsd.org; Fri, 02 Jun 2000 14:11:27 -0400
Date: Fri, 2 Jun 2000 14:11:26 -0400 (EDT)
From: Greg Skouby
To: freebsd-questions@freebsd.org
Subject: vinum help. corrupt raid 5 volume
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-questions@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

Hello,

We have been using RAID5 on a 3.3 release system quite successfully until
the last day. This morning we got these messages in /var/log/messages:

Jun  2 10:10:47 mail2 /kernel: (da3:ahc0:0:4:0): Invalidating pack
Jun  2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal read I/O error
Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is crashed by force
Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0 is degraded
Jun  2 10:10:47 mail2 /kernel: d: fatal drive I/O error
Jun  2 10:10:47 mail2 /kernel: vinum: drive d is down
Jun  2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal write I/O error
Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is stale by force
Jun  2 10:10:47 mail2 /kernel: d: fatal drive I/O error
Jun  2 10:10:47 mail2 /kernel: biodone: buffer already done

Then we rebooted because the system was locked up, and 'vinum start' gave
these errors:

%vinum start
Warning: defective objects

P raid5.p0           R5 State: corrupt   Subdisks: 4    Size: 64 GB
S raid5.p0.s2           State: crashed   PO: 1024 kB    Size: 21 GB
S raid5.p0.s3           State: stale     PO: 1536 kB    Size: 21 GB
%

I searched the archives and found nothing on how to fix a corrupt volume,
apart from Mr. Lehey mentioning that 'vinum start' might be able to fix
this.
Is there anything else I can try? 'vinum list' produces:

Configuration summary

Drives:         4 (8 configured)
Volumes:        1 (4 configured)
Plexes:         1 (8 configured)
Subdisks:       4 (16 configured)

D a                  State: up       Device /dev/da0h  Avail: 0/22129 MB (0%)
D b                  State: up       Device /dev/da1h  Avail: 0/22129 MB (0%)
D c                  State: up       Device /dev/da2h  Avail: 0/22129 MB (0%)
D d                  State: up       Device /dev/da3h  Avail: 0/22129 MB (0%)

V raid5              State: up       Plexes: 1         Size: 64 GB

P raid5.p0        R5 State: corrupt  Subdisks: 4       Size: 64 GB

S raid5.p0.s0        State: up       PO: 0  B          Size: 21 GB
S raid5.p0.s1        State: up       PO: 512 kB        Size: 21 GB
S raid5.p0.s2        State: crashed  PO: 1024 kB       Size: 21 GB
S raid5.p0.s3        State: stale    PO: 1536 kB       Size: 21 GB

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message
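[Editorial note: the message above is a question, and the list's answer is
not part of this archive chunk. For readers who land here in the same
state, the usual vinum(8)-era recovery sequence looks roughly like the
sketch below. This is a hedged reconstruction from the vinum(8) manual
conventions, not the reply the poster received; the object names
(raid5.p0.s2, raid5.p0.s3) are taken from the listing above, and exact
subcommand behaviour varies across 3.x releases, so check vinum(8) on your
system first.]

```shell
# Sketch only -- verify against vinum(8) on your release before running.
# Assumes the underlying disk (da3 / drive d) is reachable again after
# the reboot, as the "State: up" drive listing above suggests.

# 1. Confirm current object states.
vinum list

# 2. A "crashed" subdisk failed on an I/O error but may still hold valid
#    data (no writes were missed). If so, force it back up:
vinum setstate up raid5.p0.s2

# 3. A "stale" subdisk has missed writes and must be rebuilt. Reviving
#    it makes vinum reconstruct its contents from parity, which requires
#    every other subdisk in the plex to be up:
vinum start raid5.p0.s3

# 4. Once the plex returns to "up", persist the configuration:
vinum saveconfig
```

Note the ordering: parity reconstruction in step 3 can only succeed once
the other three subdisks are up, so the crashed subdisk has to be dealt
with first. Forcing a subdisk up with setstate is only safe if its data
really is intact; otherwise it silently corrupts the rebuild.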