From owner-freebsd-questions  Sat Jun  3 0:26:55 2000
Delivered-To: freebsd-questions@freebsd.org
Received: from wantadilla.lemis.com (wantadilla.lemis.com [192.109.197.80])
	by hub.freebsd.org (Postfix) with ESMTP id 15BF237B55F
	for ; Sat, 3 Jun 2000 00:26:50 -0700 (PDT)
	(envelope-from grog@wantadilla.lemis.com)
Received: (from grog@localhost)
	by wantadilla.lemis.com (8.9.3/8.9.3) id QAA32706;
	Sat, 3 Jun 2000 16:56:40 +0930 (CST)
	(envelope-from grog)
Date: Sat, 3 Jun 2000 16:56:40 +0930
From: Greg Lehey
To: Greg Skouby
Cc: freebsd-questions@FreeBSD.ORG
Subject: Re: vinum help. corrupt raid 5 volume
Message-ID: <20000603165640.M30249@wantadilla.lemis.com>
References:
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Mailer: Mutt 1.0pre2i
In-Reply-To:
Organization: LEMIS, PO Box 460, Echunga SA 5153, Australia
Phone: +61-8-8388-8286
Fax: +61-8-8388-8725
Mobile: +61-418-838-708
WWW-Home-Page: http://www.lemis.com/~grog
X-PGP-Fingerprint: 6B 7B C3 8C 61 CD 54 AF 13 24 52 F8 6D A4 95 EF
Sender: owner-freebsd-questions@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

On Friday, 2 June 2000 at 14:11:26 -0400, Greg Skouby wrote:
> Hello,
>
> We have been using RAID5 on a 3.3 release system quite successfully until
> the last day. This morning we got these messages in /var/log/messages:
>
> Jun  2 10:10:47 mail2 /kernel: (da3:ahc0:0:4:0): Invalidating pack
> Jun  2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal read I/O error
> Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is crashed by force
> Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0 is degraded
> Jun  2 10:10:47 mail2 /kernel: d: fatal drive I/O error
> Jun  2 10:10:47 mail2 /kernel: vinum: drive d is down
> Jun  2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal write I/O error
> Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is stale by force
> Jun  2 10:10:47 mail2 /kernel: d: fatal drive I/O error
> Jun  2 10:10:47 mail2 /kernel: biodone: buffer already done

On Friday, 2 June 2000 at 14:27:42 -0400, Greg Skouby wrote:
> Hello again,
>
> I just sent a message regarding raid5 and vinum a couple of minutes ago. I
> managed to get the volume to this state:
>
> Configuration summary
>
> Drives:         4 (8 configured)
> Volumes:        1 (4 configured)
> Plexes:         1 (8 configured)
> Subdisks:       4 (16 configured)
>
> D a            State: up       Device /dev/da0h  Avail: 0/22129 MB (0%)
> D b            State: up       Device /dev/da1h  Avail: 0/22129 MB (0%)
> D c            State: up       Device /dev/da2h  Avail: 0/22129 MB (0%)
> D d            State: up       Device /dev/da3h  Avail: 0/22129 MB (0%)
>
> V raid5        State: up       Plexes: 1         Size: 64 GB
>
> P raid5.p0   R5 State: degraded  Subdisks: 4     Size: 64 GB
>
> S raid5.p0.s0  State: up       PO: 0  B          Size: 21 GB
> S raid5.p0.s1  State: up       PO: 512 kB        Size: 21 GB
> S raid5.p0.s2  State: up       PO: 1024 kB       Size: 21 GB
> S raid5.p0.s3  State: reviving PO: 1536 kB       Size: 21 GB
>
> How long does the reviving process take?

That depends on the size and speed of the drives.  I'd expect this to
take an hour or two.  You should see heavy disk activity.

> I saw that Mr. Lehey noted that there were some problems with raid5
> and the start raid5.p0.s3 command.

I must say you're brave running RAID-5 on 3.3-RELEASE.

> Is there anything else I can do? Thanks for your time.

I'd suggest you leave it the way it is at the moment.  There are so
many bugs in revive in 3.3 that it's not even worth trying.  I'm about
to commit a whole lot of fixes to 3-STABLE.  When I've done it, you
can upgrade.  Reviving the plex should then work.

Greg
--
When replying to this message, please copy the original recipients.
For more information, see http://www.lemis.com/questions.html
Finger grog@lemis.com for PGP public key
See complete headers for address and phone numbers


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message
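
A note on watching the revive (not part of the original thread): a minimal
sketch, assuming the stock vinum(8) and iostat(8) utilities on FreeBSD 3.x
and the object and device names from the quoted listing.

    # Re-check the configuration; raid5.p0.s3 should eventually move from
    # "reviving" back to "up" once the revive has completed.
    vinum list

    # The heavy disk activity Greg mentions should show up as sustained
    # transfers on the RAID-5 member disks, sampled every 5 seconds.
    iostat -w 5 da0 da1 da2 da3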
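Similarly hedged, and again not from the original thread: if the revive has
to be re-run after upgrading to a fixed 3-STABLE, the start command Greg
Skouby refers to can be issued again against the failed subdisk and the
result checked with another listing.

    # Re-attempt the revive of the failed subdisk (names as in the quoted
    # output), then confirm its state.
    vinum start raid5.p0.s3
    vinum list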