From owner-freebsd-geom@FreeBSD.ORG Sun Mar 12 17:42:23 2006
From: "Andrew" <lightmoogle@hotmail.com>
To: <freebsd-geom@freebsd.org>
Date: Sun, 12 Mar 2006 10:42:31 -0700
Subject: Gvinum RAID5 volume doesn't work after a simple reboot

I ran into the problem of initializing a RAID5 array after a reboot as
well. When you create the RAID5 array, everything shows up as "up" and
OK; indeed, you can newfs it, mount it, and put data on it. But when you
reboot, the volume comes back in the "down" state. If you then start the
volume, it goes through the initialization process.

Let me add one thing: using dd to zero out the drives beforehand is a
waste of time, since the initialization that happens after the reboot
overwrites them anyway.

The fix I settled on is: create the array, reboot, then start the array
and let it initialize. The initialization is persistent from then on; it
only happens once. You mentioned needing to run newfs twice; this seems
to be why.

My real problem, though, is what happens when a disk is removed. If a
disk is removed, the array fails entirely; there is no degraded state.
If I reboot with the original drive re-added, things work flawlessly
again. If I reboot with a _new_ drive, I can't replace the drive in the
array and I can't rebuild parity. I'm stuck.

So instead of RAID5's "one drive fails and you're still OK", I'm
getting, "Well, we're using that one drive for parity, but if _any_ of
your drives fails, you're screwed."

The only thought I have on that last part is that I'm using the entire
disk (/dev/ad0) as opposed to a labeled partition (/dev/ad0s1a), but I
haven't tried messing with that yet.

-DrkShadow
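
P.S. In case it helps anyone reproduce this, here is roughly the
sequence I'm describing. The drive and volume names here are made up,
and I'm going from memory, so check gvinum(8) and the Handbook's vinum
chapter before copying anything:

    # raid5.conf -- three whole disks in one RAID5 plex
    drive d0 device /dev/ad0
    drive d1 device /dev/ad1
    drive d2 device /dev/ad2
    volume r5
      plex org raid5 512k
        sd length 0 drive d0
        sd length 0 drive d1
        sd length 0 drive d2

    gvinum create raid5.conf      # everything reports "up" right away
    newfs /dev/gvinum/r5          # works fine before the first reboot
    mount /dev/gvinum/r5 /mnt
    shutdown -r now
    gvinum list                   # after reboot: volume is "down"
    gvinum start r5               # kicks off the one-time initialization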