Date: Sat, 21 May 2005 10:10:36 +1000 (EST)
From: rk <aris@pharoe.com>
To: freebsd-geom@freebsd.org
Subject: gvinum crashes, help!

Hi,

I have a server running FreeBSD 5.3 ... as of yesterday, 5.4. It has a RAID1 on 2x4.3GB drives for the OS, and *had* a RAID5 on 3x120GB drives (the 120GB drives are no longer in the box). It has now been upgraded to 4x250GB drives, which will run in RAID5.

My problems are (in a nutshell: no resetconfig is implemented in gvinum yet):

A. I cannot fully clean up the old RAID5 configuration.

The stale subdisks appear as "up" (the rest of the objects - disks, volume, plex - are gone), and attempting to delete these subdisks immediately crashes the machine. I upped sys/, lib/libgeom, sbin/geom and sbin/gvinum to RELENG_5_4, hoping the newly-added setstate support would help. Nada. I can set the ghost subdisks to "down" (which does not persist over a reboot), but even when they are down, attempting to remove them bounces the box.

The configuration seems to be saved somewhere on the 4.3GB disks; maybe I can somehow edit it by hand...?

Here's a gvinum l -r:

# gvinum l -r
2 drives:
D sys-b                State: up       /dev/da1s1h     A: 255/4141 MB (6%)
        S sys.p1.s0           State: up       D: sys-b        Size:       3885 MB
D sys-a                State: up       /dev/da0s1h     A: 255/4141 MB (6%)
        S sys.p0.s0           State: up       D: sys-a        Size:       3885 MB

1 volume:
V sys                  State: up       Plexes:       2 Size:       3885 MB
        P sys.p1             C State: up       Subdisks:     1 Size:       3885 MB
                S sys.p1.s0           State: up       D: sys-b        Size:       3885 MB
        P sys.p0             C State: up       Subdisks:     1 Size:       3885 MB
                S sys.p0.s0           State: up       D: sys-a        Size:       3885 MB

2 plexes:
P sys.p0             C State: up       Subdisks:     1 Size:       3885 MB
        S sys.p0.s0           State: up       D: sys-a        Size:       3885 MB
P sys.p1             C State: up       Subdisks:     1 Size:       3885 MB
        S sys.p1.s0           State: up       D: sys-b        Size:       3885 MB

5 subdisks:
S store.p0.s2          State: up       D: store-c      Size:        111 GB
S store.p0.s1          State: up       D: store-b      Size:        111 GB
S store.p0.s0          State: up       D: store-a      Size:        111 GB
S sys.p0.s0            State: up       D: sys-a        Size:       3885 MB
S sys.p1.s0            State: up       D: sys-b        Size:       3885 MB

How can I clean those annoying bits out?
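(One thing I was tempted to try, but haven't dared yet: as far as I know, vinum keeps its on-disk configuration as plain ASCII text near the start of each vinum partition, so something along these lines should show what is actually recorded there - the sector count is my guess, not a number I've verified:

    # dd if=/dev/da0s1h count=512 2>/dev/null | strings | less

If the stale store.* subdisk records show up in that dump, zeroing the same region with dd if=/dev/zero would presumably wipe them. The catch is that the same region holds the valid sys RAID1 records on those drives, so I'd rather hear of a safer way first.)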
B. Creating the new RAID.

When I run gvinum create, the box again mercilessly crashes. No nice readable kernel panic, just an immediate reboot.

The four new drives are all in. All look like this:

# bsdlabel /dev/ad4s1
# /dev/ad4s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 488392002        0    unused        0     0         # "raw" part, don't edit
  h: 488391986       16     vinum

I created a new configuration to RAID them:

drive hold1 device /dev/ad4s1h
drive hold2 device /dev/ad6s1h
drive hold3 device /dev/ad8s1h
drive hold4 device /dev/ad10s1h
volume hold
 plex org raid5 715425M
  sd length 238475M drive hold1
  sd length 238475M drive hold2
  sd length 238475M drive hold3
  sd length 238475M drive hold4

The moment I run create with this file, zbang.

Help :-)

TIA
Miki S
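P.S. In case my arithmetic is part of the problem, here is where the numbers in that config file come from:

    h partition:  488391986 sectors * 512 B  =  ~238472 MB
    sd length:    238475M each (4 subdisks)
    715425M    =  (4 - 1) * 238475M, i.e. the usable RAID5 capacity

I put 715425M after "plex org raid5" on the assumption that the field is the plex size; if gvinum actually parses it as the stripe size (the handbook examples use small values like 512k there), then that line is nonsense - though I'd still not expect a bad config file to reboot the box. I also notice now that 238475M is slightly more than the ~238472 MB each h partition actually holds.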