From owner-freebsd-stable@FreeBSD.ORG Thu Aug 17 19:00:51 2006
Date: Thu, 17 Aug 2006 14:00:50 -0500
From: Steve Peterson <stevep-hv@zpfe.com>
To: freebsd-stable@freebsd.org
Subject: gvinum / FreeBSD 6.1 / stale subdisks

Greetings all --

I'm running FreeBSD 6.1-RELEASE on i386 with a stock kernel, and am
trying to build a 4-disk RAID5 array using gvinum.  The issue is that,
once the system is rebooted after initially creating the array, the
subdisks come up as stale.

I started by creating a 4-subdisk array on the 4 devices, with each
subdisk sized at 200GB.  I used this configuration to test performance
and verify the disk subsystem.  The filesystem was UFS2 with 64K blocks
and 8K fragments.

I then deleted the volume, plex, and subdisks in order to do some
performance testing on one of the drives using an fdisk-style partition
with a UFS2 filesystem.  That filesystem was created with the same
block/fragment settings.  I ran bonnie++ on it and did some bulk copies
to check on performance.

I then returned to gvinum to recreate my RAID5 array.  The
configuration file I used was similar to, but not exactly the same as,
the original one, as I wanted to try a different stripe size.  The
filesystem was created with the same settings as listed above.

After running gvinum create gvinum.config, formatting the filesystem,
and mounting it, I was able to use the filesystem to read and write
data without problems.  However, after rebooting, the subdisks all show
as stale, as shown below:

gvinum -> list
4 drives:
D drive03                State: up       /dev/ad4        A: 38474/238475 MB (16%)
D drive02                State: up       /dev/ad6        A: 38474/238475 MB (16%)
D drive04                State: up       /dev/ad8        A: 38474/238475 MB (16%)
D drive01                State: up       /dev/ad10       A: 38474/238475 MB (16%)

1 volume:
V vol1                   State: down     Plexes:       1 Size:        585 GB

1 plex:
P vol1.p0             R5 State: down     Subdisks:     4 Size:        585 GB

4 subdisks:
S vol1.p0.s0             State: stale    D: drive01      Size:        195 GB
S vol1.p0.s1             State: stale    D: drive02      Size:        195 GB
S vol1.p0.s2             State: stale    D: drive03      Size:        195 GB
S vol1.p0.s3             State: stale    D: drive04      Size:        195 GB

gvinum -> quit

There is no /var/log/vinum_history file.  grep vinum /var/log/messages
yielded the one line below:

Aug 16 23:36:08 archive kernel: g_vfs_done():gvinum/vol1[READ(offset=65536, length=8192)]error = 6

Thanks in advance for any suggestions you might have.

Steve
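
P.S.  For reference, the configuration file was along the following
lines.  The drive-to-device mapping matches the gvinum list output
above; the stripe size and subdisk length shown here are placeholders
rather than the exact values from my file:

    drive drive01 device /dev/ad10
    drive drive02 device /dev/ad6
    drive drive03 device /dev/ad4
    drive drive04 device /dev/ad8
    volume vol1
      plex org raid5 512k
        sd length 200g drive drive01
        sd length 200g drive drive02
        sd length 200g drive drive03
        sd length 200g drive drive04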
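
The sequence of commands I ran to create, format, and mount the volume
was roughly the following (block/fragment sizes as described above; the
mount point shown is illustrative):

    gvinum create gvinum.config
    newfs -b 65536 -f 8192 /dev/gvinum/vol1
    mount /dev/gvinum/vol1 /data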