From: secmgr <security@jim-liesl.org>
Date: Wed, 10 Nov 2004 14:31:10 -0700
To: msch@snafu.de
Cc: freebsd-stable@freebsd.org
Subject: Re: freebsd 5.3 have any problem with vinum ?

Ok, your instructions worked like a charm. So I'm running my nice 4-member SCSI gvinum RAID5 array (with softupdates turned on), and it's zipping along. Now I need to test just how robust this is. camcontrol is too nice.
I want to test a more real-world failure: I'm running dbench and just pull one of the drives. My expectation is that I should see a minor pause, and then the array should continue in some slower, degraded mode. What I get instead is a kernel trap 12 (boom!). I reboot, and it will not mount the degraded set until I replace the drive. I turned off softupdates and had the same thing happen.

Is this a bogus test? Is it reasonable to expect that a SCSI drive failure should have been tolerated without crashing?

(bunch of SCSI msgs to console)
  sub-disk down
  plex degraded
  g_access failed: 6

  Fatal trap 12: page fault while in kernel mode
  cpuid = 1; apic id = 01
  fault virtual address = 0x18c
  fault code            = supervisor write, page not present
  instruction pointer   = 0x8:0xc043d72c
  stack pointer         = 0x10:cbb17bf0
  code segment          = base 0x0, limit 0xfff, type 0x1b
                        = DPL 0, pres 1, def32 1, gran 1
  processor flags       = interrupt enabled, resume, IOPL = 0
  current process       = 22 (irq11: ahc1)

Matthias Schuendehuette wrote:

>gvinum> start
>
>This (as far as I investigated :-)
>
>a) initializes a newly created RAID5-plex, or
>
>b) recalculates parity information on a degraded RAID5-plex with
>   a newly replaced subdisk.
>
>So, a 'gvinum start raid5.p0' initializes my RAID5-plex if newly
>created. You can monitor the initialization process with subsequent
>'gvinum list' commands.
>
>If you degrade a RAID5-plex with 'camcontrol stop <device>' (in case
>of SCSI disks) and 'repair' it afterwards with 'camcontrol start
><device>', the 'gvinum start raid5.p0' (my volume here is called
>'raid5') command recalculates the parity and revives the subdisk which
>was on disk <device>.
>
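For the archives, the degrade/repair cycle Matthias describes would look roughly like this as a shell session. This is a sketch, not a tested recipe: the device name da1 and the plex name raid5.p0 are placeholders from his example, and you'd substitute your own disk and volume names.

```shell
# Simulate a drive failure by spinning down one SCSI member disk
# (da1 is a hypothetical device name -- use your own):
camcontrol stop da1

# The plex should now be listed as degraded:
gvinum list

# "Repair" the disk by spinning it back up:
camcontrol start da1

# Revive the subdisk and recalculate parity on the plex
# (the volume in Matthias' example is called 'raid5'):
gvinum start raid5.p0

# Watch the rebuild progress with repeated list commands:
gvinum list
```

Note that 'camcontrol stop' is a graceful spin-down, so the array sees a clean state change; physically yanking a live drive mid-I/O (as in the dbench test above) exercises a much nastier error path, which is apparently where the trap 12 comes from.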