Date: Sat, 26 Feb 2000 13:27:09 +1030
From: Greg Lehey <grog@lemis.com>
To: Gerd Knops <gerti@bitart.com>
Cc: Jon Rust <jpr@vcnet.com>, freebsd-questions@FreeBSD.ORG
Subject: Re: Disk Mirroring
Message-ID: <20000226132709.G31594@freebie.lemis.com>
In-Reply-To: <20000226012137.25935.qmail@camelot.bitart.com>
References: <Pine.BSF.3.96.1000225083344.14721A-100000@omnix.net> <v0421012eb4dc5c1efdd2@[209.239.239.22]> <20000225222638.25346.qmail@camelot.bitart.com> <v04210135b4dcb3c99d03@[209.239.239.22]> <20000226103604.A19404@freebie.lemis.com> <20000226012137.25935.qmail@camelot.bitart.com>
[Format recovered--see http://www.lemis.com/email/email-format.html]

On Friday, 25 February 2000 at 19:21:37 -0600, Gerd Knops wrote:
> Greg Lehey wrote:
>> On Friday, 25 February 2000 at 14:59:40 -0800, Jon Rust wrote:
>>> At 4:26 PM -0600 2/25/00, Gerd Knops wrote:
>>>
>>>> Well, did you ever try to recover a vinum mirror setup after one of two
>>>> IDE disks died and you replaced it with a new one?  It just doesn't work!
>>>
>>> I did this as a test when I first brought up my Vinum raids.  I'll see
>>> if I can remember the steps:
>>>
>>> 1) Drive fails (I unplugged the drive's power in my tests)
>>> 2) Shut down
>>> 3) Plug the new drive in (in the same logical location, i.e. IDE
>>>    master/slave, SCSI ID, etc.).  Make sure it's big enough!
>>> 4) Boot up
>>> 5) fdisk the new drive (I use /stand/sysinstall for this, and the
>>>    initial label)
>>> 6) (Re)label the new drive slice as vinum (disklabel -e da?)
>>> 7) Start vinum and issue "start mirror.p1" (for example, if a drive
>>>    on plex p1 in volume mirror had failed)
>>>
>>> Step 7 is the part my memory is fuzzy on, but without looking up my
>>> notes on it, I believe this is correct.  Mr. Lehey, do you have
>>> anything to add?
>>
>> In this situation you probably need to tell Vinum the name of the new
>> drive:
>>
>>    drive b device /dev/da1e
>>
>> Otherwise vinum will know that the drive exists, but there's nothing
>> on the drive to identify it.
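[Editor's note: putting Jon's steps and Greg's "drive" line together, the whole replacement might look like the following.  This is a rough sketch only, assuming the replacement disk is da1, its vinum partition is da1e, and the failed drive was called "b" in the original configuration; substitute your own device and object names.

    # Partition the new disk, then mark its vinum partition
    # (fstype "vinum") in the label:
    disklabel -e da1

    # Tell vinum the name of the replacement drive by feeding it a
    # one-line description file:
    echo 'drive b device /dev/da1e' > /tmp/newdrive
    vinum create /tmp/newdrive

    # Revive the stale subdisk; -w waits for the copy to complete:
    vinum start -w mirror.p1.s0

The -w flag and the start semantics are described in the man page excerpt quoted later in this message.]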
> Err:
>
> That gives me:
>
> ==============
> D d1            State: up     Device /dev/wd0f  Avail: 2487/2512 MB (99%)
> D d2            State: up     Device /dev/wd2f  Avail: 2511/2512 MB (100%)
>
> V mirror        State: up     Plexes: 2         Size: 24 MB
>
> P mirror.p0   C State: up     Subdisks: 1       Size: 24 MB
> P mirror.p1   C State: flaky  Subdisks: 1       Size: 24 MB
>
> S mirror.p0.s0  State: up     PO: 0 B           Size: 24 MB
> S mirror.p1.s0  State: reborn PO: 0 B           Size: 24 MB
> ==============
> (/dev/wd2f being the 'new' drive)
>
> 'vinum start mirror.p1' doesn't seem to do anything, nor does 'vinum stop'
> followed by 'vinum start' (that complains 'Warning: defective objects').

Read carefully what Jon wrote:

>>> 7) Start vinum and issue "start mirror.p1" (for example, if a
>>>    drive on plex p1 in volume mirror had failed)

'vinum start' has two different functions.  From the man page:

     start [-S size] [-w] [volume | plex | subdisk]
          start starts (brings into the up state) one or more vinum
          objects.

          If no object names are specified, vinum scans the disks known
          to the system for vinum drives and then reads in the
          configuration as described under the read commands.  The vinum
          drive contains a header with all information about the data
          stored on the drive, including the names of the other drives
          which are required in order to represent plexes and volumes.

          If vinum encounters any errors during this command, it will
          turn off automatic configuration update to avoid corrupting
          the copies on disk.  This will also happen if the
          configuration on disk indicates a configuration error (for
          example, subdisks which do not have a valid space
          specification).  You can turn the updates on again with the
          setdaemon and saveconfig commands.  Reset bit 4 of the daemon
          options mask to re-enable configuration saves.

          If object names are specified, vinum starts them.  Normally
          this operation is only of use with subdisks.  The action
          depends on the current state of the object:

          o   If the object is already in the up state, vinum does
              nothing.
          o   If the object is a subdisk in the down or reborn states,
              vinum changes it to the up state.

          o   If the object is a subdisk in the empty state, the change
              depends on the subdisk.  If it is part of a plex which is
              part of a volume which contains other plexes, vinum places
              the subdisk in the reviving state and attempts to copy the
              data from the volume.  When the operation completes, the
              subdisk is set into the up state.  If it is part of a plex
              which is part of a volume which contains no other plexes,
              or if it is not part of a plex, vinum brings it into the
              up state immediately.

          o   If the object is a subdisk in the reviving state, vinum
              continues the revive operation offline.  When the
              operation completes, the subdisk is set into the up state.

          When a subdisk comes into the up state, vinum automatically
          checks the state of any plex and volume to which it may belong
          and changes their state where appropriate.

          If the object is a volume or a plex, start currently has no
          effect: it checks the state of the subordinate subdisks (and
          plexes in the case of a volume) and sets the state of the
          object accordingly.

          To start a plex in a multi-plex volume, the data must be
          copied from another plex in the volume.  Since this frequently
          takes a long time, it is normally done in the background.  If
          you want to wait for this operation to complete (for example,
          if you are performing this operation in a script), use the -w
          flag.

          Copying data doesn't just take a long time, it can also place
          a significant load on the system.  You can specify the
          transfer size with the -S flag.  A future change to vinum will
          allow a pause between each block to lessen the load on the
          system.

Greg
--
When replying to this message, please copy the original recipients.
For more information, see http://www.lemis.com/questions.html
Finger grog@lemis.com for PGP public key
See complete headers for address and phone numbers
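[Editor's note: the State column in 'vinum list' output, like the listing quoted in this message, lends itself to mechanical checking, e.g. from a cron job that warns when a mirror degrades.  A minimal sketch, not from the thread; the sample file below is made up to mirror the listing above, and against a live system you would pipe real `vinum list` output into awk instead:

```shell
# Save a listing in the same shape as the one quoted above.
cat > /tmp/vinum.out <<'EOF'
P mirror.p0 C State: up Subdisks: 1 Size: 24 MB
P mirror.p1 C State: flaky Subdisks: 1 Size: 24 MB
S mirror.p0.s0 State: up PO: 0 B Size: 24 MB
S mirror.p1.s0 State: reborn PO: 0 B Size: 24 MB
EOF

# Print the name and state of every object whose state is not "up".
awk '{ for (i = 1; i < NF; i++)
         if ($i == "State:" && $(i+1) != "up")
           print $2, $(i+1) }' /tmp/vinum.out
```

Here it prints "mirror.p1 flaky" and "mirror.p1.s0 reborn" -- exactly the two objects that still need a start.]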