Date: Fri, 27 Apr 2001 11:20:02 +0100
From: Scott Culverhouse <scott.culverhouse@opencube.co.uk>
To: Greg Lehey
Subject: Re: vinum - mirroring!
Cc: questions@FreeBSD.ORG
In-Reply-To: <20010427084328.C70059@wantadilla.lemis.com>
References: <20010426111634.C3FC.SCOTT.CULVERHOUSE@opencube.co.uk>
        <20010427084328.C70059@wantadilla.lemis.com>
Message-Id: <20010427111728.C41B.SCOTT.CULVERHOUSE@opencube.co.uk>

Thanks for the config, but when I do:

vinum -> create -f /etc/vinum.conf

I get the following errors:

==============
Apr 27 11:02:33 /kernel: vinum: drive a is up
Apr 27 11:02:33 /kernel: vinum: drive b is up
Apr 27 11:02:33 /kernel: vinum: removing 752 blocks of partial stripe at the end of var.p0
Apr 27 11:02:33 /kernel: vinum: var.p0.s0 is up
Apr 27 11:02:33 /kernel: vinum: var.p0.s1 is up
Apr 27 11:02:33 /kernel: vinum: var.p0 is up
Apr 27 11:02:33 /kernel: vinum: var is up
Apr 27 11:02:33 /kernel: vinum: removing 752 blocks of partial stripe at the end of var.p1
Apr 27 11:02:33 /kernel: vinum: var.p1 is faulty
Apr 27 11:02:33 /kernel: vinum: removing 412 blocks of partial stripe at the end of usr.p0
Apr 27 11:02:33 /kernel: vinum: usr.p0.s0 is up
Apr 27 11:02:33 /kernel: vinum: usr.p0.s1 is up
Apr 27 11:02:33 /kernel: vinum: usr.p0 is up
Apr 27 11:02:33 /kernel: vinum: usr is up
Apr 27 11:02:33 /kernel: vinum: removing 412 blocks of partial stripe at the end of usr.p1
Apr 27 11:02:33 /kernel: vinum: usr.p1 is faulty
==============

Here is my output from "vinum list":

==============
2 drives:
D a             State: up       Device /dev/da0e    Avail: 2029/32749 MB (6%)
D b             State: up       Device /dev/da1e    Avail: 2029/32749 MB (6%)

2 volumes:
V var           State: up       Plexes:    2    Size:    9 GB
V usr           State: up       Plexes:    2    Size:   19 GB

4 plexes:
P var.p0      S State: up       Subdisks:  2    Size:    9 GB
P var.p1      S State: faulty   Subdisks:  2    Size:    9 GB
P usr.p0      S State: up       Subdisks:  2    Size:   19 GB
P usr.p1      S State: faulty   Subdisks:  2    Size:   19 GB

8 subdisks:
S var.p0.s0     State: up       PO:     0  B    Size: 5119 MB
S var.p0.s1     State: up       PO:   273 kB    Size: 5119 MB
S var.p1.s0     State: empty    PO:     0  B    Size: 5119 MB
S var.p1.s1     State: empty    PO:   273 kB    Size: 5119 MB
S usr.p0.s0     State: up       PO:     0  B    Size:    9 GB
S usr.p0.s1     State: up       PO:   273 kB    Size:    9 GB
S usr.p1.s0     State: empty    PO:     0  B    Size:    9 GB
S usr.p1.s1     State: empty    PO:   273 kB    Size:    9 GB
==============
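By the way, if I am reading vinum(8) correctly, subdisks that sit in the
"empty" state like the p1 ones above would normally have to be revived by
hand with the "start" command, something along these lines (a sketch only,
I have not verified that this is the intended procedure):

  vinum -> start var.p1.s0     (copies the data across from the up plex var.p0)
  vinum -> start var.p1.s1
  vinum -> start usr.p1.s0
  vinum -> start usr.p1.s1

after which "vinum list" should show the p1 subdisks going from empty to up
once the revive finishes, and the p1 plexes coming up with them.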
Here is the output from "disklabel da0":

==============
type: SCSI
disk: da0s1
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 4442
sectors/unit: 71376732
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   102400        0    4.2BSD        0     0     0   # (Cyl.    0 - 6*)
  b:  4204380 67172352      swap                        # (Cyl. 4181*- 4442*)
  c: 71376732        0    unused        0     0         # (Cyl.    0 - 4442*)
  e: 67069952   102400     vinum                        # (Cyl.    6*- 4181*)
==============

Here is the output from "disklabel da1":

==============
type: SCSI
disk: da1s1
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 4442
sectors/unit: 71376732
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   104284 71272448    4.2BSD        0     0     0   # (Cyl. 4436*- 4442*)
  c: 71376732        0    unused        0     0         # (Cyl.    0 - 4442*)
  e: 67069952        0     vinum                        # (Cyl.    0 - 4174*)
  f:   409600 67069952    4.2BSD        0     0     0   # (Cyl. 4174*- 4200*)
  g:  3788800 67479552    4.2BSD        0     0     0   # (Cyl. 4200*- 4436*)
==============

Any ideas?

If I create a mirror with the "vinum mirror" command, it creates a mirror
fine, using the full size of the disk. But if I do "vinum printconfig > myconf",
change the lengths in myconf to, say, 5g, then run "vinum resetconfig"
followed by "vinum create -f myconf", that does not work either: I get the
same results as with the config you recommended.

One final question: will the config you gave (once it works) give us both
increased read speed and resilience? If a drive fails we want to be able to
rebuild from the mirror set.

Thanks!
Scott

On Fri, 27 Apr 2001 08:43:28 +0930 Greg Lehey wrote:

> On Thursday, 26 April 2001 at 11:19:27 +0100, Scott Culverhouse wrote:
> > Hi, we have a machine with 2 x 36Gb SCSI drives and we want to put var
> > and usr on to 2 vinum volumes. But we want both performance and
> > resilience; can that be done with two drives only?
> >
> > Here is a config I propose, but I am not sure whether it will work too
> > well or not. Is it ok?
> >
> > drive sd0 device /dev/da0e
> > drive sd1 device /dev/da0f
>
> You shouldn't put more than one drive on a spindle.
>
> > drive sd2 device /dev/da1e
> > drive sd3 device /dev/da1h
> >
> > volume var
> > plex org striped 256k
>
> You don't want a stripe size which is a power of 2.
>
> > sd length 5g drive sd0
> > sd length 5g drive sd1
> > plex org striped 256k
> > sd length 5g drive sd2
> > sd length 5g drive sd3
> >
> > volume usr
> > plex org striped 256k
> > sd length 10g drive sd0
> > sd length 10g drive sd1
>
> This would give you terrible performance: you'd be adding gratuitous
> seeks. The two subdisks should be on different spindles.
>
> > plex org striped 256k
> > sd length 10g drive sd2
> > sd length 10g drive sd3
> >
> > The reason I created 4 drives (2 slices on each drive) is so I could
> > have 2 drives per plex. Is that a good or bad idea?
>
> It's a bad idea. That's what we have subdisks for. Try this:
>
> drive a device /dev/da0e
> drive b device /dev/da1e
>
> volume var
>   plex org striped 273k
>     sd length 5g drive a
>     sd length 5g drive b
>   plex org striped 273k
>     sd length 5g drive b
>     sd length 5g drive a
>
> volume usr
>   plex org striped 273k
>     sd length 10g drive a
>     sd length 10g drive b
>   plex org striped 273k
>     sd length 10g drive b
>     sd length 10g drive a
>
> Note that the subdisks on the second plex are the other way round;
> otherwise the failure of a drive would cause you to lose half the
> volume, which is obviously not what you want.
>
> > What size var and usr will this give me, 5g and 10g respectively?
>
> This will give you 10g and 20g respectively.
>
> Greg
> --
> When replying to this message, please copy the original recipients.
> If you don't, I may ignore the reply.
> For more information, see http://www.lemis.com/questions.html
> Finger grog@lemis.com for PGP public key
> See complete headers for address and phone numbers
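One more thought, just so I have it written down: the vinum(8) man page also
mentions a "setupstate" keyword on the volume line which, if I understand it,
tells vinum to treat all plexes of a newly created volume as consistent, so
nothing is left faulty. That should only be safe for brand-new volumes that
are about to be newfs'ed anyway. A sketch of your config with it added
(untested, and I may be misreading the man page):

  drive a device /dev/da0e
  drive b device /dev/da1e

  volume var setupstate
    plex org striped 273k
      sd length 5g drive a
      sd length 5g drive b
    plex org striped 273k
      sd length 5g drive b
      sd length 5g drive a

  volume usr setupstate
    plex org striped 273k
      sd length 10g drive a
      sd length 10g drive b
    plex org striped 273k
      sd length 10g drive b
      sd length 10g drive a

and then, if I have it right, newfs the volumes directly:

  newfs -v /dev/vinum/var
  newfs -v /dev/vinum/usr

(the -v telling newfs that the vinum volume is not divided into partitions).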
--
Scott Culverhouse