From: Ben Kaduk <minimarmot@gmail.com>
To: Ulf Lilleengen
Cc: freebsd-doc@freebsd.org
Date: Fri, 15 May 2009 03:54:34 -0400
Subject: Re: [Review request] improving (g)vinum documentation
Message-ID: <47d0403c0905150054j11187ed4q2ebd20ee379a79aa@mail.gmail.com>
In-Reply-To: <4A09D211.5000701@FreeBSD.org>

On Tue, May 12, 2009 at 3:46 PM, Ulf Lilleengen wrote:
> Hi,
>
> As part of SoC 2007, I extended the gvinum documentation in the handbook
> with some examples that I would like to commit. It would be good if
> someone could review the language. Thanks!
>
> Patch here:
> http://people.freebsd.org/~lulf/patches/doc/vinum_doc.diff

I'm not a vinum user, which makes me excellently suited to getting
confused and asking easy questions :)

--- chapter.sgml.orig   2008-12-22 22:51:29.000000000 +0100
+++ chapter.sgml        2009-05-11 21:17:38.986943400 +0200
@@ -742,6 +742,99 @@
+
+    Rebuilding a RAID-5 volume
+
+    RAID-5 rebuilding is a frequent task for
+    many administrators, and gvinum supports online rebuild if RAID-5

"of".  I might s/rebuild/rebuilding/, too.

+    plexes. This means that the filesystem on your volume may very well be
+    mounted while this is going on. A typical RAID-5 configuration might

This sentence is fairly informal, but not very informational -- I would
remove it.

+    look like this:
+
+      drive a device /dev/ad1
+      drive b device /dev/ad2
+      drive c device /dev/ad3
+      volume raid5vol
+      plex org raid5 512k name raid5vol.p0
+      sd drive a name raid5vol.p0.s0
+      sd drive b name raid5vol.p0.s1
+      sd drive c name raid5vol.p0.s2
+
+    If one of the drives fails (let's say ad3 for instance), the subdisk

s/let's say ad3/ad3,/

+    using that drive will fail. When the drive is replaced, a new drive
+    will have to be created for vinum to use:
+
+      drive d device /dev/ad4

Do I create this drive by putting a line like that into some
configuration file?  If so, which file?  Do I need to send a signal to a
daemon after changing the file?  If not, what command do I need to run?
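(Purely a guess on my part, to illustrate the kind of answer I'm hoping
the text will give: from a quick look at the gvinum(8) manual page it
seems you put the new drive definition in a small description file and
hand it to the create subcommand.  The file name below is made up, and
someone who actually runs gvinum should confirm the exact invocation:

  # echo "drive d device /dev/ad4" > /tmp/newdrive.conf
  # gvinum create /tmp/newdrive.conf

If that is roughly right, spelling it out here would save readers the
same confusion.)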
+
+    When this drive is created, the subdisk using the failed drive will
+    have to be moved to the new drive. This can be done with the following
+    command:
+
+      gvinum move d raid5vol.p0.s2
+
+    This will bind the subdisk to the new drive, and set it's state to

The subdisk is ... s2?  Maybe mention it explicitly for clarity?
Probably also say "the new drive d" to reinforce that this is the same
one added above.

+    'stale'. This means the plex is ready for rebuilding:
+
+      gvinum start raid5vol.p0
+
+    This command initiates the rebuild of the plex. The status of the
+    rebuild can be checked with the 'list' command, which shows how big
+    precentage of the plex that is rebuilt.

s/how big percentage/how much/
Maybe expand 'list' to 'gvinum list' ?

+
+    Growing a RAID-5 volume
+
+    Just like rebuilding, growing is a task
+    that is not that frequent, but rather very handy for an administrator.

The phrasing here is somewhat awkward.  Try "not very frequent" and "can
be handy" (no rather)

+    Gvinum supports online growing of RAID-5 plexes the same way it does
+    with rebuilds. Also note that growing striped (RAID-0) plexes is also

s/with/for/

+    supported, and the process of doing this is the same as for RAID-5
+    plexes. A typical configuration before expanding might look like

Is "growing" the technical term for this process?  If so, it should be
used in place of "expanding", here.  Otherwise, I think s/expanding/the
expansion/ would be more clear.

+    this:
+
+      drive a device /dev/ad1
+      drive b device /dev/ad2
+      drive c device /dev/ad3
+      volume raid5vol
+      plex org raid5 512k name raid5vol.p0
+      sd drive a name raid5vol.p0.s0
+      sd drive b name raid5vol.p0.s1
+      sd drive c name raid5vol.p0.s2
+
+    Let us say we want to expand this array with a new drive. There are
+    two ways to do this. One way is to extend the configuration and create
+    the drive manually:
+
+      drive d device /dev/ad4
+      sd drive d name raid5vol.p0.s3 plex raid5vol.p0
+
+    However, the following is a short version of the same:
+
+      grow raid5vol.p0 /dev/ad4
+
+    After the configuration is created, the state of the plex will be
+    set to 'growable'. This state means that the plex is capable of being
+    expanded. The size of the plex is not changed until the growing is
+    complete. First, start the growing process:
+
+      gvinum start raid5vol.p0
+
+    This command initiates the growing process. Just like when
+    rebuilding a plex, you are able to watch the status of the growing
+    process with the 'list' command, which shows how big precentage of the
+    plex that is grown. When the growing is finished, the plex will

s/how big/what/ and s/that is grown/has grown/

+    hopefully be up again, and the volume will have the new size. Remember
+    that if UFS is run on top of the volume, the filesystem itself will
+    also have to be grown using growfs.

markup on growfs?
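On that last point, it might be worth showing the growfs step too, since
anyone who got this far will need it next.  My understanding -- and this
is an assumption on my part, not something I've tested -- is that growfs
wants the filesystem unmounted, and that the gvinum volume shows up under
/dev/gvinum, so the sequence would be something like:

  # umount /mnt
  # growfs /dev/gvinum/raid5vol
  # mount /dev/gvinum/raid5vol /mnt

(Here /mnt is just a placeholder mount point.)  If growing the plex
really can happen while the volume stays mounted, a sentence on how that
interacts with growfs would be useful.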
Thanks for writing this up -- it will be a really handy reference for
those of us who don't know what we're doing!

-Ben Kaduk