Date:      Fri, 19 Dec 2008 11:35:19 +0100
From:      "Dimitri Aivaliotis" <aglarond@gmail.com>
To:        "Ulf Lilleengen" <ulf.lilleengen@gmail.com>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: gvinum raid10 stale
Message-ID:  <55c107bf0812190235g408b9e00mf402fa76670c461b@mail.gmail.com>
In-Reply-To: <20081218175752.GA10326@carrot.lan>
References:  <55c107bf0812180320x502847efi53df5a7da68b73e1@mail.gmail.com> <20081218175752.GA10326@carrot.lan>


Hi Ulf,

On Thu, Dec 18, 2008 at 6:57 PM, Ulf Lilleengen
<ulf.lilleengen@gmail.com> wrote:
> On tor, des 18, 2008 at 12:20:26pm +0100, Dimitri Aivaliotis wrote:

> Why do you create 32 subdisks for each stripe? They are still on the same
> drive, and should not give you any performance increase as I see it. Just
> having one subdisk for each drive and mirroring them would give the same
> effect, and would allow you to expand the size.
>

A good question.  I'm not quite sure anymore why I did it this way.  I
guess that I thought it would bring performance gains, spreading the
stripes across the disks.  That's probably old-school thinking, and
doesn't apply anymore to modern disks.
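For anyone following along, the simpler layout Ulf describes (one subdisk per
drive, two striped plexes mirroring each other) might look something like the
following gvinum config. The device names, drive count, and stripe size here
are just placeholders, not taken from my actual setup:

```
# Hypothetical: four drives, two striped plexes mirrored in one volume.
drive d0 device /dev/da0
drive d1 device /dev/da1
drive d2 device /dev/da2
drive d3 device /dev/da3
volume raid10
  plex org striped 256k
    sd length 0 drive d0
    sd length 0 drive d2
  plex org striped 256k
    sd length 0 drive d1
    sd length 0 drive d3
```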

> I don't see how the subdisks could go stale after inserting the disks unless
> they changed names, and the new disks you inserted were named with the old
> disks' device numbers.

That was my initial assumption as well, but it turned out to be incorrect.

>> - How can I recover the other plex, such that the data continues to be
>> striped+mirrored correctly?
> For the volume where you have one good plex, you can do:
> gvinum start raid10
>
> This command will sync the bad plex from the good one.

OK, I've done this now:

# gvinum start raid10

GEOM_VINUM: plex sync raid10.p0 -> raid10.p1 started
GEOM_VINUM: sd raid10.p1.s0 is initializing

<snip>

GEOM_VINUM: plex raid10.p1 state change: down -> degraded


> For the volume where both plexes are down, you can try to force the subdisks
> of one of the plexes into the up state and see if you are able to fsck/mount
> the volume. If not, try the same procedure with the other plex. If one of
> them fscks/mounts cleanly, you can be pretty certain it is good, and you can
> then do a sync of the plexes.

That's what I did on the server where both plexes were down.  I'm
sorry if that wasn't clear from my initial message (reordered quote
here):

>> (I wound up doing a 'gvinum setstate -f up raid10.p1.s<num>' 32 times
>> to bring one plex back up on the server that had both down.)
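For the record, those 32 commands don't have to be typed by hand. A small
loop like the one below generates them; I'm assuming here that the subdisks
follow the raid10.p1.s<num> naming pattern, numbered 0 through 31. It's
written as a dry run that only prints the commands, so you can review the
list before removing the leading "echo" to actually execute them:

```shell
# Dry run: print the gvinum commands instead of executing them.
# Remove "echo" once the generated list looks right.
for i in $(seq 0 31); do
    echo gvinum setstate -f up raid10.p1.s$i
done
```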



>> - How can I extend this raid10 by adding two additional disks?
> I assume you want to increase the size, and not add more mirrors, so you
> can't. The plexes are striped, and extending the stripes is only supported in
> a new gvinum version not yet committed.

Great.  So, I'm stuck moving the data, rebuilding the RAID, and moving
the data back?  Good thing I do have an extra server handy...

- Dimitri


