Date:      Sat, 18 Dec 2004 17:35:59 -0500
From:      Paul Mather <paul@gromit.dlib.vt.edu>
To:        Nikolaj Hansen <nikolaj.hansen@barnabas.dk>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: FreeBSD 5.3 and vinum upgrade #2
Message-ID:  <1103409359.52279.2.camel@zappa.Chelsea-Ct.Org>
In-Reply-To: <41C48A6F.7000203@barnabas.dk>
References:  <41C48A6F.7000203@barnabas.dk>

On Sat, 2004-12-18 at 20:52 +0100, Nikolaj Hansen wrote:

> While some uncommon configurations, such as multiple vinum drives on a 
> disk, are not supported, it is generally backward compatible. Note that 
> for the geom(4)-aware vinum, its new userland control program, gvinum, 
> should be used, and it is not yet feature-complete."
> 
> I think I have to disagree with calling multiple drives on a disk "uncommon".
> In fact, I think I remember that being the way it was demonstrated in an 
> old version of the handbook. Here is my current setup after rolling back 
> to FreeBSD 5.2.1:

I think you are misunderstanding things a little here.  It's not that
multiple vinum volumes per disk can't be handled; rather, it's
multiple vinum configurations per disk that are problematic.  In other
words, I believe it's not supported to have, say, a /dev/da0s1g "vinum"
partition (containing vinum volumes, plexes, and subdisks) and also,
say, a /dev/da0s1h "vinum" partition (again, containing vinum volumes,
plexes, and subdisks).  Such a setup was okay under the old vinum, but
is not okay under geom_vinum (AFAIK). 

> As far as I can tell, the new 5.3 release makes this disk configuration 
> invalid?

The vinum configuration you listed appears fine for geom_vinum.  I
transitioned my old root-on-vinum all-mirrored setup over to geom_vinum
without any problems.  (Yours looks the same, except that you also have
a third drive with a single concat plex volume on it.)
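
(If it's useful: as I understand it, the geom(4)-aware vinum gets
loaded via /boot/loader.conf rather than the old rc.conf knob, e.g.:

  # /boot/loader.conf
  geom_vinum_load="YES"        # load geom_vinum at boot

and the old start_vinum="YES" line in /etc/rc.conf should go away.)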

> If not I have a major problem here :-(

The biggest problem you'll have is if your system suffers the ATA
"TIMEOUT - WRITE_DMA" woe that bedevils some of us under 5.3.  When that
happens, your mirror will be knocked into a degraded state (half of your
mirrored plexes will be marked down) even though the drive is okay.
Unfortunately, "setstate" is not yet implemented in "gvinum", so you
cannot mark the drive as up and then issue "gvinum start" for the
"downed" plexes; short of rebooting, there's little you can do to get
the "failed" drive recognised as being "up" again.  (You
might be able to use atacontrol to stop/start or otherwise reset the
drive; in my particular system I can't use atacontrol detach/attach
because they're both on the same channel.)  At any rate, every so often,
when this happens, you'll have to resynchronise the "failed" plexes,
which *really* sucks the I/O life out of the system because there's no
way to throttle back reconstruction, unlike with geom_mirror (which has
two sysctls to govern the load imposed by resynchronisation).
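
Roughly, the recovery dance I mean looks like this (drive, plex, and
channel names are made up; the vinum(8) lines show what setstate would
buy you if gvinum had it):

  gvinum list                 # plexes on the "failed" drive show down
  atacontrol reinit ata1      # maybe reset the channel, if yours allows
  # Under the old vinum(8) you could then recover in place:
  #   vinum setstate up <drive>
  #   vinum start <plex>
  # gvinum has no setstate yet, so rebooting is the fallback.
  sysctl kern.geom.mirror     # by contrast, this lists geom_mirror's
                              # knobs, including the ones that throttle
                              # resynchronisation load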

But, it looks like you're lucky, because your mirrored drives are SCSI.
I don't know about your ATA concat plex volume, though...

Cheers,

Paul.
-- 
e-mail: paul@gromit.dlib.vt.edu

"Without music to decorate it, time is just a bunch of boring production
 deadlines or dates by which bills must be paid."
        --- Frank Vincent Zappa


