Date:      Wed, 11 Jan 2006 06:58:34 -0500 (EST)
From:      "Brian Szymanski" <ski@mediamatters.org>
To:        "Stijn Hoop" <stijn@win.tue.nl>
Cc:        freebsd-stable@freebsd.org, Lukas Ertl <le@FreeBSD.org>
Subject:   Re: gvinum/vinum on 6.0
Message-ID:  <3657.68.49.189.193.1136980714.squirrel@68.49.189.193>
In-Reply-To: <20060111064349.GJ63938@pcwin002.win.tue.nl>
References:  <2178.68.49.189.193.1136950709.squirrel@68.49.189.193> <20060111064349.GJ63938@pcwin002.win.tue.nl>


[-- Attachment #1 --]
Stijn, thanks for your help, I'm getting closer...

>> I took 6.0 for a test drive today and was disappointed to find that
>> vinum/gvinum are still in disarray. For example, there is a man page for
>> vinum, but only a gvinum binary. gvinum help still lists lots of old
>> vinum commands that are not implemented in gvinum. Lots of basic things
>> I try from the gvinum prompt just tell me "not yet supported".
>
> Hmm. There is a manpage in 6-STABLE. And there are a few things that
> don't work but I wouldn't call it "lots".

Ah, a manpage! Progress...

>> But most importantly, gvinum configuration (at least for a raid-5 plex)
>> still doesn't persist across a reboot :(
>
> That's a bug; I think it might be related to compiling gvinum into the
> kernel as opposed to loading it from /boot/loader.conf. I also think
> there is a fix already committed to 6-STABLE.

Hmm, I upgraded to 6-STABLE and I'm still having the problem.

Here's basically how it happens:
gvinum create /etc/vinum.cnf
newfs /dev/gvinum/VOLUME
mount /dev/gvinum/VOLUME /mnt
#screw with /mnt, everything works and is happy, yay!
reboot

At this point (after the reboot) I run "gvinum l" by hand, which loads
geom_vinum.ko. My configuration mostly seems to persist - except for the
"drives" section...

One of two things happens here: either
a) the volumes/plexes appear down and the subdisks are stale. Additionally,
/dev/gvinum is not there - suffice it to say I can't mount anything; or
b) everything has status up (except the nonexistent drives). In this case
too, /dev/gvinum is not there - and I still can't mount anything.

If I try to fix the configuration by gvinum rm'ing the volumes and plexes
that are there, then reloading my vinum.cnf (gvinum create), either:
a) everything seems to work
b) the system panics

Unfortunately there is no correlation between "gvinum l" output and
whether the system panics or is happy when I run gvinum create again.
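For concreteness, the recovery attempt described above looks roughly like
this; "rv" is the volume name from my vinum.cnf below, and I'm assuming
gvinum's rm accepts the same -r recursion flag the old vinum(8) did:

```shell
# Hypothetical recovery sequence after the reboot leaves things stale.
# Assumes gvinum rm takes -r to remove plexes/subdisks along with the volume.
gvinum rm -r rv                # tear down the broken volume hierarchy
gvinum create /etc/vinum.cnf   # re-read the config from scratch
```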

Would I have better luck compiling gvinum into the kernel instead of
loading the module? What do other folks with working configurations do?
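For reference, the way I'd load the module at boot instead of running
"gvinum l" by hand is the standard loader knob; a minimal /boot/loader.conf
fragment (assuming the module lives in the default /boot/kernel path):

```shell
# /boot/loader.conf -- load geom_vinum before the kernel mounts filesystems,
# so /dev/gvinum devices should exist by the time fstab is processed.
geom_vinum_load="YES"
```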

Thanks in advance,
Brian

Brian Szymanski
Software and Systems Developer
Media Matters for America
ski@mediamatters.org

[-- Attachment #2 --]
drive apple device /dev/ad1d
drive banana device /dev/ad2d
drive orange device /dev/ad3d

volume rv
 plex org raid5 288k
  sd length 1018m drive apple
  sd length 1018m drive banana
  sd length 1018m drive orange


[-- Attachment #3 --]
3 drives:
D orange                State: up	/dev/ad3d	A: 6/1023 MB (0%)
D banana                State: up	/dev/ad2d	A: 6/1023 MB (0%)
D apple                 State: up	/dev/ad1d	A: 6/1023 MB (0%)

1 volume:
V rv                    State: up	Plexes:       1	Size:       2035 MB

1 plex:
P rv.p0              R5 State: up	Subdisks:     3	Size:       2035 MB

3 subdisks:
S rv.p0.s2              State: up	D: orange       Size:       1017 MB
S rv.p0.s1              State: up	D: banana       Size:       1017 MB
S rv.p0.s0              State: up	D: apple        Size:       1017 MB

[-- Attachment #4 --]
0 drives:

1 volume:
V rv                    State: down	Plexes:       0	Size:          0  B

1 plex:
P rv.p0              R5 State: down	Subdisks:     0	Size:          0  B

3 subdisks:
S rv.p0.s0              State: stale	D: apple        Size:       1017 MB
S rv.p0.s1              State: stale	D: banana       Size:       1017 MB
S rv.p0.s2              State: stale	D: orange       Size:       1017 MB
