Date: Tue, 04 Jan 2000 11:22:46 -0800
From: Matthew Reimer <mreimer@vpop.net>
To: freebsd-stable@freebsd.org
Subject: Unexpected vinum behavior while reviving
Message-ID: <38724886.9A2DFB05@vpop.net>
FreeBSD 3.3 circa 1999 Nov 10, two 36G disks mirrored (one plex per
disk, da1 and da2).
While testing vinum's ability to revive a plex, I was surprised by the
pattern of disk activity. Nothing was running on the box except vinum,
which was reviving a 35G mirrored plex I had earlier disconnected. What
seems strange is the traffic pattern between the two disks: da1 was
(presumably) reading at ~2MB/s, while da2 was (presumably) writing at
~6.5MB/s. Shouldn't the traffic be the same for both? Reading and
writing the volume when it's up (i.e., not reviving) yields symmetrical
traffic patterns.
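The naive model of a revive is a straight copy loop from the up plex to
the faulty one, and in that model read and write rates are equal by
construction, which is why the asymmetry is surprising. A minimal sketch
of that model (hypothetical, not vinum's actual code; the 64 KB chunk
size is taken from the systat output below):

```python
# Hypothetical sketch of a mirror revive as a plain copy loop: read a
# chunk from the source (up) plex, write it to the target (reviving)
# plex. In this model, bytes read always equal bytes written.

CHUNK = 64 * 1024  # systat shows 64 KB per transfer on both disks

def revive(source, target, size):
    """Copy `size` bytes from source to target in CHUNK-sized pieces.

    Returns (bytes_read, bytes_written) to make the symmetry explicit.
    """
    bytes_read = bytes_written = 0
    offset = 0
    while offset < size:
        n = min(CHUNK, size - offset)
        data = source[offset:offset + n]
        bytes_read += len(data)
        target[offset:offset + n] = data
        bytes_written += len(data)
        offset += n
    return bytes_read, bytes_written

# Simulate two small "plexes" as in-memory byte buffers.
src = bytearray(b"x" * (256 * 1024))
dst = bytearray(256 * 1024)
r, w = revive(src, dst, len(src))
```

Under this model the source and target disks would show the same MB/s,
which is not what systat reports below.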
Here's systat -vmstat output:
    3 users    Load  0.01  0.00  0.00                  Wed Nov 10 18:38

 0.5%Sys  0.0%Intr  0.0%User  0.0%Nice 99.5%Idl
|    |    |    |    |    |    |    |    |    |

Discs    da0   da1   da2   fd0 pass0 pass1 pass2
KB/t    0.00 64.00 64.00  0.00  0.00  0.00  0.00
tps        0    35   105     0     0     0     0
MB/s    0.00  2.18  6.56  0.00  0.00  0.00  0.00
% busy     0     4    95     0     0     0     0
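As a sanity check on the figures above, MB/s should come out to KB per
transfer times transfers per second, divided by 1024:

```python
# Cross-check systat's per-disk throughput: MB/s = KB/t * tps / 1024.
def mb_per_sec(kb_per_transfer, tps):
    return kb_per_transfer * tps / 1024.0

da1 = mb_per_sec(64.0, 35)   # ~2.19 MB/s, close to the 2.18 shown
da2 = mb_per_sec(64.0, 105)  # ~6.56 MB/s, matching the 6.56 shown
```

So both disks move 64 KB per transfer, but da2 is doing three times as
many transfers per second as da1.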
----------------------------
Vinum is configured like this:

vinum -> l
Configuration summary
Drives:         2 (4 configured)
Volumes:        1 (4 configured)
Plexes:         2 (8 configured)
Subdisks:       2 (16 configured)

D vinumdrive0         State: up       Device /dev/da1s1e  Avail: 0/35000 MB (0%)
D vinumdrive1         State: up       Device /dev/da2e    Avail: 0/35000 MB (0%)

V fatmirror           State: up       Plexes:   2  Size: 34 GB

P fatmirror.p0      C State: up       Subdisks: 1  Size: 34 GB
P fatmirror.p1      C State: faulty   Subdisks: 1  Size: 34 GB

S fatmirror.p0.s0     State: up       PO: 0 B      Size: 34 GB
S fatmirror.p1.s0     State: reviving PO: 0 B      Size: 34 GB
After fatmirror.p1 has been revived, it looks like this:

vinum -> l
Configuration summary
Drives:         2 (4 configured)
Volumes:        1 (4 configured)
Plexes:         2 (8 configured)
Subdisks:       2 (16 configured)

D vinumdrive0         State: up       Device /dev/da1s1e  Avail: 0/35000 MB (0%)
D vinumdrive1         State: up       Device /dev/da2s1e  Avail: 35000/35000 MB (100%)

V fatmirror           State: up       Plexes:   2  Size: 34 GB

P fatmirror.p0      C State: up       Subdisks: 1  Size: 34 GB
P fatmirror.p1      C State: up       Subdisks: 1  Size: 34 GB

S fatmirror.p0.s0     State: up       PO: 0 B      Size: 34 GB
S fatmirror.p1.s0     State: up       PO: 0 B      Size: 34 GB
Is this unequal traffic across the disks a bug?
Matt
