Date: Fri, 15 Jun 2007 10:45:06 +0200
From: Lapo Luchini <lapo@lapo.it>
To: freebsd-fs@freebsd.org
Subject: ZFS panic on mdX-based raidz
Message-ID: <f4tjii$bus$1@sea.gmane.org>
I have a vmcore (a couple of them, in fact), but they don't seem to have a
valid backtrace in them. I don't know why; I'm not really into kgdb yet 0=)
(the only frame visible is kern_shutdown, then only ??).
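For reference, this is a minimal sketch of how one might try to pull a backtrace out of such a dump (assuming the dump was saved under /var/crash and the kernel was built with debug symbols; exact paths may differ on your system):

```shell
# Load the running kernel's symbols together with the saved core.
# Use kernel.debug instead of /boot/kernel/kernel if that is where
# the debug symbols live on your build.
kgdb /boot/kernel/kernel /var/crash/vmcore.0

# Then, at the (kgdb) prompt, ask for the backtrace:
#   (kgdb) bt
```

If the frames still come out as ??, the kernel binary and the vmcore probably don't match, or the kernel lacks debug symbols.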
This is the list of operations with which I can reproduce the panic at
will (or, at the very least, with which I have reproduced it three times):
rm data0 data1 data2
truncate -s 64M data0
truncate -s 128M data1
truncate -s 256M data2
mdconfig -f data0 -u 0
mdconfig -f data1 -u 1
mdconfig -f data2 -u 2
zpool create prova raidz md0 md1 md2
zfs create -o mountpoint=/usr/tmp/p prova/p
dd if=/dev/zero of=/usr/tmp/p/file bs=1M
zpool status
sysctl kern.geom.debugflags=16
dd if=/dev/zero of=/dev/md0 bs=1M
dd if=/dev/zero of=/dev/md1 bs=1M
zpool scrub prova
zpool status
Below follows the status, showing two invalid disks (I wonder why :P) and a
scrub in progress; but the host panics before the scrub completes.
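To reset the test bed between runs, here is a sketch of the teardown I'd use (assuming nothing else is using the pool or the md units):

```shell
# Destroy the test pool, detach the three memory disks,
# and remove their backing files so the sequence above
# can be repeated from scratch.
zpool destroy prova
mdconfig -d -u 0
mdconfig -d -u 1
mdconfig -d -u 2
rm data0 data1 data2
```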
This is 7.0-CURRENT of June 11, 2007 (with destroy_dev_sched.6.patch and
destroy_dev_sched_addon.2.patch applied, from the "smb wedges" thread).
Lapo
