Date: Wed, 19 Apr 2017 10:56:05 -0400
From: Dan Langille <dan@langille.org>
To: freebsd-fs@freebsd.org
Subject: vdev state changed & zfs scrub
Message-ID: <0030E8CC-66B2-4EBF-A63B-91CF8370D526@langille.org>
I see this on more than one system:
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
The zpool status output includes:
$ zpool status
  pool: system
 state: ONLINE
  scan: scrub in progress since Wed Apr 19 03:12:22 2017
        2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
        0 repaired, 41.94% done
The timing is not coincidental: the vdev state changed messages were logged at 03:12:22, exactly when the scrub started. Why does the vdev state change when a scrub starts?
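
In case it helps to see which devices those GUIDs refer to, they can presumably be mapped back to the vdev labels with zdb. A rough sketch (the device path below is only an example; substitute the actual pool members):

# Dump the cached config for the pool; each child vdev should list its guid and path.
zdb -C system | grep -E 'guid|path'

# Or read the label of one member directly (example device path).
zdb -l /dev/da0p3 | grep -E 'pool_guid|guid'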
Thank you.
-- 
Dan Langille - BSDCan / PGCon
dan@langille.org
