Date:      Thu, 20 Apr 2017 14:18:26 +0300
From:      Andriy Gapon <avg@FreeBSD.org>
To:        Johan Hendriks <joh.hendriks@gmail.com>, Dan Langille <dan@langille.org>
Cc:        freebsd-fs@FreeBSD.org
Subject:   Re: vdev state changed & zfs scrub
Message-ID:  <106e81a2-4631-642d-6567-319d20d943d2@FreeBSD.org>
In-Reply-To: <597c74ea-c414-cf2f-d98c-24bb231009ea@gmail.com>
References:  <0030E8CC-66B2-4EBF-A63B-91CF8370D526@langille.org> <597c74ea-c414-cf2f-d98c-24bb231009ea@gmail.com>

On 20/04/2017 12:39, Johan Hendriks wrote:
> Op 19/04/2017 om 16:56 schreef Dan Langille:
>> I see this on more than one system:
>>
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
>>
>> zpool status output includes:
>>
>> $ zpool status
>>   pool: system
>>  state: ONLINE
>>   scan: scrub in progress since Wed Apr 19 03:12:22 2017
>>         2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
>>         0 repaired, 41.94% done
>>
>> The timing of the scrub is not coincidental.
>>
>> Why is vdev status changing?
>>
>> Thank you.
>>
> I have the same "issue"; I asked about this on the stable list but did not get
> any reaction.
> https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
> 
> In my initial mail it was only one machine running 11.0; the rest were
> running 10.x.
> Now I have upgraded other machines to 11.0 and I see it there as well.

Previously, no ZFS events were logged at all; that's why you never saw them.
As for these particular events, unfortunately the two GUIDs are all that the
event contains, so to get the actual state you have to check it explicitly,
for example with zpool status.  It could be that the scrub is simply
re-opening the devices, so the state "changes" from VDEV_STATE_HEALTHY to
VDEV_STATE_CLOSED and back to VDEV_STATE_HEALTHY.  If you don't see any other
trouble, you can simply ignore those reports.
Maybe lower the priority of those messages in /etc/devd/zfs.conf; a rough
sketch of what that could look like is below.
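For illustration only (the exact contents of /etc/devd/zfs.conf differ between
FreeBSD versions), the statechange notification block could be pointed at a
debug priority so syslog can filter it out:

  notify 10 {
          match "system"          "ZFS";
          match "type"            "resource.fs.zfs.statechange";
          # log at kern.debug instead of a higher priority so the messages
          # can be dropped or diverted via syslog.conf
          action "logger -p kern.debug -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
  };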

-- 
Andriy Gapon


