Date:      Thu, 20 Apr 2017 07:42:47 -0400
From:      Dan Langille <dan@langille.org>
To:        Andriy Gapon <avg@FreeBSD.org>
Cc:        Johan Hendriks <joh.hendriks@gmail.com>, freebsd-fs@FreeBSD.org
Subject:   Re: vdev state changed & zfs scrub
Message-ID:  <AE63F640-D325-48C2-A4F5-7771E4A07144@langille.org>
In-Reply-To: <106e81a2-4631-642d-6567-319d20d943d2@FreeBSD.org>
References:  <0030E8CC-66B2-4EBF-A63B-91CF8370D526@langille.org> <597c74ea-c414-cf2f-d98c-24bb231009ea@gmail.com> <106e81a2-4631-642d-6567-319d20d943d2@FreeBSD.org>

> On Apr 20, 2017, at 7:18 AM, Andriy Gapon <avg@FreeBSD.org> wrote:
>
> On 20/04/2017 12:39, Johan Hendriks wrote:
>> Op 19/04/2017 om 16:56 schreef Dan Langille:
>>> I see this on more than one system:
>>>
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
>>>
>>> zpool status output includes:
>>>=20
>>> $ zpool status
>>>  pool: system
>>> state: ONLINE
>>>  scan: scrub in progress since Wed Apr 19 03:12:22 2017
>>>        2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
>>>        0 repaired, 41.94% done
>>>
>>> The timing of the scrub is not coincidental.
>>>
>>> Why is vdev status changing?
>>>
>>> Thank you.
>>>
>> I have the same "issue"; I asked about this on the stable list but did not get
>> any reaction:
>> https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
>>
>> In my initial mail it was only one machine running 11.0; the rest were
>> running 10.x.
>> Now I have upgraded other machines to 11.0 and I see it there as well.
>
> Previously none of the ZFS events were logged at all; that's why you never saw them.
> As to those particular events, unfortunately two GUIDs are all that the event
> contains.  So, to get the state you have to check it explicitly, for example
> with zpool status.  It could be that the scrub is simply re-opening the devices,
> so the state "changes" from VDEV_STATE_HEALTHY to VDEV_STATE_CLOSED to
> VDEV_STATE_HEALTHY.  You can simply ignore those reports if you don't see any
> trouble.
> Maybe lower the priority of those messages in /etc/devd/zfs.conf...
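
Before touching the config, I wanted to confirm that the GUIDs in those log lines really belong to this pool. Something along these lines should do it (I have not checked whether this zpool supports -g; zdb -C is the fallback):

  $ zpool get guid system          # compare against pool_guid from the log
  $ zpool status -g system         # list vdevs by guid instead of device name
  $ zdb -C system | grep guid      # fallback: guids from the cached pool config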

I found the relevant entry in said file:

notify 10 {
        match "system"          "ZFS";
        match "type"            "resource.fs.zfs.statechange";
        action "logger -p kern.notice -t ZFS 'vdev state changed, =
pool_guid=3D$pool_guid vdev_guid=3D$vdev_guid'";
};

Is that 10 the priority you are referring to?

At first, I thought it might be kern.notice, but after reading man syslog.conf, notice is a level, not a priority.
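
If I wanted to route these messages somewhere else instead of demoting them, the logger tag gives syslog something to match on. Untested, but in /etc/syslog.conf it would look roughly like this (the target file has to be created first and syslogd restarted):

!ZFS
kern.notice                                     /var/log/zfs-events.log
!*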

I've changed the 10 to a 1 and we shall see.
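
For reference, the block now reads like this, with only the priority number changed:

notify 1 {
        match "system"          "ZFS";
        match "type"            "resource.fs.zfs.statechange";
        action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};

I believe devd needs a restart (service devd restart) before it picks that up.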

Thank you.

--
Dan Langille - BSDCan / PGCon
dan@langille.org




