Date:        Thu, 20 Apr 2017 16:14:04 +0100
From:        Martin Simmons <martin@lispworks.com>
To:          Dan Langille <dan@langille.org>
Cc:          avg@FreeBSD.org, freebsd-fs@FreeBSD.org
Subject:     Re: vdev state changed & zfs scrub
Message-ID:  <201704201514.v3KFE4P8001833@higson.cam.lispworks.com>
In-Reply-To: <AE63F640-D325-48C2-A4F5-7771E4A07144@langille.org> (message from Dan Langille on Thu, 20 Apr 2017 07:42:47 -0400)
References:  <0030E8CC-66B2-4EBF-A63B-91CF8370D526@langille.org> <597c74ea-c414-cf2f-d98c-24bb231009ea@gmail.com> <106e81a2-4631-642d-6567-319d20d943d2@FreeBSD.org> <AE63F640-D325-48C2-A4F5-7771E4A07144@langille.org>
>>>>> On Thu, 20 Apr 2017 07:42:47 -0400, Dan Langille said:
> 
> > On Apr 20, 2017, at 7:18 AM, Andriy Gapon <avg@FreeBSD.org> wrote:
> > 
> > On 20/04/2017 12:39, Johan Hendriks wrote:
> >> On 19/04/2017 at 16:56, Dan Langille wrote:
> >>> I see this on more than one system:
> >>> 
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
> >>> 
> >>> zpool status output includes:
> >>> 
> >>> $ zpool status
> >>>   pool: system
> >>>  state: ONLINE
> >>>   scan: scrub in progress since Wed Apr 19 03:12:22 2017
> >>>         2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
> >>>         0 repaired, 41.94% done
> >>> 
> >>> The timing of the scrub is not coincidental.
> >>> 
> >>> Why is vdev status changing?
> >>> 
> >>> Thank you.
> >>> 
> >> I have the same "issue". I asked this on the stable list but did not get
> >> any reaction.
> >> https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
> >> 
> >> In my initial mail it was only one machine running 11.0; the rest were
> >> running 10.x.
> >> Now I have upgraded other machines to 11.0 and I see it there also.
> > 
> > Previously none of the ZFS events were logged at all; that's why you never
> > saw them.
> > As to those particular events, unfortunately two GUIDs is all that the event
> > contains.  So, to get the state you have to explicitly check it, for example,
> > with zpool status.  It could be that the scrub is simply re-opening the devices,
> > so the state "changes" from VDEV_STATE_HEALTHY to VDEV_STATE_CLOSED to
> > VDEV_STATE_HEALTHY.  You can simply ignore those reports if you don't see any
> > trouble.
> > Maybe lower the priority of those messages in /etc/devd/zfs.conf...
> 
> I found the relevant entry in said file:
> 
> notify 10 {
>         match "system"          "ZFS";
>         match "type"            "resource.fs.zfs.statechange";
>         action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
> };
> 
> Is 10 the current priority?
> 
> At first I thought it might be kern.notice, but reading man syslog.conf,
> notice is a level, not a priority.

No, I think he meant change kern.notice to something else such as kern.info
so you don't see them in /var/log/messages (as controlled by
/etc/syslog.conf).

__Martin
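
A minimal sketch of the change Martin suggests (untested here; it simply edits
the statechange block quoted above in /etc/devd/zfs.conf, lowering the logger
facility.level from kern.notice to kern.info):

    notify 10 {
            match "system"          "ZFS";
            match "type"            "resource.fs.zfs.statechange";
            action "logger -p kern.info -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
    };

Restart devd afterwards so the edited action takes effect:

    # service devd restart

Whether kern.info messages still land in /var/log/messages depends on the
kern.* selector in /etc/syslog.conf, as Martin notes; if that selector is
broader than kern.notice (stock configurations have shipped with kern.debug
there), it would need tightening as well.  Also note that /etc/devd/zfs.conf is
part of the base system, so a local edit may be overwritten by future upgrades.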