Date: Thu, 20 Apr 2017 16:14:04 +0100
From: Martin Simmons <martin@lispworks.com>
To: Dan Langille
CC: avg@FreeBSD.org, freebsd-fs@FreeBSD.org
Subject: Re: vdev state changed & zfs scrub
Message-Id: <201704201514.v3KFE4P8001833@higson.cam.lispworks.com>
In-reply-to: (message from Dan Langille on Thu, 20 Apr 2017 07:42:47 -0400)
References: <0030E8CC-66B2-4EBF-A63B-91CF8370D526@langille.org> <597c74ea-c414-cf2f-d98c-24bb231009ea@gmail.com> <106e81a2-4631-642d-6567-319d20d943d2@FreeBSD.org>

>>>>> On Thu, 20 Apr 2017 07:42:47 -0400, Dan Langille said:
> 
> > On Apr 20, 2017, at 7:18 AM, Andriy Gapon wrote:
> > 
> > On 20/04/2017 12:39, Johan Hendriks wrote:
> >> On 19/04/2017 at 16:56, Dan Langille wrote:
> >>> I see this on more than one system:
> >>> 
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
> >>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
> >>> 
> >>> zpool status output includes:
> >>> 
> >>> $ zpool status
> >>>   pool: system
> >>>  state: ONLINE
> >>>   scan: scrub in progress since Wed Apr 19 03:12:22 2017
> >>>         2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
> >>>         0 repaired, 41.94% done
> >>> 
> >>> The timing of the scrub is not coincidental.
> >>> 
> >>> Why is vdev status changing?
> >>> 
> >>> Thank you.
> >>> 
> >> I have the same "issue"; I asked this on the stable list but did not get
> >> any reaction.
> >> https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
> >> 
> >> In my initial mail it was only one machine running 11.0; the rest were
> >> running 10.x.
> >> Now I have upgraded other machines to 11.0 and I see it there also.
> > 
> > Previously none of the ZFS events were logged at all; that's why you never saw them.
> > As to those particular events, unfortunately two GUIDs is all that the event
> > contains.  So, to get the state you have to explicitly check it, for example,
> > with zpool status.  It could be that the scrub is simply re-opening the devices,
> > so the state "changes" from VDEV_STATE_HEALTHY to VDEV_STATE_CLOSED to
> > VDEV_STATE_HEALTHY.  You can simply ignore those reports if you don't see any
> > trouble.
> > Maybe lower the priority of those messages in /etc/devd/zfs.conf...
> 
> I found the relevant entry in said file:
> 
> notify 10 {
>         match "system"          "ZFS";
>         match "type"            "resource.fs.zfs.statechange";
>         action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
> };
> 
> Is 10 the priority you mean?
> 
> At first, I thought it might be kern.notice, but reading man syslog.conf,
> notice is a level, not a priority.

No, I think he meant change kern.notice to something else such as kern.info
so you don't see them in /var/log/messages (as controlled by
/etc/syslog.conf).

__Martin
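
For anyone wanting to act on that, here is a minimal sketch of such an
override (untested; it assumes the usual devd(8) behaviour that when several
statements match an event only the one with the highest priority runs, and
that a local statement can go in /etc/devd.conf or a new file under
/etc/devd/ instead of editing the stock zfs.conf):

notify 20 {
        match "system"          "ZFS";
        match "type"            "resource.fs.zfs.statechange";
        action "logger -p kern.info -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};

The 20 is an arbitrary value chosen to outrank the stock priority-10
statement, so the same message is logged at kern.info instead of
kern.notice; devd needs a restart (service devd restart) to pick up the
change.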
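
On the syslog.conf(5) side, a selector is facility.level and matches
messages at that level or above, so whether kern.info messages from the
override above still reach /var/log/messages depends on the kern selector
there.  A fragment like the following (illustrative only, not necessarily
the stock configuration) would drop them while keeping notice-and-above
kernel messages:

kern.notice                                     /var/log/messages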