From: Dan Langille <dan@langille.org>
To: Andriy Gapon
Cc: Johan Hendriks, freebsd-fs@FreeBSD.org
Subject: Re: vdev state changed & zfs scrub
Date: Thu, 20 Apr 2017 07:42:47 -0400

> On Apr 20, 2017, at 7:18 AM, Andriy Gapon wrote:
> 
> On 20/04/2017 12:39, Johan Hendriks wrote:
>> Op 19/04/2017 om 16:56 schreef Dan Langille:
>>> I see this on more than one system:
>>> 
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
>>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
>>> 
>>> zpool status output includes:
>>> 
>>> $ zpool status
>>>   pool: system
>>>  state: ONLINE
>>>   scan: scrub in progress since Wed Apr 19 03:12:22 2017
>>>         2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
>>>         0 repaired, 41.94% done
>>> 
>>> The timing of the scrub is not coincidental.
>>> 
>>> Why is the vdev state changing?
>>> 
>>> Thank you.
>>> 
>> I have the same "issue". I asked this on the stable list but did not get
>> any reaction.
>> https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
>> 
>> In my initial mail it was only one machine running 11.0; the rest were
>> running 10.x.
>> Now I have upgraded other machines to 11.0 and I see it there also.
> 
> Previously, none of the ZFS events were logged at all; that's why you never saw them.
> As to those particular events, unfortunately the two GUIDs are all that the event
> contains.  So, to get the state you have to check it explicitly, for example
> with zpool status.  It could be that the scrub is simply re-opening the devices,
> so the state "changes" from VDEV_STATE_HEALTHY to VDEV_STATE_CLOSED to
> VDEV_STATE_HEALTHY.  You can simply ignore those reports if you don't see any
> trouble.
> Maybe lower the priority of those messages in /etc/devd/zfs.conf...

I found the relevant entry in said file:

notify 10 {
        match "system"          "ZFS";
        match "type"            "resource.fs.zfs.statechange";
        action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};

Is that 10 the priority in question?  At first I thought it might be kern.notice,
but reading man syslog.conf, notice is a level, not a priority.

I've changed the 10 to a 1 and we shall see.

Thank you.

-- 
Dan Langille - BSDCan / PGCon
dan@langille.org
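
P.S. For the archives: with the change described above, the entry in
/etc/devd/zfs.conf should look roughly like the sketch below. The only
difference from the stock entry quoted earlier is the notify priority;
devd has to re-read its configuration (e.g. via 'service devd restart')
before the change takes effect.

notify 1 {
        # same match rules as the stock entry; only the priority differs
        match "system"          "ZFS";
        match "type"            "resource.fs.zfs.statechange";
        action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};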