Subject: Re: vdev state changed & zfs scrub
From: Andriy Gapon <avg@FreeBSD.org>
To: Johan Hendriks, Dan Langille
Cc: freebsd-fs@FreeBSD.org
Date: Thu, 20 Apr 2017 14:18:26 +0300

On 20/04/2017 12:39, Johan Hendriks wrote:
> On 19/04/2017 at 16:56, Dan Langille wrote:
>> I see this on more than one system:
>>
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
>>
>> zpool status output includes:
>>
>> $ zpool status
>>   pool: system
>>  state: ONLINE
>>   scan: scrub in progress since Wed Apr 19 03:12:22 2017
>>         2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
>>         0 repaired, 41.94% done
>>
>> The timing of the scrub is not coincidental.
>>
>> Why is the vdev state changing?
>>
>> Thank you.
>>
> I have the same "issue". I asked about it on the stable list but did not
> get any reaction:
> https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
>
> In my initial mail it was only one machine running 11.0; the rest were
> running 10.x. Now that I have upgraded other machines to 11.0, I see it
> there as well.

Previously, none of the ZFS events were logged at all; that is why you never saw them.
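The logging comes from devd(8): /etc/devd/zfs.conf turns a number of ZFS kernel events into syslog messages (if I remember correctly, that file is new in 11.0, which would explain why your 10.x machines stayed quiet). From memory, the rule that produces the lines you quoted looks roughly like this; check the file on your system for the exact text:

    # ZFS vdev state-change events are forwarded to syslog via logger(1)
    notify 10 {
            match "system"          "ZFS";
            match "type"            "resource.fs.zfs.statechange";
            action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
    };

So the events themselves are not new; only the logging of them is.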
As to those particular events, unfortunately the two GUIDs are all the information the event contains. So, to get the actual state you have to check it explicitly, for example with zpool status. It could be that the scrub is simply re-opening the devices, so the state "changes" from VDEV_STATE_HEALTHY to VDEV_STATE_CLOSED and back to VDEV_STATE_HEALTHY. You can simply ignore those reports if you don't see any trouble. Maybe lower the priority of those messages in /etc/devd/zfs.conf...

-- 
Andriy Gapon
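P.S. If you want to see which devices those GUIDs actually refer to, something along these lines should work; the pool name is taken from your status output and the device path is only an example, so adjust both:

    # the pool GUID is exposed as a pool property
    zpool get guid system
    # zdb can print the cached pool configuration, which includes a guid
    # (and, where applicable, a path) for every vdev in the tree
    zdb -C system | grep -E 'guid|path'
    # alternatively, read the label of a member device directly
    zdb -l /dev/da0p3 | grep -E 'guid|path'

And if you would rather quiet the messages than ignore them, adjusting the priority passed to logger(1) in the zfs.conf rule quoted earlier, together with the matching filtering in syslog.conf(5), is one way to do it.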