From owner-freebsd-stable@freebsd.org Mon Sep 19 18:42:55 2016
From: Marc "UBM" Bocklet <ubm.freebsd@googlemail.com>
Date: Mon, 19 Sep 2016 20:42:51 +0200
To: freebsd-stable <freebsd-stable@freebsd.org>
Subject: Re: zfs resilver keeps restarting
Message-Id: <20160919204251.47930526f3059bc62430b83a@gmail.com>
References: <20160918150917.09f9448464d84d4e50808707@gmail.com>
	<20160918184636.5861562661d4376e845ac75d@gmail.com>

On Sun, 18 Sep 2016 11:41:54 -0600
Alan Somers wrote:
> On Sun, Sep 18, 2016 at 10:46 AM, Marc UBM Bocklet via freebsd-stable
> wrote:
> > On Sun, 18 Sep 2016 10:05:52 -0600
> > Alan Somers wrote:
> >
> >> On Sun, Sep 18, 2016 at 7:09 AM, Marc UBM Bocklet via freebsd-stable
> >> wrote:
> >> >
> >> > Hi all,
> >> >
> >> > due to two bad cables, I had two drives drop from my striped raidz2
> >> > pool (built on top of geli encrypted drives). I replaced one of the
> >> > drives before I realized that the cabling was at fault - that's the
> >> > drive which is being replaced in the output of zpool status below.
> >> >
> >> > I have just installed the new cables and all sata errors are gone.
> >> > However, the resilver of the pool keeps restarting.
> >> >
> >> > I see no errors in /var/log/messages, but zpool history -i says:
> >> >
> >> > 2016-09-18.14:56:21 [txg:1219501] scan setup func=2 mintxg=3 maxtxg=1219391
> >> > 2016-09-18.14:56:51 [txg:1219505] scan done complete=0
> >> > 2016-09-18.14:56:51 [txg:1219505] scan setup func=2 mintxg=3 maxtxg=1219391
> >> > 2016-09-18.14:57:20 [txg:1219509] scan done complete=0
> >> > 2016-09-18.14:57:20 [txg:1219509] scan setup func=2 mintxg=3 maxtxg=1219391
> >> > 2016-09-18.14:57:49 [txg:1219513] scan done complete=0
> >> > 2016-09-18.14:57:49 [txg:1219513] scan setup func=2 mintxg=3 maxtxg=1219391
> >> > 2016-09-18.14:58:19 [txg:1219517] scan done complete=0
> >> > 2016-09-18.14:58:19 [txg:1219517] scan setup func=2 mintxg=3 maxtxg=1219391
> >> > 2016-09-18.14:58:45 [txg:1219521] scan done complete=0
> >> > 2016-09-18.14:58:45 [txg:1219521] scan setup func=2 mintxg=3 maxtxg=1219391
> >> >
> >> > I assume that "scan done complete=0" means that the resilver didn't
> >> > finish?
> >> >
> >> > pool layout is the following:
> >> >
> >> >   pool: pool
> >> >  state: DEGRADED
> >> > status: One or more devices is currently being resilvered.  The pool
> >> >         will continue to function, possibly in a degraded state.
> >> > action: Wait for the resilver to complete.
> >> >   scan: resilver in progress since Sun Sep 18 14:51:39 2016
> >> >         235G scanned out of 9.81T at 830M/s, 3h21m to go
> >> >         13.2M resilvered, 2.34% done
> >> > config:
> >> >
> >> >         NAME                        STATE     READ WRITE CKSUM
> >> >         pool                        DEGRADED     0     0     0
> >> >           raidz2-0                  ONLINE       0     0     0
> >> >             da6.eli                 ONLINE       0     0     0
> >> >             da7.eli                 ONLINE       0     0     0
> >> >             ada1.eli                ONLINE       0     0     0
> >> >             ada2.eli                ONLINE       0     0     0
> >> >             da10.eli                ONLINE       0     0     2
> >> >             da11.eli                ONLINE       0     0     0
> >> >             da12.eli                ONLINE       0     0     0
> >> >             da13.eli                ONLINE       0     0     0
> >> >           raidz2-1                  DEGRADED     0     0     0
> >> >             da0.eli                 ONLINE       0     0     0
> >> >             da1.eli                 ONLINE       0     0     0
> >> >             da2.eli                 ONLINE       0     0     1  (resilvering)
> >> >             replacing-3             DEGRADED     0     0     1
> >> >               10699825708166646100  UNAVAIL      0     0     0  was /dev/da3.eli
> >> >               da4.eli               ONLINE       0     0     0  (resilvering)
> >> >             da3.eli                 ONLINE       0     0     0
> >> >             da5.eli                 ONLINE       0     0     0
> >> >             da8.eli                 ONLINE       0     0     0
> >> >             da9.eli                 ONLINE       0     0     0
> >> >
> >> > errors: No known data errors
> >> >
> >> > system is
> >> > FreeBSD xxx 10.1-BETA1 FreeBSD 10.1-BETA1 #27 r271633:
> >> > Mon Sep 15 22:34:05 CEST 2014 root@xxx:/usr/obj/usr/src/sys/xxx amd64
> >> >
> >> > controller is
> >> > SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor]
> >> >
> >> > Drives are connected via four four-port sata cables.
> >> >
> >> > Should I upgrade to 10.3-release or did I make some sort of
> >> > configuration error / overlook something?
> >> >
> >> > Thanks in advance!
> >> >
> >> > Cheers,
> >> > Marc
> >>
> >> Resilver will start over anytime there's new damage.  In your case,
> >> with two failed drives, resilver should've begun after you replaced
> >> the first drive, and restarted after you replaced the second.  Have
> >> you seen it restart more than that?  If so, keep an eye on the error
> >> counters in "zpool status"; they might give you a clue.  You could
> >> also raise the loglevel of devd to "info" in /etc/syslog.conf and see
> >> what gets logged to /var/log/devd.log.  That will tell you if drives
> >> are dropping out and automatically rejoining the pool, for example.
> >
> > Thanks a lot for your fast reply. Unfortunately (or not), devd is silent
> > and the error count for the pool remains at zero. The resilver, however,
> > just keeps restarting. The furthest it got was about 68% resilvered.
> > Usually, it gets to 2 - 3%, then restarts.
> >
> > I plan on offlining the pool, upgrading to 10.3, and then reimporting
> > the pool next. Does that make sense?
> >
> > Cheers,
> > Marc
>
> I suspect an upgrade won't make a difference, but it certainly won't
> hurt.  Did you remember to change devd's loglevel to "info" and
> restart syslogd?

I just upgraded to

  FreeBSD xxx 10.3-STABLE FreeBSD 10.3-STABLE #29 r305937: Sun Sep 18 19:48:32 CEST 2016 xxx amd64

and the resilver is apparently finishing on the first try (currently at
97% and continuing).
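For reference, the change Alan suggested boils down to something like this
in the devd section of /etc/syslog.conf (sketched here; the exact stock
entry may differ on other setups), followed by a syslogd restart:

  # /etc/syslog.conf -- devd program section, level raised to "info"
  # (sketch only; verify against the file as shipped)
  !devd
  *.>=info                                        /var/log/devd.log

  # afterwards, restart syslogd so the new selector takes effect
  service syslogd restart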
I changed devd's loglevel and restarted syslog, but caught no messages in
the log except these vdev state change notices:

Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=15989198335786279524''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=15989198335786279524
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=7556163349212343309''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=7556163349212343309
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=11941432590348604434''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=11941432590348604434
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=417984871670910036''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=417984871670910036
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=9683634954111285635''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=9683634954111285635
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=13914729548038345942''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=13914729548038345942
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=11390813927692356269''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=11390813927692356269
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=17789791879708299257''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=17789791879708299257
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=3906761392810563354''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=3906761392810563354
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=14587480937528490734''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=14587480937528490734
Sep 19 19:55:54 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=10215581546520896687''
Sep 19 19:55:54 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=10215581546520896687
Sep 19 19:55:56 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=15040171490277026943''
Sep 19 19:55:56 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=15040171490277026943
Sep 19 19:55:56 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=1334619275191205373''
Sep 19 19:55:56 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=1334619275191205373
Sep 19 19:55:56 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=10400807283721245288''
Sep 19 19:55:56 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=10400807283721245288
Sep 19 19:55:56 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=12117703925014396267''
Sep 19 19:55:56 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=12117703925014396267
Sep 19 19:55:56 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=1855871756760387608''
Sep 19 19:55:56 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=1855871756760387608
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=15989198335786279524''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=15989198335786279524
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=7556163349212343309''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=7556163349212343309
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=11941432590348604434''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=11941432590348604434
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=417984871670910036''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=417984871670910036
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=9683634954111285635''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=9683634954111285635
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=13914729548038345942''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=13914729548038345942
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=11390813927692356269''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=11390813927692356269
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=17789791879708299257''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=17789791879708299257
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=3906761392810563354''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=3906761392810563354
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=14587480937528490734''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=14587480937528490734
Sep 19 19:55:57 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=10215581546520896687''
Sep 19 19:55:57 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=10215581546520896687
Sep 19 19:55:58 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=15040171490277026943''
Sep 19 19:55:58 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=15040171490277026943
Sep 19 19:55:58 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=1334619275191205373''
Sep 19 19:55:58 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=1334619275191205373
Sep 19 19:55:58 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=10400807283721245288''
Sep 19 19:55:58 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=10400807283721245288
Sep 19 19:55:58 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=12117703925014396267''
Sep 19 19:55:58 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=12117703925014396267
Sep 19 19:55:58 hamstor devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=17098460560433284558 vdev_guid=1855871756760387608''
Sep 19 19:55:58 hamstor ZFS: vdev state changed, pool_guid=17098460560433284558 vdev_guid=1855871756760387608

These were logged while already on 10.3-STABLE.

Thanks a lot for taking the time to help me!

Cheers,
Marc
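P.S. In case it is useful for anyone reading this later: each vdev_guid in
the log lines above can be matched to a device by dumping that device's ZFS
label with zdb; roughly like this (the device name is only an example):

  # print the vdev label of one leaf provider; the "guid:" field there is
  # the vdev_guid that appears in the devd/ZFS notices above
  zdb -l /dev/da0.eli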