Subject: Re: Possible zpool online, resilvering issue
To: Ultima
Cc: freebsd-current@freebsd.org
From: Stefan Esser <se@freebsd.org>
Date: Wed, 10 Aug 2016 20:08:46 +0200

On 10.08.2016 at 18:53, Ultima wrote:
> Hello,
>
>> I didn't see any reply on the list, so I thought I might let you know
>
> Sorry, I never received this reply (till now) xD
>
>> what I assume is happening:
>>
>> ZFS never updates data in place, which affects inode updates, e.g. if
>> a file has been read and access times must be updated. (For that
>> reason, many ZFS file systems are configured to ignore access time
>> updates.)
>>
>> Even if there were only R/O accesses to files in the pool, there will
>> have been updates to the inodes, which were missed by the offlined
>> drives (unless you ignore atime updates).
>>
>> But even if there are no access time updates, ZFS might have written
>> new uberblocks and other meta-data. Check the pool history and see
>> if there were any TXGs created during the scrub.
>>
>> If you scrub the pool while it is off-line, it should stay stable
>> (but if any information about the scrub, the offlining of drives,
>> etc. is recorded in the pool's history log, differences are to be
>> expected).
>>
>> Just my $.02 ...
>>
>> Regards, STefan
>
> Thanks for the reply. I'm not completely sure what would be considered
> a TXG. I maintained normal operations during most of this noise, and
> this pool sees quite a bit of activity during normal operations. My
> zpool history looks like it goes on forever, and the last scrub shows
> it repaired 9.48G. Was that all from these access time updates? I
> guess that would be a little less than 2.5G per disk.
>
> The zpool history looks like it goes on forever (733373 lines). Much
> of this pool's activity comes from poudriere. All the entries I see
> are clone, destroy, rollback and snapshot operations. I can't really
> say how many, but there are at least 500 (probably many more) entries
> between the last two scrubs. Atime is off on all datasets.
>
> So to be clear, this is expected behavior with atime=off + TXGs
> during offline time? I had thought that the resilver after onlining
> the disk would bring that disk up-to-date with the pool. I guess my
> understanding was a bit off.

Sorry, you'll have to ask somebody more familiar with ZFS internals
than me. I just wanted to point out that a scrub might change the state
of the drives, even though no file data is modified. Some 10 GB
"repaired" on a 35000 GB pool is not much; it is about what I'd expect
to be required for meta-data.

BTW: The pool history is chronologically sorted, so you only need to
check the last few lines (those written after the start of the scrub,
or rather after offlining some of the disk drives).
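
For example (untested, and with "tank" standing in for your pool's
name), something like the following should show the most recent
records; the -i option also includes internally logged events, which
carry the TXG numbers, and -l prints the long record format:

  # show the tail of the pool history, including internal events
  zpool history -il tank | tail -n 50

Any records time-stamped between the offlining of the drives and the
end of the scrub would account for the differences you saw.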

Regards, STefan
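
PS: An untested sketch, again with "tank" standing in for your pool's
name: to double-check that atime updates are really disabled
everywhere, the recursive form of "zfs get" lists the effective
setting for every dataset in the pool:

  # list the atime property for the pool and all descendant datasets
  zfs get -r atime tank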