From owner-freebsd-fs@FreeBSD.ORG Wed Jun 9 15:23:15 2010
Sender: "J. Hellenthal" <jhellenthal@gmail.com>
Message-ID: <4C0FB1DE.9080508@dataix.net>
Date: Wed, 09 Jun 2010 11:23:10 -0400
From: jhell
To: Alexander Leidinger
Cc: fs@freebsd.org
In-Reply-To: <4C0FAE2A.7050103@dataix.net>
Subject: Re: Do we want a periodic script for a zfs scrub?

On 06/09/2010 11:07, jhell wrote:
> On 06/09/2010 10:26, Alexander Leidinger wrote:
>> Hi,
>>
>> I noticed that we do not have an automatism to scrub a ZFS pool
>> periodically. Is there interest in something like this, or shall I keep
>> it local?
>>
>> Here's the main part of the monthly periodic script I quickly created:
>> ---snip---
>> case "$monthly_scrub_zfs_enable" in
>> [Yy][Ee][Ss])
>>     echo
>>     echo 'Scrubbing of zfs pools:'
>>
>>     if [ -z "${monthly_scrub_zfs_pools}" ]; then
>>         monthly_scrub_zfs_pools="$(zpool list -H -o name)"
>>     fi
>>
>>     for pool in ${monthly_scrub_zfs_pools}; do
>>         # successful only if there is at least one pool to scrub
>>         rc=0
>>
>>         echo "  starting scrubbing of pool '${pool}'"
>>         zpool scrub ${pool}
>>         echo "  consult 'zpool status ${pool}' for the result"
>>         echo "  or wait for the daily_status_zfs mail, if enabled"
>>     done
>>     ;;
>> ---snip---
>>
>> Bye,
>> Alexander.
>>
>
> Please add a check to see if any resilvering is being done on the pool
> that the scrub is being executed on. (Just in case); I would hope that
> the scrub would fail silently in this case.
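The resilver check asked for above could be sketched roughly like this (untested sketch; it assumes `zpool status` reports an in-progress resilver with the phrase "resilver in progress", wording which may differ between ZFS versions):

```shell
#!/bin/sh
# Skip any pool that is currently resilvering before issuing a scrub.
# Assumption: "zpool status <pool>" contains the literal text
# "resilver in progress" while a resilver is running.
for pool in ${monthly_scrub_zfs_pools}; do
    if zpool status "${pool}" | grep -q "resilver in progress"; then
        echo "  skipping pool '${pool}': resilver in progress"
        continue
    fi
    echo "  starting scrubbing of pool '${pool}'"
    zpool scrub "${pool}"
done
```

The `grep -q` match is cheap and avoids parsing the full status output; a stricter check could match on the `scan:` line only.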
>
> Please also check whether a scrub is already running on one of the pools;
> if so, and another pool exists, start a background loop to wait for the
> first scrub to finish, or die silently.
>
> I had a scrub fully restart from calling scrub a second time after it was
> more than 50% complete; it's frustrating.
>
> Thanks!

I should probably suggest one check that comes to mind:

    zpool history ${pool} | grep scrub | tail -1 | cut -f1 -d.

Then compare the output with today's date to make sure today is >= 30 days
from the date of the last scrub.

With the above this could be turned into a daily_zfs_scrub_enable with a
default daily_zfs_scrub_threshold="30", ensuring that if one check is
missed it will not take another 30 days to run the check again.

Food for thought.

Thanks!

-- 
jhell
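Putting the pieces together, the daily check could look something like this (a sketch only; it assumes FreeBSD's date(1) with its -j/-f flags, and that `zpool history` timestamps are of the form `2010-05-09.12:00:00`, so `cut -f1 -d.` yields `2010-05-09`):

```shell
#!/bin/sh
# Daily check: scrub each pool only if the last recorded scrub is at
# least ${daily_zfs_scrub_threshold} days old (or no scrub is recorded).
daily_zfs_scrub_threshold="30"

# days_since YYYY-MM-DD: print whole days elapsed since that date.
# Assumption: FreeBSD date(1), where "-j -f fmt str +%s" parses a date.
days_since() {
    last=$(date -j -f "%Y-%m-%d" "$1" "+%s")
    now=$(date "+%s")
    echo $(( (now - last) / 86400 ))
}

for pool in $(zpool list -H -o name); do
    last_scrub=$(zpool history "${pool}" | grep scrub | tail -1 | cut -f1 -d.)
    if [ -z "${last_scrub}" ] || \
       [ "$(days_since "${last_scrub}")" -ge "${daily_zfs_scrub_threshold}" ]; then
        echo "  starting scrubbing of pool '${pool}'"
        zpool scrub "${pool}"
    fi
done
```

Running daily with a 30-day threshold means a missed run only delays the scrub by a day, not another full month, which is the point made above.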