From owner-freebsd-stable@FreeBSD.ORG Sat Aug 21 20:29:47 2010
Message-ID: <4C703737.3020007@DataIX.net>
Date: Sat, 21 Aug 2010 16:29:43 -0400
From: jhell
To: Alexander Leidinger
Cc: FreeBSD Stable
In-Reply-To: <20100821215052.000030f1@unknown>
References: <4C6F5344.6040808@DataIX.net> <20100821215052.000030f1@unknown>
Subject: Re: daily run output 800.scrub-zfs fixups

On 08/21/2010 15:50, Alexander Leidinger wrote:
> On Sat, 21 Aug 2010 00:17:08 -0400 jhell wrote:
>
>> Hi Alexander,
>>
>> Attached is a fix for one problem and one slight oversight in
>> 800.scrub-zfs.
>>
>> The first & second changes were probably just oversights, but
>> nonetheless they both give a false impression of the actions taken.
>>
>> Change1:
>>	${daily_scrub_zfs_default_threshold=30} is missing the ':',
>>	which would ultimately reset the user's supplied value in
>>	periodic.conf to 30.
>
> I will have a look at this.
>
>> Change2:
>>	${_scrub_diff} -le ${_pool_threshold} would cause the scrub to
>>	be run on the day after the threshold was met, so I changed
>>	'-le' -> '-lt', which causes it to be run on the 30th day
>>	instead of the 31st day.
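
To make that concrete, here is a rough sketch of the boundary case the
change is about (the variable names mirror the script; the loop and the
sample values are only for illustration):

    #!/bin/sh
    # With a threshold of 30 days, compare how many days have passed
    # since the last scrub against the threshold.
    _pool_threshold=30
    for _scrub_diff in 29 30 31; do
        if [ ${_scrub_diff} -lt ${_pool_threshold} ]; then
            echo "diff=${_scrub_diff} days: skip, scrubbed recently"
        else
            echo "diff=${_scrub_diff} days: start the scrub"
        fi
    done
    # '-lt' starts the scrub on day 30; the old '-le' would have
    # skipped day 30 as well and only started the scrub on day 31.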
> This depends on how you define the threshold... I had the number of
> days between scrubs in mind. Now it depends on what I wrote in the man
> page, whether it says what I had in mind (I don't remember, I have to
> look at it myself, but I'm not a native English speaker, so I may not
> have written it well enough).

I believe that people in this case would expect that if they set the
threshold to 12 days, the scrub is going to run on the day the threshold
expires and not on the 13th. Usually when a threshold is met, the
command fires at that same instant and not a day later.

> This is not set in stone; if the majority of people want something
> else, I'm surely not in the way.

Also, I just noticed another message that is confusing (at least for
me): "starting first scrubbing (after reboot) of pool 'exports'". I
read that as if the pool is going to be scrubbed after the next reboot,
and I actually had to open the script to double-check that this was not
the case. The attached patch changes that message to "starting scrub of
pool '${pool}'" so there is no confusion about when the scrub is
actually going to happen.

Regards,

-- 
 jhell,v


[Attachment: 800.scrub-zfs.diff]

Index: etc/periodic/daily/800.scrub-zfs
===================================================================
--- etc/periodic/daily/800.scrub-zfs	(revision 211527)
+++ etc/periodic/daily/800.scrub-zfs	(working copy)
@@ -11,7 +11,7 @@
     source_periodic_confs
 fi
 
-: ${daily_scrub_zfs_default_threshold=30}
+: ${daily_scrub_zfs_default_threshold:=30}
 
 case "$daily_scrub_zfs_enable" in
     [Yy][Ee][Ss])
@@ -53,7 +53,7 @@
 	    # Now minus last scrub (both in seconds) converted to days.
 	    _scrub_diff=$(expr -e \( $(date +%s) - \
 		$(date -j -f %F.%T ${_last_scrub} +%s) \) / 60 / 60 / 24)
-	    if [ ${_scrub_diff} -le ${_pool_threshold} ]; then
+	    if [ ${_scrub_diff} -lt ${_pool_threshold} ]; then
 		echo "   skipping scrubbing of pool '${pool}':"
 		echo "      last scrubbing is ${_scrub_diff} days ago, threshold is set to ${_pool_threshold} days"
 		continue
@@ -65,11 +65,11 @@
 		echo "   scrubbing of pool '${pool}' already in progress, skipping:"
 		;;
 	    *"none requested"*)
-		echo "   starting first scrubbing (after reboot) of pool '${pool}':"
+		echo "   starting scrub of pool '${pool}':"
 		zpool scrub ${pool}
 		;;
 	    *)
-		echo "   starting scrubbing of pool '${pool}':"
+		echo "   starting scrub of pool '${pool}':"
 		zpool scrub ${pool}
 		;;
 	esac
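
P.S. In case the Change1 part is easier to see with an example, this is
roughly how the two expansions behave in /bin/sh (the short throwaway
variable name is only for illustration):

    #!/bin/sh
    # ${var=word} assigns word only if var is unset;
    # ${var:=word} assigns word if var is unset *or* empty.
    unset thr; : ${thr=30};  echo "unset, '='  -> ${thr}"    # 30
    thr="";    : ${thr=30};  echo "empty, '='  -> '${thr}'"  # stays empty
    thr="";    : ${thr:=30}; echo "empty, ':=' -> ${thr}"    # 30
    thr=14;    : ${thr:=30}; echo "set,   ':=' -> ${thr}"    # 14 is kept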