From: Ultima <ultima1252@gmail.com>
Date: Wed, 10 Aug 2016 21:52:03 -0400
Subject: Re: Possible zpool online, resilvering issue
To: olli hauer
Cc: freebsd-current@freebsd.org

> A new transaction group (TXG) is created at LEAST every
> vfs.zfs.txg.timeout (defaults to 5) seconds.
> If you offline a drive for hours or more, it must have all blocks with a
> 'birth time' newer than the last transaction that was recorded on the
> offlined drive replayed to catch that drive up to the other drives in
> the pool.
> As long as you have enough redundancy, the checksum errors can be
> corrected without concern.
> In the end, the checksum errors can be written off as being caused by
> the bad hardware.
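(For the archives: since the commit interval is a plain sysctl, it is easy
to check what a given box is actually using. A minimal check, assuming the
sysctl name quoted above:

  # how often a new TXG is committed, in seconds (default 5)
  sysctl vfs.zfs.txg.timeout

Every TXG that closes while a drive is offlined is more data the resilver
has to replay onto it later, which is why a drive that sat out for hours
takes a while to catch up.)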
> After you finish the scrub and everything is OK, do:
> 'zpool clear poolname', and it will reset all of the error and checksum
> counts to 0, so you can track if any more ever show up.

Thanks Allan, I can always count on you for crystal clear answers =]. I'm
surprised, though, that this would be concluded to be bad hardware
(assuming you mean the HD?). It just seems like too much of a coincidence.
I always ran zpool clear each time after the resilver/scrub was completed.

> Perhaps one or more of the drives is running out of reallocated sectors?
> I once had a case where smartctl showed no issues but a zfs scrub showed
> a defect; some weeks later smartctl was showing some reallocated sectors,
> and one week later the HD was out of spare sectors.
> Have you already tested every single HD for SMART issues?

Smartd is set to run a short test on Tuesday, Thursday, and Saturday each
week, and an extended test on Tuesday an hour after the short test. This
occurs on all 24 drives. A scrub is performed once per month on Saturday,
an hour after the short test. (A sketch of the smartd.conf schedule is at
the end of this mail.)

  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0

This is the Reallocated_Sector_Ct value on all of the drives (I think this
is the normal value?). The drive below has the worst-looking SMART data of
the lot.

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 ( 592) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 491) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x50bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   072   063   044    Pre-fail  Always       -       20189561
  3 Spin_Up_Time            0x0003   091   091   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       188
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   092   085   030    Pre-fail  Always       -       1802626788
  9 Power_On_Hours          0x0032   081   081   000    Old_age   Always       -       17457
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       158
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   099   000    Old_age   Always       -       65537
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   055   045   045    Old_age   Always   In_the_past 45 (Min/Max 34/51)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       157
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       867
194 Temperature_Celsius     0x0022   045   055   000    Old_age   Always       -       45 (0 22 0 0 0)
195 Hardware_ECC_Recovered  0x001a   053   011   000    Old_age   Always       -       20189561
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%         17423         -
# 2  Short offline       Completed without error       00%         17412         -
# 3  Short offline       Completed without error       00%         17340         -
# 4  Short offline       Completed without error       00%         17293         -
# 5  Extended offline    Completed without error       00%         17261         -
# 6  Short offline       Completed without error       00%         17245         -
# 7  Short offline       Completed without error       00%         17173         -
# 8  Short offline       Completed without error       00%         17125         -
# 9  Extended offline    Completed without error       00%         17101         -
#10  Short offline       Completed without error       00%         17084         -
#11  Short offline       Completed without error       00%         17012         -
#12  Short offline       Completed without error       00%         16964         -
#13  Extended offline    Completed without error       00%         16927         -
#14  Short offline       Completed without error       00%         16916         -
#15  Short offline       Completed without error       00%         16916         -
#16  Short offline       Completed without error       00%         16844         -
#17  Short offline       Completed without error       00%         16805         -
#18  Extended offline    Completed without error       00%         16775         -
#19  Short offline       Completed without error       00%         16757         -
#20  Short offline       Completed without error       00%         16685         -
#21  Short offline       Completed without error       00%         16637         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

On Wed, Aug 10, 2016 at 2:56 PM, olli hauer wrote:

> On 2016-08-04 07:22, Ultima wrote:
> > Hello,
> >
> > I recently had some issues with a PSU and ran several scrubs on a pool
> > with around 35T. Random drives would drop and require a zpool online,
> > which found checksum errors
> > (as expected). However, after all the scrubs I ran, I think I may have
> > found a bug in the zpool online resilvering process.
> >
> > 24 disks total, 4 vdevs raidz2 (6 drives each).
> >
> > Before this next part... I had a backup PSU, but it was also going bad
> > and waiting for RMA. The current one seemed to be dying but ran fine
> > with fewer drives, so I decided I would run the server short 4 drives.
> >
> > I started by offlining 4 drives from different vdevs (or they were
> > already disconnected from the PSU), then ran a scrub to verify
> > everything. Many checksum errors were present on some of the drives,
> > but this was expected due to the faulty PSU. Then I offlined 4
> > different drives, onlined the other 4, and scrubbed once again. After
> > the resilver there were, again, many checksum errors on these drives,
> > as expected.
> >
> > After the scrub completed, I decided to offline 4 different drives,
> > then online the ones that had been out of the pool for a while. During
> > the resilver, checksum errors were once again found. I was surprised,
> > given the recent scrub, so I decided to run another scrub, and it found
> > even more checksum errors on these recently onlined drives. I didn't
> > think much about it, but after the replacement PSU arrived, I onlined
> > all the drives that were still out of the pool and, again, the resilver
> > had checksum errors, as did another scrub afterwards.
> >
> > Is this issue known? Is it common for a scrub to be required after
> > onlining a disk that was out of the pool for some time?
> >
> > The drives are ST4000NM0033, and until recently they had never had a
> > single checksum error in their lifetime (at least with zfs).
> > FreeBSD S1 12.0-CURRENT FreeBSD 12.0-CURRENT #19 r303224: Sat Jul 23
> > 10:41:12 EDT 2016
> > root@S1:/usr/src/head/obj/usr/src/head/src/sys/MYKERNEL-NODEBUG amd64
> >
> > Sorry for the wall of text, but I hope this helps in tracking down this
> > possible bug.
>
> Perhaps one or more of the drives is running out of reallocated sectors?
> I once had a case where smartctl showed no issues but a zfs scrub showed
> a defect; some weeks later smartctl was showing some reallocated sectors,
> and one week later the HD was out of spare sectors.
>
> Have you already tested every single HD for SMART issues?
>
> --
> olli
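P.S. For anyone curious how a test schedule like the one mentioned above
can be expressed, here is a sketch only -- the device name is a
placeholder, it is not copied from my actual config, and the -s regex is
my reading of smartd.conf(5), so check it against the man page before
reusing it:

  # smartd.conf: short self-test Tue/Thu/Sat at 01:00,
  # long (extended) self-test Tue at 02:00
  /dev/da0 -a -s (S/../../(2|4|6)/01|L/../../2/02)

The monthly scrub can be a simple cron job; something along these lines
(pool name "tank" is a placeholder) runs it at 02:00 on the first Saturday
of each month:

  0 2 * * 6 [ "$(date +\%d)" -le 7 ] && /sbin/zpool scrub tank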