From: Tommi Lätti (sty@blosphere.net)
To: freebsd-fs@freebsd.org
Date: Tue, 26 Jan 2010 03:43:45 +0900
Subject: slight zfs problem after playing with WDIDLE3 and WDTLER

Hi,

After googling a bit and finding that changing a few parameters on my WD Green drives with WDIDLE3 and WDTLER shouldn't cause any data loss, I decided to go ahead and change them, so that the drives no longer try to park their heads every 8 seconds and TLER kicks in earlier. Of course, one of the raidz1 vdevs then went south. The first raidz1 is 3x1.5TB drives and seems to be fine; the second, 3x1.0TB, drew the short straw and now reports UNAVAIL with "corrupted data". All of its drives are online, but the whole tank is unavailable because that second raidz1 is unavailable. I immediately reverted to the old settings with the same utilities, but that didn't change the situation.
Getting the data back from backups would be a rather long process due to the small pipes involved and could take a week or more, so I'd rather hear whether there are any other options for recovering from this mess. After checking the logs carefully, it seems that the ada1 device has permanently lost some sectors: before I twiddled with the parameters it reported 1953525168 sectors (953869 MB); now it reports 1953523055 (953868 MB). So, would removing it and perhaps an export/import get me back to a DEGRADED state, after which I could just replace the drive that suddenly lost some sectors?

--
br, Tommi
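For reference, the two sector counts above work out to about 1 MiB of missing capacity. A quick sketch of the arithmetic (the 512-byte sector size is an assumption inferred from the reported totals, not stated in the original report):

```shell
# Sector counts reported for ada1 before and after running WDIDLE3/WDTLER
before=1953525168
after=1953523055

lost=$((before - after))
echo "lost sectors: $lost"                   # 2113
echo "lost bytes:   $((lost * 512))"         # assuming 512-byte sectors: 1081856
echo "lost KiB:     $((lost * 512 / 1024))"  # about 1 MiB
```

On FreeBSD, `diskinfo -v /dev/ada1` shows the sector count the kernel currently sees, which would confirm whether the shrink persists across reboots. One possible explanation (an assumption, not confirmed by the logs quoted here) is that the DOS utility left a Host Protected Area set on the drive, clipping sectors off the end of the disk where ZFS keeps two of its four label copies.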