From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 09:43:31 2009
Delivered-To: freebsd-fs@freebsd.org
Date: Thu, 1 Oct 2009 11:05:03 +0200
From: Solon Lutz <solon@pyro.de>
X-Mailer: The Bat! (v3.99.25) Professional
Organization: pyro.labs berlin
Message-ID: <683849754.20091001110503@pyro.de>
To: freebsd-fs@freebsd.org
Subject: Help needed! ZFS I/O error recovery?

Hi everybody,

I'm faced with a 10TB ZFS pool on a 12TB RAID6 volume behind an Areca controller. And yes, I know, you shouldn't put a zpool on a RAID device... =(

Due to problems with a SATA cable, some days ago the RAID controller started to produce long timeouts while recovering the resulting read errors. The cable was replaced, a parity check was run on the RAID volume and showed no errors; a zfs scrub, however, flagged some 'defective' files. After copying these files with 'dd conv=noerror ...' and comparing them to the originals, they turned out to be error-free.

Yesterday, however, three more defective cables forced the controller to take the RAID6 volume offline. All cables have now been replaced and a parity check was run on the RAID volume -> data integrity OK. But now ZFS refuses to mount all volumes:

  Solaris: WARNING: can't process intent log for temp/space1
  Solaris: WARNING: can't process intent log for temp/space2
  Solaris: WARNING: can't process intent log for temp/space3
  Solaris: WARNING: can't process intent log for temp/space4

A scrub revealed the following:

  errors: Permanent errors have been detected in the following files:

          temp:<0x0>
          temp/space1:<0x0>
          temp/space2:<0x0>
          temp/space3:<0x0>
          temp/space4:<0x0>

I tried to switch off checksums for this pool, but that didn't help in any way. I also mounted the pool by hand and was faced with 'empty' volumes and 'I/O errors' when trying to list their contents...

Any suggestions?
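For reference, the 'dd conv=noerror' copy-and-verify step described above can be sketched on a scratch file. The paths and the head/cmp check below are illustrative assumptions, not commands from the original report; on the real pool the input would be a file on one of the affected volumes:

```shell
# Demonstrate the dd salvage technique on a scratch file (stand-in for a
# file on the damaged pool, e.g. something under temp/space1).
printf 'salvage me' > /tmp/damaged.dat

# conv=noerror: continue past read errors instead of aborting.
# conv=sync:    pad each short/failed read with NULs to the full block size,
#               so blocks after an unreadable region keep their offsets.
dd if=/tmp/damaged.dat of=/tmp/salvaged.dat bs=64k conv=noerror,sync 2>/dev/null

# Caveat: conv=sync pads the final partial block, so the copy grows to a
# multiple of bs; compare only the original length against a known-good copy.
head -c 10 /tmp/salvaged.dat | cmp - /tmp/damaged.dat && echo "data intact"
```

Without conv=sync, data after an unreadable region would shift toward the start of the file, so a later comparison against the original would fail even where the blocks themselves were read back intact.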
I'm offering some self-made blackberry jam and raspberry brandy to the person who can help to restore or back up the data.

Tech specs:

  FreeBSD 7.2-STABLE #21: Tue May 5 18:44:10 CEST 2009 (AMD64)

  da0 at arcmsr0 bus 0 target 0 lun 0
  da0: Fixed Direct Access SCSI-5 device
  da0: 166.666MB/s transfers (83.333MHz DT, offset 32, 16bit)
  da0: Command Queueing Enabled
  da0: 10490414MB (21484367872 512 byte sectors: 255H 63S/T 1337340C)

  ZFS filesystem version 6
  ZFS storage pool version 6

Best regards,

Solon