From owner-freebsd-fs@FreeBSD.ORG Sun Feb 1 15:48:05 2015
Date: Sun, 1 Feb 2015 16:45:51 +0100
From: Andre Albsmeier
To: freebsd-fs@freebsd.org
Subject: [unionfs] deadlocking a 9-STABLE machine with two unionfs mounts onto the same mountpoint
Message-ID: <20150201154551.GA1633@schlappy>

I can reliably deadlock a 9.3-STABLE machine with the following procedure. Let's assume that /tmp is a standard swap-backed file system.

First, let's set up what we need:

  mkdir /tmp/1 /tmp/2
  mount -v -t unionfs /tmp/1 /usr/local
  mount -v -t unionfs /tmp/2 /usr/local

Now let's lock up the system:

  mkdir /tmp/2/bla
  while :; do
      echo go
      tar -cC /usr/src/etc -f - . | tar -xpC /tmp/2/bla -f -
  done

It survives about 3 or 4 rounds, sometimes more, sometimes only 2. It is important to use tar to copy the data; if we replace the tar line with, e.g.,

  cp -pR /usr/src/etc/* /tmp/2/bla

everything is fine. The system doesn't lock up entirely -- you can still move the mouse and ping it -- but no filesystem access is possible anymore. One can switch to the console and enter the debugger, but a reboot with ctrl-alt-del doesn't work...

The interesting part is that all this worked pretty well on 9-STABLE until approx. 2 months ago, yet nothing has been committed to unionfs for a long time, so I really have no idea what's going on. It is reminiscent of https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=161511, but that fix had already been merged to 9-STABLE...

Anything I can do to get this fixed?
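[Editor's note: the reproduction steps above, collected into one throwaway script for convenience. The trailing umount calls are my assumption -- they are not in the original report, and on an affected kernel the loop is expected to wedge the machine long before they run:]

  #!/bin/sh
  # Reproduce the unionfs deadlock described above on 9.3-STABLE.
  # WARNING: expected to wedge all filesystem access within a few rounds.
  mkdir /tmp/1 /tmp/2
  mount -t unionfs /tmp/1 /usr/local
  mount -t unionfs /tmp/2 /usr/local   # second union onto the same mountpoint
  mkdir /tmp/2/bla
  while :; do
      echo go
      tar -cC /usr/src/etc -f - . | tar -xpC /tmp/2/bla -f -
  done
  # cleanup, unmounting the unions in reverse order (rarely reached):
  umount /usr/local
  umount /usr/local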
From owner-freebsd-fs@FreeBSD.ORG Sun Feb 1 19:25:21 2015
Date: Sun, 01 Feb 2015 19:25:20 +0000
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 183077] [opensolaris] [patch] don't have the compiler inline txg_quiesce so that zilstat works

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=183077

lacey.leanne@gmail.com changed:

           What    |Removed                 |Added
----------------------------------------------------------------------------
                 CC|                        |lacey.leanne@gmail.com

--- Comment #3 from lacey.leanne@gmail.com ---
The symbol is also missing on FreeBSD 10.1-RELEASE-p5, which keeps the same
zilstat script from working.

-- 
You are receiving this mail because:
You are the assignee for the bug.
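[Editor's note: zilstat is a DTrace script; per the bug title it depends on an fbt probe on txg_quiesce, and fbt probes exist only for functions that are emitted as real, non-inlined symbols. A quick check for whether a given kernel exposes the probe, assuming DTrace is loaded:]

  # list the probe in question; no output means txg_quiesce was
  # inlined away and the script cannot attach to it
  dtrace -l -n 'fbt::txg_quiesce:entry'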
From owner-freebsd-fs@FreeBSD.ORG Sun Feb 1 21:00:17 2015
Date: Sun, 01 Feb 2015 21:00:16 +0000
From: bugzilla-noreply@FreeBSD.org
To: freebsd-fs@FreeBSD.org
Subject: Problem reports for freebsd-fs@FreeBSD.org that need special attention
Message-Id: <201502012100.t11L0G9i001281@kenobi.freebsd.org>

To view an individual PR, use:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).

The following is a listing of current problems submitted by FreeBSD users,
which need special attention. These represent problem reports covering all
versions including experimental development code and obsolete releases.

Status      | Bug Id    | Description
------------+-----------+---------------------------------------------------
Open        | 136470    | [nfs] Cannot mount / in read-only, over NFS
Open        | 139651    | [nfs] mount(8): read-only remount of NFS volume d
Open        | 144447    | [zfs] sharenfs fsunshare() & fsshare_main() non f

3 problems total for which you should take action.
From owner-freebsd-fs@FreeBSD.ORG Mon Feb 2 00:18:31 2015
Date: Mon, 02 Feb 2015 00:18:31 +0000
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 183077] [opensolaris] [patch] don't have the compiler inline txg_quiesce so that zilstat works

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=183077

Steven Hartland changed:

           What    |Removed                 |Added
----------------------------------------------------------------------------
             Status|In Progress             |Closed
         Resolution|---                     |FIXED
                 CC|                        |smh@FreeBSD.org

--- Comment #4 from Steven Hartland ---
Fixed by: https://svnweb.freebsd.org/changeset/base/278040

-- 
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Mon Feb 2 05:51:59 2015
Date: Mon, 02 Feb 2015 05:51:59 +0000
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 194938] [10.1-RC4-p1][panic] panic by setting sysctl vfs.zfs.vdev.aggregation_limit (with backtrace)

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194938

J. Pernfuß changed:

           What    |Removed                 |Added
----------------------------------------------------------------------------
         Resolution|---                     |Not A Bug
             Status|New                     |Closed

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Tue Feb 3 21:38:34 2015
Date: Tue, 3 Feb 2015 21:30:21 +0000
From: Patrick Quigley
To: "freebsd-fs@freebsd.org"
Subject: RFC: FUSE kernel module for the kernel...

[The body of this message was removed by the list's MIME filter
(X-Content-Filtered-By: Mailman/MimeDel); only the headers survive
in the archive.]
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 5 18:21:02 2015
Date: Thu, 05 Feb 2015 19:20:55 +0100
From: Bengt Ahlgren
To: freebsd-fs@freebsd.org
Subject: Advice on disk media error recovery

I have a five-disk non-redundant zpool where two disks have media errors:
one with five sectors unreadable, the other with three.

I would like to recover as much data as possible from these disks. What
are my possible options? (And, no, there is no backup.)

Making block-copy clones of the faulty disks and removing the files that
had the media errors? Copying the files that can still be read from the
pool to a different pool on other disks? Other recommendations?

Any help appreciated!

Bengt
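[Editor's note: for the "block-copy clone" option, FreeBSD ships recoverdisk(1), which retries around bad sectors with progressively smaller block sizes. A minimal sketch -- the device names are placeholders, and the pool should not be imported while cloning:]

  # clone each faulty disk onto a known-good disk of at least the same
  # size; unreadable sectors are skipped and recorded in the worklist
  recoverdisk -w /var/tmp/ada1.worklist /dev/ada1 /dev/ada6
  recoverdisk -w /var/tmp/ada3.worklist /dev/ada3 /dev/ada7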
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 5 19:12:33 2015
Date: Thu, 5 Feb 2015 12:12:22 -0700
From: John Nielsen
To: Bengt Ahlgren
Cc: freebsd-fs@freebsd.org
Subject: Re: Advice on disk media error recovery

On Feb 5, 2015, at 11:20 AM, Bengt Ahlgren wrote:

> I have a five-disk non-redundant zpool where two disks have media
> errors. One with five sectors unreadable and the other with three
> sectors.
>
> I would like to recover as much data as possible from these disks. What
> are my possible options? (And, no, there is no backup.)

(Aside from learning to always have backups and, where feasible, to use
redundant disks...)

> Copying the files that can be read from the pool to a different pool on
> other disks?

That would be a good place to start, so you at least have backups of most
things. You could use rsync, but you would lose any ZFS dataset structure
and attributes.

You could add disks to the system and do "zpool replace".

You could make a new (hopefully redundant) pool and try "zfs send | zfs
recv". I don't know firsthand whether that would cope with errors.

> Making block-copy clones of the faulty disks and removing files that had
> the media errors?

This might be a last resort. You should export the zpool before you start.
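[Editor's note: a minimal sketch of the replicate-what-you-can path suggested above. The pool, dataset, and device names are placeholders; on a pool with unreadable sectors a stream may abort at a bad block, which is why this sends dataset by dataset instead of one recursive stream:]

  # a new, redundant pool on fresh disks
  zpool create rescue mirror da6 da7
  # snapshot everything once, then replicate each dataset separately so
  # one damaged dataset does not abort the whole transfer
  zfs snapshot -r tank@salvage
  for ds in $(zfs list -H -o name -r tank); do
      zfs send "$ds@salvage" | zfs recv -dF rescue
  done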
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 5 22:09:04 2015
Date: Thu, 05 Feb 2015 23:08:54 +0100
From: Michelle Sullivan
To: "freebsd-fs@freebsd.org"
Subject: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-id: <54D3E9F6.20702@sorbs.net>

Any clues on this?

root@colossus:~ # zpool import
   pool: storage
     id: 10618504954404185222
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-72
 config:

        storage          FAULTED  corrupted data
          raidz2-0       ONLINE
            mfid0        ONLINE
            mfid15       ONLINE
            mfid1        ONLINE
            mfid2        ONLINE
            mfid3        ONLINE
            mfid4        ONLINE
            mfid5        ONLINE
            replacing-7  ONLINE
              mfid13     ONLINE
              mfid14     ONLINE
            mfid6        ONLINE
            mfid7        ONLINE
            mfid8        ONLINE
            mfid9        ONLINE
            mfid10       ONLINE
            mfid11       ONLINE
            mfid12       ONLINE

root@colossus:~ # zpool import -Ff storage
cannot import 'storage': I/O error
        Destroy and re-create the pool from
        a backup source.
root@colossus:~ # zdb -l /dev/mfid0
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'storage'
    state: 0
    txg: 1928241
    pool_guid: 10618504954404185222
    hostid: 4203774842
    hostname: 'colossus'
    top_guid: 12489400212295803034
    guid: 3998695725653225547
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 12489400212295803034
        nparity: 2
        metaslab_array: 34
        metaslab_shift: 38
        ashift: 9
        asize: 45000449064960
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 3998695725653225547
            path: '/dev/mfid0'
            phys_path: '/dev/mfid0'
            whole_disk: 1
            DTL: 168
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 10795471632546545577
            path: '/dev/mfid1'
            phys_path: '/dev/mfid1'
            whole_disk: 1
            DTL: 167
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 15820272272734706674
            path: '/dev/mfid2'
            phys_path: '/dev/mfid2'
            whole_disk: 1
            DTL: 166
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 3928579496187019848
            path: '/dev/mfid3'
            phys_path: '/dev/mfid3'
            whole_disk: 1
            DTL: 165
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 7125052278051590304
            path: '/dev/mfid4'
            phys_path: '/dev/mfid4'
            whole_disk: 1
            DTL: 164
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 14370198745088794709
            path: '/dev/mfid5'
            phys_path: '/dev/mfid5'
            whole_disk: 1
            DTL: 163
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 1843597351388951655
            path: '/dev/mfid6'
            phys_path: '/dev/mfid6'
            whole_disk: 1
            DTL: 162
            create_txg: 4
        children[7]:
            type: 'replacing'
            id: 7
            guid: 2914889727426054645
            whole_disk: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 10956220251832269421
                path: '/dev/mfid15'
                phys_path: '/dev/mfid15'
                whole_disk: 1
                DTL: 179
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 2463756237300743131
                path: '/dev/mfid13'
                phys_path: '/dev/mfid13'
                whole_disk: 1
                DTL: 181
                create_txg: 4
                resilvering: 1
        children[8]:
            type: 'disk'
            id: 8
            guid: 8864096842672670007
            path: '/dev/mfid7'
            phys_path: '/dev/mfid7'
            whole_disk: 1
            DTL: 160
            create_txg: 4
        children[9]:
            type: 'disk'
            id: 9
            guid: 4650681673751655245
            path: '/dev/mfid8'
            phys_path: '/dev/mfid8'
            whole_disk: 1
            DTL: 159
            create_txg: 4
        children[10]:
            type: 'disk'
            id: 10
            guid: 8432109430432996813
            path: '/dev/mfid9'
            phys_path: '/dev/mfid9'
            whole_disk: 1
            DTL: 158
            create_txg: 4
        children[11]:
            type: 'disk'
            id: 11
            guid: 414941847968750824
            path: '/dev/mfid10'
            phys_path: '/dev/mfid10'
            whole_disk: 1
            DTL: 157
            create_txg: 4
        children[12]:
            type: 'disk'
            id: 12
            guid: 7335375930620195352
            path: '/dev/mfid11'
            phys_path: '/dev/mfid11'
            whole_disk: 1
            DTL: 156
            create_txg: 4
        children[13]:
            type: 'disk'
            id: 13
            guid: 5100737174610362
            path: '/dev/mfid12'
            phys_path: '/dev/mfid12'
            whole_disk: 1
            DTL: 155
            create_txg: 4
        children[14]:
            type: 'disk'
            id: 14
            guid: 15695558693726858796
            path: '/dev/mfid14'
            phys_path: '/dev/mfid14'
            whole_disk: 1
            DTL: 174
            create_txg: 4
    features_for_read:

[snip: LABEL 1, LABEL 2 and LABEL 3 are identical to LABEL 0]

The hardware is an LSI 9260-16i RAID controller with all drives exported
as single-disk RAID0 volumes. The system wasn't busy (i.e. not in use for
writing), though it was resilvering a replacement drive after the hot
spare had taken over for a failed drive, and it got rebooted... the pool
faulted on reboot.

Recoverable?

Regards,

-- 
Michelle Sullivan
http://www.mhix.org/
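[Editor's note: with all four labels intact on the member shown, the usual next diagnostic steps look roughly like this. The commands are real, but whether they succeed on this pool is another matter -- a sketch, not a recovery guarantee:]

  # import read-only so nothing gets replayed or rewritten while
  # inspecting; the numeric pool id comes from the zpool import output
  zpool import -o readonly=on -f -R /mnt 10618504954404185222
  # dump the uberblock ring along with the labels: older uberblocks mean
  # older txgs that a rewind import (-F) could fall back to
  zdb -ul /dev/mfid0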
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 5 22:32:19 2015
Date: Thu, 05 Feb 2015 23:32:16 +0100
From: Michelle Sullivan
To: "freebsd-fs@freebsd.org"
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-id: <54D3EF70.6090703@sorbs.net>
In-reply-to: <54D3E9F6.20702@sorbs.net>

Michelle Sullivan wrote:
> Any clues on this?
> [snip: zpool import output and zdb -l labels quoted in full]

FYI, before the first reboot:

root@colossus:~ # zpool status -x
  pool: storage
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
        a backup source.
   see: http://illumos.org/msg/ZFS-8000-72
  scan: none requested
config:

        NAME             STATE    READ WRITE CKSUM
        storage          FAULTED     0     0     1
          raidz2-0       ONLINE      0     0     7
            mfid0        ONLINE      0     0     1
            mfid1        ONLINE      0     0     0
            mfid2        ONLINE      0     0     0
            mfid3        ONLINE      0     0     0
            mfid4        ONLINE      0     0     0
            mfid5        ONLINE      0     0     0
            mfid6        ONLINE      0     0     0
            replacing-7  ONLINE      0     0     0
              mfid14     ONLINE      0     0     0
              mfid15     ONLINE      0     0     0
            mfid7        ONLINE      0     0     0
            mfid8        ONLINE      0     0     0
            mfid9        ONLINE      0     0     0
            mfid10       ONLINE      0     0     0
            mfid11       ONLINE      0     0     0
            mfid12       ONLINE      0     0     1
            mfid13       ONLINE      0     0     0

root@colossus:~ # zpool clear -nF storage
internal error: out of memory
root@colossus:~ #

"zpool import -Fn storage" reports no errors. I really don't care if a
file or three is corrupted; I just want the pool back, as I don't have a
backup since Dec 3, 2014.

-- 
Michelle Sullivan
http://www.mhix.org/
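[Editor's note: "zpool clear -nF" and "zpool import -Fn" both probe the same recovery mode: discarding the last few transactions to fall back to an older, consistent txg. A sketch of the usual escalation, assuming the hidden -X and -T options present in ZFS of this vintage; the last two are undocumented, dangerous, and can silently discard recent data:]

  zpool import -Fn storage     # dry run: report what a rewind would discard
  zpool import -F storage      # rewind import, dropping the damaged txgs
  zpool import -FX storage     # "extreme" rewind, searching much older txgs
  # or pin a specific txg from the label (1928241 was current in zdb above)
  zpool import -T 1928240 -o readonly=on storage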
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 00:34:57 2015
Date: Fri, 06 Feb 2015 01:34:53 +0100
From: Michelle Sullivan
To: "freebsd-fs@freebsd.org"
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-id: <54D40C2D.1070801@sorbs.net>
In-reply-to: <54D3E9F6.20702@sorbs.net>

Michelle Sullivan wrote:
> Any clues on this?
> [snip: zpool import output quoted in full]

When running the import without -f, it says the pool may still be in use
on another system, with a timestamp of a few hours ago (when the reboot
occurred). Also -- and this might be the key to the problem -- the
console/dmesg shows:

ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
GEOM: mfid0: corrupt or invalid GPT detected.
GEOM: mfid0: GPT rejected -- may not be recoverable.
[the two GEOM lines above repeat many more times]

-- 
Michelle Sullivan
http://www.mhix.org/
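[Editor's note: a rejected GPT on a whole-disk ZFS member is not necessarily fatal to the pool -- ZFS keeps its own labels independent of any partition table, and a stale or half-overwritten GPT (for example, left over from a drive's previous life, or clipped by the RAID controller's metadata) produces exactly this GEOM noise. A quick way to compare the two views, as a sketch:]

  gpart show mfid0          # what GEOM makes of the partition data, if any
  zdb -l /dev/mfid0 | head  # the ZFS label can still be perfectly valid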
--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 01:16:57 2015
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B1A69A4F for ; Fri, 6 Feb 2015 01:16:57 +0000 (UTC)
Received: from anubis.delphij.net (anubis.delphij.net [64.62.153.212]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "anubis.delphij.net", Issuer "StartCom Class 1 Primary Intermediate Server CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 98380A71 for ; Fri, 6 Feb 2015 01:16:57 +0000 (UTC)
Received: from zeta.ixsystems.com (unknown [12.229.62.2]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by anubis.delphij.net (Postfix) with ESMTPSA id 9C52C2172; Thu, 5 Feb 2015 17:16:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphij.net; s=anubis; t=1423185416; x=1423199816; bh=sD7GszmZmA6uMTTDY68CS+Fh3wWOelrw+4GkabMQhE4=; h=Date:From:Reply-To:To:Subject:References:In-Reply-To; b=KmMS2U0eVhiQ3vjvsQfdgJpn6R/hj48KNFSTxwQUUOwnx1OIsmIBC37AKqbVfAoL6AcrehI8WyayZoU3XU6nfVCcDsZPNg1mnZDWkdKWhPrfHWmoNVzghL37nedzTi0arBqbMkoNzVGmPv2Wl90meMO9izKWQCwJH10xpuvLktg=
Message-ID: <54D41608.50306@delphij.net>
Date: Thu, 05 Feb 2015 17:16:56 -0800
From: Xin Li 
Reply-To: d@delphij.net
Organization: The FreeBSD Project
MIME-Version: 1.0
To: Michelle Sullivan , "freebsd-fs@freebsd.org" 
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
References: <54D3E9F6.20702@sorbs.net>
In-Reply-To: <54D3E9F6.20702@sorbs.net>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 06 Feb 2015 01:16:57 -0000

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 02/05/15 14:08, Michelle Sullivan wrote:
> Any clues on this?
>
> root@colossus:~ # zpool import pool: storage id:
> 10618504954404185222 state: FAULTED status: The pool metadata is
> corrupted. action: The pool cannot be imported due to damaged
> devices or data.

This is the standard FAULTED message.

> The pool may be active on another system, but can be imported
> using the '-f' flag.

This suggests the pool was connected to a different system, is that
the case?

> see: http://illumos.org/msg/ZFS-8000-72 config:
>
> storage FAULTED corrupted data raidz2-0 ONLINE
> mfid0 ONLINE mfid15 ONLINE mfid1 ONLINE mfid2
> ONLINE mfid3 ONLINE mfid4 ONLINE mfid5 ONLINE
> replacing-7 ONLINE mfid13 ONLINE mfid14 ONLINE mfid6
> ONLINE mfid7 ONLINE mfid8 ONLINE mfid9 ONLINE
> mfid10 ONLINE mfid11 ONLINE mfid12 ONLINE
> root@colossus:~ # zpool import -Ff storage cannot import 'storage':
> I/O error Destroy and re-create the pool from a backup source.

uname -a?

> Recoverable?

It's hard to tell right now, and we shall try all possible remedies
but be prepared for the worst.

Cheers,
- --
Xin LI https://www.delphij.net/
FreeBSD - The Power to Serve!
Live free or die
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.1.1 (FreeBSD)

iQIcBAEBCgAGBQJU1BYEAAoJEJW2GBstM+nsjJkP/2AqVGt/OOw2pmGtO3lkiB4c
w1lzxzVniXWrpbP/eStgSYlOtHzhF4/ObE1v+00N1qiTwUwnQLJvIbav6pBW78lW
sXILY8JHk5Wcp26PyyzWbC0STJzFzdoA9y8QiXgzSa2tb3ZVzKhVn410fCdqcC0h
5wVAs0+jsoo/+e+A/y4CE5CmjtbW2Ql6BJk2BB5iKOXvLVL+4Ejpw2mp6qBFQlkm
SCWdGlRNv0UMna0KNye7FGZ/SJ+ERKFM1THT3181WVhxzGIxfsISAIBcoYpO3wiL
g3P45ne8vHBLxe41vdDCXiqKMs0sGrvsN3p7Xucni17VvlzjF8IXJABLFlIvnCW0
d5PvOsFvp1eLfu2xKMH/LMRg3UmLMbcqRQFrz/5XJkVYxNVZawhSs7zTXgtU63vY
k2Xd6CE3RI8kyyYxuRtGtsW73bX2kb5L+QFdvH+bmmCQhozi7J1O7pWn/XYh7Q5S
2HnuniOxKNdCw1hTD0iAWcqdNpdxc8dJryW5MNOGL6kxiIpr1RXRnbp4Ta/qiniy
DMvzLm/I1e7W2kXBxzmHYhPtMbn1Hi2fJ6lz3AnV+dZcl3eSFgIYSlvRBiMlIeS9
mQgFf9ru5rp8vDsPy26ykKGgKGBK866WYDj7xtJ5o01geFgTsr3GmLGKkIysOoVj
9WlGIW5gQW6j92A43eFe
=DFA1
-----END PGP SIGNATURE-----

From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 01:36:46 2015
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A4168D89 for ; Fri, 6 Feb 2015 01:36:46 +0000 (UTC)
Received: from hades.sorbs.net (hades.sorbs.net [67.231.146.201]) by mx1.freebsd.org (Postfix) with ESMTP id 907F8C48 for ; Fri, 6 Feb 2015 01:36:46 +0000 (UTC)
MIME-version: 1.0
Content-transfer-encoding: 7BIT
Content-type: text/plain; CHARSET=US-ASCII
Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0NJB00K3HU1DRS00@hades.sorbs.net> for freebsd-fs@freebsd.org; Thu, 05 Feb 2015 17:41:39 -0800 (PST)
Message-id: <54D41AAA.6070303@sorbs.net>
Date: Fri, 06 Feb 2015 02:36:42 +0100
From: Michelle Sullivan 
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19
To: d@delphij.net
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net>
In-reply-to: <54D41608.50306@delphij.net>
Cc: "freebsd-fs@freebsd.org" 
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 06 Feb 2015 01:36:46 -0000

Xin Li wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> On 02/05/15 14:08, Michelle Sullivan wrote:
>
>> Any clues on this?
>>
>> root@colossus:~ # zpool import pool: storage id:
>> 10618504954404185222 state: FAULTED status: The pool metadata is
>> corrupted. action: The pool cannot be imported due to damaged
>> devices or data.
>>
> This is the standard FAULTED message.
>
>> The pool may be active on another system, but can be imported
>> using the '-f' flag.
>>
> This suggests the pool was connected to a different system, is that
> the case?
>

No.

>> see: http://illumos.org/msg/ZFS-8000-72 config:
>>
>> storage FAULTED corrupted data raidz2-0 ONLINE
>> mfid0 ONLINE mfid15 ONLINE mfid1 ONLINE mfid2
>> ONLINE mfid3 ONLINE mfid4 ONLINE mfid5 ONLINE
>> replacing-7 ONLINE mfid13 ONLINE mfid14 ONLINE mfid6
>> ONLINE mfid7 ONLINE mfid8 ONLINE mfid9 ONLINE
>> mfid10 ONLINE mfid11 ONLINE mfid12 ONLINE
>> root@colossus:~ # zpool import -Ff storage cannot import 'storage':
>> I/O error Destroy and re-create the pool from a backup source.
>>
> uname -a?
> FreeBSD colossus 9.2-RELEASE-p15 FreeBSD 9.2-RELEASE-p15 #0: Mon Nov 3 20:31:29 UTC 2014 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 > >> Recoverable? >> > > It's hard to tell right now, and we shall try all possible remedies > but be prepared for the worst. > I am :( -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 01:43:47 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B666C1E4 for ; Fri, 6 Feb 2015 01:43:47 +0000 (UTC) Received: from anubis.delphij.net (anubis.delphij.net [64.62.153.212]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "anubis.delphij.net", Issuer "StartCom Class 1 Primary Intermediate Server CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 9BC27D13 for ; Fri, 6 Feb 2015 01:43:46 +0000 (UTC) Received: from zeta.ixsystems.com (unknown [12.229.62.2]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by anubis.delphij.net (Postfix) with ESMTPSA id 5C904229B; Thu, 5 Feb 2015 17:43:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphij.net; s=anubis; t=1423187026; x=1423201426; bh=190C8V6SzNoQ9Dbf8NLIl/h9tTGiO4XhsNkor31HDA0=; h=Date:From:Reply-To:To:CC:Subject:References:In-Reply-To; b=08vUbrhbaicaAy5LTyb28IcbPVflVmFwwb7D60ZonMG3n/oEZYG4Qvu/yqT28VCzH qFHWuI2liD9K9iymBsxscAKbAOl/fVJStp8LBGszcJq3gNqodq87IrhuhrH7to8/3r udbycyEGd93uREQNOH3HshJ0O6PpM23v2qb7QsqY= Message-ID: <54D41C52.1020003@delphij.net> Date: Thu, 05 Feb 2015 17:43:46 -0800 From: Xin Li Reply-To: d@delphij.net Organization: The FreeBSD Project MIME-Version: 1.0 To: Michelle Sullivan , d@delphij.net Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok... References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> In-Reply-To: <54D41AAA.6070303@sorbs.net> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Feb 2015 01:43:47 -0000 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 02/05/15 17:36, Michelle Sullivan wrote: >> This suggests the pool was connected to a different system, is >> that the case? >> > > No. Ok, that's good. Actually if you have two heads that writes to the same pool at the same time, it can easily enter an unrecoverable state. >> It's hard to tell right now, and we shall try all possible >> remedies but be prepared for the worst. > > I am :( The next thing I would try is to: 1. move /boot/zfs/zpool.cache to somewhere else; 2. zpool import -f -n -F -X storage and see if the system would give you a proposal. Cheers, - -- Xin LI https://www.delphij.net/ FreeBSD - The Power to Serve! 
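[Editorial note, not part of the original thread: a sketch of the sequence suggested above, assuming the pool is named 'storage', the commands run as root, and the cache file exists. With -n the rewind attempt is a dry run and nothing is written:

  # Step 1: set the cache file aside so the import starts from a clean scan.
  mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak

  # Step 2: recovery-mode import (-F) with extreme rewind (-X); -n only
  # reports what would be done.
  zpool import -f -n -F -X storage

If the dry run proposes a usable transaction group, the same command without -n would attempt the actual rewind.]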
Live free or die -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.1.1 (FreeBSD) iQIcBAEBCgAGBQJU1BxLAAoJEJW2GBstM+nsOJ0P/3be8Z1WsGGOGNY+WZdr7FRp Jl++Ef3VSpd1Qf1jFZuRIS/hLfbMh0bWjOxyKiF9ivu77QZ9qCXk+pmn0oTZ3e1r 7g80CRKk2rapTqkagFRuPfo6b9vDQz3qYazahhZrhRyTFA1l2V+Wka+yw9Hx18ds MLaAps7Kpn67BRRV6Q+9+/oQdBzllSx8S77AkesPp5s3oHTQ8jntSSN9D9p/+jQu Wo0/t4k7x3pYpA0BzBQdms/pj38vIPSvjtnHpFggwztNKKkEaIPy49kFOBIVhJTv e8h3z5PoXre9r1cZ5ay3zTs23vc7GLGqphrRLguwsUvYa1cY1T4vQWY4dommpM/0 VHLUhp8oNtokqqzUSYMd8FTF+55rzSuBN+Y+UEFUHakZ9QXOnvwXfAJk6CwQdTHn YCGNKGY24qpYeJkfEq3e2QQC+WNDd1pqLCBENpD1uCpmejctHO4mVaO3032Gxd5/ FCVGiBgV+SW7h0jUEr3pk7CnUigBwMGy9UT/QuDP9N2ID7tAbfbmrr0zJ8hkLmR8 0xFGyaMK2jJx9C+DDjzbCw4lrKfWGkvjHRR6MPJ5QUcKWiji8xh8TCSlNZOxCq43 Mt7aMjZbWJhlIH15F8wSCrKFOAWHRud35asHJqPFZhRFJvA5Ly8Yy5cVcb4hboZj bkaZwfABTvGLO0SEFb1T =xRdB -----END PGP SIGNATURE----- From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 02:20:35 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C74DF507 for ; Fri, 6 Feb 2015 02:20:35 +0000 (UTC) Received: from hades.sorbs.net (hades.sorbs.net [67.231.146.201]) by mx1.freebsd.org (Postfix) with ESMTP id B21F4FEA for ; Fri, 6 Feb 2015 02:20:35 +0000 (UTC) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0NJB00K3XW2ERS00@hades.sorbs.net> for freebsd-fs@freebsd.org; Thu, 05 Feb 2015 18:25:28 -0800 (PST) Message-id: <54D424F0.9080301@sorbs.net> Date: Fri, 06 Feb 2015 03:20:32 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: d@delphij.net Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok... References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> In-reply-to: <54D41C52.1020003@delphij.net> Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Feb 2015 02:20:36 -0000 Xin Li wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 02/05/15 17:36, Michelle Sullivan wrote: > >>> This suggests the pool was connected to a different system, is >>> that the case? >>> >>> >> No. >> > > Ok, that's good. Actually if you have two heads that writes to the > same pool at the same time, it can easily enter an unrecoverable state. > > >>> It's hard to tell right now, and we shall try all possible >>> remedies but be prepared for the worst. >>> >> I am :( >> > > The next thing I would try is to: > > 1. move /boot/zfs/zpool.cache to somewhere else; > There isn't one. However 'cat'ing the inode I can see there was one... 
<83>^LR^@^L^@^D^A.^@^@^@<80>^LR^@^A^D^B..^@^@<89>^LR^@^X^@^H^Ozpool.cache.tmp^@<89>^LR^@^A^H^Kzpool.cache^@^@
[long run of NUL (^@) padding elided]

> 2. zpool import -f -n -F -X storage and see if the system would give
> you a proposal.
>
This crashes the machine (without -n) with an out-of-memory condition... there's
32G of RAM. /boot/loader.conf contains:

vfs.zfs.prefetch_disable=1
#vfs.zfs.arc_min="8G"
#vfs.zfs.arc_max="16G"
#vm.kmem_size_max="8"
#vm.kmem_size="6G"
vfs.zfs.txg.timeout="5"
kern.maxvnodes=250000
vfs.zfs.write_limit_override=1073741824
vboxdrv_load="YES"

Regards,

Michelle

PS: it's 16x3T drives in RAIDZ2+HSP - 34T formatted.

> Cheers,
> - --
> Xin LI https://www.delphij.net/
> FreeBSD - The Power to Serve! Live free or die
> -----BEGIN PGP SIGNATURE-----
> [quoted PGP signature elided; identical to the signature in the message above]
> -----END PGP SIGNATURE-----
>

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 05:58:07 2015
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 52AAC864 for ; Fri, 6 Feb 2015 05:58:07 +0000 (UTC)
Received: from anubis.delphij.net (anubis.delphij.net [IPv6:2001:470:1:117::25]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "anubis.delphij.net", Issuer "StartCom Class 1 Primary Intermediate Server CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 31121A36 for ; Fri, 6 Feb 2015 05:58:07 +0000 (UTC)
Received: from Xins-MBP.home.us.delphij.net (c-71-202-112-39.hsd1.ca.comcast.net [71.202.112.39]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by anubis.delphij.net (Postfix) with ESMTPSA id 8F3E12BEC; Thu, 5 Feb 2015 21:58:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphij.net; s=anubis; t=1423202286; x=1423216686; bh=xkI4MzzLp6MQL3Q7/YSS6hnL0DHQ9koEOeIH7T2qE8E=; h=Date:From:To:CC:Subject:References:In-Reply-To; b=D44PIvxriLJi1OlV3R7av8uXjrYOCmMwzruOyGwj450bqsV3VaOpu+VaOluUPlmOK82/B4s3J72BglSJuJ0ojdK+lSn2L5DZksfTpQhsfcKYnkFw+vHH9ZKDpgM/Kgu4uvgW9a7a8Zp4kEthCzDn+gQ34WRXiJB2b1fDkurLU7c=
Message-ID: <54D457F0.8080502@delphij.net>
Date: Thu, 05 Feb 2015 21:58:08 -0800
From: Xin Li 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:31.0) Gecko/20100101 Thunderbird/31.4.0
MIME-Version: 1.0
To: Michelle Sullivan , d@delphij.net
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net>
In-Reply-To: <54D424F0.9080301@sorbs.net>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: "freebsd-fs@freebsd.org" 
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 06 Feb 2015 05:58:07 -0000

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 2/5/15 18:20, Michelle Sullivan wrote:
> Xin Li wrote: On 02/05/15 17:36, Michelle Sullivan wrote:
>
>>>>> This suggests the pool was connected to a different system,
>>>>> is that the case?
>>>>>
>>>> No.
>>>>
> Ok, that's good. Actually if you have two heads that writes to
> the same pool at the same time, it can easily enter an
> unrecoverable state.
>
>>>>> It's hard to tell right now, and we shall try all possible
>>>>> remedies but be prepared for the worst.
>>>>>
>>>> I am :(
>>>>
> The next thing I would try is to:
>
> 1. move /boot/zfs/zpool.cache to somewhere else;
>
>> There isn't one. However 'cat'ing the inode I can see there was
>> one...
>
>> [quoted inode dump elided; identical to the dump in the message above]
>
> 2. zpool import -f -n -F -X storage and see if the system would
> give you a proposal.
>
>> This crashes the machine (without -n) with an out-of-memory condition... there's
>> 32G of RAM. /boot/loader.conf contains:
>
>> vfs.zfs.prefetch_disable=1
>> #vfs.zfs.arc_min="8G"
>> #vfs.zfs.arc_max="16G"
>> #vm.kmem_size_max="8"
>> #vm.kmem_size="6G"
>> vfs.zfs.txg.timeout="5"
>> kern.maxvnodes=250000
>> vfs.zfs.write_limit_override=1073741824
>> vboxdrv_load="YES"

Which release is this? write_limit_override has been removed quite a
while ago.

I'd recommend using a fresh -CURRENT snapshot if possible (possibly
with -NODEBUG kernel).
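[Editorial note, not part of the original thread: whether a vfs.zfs tunable still exists on a given release can be checked before carrying it over in loader.conf. A sketch; the dirty_data names below are, to the best of my knowledge, the knobs that replaced the write limit settings in later releases:

  sysctl vfs.zfs.write_limit_override   # prints a value on 9.x; "unknown oid" where removed
  sysctl -a | grep vfs.zfs.dirty_data   # the successor knobs, where present
]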
Cheers, -----BEGIN PGP SIGNATURE----- iQIcBAEBCgAGBQJU1FfuAAoJEJW2GBstM+nsiJMP/2G7VHlNkBl+IAllWdECjcVs oseOwsV9pZkcaj8DQP7Y295+UQbbq09m2fy9disqjH1mPpssR7sthiEIBSmjXGWT 4alWO4C6Y38XfMRFjMkvrj03vWV8caaMuipYYscVXq7N/pa/zMiqt+ECPSMnli9M jeaOEh/vitHMddBTG3YAQ62OLXBq2T/0iqA7VyiPRxJbmVE/iiG6nC4Ve3NeUYIq 2gdZHvKUIGUqSRhfvkzqRk2vUs3SzaGPHLWok6e8j0XYHrfSC1W0kO7VMR8TZwxD lzxnJ0tTjBTcBinNtLBggBl8s8Ps7WoWSTf1JWAi7RSIwcf/os3vt87b+LJ9eaGe gUsU3MvDrPGIHwg+OSHkya8+IKuvxhTEdzPVEi0RfL2sKe3HjtHcllJGiAUDAYbc IwYIELVnXglD0qc1SHvit7fjN8zDPk/fbaKIbSZVp6ilkOgbTCnwAmsiA9cCN3Ir dwuP1n+GKPRq3ufThBZ6KGo60/5nGwa4HxTZZ1sj6Jczatb1EraytAkIzWCpmd0Z wySVojokz5IL+F3Bp4o4/TBJGPkOEf2Wl9Zcoe3pnahofDv5+hQYpb4HPyHlN/lE qy5ig+iSVKd7IJ2Twkz6WNaUTx1avnxO57qnXI3/dYhM7mxT8Zrzi7RQkpSFXWbX ojz/8g/KvpvR4lJf8I+K =Bxyv -----END PGP SIGNATURE----- From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 11:21:08 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 743B8715 for ; Fri, 6 Feb 2015 11:21:08 +0000 (UTC) Received: from hades.sorbs.net (hades.sorbs.net [67.231.146.201]) by mx1.freebsd.org (Postfix) with ESMTP id 5EA0EF3B for ; Fri, 6 Feb 2015 11:21:07 +0000 (UTC) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0NJC00K9AL3ARS00@hades.sorbs.net> for freebsd-fs@freebsd.org; Fri, 06 Feb 2015 03:26:00 -0800 (PST) Message-id: <54D4A3A0.2040408@sorbs.net> Date: Fri, 06 Feb 2015 12:21:04 +0100 From: Michelle Sullivan User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19 To: Xin Li Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok... References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net> <54D457F0.8080502@delphij.net> In-reply-to: <54D457F0.8080502@delphij.net> Cc: "freebsd-fs@freebsd.org" , d@delphij.net X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Feb 2015 11:21:08 -0000 Xin Li wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > > On 2/5/15 18:20, Michelle Sullivan wrote: > >> Xin Li wrote: On 02/05/15 17:36, Michelle Sullivan wrote: >> >> >>>>>> This suggests the pool was connected to a different system, >>>>>> is that the case? >>>>>> >>>>>> >>>>>> >>>>> No. >>>>> >>>>> >> Ok, that's good. Actually if you have two heads that writes to >> the same pool at the same time, it can easily enter an >> unrecoverable state. >> >> >> >>>>>> It's hard to tell right now, and we shall try all possible >>>>>> remedies but be prepared for the worst. >>>>>> >>>>>> >>>>> I am :( >>>>> >>>>> >> The next thing I would try is to: >> >> 1. move /boot/zfs/zpool.cache to somewhere else; >> >> >> >>> There isn't one. However 'cat'ing the inode I can see there was >>> one... 
>>> [quoted inode dump elided; identical to the dump in the earlier message]
>
>> 2. zpool import -f -n -F -X storage and see if the system would
>> give you a proposal.
>>
>>> This crashes the machine (without -n) with an out-of-memory condition... there's
>>> 32G of RAM. /boot/loader.conf contains:
>>>
>>> vfs.zfs.prefetch_disable=1
>>> #vfs.zfs.arc_min="8G"
>>> #vfs.zfs.arc_max="16G"
>>> #vm.kmem_size_max="8"
>>> #vm.kmem_size="6G"
>>> vfs.zfs.txg.timeout="5"
>>> kern.maxvnodes=250000
>>> vfs.zfs.write_limit_override=1073741824
>>> vboxdrv_load="YES"
>>>
>
> Which release is this? write_limit_override has been removed quite a
> while ago.
>

FreeBSD colossus 9.2-RELEASE-p15 FreeBSD 9.2-RELEASE-p15 #0: Mon Nov 3
20:31:29 UTC 2014
root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64

> I'd recommend using a fresh -CURRENT snapshot if possible (possibly
> with -NODEBUG kernel).
>

I'm sorta afraid to try and upgrade it at this point.

Michelle

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 11:28:21 2015
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BD61EB82; Fri, 6 Feb 2015 11:28:21 +0000 (UTC)
Received: from hades.sorbs.net (hades.sorbs.net [67.231.146.201]) by mx1.freebsd.org (Postfix) with ESMTP id A74309B; Fri, 6 Feb 2015 11:28:20 +0000 (UTC)
MIME-version: 1.0
Content-transfer-encoding: 7BIT
Content-type: text/plain; CHARSET=US-ASCII
Received: from isux.com (firewall.isux.com [213.165.190.213]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0NJC00K9GLFCRS00@hades.sorbs.net>; Fri, 06 Feb 2015 03:33:14 -0800 (PST)
Message-id: <54D4A552.7050502@sorbs.net>
Date: Fri, 06 Feb 2015 12:28:18 +0100
From: Michelle Sullivan 
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24) Gecko/20100301 SeaMonkey/1.1.19
To: Stefan Esser , "freebsd-fs@freebsd.org" 
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net> <54D47F94.9020404@freebsd.org>
In-reply-to: <54D47F94.9020404@freebsd.org>
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 06 Feb 2015 11:28:21 -0000

Stefan Esser wrote:
> On 06.02.2015 at 03:20, Michelle Sullivan wrote:
>
>> 2. zpool import -f -n -F -X storage and see if the system would
>> give you a proposal.
>>
>>> This crashes the machine (without -n) with an out-of-memory condition... there's
>>> 32G of RAM. /boot/loader.conf contains:
>>>
>>> vfs.zfs.prefetch_disable=1
>>> #vfs.zfs.arc_min="8G"
>>> #vfs.zfs.arc_max="16G"
>>> #vm.kmem_size_max="8"
>>> #vm.kmem_size="6G"
>>> vfs.zfs.txg.timeout="5"
>>> kern.maxvnodes=250000
>>> vfs.zfs.write_limit_override=1073741824
>>> vboxdrv_load="YES"
>
> I've recovered two "lost" ZFS pools (one on my system, the other on
> someone else's) by identifying a TXG whose state at least allowed
> copying to a fresh pool.
>
> The main tool was zdb, which contains userland implementations of
> the kernel code that panics on import. You can use zdb to
> test what is left of your pool (and then try to find a way to get
> most of it rescued), if you add options that make errors non-fatal
> and that skip some consistency checks, e.g.:
>
> # zdb -AAA -L -u %POOL%
>
> You may need to add -e and possibly also -p %PATH_TO_DEVS% before
> the pool name.
>

root@colossus:~ # zdb -AAA -L -e storage

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 10618504954404185222
        name: 'storage'
        state: 0
        hostid: 4203774842
        hostname: 'colossus'
        vdev_tree:
            type: 'root'  id: 0  guid: 10618504954404185222
            children[0]: type: 'raidz'  id: 0  guid: 12489400212295803034  nparity: 2  metaslab_array: 34  metaslab_shift: 38  ashift: 9  asize: 45000449064960  is_log: 0  create_txg: 4
                children[0]:  type: 'disk'  id: 0   guid: 3998695725653225547   phys_path: '/dev/mfid0'   whole_disk: 1  DTL: 168  create_txg: 4  path: '/dev/mfid15'
                children[1]:  type: 'disk'  id: 1   guid: 10795471632546545577  phys_path: '/dev/mfid1'   whole_disk: 1  DTL: 167  create_txg: 4  path: '/dev/mfid13'
                children[2]:  type: 'disk'  id: 2   guid: 15820272272734706674  phys_path: '/dev/mfid2'   whole_disk: 1  DTL: 166  create_txg: 4  path: '/dev/mfid0'
                children[3]:  type: 'disk'  id: 3   guid: 3928579496187019848   phys_path: '/dev/mfid3'   whole_disk: 1  DTL: 165  create_txg: 4  path: '/dev/mfid1'
                children[4]:  type: 'disk'  id: 4   guid: 7125052278051590304   phys_path: '/dev/mfid4'   whole_disk: 1  DTL: 164  create_txg: 4  path: '/dev/mfid2'
                children[5]:  type: 'disk'  id: 5   guid: 14370198745088794709  phys_path: '/dev/mfid5'   whole_disk: 1  DTL: 163  create_txg: 4  path: '/dev/mfid3'
                children[6]:  type: 'disk'  id: 6   guid: 1843597351388951655   phys_path: '/dev/mfid6'   whole_disk: 1  DTL: 162  create_txg: 4  path: '/dev/mfid4'
                children[7]:  type: 'replacing'  id: 7  guid: 2914889727426054645  whole_disk: 0  create_txg: 4
                    children[0]:  type: 'disk'  id: 0  guid: 10956220251832269421  phys_path: '/dev/mfid15'  whole_disk: 1  DTL: 179  create_txg: 4  path: '/dev/mfid11'
                    children[1]:  type: 'disk'  id: 1  guid: 2463756237300743131   phys_path: '/dev/mfid13'  whole_disk: 1  DTL: 181  create_txg: 4  resilvering: 1  path: '/dev/mfid12'
                children[8]:  type: 'disk'  id: 8   guid: 8864096842672670007   phys_path: '/dev/mfid7'   whole_disk: 1  DTL: 160  create_txg: 4  path: '/dev/mfid5'
                children[9]:  type: 'disk'  id: 9   guid: 4650681673751655245   phys_path: '/dev/mfid8'   whole_disk: 1  DTL: 159  create_txg: 4  path: '/dev/mfid14'
                children[10]: type: 'disk'  id: 10  guid: 8432109430432996813   phys_path: '/dev/mfid9'   whole_disk: 1  DTL: 158  create_txg: 4  path: '/dev/mfid6'
                children[11]: type: 'disk'  id: 11  guid: 414941847968750824    phys_path: '/dev/mfid10'  whole_disk: 1  DTL: 157  create_txg: 4  path: '/dev/mfid7'
                children[12]: type: 'disk'  id: 12  guid: 7335375930620195352   phys_path: '/dev/mfid11'  whole_disk: 1  DTL: 156  create_txg: 4  path: '/dev/mfid8'
                children[13]: type: 'disk'  id: 13  guid: 5100737174610362      phys_path: '/dev/mfid12'  whole_disk: 1  DTL: 155  create_txg: 4  path: '/dev/mfid9'
                children[14]: type: 'disk'  id: 14  guid: 15695558693726858796  phys_path: '/dev/mfid14'  whole_disk: 1  DTL: 174  create_txg: 4  path: '/dev/mfid10'
Segmentation fault (core dumped)
root@colossus:~ # zdb -AAA -L -u -e storage
Segmentation fault (core dumped)

> Other commands to try instead of -u are e.g. -d and -h.
>

root@colossus:~ # zdb -AAA -L -d -e storage
Segmentation fault (core dumped)
root@colossus:~ # zdb -AAA -L -h -e storage
Segmentation fault (core dumped)

> If you can get a history list, then you may want to add -T %TXG%
> for some txg number in the past, to see whether you get better
> results.
>
> You may want to set "vfs.zfs.debug=1" in loader.conf to prevent the
> kernel from panicking during import, BTW. But be careful, this can
> lead to undetected inconsistencies and is only a last resort for a
> read-only mounted pool that is to be copied out (once you are able
> to import it).
>
> Good luck, STefan
>

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 12:24:31 2015
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6F6AF6A8 for ; Fri, 6 Feb 2015 12:24:31 +0000 (UTC)
Received: from server.linsystem.net (server.linsystem.net [80.79.23.169]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 03295A15 for ; Fri, 6 Feb 2015 12:24:30 +0000 (UTC)
Received: from [80.92.253.14] (helo=localhost) by server.linsystem.net with esmtpa (Exim 4.72) (envelope-from ) id 1YJhyc-0006Yo-5I; Fri, 06 Feb 2015 13:25:38 +0100
Date: Fri, 6 Feb 2015 13:25:38 +0100
From: Robert David 
To: Michelle Sullivan 
Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Message-ID: <20150206132538.24993e60@linsystem.net>
In-Reply-To: <54D4A3A0.2040408@sorbs.net>
References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net> <54D457F0.8080502@delphij.net> <54D4A3A0.2040408@sorbs.net>
X-Mailer: Claws Mail 3.11.1 (GTK+ 2.24.25; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Cc: "freebsd-fs@freebsd.org" , d@delphij.net
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 06 Feb 2015 12:24:31 -0000

I suggest booting to a 10.1 livecd. Then check the partitions, in case
they were created prior to ZFS:

$ gpart show mfid0

And then try to import the pool as suggested.

Robert.
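[Editorial note, not part of the original thread: a sketch of the txg-rewind probing Stefan describes. zdb in recent releases takes -t to bound the transaction it will consider, which appears to be what the -T %TXG% placeholder refers to; candidate txg numbers can be read out of the vdev labels. The txg value below is a placeholder:

  # Find txg numbers recorded in the labels of one member disk.
  zdb -l /dev/mfid0 | grep txg

  # Probe the pool as of that txg; -AAA makes assertions non-fatal and
  # -L skips leak checking, as in the commands above.
  zdb -AAA -L -e -t 1234567 -u storage

Walking the txg backwards until zdb stops faulting would give a rewind target for a subsequent, ideally read-only, import attempt.]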
On Fri, 06 Feb 2015 12:21:04 +0100 Michelle Sullivan wrote:

> Xin Li wrote:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA512
> >
> > On 2/5/15 18:20, Michelle Sullivan wrote:
> >
> >> Xin Li wrote: On 02/05/15 17:36, Michelle Sullivan wrote:
> >>
> >>>>>> This suggests the pool was connected to a different system,
> >>>>>> is that the case?
> >>>>>>
> >>>>> No.
> >>>>>
> >> Ok, that's good. Actually if you have two heads that writes to
> >> the same pool at the same time, it can easily enter an
> >> unrecoverable state.
> >>
> >>>>>> It's hard to tell right now, and we shall try all possible
> >>>>>> remedies but be prepared for the worst.
> >>>>>>
> >>>>> I am :(
> >>>>>
> >> The next thing I would try is to:
> >>
> >> 1. move /boot/zfs/zpool.cache to somewhere else;
> >>
> >>> There isn't one. However 'cat'ing the inode I can see there was
> >>> one...
> >>>
> >>> [quoted inode dump elided; identical to the dump in the earlier message]
> >
> >> 2. zpool import -f -n -F -X storage and see if the system would
> >> give you a proposal.
> >>
> >>> This crashes the machine (without -n) with an out-of-memory condition... there's
> >>> 32G of RAM. /boot/loader.conf contains:
> >>>
> >>> vfs.zfs.prefetch_disable=1
> >>> #vfs.zfs.arc_min="8G"
> >>> #vfs.zfs.arc_max="16G"
> >>> #vm.kmem_size_max="8"
> >>> #vm.kmem_size="6G"
> >>> vfs.zfs.txg.timeout="5"
> >>> kern.maxvnodes=250000
> >>> vfs.zfs.write_limit_override=1073741824
> >>> vboxdrv_load="YES"
> >>>
> >
> > Which release is this? write_limit_override has been removed quite a
> > while ago.
> >
>
> FreeBSD colossus 9.2-RELEASE-p15 FreeBSD 9.2-RELEASE-p15 #0: Mon Nov 3
> 20:31:29 UTC 2014
> root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
>
> > I'd recommend using a fresh -CURRENT snapshot if possible (possibly
> > with -NODEBUG kernel).
> >
>
> I'm sorta afraid to try and upgrade it at this point.
> > Michelle > From owner-freebsd-fs@FreeBSD.ORG Fri Feb 6 16:17:47 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 12409D78 for ; Fri, 6 Feb 2015 16:17:47 +0000 (UTC) Received: from mail-vc0-x22a.google.com (mail-vc0-x22a.google.com [IPv6:2607:f8b0:400c:c03::22a]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B967982D for ; Fri, 6 Feb 2015 16:17:46 +0000 (UTC) Received: by mail-vc0-f170.google.com with SMTP id kv7so5374462vcb.1 for ; Fri, 06 Feb 2015 08:17:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=dUwkker78zaxcjn3l6rJIg6Rteu2rYz32Ap/01zlJlo=; b=HVOfTJ1ptDix8png3EsOZ96qKNCPzjfDx7qBxPsNdZ+rJT0DFZWN9qrxfmP1pzAJDq B2k0v8P6HiuEwNsI4atUZj3xtcLtte3sSmeeW4mJYVBNuG52QHKZIChEvYh56F5qo9le Peu1s8Qo4W7UCyJVTQ9GxUpJzog1hNeRYgUS8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=dUwkker78zaxcjn3l6rJIg6Rteu2rYz32Ap/01zlJlo=; b=EOK0R1dfhuvDsnzhpDOl/cwbnsFCHA4j2cVZn4iLVx0e8Odqtz+anN0EzXU7jMvbNl MXCcoZhMNqqiX96UmERCuvAi+Bf+jnl+MA7OP7zilS+xRks365suBde5GrAZuusXptPf gvro5Lyl/8r9PpvI4FdhQCmCuXoFTdqHdgu6p2CIOT8VadQ3BinDZcC85OE4shboVOqv APV6pklwmNO5QAT5YgK3aPhi1mNVQvEBqla3otFmnapsCFN/QnFY7/PGpwMCkhbEoFBy 9E1DEfMBTcIhjgpoBIedX5hH6x4Kd1ofOZ51mbl31tNT1wCpaKkpGWeBbK8HDZR/wDd4 fxgQ== X-Gm-Message-State: ALoCoQkQRMk38+Irb9flIivcX/FXR/4IVoXPAFyktXCePyoiUU8uZ15Nr3RwtB2Xq7xeYhkNUYR/ MIME-Version: 1.0 X-Received: by 10.52.168.105 with SMTP id zv9mr2199527vdb.33.1423239465370; Fri, 06 Feb 2015 08:17:45 -0800 (PST) Received: by 10.52.115.103 with HTTP; Fri, 6 Feb 2015 08:17:45 -0800 (PST) Received: by 10.52.115.103 with HTTP; Fri, 6 Feb 2015 08:17:45 -0800 (PST) In-Reply-To: <54D4A3A0.2040408@sorbs.net> References: <54D3E9F6.20702@sorbs.net> <54D41608.50306@delphij.net> <54D41AAA.6070303@sorbs.net> <54D41C52.1020003@delphij.net> <54D424F0.9080301@sorbs.net> <54D457F0.8080502@delphij.net> <54D4A3A0.2040408@sorbs.net> Date: Fri, 6 Feb 2015 08:17:45 -0800 Message-ID: Subject: Re: ZFS pool faulted (corrupt metadata) but the disk data appears ok... From: Michael Ware To: Michelle Sullivan Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: "freebsd-fs@freebsd.org" , d@delphij.net X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Feb 2015 16:17:47 -0000 You can mount using a live cd if you haven't tried yet. Maybe try the 10.1 iso and see if you have any luck. Mike On Feb 6, 2015 3:21 AM, "Michelle Sullivan" wrote: > Xin Li wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA512 > > > > > > > > On 2/5/15 18:20, Michelle Sullivan wrote: > > > >> Xin Li wrote: On 02/05/15 17:36, Michelle Sullivan wrote: > >> > >> > >>>>>> This suggests the pool was connected to a different system, > >>>>>> is that the case? 
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>> No.
> >>>>>
> >> Ok, that's good. Actually if you have two heads that writes to
> >> the same pool at the same time, it can easily enter an
> >> unrecoverable state.
> >>
> >>>>>> It's hard to tell right now, and we shall try all possible
> >>>>>> remedies but be prepared for the worst.
> >>>>>>
> >>>>> I am :(
> >>>>>
> >> The next thing I would try is to:
> >>
> >> 1. move /boot/zfs/zpool.cache to somewhere else;
> >>
> >>> There isn't one. However 'cat'ing the inode I can see there was
> >>> one...
> >>>
> >>> [quoted inode dump elided; identical to the dump in the earlier
> >>> message, here additionally mangled by quoted-printable encoding]
> >
> >> 2. zpool import -f -n -F -X storage and see if the system would
> >> give you a proposal.
> >>
> >>> This crashes the machine (without -n) with an out-of-memory condition... there's
> >>> 32G of RAM. /boot/loader.conf contains:
> >>>
> >>> vfs.zfs.prefetch_disable=1
> >>> #vfs.zfs.arc_min="8G"
> >>> #vfs.zfs.arc_max="16G"
> >>> #vm.kmem_size_max="8"
> >>> #vm.kmem_size="6G"
> >>> vfs.zfs.txg.timeout="5"
> >>> kern.maxvnodes=250000
> >>> vfs.zfs.write_limit_override=1073741824
> >>> vboxdrv_load="YES"
> >
> > Which release is this? write_limit_override has been removed quite a
> > while ago.
> >
>
> FreeBSD colossus 9.2-RELEASE-p15 FreeBSD 9.2-RELEASE-p15 #0: Mon Nov 3
> 20:31:29 UTC 2014
> root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
>
> > I'd recommend using a fresh -CURRENT snapshot if possible (possibly
> > with -NODEBUG kernel).
> >
>
> I'm sorta afraid to try and upgrade it at this point.
>
> Michelle
>
> --
> Michelle Sullivan
> http://www.mhix.org/
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
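[Editorial note, not part of the original thread: a sketch of the live-CD check suggested in the last two messages, using the device and pool names from earlier in the thread:

  # From a 10.1 install/live image:
  gpart show mfid0        # see what partitioning, if any, GEOM finds on a member
  zpool import            # scan all disks and list importable pools
  zpool import -f -o readonly=on -R /mnt storage   # attempt an import that writes nothing

A read-only import, if it succeeds under the newer code, would let the data be copied to fresh storage before any destructive recovery is attempted.]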