From owner-freebsd-fs@FreeBSD.ORG Thu May 9 12:31:50 2013
From: Outback Dingo <outbackdingo@gmail.com>
To: freebsd-fs@freebsd.org
Date: Thu, 9 May 2013 08:31:49 -0400
Subject: Corrupted zpool import -f FAILS state FAULTED

ok zfs gurus,

FreeBSD 9.1-STABLE box. zpool import -f reports the pool state as
FAULTED, "one or more devices contains corrupted data". However, it is
showing the faulted member in the pool by GUID rather than by the
actual disk device /dev/daX. The pool is a single-vdev, 24-disk raidz3.

The hardware platform is a dual-node system, with 8 enclosures
connecting 24 SAS drives via 4 LSI cards. I am not currently using
geom_multipath, but the box is zoned so that each node can see 50% of
the drives; in case of failure, CARP kicks in and migrates the pools
onto the other node with "zpool import -af".

It seems as though the pool is now seeing GUIDs and not devices. I am
not sure whether the disks have switched device IDs due to a reboot.
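To confirm this node can actually see all 24 drives (and not just its
zoned half) before blaming the labels, I can check the SAS targets
directly. A quick sanity check, assuming the disks enumerate as
/dev/da* (my assumption; adjust for the actual device names):

    # List every SCSI/SAS device this node's LSI cards currently see
    camcontrol devlist
    # Count the da disk nodes that made it into /dev
    ls /dev/da* | wc -l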
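To map the GUIDs that zpool import reports back to physical devices,
the on-disk labels can be dumped with zdb. A rough sketch, again
assuming the drives show up as /dev/da0 through /dev/da23:

    #!/bin/sh
    # Dump the ZFS label from each da device so its vdev guid can be
    # matched against the GUIDs "zpool import" lists as FAULTED.
    for d in /dev/da*; do
        echo "=== ${d} ==="
        # The label records the pool/vdev GUIDs and the path the vdev
        # was last imported under.
        zdb -l "${d}" | grep -E 'pool_guid|guid|path|state'
    done

If a label's recorded path no longer matches the device node it was
read from, that would confirm the disks renumbered across the reboot.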
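If the labels are intact and only the device paths moved, re-scanning
/dev at import time may let ZFS rebuild the mapping itself. This is
only a sketch of what I would try next, not something I have verified
fixes this state; "tank" is a placeholder pool name:

    # Re-scan /dev for pool members instead of trusting zpool.cache
    zpool import -d /dev
    # If the pool then shows as importable, attempt a forced,
    # read-only import first so nothing is written to a damaged pool
    zpool import -d /dev -f -o readonly=on tank

Importing read-only first seems like the safer order, since a forced
read-write import of a pool in this state could make recovery harder.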