From owner-freebsd-fs@freebsd.org Sun May 29 02:42:43 2016
Date: Sat, 28 May 2016 19:42:42 -0700
Subject: Re: ZFS - RAIDZ1 Recovery (Evgeny Sam)
From: Evgeny Sam <esamorokov@gmail.com>
To: BlackCat
Cc: freebsd-fs@freebsd.org
List-Id: Filesystems

The command "zpool import -fFn 2918670121059000644 zh_vol_old" does not
show anything.

The failed drive is (and has always been) 'ada3'; it is currently showing
bad blocks. FreeNAS sent me an alert that the drive had failed, and I
verified that.

[root@juicy] ~# zpool status
no pools available

[root@juicy] ~# zpool import
   pool: zh_vol
     id: 2918670121059000644
  state: FAULTED
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing devices and try
        again. The pool may be active on another system, but can be
        imported using the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:

        zh_vol                                          FAULTED  corrupted data
          raidz1-0                                      DEGRADED
            17624020450804741401                        UNAVAIL  cannot open
            gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587  ONLINE
            gptid/5dacd737-18ac-11e6-9c25-001b7859b93e  ONLINE

[root@juicy] ~# zdb
zh_vol:
    version: 5000
    name: 'zh_vol'
    state: 0
    txg: 1491
    pool_guid: 10149654347507244742
    hostid: 1802987710
    hostname: 'juicy.zhelana.local'
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 10149654347507244742
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 5892508334691495384
            path: '/dev/ada0s2'
            whole_disk: 1
            metaslab_array: 33
            metaslab_shift: 23
            ashift: 12
            asize: 983564288
            is_log: 0
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 296669430778697937
            path: '/dev/ada2p2'
            whole_disk: 1
            metaslab_array: 37
            metaslab_shift: 34
            ashift: 12
            asize: 2997366816768
            is_log: 0
            create_txg: 1489
    features_for_read:

[root@juicy] ~# zdb -e
SHOWS ONLY HELP INFO

[root@juicy] ~# dmesg | grep ada
ada0 at ahcich2 bus 0 scbus0 target 0 lun 0
ada0: ATA-8 SATA 3.x device
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 57241MB (117231408 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich3 bus 0 scbus1 target 0 lun 0
ada1: ATA-9 SATA 3.x device
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada1: quirks=0x1<4K>
ada1: Previously was known as ad6
ada2 at ahcich4 bus 0 scbus2 target 0 lun 0
ada2: ATA-9 SATA 3.x device
ada2: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2: quirks=0x1<4K>
ada2: Previously was known as ad8
ada3 at ahcich5 bus 0 scbus3 target 0 lun 0
ada3: ATA-8 SATA 3.x device
ada3: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada3: quirks=0x1<4K>
ada3: Previously was known as ad10
GEOM_ELI: Device ada1p1.eli created.
GEOM_ELI: Device ada2p1.eli created.

On Sat, May 28, 2016 at 5:03 PM, BlackCat wrote:
> 2016-05-29 2:07 GMT+03:00 Evgeny Sam :
> > I ran the command "zpool import -fFn 2918670121059000644 zh_vol_old"
> > and it did not work.
> >
> > [root@juicy] ~# zpool import -fFn 2918670121059000644 zh_vol_old
> > [root@juicy] ~# zpool status
> > no pools available
>
> Does 'zpool import' print anything?
>
> > I think it did not work because I am running it on the cloned drives,
> > which have different GPTIDs; please correct me if I am wrong. I can
> > switch to the original drives, if you suggest so.
>
> No, please do not connect the original drives until you get your data back.
>
> OK, let's start collecting information from the beginning. Could you
> describe exactly which disk failed (e.g. /dev/ada????)? I still cannot
> work this out. Please also describe what happened to the disk (e.g. a
> lot of bad blocks, or the disk controller starting to fail).
>
> Could you show the output of these commands:
> zpool status
> zpool import
> zdb
> zdb -e
>
> And if your disk has bad blocks, then please show or post to pastebin
> the output of (where ??? is the failed disk):
> dmesg | grep 'ada???'
>
> --
> BR BC
>
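
P.S. In case it helps, one possible next step on the cloned drives (a
sketch only; the pool GUID and device names are taken from the output
above, and the read-only rewind import is an assumption about what is
safe to try on clones, not something confirmed in this thread):

    # Map gptids to underlying devices, and check the ZFS labels on a clone:
    [root@juicy] ~# glabel status
    [root@juicy] ~# zdb -l /dev/ada2p2

    # Attempt a read-only rewind import by pool GUID, mounted under /mnt,
    # so nothing is written back to the disks:
    [root@juicy] ~# zpool import -o readonly=on -f -F -R /mnt 2918670121059000644 zh_vol_old

If the read-only import succeeds, the data can be copied off before any
repair (such as 'zpool replace' for the failed member) is attempted.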