From owner-freebsd-fs@freebsd.org Sun Aug 13 03:14:44 2017
From: Paul Kraus <paul@kraus-haus.org>
Subject: Re: Oh no, what have I done...
Date: Sat, 12 Aug 2017 23:14:40 -0400
To: Chris Ross, FreeBSD FS
Message-Id: <06EEEF86-E466-44EA-86F1-866DA32DD92D@kraus-haus.org>
In-Reply-To: <928F1F16-0777-4D66-BD27-8224DF80363C@distal.com>

> On Aug 12, 2017, at 10:48 PM, Chris Ross wrote:
>
>> On Aug 12, 2017, at 22:23, Paul Kraus wrote:
>> The same disk.
>> ZFS is looking for the labels it writes (4x for redundancy; 2 at the beginning of the disk and 2 at the end; it only _needs_ to find one of them) to identify the drive and include it in the zpool. If that disk no longer contains the ZFS labels then you are probably out of luck.
>
> Okay. Well, the work I had done on the other disk to "wipe" it only wiped the beginning. So I appear to have gotten lucky that ZFS writes labels at the end, and also learned that my intent of wiping it was insufficient. In this case, to my benefit. I was able to bring the zpool back online.

You got lucky (and in this case you win :-)

>> You _may_, and this is a very, very long shot, be able to force an import to a TXG (transaction group) _before_ you added the lone drive vdev.
>
> I'm curious to learn more about this. Had I inserted a geometrically identical disk, without the labels ZFS was able to find, and the above was my option, how could I have done that? And, now that I have it up again, am I moving farther away from that possibility?

Using zdb you can examine the TXGs, and there is a way to force an import to a certain TXG (at least under Illumos there is; I assume the FBSD ZFS code is the same). I do not remember the specific procedure, but it went by on one of the ZFS mailing lists a few years ago. The goal at the time was not to deal with a changed pool configuration but a series of writes that corrupted something (possibly due to a feature being turned on at a certain point in time). You would lose all data written after the TXG you force the import to.

Looking at the manpages for zpool and zdb, it looks like doing this has gotten easier with `zdb -e -F` ... operate on an exported pool and roll back TXGs until the pool is importable ... but I'm sure there are limits :-)

Spend time with the zdb manpage and remember that many of these functions were added because someone needed to do something "inappropriate" to a pool to recover some data :-)

>>> I'll try putting a disk with the same layout in, and see how that goes. That was a thought I'd had.
>>
>> Won't work; without the ZFS labels it will not be seen as the missing drive.
>
> K. What, if it hadn't found labels I'd failed to clear, would it have done with it? Just considered it irrelevant, and still refused to bring the pool in, much the same way as I'd been trying with the disk forcibly removed?

ZFS would have considered the new drive as just that, and the pool would still be un-importable due to a missing top level vdev.

> Thanks all. I'd like to learn more about what to do in this situation, but I appear to have at least gotten to the point where I can create a backup, which should allow me to destroy and create a pool more to my actual intent.

Keep in mind that you cannot add devices _within_ a vdev for capacity, only to add mirrors; you add capacity to a pool by _adding_ vdevs. As you learned, the top level vdevs do not all have to be the same type (mirror, raidz), but with different types of vdevs it is almost impossible to predict the performance of the pool. These days I generally only use raidz2 for backup data or long term storage; I use mirrors (2 or 3 way) for production data. This gets me better performance and easier scalability.
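For example (a rough sketch only, with a made-up pool name "tank" and made-up device names), growing capacity versus growing redundancy looks something like:

    # add a new top level mirror vdev to the pool -- this is how you add capacity
    zpool add tank mirror da4 da5

    # attach a disk to an existing drive, turning a single-drive vdev into a
    # mirror -- this adds redundancy, not capacity
    zpool attach tank da2 da3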
ZFS is very good at getting you access to your data as long as it can find enough of each top level vdev to bring them all online. I once had a hardware config with limitations, so I designed my pools around that, and _when_ the external disk chassis went offline I lost performance but no data.

When trying to recover a ZFS pool, never overwrite any drives that may have been part of the pool, as you may need them to bring the pool back online.
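To make the recovery discussion above a bit more concrete, the rough shape of the label check and the rewind import would be something like this (again a sketch with made-up pool/device names; read the zpool and zdb manpages before trying any of it on a pool you care about):

    # see whether a drive still carries any of the 4 ZFS labels
    zdb -l /dev/da2

    # dry run: check whether the pool could be made importable by discarding
    # the last few transactions
    zpool import -F -n tank

    # actually attempt the rewind import (the most recent writes are lost)
    zpool import -F tank

    # examine an exported / un-importable pool, rolling back TXGs as needed
    zdb -e -F tank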