From: Adam Vande More
To: Dan Langille
Cc: freebsd-stable
Date: Mon, 19 Jul 2010 21:50:03 -0500
Subject: Re: Problems replacing failing drive in ZFS pool

On Mon, Jul 19, 2010 at 9:07 PM, Dan Langille wrote:

>> I think it's because you pull the old drive, boot with the new drive,
>> the controller re-numbers all the devices (i.e. da3 is now da2, da2 is
>> now da1, da1 is now da0, da0 is now da6, etc.), and ZFS thinks that all
>> the drives have changed, thus corrupting the pool.  I've had this
>> happen on our storage servers a couple of times before I started using
>> glabel(8) on all our drives (dead drive on RAID controller, remove
>> drive, reboot for whatever reason, all device nodes are renumbered,
>> everything goes kablooey).
>
> Can you explain a bit about how you use glabel(8) in conjunction with
> ZFS?  If I can retrofit this into an existing ZFS array to make things
> easier in the future...

If you've used whole disks in ZFS, you can't retrofit it, at least not if
by "retrofit" you mean an almost painless way of resolving this.  GEOM
setup generally has to happen BEFORE the file system is put on the
device.  You would create your partition(s) slightly smaller than the
disk, label them, and then use the resulting labelled device as the ZFS
device when creating the pool.  If you have an existing whole-disk
install, that means restoring the data after you've done those steps.

It works just as well with MBR-style partitioning; nothing says you have
to use GPT.  GPT is simply better in terms of ease of use, IMO, among
other things.
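As a rough sketch (the device name da1, the "disk1" label, the "tank"
pool, and the partition size are all just placeholders for illustration),
a GPT-based setup would look something like this:

  # create a GPT scheme on the new disk and add a labelled ZFS partition
  # slightly smaller than the raw disk, leaving slack in case a future
  # replacement drive is a few sectors smaller
  gpart create -s gpt da1
  gpart add -t freebsd-zfs -l disk1 -s 930G da1

  # the GPT label shows up as /dev/gpt/disk1 and stays stable across
  # device renumbering, so use that name when building the pool
  zpool create tank /dev/gpt/disk1

  # or, when swapping a failed member of an existing pool:
  # zpool replace tank da1 gpt/disk1

With MBR partitioning you would slice the disk instead and put a
glabel(8) label on the slice, e.g. "glabel label disk1 /dev/da1s1", then
use /dev/label/disk1 the same way.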
--
Adam Vande More