From: Joshua Boyd <boydjd@jbip.net>
Date: Sat, 31 Dec 2011 19:08:18 -0500
To: Michael DeMan
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs detach/replace

On Sat, Dec 31, 2011 at 1:58 AM, Michael DeMan wrote:

> Hi All,
>
> The origin of the problem is entirely my fault, on FreeBSD
> 8.1-RELEASE #0. We had old notes that attempting a 'replace' (which
> is appropriate for a mirror) leaves ZFS in a funky state on BSD, and
> I inadvertently did just that on a drive swap on a raidz2 pool. My
> old notes show that the only recovery we knew of at the time was to
> rsync or zfs-send the pool elsewhere, destroy the local pool, and
> rebuild from scratch.

I've never had a problem before, and have replaced about 5 drives in
my striped raidz. Usually I'll execute a zpool offline, camcontrol
stop, remove the drive, then zpool replace. I am running 8-STABLE
though, and not -RELEASE.
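In case it's useful, the sequence looks roughly like this. The device
(ada2) and pool name (tank) are placeholders, not from this thread;
adjust for your own hardware:

    # Tell ZFS to stop issuing I/O to the failing disk.
    # ("tank" and "ada2" are example names, not from this thread.)
    zpool offline tank ada2

    # Spin the drive down before physically pulling it from the bay.
    camcontrol stop ada2

    # After swapping in the new drive, resilver onto it.
    zpool replace tank ada2

--
Joshua Boyd
E-mail: boydjd@jbip.net
http://www.jbip.net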