From owner-freebsd-fs@FreeBSD.ORG Tue Jan 3 12:27:52 2012
Date: Tue, 3 Jan 2012 12:27:51 +0000
From: krad <kraduk@gmail.com>
To: Dan Carroll
Cc: freebsd-fs@freebsd.org
In-Reply-To: <4F003EB8.6080006@dannysplace.net>
References: <4F003EB8.6080006@dannysplace.net>
Subject: Re: ZFS With Gpart partitions

On 1 January 2012 11:08, Dan Carroll wrote:

> Hello all,
>
> I'm currently trying to fix a suspect drive and I've run into a small
> problem. I was wondering if someone can shed some light on how gpart
> works when using labels for partitions.
>
> My drives are 2 TB WD RE4s. The array originally used 1 TB Seagate
> drives, and I was replacing about three of those a year, but since I
> migrated to the RE4s this is my first problem.
> Here is my setup:
>
>         NAME            STATE     READ WRITE CKSUM
>         areca           ONLINE       0     0     0
>           raidz1        ONLINE       0     0     0
>             gpt/data0   ONLINE       0     0     0
>             gpt/data1   ONLINE       0     0     0
>             gpt/data2   ONLINE       0     0     0
>             gpt/data3   ONLINE     103     0     0
>             gpt/data4   ONLINE       0     0     0
>             gpt/data5   ONLINE       0     0     0
>           raidz1        ONLINE       0     0     0
>             gpt/data6   ONLINE       0     0     0
>             gpt/data7   ONLINE       0     0     0
>             gpt/data8   ONLINE       0     0     0
>             gpt/data9   ONLINE       0     0     0
>             gpt/data10  ONLINE       0     0     0
>             gpt/data11  ONLINE       0     0     0
>
> errors: No known data errors
>
> The drives are connected via an Areca controller; each drive is created
> as a pass-through (just like JBOD, but also using the controller's cache
> and BBU). My problem began when I tried to replace gpt/data3.
>
> Here is what I did:
>
> # zpool offline areca gpt/data3
> # shutdown -p now
>
> (I could not remember the camcontrol commands to detach a device, and
> shutting down was not an issue, so that's the way I did it.)
> I then replaced the failing drive, re-created the pass-through device in
> the Areca console, and powered on.
>
> All good so far, except that the drive I used as a replacement came from
> a decommissioned server and already had a gpart label on it. As it
> happens, it was labelled data2.
>
> I quickly shut down the system, took the new drive out, put it into
> another machine, and wiped the first few megabytes of the disk with dd.
> I re-inserted the drive, recreated the pass-through, powered up, and
> replaced the offlined drive. Now it's resilvering.
> Currently, my system looks like this:
>
>         NAME                STATE     READ WRITE CKSUM
>         areca               DEGRADED     0     0     0
>           raidz1            DEGRADED     0     0     0
>             gpt/data0       ONLINE       0     0     0
>             gpt/data1       ONLINE       0     0     0
>             da8p1           ONLINE       0     0     0
>             replacing       DEGRADED     0     0     0
>               gpt/data3/old OFFLINE      0     0     0
>               gpt/data3     ONLINE       0     0     0  931G resilvered
>             gpt/data4       ONLINE       0     0     0
>             gpt/data5       ONLINE       0     0     0
>           raidz1            ONLINE       0     0     0
>             gpt/data6       ONLINE       0     0     0
>             gpt/data7       ONLINE       0     0     0
>             gpt/data8       ONLINE       0     0     0
>             gpt/data9       ONLINE       0     0     0
>             gpt/data10      ONLINE       0     0     0
>             gpt/data11      ONLINE       0     0     0
>
> The resilvering looks like it's working fine, but I am curious about the
> gpart label. When I query da8p1 I cannot find it:
>
> # gpart show da8
> =>        34  3907029101  da8  GPT  (1.8T)
>           34  3907029101    1  freebsd-zfs  (1.8T)
>
> # glabel list da8p1
> glabel: No such geom: da8p1.
>
> It should look like this:
>
> # gpart show da0
> =>        34  3907029101  da0  GPT  (1.8T)
>           34  3907029101    1  freebsd-zfs  (1.8T)
>
> # glabel list da0p1
> Geom name: da0p1
> Providers:
> 1. Name: gpt/data0
>    Mediasize: 2000398899712 (1.8T)
>    Sectorsize: 512
>    Mode: r1w1e1
>    secoffset: 0
>    offset: 0
>    seclength: 3907029101
>    length: 2000398899712
>    index: 0
> Consumers:
> 1. Name: da0p1
>    Mediasize: 2000398899712 (1.8T)
>    Sectorsize: 512
>    Mode: r1w1e2
>
> So it seems to me that when I inserted the second drive with a label
> called data2, it wiped the label from the *original* drive.
> ZFS does not seem to care about this. If the label is simply a label,
> and losing it does not alter the user data on the drive, then this
> makes sense.
>
> I am wondering if I can simply re-label the partition without fear of
> breaking something? Reading the glabel man page, I suspect that it may
> be OK.
>
> -D

Just a note: you don't appear to be 4k-aligned on this drive. As the
drive capacity is over 1.5 TB, it is quite likely a 4k-sector ("Advanced
Format") drive, so you probably should be. Your pool will also be
ashift=9. This may or may not be a problem for you.
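
You can see the misalignment in the gpart output above: the partition
starts at sector 34, and 34 * 512 = 17408 bytes, which is not a multiple
of 4096. For future replacements, here is a rough sketch of how you could
create an aligned, labelled partition instead (da8 and the data3 label
are just taken from your mail; note that gpart destroy wipes the
existing partition table):

# gpart destroy -F da8
# gpart create -s gpt da8
# gpart add -t freebsd-zfs -b 2048 -l data3 da8

Starting at sector 2048 puts the partition on the 1 MiB boundary, which
is aligned for both 512-byte and 4k sectors. If your gpart is new enough
it also accepts -a 4k and works the offset out for you.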
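
Also bear in mind that ashift is fixed per vdev when the vdev is
created, so aligning the partitions on their own won't change it. You
can check what you currently have with zdb (the exact output format
varies between versions):

# zdb -C areca | grep ashift

If you ever rebuild the pool and want ashift=12, the usual trick on
FreeBSD is to put a 4k gnop device under one member while creating the
pool. A sketch using your pool and device names (this only applies to a
fresh pool, and the second raidz1 vdev is omitted for brevity):

# gnop create -S 4096 gpt/data0
# zpool create areca raidz1 gpt/data0.nop gpt/data1 gpt/data2 \
    gpt/data3 gpt/data4 gpt/data5
# zpool export areca
# gnop destroy gpt/data0.nop
# zpool import areca

ZFS sizes ashift to the largest sector size among the vdev's members,
and the setting persists after the .nop provider is gone.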
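
As for the relabelling question: the GPT label is stored in the
partition table, not inside the partition, so setting it again should
not touch the ZFS data at all. You can first check whether the label is
really gone from the table with gpart show -l da8; if it is, something
along these lines should put it back (index 1 matches the gpart show
output above):

# gpart modify -i 1 -l data2 da8

One caveat: the gpt/data2 device node may not appear while da8p1 is held
open by ZFS, since label providers are only created when the partition
is re-tasted, so an export/import or a reboot may be needed before zpool
status shows the label again.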