From: Robert Noland <rnoland@FreeBSD.org>
Date: Thu, 01 Apr 2010 14:34:17 -0500
To: Olivier Smedts
Cc: Bartosz Stec, freebsd-fs@freebsd.org, freebsd-current@freebsd.org
Subject: Re: gpart failing with no such geom after gpt corruption
Message-ID: <4BB4F539.1070204@FreeBSD.org>
In-Reply-To: <4BB4EDC9.2050507@FreeBSD.org>
References: <4BB49E3F.7070506@it4pro.pl> <4BB4EDC9.2050507@FreeBSD.org>
List-Id: Discussions about the use of FreeBSD-current

Robert Noland wrote:
>
> Olivier Smedts wrote:
>> 2010/4/1 Bartosz Stec:
>>> Hello ZFS and GPT hackers :)
>>>
>>> I'm sending this message to both freebsd-current and
>>> freebsd-fs because it doesn't seem to be a CURRENT-specific issue.
>>>
>>> Yesterday I tried to migrate my mixed UFS/RAIDZ config to a clean
>>> RAIDZ with GPT boot, mostly following this guide:
>>> http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/RAIDZ1
>>> I'm using CURRENT on 3x40GB HDDs (ad0-ad3), and an additional 250GB
>>> HDD (ad4) was used for data migration.
>>>
>>> Data was copied from the RAIDZ to the 250GB HDD, a GPT scheme was
>>> created on the 40GB HDDs, then a new zpool on them, and finally the
>>> data went back to the RAIDZ. Booting from the RAIDZ was successful;
>>> so far so good.
>>>
>>> After a while I noticed some SMART errors on ad1, so I booted the
>>> machine with SeaTools for DOS and ran a long test. One bad sector
>>> was found and reallocated, nothing to worry about.
>>> Since I was in SeaTools already, I decided to adjust the LBA size on
>>> that disk (SeaTools can do that), because it was about 30MB larger
>>> than the other two, and because of that I had had to shrink the
>>> freebsd-zfs partition on that disk to match the exact size of the
>>> others (otherwise 'zpool create' complains). So the LBA size was
>>> adjusted and the system rebooted.
>>
>> I don't understand why you adjusted the LBA size. You're using GPT
>> partitions, so couldn't you just make the zfs partition the same size
>> on all disks by sizing it to the smallest disk, and leave free space
>> at the end of the bigger ones?
>
> For that matter, my understanding is that ZFS just doesn't care. If
> you have disks of different sizes in a raidz, the pool size will be
> limited by the size of the smallest device. If those devices are
> replaced with larger ones, the pool just grows to take advantage of
> the additional available space.
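This is easy to try with md(4)-backed scratch disks. A dry-run sketch of a setup that would produce a layout like the one below — each command is echoed rather than executed; the device names md0-md2, the swap-backed md type, and the sizes are assumptions, so drop the echo wrapper and run as root on a scratch FreeBSD box only if you have verified them:

```shell
#!/bin/sh
# Dry-run sketch: raidz1 over GPT partitions on three md(4) devices of
# unequal size (1G, 1G, 2G).  run() echoes each command instead of
# executing it; change it to run() { "$@"; } to perform the steps.
run() { echo "$@"; }

devs=""
i=0
for size in 1g 1g 2g; do
    md="md$i"
    run mdconfig -a -t swap -s "$size" -u "$i"   # create memory disk
    run gpart create -s gpt "$md"                # fresh GPT on it
    run gpart add -t freebsd-boot -s 128 "$md"   # 128 sectors = 64K
    run gpart add -t freebsd-zfs "$md"           # rest for ZFS
    devs="$devs ${md}p2"
    i=$((i + 1))
done
run zpool create test raidz $devs
# With 1G+1G+2G members, raidz1 capacity tracks the smallest device:
# "zpool list" reports roughly 3 x 1G, as in the output below.
```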
balrog% gpart show
=>      34  2097085  md0  GPT  (1.0G)
        34      128    1  freebsd-boot  (64K)
       162  2096957    2  freebsd-zfs  (1.0G)

=>      34  2097085  md1  GPT  (1.0G)
        34      128    1  freebsd-boot  (64K)
       162  2096957    2  freebsd-zfs  (1.0G)

=>      34  4194237  md2  GPT  (2.0G)
        34      128    1  freebsd-boot  (64K)
       162  4194109    2  freebsd-zfs  (2.0G)

balrog% zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            md0p2   ONLINE       0     0     0
            md1p2   ONLINE       0     0     0
            md2p2   ONLINE       0     0     0

errors: No known data errors

balrog% zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test  2.98G   141K  2.98G     0%  ONLINE  -

robert.

> robert.
>
>> Cheers,
>> Olivier
>>
>>> Yes, I was aware that changing the disk size would probably end with
>>> a corrupted GPT and data loss, but that didn't seem like a big deal
>>> to me as long as 2/3 of the zpool was alive, because I could always
>>> recreate the GPT and resilver ad1.
>>>
>>> Unfortunately it wasn't so easy. First of all, the system booted,
>>> and as I expected the kernel messages showed a GPT error on ad1. The
>>> zpool was degraded but alive and kicking. However, when I tried to
>>> execute any gpart command on ad1, it returned:
>>>
>>> ad1: no such geom
>>>
>>> ad1 was present under /dev, and it could be accessed by
>>> sysinstall/fdisk, but not with gpart. I created a BSD slice with
>>> sysinstall on ad1 and rebooted, hoping that after the reboot I could
>>> access ad1 with gpart and recreate the GPT scheme. Another surprise:
>>> the system didn't boot at all, rebooting after a couple of seconds
>>> in the loader (changing the boot device didn't make a difference).
>>>
>>> The only way I could boot the system at this point was by connecting
>>> the 250GB HDD, which fortunately still had the data from the zpool
>>> migration, and booting from it. Another surprise: the kernel was
>>> still complaining about GPT corruption on ad1. I had no other
>>> ideas, so I ran
>>>
>>> dd if=/dev/zero of=/dev/ad1 bs=512 count=512
>>>
>>> to clear the beginning of the HDD.
>>> After that, the disk was still inaccessible from gpart, so I tried
>>> sysinstall/fdisk again to create a standard BSD partitioning scheme
>>> and rebooted. After that, gpart finally started talking to ad1; the
>>> GPT scheme and zpool have been recreated and work as they are
>>> supposed to.
>>>
>>> Still, how can we clear broken GPT data after it gets corrupted?
>>> Why was gpart showing "ad1: no such geom", and how can we deal with
>>> this problem?
>>> Finally, why did gptzfsboot fail with the GPT corrupted on another
>>> disk after I tried to fix it, when it had booted in the first place?
>>>
>>> Or maybe changing the LBA size of an already-partitioned HDD is an
>>> extreme case, and the only way these problems could be triggered ;)?
>>>
>>> Cheers!
>>>
>>> --
>>> Bartosz Stec
>>>
>>>
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
>>
>>
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
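One likely reason the dd over the first 512 sectors didn't silence the complaints: GPT keeps two copies of its metadata — a primary header at LBA 1 with its table in the following 32 sectors, and a backup header in the very last LBA with its table in the 32 sectors before it. Zeroing only the start of the disk leaves the stale backup copy in place. A sketch of clearing both copies follows; the diskinfo field layout, the placeholder mediasize, and the disk name ad1 are assumptions, and the dd lines are left commented out because they destroy the partition table:

```shell
#!/bin/sh
# Given a disk's mediasize (bytes) and sector size, the backup GPT
# occupies the last 33 sectors; this helper returns the first of them.
backup_gpt_start() {
    echo $(( $1 / $2 - 33 ))
}

disk=ad1
secsize=512
# On FreeBSD, mediasize could be read with something like:
#   mediasize=$(diskinfo /dev/$disk | awk '{print $3}')
# (verify the field index on your system); a placeholder value is used
# here so the arithmetic can be shown.
mediasize=40027029504
start=$(backup_gpt_start $mediasize $secsize)
echo "backup GPT of $disk starts at LBA $start"

# To wipe both copies (DANGEROUS -- destroys the partition table):
#   dd if=/dev/zero of=/dev/$disk bs=$secsize count=34
#   dd if=/dev/zero of=/dev/$disk bs=$secsize seek=$start count=33
# On later FreeBSD versions, "gpart destroy -F $disk" is the supported
# way to do this in one step.
```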