From: Steven Schlansker <stevenschlansker@gmail.com>
To: freebsd-fs@freebsd.org
Date: Wed, 23 Dec 2009 17:02:31 -0800
Subject: Re: ZFS: Can't repair raidz2 (Cannot replace a replacing device)
Message-Id: <36133DA6-C26B-4B1B-B3E1-DBB714232F59@gmail.com>
In-Reply-To: <5da0588e0912231632v14b5dfcdrc913a9deeac9e38a@mail.gmail.com>
List-Id: Filesystems <freebsd-fs@freebsd.org>

On Dec 23, 2009, at 4:32 PM, Rich wrote:

> That's fascinating - I'd swear it used to be the case (in
> Solaris-land, at least) that resilvering with a smaller vdev resulted
> in it shrinking the available space on other vdevs as though they were
> all as large as the smallest vdev available.

Pretty sure that this doesn't exist for raidz. I haven't tried, though,
and Sun's bug database's search blows chunks. I remember seeing a bug
filed on it before, but I can't for the life of me find it.

> In particular, I'd swear I've done this with some disk arrays I have
> lying around with 7x removable SCA drives, which I have in 2, 4.5, 9,
> and 18 GB varieties...
>
> But maybe I'm just hallucinating, or this went away a long time ago.
> (This was circa b70 in Solaris.)

Shrinking of mirrored drives seems like it might be working. Again,
Sun's bug database isn't clear at all about what can / can't be
shrunk - maybe I should get a Solaris boot disk and see if I can shrink
it from there...

> I know you can't do this in FreeBSD; I've also run into the
> "insufficient space" problem when trying to replace with a smaller
> vdev.
>
> - Rich
>
> On Wed, Dec 23, 2009 at 7:29 PM, Steven Schlansker wrote:
>>
>> On Dec 22, 2009, at 3:15 PM, Miroslav Lachman wrote:
>>
>>> Steven Schlansker wrote:
>>>> As a corollary, you may notice some funky concat business going on.
>>>> This is because I have drives which are very slightly different in
>>>> size (< 1MB), and whenever one of them goes down and I bring the
>>>> pool up, it helpfully (?) expands the pool by a whole megabyte and
>>>> then won't let the drive back in. This is extremely frustrating...
>>>> is there any way to fix that? I'm eventually going to keep
>>>> expanding each of my drives one megabyte at a time using gconcat
>>>> and space on another drive! Very frustrating...
>>>
>>> You can avoid it by partitioning the drives to a well-known
>>> 'minimal' size (the size of the smallest disk) and using the
>>> partition instead of the raw disk. For example, ad12s1 instead of
>>> ad12 (if you create slices with fdisk) or ad12p1 (if you create
>>> partitions with gpart).
>>
>> Yes, this makes sense. Unfortunately, I didn't do this when I first
>> made the array, as the documentation says you should use whole disks
>> so that it can enable the write cache, which I took to mean you
>> shouldn't use a partition table. And now there's no way to fix it
>> after the fact, as you can't shrink a zpool even by a single MB :(
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> --
>
> [We] use bad software and bad machines for the wrong things. -- R. W. Hamming
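[Archive editor's note: the partition-first workaround Miroslav describes
can be sketched roughly as below. The device names, the pool name "tank",
and the 931G size are illustrative placeholders, not taken from the
thread; pick a size slightly under your smallest disk.]

```sh
# Sketch of the workaround: give every disk a partition of a common
# "minimal" size before handing it to ZFS, so a replacement disk that
# is a few sectors smaller still fits the pool.
# Hypothetical devices: ada9 is the replacement, ada4 the failed member.

# Label the replacement disk with GPT and create one ZFS partition of
# the agreed-upon minimal size:
gpart create -s gpt ada9
gpart add -t freebsd-zfs -s 931G -l disk9 ada9

# Replace the failed member with the partition, not the raw disk:
zpool replace tank ada4 ada9p1
```

To my knowledge, the whole-disk write-cache behavior Steven mentions is a
Solaris ZFS detail; FreeBSD does not disable the drive's write cache when
you use partitions, so partitioning costs essentially nothing there.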