Date:      Thu, 15 Mar 2012 19:22:13 +0200
From:      George Mamalakis <mamalos@eng.auth.gr>
To:        Marco van Tol <marco@tols.org>
Cc:        stable@freebsd.org
Subject:   Re: grow zpool on a mirror setup
Message-ID:  <4F622545.8080802@eng.auth.gr>
In-Reply-To: <20120315162507.GA80200@tolstoy.tols.org>
References:  <4F61D9F0.6000005@eng.auth.gr> <20120315162507.GA80200@tolstoy.tols.org>

On 03/15/12 18:25, Marco van Tol wrote:
> On Thu, Mar 15, 2012 at 02:00:48PM +0200, George Mamalakis wrote:
>> Hello everybody,
>>
>> I have asked the same question on the FreeBSD forums, but had no luck.
>> Apart from this, there might be a bug somewhere, so I am re-asking the
>> question on this list. Here is how it goes (three posts):
>>
>> post 1:
>>
>> "I am experimenting with one installation of FreeBSD-9-STABLE/amd64 on a
>> VirtualBox that is using gptzfsboot on a raid-1 (mirrored) zfs pool. My
>> problem is that I need to grow the filesystem size of zfs partitions. I
>> followed this guide (http://support.freenas.org/ticket/342), which is
>> for FreeNAS, and encountered a few problems.
>>
>> # gpart show
>> =>       34  40959933  ada0  GPT  (19G)
>>          34       128     1  freebsd-boot  (64k)
>>         162  35651584     2  freebsd-zfs  (17G)
>>    35651746   5308221     3  freebsd-swap  (2.5G)
>>
>> =>       34  40959933  ada1  GPT  (19G)
>>          34       128     1  freebsd-boot  (64k)
>>         162  35651584     2  freebsd-zfs  (17G)
>>    35651746   5308221     3  freebsd-swap  (2.5G)
>>
>> # zpool status
>>    pool: zroot
>>   state: ONLINE
>>    scan: resilvered 912M in 1h3m with 0 errors on Sat Mar 10 14:01:17 2012
>> config:
>>
>>          NAME        STATE     READ WRITE CKSUM
>>          zroot       ONLINE       0     0     0
>>            mirror-0  ONLINE       0     0     0
>>              ada0p2  ONLINE       0     0     0
>>              ada1p2  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> # zpool list
>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>> zroot  5.97G  3.69G  2.28G    61%  1.00x  ONLINE  -
>>
>> Before explaining my problems, let me give you some information about
>> my setup:
>> As you can see, *gpart* shows that my ada0p2 and ada1p2 partitions (used
>> in zroot) are of size 17G, while *zpool list* shows that zroot has a size
>> of 5.97G (which is the initial size of the virtual machine's disks,
>> before I resized them).
>>
>> The problem I encountered when following the aforementioned procedure
>> was that I was unable to export zroot (the procedure says to export the
>> pool, "resize" the partitions with *gparted*, and then import the pool),
>> because I kept getting a message that some of my filesystems were busy
>> (in single-user mode, "/" was busy). To work around this, I booted from
>> a FreeBSD 9 RELEASE CDROM, imported my zpool (*-f*), and followed the
>> procedure of resizing my partitions.
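>>
>> For completeness, the delete/recreate step on the live CD went roughly
>> like this (a sketch; the swap partition has to go first, since it sits
>> behind the zfs partition, and the exact start/size values depend on the
>> new disk size):
>>
>> # gpart delete -i 3 ada0   # remove the old swap partition
>> # gpart delete -i 2 ada0   # remove the old zfs partition
>> # gpart add -t freebsd-zfs -i 2 -b 162 -s 17G ada0   # same start, bigger
>> # gpart add -t freebsd-swap ada0   # recreate swap in the remaining space
>>
>> (and the same for ada1)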
>>
>> Does anyone have a better idea of what I should do to make *zpool* see
>> all the available space of the partitions it is using?
>>
>> Thank you all for your time in advance,
>>
>> mamalos"
>>
>> post 2:
>>
>> "Ah,
>>
>> and not to forget: I have enabled the autoexpand property of the pool
>> (to be honest, I have enabled, disabled, and re-enabled it many times,
>> because somewhere I read that it might sometimes be needed...), with no
>> luck."
>>
>> post 3:
>>
>> "Since nobody has an answer that far, let me ask another thing. Instead
>> of deleting ada0p2 and ada1p2, and then recreating them from the same
>> starting block but with a grater size, could I have just created two new
>> filesystems (ada0p3 and ada1p3), and having them added in the pool as a
>> new mirror? Because if that's the case, then I could try that out, since
>> it seems to have the same result.
>>
>> Not that this answers to my question, but at least it's a workaround. "
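>>
>> A sketch of that workaround, assuming enough free space is left on
>> both disks (the partition names are illustrative; gpart assigns the
>> next free index), and keeping in mind that a vdev cannot be removed
>> from a pool once it has been added:
>>
>> # gpart add -t freebsd-zfs ada0
>> # gpart add -t freebsd-zfs ada1
>> # zpool add zroot mirror ada0p3 ada1p3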
>>
>> As stated in these posts, it is really strange that zpool list does not
>> seem to react even when I set the expand flag (or autoexpand, which is
>> the same), hence my concern that this could be a bug.
>>
>> Thank you all for your time,
> Hi,
>
> Have you tried offline, online -e yet?
>
> I have successfully done what you are trying, with physically larger
> drives.
>
> If I understand your layout correctly, you should be able to do the
> following:
>
> (as root)
> zpool offline zroot ada0p2
> zpool online -e zroot ada0p2
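> # (-e expands the device to use all available space)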
> # Wait till everything settles and looks okay again, monitoring zpool
> # status
> # After all is okay again:
> zpool offline zroot ada1p2
> zpool online -e zroot ada1p2
>
> At this point your zpool should have grown to the size of its underlying
> partitions.
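>
> To verify, the SIZE column of zpool list should now show the larger
> size:
>
> zpool list zroot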
>
> It worked for me; my system was 8-STABLE at the time.  The very same
> system has been upgraded to 9.0-RELEASE in the meantime, without any
> problems.
>
> Marco
>
Marco, thank you,

it worked like a charm!

-- 
George Mamalakis

IT and Security Officer
Electrical and Computer Engineer (Aristotle Un. of Thessaloniki),
MSc (Imperial College of London)

Department of Electrical and Computer Engineering
Faculty of Engineering
Aristotle University of Thessaloniki

phone number : +30 (2310) 994379
