Date:      Thu, 15 Mar 2012 17:25:07 +0100
From:      Marco van Tol <marco@tols.org>
To:        George Mamalakis <mamalos@eng.auth.gr>
Cc:        stable@freebsd.org
Subject:   Re: grow zpool on a mirror setup
Message-ID:  <20120315162507.GA80200@tolstoy.tols.org>
In-Reply-To: <4F61D9F0.6000005@eng.auth.gr>
References:  <4F61D9F0.6000005@eng.auth.gr>

On Thu, Mar 15, 2012 at 02:00:48PM +0200, George Mamalakis wrote:
> Hello everybody,
> 
> I have asked the same question on the FreeBSD forums, but had no luck. 
> Apart from this, there might be a bug somewhere, so I am re-asking the 
> question on this list. Here is how it goes (three posts):
> 
> post 1:
> 
> "I am experimenting with one installation of FreeBSD-9-STABLE/amd64 on a 
> VirtualBox that is using gptzfsboot on a raid-1 (mirrored) zfs pool. My 
> problem is that I need to grow the filesystem size of zfs partitions. I 
> followed this guide (http://support.freenas.org/ticket/342), 
> which is for FreeNAS, and encountered a few problems.
> 
> # gpart show
> =>      34  40959933  ada0  GPT  (19G)
>         34       128     1  freebsd-boot  (64k)
>        162  35651584     2  freebsd-zfs  (17G)
>   35651746   5308221     3  freebsd-swap  (2.5G)
> 
> =>      34  40959933  ada1  GPT  (19G)
>         34       128     1  freebsd-boot  (64k)
>        162  35651584     2  freebsd-zfs  (17G)
>   35651746   5308221     3  freebsd-swap  (2.5G)
> 
> # zpool status
>   pool: zroot
>  state: ONLINE
>   scan: resilvered 912M in 1h3m with 0 errors on Sat Mar 10 14:01:17 2012
> config:
> 
>         NAME        STATE     READ WRITE CKSUM
>         zroot       ONLINE       0     0     0
>           mirror-0  ONLINE       0     0     0
>             ada0p2  ONLINE       0     0     0
>             ada1p2  ONLINE       0     0     0
> 
> errors: No known data errors
> 
> # zpool list
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> zroot  5.97G  3.69G  2.28G    61%  1.00x  ONLINE  -
> 
> Let me give you some information about my setup before explaining my 
> problems:
> As you can see, *gpart* shows that my ada0p2 and ada1p2 partitions (used 
> in zroot) are of size 17G, while *zpool list* shows that zroot has a size 
> of 5.97G (which is the initial size of the virtual machine's disks, 
> before I resized them).
> 
> The problem I encountered when following the aforementioned procedure 
> was that I was unable to export zroot (the procedure says to export the 
> pool, "resize" the partitions with *gparted*, and then import the pool), 
> because I was receiving a message that some of my filesystems were busy 
> (in single-user mode, "/" was busy). Thus, in order to resolve this 
> issue, I booted from a FreeBSD 9-RELEASE CDROM, imported (*-f*) my 
> zpool, and followed the procedure for resizing my filesystems.
> 
> Does anyone have a better idea as to what I should do in order to make 
> *zpool* see all the available space of the partitions it is using?
> 
> Thank you all for your time in advance,
> 
> mamalos"
> 
> post 2:
> 
> "Ah,
> 
> and not to forget: I have enabled the autoexpand property of *zpool* (to 
> be honest I've enabled, disabled, and re-enabled it many times, because 
> somewhere I read that it might sometimes be needed...), with no luck."
> 
> post 3:
> 
> "Since nobody has an answer so far, let me ask another thing. Instead 
> of deleting ada0p2 and ada1p2, and then recreating them from the same 
> starting block but with a greater size, could I have just created two new 
> partitions (ada0p3 and ada1p3) and added them to the pool as a 
> new mirror? Because if that's the case, then I could try that out, since 
> it seems to have the same result.
> 
> Not that this answers my question, but at least it's a workaround."
> 
> As stated in these posts, it's really strange that zpool list doesn't 
> seem to react even when I set the expand flag (or autoexpand, which is 
> the same), hence my concern whether this could be a bug.
> 
> Thank you all for your time,

Hi,

Have you tried zpool offline followed by zpool online -e yet?

I have successfully done what you are trying, with physically larger drives.

If I understand your layout correctly, you should be able to do the
following:

(as root)
zpool offline zroot ada0p2
zpool online -e zroot ada0p2
# Wait till everything settles and looks okay again, monitoring zpool
# status
# After all is okay again:
zpool offline zroot ada1p2
zpool online -e zroot ada1p2

At this point your zpool should have grown to the size of its underlying
partitions.
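Put together with the partition-growing step, a sketch of the whole
procedure could look like this (untested on your exact setup; the device
names and sizes are taken from your gpart output above, and the
autoexpand line is optional):

```shell
# Sketch of the complete grow procedure, assuming the GPT partitions
# (ada0p2 / ada1p2) have already been enlarged to 17G, as shown in the
# gpart output above.  Run as root.

zpool set autoexpand=on zroot   # optional: lets future grows happen automatically

zpool offline zroot ada0p2
zpool online -e zroot ada0p2    # -e expands the vdev to the new partition size
zpool status zroot              # wait until the mirror is healthy again

zpool offline zroot ada1p2
zpool online -e zroot ada1p2
zpool status zroot

zpool list zroot                # SIZE should now report ~17G instead of 5.97G
```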

It worked for me; my system was 8-STABLE at the time.  The very same
system has been upgraded to 9.0-RELEASE in the meantime, without any
problems.
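As for the alternative you mentioned in post 3 (adding new partitions as
a second mirror vdev): a rough sketch is below, but note that it leaves
the original vdev at its old size, and once a vdev has been added it
cannot be removed from the pool again, so I would prefer online -e.

```shell
# Rough sketch of the post-3 workaround: add the extra space as a
# second mirror vdev instead of growing the existing one.  Assumes
# ada0p3 / ada1p3 have first been recreated as freebsd-zfs partitions
# (in your layout they are currently swap).  Run as root.
zpool add zroot mirror ada0p3 ada1p3
zpool status zroot   # should now show both mirror-0 and mirror-1
```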

Marco

-- 
Yesterday it didn't work.


