Date: Thu, 15 Mar 2012 13:45:20 +0100 (CET)
From: Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
To: George Mamalakis <mamalos@eng.auth.gr>
Cc: stable@freebsd.org
Subject: Re: grow zpool on a mirror setup
Message-ID: <alpine.BSF.2.00.1203151338450.67839@mail.fig.ol.no>
In-Reply-To: <4F61D9F0.6000005@eng.auth.gr>
References: <4F61D9F0.6000005@eng.auth.gr>
On Thu, 15 Mar 2012 14:00+0200, George Mamalakis wrote:

> Hello everybody,
>
> I have asked the same question in the FreeBSD forums, but had no luck.
> Apart from that, there might be a bug somewhere, so I am re-asking the
> question on this list. Here is how it goes (three posts):
>
> post 1:
>
> "I am experimenting with an installation of FreeBSD-9-STABLE/amd64 in
> VirtualBox that uses gptzfsboot on a RAID-1 (mirrored) ZFS pool. My
> problem is that I need to grow the pool to the new size of its ZFS
> partitions. I followed this guide
> (http://support.freenas.org/ticket/342), which is for FreeNAS, and
> encountered a few problems.
>
> # gpart show
> =>       34  40959933  ada0  GPT  (19G)
>          34       128     1  freebsd-boot  (64k)
>         162  35651584     2  freebsd-zfs  (17G)
>    35651746   5308221     3  freebsd-swap  (2.5G)
>
> =>       34  40959933  ada1  GPT  (19G)
>          34       128     1  freebsd-boot  (64k)
>         162  35651584     2  freebsd-zfs  (17G)
>    35651746   5308221     3  freebsd-swap  (2.5G)

There's one mistake I'd point out. Your ZFS partitions are followed by 
your swap partitions. It would be a lot easier if the ZFS partitions 
were the last ones on each drive.

Since you are using VirtualBox, I would simply create a new pair of 
virtual drives with the desired sizes and attach these to your VM. 
Next, create new boot, swap, and ZFS partitions, in this particular 
order, on the new drives. Create a ZFS pool using the new ZFS 
partitions on the new drives, and transfer the old system from the old 
drives to the new drives, using a recursive snapshot and the zfs 
send/receive commands. Remember to set the bootfs property on the 
newly created ZFS pool prior to reboot.
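
Off the top of my head, and assuming the new drives show up as ada2 and 
ada3 and that you call the new pool "newpool" (the device names, pool 
name, snapshot name and sizes below are only examples, adjust them to 
your setup), the transfer could look roughly like this:

  # gpart create -s gpt ada2
  # gpart add -t freebsd-boot -s 64k ada2
  # gpart add -t freebsd-swap -s 4G ada2
  # gpart add -t freebsd-zfs ada2
  # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
  (repeat the five commands above for ada3)

  # zpool create newpool mirror ada2p3 ada3p3
  # zfs snapshot -r zroot@migrate
  # zfs send -R zroot@migrate | zfs receive -F -d newpool
  # zpool set bootfs=newpool newpool

I haven't run this exact sequence, so please check gpart(8), zfs(8) and 
zpool(8) before trusting it, point bootfs at whichever dataset really 
holds /, and remember to adjust vfs.root.mountfrom in loader.conf on 
the new pool before booting from it.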
> # zpool status
>   pool: zroot
>  state: ONLINE
>   scan: resilvered 912M in 1h3m with 0 errors on Sat Mar 10 14:01:17 2012
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zroot       ONLINE       0     0     0
>           mirror-0  ONLINE       0     0     0
>             ada0p2  ONLINE       0     0     0
>             ada1p2  ONLINE       0     0     0
>
> errors: No known data errors
>
> # zpool list
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> zroot  5.97G  3.69G  2.28G    61%  1.00x  ONLINE  -
>
> Let me give you some information about my setup before explaining my
> problems:
> As you can see, *gpart* shows that my ada0p2 and ada1p2 partitions
> (used in zroot) are 17G in size, while *zpool list* shows that zroot
> has a size of 5.97G (which is the initial size of the virtual
> machine's disks, before I resized them).
>
> The problem I encountered when following the aforementioned procedure
> was that I was unable to export zroot (the procedure says to export
> the pool, "resize" the partitions with *gparted*, and then import the
> pool again), because I kept getting a message that some of my
> filesystems were busy (in single-user mode, "/" was busy). To work
> around this, I booted from a FreeBSD 9-RELEASE CD, then imported
> (*-f*) my zpool and followed the resizing procedure from there.
>
> Does anyone have a better idea as to what I should do to make *zpool*
> see all the available space of the partitions it is using?
>
> Thank you all for your time in advance,
>
> mamalos"
>
> post 2:
>
> "Ah, and not to forget: I have enabled the autoexpand property of
> *zpool* (to be honest, I've enabled it, disabled it, re-enabled it,
> and so forth many times, because somewhere I read that it might be
> needed, sometimes...), with no luck."
>
> post 3:
>
> "Since nobody has an answer so far, let me ask another thing. Instead
> of deleting ada0p2 and ada1p2 and then recreating them from the same
> starting block but with a greater size, could I have just created two
> new partitions (ada0p3 and ada1p3) and added them to the pool as a
> new mirror? Because if that's the case, then I could try that out,
> since it seems to have the same result.
>
> Not that this answers my question, but at least it's a workaround."
>
> As stated in these posts, it's really strange that zpool list doesn't
> seem to react even when I set the expand flag (or autoexpand, which is
> the same), hence my concern whether this could be a bug.
>
> Thank you all for your time,
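
As for the autoexpand property: as far as I remember, setting it is not 
enough on its own; the pool only notices the larger partitions once the 
vdevs are reopened, for instance with zpool online -e. Something along 
these lines might be worth a try before rebuilding the pool (untested 
on my side, so have a look at zpool(8) first):

  # zpool set autoexpand=on zroot
  # zpool online -e zroot ada0p2
  # zpool online -e zroot ada1p2

If zpool list still reports 5.97G after that, an export/import of the 
pool, like you already did from the release CD, or a reboot, may be 
needed before the new partition sizes are picked up.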
-- 
+-------------------------------+------------------------------------+
| Vennlig hilsen,               | Best regards,                      |
| Trond Endrestøl,              | Trond Endrestøl,                   |
| IT-ansvarlig,                 | System administrator,              |
| Fagskolen Innlandet,          | Gjøvik Technical College, Norway,  |
| tlf. dir. 61 14 54 39,        | Office.....: +47 61 14 54 39,      |
| tlf. mob. 952 62 567,         | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.      | Switchboard: +47 61 14 54 00.      |
+-------------------------------+------------------------------------+