Date: Sun, 30 Dec 2007 18:19:27 +0100
From: Johan Ström <johan@stromnet.se>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS replace/expand problem
Message-ID: <66640AA3-31BC-4E00-9E59-AF8FEA25EFD1@stromnet.se>
In-Reply-To: <5A6CFB06-4175-452F-BFC9-323C2023D2F6@stromnet.se>
References: <5A6CFB06-4175-452F-BFC9-323C2023D2F6@stromnet.se>
On Dec 27, 2007, at 20:25, Johan Ström wrote:

> Hello list
>
> First of all, I want to thank everybody involved in writing and
> porting ZFS to FreeBSD; it's working (except for this problem) great
> for me!
>
> Now to my problem. To summarize it, I want to replace two mirrored
> disks with bigger ones. The replace works well, but the vdev doesn't
> expand until I do an export/import. Details follow.
>
> I currently have the following setup:
>
> back-1 /$ zpool status
>   pool: tank
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         tank         ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             ad14s1d  ONLINE       0     0     0
>             ad16s1d  ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             ad8      ONLINE       0     0     0
>             ad10s2   ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             ad12     ONLINE       0     0     0
>             ad10s1   ONLINE       0     0     0
>
> The ad8/ad10/ad12 setup is kind of stupid, I know: ad8 is an 80 GB
> disk, ad12 a 120 GB, and ad10 a 200 GB carrying one slice for each
> of the two mirrors. Now I want to replace those two mirrors with
> four 300 GB disks (or rather 2x 300 and 2x 320). So my plan was to
> do something like:
>
> zpool replace tank ad8 ad18
> zpool replace tank ad10s2 ad20
>
> where ad18 and ad20 are the two 300 GB disks, then the same thing
> for ad12 and ad10s1. But before doing that I wanted to make sure it
> would actually expand, as I've read it should, so I tried this
> first. On ad18/ad20 I had ad*s1a, a 500 MB partition, and ad*s1g, a
> ~280 GB partition. So I created a testtank with ad*s1a first:
>
> back-1 /$ zpool create testtank mirror /dev/ad18s1a /dev/ad20s1a
> back-1 /$ zpool list
> NAME       SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
> tank       878G   812G   65.1G   92%   ONLINE   -
> testtank   492M   111K    492M    0%   ONLINE   -
>
> back-1 /$ zpool status
> ..
>   pool: testtank
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         testtank     ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             ad18s1a  ONLINE       0     0     0
>             ad20s1a  ONLINE       0     0     0
>
> errors: No known data errors
>
> back-1 /storage$ zpool replace testtank ad18s1a ad18s1g
>
> Status now shows:
>
>           mirror       ONLINE       0     0     0
>             replacing  ONLINE       0     0     0
>               ad18s1a  ONLINE       0     0     0
>               ad18s1g  ONLINE       0     0     0
>             ad20s1a    ONLINE       0     0     0
>
> When that was done (and only ad18s1g was showing) I did:
>
> back-1 /storage$ zpool replace testtank ad20s1a ad20s1g
>
> and got the same "replacing" output as above (but for ad20). Okay,
> so now that this is done, it should have expanded, one would think,
> right?
>
> back-1 /storage$ zpool list
> NAME       SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
> ..
> testtank   492M   218K    492M    0%   ONLINE   -
>
> Nope. I waited a while and nothing happened. Some googling told me
> that an export/import could be done:
>
> back-1 /storage$ zpool export testtank
> back-1 /storage$ zpool import testtank
> back-1 /storage$ zpool list
> NAME       SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
> ..
> testtank   289G   132K    289G    0%   ONLINE   -
>
> Yay! Okay, so it expands, but only after an export/import. I haven't
> really found much documentation about this, but according to people
> in #opensolaris it should not be necessary.
>
> Not a big deal in this test case, but doing it for my real tank will
> require me to take the system down to an external boot medium (a CD
> or something), I guess, do the zpool export/import there, and then
> boot back up. Any guidelines on how to do this? Will doing the
> export/import from a CD (a rescue shell, I guess) work as I expect?
> Or what would be the smartest way? (The actual downtime isn't such a
> big deal as long as it is quick and works.)
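For reference, the rescue-CD route I asked about above would presumably
look something like this (an untested sketch only: the pool would not
have been exported before shutting down, so the forced import is my
assumption about what a rescue environment would need):

  # boot a FreeBSD live/fixit CD with ZFS support, then:
  kldload zfs          # load ZFS in the rescue environment
  zpool import -f tank # importing re-reads the vdev sizes
  zpool list tank      # the expanded SIZE should show up here
  zpool export tank    # leave the pool clean for the next boot
  reboot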
For the record, though, I found a somewhat easier solution: just
reboot, and the size was updated! I tested with my testtank first,
rebooted, and it worked. Then I did the same with my real tank (but
with ad20 and ad18 as whole disks, not using slices), and the extra
space showed up fine after the reboot.

Thanks again for ZFS!
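So, for the archives, the whole procedure on the real pool was roughly
the following (device names are from my setup; the second mirror's
replacements are commented out as placeholders, since the device names
for the two 320 GB disks depend on where they get attached):

  zpool replace tank ad8 ad18      # 80 GB disk -> 300 GB disk
  zpool replace tank ad10s2 ad20   # other side of the mirror -> 300 GB
  zpool status tank                # wait for the resilvers to finish
  # same treatment for the second mirror, e.g.:
  # zpool replace tank ad12 <new 320 GB disk>
  # zpool replace tank ad10s1 <new 320 GB disk>
  shutdown -r now                  # after the reboot...
  zpool list tank                  # ...the new size shows up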