Date:      Tue, 2 Nov 2010 14:52:45 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        Pawel Jakub Dawidek <pjd@freebsd.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Converting a non-HAST ZFS pool to a HAST pool
Message-ID:  <AANLkTiknuYV53NErau+fHLX74yLguv0t0Oi_3exK8+Ep@mail.gmail.com>
In-Reply-To: <20101016222833.GA6765@garage.freebsd.pl>
References:  <AANLkTin07ZvB+j2sqdi2bSS_4MwEvEcRPgK-0qc+rch4@mail.gmail.com> <20101016222833.GA6765@garage.freebsd.pl>

On Sat, Oct 16, 2010 at 3:28 PM, Pawel Jakub Dawidek <pjd@freebsd.org> wrote:
> On Fri, Oct 15, 2010 at 11:37:34AM -0700, Freddie Cash wrote:
>> Has anyone looked into, attempted, or considered converting a non-HAST
>> ZFS pool configuration into a HAST one?  While the pool is live and
>> the server is in use.  Would it even be possible?
>>
>> For example, would the following work (in a pool with a single raidz2
>> vdev, where the underlying GEOM provider is glabel)
>>   - zpool offline 1 drive  (pool is now running degraded)
>>   - configure hastd in master mode with a single provider using the
>> "offline" disk (hast metadata takes the place of glabel metadata)
>
> HAST metadata takes much more space than glabel metadata. The latter
> takes only one sector, while the former depends on provider size, but we
> have to keep the entire extent bitmap there, so definitely more than one
> sector.

Okay, so converting a non-HAST ZFS setup to a HAST setup using the
same drives won't work.
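
Just to put numbers on that (back-of-envelope, and assuming hastd's
default 2 MB extent size, so these are only approximate):

  2 TB provider / 2 MB per extent = ~1,048,576 extents
  1,048,576 bits / 8              = 131,072 bytes = ~256 x 512-byte sectors

versus the single sector that glabel stores at the end of the provider.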

Any reason that it wouldn't work when replacing the drives with larger ones?

 - zpool offline poolname label/disk01
 - physically replace drive
 - glabel drive as disk01
 - configure hast to use label/disk01
 - zpool replace poolname label/disk01 hast/disk01
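
For the hast side of each drive, I'm picturing roughly the following
(untested sketch; hostA/hostB and da6 are just placeholders for the two
nodes and for whatever device the new drive attaches as):

  # /etc/hast.conf -- one resource per labelled drive
  resource disk01 {
          on hostA {
                  local /dev/label/disk01
                  remote hostB
          }
          on hostB {
                  local /dev/label/disk01
                  remote hostA
          }
  }

and then, on the master:

  glabel label disk01 /dev/da6
  hastctl create disk01
  /etc/rc.d/hastd onestart        (or hastd_enable="YES" in rc.conf)
  hastctl role primary disk01
  zpool replace poolname label/disk01 hast/disk01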

I can't think of any reason why it would fail, since the hast device
will be twice as large as the non-hast device it's replacing.  But I
thought I'd double-check, just to be safe.  :)

Granted, doing it this way would require a *long* initial sync, as
there's currently 18 TB of data in the pool.  And more going in every
day.  So it might be better to start fresh.
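
If we do go that route, I assume the way to keep an eye on the sync is
the per-resource dirty counter that hastctl reports, something like:

  hastctl status disk01

(that's just my reading of hastctl(8), so corrections welcome).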

-- 
Freddie Cash
fjwcash@gmail.com


