Date:      Sat, 8 Jun 2013 02:57:44 -0700
From:      Jeremy Chadwick <jdc@koitsu.org>
To:        "Reed A. Cartwright" <cartwright@asu.edu>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS and Glabel
Message-ID:  <20130608095744.GA4643@icarus.home.lan>
In-Reply-To: <CALOkxuzH81UFuVZifJNxyuo6+hu9mCPB1TC91dn5fkjVLFqTKw@mail.gmail.com>
References:  <CALOkxuzH81UFuVZifJNxyuo6+hu9mCPB1TC91dn5fkjVLFqTKw@mail.gmail.com>

On Sat, Jun 08, 2013 at 02:24:46AM -0700, Reed A. Cartwright wrote:
> I currently have a raidz2 pool that uses whole disks: da6, da7, etc.
> I want to label these using glabel and have zfs mount them using the
> labels.  This way, if an HDD fails, I will be able to easily replace
> the drive.

Can I ask what you mean when you say "that way if a HDD fails I'll be
able to easily replace the drive?"

What about using labels makes the process of disk replacement "easier"?

You may want to keep reading, particularly the very bottom of my mail,
as I re-touch on this question there.

> So my questions are as follows:
> 
> 1) Can glabel be used with zfs and raw disks?  Should it be?

glabel(8) offers two methods of behaviour:

a) "automatic", which stores GEOM metadata (i.e. the label) on the disk
itself -- in the last sector, I believe -- and will therefore conflict
with ZFS when using full/raw disks,

b) "manual", which stores no metadata on the disk itself, but requires
you to manually set the label every time the disk appears in the
system.  If the system reboots, the labels are lost and you get to type
them all in again; when ZFS then goes to taste all the disks to import
the pool, it won't find any of them.

Your example in (3) below indicates you want the "automatic" method,
because you're using "glabel label", not "glabel create" (see
glabel(8) for the difference).
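
For illustration, the two forms look like this (label name "mylabel"
and device da6 are placeholders, not taken from your setup):

  glabel label -v mylabel /dev/da6    # "automatic": writes metadata to the disk
  glabel create -v mylabel /dev/da6   # "manual": label exists only until reboot/detach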

In the case of "automatic", when the kernel starts, GEOM labels would be
tasted first (before ZFS has a chance to refer to them, which is good).

> 2) Can I add a glabel after the disks have been placed in a pool?

With the "automatic" method, I would probably say no, since the rest of
the GEOM subsystem and/or other parts of the subsystems will probably
have the block device (disk) "locked" against those kinds of
modifications,
particularly if the pool is already actively imported (in use) at the
time.  Consider the fact that ZFS effectively would be saying "Okay I'm
using LBAs 500 to 123456789" (where 123456789 is the last LBA on the
disk), and then you come along and tell glabel "go ahead and stomp all
over LBA 123456789".  Bad idea.

With the "manual" method, I would say yes, but again see the problem
with the "manual" method described above.

> 3) How would I do this without losing data?  E.g.
> 
> glabel label /dev/da6 storage1
> glabel ...
> zpool export storage
> zpool import -d /dev/label storage

No, I don't think this would work.

You have given no information about your existing pool/setup, so I
cannot give you exact commands to use, but you'd probably need to use a
combination of "zpool offline" (on a single disk), followed by doing the
glabel work, followed by "zpool replace", rinse lather repeat for each
disk in the pool.  This would require a vdev type that offers
redundancy (ex. mirror, raidzX).
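
A rough sketch of that per-disk loop -- the pool name "storage" and
disks da6/da7 come from your examples, but the label names are
placeholders I made up.  This script only *prints* the commands (a dry
run) rather than running them against real devices; in practice you
would run each step by hand and wait for "zpool status" to show the
resilver completed before touching the next disk:

```shell
#!/bin/sh
# Dry run: build and print the per-disk relabel/replace sequence.
# "storage", da6/da7, and the mylabel-* names are placeholders.
cmds=""
for disk in da6 da7; do
  cmds="${cmds}zpool offline storage ${disk}
glabel label -v mylabel-${disk} /dev/${disk}
zpool replace storage ${disk} /dev/label/mylabel-${disk}
"
done
# Print the sequence; nothing above touches any actual disk.
printf '%s' "$cmds"
```

Again: one disk at a time, and only in a pool with redundancy, since
each disk is degraded while it is being replaced.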

Doing it from scratch would look like this (note the glabel syntax
here, as your syntax above is incorrect):

glabel label -v mylabel1 /dev/da6
glabel label -v mylabel2 /dev/da7
zpool create storage mirror /dev/label/mylabel1 /dev/label/mylabel2

This would be if you wanted a RAID-1-like mirror across 2 disks,
referenced by their GEOM labels, of course.

> 4) Is there a better alternative that allows me to keep the data on
> the drives and relabel them?

Please explain what actual problem it is you're trying to solve via the
use of labels.

This topic comes up quite often on the mailing lists, and you will find
that I am a very strong advocate of avoiding *any* kind of labelling
mechanism.  I can point you to past conversations if need be.  Keep in
mind I am a strong advocate of the KISS principle.

If the crux of the problem is that your disk device names may change or
shift depending on whether a disk is online/available/plugged in at the time
of boot, then doing what is called "wiring down" in CAM is the proper
solution for this -- it requires a one-time modification of
/boot/loader.conf and the result is that device names remain static
regardless of disk presence or even if another controller is added to
the system.
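
As a sketch, the loader.conf entries look something like this -- the
controller, bus, target, and unit numbers here are invented and yours
will differ (see cam(4) for the hint syntax):

  # Pin ahcich2's SCSI bus to scbus2, and pin the disk on that bus to
  # da6 regardless of probe order or whether other disks are present:
  hint.scbus.2.at="ahcich2"
  hint.da.6.at="scbus2"
  hint.da.6.target="0"
  hint.da.6.unit="0"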

This is the method I used for many, many years and had absolutely no
problems, especially when replacing disks with ZFS; the last thing I
wanted to deal with when doing a disk swap is "errrr, yeah, so now I get
to remember some command syntaxes to 'label things', wow what if I mess
this up".  It's a hell of a lot easier to just yank the disk, wait a few
seconds for CAM to notice the disk removal, insert the replacement disk,
wait for CAM to notice it, then do "zpool replace pool daX", ensure
resilvering starts via "zpool status" and walk out of the datacenter.

The only caveat with "wiring down" is if you **change** controllers
(e.g. moving from ahci(4) to mpt(4)) -- but all you have to do then is
update loader.conf to reflect how the new controller behaves/shows
devices on the bus.  Again, a one-time deal.

Let us know what issue you feel you'd be avoiding by using labels.

-- 
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Making life hard for others since 1977.             PGP 4BD6C0CB |
