Date:      Sun, 1 Jan 2012 04:45:23 -0800
From:      Freddie Cash <fjwcash@gmail.com>
To:        Dan Carroll <fbsd@dannysplace.net>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS With Gpart partitions
Message-ID:  <CAOjFWZ7K-zLVUEEzf2FUWk2EqDtVdXdt1KaGmCSbhyM_4e_j-Q@mail.gmail.com>
In-Reply-To: <4F00522F.6020700@dannysplace.net>
References:  <4F003EB8.6080006@dannysplace.net> <CAOjFWZ7o3_ZnYPOuMjpa_CNJgNUkatsjDChuhyRRUzMDiw0uiA@mail.gmail.com> <4F00522F.6020700@dannysplace.net>

On Sun, Jan 1, 2012 at 4:31 AM, Dan Carroll <fbsd@dannysplace.net> wrote:
> On 1/01/2012 10:08 PM, Freddie Cash wrote:
>>
>> When in doubt, read the man page: man gpart
>>
>> The option you are looking for is "-l":
>>
>> # gpart show -l da8
>>
>> That will show the labels set in the GPT for each partition.
>>
>> And labels are created using "-l" in the "gpart add" command, as well.
>
> Ok, so the label is still there.  Two things are still strange.
> 1)  data2 is not in /dev/gpt/
> 2)  "glabel list da8p1" does not find anything.  But it does for other
> drives.

You may need to offline the disk in the ZFS pool, then "gpart destroy
da8" to clear out the primary GPT table at the start of the disk and
the secondary GPT table at the end of the disk.  That will also remove
the /dev/da8p* entries and the related /dev/gpt/ entry.

Then create the GPT from scratch, including the label.

Then, finally, add/replace the disk to the ZFS pool.
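
Roughly something like this (untested, from memory; assuming your pool
is named "tank" and the disk currently shows up in the pool as da8p1 --
use whatever name "zpool status" actually shows):

# zpool offline tank da8p1
# gpart destroy -F da8
# gpart create -s gpt da8
# gpart add -t freebsd-zfs -l data2 da8
# zpool replace tank da8p1 gpt/data2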

The [kernel|GEOM|whatever] doesn't do a very good job of re-checking
the disks for changed GPT tables, instead using the ones in memory.
Or something along those lines.  You have to destroy the existing
table and then create a new one, which causes the
[kernel|GEOM|whatever] to re-taste the disk and load the GPT off the
disk, thus creating the /dev/gpt/ entry.

> Also, is there a difference (for ZFS) accessing the drive via /dev/da8p1 or
> /dev/gpt/data2?

As far as ZFS is concerned, no.  When importing the pool, ZFS checks
for ZFS metadata to determine which disk belongs to which vdev in
which pool, and gets all the pathing sorted out automatically.
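
If you want "zpool status" to show the gpt/ labels instead of the
daXpY names, one way (again assuming the pool is named "tank") is to
export the pool and re-import it, pointing ZFS only at /dev/gpt:

# zpool export tank
# zpool import -d /dev/gpt tank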

However, since disk device nodes can change from boot to boot (add a
new controller, boot with a dead drive, do a BIOS update that reverses
the PCI scan order, etc.), labels make life easier on the admin.

For example, in our 16-bay storage servers, I've set up a coordinate
system where columns are letters A-D, and rows are numbers 1-4.  Then
I've labelled each disk according to where it is in the chassis:
  /dev/gpt/disk-a1
  /dev/gpt/disk-a2
  /dev/gpt/disk-a3
  /dev/gpt/disk-a4
etc
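
(The labels are just the "-l" option at partition-creation time; for
example, for the disk in the first bay, assuming it showed up as da0,
something along the lines of:

# gpart add -t freebsd-zfs -l disk-a1 da0
)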

That way, no matter how the drives are actually enumerated by the
BIOS, loader, kernel, drivers, etc, I always know which disk is having
issues.  :)

-- 
Freddie Cash
fjwcash@gmail.com


