Date:      Mon, 02 Jan 2012 08:04:03 +1000
From:      Dan Carroll <fbsd@dannysplace.net>
To:        Freddie Cash <fjwcash@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS With Gpart partitions
Message-ID:  <4F00D853.5010607@dannysplace.net>
In-Reply-To: <CAOjFWZ7K-zLVUEEzf2FUWk2EqDtVdXdt1KaGmCSbhyM_4e_j-Q@mail.gmail.com>
References:  <4F003EB8.6080006@dannysplace.net> <CAOjFWZ7o3_ZnYPOuMjpa_CNJgNUkatsjDChuhyRRUzMDiw0uiA@mail.gmail.com> <4F00522F.6020700@dannysplace.net> <CAOjFWZ7K-zLVUEEzf2FUWk2EqDtVdXdt1KaGmCSbhyM_4e_j-Q@mail.gmail.com>

On 1/01/2012 10:45 PM, Freddie Cash wrote:
> On Sun, Jan 1, 2012 at 4:31 AM, Dan Carroll <fbsd@dannysplace.net> wrote:
>>
>> Ok, so the label is still there.   Two things are still strange.
>> 1)  data2 is not in /dev/gpt/
>> 2)  "glabel list da8p1" does not find anything.   But it does for other
>> drives.
>
> You may need to offline the disk in the zfs pool, then "gpart destroy
> da8" to clear out the primary GPT table at the start of the disk, and
> the secondary GPT table at the end of the disk.  That will also clear
> out the /dev/da8p* entries, and related /dev/gpt/ entry.
>
> Then create the GPT from scratch, including the label.
>
> Then, finally, add/replace the disk to the ZFS pool.
>
> The [kernel|GEOM|whatever] doesn't do a very good job of re-checking
> the disks for changed GPT tables, instead using the ones in memory.
> Or something along those lines.  You have to destroy the existing
> table, then create it, which causes the [kernel|GEOM|whatever] to
> re-taste the disk and load the GPT off the disk, thus creating the
> /dev/gpt/ entry.

Thanks for your reply.  It's quite informative.  Reading a little more, I
see that the label metadata is written to the end of the disk.  Here is
what I think happened:
1) I inserted a disk with a label that already existed.  The kernel
decided to simply remove the device entry from /dev/gpt (data2), and so
ZFS found its data on the GPT partition (rather than via the label that
points to the partition).
2) I wiped the new disk on a second system, but I didn't actually wipe
the GPT label at the end of the disk until I re-did the GPT partitioning.
3) I re-inserted the new disk and resilvered.  All good, but ZFS now
refers to the disk via the GPT partition rather than the label.
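
For reference, the destroy-and-recreate sequence described above would
look roughly like this (the pool name "tank" is a placeholder; da8 and
data2 are the device and label in question):

    # take the disk out of the pool, then wipe both GPT tables
    zpool offline tank da8p1
    gpart destroy -F da8
    # recreate the partition table and the label from scratch
    gpart create -s gpt da8
    gpart add -t freebsd-zfs -l data2 da8
    # resilver onto the freshly labeled partition
    zpool replace tank da8p1 gpt/data2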

I'm still confused as to why "glabel list da8p1" does not work.
Perhaps all that command does is scan the device for a label (it would
find data2) and then check whether data2 exists in /dev/gpt?
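
A couple of other ways to poke at it (these read the on-disk metadata
directly, so they should work even when /dev/gpt is out of sync):

    # list all active GEOM labels and their providers
    glabel status
    # print the labels straight out of da8's partition table
    gpart show -l da8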

I guess another way might be to bring down the server, take the disk
out, re-do the GPT stuff on another machine, and then put it back.
That way ZFS may well simply see its data on the labeled device.
Oh wait, that won't create the /dev/gpt entries...  :-(
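
One thing that might be worth trying first (an assumption on my part,
not something I've verified here): GEOM re-tastes a provider when it is
opened for writing and then closed, so something like

    # open da8 for write and close it again; this writes nothing,
    # but the close should trigger a GEOM re-taste of the disk
    true > /dev/da8

might make the /dev/gpt entry reappear without touching the data.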

I'd really like to fix this but avoid doing *another* resilver.  The
first one is finished now, and a scrub operation will be done in an
hour or so.  I guess I should have the faith to go ahead and do it
again, but there is an element of risk, and it's to fix something
cosmetic.  That feels wrong...


>> Also, is there a difference (for ZFS) between accessing the drive via
>> /dev/da8p1 or /dev/gpt/data2?
>
> As far as ZFS is concerned, no.  When importing the pool, ZFS checks
> for ZFS metadata to determine which disk belongs to which vdev in
> which pool, and gets all the pathing sorted out automatically.
>
> However, since disk device nodes can change at boot (add a new
> controller, boot with a dead drive, do a BIOS update that reverses the
> PCI scan order, etc), it makes it easier on the admin to have labels.

Yeah, I am aware of ZFS doing that.  It's a nice feature.  It's also
the reason why, in my system, data2 is NOT da2p1.
What I was trying to determine is whether there is any danger in ZFS
accessing the disk in this manner.  Is da8p1 an identical device to data2?

From what you describe, from ZFS' point of view there are actually two
versions of each device in the pool: the dataX version and the daXp1
version.
I am guessing that ZFS keeps track of the disk names in its pools.
Otherwise, what would stop this (the label being replaced with a device
name in ZFS) from happening between reboots?
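
One way to check what ZFS actually recorded is to dump the cached pool
configuration; each vdev entry carries an explicit path.  Something like
(pool name "tank" is a placeholder, output trimmed):

    # show the device path stored for each vdev
    zdb -C tank | grep path

should print either /dev/gpt/dataX or /dev/daXp1 for each disk.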



>
>
> For example, in our 16-bay storage servers, I've set up a coordinate
> system where columns are letters A-D, and rows are numbers 1-4.  Then
> I've labelled each disk according to where it is in the chassis:
>    /dev/gpt/disk-a1
>    /dev/gpt/disk-a2
>    /dev/gpt/disk-a3
>    /dev/gpt/disk-a4
> etc
>
> That way, no matter how the drives are actually enumerated by the
> BIOS, loader, kernel, drivers, etc, I always know which disk is having
> issues.  :)
>

That was also my reasoning.  My numbering starts with disk0 in the top
left slot, and continues row by row.
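
For the archives, creating such a label is just the -l flag at
partition-creation time (the device name below is an example):

    # label the disk in the top-left slot as it is partitioned
    gpart add -t freebsd-zfs -l disk0 da0
    # it then shows up as /dev/gpt/disk0 and can be given to zpool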

-D


