Date:      Thu, 22 Apr 2010 12:58:22 +0300
From:      Andriy Gapon <avg@icyb.net.ua>
To:        Lister <lister@kawashti.org>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: OCE and GPT
Message-ID:  <4BD01DBE.7030905@icyb.net.ua>
In-Reply-To: <FB409BC0C2654FD78971EDD1A6E17E06@neo>
References:  <B814515407B5445092FD63116EA3DA6B@neo>	<4BCEEA79.7080309@icyb.net.ua> <FB409BC0C2654FD78971EDD1A6E17E06@neo>

on 21/04/2010 23:49 Lister said the following:
> Hello All,
> 
> I'd like to first thank Andrey Elsukov and Andriy Gapon for their
> valuable contribution and very quick reply.
> Given that the patch is not yet ready as I understand it, I'll go with
> the alternate method of destroying and recreating the GPT. To that end I
> still have to ask three more questions:
> 1. How do I make sure I have a valid secondary GPT? Neither gpt nor
> gpart tells anything about it. Can I assume that if 'gpart show da0'
> shows a proper layout with no error messages, the secondary is valid?

I think that should be sufficient.
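As an additional manual check, the backup GPT header lives in the very last LBA of the disk and begins with the "EFI PART" signature, so you can dump that sector and look for it. A minimal sketch; it uses a small scratch file standing in for /dev/da0 (the 100-sector size is made up for illustration), and plants the signature itself so the check has something to find:

```shell
# Scratch file standing in for the disk; on a real system you would read
# the last sector of /dev/da0 instead.
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=512 count=100 2>/dev/null

# Plant the GPT signature where a backup header would live (last LBA = 99).
printf 'EFI PART' | dd of="$disk" bs=512 seek=99 conv=notrunc 2>/dev/null

# Check: the first 8 bytes of the last sector should read "EFI PART".
sig=$(dd if="$disk" bs=512 skip=99 count=1 2>/dev/null | head -c 8)
echo "$sig"    # prints EFI PART
rm -f "$disk"
```

On a real disk you would of course not plant anything; you would only perform the read and inspect the first 16 bytes with hexdump -C.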

>    I tried to make a quick visual comparison on another system
> (8.0-RELEASE this time) with a 4TB RAID5 that I just setup yesterday,
> using gpart this time because I had to.  I used hexdump for the purpose,
> dumping the first 34 sectors of /dev/da0 and, in another ssh shell, the
> 34 sectors beyond the last partition.
> The hexdump of the second got nothing; it seemed to have frozen but would
> break normally on Ctrl+C. I've never seen the likes of this before.
> In an attempt to troubleshoot, I narrowed the selection to only ONE
> sector…same result. Then the last sector of the last partition…same
> thing. Even a dump of the first sector of the last partition exhibited the
> same behavior. The partition is viable, though.  I copied a 4.4GB file
> to it over ssh without a problem and the data rate was consistent with
> expectations.
> I know this is a side issue, but is hexdump/hd known to have problems
> with large devices, or perhaps 32/64-bit issues?
> I forgot to mention that all my systems are AMD64.

Can you provide the actual commands you used?
Not to doubt your skills, but just to be sure.
BTW, you can discover the disk size in sectors with the diskinfo tool, subtract
34 from that, and use the result as the dd offset.
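The sequence I have in mind looks roughly like this; the awk field number assumes diskinfo's default output order (device, sector size, media size in bytes, sector count), and the offset arithmetic is demonstrated on a 1000-sector scratch file rather than a real disk:

```shell
# On FreeBSD the total sector count is the 4th field of diskinfo output:
#   secs=$(diskinfo da0 | awk '{print $4}')
# Demonstrated here on a scratch file instead of /dev/da0.
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=512 count=1000 2>/dev/null
secs=1000

# Dump the last 34 sectors (the backup GPT area): skip = total - 34.
bytes=$(dd if="$disk" bs=512 skip=$((secs - 34)) count=34 2>/dev/null | wc -c)
echo "$bytes"   # 17408 bytes = 34 * 512
rm -f "$disk"
```

On the real array you would pipe the dd output into hexdump -C instead of wc -c; if dd itself hangs on those sectors, that would point at the device rather than at hexdump.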

> 2. Now, assuming OCE adds the new space at the tail (which I have yet to
> verify before proceeding), will 'growfs' serve the purpose of extending
> newfs' work?
>    Its man page doesn't reference gpt or gpart, but rather bsdlabel and
> fdisk; something suggestive of the contrary.

Theoretically, growfs works on the filesystem data within a partition and
should be agnostic to the partitioning scheme.
Practically, I am not sure.
Also, there _could_ be issues with very large FS sizes.

In your case it would be great if you could experiment with dummy data on a
different system.  I.e. create something similar to what you have now, then grow
it the way you want and see how it works out.
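Such a dry run is cheap with a memory-backed disk. A sketch only, under the assumption that an md(4) device stands in well enough for da0; the sizes and unit numbers are made up, and the exact gpart/growfs syntax varies between releases (older gpart may require explicit -b/-s values for 'add'):

```shell
# Create a small memory disk to stand in for the array (FreeBSD only).
mdconfig -a -t swap -s 1g -u 10          # creates /dev/md10

# Lay out a GPT with one partition of about half the disk, put UFS on it.
gpart create -s gpt md10
gpart add -t freebsd-ufs -s 500m md10    # creates md10p1
newfs /dev/md10p1

# Simulate the post-OCE procedure: delete/destroy, then re-create the
# table with a bigger partition.  The md device itself keeps its 1g size,
# just as the array would already have its new size after OCE.
gpart delete -i 1 md10
gpart destroy md10
gpart create -s gpt md10
gpart add -t freebsd-ufs md10            # use all available space this time

# Grow the filesystem into the enlarged partition, then verify it.
growfs /dev/md10p1
fsck -t ufs /dev/md10p1

# Clean up the memory disk.
mdconfig -d -u 10
```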

Don't forget to share the results with us :)

> 3. Does it make a difference if I use gpt or gpart to recreate the gpt,
> given that I'd initially created it with gpt?

I think it's better to use gpart, because gpt has been deprecated.
But I am not sure which version of FreeBSD you are using; that may be important.

> Note. My root fs and everything else beyond the library is on another
> RAID1 (on the Motherboard).

That's good; it gives you more freedom of action.

> ----- Original Message ----- From: "Andriy Gapon" <avg@icyb.net.ua>
> To: "Lister" <lister@kawashti.org>
> Cc: <freebsd-geom@freebsd.org>
> Sent: Wednesday, April 21, 2010 14:07
> Subject: Re: OCE and GPT
> 
> 
>> on 21/04/2010 12:21 Lister said the following:
>>> Hi All,
>>>
>>> I have a 5TB RAID5 (/dev/da0) on a 3Ware controller supporting OCE.  I
>>> partitioned it into p1, p2 & p3 using gpt on FreeBSD-7.1-RELEASE.
>>> P3 is 3.5TB and is the one I need to expand by adding another 1TB drive
>>> to the RAID. It is now 87% full.
>>>
>>> Neither gpt nor gpart allows resizing a partition.
>>> Of course, backing up the RAID to another is not an option.
>>>
>>> I'm in a rather desperate situation and I'm willing to do whatever it
>>> takes. If there's no current software solution, I'm willing to use a hex
>>> editor to edit the disk directly if someone could advise me of the
>>> layout of GPT as created by gpt- and gpart if different.  I used to do
>>> this on MBR disks at times of necessity.
>>
>> If you make any mistake and lose your data, then don't blame me.
>> Before trying what I suggest, wait for a few days in case someone points out
>> a mistake or suggests a better way.
>>
>> 1. Get current layout e.g. with 'gpart show'
>> 2. Print (several copies of) it and don't lose it
>> 3. Boot using Live CD (if da0 is your boot disk)
>> 4. Undo the whole GPT layout using 'gpart delete' and 'gpart destroy'
>> 5. Expand RAID (I hope OCE means that the new space will be added at
>> the end)
>> 6. Re-create the same layout but using the new size for p3
>>
>> Some notes:
>> 1. Deleting/destroying/adding/creating partitions and scheme does not
>> touch your
>> data/filesystems; it operates only on sectors belonging to GPT metadata.
>> 2. There are two copies of GPT metadata, one at the start of a disk,
>> the other at
>> the end; they both must be valid and provide the same information.
>> -- 
>> Andriy Gapon
>> _______________________________________________
>> freebsd-geom@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-geom
>> To unsubscribe, send any mail to "freebsd-geom-unsubscribe@freebsd.org" 
> 


-- 
Andriy Gapon


