Date:      Sun, 7 Feb 2021 17:55:58 -0500
From:      Paul Mather <paul@gromit.dlib.vt.edu>
To:        Abner Gershon <6731955@gmail.com>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: Making gmirror metadata cooperate with gpt metadata
Message-ID:  <4AB2335F-706F-489D-9228-FEECFDD3CC45@gromit.dlib.vt.edu>
In-Reply-To: <CADC2UYexYa20wiWV2eAKvE-10F4Fi9hnh+EToSXEphpyLD5dMw@mail.gmail.com>
References:  <CADC2UYd+1hvOORErpHYvFMSGPeOqEd-M=oXiviqV6mRt2DZJMw@mail.gmail.com> <5E7EFDC6-0089-4D6C-B81C-3D98A04C0FA7@gromit.dlib.vt.edu> <CADC2UYexYa20wiWV2eAKvE-10F4Fi9hnh+EToSXEphpyLD5dMw@mail.gmail.com>

On Feb 7, 2021, at 4:23 PM, Abner Gershon <6731955@gmail.com> wrote:

> Wow, thanks for opening my eyes to this. I did not realize that BSD
> partitions may be layered on top of a GPT partition.


Hopefully it was clear from my original reply that I wasn't sure you could do this and you should try it out for yourself (e.g., in a VM or using an md-disk). :-)

It's not clear whether it is possible from the gpart man page.  For the BSD partitioning scheme it says, "Traditional BSD disklabel, usually used to subdivide MBR partitions.  (This scheme can also be used as the sole partitioning method, without an MBR. ..."  It's not clear to me whether you could create a partitioning scheme in this way and still have the resultant system boot via EFI or legacy BIOS---it's the latter "being able to boot" which is the most important, IMHO.


> If I understand this correctly, you are suggesting I use fdisk to partition a GPT partition?


No, my thought was just to add a partition of type "freebsd" inside the GPT.  I do note that the gpart man page discourages its use: "This is a legacy partition type and should not be used for the APM or GPT schemes."  Then, as I said above, there is the matter of whether a FreeBSD boot loader could successfully boot from such a layout. :-\
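If you want to experiment without risking real disks, a throwaway md(4) device makes it cheap to try.  A FreeBSD-only sketch (the device name, partition sizes, and layout here are my illustrative assumptions, and whether the result would actually boot is a separate question entirely):

```shell
# Try the "freebsd" (BSD disklabel) partition inside GPT on a
# scratch md(4) disk instead of real hardware.  FreeBSD-only sketch;
# names and sizes are illustrative.
dev=$(mdconfig -a -t swap -s 1g)      # allocates e.g. md0
gpart create -s gpt "$dev"            # GPT scheme on the md disk
gpart add -t freebsd -s 900m "$dev"   # legacy "freebsd" type -> md0p1
gpart create -s bsd "${dev}p1"        # BSD disklabel nested inside it
gpart add -t freebsd-ufs "${dev}p1"   # -> md0p1a
gpart show "$dev"                     # inspect the nested layout
# clean up when finished:
# mdconfig -d -u "${dev#md}"
```

If gpart refuses any of these steps, that answers the "is it possible" question more reliably than the man page does.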


> My concern with gmirror on partition level is that I have seen a comment about how this may cause excessive stress on disk due to contention between various sectors for writing data during initial mirroring. In other words will gmirror update one partition at a time or will disk write head jitter back and forth writing 4k to 1st partition, 4k to 2nd partition, 4k to 3rd partition, etc.


To be honest, I don't remember what it does because I only use gmirror for swap nowadays, but I have a sneaking suspicion from memory that it was subject to the jitter you mention (at least years ago when I was still gmirroring UFS filesystems).  I ended up turning off autosynchronisation and doing it myself on the occasions when the mirror broke.

For initial mirroring you could make a special case for synchronising, if you were worried about disk stress.  People are increasingly using SSDs for OS drives nowadays, so stress from mechanical head movement becomes a moot point in that case.  (In fact, all those layout and rotational optimisations in UFS designed to minimise physical head movement and rotational latency become moot in that case.)


> I have been resisting ZFS because of inefficiencies of COW for updating database files ( as opposed to updating one block of existing file ).


Don't databases themselves use COW techniques to ensure data safety?


> I plan to set up a server with some UFS gmirror and some ZFS storage and do some comparisons. Will post back my results when I do. Most related posts I have seen suggest ZFS is the way of the now/future but then again I am driving a 1988 Ford ranger with manual transmission.


There's nothing wrong with sticking with what you know and what you feel comfortable with, and with what you believe best accommodates your use case.

Others in this thread have made some great points regarding not dismissing ZFS as an option.  I agree with what they said.  I've used FreeBSD since version 3 and used ZFS from the moment it was available in FreeBSD (version 7).  Here's what I would add/echo to what has already been said as plus points for ZFS:

- Pooled storage: no more gnashing teeth over badly sizing your filesystems
- Snapshots: cheap and reliable; I never felt the same way about UFS snapshots
- Flexible filesets: tune the behaviour of "filesystems" according to use cases
- Integrity and durability: advanced "RAID" setups and data integrity protections throughout
- Administration: better control over administration and delegation of your file systems
- Efficiency: tiered storage model
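To make the pooled-storage, tuning, snapshot, and delegation points a bit more concrete, here is an illustrative OpenZFS sketch (the pool name "tank", the disk names, the 16k recordsize, and the user "alice" are all my assumptions, not anything from this thread):

```shell
# Illustrative OpenZFS commands for the plus points above;
# names and property values are assumptions.
zpool create tank mirror ada1 ada2     # pooled, mirrored storage
zfs create -o recordsize=16k tank/db   # per-fileset tuning for a use case
zfs snapshot tank/db@before-upgrade    # cheap, reliable snapshot
zfs allow alice snapshot,mount tank/db # delegated administration
```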

As regards ZFS being "new," it would be fair to say that it has received more active development in the last few years than UFS.  I actually switched from UFS to ZFS on FreeBSD/arm64 (on a 1 GB Raspberry Pi) because I was tired of losing data due to UFS+SUJ crashing with non-fsck-able file systems.  Snapshots on UFS have also had a chequered history of working properly (e.g., for consistent dumps).  Data safety is the #1 thing that attracts me to ZFS.

Hopefully, data safety is important to you, too.  Also, one big concern I would have in changing the semantics of gmirror, as you propose, is the easy availability of rescue tools should something happen to my storage causing everything to go pear shaped. :-)  You'd have to factor it into your disaster recovery plan.

Cheers,

Paul.


