Date:      Sun, 14 Jun 2009 19:16:52 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: Does this disk/filesystem layout look sane to you?
Message-ID:  <b269bc570906141916k7bcf47b2pe87a88dde2d0a7a4@mail.gmail.com>
In-Reply-To: <cf9b1ee00906140917j86b1e4ev4f8e0a1fb5f6b8@mail.gmail.com>
References:  <cf9b1ee00906140916n64a6c0cbr69332811bfa2aa62@mail.gmail.com> <cf9b1ee00906140917j86b1e4ev4f8e0a1fb5f6b8@mail.gmail.com>

On Sun, Jun 14, 2009 at 9:17 AM, Dan Naumov <dan.naumov@gmail.com> wrote:

> I just wanted to have an extra pair (or a dozen) of eyes look this
> configuration over before I commit to it (tested it in VMware just in
> case; it works, so I am considering doing this on real hardware soon).
> I drew a nice diagram: http://www.pastebin.ca/1460089 Since it doesn't
> show on the diagram, let me clarify that the geom mirror consumers as
> well as the vdevs for ZFS RAIDZ are going to be partitions (raw disk
> => full disk slice => swap partition | mirror provider partition | zfs
> vdev partition | unused).


I don't know for sure whether it's the same on FreeBSD, but on Solaris, ZFS
disables the on-board disk write cache if the vdevs are not whole disks.  IOW,
if you use slices, partitions, or files, the disk's write cache is disabled.
This can lead to poor write performance.

Unless you can use one of the ZFS-on-root facilities, I'd look into getting
a couple of CompactFlash or USB sticks to use for the gmirror for / and /usr
(put the rest on ZFS).  Then you can dedicate the entirety of all 5 drives
to ZFS.
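As a rough sketch, that layout might look like the following. The device names
(da0/da1 for the CF/USB sticks, ad4 through ad12 for the five data disks) and
the pool name "tank" are my assumptions, not from the original mail; adjust to
your hardware.

```shell
# Mirror the two small sticks; the resulting /dev/mirror/gm0 would then be
# partitioned to hold / and /usr:
gmirror label -v -b round-robin gm0 /dev/da0 /dev/da1

# Hand ZFS the five whole disks (no slices or partitions), so it can
# manage the on-disk write cache itself:
zpool create tank raidz ad4 ad6 ad8 ad10 ad12
```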

-- 
Freddie Cash
fjwcash@gmail.com
