From: George Hartzell <hartzell@alerce.com>
Date: Mon, 15 Jun 2009 08:47:45 -0700
To: Freddie Cash
Cc: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject: Re: Does this disk/filesystem layout look sane to you?

Freddie Cash writes:
 > On Sun, Jun 14, 2009 at 9:17 AM, Dan Naumov wrote:
 >
 > > I just wanted to have an extra pair (or a dozen) of eyes look this
 > > configuration over before I commit to it (I tested it in VMware just
 > > in case; it works, so I am considering doing this on real hardware
 > > soon). I drew a nice diagram: http://www.pastebin.ca/1460089  Since
 > > it doesn't show on the diagram, let me clarify that the geom mirror
 > > consumers, as well as the vdevs for the ZFS RAIDZ, are going to be
 > > partitions (raw disk => full disk slice => swap partition | mirror
 > > provider partition | zfs vdev partition | unused).
 >
 > I don't know for sure if it's the same on FreeBSD, but on Solaris, ZFS
 > will disable the onboard disk cache if the vdevs are not whole disks.
 > IOW, if you use slices, partitions, or files, the onboard disk cache
 > is disabled. This can lead to poor write performance.
 >
 > Unless you can use one of the ZFS-on-root facilities, I'd look into
 > getting a couple of CompactFlash cards or USB sticks to use for the
 > gmirror for / and /usr (and put the rest on ZFS). Then you can
 > dedicate the entirety of all 5 drives to ZFS.

Even if you do use a bootable ZFS-on-root setup, you'll end up with a
couple of GPT partitions (boot code, swap, then root) and will
therefore be constructing your ZFS pool from a partition anyway.

Pawel said, back on April 6, 2007, "We support cache flushing
operations on any GEOM provider (disk, partition, slice, anything
disk-like), so basically currently I treat everything as a whole disk
[...]"

Does anyone know for sure whether we disable caching for partitions?
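For concreteness, the GPT layout I mean looks roughly like this with
gpart(8) and zpool(8). This is only a sketch: the device name (ad0),
the partition sizes, and the pool name "tank" are placeholders rather
than anything from this thread, so check the flags against the man
pages on your release before using it:

  # GPT with boot code, then swap, then the remainder for ZFS
  # (ad0, sizes, and "tank" are hypothetical)
  gpart create -s gpt ad0
  gpart add -t freebsd-boot -s 128k ad0   # holds gptzfsboot
  gpart add -t freebsd-swap -s 4g ad0     # swap
  gpart add -t freebsd-zfs ad0            # rest of the disk for ZFS
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad0
  zpool create tank ad0p3                 # pool built on a partition

The last line is exactly the case in question: the vdev is ad0p3, a
partition rather than the whole disk, which is where the
cache-disabling behavior Freddie describes would bite, if we do it.

g.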