Date: Mon, 3 May 2010 22:16:57 -0400 (EDT)
From: Charles Sprickman
To: Wes Morgan
Cc: Eric Damien, freebsd-stable@freebsd.org
Subject: Re: ZFS: separate pools

On Sun, 2 May 2010, Wes Morgan wrote:

> On Sun, 2 May 2010, Eric Damien wrote:
>
>> Hello list.
>>
>> I am taking my first steps with ZFS.  In the past, I used to have two
>> UFS slices: one dedicated to the OS partitions, and the second to data
>> (/home, etc.).  I read that it is possible to recreate that layout with
>> ZFS, using separate pools.
>>
>> Considering the example at http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot,
>> any idea how I can adapt that to my needs?  I am concerned about all
>> the different mountpoints.
>
> Well, you need not create all those filesystems if you don't want them.
> The pool and FreeBSD will function just fine.
>
> However, as far as storage is concerned, there is no disadvantage to
> having additional mount points.  The only limits each filesystem will
> have are the ones you explicitly impose.  There are many advantages,
> though.  Some datasets are inherently compressible or incompressible.
> Other datasets you may not want to schedule for snapshots, or you may
> want to disallow execution or suid files, or change the checksum
> setting or block size, you name it (as the examples in the wiki
> demonstrate).
>
> Furthermore, each pool requires its own vdev.  If you create slices on
> a drive and then make each slice its own pool, I would wonder whether
> ZFS's internal queueing would understand the topology and be able to
> work as efficiently.  Just a thought, though.

I have two boxes set up with ZFS on top of slices like that.

One has a small zpool across three disks; the rest of those disks, plus
three other disks of the same size, make up another zpool.  The hardware
is old (an 8-port 3Ware PATA card), so performance is not spectacular,
and I can't tell whether this layout contributes to the somewhat anemic
(by today's standards) read/write speeds or not.

The other has four drives, with a gmirror across two of them for the OS
(20 GB out of 1 TB).  That box performs extremely well (bonnie++ shows
123 MB/s writes, 142 MB/s reads).

Just some random data points.  When I was reading about ZFS I did come
across the notion that it wants the entire drive so it can handle
queueing better, but I'm not sure whether that was official Sun
documentation or some random blog...
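For what it's worth, the per-filesystem knobs Wes mentions are just ZFS
properties you pass at dataset creation time (or change later with "zfs
set"); anything you don't set explicitly is inherited from the parent
dataset.  A rough, untested sketch -- the pool name, GPT labels and
dataset names below are made up, not taken from the wiki page:

    zpool create tank mirror gpt/disk0 gpt/disk1      # one pool; labels are placeholders
    zfs create -o compression=on  tank/src            # compressible data
    zfs create -o compression=off tank/distfiles      # already-compressed data
    zfs create -o exec=off -o setuid=off tank/logs    # no executables or suid bits here
    zfs create -o recordsize=16K tank/db              # e.g. match a database page size
    zfs get compression,exec,setuid,recordsize tank/db   # check what is in effect

The wiki page does essentially the same thing, just with a larger dataset
tree under the root pool.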
Charles