From: Jeremy Chadwick
To: Tom Evans
Cc: FreeBSD Stable <freebsd-stable@freebsd.org>
Subject: Re: Different sizes between zfs list and zpool list
Date: Wed, 5 May 2010 08:58:33 -0700

On Wed, May 05, 2010 at 04:44:32PM +0100, Tom Evans wrote:
> When looking at the size of a pool, this information can be got from
> both zpool list and zfs list:
>
> $ zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> tank  5.69T   982G  36.5K  /tank
>
> $ zpool list
> NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> tank  8.14T  6.86T  1.28T  84%  ONLINE  -
>
> Why the different sizes?
> The pool is a raidz of 6 x 1.5 TB drives.

Is the tank filesystem using compression?  "zfs get all" would help shed
some light here.

There is some variance in AVAIL on all of our systems, and I see the
same on Solaris 10.  My guess is that raidz parity (or its equivalent)
has something to do with it.  However, in the case of our systems, USED
always matches between zfs list and zpool list.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
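P.S. The parity guess can be sanity-checked with quick arithmetic. This is
only a sketch, assuming the pool is single-parity raidz1, where one drive's
worth of space out of the six goes to parity, so roughly 5/6 of the raw
size that zpool list reports would be visible to zfs list:

```shell
# zpool list reports the raw pool size (8.14T); zfs list reports space
# after parity.  With raidz1 over 6 drives, about 5/6 of raw is usable:
awk 'BEGIN { printf "%.2f\n", 8.14 * 5 / 6 }'
```

That gives about 6.78T, and zfs list's USED + AVAIL (5.69T + 982G) comes
to roughly 6.65T, which is in the same ballpark; metadata and allocation
overhead would plausibly account for the remainder.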