From: Christer Solskogen
Date: Tue, 17 Jan 2012 18:18:03 +0100
To: Tom Evans
Cc: FreeBSD Stable, Shawn Webb
Subject: Re: ZFS / zpool size

On Tue, Jan 17, 2012 at 5:18 PM, Tom Evans wrote:
> On Tue, Jan 17, 2012 at 4:00 PM, Christer Solskogen wrote:
>> An overhead of almost 300GB? That seems a bit too much, don't you think?
>> The pool consists of one vdev with two 1.5TB disks and one 3TB disk in raidz1.
>>
>
> Confused about your disks - can you show the output of zpool status.
>

Sure!
$ zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 9h11m with 0 errors on Tue Jan 17 18:11:26 2012
config:

        NAME          STATE     READ WRITE CKSUM
        data          ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            ada1      ONLINE       0     0     0
            ada2      ONLINE       0     0     0
            ada3      ONLINE       0     0     0
        logs
          gpt/slog    ONLINE       0     0     0
        cache
          da0         ONLINE       0     0     0

$ dmesg | grep ada
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: ATA-6 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: Command Queueing enabled
ada0: 31472MB (64454656 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2: Previously was known as ad8
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad10

> If you have a raidz of N disks with a minimum size of Y GB, you can
> expect ``zpool list'' to show a size of N*Y and ``zfs list'' to show a
> size of roughly (N-1)*Y.
>

Ah, that explains it.

$ zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  4.06T  3.33T   748G    82%  1.00x  ONLINE  -

So what zpool iostat shows is how much of the disks is given over to ZFS.

> So, on my box with 2 x 6 x 1.5 TB drives in raidz, I see a zpool size
> of 16.3 TB, and a zfs size of 13.3 TB.
>

Yeap. I can see clearly now, thanks!

-- 
chs,
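As a rough sanity check of those figures (a back-of-the-envelope sketch, not authoritative: it assumes raidz1 counts every member at the size of the smallest disk in the vdev, here the 1.5TB drives, and that zpool list reports raw capacity in binary TiB):

$ # 2930277168 512-byte sectors is the ada1/ada3 size from the dmesg output above
$ echo "scale=2; 3 * 2930277168 * 512 / 1024^4" | bc
4.09

That lands close to the 4.06T zpool list reports (ZFS holds a little back for labels and internal metadata), and roughly (3-1)/3 of it, about 2.7 TiB, is what zfs list should show as usable space, matching the N*Y versus (N-1)*Y rule of thumb quoted above.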