Date:      Tue, 17 Jan 2012 18:18:03 +0100
From:      Christer Solskogen <christer.solskogen@gmail.com>
To:        Tom Evans <tevans.uk@googlemail.com>
Cc:        FreeBSD Stable <freebsd-stable@freebsd.org>, Shawn Webb <lattera@gmail.com>
Subject:   Re: ZFS / zpool size
Message-ID:  <CAMVU60aXAGaA8nGedgi+QNBZ8sdKxTWDKFBLhHD=nCzkn8s1ig@mail.gmail.com>
In-Reply-To: <CAFHbX1LMCPhkEYH=rHbC9bct8mWVSFo2_-brfquvfQKK9bUr-w@mail.gmail.com>
References:  <CAMVU60ZtHp+_mhuUh-5RuLNW9XFRxBdfQxXu9vPEzw-P+rLUUw@mail.gmail.com> <CADt0fhyg8uXQG8SjWPL2DizZRNTdN9poRjo8Y=c62vN4W7iK6w@mail.gmail.com> <CAMVU60ahgmyK60h83jN9r0VYAWROnMtuz5K_1db0_p=EUZUm5Q@mail.gmail.com> <CAFHbX1LMCPhkEYH=rHbC9bct8mWVSFo2_-brfquvfQKK9bUr-w@mail.gmail.com>

On Tue, Jan 17, 2012 at 5:18 PM, Tom Evans <tevans.uk@googlemail.com> wrote:
> On Tue, Jan 17, 2012 at 4:00 PM, Christer Solskogen
> <christer.solskogen@gmail.com> wrote:
>> An overhead of almost 300GB? That seems a bit too much, don't you think?
>> The pool consists of one vdev with two 1.5TB disks and one 3TB disk in raidz1.
>>
>
> Confused about your disks - can you show the output of ``zpool status''?
>

Sure!
$ zpool status
  pool: data
 state: ONLINE
 scan: scrub repaired 0 in 9h11m with 0 errors on Tue Jan 17 18:11:26 2012
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
        logs
          gpt/slog  ONLINE       0     0     0
        cache
          da0       ONLINE       0     0     0

$ dmesg | grep ada
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <Crucial CT32GBFAB0 MER1.01k> ATA-6 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: Command Queueing enabled
ada0: 31472MB (64454656 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <WDC WD15EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <ST3000DM001-9YN166 CC98> ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2: Previously was known as ad8
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: <WDC WD15EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad10


> If you have a raidz of N disks with a minimum size of Y GB, you can
> expect ``zpool list'' to show a size of N*Y and ``zfs list'' to show a
> size of roughly (N-1)*Y.
>

Ah, that explains it.
$ zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  4.06T  3.33T   748G    82%  1.00x  ONLINE  -
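
Working the numbers (a rough sketch in Python, my own, using the
sector counts from the dmesg output above; ZFS also reserves a little
space for labels and metadata, so the real figures land slightly
lower):

# Tom's raidz1 rule of thumb applied to this pool.
SECTOR = 512                                 # bytes per sector (dmesg)
TIB = 2.0 ** 40

# sector counts of the raidz1 members: ada1, ada2, ada3
disks = [2930277168, 5860533168, 2930277168]

n = len(disks)
y = min(disks) * SECTOR                      # raidz only uses the
                                             # smallest member's size
print("zpool list ~ %.2f TiB" % (n * y / TIB))        # ~4.09, cf. 4.06T
print("zfs list   ~ %.2f TiB" % ((n - 1) * y / TIB))  # ~2.73 usable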

So what ``zpool iostat'' shows is how much raw disk capacity is
committed to ZFS, not the usable space left after parity.


> So, on my box with 2 x 6 x 1.5 TB drives in raidz, I see a zpool size
> of 16.3 TB, and a zfs size of 13.3 TB.
>
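
(Same arithmetic there, roughly: each 1.5 TB drive is about 1.36 TiB,
so 12 * 1.36 ~ 16.4 TiB raw and 2 * 5 * 1.36 ~ 13.6 TiB usable;
labels, metadata and raidz padding account for the small gap down to
the 16.3 TB and 13.3 TB you see.)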

Yep. I can see clearly now, thanks!


-- 
chs,


