Date:      Tue, 21 Feb 2017 11:09:47 +0000
From:      Steven Hartland <killing@multiplay.co.uk>
To:        freebsd-fs@freebsd.org
Subject:   Re: zfs raidz overhead
Message-ID:  <8cbb514b-92a9-c1c3-24e6-22cf9643ed97@multiplay.co.uk>
In-Reply-To: <1b54a2fe35407a95edca1f992fa08a71@norman-vivat.ru>
References:  <1b54a2fe35407a95edca1f992fa08a71@norman-vivat.ru>

It doesn't directly address ZVOLs on RAIDZ, but the following is a very
good article from Matthew Ahrens on RAIDZ sizing:
https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
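
As a rough illustration of the allocation math that article walks through,
here's a sketch in Python. The function and padding rule are my own
simplification (it assumes 4K sectors, i.e. ashift=12, and ignores
compression and metadata), not ZFS's exact code path:

```python
def raidz_alloc_sectors(block_bytes, sector_bytes, ndisks, nparity):
    """Approximate sectors allocated for one logical block on RAIDZ.

    Model from the Ahrens article: one parity sector per stripe of up to
    (ndisks - nparity) data sectors, then the whole allocation padded up
    to a multiple of (nparity + 1).
    """
    data = -(-block_bytes // sector_bytes)        # ceiling division
    stripes = -(-data // (ndisks - nparity))
    parity = stripes * nparity
    total = data + parity
    pad_unit = nparity + 1
    return -(-total // pad_unit) * pad_unit       # round up for padding

# 8K volblocksize on a 5-disk raidz1 with 4K sectors:
# 2 data + 1 parity + 1 pad = 4 sectors, i.e. 16K on disk for 8K logical (2x).
print(raidz_alloc_sectors(8192, 4096, 5, 1))   # 4
# 64K volblocksize on the same vdev: 16 data + 4 parity = 20 sectors,
# already a multiple of 2, i.e. 80K on disk for 64K logical (1.25x).
print(raidz_alloc_sectors(65536, 4096, 5, 1))  # 20
```

If that model holds, an 8K zvol on a 5-wide raidz1 with 4K sectors allocates
roughly twice its logical size, which matches the ~200% usage below; a
larger volblocksize brings the overhead down sharply.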

On 21/02/2017 08:45, Eugene M. Zheganin wrote:
>   
>
> Hi.
>
> There's an interesting case described here:
> http://serverfault.com/questions/512018/strange-zfs-disk-space-usage-report-for-a-zvol
> [1]
>
> It's a story from a user who found that, in some situations, ZFS on
> raidz can use up to 200% of the expected space for a zvol.
>
> I have also seen this. For instance:
>
> [root@san1:~]# zfs get volsize gamestop/reference1
>   NAME                 PROPERTY  VALUE  SOURCE
>   gamestop/reference1  volsize   2,50T  local
>   [root@san1:~]# zfs get all gamestop/reference1
>   NAME                 PROPERTY              VALUE                  SOURCE
>   gamestop/reference1  type                  volume                 -
>   gamestop/reference1  creation              Thu Nov 24  9:09 2016  -
>   gamestop/reference1  used                  4,38T                  -
>   gamestop/reference1  available             1,33T                  -
>   gamestop/reference1  referenced            4,01T                  -
>   gamestop/reference1  compressratio         1.00x                  -
>   gamestop/reference1  reservation           none                   default
>   gamestop/reference1  volsize               2,50T                  local
>   gamestop/reference1  volblocksize          8K                     -
>   gamestop/reference1  checksum              on                     default
>   gamestop/reference1  compression           off                    default
>   gamestop/reference1  readonly              off                    default
>   gamestop/reference1  copies                1                      default
>   gamestop/reference1  refreservation        none                   received
>   gamestop/reference1  primarycache          all                    default
>   gamestop/reference1  secondarycache        all                    default
>   gamestop/reference1  usedbysnapshots       378G                   -
>   gamestop/reference1  usedbydataset         4,01T                  -
>   gamestop/reference1  usedbychildren        0                      -
>   gamestop/reference1  usedbyrefreservation  0                      -
>   gamestop/reference1  logbias               latency                default
>   gamestop/reference1  dedup                 off                    default
>   gamestop/reference1  mlslabel                                     -
>   gamestop/reference1  sync                  standard               default
>   gamestop/reference1  refcompressratio      1.00x                  -
>   gamestop/reference1  written               4,89G                  -
>   gamestop/reference1  logicalused           2,72T                  -
>   gamestop/reference1  logicalreferenced     2,49T                  -
>   gamestop/reference1  volmode               default                default
>   gamestop/reference1  snapshot_limit        none                   default
>   gamestop/reference1  snapshot_count        none                   default
>   gamestop/reference1  redundant_metadata    all                    default
>
> [root@san1:~]# zpool status gamestop
>   pool: gamestop
>   state: ONLINE
>   scan: none requested
>   config:
>
>   NAME          STATE   READ WRITE CKSUM
>   gamestop      ONLINE     0     0     0
>     raidz1-0    ONLINE     0     0     0
>       da6       ONLINE     0     0     0
>       da7       ONLINE     0     0     0
>       da8       ONLINE     0     0     0
>       da9       ONLINE     0     0     0
>       da11      ONLINE     0     0     0
>
>   errors: No known data errors
>
> or, another server (overhead in this case isn't that big, but still
> considerable):
>
> [root@san01:~]# zfs get all data/reference1
>   NAME             PROPERTY              VALUE                  SOURCE
>   data/reference1  type                  volume                 -
>   data/reference1  creation              Fri Jan  6 11:23 2017  -
>   data/reference1  used                  3.82T                  -
>   data/reference1  available             13.0T                  -
>   data/reference1  referenced            3.22T                  -
>   data/reference1  compressratio         1.00x                  -
>   data/reference1  reservation           none                   default
>   data/reference1  volsize               2T                     local
>   data/reference1  volblocksize          8K                     -
>   data/reference1  checksum              on                     default
>   data/reference1  compression           off                    default
>   data/reference1  readonly              off                    default
>   data/reference1  copies                1                      default
>   data/reference1  refreservation        none                   received
>   data/reference1  primarycache          all                    default
>   data/reference1  secondarycache        all                    default
>   data/reference1  usedbysnapshots       612G                   -
>   data/reference1  usedbydataset         3.22T                  -
>   data/reference1  usedbychildren        0                      -
>   data/reference1  usedbyrefreservation  0                      -
>   data/reference1  logbias               latency                default
>   data/reference1  dedup                 off                    default
>   data/reference1  mlslabel                                     -
>   data/reference1  sync                  standard               default
>   data/reference1  refcompressratio      1.00x                  -
>   data/reference1  written               498K                   -
>   data/reference1  logicalused           2.37T                  -
>   data/reference1  logicalreferenced     2.00T                  -
>   data/reference1  volmode               default                default
>   data/reference1  snapshot_limit        none                   default
>   data/reference1  snapshot_count        none                   default
>   data/reference1  redundant_metadata    all                    default
>   [root@san01:~]# zpool status data
>   pool: data
>   state: ONLINE
>   scan: none requested
>   config:
>
>   NAME          STATE   READ WRITE CKSUM
>   data          ONLINE     0     0     0
>     raidz1-0    ONLINE     0     0     0
>       da3       ONLINE     0     0     0
>       da4       ONLINE     0     0     0
>       da5       ONLINE     0     0     0
>       da6       ONLINE     0     0     0
>       da7       ONLINE     0     0     0
>     raidz1-1    ONLINE     0     0     0
>       da8       ONLINE     0     0     0
>       da9       ONLINE     0     0     0
>       da10      ONLINE     0     0     0
>       da11      ONLINE     0     0     0
>       da12      ONLINE     0     0     0
>     raidz1-2    ONLINE     0     0     0
>       da13      ONLINE     0     0     0
>       da14      ONLINE     0     0     0
>       da15      ONLINE     0     0     0
>       da16      ONLINE     0     0     0
>       da17      ONLINE     0     0     0
>
>   errors: No known data errors
>
> So my question is: how do I avoid this? Right now I'm experimenting with
> the volblocksize, making it around 64k. I also suspect that this
> overhead may be a consequence of the various resizing operations, like
> extending the volsize of the volume or adding new disks to the pool,
> because I have a couple of servers with raidz where the initial
> disk/volsize configuration didn't change, and the referenced/volsize
> numbers are pretty close to each other.
>
> Eugene.
>
> Links:
> ------
> [1]
> http://serverfault.com/questions/512018/strange-zfs-disk-space-usage-report-for-a-zvol
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



