From: Steven Hartland <killing@multiplay.co.uk>
To: freebsd-fs@freebsd.org
Subject: Re: zfs raidz overhead
Date: Tue, 21 Feb 2017 11:09:47 +0000
Message-ID: <8cbb514b-92a9-c1c3-24e6-22cf9643ed97@multiplay.co.uk>
In-Reply-To: <1b54a2fe35407a95edca1f992fa08a71@norman-vivat.ru>

It doesn't directly address ZVOLs on RAIDZ, but the following is a very
good article from Matthew Ahrens on RAIDZ sizing:

https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
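To make the effect concrete, here is a rough back-of-the-envelope
sketch in Python. It is not from the article; it is just my reading of
the allocation math in OpenZFS's vdev_raidz_asize(), and it assumes 4K
sectors (ashift=12) on a 5-disk raidz1 like the pool below:

def raidz_asize(psize, ndisks, nparity, ashift=12):
    """Approximate bytes allocated for one psize-byte block on raidz.

    Data sectors, plus nparity parity sectors per stripe row, with the
    total rounded up to a multiple of (nparity + 1) so no unallocatable
    gap is left behind (the "padding" sectors).

    ashift=12 (4K sectors) is an assumption here; adjust for your pool.
    """
    sector = 1 << ashift
    dsect = -(-psize // sector)                        # ceil(psize / sector)
    psect = nparity * -(-dsect // (ndisks - nparity))  # parity per row
    total = dsect + psect
    total = -(-total // (nparity + 1)) * (nparity + 1) # padding round-up
    return total * sector

alloc = raidz_asize(8 * 1024, ndisks=5, nparity=1)
print(alloc, alloc / (8 * 1024))   # -> 16384 2.0

An 8K block becomes two data sectors, one parity sector and one padding
sector: 16K on disk for 8K of data, which is exactly the ~200% usage
described below.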
On 21/02/2017 08:45, Eugene M. Zheganin wrote:
> Hi.
>
> There's an interesting case described here:
> http://serverfault.com/questions/512018/strange-zfs-disk-space-usage-report-for-a-zvol
> [1]
>
> It's the story of a user who found that, in some situations, ZFS on
> raidz can use up to 200% of the nominal space for a zvol.
>
> I have also seen this. For instance:
>
> [root@san1:~]# zfs get volsize gamestop/reference1
> NAME                 PROPERTY  VALUE  SOURCE
> gamestop/reference1  volsize   2,50T  local
> [root@san1:~]# zfs get all gamestop/reference1
> NAME                 PROPERTY              VALUE                SOURCE
> gamestop/reference1  type                  volume               -
> gamestop/reference1  creation              Thu Nov 24 9:09 2016 -
> gamestop/reference1  used                  4,38T                -
> gamestop/reference1  available             1,33T                -
> gamestop/reference1  referenced            4,01T                -
> gamestop/reference1  compressratio         1.00x                -
> gamestop/reference1  reservation           none                 default
> gamestop/reference1  volsize               2,50T                local
> gamestop/reference1  volblocksize          8K                   -
> gamestop/reference1  checksum              on                   default
> gamestop/reference1  compression           off                  default
> gamestop/reference1  readonly              off                  default
> gamestop/reference1  copies                1                    default
> gamestop/reference1  refreservation        none                 received
> gamestop/reference1  primarycache          all                  default
> gamestop/reference1  secondarycache        all                  default
> gamestop/reference1  usedbysnapshots       378G                 -
> gamestop/reference1  usedbydataset         4,01T                -
> gamestop/reference1  usedbychildren        0                    -
> gamestop/reference1  usedbyrefreservation  0                    -
> gamestop/reference1  logbias               latency              default
> gamestop/reference1  dedup                 off                  default
> gamestop/reference1  mlslabel              -
> gamestop/reference1  sync                  standard             default
> gamestop/reference1  refcompressratio      1.00x                -
> gamestop/reference1  written               4,89G                -
> gamestop/reference1  logicalused           2,72T                -
> gamestop/reference1  logicalreferenced     2,49T                -
> gamestop/reference1  volmode               default              default
> gamestop/reference1  snapshot_limit        none                 default
> gamestop/reference1  snapshot_count        none                 default
> gamestop/reference1  redundant_metadata    all                  default
>
> [root@san1:~]# zpool status gamestop
>   pool: gamestop
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         gamestop    ONLINE       0     0     0
>           raidz1-0  ONLINE       0     0     0
>             da6     ONLINE       0     0     0
>             da7     ONLINE       0     0     0
>             da8     ONLINE       0     0     0
>             da9     ONLINE       0     0     0
>             da11    ONLINE       0     0     0
>
> errors: No known data errors
>
> Or another server (the overhead in this case isn't as big, but it is
> still considerable):
>
> [root@san01:~]# zfs get all data/reference1
> NAME             PROPERTY              VALUE                SOURCE
> data/reference1  type                  volume               -
> data/reference1  creation              Fri Jan 6 11:23 2017 -
> data/reference1  used                  3.82T                -
> data/reference1  available             13.0T                -
> data/reference1  referenced            3.22T                -
> data/reference1  compressratio         1.00x                -
> data/reference1  reservation           none                 default
> data/reference1  volsize               2T                   local
> data/reference1  volblocksize          8K                   -
> data/reference1  checksum              on                   default
> data/reference1  compression           off                  default
> data/reference1  readonly              off                  default
> data/reference1  copies                1                    default
> data/reference1  refreservation        none                 received
> data/reference1  primarycache          all                  default
> data/reference1  secondarycache        all                  default
> data/reference1  usedbysnapshots       612G                 -
> data/reference1  usedbydataset         3.22T                -
> data/reference1  usedbychildren        0                    -
> data/reference1  usedbyrefreservation  0                    -
> data/reference1  logbias               latency              default
> data/reference1  dedup                 off                  default
> data/reference1  mlslabel              -
> data/reference1  sync                  standard             default
> data/reference1  refcompressratio      1.00x                -
> data/reference1  written               498K                 -
> data/reference1  logicalused           2.37T                -
> data/reference1  logicalreferenced     2.00T                -
> data/reference1  volmode               default              default
> data/reference1  snapshot_limit        none                 default
> data/reference1  snapshot_count        none                 default
> data/reference1  redundant_metadata    all                  default
> [root@san01:~]# zpool status gamestop
>   pool: data
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         data        ONLINE       0     0     0
>           raidz1-0  ONLINE       0     0     0
>             da3     ONLINE       0     0     0
>             da4     ONLINE       0     0     0
>             da5     ONLINE       0     0     0
>             da6     ONLINE       0     0     0
>             da7     ONLINE       0     0     0
>           raidz1-1  ONLINE       0     0     0
>             da8     ONLINE       0     0     0
>             da9     ONLINE       0     0     0
>             da10    ONLINE       0     0     0
>             da11    ONLINE       0     0     0
>             da12    ONLINE       0     0     0
>           raidz1-2  ONLINE       0     0     0
>             da13    ONLINE       0     0     0
>             da14    ONLINE       0     0     0
>             da15    ONLINE       0     0     0
>             da16    ONLINE       0     0     0
>             da17    ONLINE       0     0     0
>
> errors: No known data errors
>
> So my question is: how do I avoid this? Right now I'm experimenting
> with the volblocksize, making it around 64k. I also suspect that such
> overhead may be a consequence of various resizing operations, like
> extending the volsize of the volume or adding new disks to the pool,
> because I have a couple of servers with raidz where the initial
> disk/volsize configuration didn't change, and there the
> referenced/volsize numbers are pretty close to each other.
>
> Eugene.
>
> Links:
> ------
> [1]
> http://serverfault.com/questions/512018/strange-zfs-disk-space-usage-report-for-a-zvol
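On the volblocksize question above: using the same back-of-the-envelope
math (again assuming ashift=12; this is an illustration, not a
measurement of your pools), sweeping the block size shows why ~64K
behaves so much better on a 5-wide raidz1:

def raidz_asize(psize, ndisks, nparity, ashift=12):
    # Same sketch as earlier: data + parity + padding sectors,
    # rounded up to a multiple of (nparity + 1). Assumes 4K sectors.
    sector = 1 << ashift
    dsect = -(-psize // sector)
    total = dsect + nparity * -(-dsect // (ndisks - nparity))
    return -(-total // (nparity + 1)) * (nparity + 1) * sector

for kb in (8, 16, 32, 64, 128):
    vbs = kb * 1024
    print("volblocksize=%3dK -> allocated/logical=%.2fx"
          % (kb, raidz_asize(vbs, ndisks=5, nparity=1) / vbs))
# 8K -> 2.00x, 16K -> 1.50x, 32K and up -> 1.25x (the plain parity cost)

At 8K every block pays a whole parity sector plus padding for only two
data sectors; by 32K-64K the overhead converges on the 1.25x you would
expect from 1 parity disk in 5, which matches the improvement you are
seeing when you raise volblocksize.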
_______________________________________________
freebsd-fs@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"