Date: Mon, 9 Dec 2013 10:24:03 -0800
From: Matthew Ahrens <mahrens@delphix.com>
To: Jason Keltz <jas@cse.yorku.ca>
Cc: freebsd-fs <freebsd-fs@freebsd.org>, Eric.Shrock@delphix.com
Subject: Re: question about zfs written property
Message-ID: <CAJjvXiEO8u0Y2Y%2Be_XBKwFZ6oF8u5qGL7k9NfKbvWypnTG0mhw@mail.gmail.com>
In-Reply-To: <52A5F80D.8020206@cse.yorku.ca>
References: <52A5F80D.8020206@cse.yorku.ca>
Jason, I'm cc'ing the freebsd mailing list as well. In general that is a
better forum for questions about how to use ZFS.

On Mon, Dec 9, 2013 at 9:04 AM, Jason Keltz <jas@cse.yorku.ca> wrote:
> Hi..
>
> I saw your names on the feature addition in illumos for the "written"
> property for ZFS:
>
> https://www.illumos.org/issues/1645
>
> I had a question and was hoping you might have a moment to answer.
>
> I'm rsyncing data from a Linux-based system to a ZFS backup and archive
> server. It's running FreeBSD, but it's the same ZFS code base as illumos.
> I'm seeing (what I think are) some weird numbers looking at the ZFS
> "written" property ...
>
> For example:
>
> Sat Dec 7 01:05:00 EST 2013 sync start rsync://backup@forest-mrpriv/home9
> /local/backup/home9 (189G/264G/1.41x)
> Sat Dec 7 01:33:20 EST 2013 sync finish rsync://backup@forest-mrpriv/home9
> /local/backup/home9 (190G/265G/1.41x)
> Sat Dec 7 01:33:20 EST 2013 sync elapsed 00h:28m:20s
> rsync://backup@forest-mrpriv/home9 /local/backup/home9, 514M unarchived
> Sat Dec 7 06:32:58 EST 2013 archive create home9 daily 20131207 as
> pool1/backup/home9@20131207, 518M
>
> In the third line, where you see "514M unarchived", I write out the
> "written" property after the rsync completes. However, when the archive
> (just a snapshot) runs (hours later), there's 4 MB more data!? Nothing
> touches the data after the rsync completes. Both lines are probing the
> same property on the same dataset. How can they get a different result?

If you are getting the "written" property just after the rsync completes,
it's possible that there is still some data "in flight" inside ZFS. If you
run "sync", that should flush out all the dirty data and update the space
accounting.
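One way to avoid reading the property while dirty data is still in flight is to run "sync" first and then fetch the raw value with "zfs get -p". A minimal sketch in Python; the helper names are mine, not part of any ZFS tooling, and the dataset name in the comment is just the one from the logs above:

```python
import subprocess

def parse_written(raw):
    """Parse the output of 'zfs get -H -p -o value written <dataset>'
    into an exact byte count (no human-readable rounding)."""
    return int(raw.strip())

def written_after_sync(dataset):
    """Flush pending dirty data, then read the exact 'written' value.
    Hypothetical helper: requires the zfs CLI and suitable privileges."""
    subprocess.run(["sync"], check=True)  # commit in-flight data first
    raw = subprocess.check_output(
        ["zfs", "get", "-H", "-p", "-o", "value", "written", dataset])
    return parse_written(raw)

# e.g. written_after_sync("pool1/backup/home9")
```

Logging the exact byte count from both probes would also make it obvious whether the two readings really differ or are only rounded differently.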
Unfortunately this is only documented in the description of the "used"
property; we should add similar qualifiers to "available", "referenced",
"written", "logicalreferenced", etc.:

    The amount of space used, available, or referenced does not take into
    account pending changes. Pending changes are generally accounted for
    within a few seconds. Committing a change to a disk using fsync(3c) or
    O_SYNC does not necessarily guarantee that the space usage information
    is updated immediately.

> On another note, it seems there's also a minor discrepancy in the first
> and second lines as well. (189G/264G/1.41x) refers to the
> (used/lused/compressratio) properties. I don't know how they get rounded,
> but if there's 500 MB added, I would have thought that 189 should have
> been something like 189.5 beforehand? But that's a different issue.

The rounding is generally to 3 significant digits, and always rounds down.
See zfs_nicenum(). Use "zfs get -p" if you want exact numbers.

> Sometimes, the numbers are the same, like here...
>
> Sat Dec 7 01:33:21 EST 2013 sync start rsync://backup@mint-mrpriv/home10
> /local/backup/home10 (143G/221G/1.60x)
> Sat Dec 7 04:49:17 EST 2013 sync finish rsync://backup@mint-mrpriv/home10
> /local/backup/home10 (144G/222G/1.60x)
> Sat Dec 7 04:49:17 EST 2013 sync elapsed 03h:15m:56s
> rsync://backup@mint-mrpriv/home10 /local/backup/home10, 485M unarchived
> Sat Dec 7 06:33:01 EST 2013 archive create home10 daily 20131207 as
> pool1/backup/home10@20131207, 485M
>
> Other times they are 1 off again ...
> Sat Dec 7 04:49:23 EST 2013 sync start rsync://backup@forest-mrpriv/dept
> /local/backup/dept (89.6G/144G/1.68x)
> Sat Dec 7 05:19:20 EST 2013 sync finish rsync://backup@forest-mrpriv/dept
> /local/backup/dept (89.7G/144G/1.68x)
> Sat Dec 7 05:19:20 EST 2013 sync elapsed 00h:29m:57s
> rsync://backup@forest-mrpriv/dept /local/backup/dept, 127M unarchived
> Sat Dec 7 06:32:59 EST 2013 archive create dept daily 20131207 as
> pool1/backup/dept@20131207, 128M
>
> Here's a discrepancy again ...
>
> Sat Dec 7 05:45:46 EST 2013 sync start rsync://backup@bronze-mrpriv/mysqlbackup.bronze
> /local/backup/mysqlbackup.bronze (20.4M/20.6M/1.01x)
> Sat Dec 7 05:45:47 EST 2013 sync finish rsync://backup@bronze-mrpriv/mysqlbackup.bronze
> /local/backup/mysqlbackup.bronze (20.4M/20.6M/1.01x)
> Sat Dec 7 05:45:47 EST 2013 sync elapsed 00h:00m:01s
> rsync://backup@bronze-mrpriv/mysqlbackup.bronze /local/backup/mysqlbackup.bronze,
> no new data
> Sat Dec 7 06:33:01 EST 2013 archive create mysqlbackup.bronze daily
> 20131207 as pool1/backup/mysqlbackup.bronze@20131207, 6.82M
>
> For "no new data" to be printed on the third line, "written" would have
> had to be 0. Since (used/lused/compressratio) is the same for both the
> first and second lines, at the end of the backup there seemed to be no
> new data there either... yet a while later, the archive/snapshot runs,
> and there's 6.82 MB of new data.
>
> I'm just wondering if this behavior is odd, or if this is some kind of
> cache issue.
>
> I'm using full disks and an LSI HBA, so there's no oddness related to
> using ZFS with underlying RAID controller cards.
>
> I can send this to the FreeBSD filesystem list, but I figured I would try
> here first.
>
> Thanks in advance for any help you might be able to provide...
>
> Jason.
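The rounding behavior Matthew describes above (3 significant digits, always rounding down) can be sketched as follows. This is only an illustration of that description, not a port of the real zfs_nicenum() in libzfs, which may differ in edge cases:

```python
import math

def nicenum_floor(nbytes):
    """Approximate ZFS's human-readable size formatting as described
    in the reply: scale by powers of 1024, keep ~3 significant digits,
    and truncate (round down) instead of rounding to nearest."""
    units = ["", "K", "M", "G", "T", "P", "E"]
    v, i = float(nbytes), 0
    while v >= 1024 and i < len(units) - 1:
        v /= 1024.0
        i += 1
    if v >= 100:                    # e.g. 514M, 189G: no decimals fit
        s = str(math.floor(v))
    elif v >= 10:                   # e.g. 89.6G: one decimal place
        s = f"{math.floor(v * 10) / 10:.1f}"
    else:                           # e.g. 6.82M: two decimal places
        s = f"{math.floor(v * 100) / 100:.2f}"
    return s + units[i]

# At this precision, a few MB of still-uncommitted data can hide
# entirely inside a value like "514M"; "zfs get -p" shows exact bytes.
```

This is also why "189G" before and after a ~500 MB rsync is unsurprising: the last digit of a 3-significant-digit, round-down value only moves once a full gigabyte boundary is crossed.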