Date: Mon, 7 Jun 2010 12:11:45 -0500 (CDT)
From: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To: Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs i/o error, no driver error
Message-ID: <alpine.GSO.2.01.1006071153250.12887@freddy.simplesystems.org>
In-Reply-To: <20100607121954.GA52932@icarus.home.lan>
References: <4C0CAABA.2010506@icyb.net.ua> <20100607083428.GA48419@icarus.home.lan> <4C0CB3FC.8070001@icyb.net.ua> <20100607090850.GA49166@icarus.home.lan> <201006071112.o57BCGMf027496@higson.cam.lispworks.com> <20100607121954.GA52932@icarus.home.lan>
On Mon, 7 Jun 2010, Jeremy Chadwick wrote:

> rubbish. "Datacenter-quality drives?" Oh, I think they mean
> "enterprise-grade drives", which really don't offer much more than
> high-end consumer-grade drives at this point in time[2]. One of the key
> points of ZFS's creation was to provide a reliable filesystem using
> cheap disks[3][4].

There are differences between disks. High-grade enterprise disks offer uncorrected error rates at least an order of magnitude better than typical tier-2 "SATA" disks, and sometimes two orders of magnitude better than a cheap maximum-density drive. Yes, there are tier-2 drives that come with SAS interfaces, and you can immediately tell which ones they are since they offer high storage capacities at more reasonable prices.

> What's confusing about this is the phrase that pool verification is done
> by "verifying all the blocks can be read". Doesn't that happen when a
> standard read operation comes down the pipe for a file? What I'm

No. A standard read does not verify that all data and metadata can be read. Only one copy of the data and metadata is read, and there may be several such copies. Metadata is always stored multiple times, even if the vdev does not offer additional redundancy.

> The topic of scrub intervals was also brought up a month later[7].
> Someone said:
>
> "We did a study on re-write scrubs which showed that once per year was a
> good interval for modern, enterprise-class disks. However, ZFS does a
> read-only scrub, so you might want to scrub more often".

The notion of "bit rot" on modern disk drives is largely unproven. The magnetism itself will surely last 1000+ years, so the concern is mostly with the stability of the media material and the heads. The idea that a scrub should re-write the data assumes that magnetic hysteresis is lost over time. This is all very silly for a device with an expected service life of 5 years. It is much more likely for the drive heads to lose their function or for a mechanical defect to appear.
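The error-rate difference above can be put in rough numbers. Here is a back-of-the-envelope sketch in Python, assuming datasheet-style unrecoverable read error (URE) rates of 1 bit in 10^14 for a cheap desktop-class drive and 1 bit in 10^15 for an enterprise-class drive; those specific figures are illustrative assumptions, not numbers from this thread:

```python
import math

# Probability of hitting at least one unrecoverable read error while
# reading a given amount of data, treating each bit as failing
# independently with probability `bit_error_rate`.
def p_ure(bytes_read, bit_error_rate):
    bits = bytes_read * 8
    # log1p/expm1 keep the computation accurate for tiny per-bit rates.
    return -math.expm1(bits * math.log1p(-bit_error_rate))

TB = 10 ** 12
for label, ber in (("desktop    (1e-14)", 1e-14),
                   ("enterprise (1e-15)", 1e-15)):
    print("%s: P(URE over 10 TB read) = %.1f%%"
          % (label, 100 * p_ure(10 * TB, ber)))
```

An order of magnitude in the error rate is the difference between a scrub of 10 TB that more likely than not trips over an unreadable sector and one that usually completes cleanly, which is why the quality tier matters more than the interface on the drive.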
Given the above, it makes sense to scrub more often on pools which see a lot of writes (to verify the recently written data), and less often on pools which are rarely updated. More levels of redundancy diminish the value of the scrub.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
