Date: Sat, 30 Aug 2014 06:05:12 -0700
From: daniel <DStaal@usa.net>
To: freebsd-questions@freebsd.org
Subject: Re: gvinum raid5 vs. ZFS raidz
Message-ID: <f8433c184dc451a3af976a43377c32e5@mail.magehandbook.com>
In-Reply-To: <201408300147.s7U1leJP024616@sdf.org>
References: <201408020621.s726LsiA024208@sdf.org>
 <alpine.BSF.2.11.1408020356250.1128@wonkity.com>
 <53DCDBE8.8060704@qeng-ho.org>
 <201408060556.s765uKJA026937@sdf.org>
 <53E1FF5F.1050500@qeng-ho.org>
 <201408070831.s778VhJc015365@sdf.org>
 <alpine.BSF.2.11.1408071034510.64214@mail.fig.ol.no>
 <201408070936.s779akMv017524@sdf.org>
 <alpine.BSF.2.11.1408071226020.64214@mail.fig.ol.no>
 <201408071106.s77B6JCI005742@sdf.org>
 <5B99AAB4-C8CB-45A9-A6F0-1F8B08221917@kraus-haus.org>
 <201408220940.s7M9e6pZ008296@sdf.org>
 <7971D6CA-AEE3-447D-8D09-8AC0B9CC6DBE@kraus-haus.org>
 <201408260641.s7Q6feBc004970@sdf.org>
 <9588077E-1198-45AF-8C4A-606C46C6E4F8@kraus-haus.org>
 <201408280636.s7S6a5OZ022667@sdf.org>
 <25B567A0-6639-41EE-AB3E-96AFBA3F11B7@kraus-haus.org>
 <201408300147.s7U1leJP024616@sdf.org>
On 2014-08-29 18:47, Scott Bennett wrote:
> Paul Kraus <paul@kraus-haus.org> wrote:
>
>> On Aug 28, 2014, at 2:36, Scott Bennett <bennett@sdf.org> wrote:
>>
>> > Paul Kraus <paul@kraus-haus.org> wrote:
>> > Are you using any drives larger than 1 TB?
>>
>> I have been testing with a bunch of 2TB (3 HGST and 1 WD). I have been
>> using ZFS and it has not reported *any* checksum errors.
>>
> What sort of testing? Unless the data written with errors are read
> back, how would ZFS know about any checksum errors? Does ZFS implement
> write-with-verify? Copying some humongous file and then reading it back
> for comparison (or, with ZFS, just reading them) ought to bring the
> checksums into play. Of course, a scrub should do that, too.

No write-with-verify that I know of (at least, this type of verify), but 
any read should bring the checksums into play. (And a scrub, of course.)

> As soon as I can get two more 2 TB drives and set them up under ZFS,
> I intend to try the equivalent of that. Because drives cannot be added
> to an existing raidzN, I need to wait until then to create the 6-drive
> raidz2. However, that original 1.1 TB file is currently sitting on one
> of the four drives I already have that are intended for the raidz2, so
> that file will be trashed by creating the raidz2. The file is a dump(8)
> file of a 1.2 TB file system that is nearly full, so I can run the dump
> again with the output going to the newly created pool, after which I
> can try a "dd if=dumpfile of=/dev/null" to see whether ZFS detects any
> problems. If it doesn't, then I can try a scrub on the pool to see
> whether that finds any problems.
> My expectation is that I will end up contacting one or more
> manufacturers to try to replace at least two drives based on whatever
> ZFS detects, but I would be glad to be mistaken about that for now. If
> two are that bad, then I hope that ZFS can keep things running until
> the replacements show up here.
Just for the testing, you can set up a one-drive zpool. ZFS wouldn't be 
able to repair the errors in that case (unless you set the 'copies' 
property, but then you'll need more disk space for each write; 
basically, that would mean writing a backup to the same drive), but it 
will still be able to detect them.

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author.  Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes.  This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---------------------------------------------------------------
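[A minimal sketch of the detection test discussed in this thread. The 
pool name, device node, and file path are placeholders, not taken from 
the thread; adjust them to the actual hardware.]

```shell
# Create a hypothetical one-drive test pool (device name is a placeholder).
zpool create testpool /dev/ada4

# Optional: store two copies of every block so ZFS can repair, not just
# detect, corruption, at roughly double the space cost. This only
# protects data written *after* the property is set.
zfs set copies=2 testpool

# Write the dump file onto the pool, then read it back; any checksum
# mismatch surfaces as a read error and is recorded by ZFS:
dd if=/testpool/dumpfile of=/dev/null bs=1m

# A scrub reads and verifies every allocated block; inspect the CKSUM
# column and the error summary afterwards:
zpool scrub testpool
zpool status -v testpool
```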