Date: Fri, 18 Dec 2009 10:42:41 -0500
From: Thomas Burgess <wonslung@gmail.com>
To: "James R. Van Artsdalen" <james-freebsd-fs2@jrv.org>
Cc: freebsd-fs <freebsd-fs@freebsd.org>
Subject: Re: ZFS RaidZ2 with 24 drives?
Message-ID: <deb820500912180742h77fdd635i48973b877d98281d@mail.gmail.com>
In-Reply-To: <4B2B9F82.4020909@jrv.org>
References: <568624531.20091215163420@pyro.de>
 <42952D86-6B4D-49A3-8E4F-7A1A53A954C2@spry.com>
 <957649379.20091216005253@pyro.de>
 <26F8D203-A923-47D3-9935-BE4BC6DA09B7@corp.spry.com>
 <deb820500912161320n43c552d7rf84264332574a701@mail.gmail.com>
 <E9C46E04-1A81-4EE5-909E-557EA08D16A9@corp.spry.com>
 <4B299CEA.3070705@jrv.org>
 <deb820500912161908o7736a667qe24eec9e33f53a8@mail.gmail.com>
 <4B2B9F82.4020909@jrv.org>
On Fri, Dec 18, 2009 at 10:28 AM, James R. Van Artsdalen <
james-freebsd-fs2@jrv.org> wrote:

> Thomas Burgess wrote:
> > One thing most people don't know about hard drives in general is that
> > sometimes up to 30% of the space is actually ECC.  With software raid
> > systems like ZFS, this will eventually be something that we can take
> > advantage of.
>

i was basing this information on a talk Jeff Bonwick gave.  Google
JeffBonwick_zfs-What_Next-SDC09.pdf and it should show the information
i'm talking about.

> ECC is less than 10% of the space.  The inter-sector gap and gap between
> a sector's address and data fields, etc, are larger and more problematic
> as rotation speeds increase.
>
> > Because of this, you can imagine a scenario where allowing ZFS to
> > use this ECC space as raw storage, while leaving the data corrections
> > to ZFS would be ideal.  It's not only a matter of space, it will also
> > lead to nice improvements in speed. (more data can be read/written by
> > the head as it passes)
>
> The disk drive industry's solution to this is 4K sector sizes.  See
> http://www.anandtech.com/storage/showdoc.aspx?i=3691
>
> Even ZFS would need major changes to use drives without ECC without an
> increased hard error rate.  I don't see this happening since no
> filesystems exist yet for this environment, and since transitions to new
> filesystems are so slow (99.9%+ of systems today are running filesystem
> architectures at least two decades old).
>

again, i got my information from the lead zfs developer.  I also spent a
lot of time on google reading up on this after hearing about it because i
found it to be so interesting.  I am a layman though, so perhaps i'm wrong.
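
To put rough numbers on the format-efficiency argument above, the sketch
below compares how much of a sector's on-platter footprint is user data for
legacy 512-byte sectors versus 4 KiB Advanced Format sectors.  The per-sector
overhead figures (about 15 bytes of gap/sync/address framing, and roughly
50 bytes of ECC per 512-byte sector versus roughly 100 bytes per 4 KiB
sector) are illustrative assumptions in the spirit of the Advanced Format
material linked above, not measurements of any particular drive.

    #!/usr/bin/env python
    # Back-of-the-envelope sector format efficiency.  The overhead numbers
    # below are assumptions for illustration, not vendor specifications.

    def format_efficiency(data_bytes, framing_bytes, ecc_bytes):
        """Fraction of a sector's on-platter bytes that carry user data."""
        total = data_bytes + framing_bytes + ecc_bytes
        return data_bytes / float(total)

    legacy = format_efficiency(512,  framing_bytes=15, ecc_bytes=50)
    advfmt = format_efficiency(4096, framing_bytes=15, ecc_bytes=100)

    print("512-byte sectors : %.1f%% user data" % (100 * legacy))
    print("4096-byte sectors: %.1f%% user data" % (100 * advfmt))
    print("capacity gained by 4K sectors: %.1f%%" % (100 * (advfmt / legacy - 1)))

With these assumed figures the 4 KiB format recovers on the order of 10% of
raw capacity, which is consistent with the "less than 10%" ECC estimate above
and with the gains the Advanced Format transition was meant to deliver,
without pushing error correction up into the filesystem.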