Date: Thu, 16 Sep 2004 18:53:49 -0400
From: David Schultz <das@FreeBSD.ORG>
To: Kris Kennaway <kris@FreeBSD.ORG>
Cc: Sam <sah@softcardsystems.com>
Subject: Re: ZFS
Message-ID: <20040916225349.GA892@VARK.homeunix.com>
In-Reply-To: <20040916211837.GE70401@hub.freebsd.org>
References: <Pine.LNX.4.60.0409161031280.28550@athena> <20040916211837.GE70401@hub.freebsd.org>
> On Thu, Sep 16, 2004 at 10:31:57AM -0500, Sam wrote:
> > Let's suppose you generate an exabyte of storage per year.  Filling a
> > 64-bit filesystem would take you approximately 8 million years.

I suggest that you review your calculations.

> > I'm not saying we'll never get there, [...]
> >
> > It's a _single filesystem_.  If you want another 8192 ZB, just make
> > another.

A goal for ZFS is to eliminate that kind of nonsense.

On Thu, Sep 16, 2004, Kris Kennaway wrote:
> The detectors in the particle accelerator at Fermilab produce raw data
> at a rate of 100 TB/sec (yes, 100 terabytes per second).  They have to
> use a three-tiered system of hardware filters to throw away most of
> this and try to pick out the events that might actually be
> interesting, to get it down to a "slow" data rate of 100 MB/sec that
> can actually be written out to storage.  If the hardware and software
> were up to it, I'm sure they'd want to keep much more of the data than
> this.
>
> Now, over a year of runtime, the raw data amounts to (according to
> Google Calculator):
>
> (100 (terabytes / sec)) * 1 year = 3.4697207 * 10^21 bytes
>
> or just over 2^71 bytes in a year.

A UC Berkeley study has some interesting statistics on total storage
sold per year, including a breakdown by medium:

http://www.sims.berkeley.edu/research/projects/how-much-info-2003/printable_magnetic.pdf

They place the total storage sold in 2003 at 2^68 bytes and the amount
of original data produced at 2^62 bytes.
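The arithmetic in the thread can be checked with a short sketch. This assumes a Julian year of 365.25 days, so the yearly total comes out slightly different from the Google Calculator figure quoted above, but the order of magnitude (just over 2^71 bytes) is the same:

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year; an assumed convention

# Sam's claim: filling a 64-bit filesystem at one exabyte (10^18 bytes)
# per year takes ~8 million years.  In fact 2^64 bytes is only ~18.4 EB:
fs64_bytes = 2**64
years_to_fill = fs64_bytes / 1e18
print(f"Years to fill a 2^64-byte filesystem at 1 EB/year: {years_to_fill:.1f}")
# -> about 18.4 years, nowhere near 8 million

# Kris's figure: 100 TB/sec of raw detector data over a year of runtime.
raw_bytes_per_year = 100e12 * SECONDS_PER_YEAR
print(f"Raw bytes per year: {raw_bytes_per_year:.3e} "
      f"(2^{math.log2(raw_bytes_per_year):.1f})")
# -> roughly 3.16e21 bytes, i.e. just over 2^71
```

So at that raw rate, a single 64-bit filesystem would overflow in well under an hour, which is the point of ZFS's 128-bit address space.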