Date: Thu, 16 Sep 2004 21:46:50 +0000
From: Kris Kennaway <kris@FreeBSD.org>
To: Wilko Bulte <wb@freebie.xs4all.nl>
Cc: Sam <sah@softcardsystems.com>
Subject: Re: ZFS
Message-ID: <20040916214650.GA73372@hub.freebsd.org>
In-Reply-To: <20040916212233.GA64634@freebie.xs4all.nl>
References: <Pine.LNX.4.60.0409161031280.28550@athena> <20040916211837.GE70401@hub.freebsd.org> <20040916212233.GA64634@freebie.xs4all.nl>
On Thu, Sep 16, 2004 at 11:22:33PM +0200, Wilko Bulte wrote:
> On Thu, Sep 16, 2004 at 09:18:37PM +0000, Kris Kennaway wrote:
> > On Thu, Sep 16, 2004 at 10:31:57AM -0500, Sam wrote:
> >
> > > >CERN's LHC is expected to produce 10-15 PB/year. e-science ("the grid")
> > > >is capable of producing whopping huge data sets, and people already are.
> > > >Many aspects of data custodianship are still open questions, but there's
> > > >little doubt that what's cutting-edge storage today will be in
> > > >filesystems between now and 10 years' time. Filesystem views on data
> > > >sets that are physically stored and replicated at disparate locations
> > > >around the planet are the kind of things that potentially need larger
> > > >than 64-bit quantities.
> > > >
> > >
> > > Let's suppose you generate an exabyte of storage per year. Filling a
> > > 64-bit filesystem would take you approximately 8 million years.
> > >
> > > I'm not saying we'll never get there, just that doing it now is nothing
> > > more than a "look at us, ain't we forward thinking" ploy. It's a
> > > _single filesystem_. If you want another 8192 ZB, just make another.
> >
> > The detectors in the particle accelerator at Fermilab produce raw data
> > at a rate of 100 TB/sec (yes, 100 terabytes per second). They have to
> > use a three-tiered system of hardware filters to throw away most of
> > this and try to pick out the events that might actually be
> > interesting, to get it down to a "slow" data rate of 100 MB/sec that
> > can actually be written out to storage. If the hardware and software
>
> 100MB/s is slow; I think this number is wrong.

I think they do heavier software processing in the third stage, so it
might have been CPU-bound instead of storage-bound. The figures I
quoted were from
http://humanresources.web.cern.ch/humanresources/external/training/acad/sphicas.pdf
which I now see was from 1998, so this might have improved somewhat.
It's also for LHC, although I have the Fermilab stats from 2001
somewhere, which I remember being of comparable magnitude.
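For what it's worth, the figures in the thread are easy to sanity-check. A
minimal back-of-the-envelope sketch (assuming "64-bit filesystem" means
2^64 addressable bytes, and using the 100 TB/s and 100 MB/s rates quoted
above):

```python
# Sanity-check of the numbers discussed in this thread.
# Assumption: a "64-bit filesystem" limit of 2**64 addressable bytes.

RAW_RATE = 100e12        # 100 TB/s raw detector output
WRITE_RATE = 100e6       # 100 MB/s actually written to storage

# Fraction of raw data the three-tiered filters must discard:
reduction = RAW_RATE / WRITE_RATE   # one part in a million survives

FS_LIMIT = 2 ** 64                  # bytes addressable with 64-bit offsets
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# How long the post-filter stream takes to fill such a filesystem:
years_to_fill = FS_LIMIT / WRITE_RATE / SECONDS_PER_YEAR

print(f"filter reduction factor: {reduction:.0e}")
print(f"years to fill 2^64 bytes at 100 MB/s: {years_to_fill:.0f}")
```

So even the filtered stream fills a 2^64-byte space in a few thousand
years at a single site; the much shorter or longer fill times quoted
elsewhere in the thread depend on what rate and what addressing unit one
assumes.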
Kris
--
In God we Trust -- all others must submit an X.509 certificate.
-- Charles Forsythe <forsythe@alum.mit.edu>
