Date:      Mon, 6 Feb 2012 18:47:53 +0100
From:      Peter Ankerstål <peter@pean.org>
To:        Michael Aronsen <michael.aronsen@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: HPC and zfs.
Message-ID:  <ACD2212B-7B3B-488E-AEFF-52A2CE9ABBC6@pean.org>
In-Reply-To: <AB8B3E3A-1161-4855-B418-B37E16D0EC52@gmail.com>
References:  <4F2FF72B.6000509@pean.org> <20120206162206.GA541@icarus.home.lan> <AB8B3E3A-1161-4855-B418-B37E16D0EC52@gmail.com>


On 6 Feb 2012, at 17:49, Michael Aronsen wrote:

> Hi,
> 
> On Feb 6, 2012, at 17:22 , Jeremy Chadwick wrote:
>> - What single motherboard supports up to 192GB of RAM
> 
> Get an HP DL580/585 - they support up to 2TB/1TB of RAM respectively.
> 
>> - How you plan on getting roughly 410 hard disks (or 422 assuming
>>  an additional 12 SSDs) hooked up to a single machine
> 
> Use LSI SAS92xx controllers with 4 external x4 ports, and SuperMicro SC847E26-RJBOD1 disk shelves.
> Each disk shelf takes 2 ports on the LSI controller, so one 4-port card drives two 45-bay shelves - 90 disks per card.
> The DL580/585s have 11 PCIe slots, so you'd end up with 990 disks per server with this setup (worked through in the sketch below).
> 
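A quick back-of-envelope check of that drive-count arithmetic, as a minimal
Python sketch. Every input is stated in the paragraph above; the 45 bays per
shelf is the one derived figure (90 disks per card across two shelves), and
it matches the SC847E26-RJBOD1's actual bay count.

    # Drive counts for the quoted setup: LSI SAS92xx cards with 4
    # external x4 ports, SC847E26-RJBOD1 shelves taking 2 ports each,
    # and a DL580/585 with 11 PCIe slots.
    ports_per_card = 4
    ports_per_shelf = 2
    bays_per_shelf = 45    # implied by "90 disks per LSI card" / 2 shelves
    pcie_slots = 11

    shelves_per_card = ports_per_card // ports_per_shelf    # 2
    disks_per_card = shelves_per_card * bays_per_shelf      # 90
    disks_per_server = pcie_slots * disks_per_card          # 990
    print(disks_per_card, disks_per_server)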
>> 
> 
> We have NetApps at our university for home storage, but I would struggle to recommend them for HPC storage.
> 
> A dedicated HPC filesystem such as Lustre or FhGFS (http://www.fhgfs.com/cms/) will almost certainly give you better performance, as they're purpose-built.
> 
> We use FhGFS in a rather small setup (44 TB usable space and ~200 HPC nodes), but they do have installations with 700TB+.
> The setup consists of 2 metadata nodes and 4 storage nodes, all SuperMicro servers with 24 WD VelociRaptor 600 GB 10K RPM disks each.
> This setup gives us 4.8GB/sec write and 4.3GB/sec read, all for a lot less than a comparable NetApp solution (we paid around €30,000; the per-node and per-disk numbers this implies are sketched below).
> It now has support for mirroring at the per-folder level, for resilience.
> 
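For scale, the per-node and per-disk rates those figures imply, as a small
Python sketch. All inputs are quoted above; the divisions (and the
GB = 1000 MB assumption) are the only additions.

    # Per-node / per-disk figures implied by the quoted FhGFS setup.
    storage_nodes = 4
    disks_per_node = 24
    write_gb_s = 4.8          # aggregate write, GB/sec
    usable_tb = 44
    cost_eur = 30000

    total_disks = storage_nodes * disks_per_node     # 96
    print(write_gb_s / storage_nodes)                # 1.2 GB/sec per storage node
    print(write_gb_s * 1000 / total_disks)           # 50.0 MB/sec per disk
    print(round(cost_eur / usable_tb))               # 682 EUR per usable TB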
> Currently it only runs on Linux, but I'm considering a FreeBSD port to get ZFS for volume management, and now that OFED is in FreeBSD 9, InfiniBand is possible.
> 
> I'd highly recommend a parallel filesystem; unfortunately few, if any, are available on FreeBSD at this time.
> 
Thanks for the input. We actually had a visit from NetApp and Whamcloud recently, and they were pitching a NetApp + Whamcloud (Lustre) installation.



