Date:      Sun, 12 Feb 2012 20:02:38 +0100
From:      Charles Orbello <cdorbell@free.fr>
To:        freebsd-fs@freebsd.org
Subject:   Re: HPC and zfs.
Message-ID:  <4F380CCE.5080605@free.fr>
In-Reply-To: <AB8B3E3A-1161-4855-B418-B37E16D0EC52@gmail.com>
References:  <4F2FF72B.6000509@pean.org> <20120206162206.GA541@icarus.home.lan> <AB8B3E3A-1161-4855-B418-B37E16D0EC52@gmail.com>

Hi Michael,

What is the impact on read and write latency of using a distributed
system?

Regards
Charles

On 06/02/2012 at 17:49, Michael Aronsen wrote:
> Hi,
>
> On Feb 6, 2012, at 17:22 , Jeremy Chadwick wrote:
>> - What single motherboard supports up to 192GB of RAM
> Get an HP DL580/585 - they support 2TB/1TB RAM.
>
>> - How you plan on getting roughly 410 hard disks (or 422 assuming
>>   an additional 12 SSDs) hooked up to a single machine
> Use LSI SAS92XX controllers (4 external x4 ports each) and SuperMicro SC847E26-RJBOD1 disk shelves.
> Each disk shelf needs 2 ports on the LSI controller, which means you get 90 disks per LSI card.
> The DL580/585s have 11 PCIe slots, so you'd end up with 990 disks per server using this setup.
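The drive counts quoted above can be sanity-checked with a quick back-of-the-envelope sketch. The 45-drives-per-shelf figure is an assumption on my part (the SC847E26-RJBOD1 is a 45-bay JBOD); the port and slot counts come from the mail itself:

```python
# Back-of-the-envelope check of the drive counts above.
# Assumption (not stated in the mail): 45 drives per
# SC847E26-RJBOD1 shelf. Ports and PCIe slots are as quoted.
ports_per_card = 4           # external x4 ports per LSI card
ports_per_shelf = 2          # each shelf is dual-linked to the controller
drives_per_shelf = 45

shelves_per_card = ports_per_card // ports_per_shelf   # 2 shelves
drives_per_card = shelves_per_card * drives_per_shelf  # 90 drives

pcie_slots = 11              # HP DL580/585
drives_per_server = pcie_slots * drives_per_card       # 990 drives
print(drives_per_card, drives_per_server)
```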
>
>> If you are considering investing the time and especially money (the cost
>> here is almost unfathomable, IMO) into this, I strongly recommend you
>> consider an actual hardware filer (e.g. NetApp).  Your performance and
>> reliability will be much greater, plus you will get overall better
>> support from NetApp in the case something goes wrong.  In the case you
>> run into problems with FreeBSD (and I can assure you in this kind of
>> setup you will) with this kind of extensive setup, you will be at the
>> mercy of developers' time/schedules with absolutely no guarantee that
>> your problem will be solved.  You definitely want a support contract.
>> Thus, go NetApp.
> We have NetApps at our university for home storage, but I would struggle to recommend them for HPC storage.
>
> A dedicated HPC filesystem such as Lustre or FhGFS (http://www.fhgfs.com/cms/) will almost certainly give you better performance as they're purpose made.
>
> We use FhGFS in a rather small setup (44 TB usable space and ~200 HPC nodes), but they do have installations with 700TB+.
> The setup consists of 2 metadata nodes and 4 storage nodes, all SuperMicro servers with 24 WD VelociRaptor 600 GB 10K RPM disks.
> This setup gives us 4.8GB/sec write and 4.3GB/sec read speeds, all for a lot less than a comparable NetApp solution (we paid around €30.000).
> It now has support for per-folder mirroring for resilience.
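The aggregate figures above imply a per-disk throughput worth noting. A rough sketch, using only the numbers stated in the mail (4 storage nodes with 24 disks each, 4.8 GB/s write and 4.3 GB/s read):

```python
# Rough per-disk throughput implied by the figures above.
storage_nodes = 4
disks_per_node = 24
total_disks = storage_nodes * disks_per_node   # 96 data disks

write_gb_s = 4.8   # aggregate write, GB/s
read_gb_s = 4.3    # aggregate read, GB/s

per_disk_write_mb = write_gb_s * 1000 / total_disks  # 50.0 MB/s
per_disk_read_mb = read_gb_s * 1000 / total_disks    # ~44.8 MB/s
print(round(per_disk_write_mb, 1), round(per_disk_read_mb, 1))
```

Around 50 MB/s per 10K RPM drive is a plausible sequential figure, which suggests the setup is close to disk-limited rather than network-limited.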
>
> Currently it only runs on Linux, but I'm considering a FreeBSD port to get ZFS for volume management, and now that OFED is in FreeBSD 9, InfiniBand is possible.
>
> I'd highly recommend a parallel filesystem; unfortunately few, if any, are available on FreeBSD at this time.
>
> Regards,
> Michael
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
