Date: Mon, 16 Jun 2014 00:50:56 -0400
From: Rich <rincebrain@gmail.com>
To: Freddie Cash <fjwcash@gmail.com>
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject: Re: Large ZFS arrays?
Message-ID: <CAOeNLuocQ1=XwY-D%2Bed5hLL=B_SWzeC93%2B=A_vXNvexdP2BGew@mail.gmail.com>
In-Reply-To: <CAOjFWZ5unVUVpied3v5_OVu4D92nLsmiu_Zuzcdd9gd70u5chQ@mail.gmail.com>
References: <1402846139.4722.352.camel@btw.pki2.com> <CAOjFWZ5unVUVpied3v5_OVu4D92nLsmiu_Zuzcdd9gd70u5chQ@mail.gmail.com>
I suppose I should jump in...

8 SC847E26-RJBOD1 per Dell R720 "head" w/128GB RAM, with 2 of the
9201-16e controllers, one port connected per enclosure. (The graphs of
how it bottlenecks depending on how you daisy-chain things
are...fascinating!)

4 zpools, 11 vdevs of 8 disks each, one disk per JBOD per vdev.

SSDs for L2ARC or SLOG are of limited usefulness, given the size of
the datasets involved - they'll save you on lots of tiny writes over
NFS at times, but otherwise, enough spinning heads will beat the SSDs
for sequential IO in non-pathological cases.

I can describe more things as desired. :)

- Rich

On Sun, Jun 15, 2014 at 12:11 PM, Freddie Cash <fjwcash@gmail.com> wrote:
> On Jun 15, 2014 8:29 AM, "Dennis Glatting" <freebsd@pki2.com> wrote:
>>
>> Anyone built a large ZFS infrastructure (PB size) and care to share
>> words of wisdom?
>
> We don't yet have a petabyte of storage (currently just under 200 TB raw),
> but our infrastructure will scale to 720 TB raw (using 4 TB drives) without
> daisy-chaining storage boxes, or 1.4 PB if daisy-chained.
>
> We use a SuperMicro H8DGi-F6 motherboard in an SC826 2U chassis with SSDs
> for the OS, log and cache vdevs directly connected to the onboard SAS
> controller. We have multiple LSI 9211-8e controllers connected to the
> external storage boxes (each chassis has a SAS expander).
>
> The storage chassis are 45-bay SC846-JBOD chassis, currently using 2 TB
> drives. We currently have only 2 storage chassis connected. It supports 4
> chassis directly, or 8 if you daisy-chain the storage chassis.
>
> We currently use these only for backup storage, so we configured things
> for bulk storage rather than raw I/O or throughput. We have only gigabit
> Ethernet, and we saturate that with zfs send every day for several hours.
>
> Hope that helps.
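For anyone sanity-checking the capacity and disk-count figures in this thread, a quick sketch (all numbers are taken from the two layouts described above; "TB" here means decimal terabytes, and the per-JBOD disk count assumes Rich's disks are spread evenly across his 8 enclosures as he describes):

```python
# Back-of-the-envelope check of the topologies described in this thread.

def raw_capacity_tb(chassis, bays_per_chassis, drive_tb):
    """Total raw capacity across all JBOD chassis, in decimal TB."""
    return chassis * bays_per_chassis * drive_tb

# Freddie's 45-bay JBODs populated with 4 TB drives:
print(raw_capacity_tb(4, 45, 4))   # 720 TB without daisy-chaining
print(raw_capacity_tb(8, 45, 4))   # 1440 TB, i.e. the quoted ~1.4 PB

# Rich's layout: 4 zpools x 11 vdevs x 8 disks, one disk per JBOD
# per vdev, spread over 8 enclosures:
pools, vdevs_per_pool, disks_per_vdev, jbods = 4, 11, 8, 8
total_disks = pools * vdevs_per_pool * disks_per_vdev
print(total_disks)             # 352 disks total
print(total_disks // jbods)    # 44 disks per 45-bay enclosure
```

Note how the one-disk-per-JBOD-per-vdev layout means any single enclosure failure costs each vdev only one disk.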
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"