Date: Sun, 15 Jun 2014 09:11:12 -0700
From: Freddie Cash <fjwcash@gmail.com>
To: Dennis Glatting <freebsd@pki2.com>
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject: Re: Large ZFS arrays?
Message-ID: <CAOjFWZ5unVUVpied3v5_OVu4D92nLsmiu_Zuzcdd9gd70u5chQ@mail.gmail.com>
In-Reply-To: <1402846139.4722.352.camel@btw.pki2.com>
References: <1402846139.4722.352.camel@btw.pki2.com>
On Jun 15, 2014 8:29 AM, "Dennis Glatting" <freebsd@pki2.com> wrote:
>
> Anyone built a large ZFS infrastructures (PB size) and care to share
> words of wisdom?

We don't yet have a petabyte of storage (currently just under 200 TB raw),
but our infrastructure will scale to 720 TB raw (using 4 TB drives) without
daisy-chaining storage boxes, or 1.4 PB if daisy-chained.

We use a SuperMicro H8DGi-F6 motherboard in an SC826 2U chassis, with SSDs
for the OS, log, and cache vdevs connected directly to the onboard SAS
controller. Multiple LSI 9211-8e controllers connect to the external
storage boxes (each chassis has a SAS expander).

The storage chassis are 45-bay SC846-JBOD chassis, currently populated with
2 TB drives. We currently have only 2 storage chassis connected; the setup
supports 4 chassis directly, or 8 if you daisy-chain the storage chassis.

We use these only for backup storage, so we configured things for bulk
storage rather than raw I/O or throughput. We only have gigabit Ethernet,
and we saturate that with zfs send for several hours every day.

Hope that helps.
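A rough sketch of the kind of bulk-storage pool and zfs send replication
described above; pool, dataset, host, and device names are hypothetical,
not taken from the post, and the actual vdev layout is not given:

    # Hypothetical bulk-storage pool: wide raidz2 data vdevs, mirrored SSD
    # log devices, and an SSD cache device, matching the "bulk storage,
    # not raw I/O" goal described above.
    zpool create backup \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
        raidz2 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 \
        log mirror ada1 ada2 \
        cache ada3

    # Hypothetical nightly replication over gigabit Ethernet: send an
    # incremental snapshot from the source host and receive it into the
    # backup pool.
    ssh source-host 'zfs send -i tank/data@yesterday tank/data@today' | \
        zfs receive -F backup/source-host/data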
