Date: Tue, 2 Sep 2003 13:14:58 -0700
From: "Max Clark" <max.clark@media.net>
To: "Poul-Henning Kamp" <phk@phk.freebsd.dk>
Cc: freebsd-questions@freebsd.org
Subject: RE: FW: 20TB Storage System
Message-ID: <ILENIMHFIPIBHJLCDEHKEEMIDCAA.max.clark@media.net>
In-Reply-To: <50599.1062532904@critter.freebsd.dk>
I know adding ccd/vinum to the equation will lower my IO throughput, but
the question is... if I have an external hardware shelf with 3.5TB (16
250GB drives w/ RAID 5 from hardware) and I put a RAID 0 stripe across 3
of these shelves, what would my expected loss of IO be?

Thanks,
Max

-----Original Message-----
From: Poul-Henning Kamp [mailto:phk@phk.freebsd.dk]
Sent: Tuesday, September 02, 2003 1:02 PM
To: Max Clark
Cc: freebsd-questions@freebsd.org; freebsd-performance@freebsd.org;
    freebsd-hackers@freebsd.org
Subject: Re: FW: 20TB Storage System

In message <ILENIMHFIPIBHJLCDEHKIEMEDCAA.max.clark@media.net>, "Max
Clark" writes:

>Given the above:
>1) What would my expected IO be using vinum to stripe the storage
>enclosures detailed above?

That depends a lot on the application's I/O pattern, and I doubt a
precise prediction is possible.

In particular, FibreChannel throughput is hard to predict because the
various implementations each seem to have their own peculiar quirks,
performance-wise.

On SEAGATE ST318452 disks, I see sequential transfer rates at the
outside rim of the disk of 58 MB/sec. If I stripe two of them with CCD,
I get 107 MB/sec. CCD has better performance than Vinum where the two
are comparable.

RAID-5 and striping across a large number of disks do not scale
linearly, performance-wise. In particular, you _may_ see your average
access time drop somewhat, but there is by far no guarantee that it will
be better than that of an individual drive.

>2) What is the maximum size of a filesystem that I can present to the
>host OS using vinum/ccd? Am I limited anywhere that I am not aware of?

Good question; I'm not sure we currently know the exact barrier.

>3) Could I put all 20TB on one system, or will I need two to sustain
>the IO required?

Spreading it will give you more I/O bandwidth.
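For concreteness, the RAID-0 stripe across three shelves that Max describes might be set up with vinum roughly as sketched below. This is only a sketch under assumptions: it presumes each shelf's hardware RAID-5 array shows up on the host as a single disk, and the /dev/da0s1e-style device names and the "shelf0"/"bigstripe" labels are placeholders, not anything from the thread.

```shell
# /etc/vinum.conf -- stripe three hardware-RAID shelves into one volume
# (device names are placeholders for whatever the FC shelves attach as)
drive shelf0 device /dev/da0s1e
drive shelf1 device /dev/da1s1e
drive shelf2 device /dev/da2s1e
volume bigstripe
  plex org striped 512k        # 512 kB stripe size; tune to the workload
    sd drive shelf0
    sd drive shelf1
    sd drive shelf2
```

Then, on FreeBSD of that era, something like:

```shell
vinum create /etc/vinum.conf
newfs -v /dev/vinum/bigstripe
mount /dev/vinum/bigstripe /storage
```

The CCD route PHK benchmarks would instead look like
"ccdconfig ccd0 128 none /dev/da0s1e /dev/da1s1e /dev/da2s1e"
(a 128-sector interleave, again a placeholder value).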
--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.