Date: Sun, 30 Nov 2003 19:27:06 -0800
From: Rishi Chopra <rchopra@cal.berkeley.edu>
To: questions@freebsd.org
Subject: Raid Array Stripe Size Investigation
Message-ID: <3FCAB50A.8090705@cal.berkeley.edu>
After some initial trouble getting my FreeBSD box up and running, I'm happy to report that I've been able to run a few empirical tests of IDE hardware RAID stripe size versus performance. I decided to do these benchmarks when I noticed a lack of this information both on the web and in the mailing list archives. Details of the test setup and the results can be found here:

http://www.ocf.berkeley.edu/~rchopra/RaidResults.html

If anyone would care to add some insight into why Bonnie and IOZone are or aren't good benchmarks for this kind of test, feel free to share. Also, I didn't bother to write any conclusions for the tests; I merely generated the data. As with everything RAID, there is no single 'best' configuration, and no clear winner emerged from the results. I also doubt you could draw any conclusions about the OS or the hardware driver implementation from these numbers. As far as single-user systems go, I don't think you'd see any difference in performance regardless of stripe size. If someone can suggest a methodology for multi-user testing, I'd consider giving it a go.

NOTE: I do still have a question about large disks. My 4x200GB RAID5 array (~550GB) won't boot if I tell sysinstall to use the entire disk as one slice, with partitions within that slice. Given the large size of the array, I ignored the warnings about the number of cylinders, but I'm still curious why I can't use the whole disk as one bootable slice. Any suggestions?
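For context, a minimal sketch of what per-stripe-size runs with these two tools might look like; the mount point, file size, and flags below are placeholders for illustration, not the parameters actually used for the results on the page above.

    #!/bin/sh
    # Sketch only: mount point, sizes, and labels are assumptions.
    FS=/raid          # filesystem sitting on the array under test
    SIZE=1024         # test file size in MB; should exceed RAM so
                      # caching doesn't dominate the numbers

    # Classic Bonnie: sequential char/block I/O plus random seeks.
    bonnie -d $FS -s $SIZE -m stripe-test

    # IOZone: sequential write (-i 0) and read (-i 1) on one test
    # file, 64 KB records.
    iozone -s ${SIZE}m -r 64k -i 0 -i 1 -f $FS/iozone.tmp

The idea would be to repeat the same pair of runs after rebuilding the array at each stripe size and newfs'ing the filesystem, so the only variable that changes between runs is the stripe size itself.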
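On the large-disk question, a rough sketch of commands that would show exactly what geometry and layout FreeBSD thinks the array has; the device name ar0 is a guess on my part and will differ depending on the controller.

    # ar0 is an assumption; substitute whatever the kernel calls the
    # array (aacd0, twed0, amrd0, ...).
    dmesg | grep ar0      # size/geometry the kernel reports
    fdisk ar0             # MBR slice table and the geometry fdisk sees
    disklabel ar0s1       # BSD label inside the first slice
                          # (bsdlabel on 5.x)

If the cylinder count fdisk reports doesn't match what the controller's BIOS reports, that mismatch might at least be a place to start looking.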