Date: Mon, 21 Aug 2006 11:31:22 +0200
From: "O. Hartmann" <ohartman@uni-mainz.de>
Cc: freebsd-stable@freebsd.org
Subject: ATA RAID stripesize, performance
Message-ID: <44E97D6A.5090603@uni-mainz.de>
In-Reply-To: <200608211212.34514.doconnor@gsoft.com.au>
References: <200608211212.34514.doconnor@gsoft.com.au>
A few weeks ago I changed hard drives and rebuilt a RAID 0 volume on an nForce4-based RAID controller. The box runs FreeBSD 6.1-STABLE with a recent buildworld. After reinitializing the RAID, I noticed a significant performance penalty under heavy disk I/O.

The new drives in the RAID 0 array are two Hitachi T7K250 SATA II/300 drives. The drives before the change were a 200 GB Samsung SP2004C (SATA 300) and a 200 GB Maxtor DiamondMax 10 (SATA 150).

As I recall, the old RAID had a stripe size of 64 KB, and that was reported both by the kernel AND by "atacontrol status ar0". Running atacontrol on the new RAID 0 reports a stripe size of 128 KB, and I suspect the larger stripe size is hurting performance. As far as I can remember, I never went above a 64 KB stripe size on any RAID array (including the many SCSI RAID 5 systems I built in the past with FreeBSD 4/5/6). The BIOS of my ASUS A8N32-SLI Deluxe (AMI BIOS) offered this stripe size as the default without stating the actual value; it only said 'default', and I took it as the best known-and-evaluated value.

My question is: would changing the stripe size back to 64 KB give a performance boost, or are there newer insights favoring larger stripe sizes depending on hardware and block sizes? If 64 KB is still the best COMMON value, I will change back to 64 KB.

Thanks,
oh
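For what it's worth, one way the stripe size can matter is how many member disks a single request is spread across. The following is only an illustrative sketch (the 2-disk geometry matches the array described above; everything else is an assumption, and real throughput also depends on queuing, seeks, and the driver):

```python
# Hypothetical sketch: count how many stripe units, and hence how many
# member disks of a 2-disk RAID 0, a single byte-range request touches.
STRIPE_64K = 64 * 1024
STRIPE_128K = 128 * 1024

def stripes_touched(offset, length, stripe_size):
    """Number of stripe units the range [offset, offset+length) covers."""
    first = offset // stripe_size
    last = (offset + length - 1) // stripe_size
    return last - first + 1

def disks_touched(offset, length, stripe_size, ndisks=2):
    """Distinct disks serving the request, capped at the array width."""
    return min(stripes_touched(offset, length, stripe_size), ndisks)

# A stripe-aligned 128 KB read spans both disks with 64 KB stripes,
# but only one disk with 128 KB stripes.
print(disks_touched(0, 128 * 1024, STRIPE_64K))   # -> 2
print(disks_touched(0, 128 * 1024, STRIPE_128K))  # -> 1
```

So for large sequential transfers around 128 KB, a 64 KB stripe lets both spindles work on one request, while a 128 KB stripe may serialize it onto one disk; for small random I/O the larger stripe can instead reduce unnecessary involvement of both disks.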