Date: Thu, 10 Aug 2006 14:31:55 -0400
From: "Bucky Jordan" <bjordan@lumeta.com>
To: <freebsd-hardware@freebsd.org>
Subject: RE: PERC 5/E SAS RAID in Dell PowerEdge 1950/2950
Message-ID: <78ED28FACE63744386D68D8A9D1CF5D410496E@MAIL.corp.lumeta.com>
Original message:
http://www.freebsd.org/cgi/getmsg.cgi?fetch=40885+44951+/usr/local/www/db/text/2006/freebsd-hardware/20060625.freebsd-hardware

>Does anyone have details about the new PERC 5/E SAS RAID controller Dell
>is (or will soon be) shipping in the 1950/2950?

I've got one that I'm setting up/testing for Postgres.

>This replaces the long standing PERC4 (which was an OEM LSI / AMI
>MegaRAID U320) in the [1,2]850 series.

I've used the 2850 with FreeBSD/Postgres, but didn't have the need at the time to do much tuning, so all I know is that it worked.

>Obviously this is an OEM chipset as well. I see it listed in mfi(4).
>It appears to be backported into RELENG_6.

I'm running 6.1-RELEASE amd64. It picked up the mfi device just fine, and even recognized it as a PERC 5/i. However, that's about where the pleasantness ends.

Here's the hardware:

2x dual-core 3.0 GHz CPUs (Xeon 5160, 1333 MHz FSB, 4 MB shared cache per socket)
8 GB RAM (DDR2, fully buffered, dual ranked, 667 MHz)
6x 300 GB 10k RPM SAS drives
PERC 5/i with 256 MB battery-backed cache
DRAC5 (which I do see listed in dmesg)

Here's my experience so far (please keep in mind I'm not a FreeBSD expert, so pointers on where I went wrong are appreciated).

1. The box came configured as a RAID 10 across all 6 disks. It appeared to do mirroring first, then striping. I ran the following:

time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"

This returned ~117 MB/s, which seems a bit slow for 6 spindles. I also ran bonnie++ with similar results. Keep in mind that just one of these SAS drives easily pumps out a sustained 75 MB/s read/write rate.

2. Thinking there might be a problem with the controller and complex RAID levels (this was discussed on the Postgres performance mailing list), I tried RAID 5 and RAID 0 configurations. That produced the following results, which seem more reasonable for the hardware:

RAID 5 (4 disks): 1024000000 bytes transferred in 6.375067 secs (160625763 bytes/sec)
RAID 0 (2 disks): 1024000000 bytes transferred in 7.392225 secs (138523922 bytes/sec)

Both of the above numbers look reasonable to me, even if the RAID 5 figure is not stellar.

3. So my initial conclusion was that the PERC 5/i handles nested RAID levels (10, 0+1, etc.) poorly. However, a coworker suggested I test on Knoppix 5. Here are the results:

RAID 5 (4 disks): ~270 MB/s with dd on ext2 (very close to the theoretical max)
RAID 10 (4 disks): mixed results, anywhere from 148 MB/s to an unrealistic 700+ MB/s (which I attribute to caching in RAM, although issuing a sync should force it to disk... odd)

So I ran the following:

bonnie++ -d bonnie -s 6600:8k

and got ~100 MB/s for sequential input.

I'm not sure why Knoppix is so much faster than FreeBSD 6.1 amd64 on the 4-disk RAID 5 test, but I'm going to move forward and use all 6 disks in a RAID 5 configuration.

4. During testing, I tried installing FreeBSD on disk 0, then setting up a RAID 0 on disks 2 & 3 and a RAID 1 on disks 4 & 5 to test basic RAID performance. Unfortunately, FreeBSD was unable to recognize the other volumes. In dmesg I would see mfid0, mfid1, and mfid2, but when I tried to mount them using sysinstall and the instructions in the Handbook, FDisk did not show the correct sizes. Furthermore, I think it always pointed to mfid0. More specifically:

A: go to FDisk, select mfid0, partition 250 GB /, remainder swap
B: select mfid1, saw the partitions created in (A)
C: select mfid2, saw the partitions created in (A)

Note: steps A-C were performed on a freshly initialized RAID as described above.
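One thing I have not tried yet for #4 is skipping sysinstall entirely and setting up the extra volumes from the command line, along the lines of the Handbook's "Adding Disks" section. A rough, untested sketch of what I have in mind (device names are the ones from dmesg; the /raid0 and /raid1 mount points are just placeholders):

# initialize a single slice covering all of mfid1, write a default label, and newfs it
fdisk -BI mfid1
bsdlabel -w mfid1s1
newfs -U /dev/mfid1s1a
mkdir -p /raid0
mount /dev/mfid1s1a /raid0

# same for the RAID 1 volume on mfid2
fdisk -BI mfid2
bsdlabel -w mfid2s1
newfs -U /dev/mfid2s1a
mkdir -p /raid1
mount /dev/mfid2s1a /raid1

If fdisk and bsdlabel report sane sizes for mfid1 and mfid2 that way, the problem is probably in sysinstall's FDisk screen rather than in mfi(4) itself.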
I have not had the chance to try the above RAID configuration on any other OS at this point.

Sorry for the long post, but hopefully some of the above info will be useful. If anyone has suggestions for solving the multiple-RAID-set issue (#4 above), or hints on tuning FreeBSD I/O performance for RAID, I'd be interested.

Thanks,

Bucky