Date: Thu, 27 May 2010 19:40:03 GMT
From: Alexander Motin <mav@FreeBSD.org>
To: freebsd-bugs@FreeBSD.org
Subject: Re: kern/147086: AHCI not being enabled on PC
Message-ID: <201005271940.o4RJe3jo088056@freefall.freebsd.org>
The following reply was made to PR kern/147086; it has been noted by GNATS.

From: Alexander Motin <mav@FreeBSD.org>
To: "ryan@ryanholt.net" <ryan@ryanholt.net>
Cc: bug-followup@freebsd.org, Garrett Cooper <yanefbsd@gmail.com>
Subject: Re: kern/147086: AHCI not being enabled on PC
Date: Thu, 27 May 2010 22:35:37 +0300

ryan@ryanholt.net wrote:
> OK, I got mvs compiled and the drives are using it now. I tested hot
> swap: I removed and replaced drives while writing data to the ZFS pool
> sitting on top of them. One issue I noticed is that the glabels don't
> seem to be read once I insert the disk back into the server. It seems
> I need to re-label the drive and then put it back into the zpool using
> zpool replace. Would this be an issue with ZFS rather than with the
> mvs driver?

I don't think it is driver-related. (A sketch of the re-label and
replace sequence is appended after this message.)

> Additionally, this might be a cabling issue or an issue with my drive
> cage, but when I place a disk in slot 0 (port 0 on the AOC-SAT2-MV8) I
> get mvsch0 timeout errors and the whole box locks up, requiring a hard
> reboot. Placing the drive in slot 4 and using slots 2-5 instead of 1-4
> lets the server boot and run fine.

I need more information to say anything about this. Boot with verbose
messages and send me the complete log, from the first boot messages up
to the error (see the verbose-boot commands appended below).

> Also, the drive pool seems a bit slow. Using the command...
>
> dd if=/dev/urandom (and /dev/zero) of=/tank/test.file bs=1024 count=102400
>
> ...I seem to get somewhere between 15MB/s (urandom) and 30MB/s (zero).
> This card is on a PCI bus and will therefore be somewhat restricted,
> but is it reasonable to expect better performance than this out of a
> four-disk raidz1 pool of Hitachi 1TB 7200 RPM drives with 32MB cache?
> Are there any other recommended tests to benchmark performance?

If your card is in a regular PCI slot, the absolute maximum you can get
from the bus is 133MB/s, and the practical limit is usually lower. When
you write to a redundant array such as raidz, redundant data is written
alongside the user data, which reduces the effective bandwidth in
proportion to the redundancy: with four disks in raidz1, every three
blocks of data cross the bus together with one block of parity, so at
most about three quarters of the bus bandwidth (roughly 100MB/s) is
left for user data. So even though 30MB/s looks a bit low, it is
plausible in such a configuration.

It is also a bit strange to test sequential bandwidth using 1K blocks;
per-call system I/O overhead can distort the result. Retry with a block
size of at least 64K to be sure (a dd example is appended below).
/dev/urandom can also limit the result: on my machine I can read only
about 60MB/s from it, so it is not a good data source for this test.

Also, depending on your typical workload, you may be more interested in
random I/O performance (see the benchmark note at the end). Four disks
with NCQ, which mvs(4) supports, should handle a parallel random load
quite well.

-- 
Alexander Motin
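A minimal sketch of the re-label and replace sequence discussed above,
assuming the pool is named "tank" (as in the dd test) and the
re-inserted disk attaches as ada4 under the old label "disk4"; the
device and label names are placeholders, not taken from this report:

  # Recreate the GEOM label on the freshly re-inserted disk.
  glabel label disk4 /dev/ada4
  # Resilver onto it; with no new-device argument, zpool replace
  # reuses the same (now re-labeled) vdev path.
  zpool replace tank label/disk4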
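The verbose boot log requested above can be produced in either of two
ways, and the accumulated kernel messages collected afterwards with
dmesg(8):

  boot -v                # one-off: typed at the loader prompt
  boot_verbose="YES"     # permanent: added to /boot/loader.conf
  dmesg                  # after boot: prints the kernel message buffer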
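A retest along the lines suggested above, using 64K blocks and
/dev/zero so that neither syscall overhead nor /dev/urandom caps the
result (the path and sizes are only examples; this writes about 1GB):

  dd if=/dev/zero of=/tank/test.file bs=64k count=16384

Note that if compression is enabled on the dataset, data from /dev/zero
compresses away and overstates the numbers; in that case, pre-generate
random data once and write that instead:

  dd if=/dev/urandom of=/var/tmp/rand.bin bs=64k count=16384
  dd if=/var/tmp/rand.bin of=/tank/test.file bs=64k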
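For the random I/O side, dd is a poor fit; a synthetic benchmark gives
a better picture. One possible sketch, assuming bonnie++ from ports
(benchmarks/bonnie++) is installed; the directory, size, and user are
placeholders, and the working size should exceed RAM so the cache does
not hide the disks:

  bonnie++ -d /tank -s 4096 -u root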