Date: Mon, 07 Jun 2010 17:32:18 -0700
From: "Bradley W. Dutton" <brad@duttonbros.com>
To: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS performance of various vdevs (long post)
Message-ID: <20100607173218.11716iopp083dbpu@duttonbros.com>
In-Reply-To: <alpine.GSO.2.01.1006071811040.12887@freddy.simplesystems.org>
References: <20100607154256.941428ovaq2hha0g@duttonbros.com> <alpine.GSO.2.01.1006071811040.12887@freddy.simplesystems.org>
Quoting Bob Friesenhahn <bfriesen@simple.dallas.tx.us>:

> On Mon, 7 Jun 2010, Bradley W. Dutton wrote:
>> So the normal vdev performs closest to raw drive speeds. Raidz1 is
>> slower and raidz2 even more so. This is observable in the dd tests
>> and viewing in gstat. Any ideas why the raid numbers are slower?
>> I've tried to account for the fact that the raid vdevs have fewer
>> data disks. Would a faster CPU help here?
>
> The sequential throughput on your new drives is faster than the old
> drives, but it is likely that the seek and rotational latencies are
> longer. ZFS is transaction-oriented and must tell all the drives to
> sync their write cache before proceeding to the next transaction
> group. Drives with more latency will slow down this step.
> Likewise, ZFS always reads and writes full filesystem blocks
> (default 128K) and this may cause more overhead when using raidz.

The details are a little lacking on the Hitachi site, but the HDS722020ALA330 spec lists an 8.2 ms seek time:
http://www.hitachigst.com/tech/techlib.nsf/techdocs/5F2DC3B35EA0311386257634000284AD/$file/USA7K2000_DS7K2000_OEMSpec_r1.2.pdf

The WDC drives say 8.9 ms, so we should be in the same ballpark on seek times:
http://www.wdc.com/en/products/products.asp?driveid=399

I thought NCQ vs. no NCQ might tip the scales in favor of the Hitachi array as well. Are there any tools to check the latencies of the disks?

> Using 'dd' from /dev/zero is not a very good benchmark test since
> zfs could potentially compress zero-filled blocks down to just a few
> bytes (I think recent versions of zfs do this) and of course Unix
> supports files with holes.

I know it's pretty simple, but for checking throughput I thought it would be OK. I don't have compression on, and based on the drive lights and gstat, the drives definitely aren't idle.

> The higher CPU usage might be due to the device driver or the
> interface card being used.

Definitely a plausible explanation. If this were the case, would the 8 parallel dd processes exhibit the same behavior? Or is the type of I/O affecting how much CPU the driver is using?

> If you could afford to do so, you will likely see considerably
> better performance by using mirrors instead of raidz since then 128K
> blocks will be sent to each disk and with fewer seeks.

I agree with you, but at this point I value the extra space more, as I don't have a lot of random I/O. I read the following and decided to stick with raidz2 when ditching my old raidz1 setup:
http://blogs.sun.com/roch/entry/when_to_and_not_to

Thanks for the feedback,
Brad
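
P.S. Partly answering my own question about latency tools: gstat already breaks out per-operation service times in its ms/r and ms/w columns, so watching the disks under load should show whether the Hitachis are spending longer per I/O than the WDCs. A minimal sketch, assuming the drives show up as ada devices on this box (the pattern is a placeholder, adjust to the real device names):

  # Refresh GEOM statistics once per second, limited to the data disks.
  # ms/r and ms/w are average milliseconds per read/write operation;
  # high %busy with modest KB/s usually means latency, not bandwidth,
  # is the limit.
  gstat -I 1s -f '^ada[0-9]+$'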
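
Along the same lines, a quick way to rule out the zero-block compression issue entirely would be to double-check the dataset settings and rerun the sequential test against data ZFS can't shrink. A rough sketch, with the pool name 'tank' and the sizes made up for illustration:

  # Confirm compression really is off for the dataset under test.
  zfs get compression tank

  # Stage some incompressible data; this step isn't part of the timed
  # run, and /dev/random will be the bottleneck here, not the disks.
  dd if=/dev/random of=/tank/randfile bs=1m count=4096

  # Sequential read-back of data that can't be compressed away.
  dd if=/tank/randfile of=/dev/null bs=1m

The test file would need to be comfortably larger than RAM (or the pool exported and re-imported first) so the read-back isn't just served from the ARC.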
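
And for anyone weighing the same layout decision, the trade-off Bob describes, sketched here with eight disks and made-up device names, is roughly:

  # raidz2: six disks' worth of usable space, survives any two failures,
  # but every 128K block is split across all of the data disks.
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

  # Striped mirrors: only four disks' worth of space, but whole 128K
  # blocks land on each vdev and random reads spread across four vdevs,
  # so far fewer seeks per operation.
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

With mostly sequential I/O here, the extra space from raidz2 still seems like the better trade.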