Date: Mon, 7 Jun 2010 18:19:15 -0500 (CDT)
From: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To: "Bradley W. Dutton" <brad@duttonbros.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS performance of various vdevs (long post)
Message-ID: <alpine.GSO.2.01.1006071811040.12887@freddy.simplesystems.org>
In-Reply-To: <20100607154256.941428ovaq2hha0g@duttonbros.com>
References: <20100607154256.941428ovaq2hha0g@duttonbros.com>
On Mon, 7 Jun 2010, Bradley W. Dutton wrote:

> So the normal vdev performs closest to raw drive speeds. Raidz1 is
> slower and raidz2 even more so. This is observable in the dd tests and
> when viewing gstat. Any ideas why the raid numbers are slower? I've
> tried to account for the fact that the raid vdevs have fewer data
> disks. Would a faster CPU help here?

The sequential throughput of your new drives is higher than that of the old drives, but it is likely that their seek and rotational latencies are longer. ZFS is transaction-oriented and must tell all the drives to sync their write caches before proceeding to the next transaction group. Drives with higher latency will slow down this step. Likewise, ZFS always reads and writes full filesystem blocks (128K by default), and this may cause more overhead when using raidz.

Using 'dd' from /dev/zero is not a very good benchmark since zfs could potentially compress zero-filled blocks down to just a few bytes (I think recent versions of zfs do this), and of course Unix supports files with holes.

The higher CPU usage might be due to the device driver or the interface card being used.

If you can afford to do so, you will likely see considerably better performance by using mirrors instead of raidz, since then full 128K blocks will be sent to each disk, with fewer seeks.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
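P.S. If you want a sequential-write number that cannot be skewed by zero-compression or holes, one rough sketch is to stage incompressible data once and then stream it to the pool. The staging path, pool mountpoint, and sizes below are only placeholders, not taken from your setup:

   # stage ~1GB of incompressible data outside the pool under test
   # (/dev/urandom is too slow to read from during the timed run itself)
   dd if=/dev/urandom of=/tmp/random.bin bs=1m count=1024

   # timed sequential write of that data to the pool, in 128k chunks
   dd if=/tmp/random.bin of=/tank/testfile bs=128k

Reading the test file back to /dev/null with the same block size gives a comparable sequential-read figure.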