Date: Mon, 11 Feb 2008 20:05:18 +0300
From: Alexey Tarasov <me@lexasoft.ru>
To: current@freebsd.org
Subject: Re: Disappointing speed with ZFS
Message-ID: <50186FCD-F67F-4144-BDF1-FB9A7F9AAB64@lexasoft.ru>
In-Reply-To: <fopmlp$qeh$1@ger.gmane.org>
References: <9DA6FFCD-11DB-4580-9314-52B0885351D8@lexasoft.ru> <fopmlp$qeh$1@ger.gmane.org>
I've done similar tests on another machine, and everything looks fine
there. But why does ZFS work slower than UFS on this machine? When I
make a UFS file system on the same disk, rtorrent hashing runs about
10 times faster, and while hashing, the HDD is hit roughly three times
as hard under ZFS (judging by the flashing activity LED).

I have an amd64 Core2Duo processor and 4 GB of RAM -- is that not
enough for ZFS? What kernel tuning can help me?

On 11.02.2008, at 17:38, Ivan Voras wrote:

> Alexey Tarasov wrote:
>> Hello.
>>
>> I am trying to use ZFS to store my torrent downloads. I noticed that
>> hashing in rtorrent works 10 times slower than on the same disk with
>> UFS.
>
> I've done some extensive file system testing, and here are my results
> with bonnie++ for UFS+SU vs ZFS on amd64, 6 GB RAM (1 GB for kmem),
> on a RAID10 volume of 15 kRPM SAS drives:
>
> UFS+SU: write: 109 MB/s, read: 111 MB/s, random file creation: 36500 f/s
> ZFS:    write:  95 MB/s, read: 180 MB/s (!!), random file creation: 40522 f/s
>
> The read speed for ZFS seems too high to be valid; it's probably a
> cache effect (though the tests were done on a file more than twice
> the RAM size). In any case, ordinary hashing should cause sequential
> reading, and these numbers seem really fast.
>
> There could be one more thing: ZFS tries to write data sequentially,
> like a log-structured file system, and if the download was done "in
> parallel" -- many pieces from different areas of the file at the same
> time, which is normally the case for torrents -- the file may have
> become very fragmented on the disk.
>
> You can verify this by creating a similarly-sized ordinary file with
> dd (the file should be large enough not to fit in the memory cache,
> or the test should be done after a reboot) and then running iostat in
> one console while reading the files (separately, one at a time, with
> dd or cat) in another. A very fragmented file should show a
> significantly higher tps count.
>
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"

--
Alexey Tarasov

(\__/)
(='.'=)
E[: | | | | :]З
(")_(")
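
P.S. For the record, these are the loader.conf knobs I plan to
experiment with first. This is only a sketch: the tunable names are
the ones from the FreeBSD 7-era ZFS port, and the values are guesses
for a 4 GB amd64 box, not recommendations.

    # /boot/loader.conf
    # Enlarge the kernel memory arena; the 7.x ZFS port allocates
    # ARC data from kmem, and the default is too small for ZFS:
    vm.kmem_size="1024M"
    vm.kmem_size_max="1024M"
    # Cap the ARC so ZFS does not fight userland for the 4 GB of RAM:
    vfs.zfs.arc_max="512M"
    # Sometimes suggested on memory-constrained machines:
    #vfs.zfs.prefetch_disable="1"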
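
P.P.S. To make sure I understand the fragmentation test correctly, I
would run something like the following. The pool path, file names,
device name (ad4) and sizes are made up for illustration; the
reference file is 8 GB so it cannot fit in my 4 GB of RAM.

    # Create a sequentially-written reference file about the
    # size of a torrent download:
    dd if=/dev/zero of=/tank/reference.bin bs=1m count=8192

    # Console 1: watch per-device transactions per second:
    iostat -x -w 1 ad4

    # Console 2: read each file sequentially, one at a time:
    dd if=/tank/reference.bin of=/dev/null bs=1m
    dd if=/tank/torrent.bin of=/dev/null bs=1m

If the torrent file is badly fragmented, it should show noticeably
more tps (and lower MB/s) than the dd-created file for the same amount
of data read.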