Date: Mon, 6 Sep 2010 20:34:59 +0200
From: Wiktor Niesiobedzki <bsd@vink.pl>
To: Steven Hartland <killing@multiplay.co.uk>
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs very poor performance compared to ufs due to lack of cache?
Message-ID: <AANLkTikNhsj5myhQCoPaNytUbpHtox1vg9AZm1N-OcMO@mail.gmail.com>
In-Reply-To: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk>
References: <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk>
2010/9/4 Steven Hartland <killing@multiplay.co.uk>:
> When upgrading from 8.0 on our stream server to 8.1 we decided to go
> for zfs to eliminate the costly fsck times should we experience
> any unexpected reboots on the machine, as it has a sizable RAID of
> 1.6TB.
>
> After doing this all seemed good, till after our latest event, which
> generated a significant amount of interest, and hence the stream
> server started to get quite a few requests.
>
> Basic install is 8.1 amd64 on a dual 2.8 Xeon with 4GB RAM and an
> Areca controller with 6 disks in RAID 6.
>
> On that, the machine runs nginx with the mp4 module to pseudo-stream
> files.

Hi,

As far as I have checked recently, nginx uses sendfile by default.
There is already a reported bug against ZFS+sendfile
(http://www.freebsd.org/cgi/query-pr.cgi?pr=141305&cat=) which results
in bad performance. The quickest workaround is to set:

sendfile off;

in the http {} section of nginx.conf.

What I personally have observed is that memory used by sendfile, once
freed, lands in the Inact group, and the ARC is not able to force this
memory to be freed. In my case, with 1G of ARC, after sending a 2G file
my ARC is barely at its minimum level and my ARC hit ratio drops to
~50%. If I remove the file that was sent through sendfile, the memory
is moved from Inact to free, from where the ARC happily grabs what it
wants, and the ARC hit ratio comes back to normal (~99%).

Cheers,

Wiktor Niesiobedzki
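
For reference, a minimal sketch of where the workaround directive would
sit in nginx.conf, assuming a standard layout; nothing here is taken
from the original poster's configuration, and the rest of the http
block is only indicated by comments:

    http {
        # Work around the ZFS+sendfile issue (FreeBSD PR kern/141305) by
        # disabling sendfile, so nginx falls back to ordinary
        # read()/write() for serving file contents.
        sendfile off;

        # ... the existing server {} blocks, mp4 streaming locations,
        # and other directives remain unchanged ...
    }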