Date:      Mon, 6 Sep 2010 21:07:44 +0200
From:      Wiktor Niesiobedzki <bsd@vink.pl>
To:        Steven Hartland <killing@multiplay.co.uk>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: zfs very poor performance compared to ufs due to lack of cache?
Message-ID:  <AANLkTi=8KoLLqyOKY-9=CnCu6VaCYo9LFyjapSfxe0-k@mail.gmail.com>
In-Reply-To: <AANLkTikNhsj5myhQCoPaNytUbpHtox1vg9AZm1N-OcMO@mail.gmail.com>
References:  <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk> <AANLkTikNhsj5myhQCoPaNytUbpHtox1vg9AZm1N-OcMO@mail.gmail.com>

2010/9/6 Wiktor Niesiobedzki <bsd@vink.pl>:
> Hi,
>
> As far as I have checked recently, nginx uses sendfile by default.
> There is already a bug reported against ZFS+sendfile
> (http://www.freebsd.org/cgi/query-pr.cgi?pr=141305&cat=) which results
> in bad performance.
>
> What I personally have observed is that memory used by sendfile,
> once freed, lands in the Inact group, and the ARC is not able to
> force this memory to be freed.
>
> In my case, where I have 1G of ARC, after sending a 2G file my ARC
> is barely at its minimum level, and my ARC hit ratio drops to ~50%.
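(By hit ratio I mean hits/(hits+misses) over the arcstats counters;
one way to read the raw numbers, using the stock sysctl names:)

  # print hits and misses, one value per line
  sysctl -n kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses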

I did some further tests with the nginx sendfile setting. The setup
is as follows:
nginx has a 1GB file to be served to the client
ARC_MIN=256M, ARC_MAX=1G
KMEM_SIZE=1.5G
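(These correspond to the usual loader tunables; assuming they were
set in /boot/loader.conf, that would look roughly like:)

  vfs.zfs.arc_min="256M"
  vfs.zfs.arc_max="1G"
  vm.kmem_size="1536M"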

With sendfile off:
1. first download reads from disk (as expected)
2. second download is served mostly from ARC (some minor disk
activity, around 1-10% of the actual transfer, as measured by gstat)
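(The sendfile knob here is the nginx directive; assuming a standard
config, flipping it between tests looks like:)

  # in nginx.conf, inside the http or server block:
  sendfile off;    # or: sendfile on;
  # then reload nginx, e.g.: nginx -s reload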

With sendfile on (after the previous tests):
1. first download reads from disk (suspicious - this file should
already be in ARC)
2. second download reads from disk (suspicious - this file should be
in both ARC and Inactive memory)

After that, the memory looks like this:
Mem: 58M Active, 1032M Inact, 723M Wired, 121M Free
arc size: 670M
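(For reference, the Mem line comes from top(1) and the ARC size from
sysctl; a minimal way to take such a snapshot:)

  top -b | head -8                      # the Mem: ... summary line
  sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes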


With a 512MB file and sendfile on, starting with memory like this:

Mem: 59M Active, 8948K Inact, 816M Wired, 1050M Free
arc size: 780M

ARC warmed with the test file via:
cat tfile512M > /dev/null

1. first download - no disk activity
Mem: 51M Active, 517M Inact, 822M Wired, 545M Free
arc size: 790M
2. second download - no disk activity

Mem: 51M Active, 517M Inact, 822M Wired, 544M Free
arc size: 790M

The test takes about 90 seconds, and
kstat.zfs.misc.arcstats.hits goes up by ~2M.

In a normal situation (no download activity) it goes up by ~200.

During a subsequent cat tfile512M > /dev/null it goes up by 131k.

During an nginx download (with sendfile off) it goes up by 23k.
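(The deltas above can be taken by sampling the counter around each
run; a minimal sh sketch of one way to do it - the wrapper below is
hypothetical and just runs whatever command it is given:)

  #!/bin/sh
  # measure the kstat.zfs.misc.arcstats.hits delta around a command
  before=$(sysctl -n kstat.zfs.misc.arcstats.hits)
  "$@"
  after=$(sysctl -n kstat.zfs.misc.arcstats.hits)
  echo "arcstats.hits delta: $((after - before))"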

So my gut feelings about this situation are:
1. sendfile on ZFS is not a "zero-copy" solution (we copy from ARC to
some other memory before sending the file)
2. whatever sendfile puts in its "cache" is never used, as we go
through ARC anyway (see the large number of ARC hits)

Some other side observations are:
- nginx is faster (by ~50%) with sendfile turned off (not properly
benchmarked, just a feeling; a quick way to check is sketched below)
- arcstats hits grow extremely fast with sendfile on (small requests
from sendfile to ARC?)
- nginx makes quite a small number of ARC accesses, even compared to
a simple cat, for the same amount of data (if I properly understand
what kstat.zfs.misc.arcstats.hits means)
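(A quick, unscientific way to put a number on the first point,
assuming fetch(1) and nginx serving the test file locally - the URL
below is hypothetical:)

  # repeat with sendfile on and off, reloading nginx in between
  time fetch -o /dev/null http://localhost/tfile512M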

Hope that helps,

Cheers,

Wiktor Niesiobedzki


