Date: Sat, 01 Jan 2011 19:18:48 +0100
From: Attila Nagy <bra@fsn.hu>
To: Martin Matuska <mm@FreeBSD.org>
Cc: freebsd-fs@FreeBSD.org, freebsd-stable@FreeBSD.org
Subject: Re: New ZFSv28 patchset for 8-STABLE
Message-ID: <4D1F7008.3050506@fsn.hu>
In-Reply-To: <4D0A09AF.3040005@FreeBSD.org>
References: <4D0A09AF.3040005@FreeBSD.org>
On 12/16/2010 01:44 PM, Martin Matuska wrote:
> Link to the patch:
>
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz

I've used this one instead:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz
on an amd64 server with 8 GB RAM, acting as a file server over ftp/http/rsync. The content is mounted read-only with nullfs in the jails, and the daemons use sendfile (ftp and http).

The effects can be seen here:
http://people.fsn.hu/~bra/freebsd/20110101-zfsv28-fbsd/
The exact moment of the switch is visible on zfs_mem-week.png, where the L2ARC is discarded.

What I see:
- increased CPU load
- decreased L2ARC hit rate and decreased SSD (ad[46]) traffic, and therefore increased hard disk load (IOPS graph)

Maybe I could accept the higher system load as normal, because a lot of things changed between v15 and v28 (although I was hoping that with the same feature set it would need less CPU), but such a radical drop in the L2ARC hit rate looks like a major issue somewhere. As the memory stats show, I have enough kernel memory to hold the L2 headers, so the L2 devices did fill up to their maximum capacity.

Any ideas on what could cause this? I haven't upgraded the pool version, and nothing was changed in the pool or in the file systems.

Thanks,
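For reference, the hit rate I'm graphing is derived from the cumulative `kstat.zfs.misc.arcstats.l2_hits` and `kstat.zfs.misc.arcstats.l2_misses` sysctl counters. A minimal sketch of the calculation (the function name and the sample numbers are mine, purely illustrative):

```python
def l2arc_hit_rate(l2_hits: int, l2_misses: int) -> float:
    """L2ARC hit rate (%) from the cumulative arcstats counters.

    On FreeBSD the raw values come from e.g.
    `sysctl -n kstat.zfs.misc.arcstats.l2_hits`; here they are passed
    in as plain integers so the arithmetic itself is easy to check.
    """
    total = l2_hits + l2_misses
    if total == 0:
        return 0.0  # no L2ARC accesses recorded yet
    return 100.0 * l2_hits / total

# Illustrative counter values, not the real numbers from my box:
print(l2arc_hit_rate(900, 100))  # 90.0
```

Note that these counters are cumulative since boot, so for a per-interval graph you have to take the difference between two samples before dividing.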