From owner-freebsd-stable@FreeBSD.ORG Sat Nov  7 07:31:54 2009
Date: Fri, 6 Nov 2009 23:31:51 -0800
From: Jeremy Chadwick
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: freebsd-stable@freebsd.org, Ivan Voras, Thomas Backman
Message-ID: <20091107073151.GA60756@icarus.home.lan>
In-Reply-To: <4AF4A608.4020706@quip.cz>
Subject: Re: Performance issues with 8.0 ZFS and sendfile/lighttpd
List-Id: Production branch of FreeBSD source code

On Fri, Nov 06, 2009 at 11:41:12PM +0100, Miroslav Lachman wrote:
> Thomas
> Backman wrote:
> > On Nov 6, 2009, at 7:36 PM, Miroslav Lachman wrote:
> >
> >> Ivan Voras wrote:
> >>> Miroslav Lachman wrote:
> >>>> Ivan Voras wrote:
> >>>>> Miroslav Lachman wrote:
> >>>>
> >>>> [..]
> >>>>
> >>>>>> I have a more strange issue with Lighttpd in a jail on top of ZFS.
> >>>>>> Lighttpd is serving static content (mp3 downloads through a Flash
> >>>>>> player). It runs fine for a relatively small number of parallel
> >>>>>> clients with bandwidth about 30 Mbps, but after some number of
> >>>>>> clients is reached (about 50-60 parallel clients) the throughput
> >>>>>> drops to 6 Mbps.
> >>>>>>
> >>>>>> I can serve hundreds of clients on the same HW using Lighttpd not
> >>>>>> in a jail, and UFS2 with gjournal instead of ZFS, reaching
> >>>>>> 100 Mbps (maybe more).
> >>>>>>
> >>>>>> I don't know if it is a ZFS or a jail issue.
> >>>>>
> >>>>> Do you have actual disk IO or is the vast majority of your data
> >>>>> served from the caches? (actually - the same question to the OP)
> >>>>
> >>>> I had a ZFS zpool as a mirror of two SATA II drives (500 GB), and at
> >>>> peak iostat (or systat -vm or gstat) showed about 80 tps / 60% busy.
> >>>>
> >>>> In the case of UFS, I am using gmirrored 1 TB SATA II drives working
> >>>> nicely with 160 or more tps.
> >>>>
> >>>> Both setups are using FreeBSD 7.x amd64 with a GENERIC kernel and
> >>>> 4 GB of RAM.
> >>>>
> >>>> As ZFS + Lighttpd in a jail was unreliable, I am no longer using it,
> >>>> but if you want some more info for debugging, I can set it up again.
> >>>
> >>> For what it's worth, I have just set up a little test on a production
> >>> machine with three 500 GB SATA drives in RAIDZ, FreeBSD 7.2-RELEASE.
> >>> The total data set is some 2 GB in 5000 files, but the machine has
> >>> only 2 GB RAM total, so there is some disk IO - about 40 IOPS per
> >>> drive. I'm also using Apache-worker, not lighty, and siege to
> >>> benchmark with 10 concurrent users.
> >>>
> >>> In this setup, the machine has no problems saturating a 100 Mbit/s
> >>> link - it's not on a LAN, but the latency is close enough and I get
> >>> ~11 MB/s.
> >>
> >> [...]
> >> /boot/loader.conf:
> >>
> >> ## eLOM support
> >> hw.bge.allow_asf="1"
> >> ## gmirror RAID1
> >> geom_mirror_load="YES"
> >> ## ZFS tuning
> >> vm.kmem_size="1280M"
> >> vm.kmem_size_max="1280M"
> >> kern.maxvnodes="400000"
> >> vfs.zfs.prefetch_disable="1"
> >> vfs.zfs.arc_min="16M"
> >> vfs.zfs.arc_max="128M"
> >
> > I won't pretend to know much about this area, but your ZFS values here
> > are very low. May I assume that they are remnants of the times when
> > the ARC grew insanely large and caused a kernel panic?
> > You're effectively forcing ZFS not to use more than 128 MB of cache,
> > which doesn't sound like a great idea if you've got 2+ GB of RAM. I've
> > had no trouble without any tuning whatsoever on 2 GB for a long time
> > now. The kmem lines can probably be omitted if you're on amd64, too
> > (the default value for kmem_size_max is about 307 GB on my machine).
>
> Yes, the loader values are one year old, from when I installed this
> machine. But I think the auto-tuning was committed after 7.2-RELEASE by
> Kip Macy, so some of them are still needed - or am I wrong? (this is
> 7.2-RELEASE)

... We don't know, because none of the individuals who are maintaining
ZFS at this point in time have actually responded to this question.

http://lists.freebsd.org/pipermail/freebsd-stable/2009-October/052256.html

The community really needs an official answer to this question, and one
from those familiar with the code.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
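For comparison with the config quoted in the thread, here is a minimal
/boot/loader.conf sketch for a machine like Miroslav's (4 GB amd64) on a
release that includes the ARC/kmem auto-tuning. The arc_max value is an
illustrative assumption, not a recommendation from this thread; the
non-ZFS lines are carried over unchanged from the posted config:

```
## eLOM support (unchanged from the config quoted above)
hw.bge.allow_asf="1"
## gmirror RAID1
geom_mirror_load="YES"
## ZFS: with the auto-tuning committed after 7.2-RELEASE, the
## vm.kmem_size* lines can usually be dropped on amd64. Cap the ARC
## only if other services on the box need the memory; 2048M here is an
## assumed value for a 4 GB machine, not advice from this thread.
vfs.zfs.arc_max="2048M"
## Prefetch can still be worth testing both ways for sendfile workloads.
vfs.zfs.prefetch_disable="1"
```

Whether any of this is still required on a given release is exactly the
open question raised above, so test on your own hardware.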