From: Thomas Backman <serenity@exscape.org>
Date: Fri, 6 Nov 2009 19:45:42 +0100
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: freebsd-stable@freebsd.org, Ivan Voras
Subject: Re: Performance issues with 8.0 ZFS and sendfile/lighttpd

On Nov 6, 2009, at 7:36 PM, Miroslav Lachman wrote:

> Ivan Voras wrote:
>> Miroslav Lachman wrote:
>>> Ivan Voras wrote:
>>>> Miroslav Lachman wrote:
>>>
>>> [..]
>>>
>>>>> I have a stranger issue with Lighttpd in a jail on top of ZFS.
>>>>> Lighttpd is serving static content (mp3 downloads through a flash
>>>>> player). It runs fine for a relatively small number of parallel
>>>>> clients with bandwidth around 30 Mbps, but once a certain number
>>>>> of clients is reached (about 50-60 parallel clients) the
>>>>> throughput drops to 6 Mbps.
>>>>>
>>>>> I can serve hundreds of clients on the same HW using Lighttpd
>>>>> outside a jail and UFS2 with gjournal instead of ZFS, reaching
>>>>> 100 Mbps (maybe more).
>>>>>
>>>>> I don't know if it is a ZFS or a jail issue.
>>>>
>>>> Do you have actual disk IO or is the vast majority of your data
>>>> served from the caches? (actually - the same question to the OP)
>>>
>>> I had a ZFS zpool as a mirror of two SATA II drives (500 GB), and
>>> at peak iostat (or systat -vm or gstat) showed about 80 tps / 60%
>>> busy.
>>>
>>> In the UFS case, I am using gmirrored 1 TB SATA II drives, working
>>> nicely at 160 or more tps.
>>>
>>> Both setups use FreeBSD 7.x amd64 with a GENERIC kernel and 4 GB
>>> of RAM.
>>>
>>> As ZFS + Lighttpd in a jail was unreliable, I am no longer using
>>> it, but if you want more info for debugging, I can set it up again.
>>
>> For what it's worth, I have just set up a little test on a
>> production machine with three 500 GB SATA drives in RAIDZ, FreeBSD
>> 7.2-RELEASE. The total data set is some 2 GB in 5000 files, but the
>> machine has only 2 GB of RAM total, so there is some disk IO -
>> about 40 IOPS per drive. I'm also using Apache-worker, not lighty,
>> and siege to benchmark with 10 concurrent users.
>>
>> In this setup, the machine has no problem saturating a 100 Mbit/s
>> link - it's not on a LAN, but the latency is close enough and I get
>> ~11 MB/s.
>
> [...]
> /boot/loader.conf:
>
> ## eLOM support
> hw.bge.allow_asf="1"
> ## gmirror RAID1
> geom_mirror_load="YES"
> ## ZFS tuning
> vm.kmem_size="1280M"
> vm.kmem_size_max="1280M"
> kern.maxvnodes="400000"
> vfs.zfs.prefetch_disable="1"
> vfs.zfs.arc_min="16M"
> vfs.zfs.arc_max="128M"

I won't pretend to know much about this area, but your ZFS values here
are very low. May I assume they are remnants of the days when the ARC
grew insanely large and caused kernel panics? You're effectively
forcing ZFS to use no more than 128 MB of cache, which doesn't sound
like a great idea if you've got 2+ GB of RAM. I've had no trouble
without any tuning whatsoever on 2 GB for a long time now. The kmem
lines can probably be omitted if you're on amd64, too (the default
value for kmem_size_max is about 307 GB on my machine).

Regards,
Thomas
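
P.S. A quick way to sanity-check this (a minimal sketch; these are the
sysctl names as they appear on my 8.0 box, so verify them on 7.x) is
to compare the configured limits against what the ARC actually uses:

  # configured ARC bounds and kmem ceiling, in bytes
  sysctl vfs.zfs.arc_min vfs.zfs.arc_max vm.kmem_size vm.kmem_size_max
  # current ARC size as reported by the ZFS kstats
  sysctl kstat.zfs.misc.arcstats.size

If arcstats.size sits pinned at arc_max while the server is loaded,
the 128M cap is very likely what's hurting your throughput.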