Date: Fri, 25 Dec 2009 08:35:17 -0600 (CST)
From: Wes Morgan <morganw@chemikals.org>
To: Solon Lutz
Cc: freebsd-fs@freebsd.org
In-Reply-To: <982740779.20091225122331@pyro.de>
Subject: Re: ZFS RaidZ2 with 24 drives?

On Fri, 25 Dec 2009, Solon Lutz wrote:

>> Depending on tuning, you can make it flush to disk more often. It is
>> also highly dependent on how much memory you have.
>
> At the moment: 4GB. I'm about to try upgrading it to 6GB.
>
> Why can't it work like this all the time:
>
> device     r/s      w/s     kr/s     kw/s  wait  svc_t  %b
> da0        0.0   1907.4      0.0  65494.8     0    0.6   6
> ad10     680.7      0.0  87132.0      0.0    35   43.7  92
>
> Effectively, it transfers only 8-10MB/s! It took 24h for 1.2TB...
>
>> I know on my personal system I see this happen a lot, but it doesn't
>> seem to have a hugely negative impact on performance for what I use my
>> machine for. Depending on your setup, you may want to try various
>> sysctl settings. I found that disabling prefetch can have a huge
>> impact on some systems.
>
> Prefetch is not enabled because RAM < 4GB...

I suspect this means your filesystem is heavily fragmented. I've had it
happen to me on at least 3 pools, some of which were not even close to
full, yet rebuilding the pool restored much of the performance.
Hopefully, with block pointer rewrite support coming, we will get some
tools to address this. Right now I am not even aware of a tool that can
check for fragmentation.
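
For reference, the sort of tunables in question live in /boot/loader.conf
or sysctl. Exact names and defaults vary between FreeBSD/ZFS versions,
the values below are only illustrative, and the pool names are
placeholders, so treat this as a sketch rather than a recipe:

  # /boot/loader.conf -- read at boot
  vfs.zfs.prefetch_disable="1"       # turn off file-level prefetch explicitly
  vfs.zfs.arc_max="2147483648"       # cap the ARC at ~2GB on a 4GB box (example value)

  # runtime knob, where available, to flush transaction groups more often
  sysctl vfs.zfs.txg.timeout=5

One way to "rebuild" a pool, assuming you have somewhere to put the data,
is a recursive snapshot plus send/receive into a freshly created pool:

  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -F newtank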