Date: Mon, 5 Apr 2010 05:59:47 -0500 (CDT)
From: Wes Morgan <morganw@chemikals.org>
To: Mikle Krutov
Cc: freebsd-fs@freebsd.org
Subject: Re: Strange ZFS performance

On Mon, 5 Apr 2010, Mikle Krutov wrote:

> On Sun, Apr 04, 2010 at 10:08:21PM -0500, Wes Morgan wrote:
> > On Sun, 4 Apr 2010, Mikle wrote:
> >
> > > Hello, list! I've got a strange problem with a one-disk zfs pool:
> > > read/write performance for files on the fs (dd if=/dev/zero
> > > of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while
> > > reading from the raw disk (dd if=/dev/disk of=/dev/null bs=4M
> > > count=100) gives me ~70 MB/s. The pool is about 80% full; the PC
> > > with the pool has 2 GB of RAM (1.5 of which is free); I've done no
> > > tuning in loader.conf or sysctl.conf for zfs. In dmesg there are no
> > > error messages related to the disk (dmesg | grep ^ad12); s.m.a.r.t.
> > > seems OK. Some time ago the disk was fine, and nothing in
> > > software/hardware has changed since then. Any ideas what could have
> > > happened to the disk?
> >
> > Has it ever been close to 100% full? How long has it been 80% full,
> > and what kind of files are on it, size-wise?
>
> No, it was never full. It has been at 80% for about a week. Most of
> the files are video, 200MB - 1.5GB per file.

I'm wondering if your pool is fragmented. What does gstat or iostat -x
output for the device look like when you're accessing the raw device
versus going through the filesystem?

A very interesting experiment (to me) would be to try these things:

1) use dd to replicate the disk to another disk, block for block
2) zfs send to a newly created, empty pool (could take a while!)

Then, without rebooting, compare the performance of the "new" pools. For
#1 you would need to export the pool first and detach the original
device before importing the duplicate. (I've sketched the commands I
mean at the end of this message.)

There might be a script out there somewhere to parse the output from
zdb and turn it into a block map to identify fragmentation, but I'm not
aware of one.
If you do find that to be the case, currently the only fix is to rebuild
the pool.
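
Concretely, the gstat/iostat comparison could look something like this
(ad12 taken from your dmesg grep; the file name is only an example):

  # in one terminal, watch the disk:
  gstat -f '^ad12$'          # or: iostat -x ad12 1

  # in another terminal, read the raw device...
  dd if=/dev/ad12 of=/dev/null bs=4m count=100

  # ...and then read a large file back through ZFS:
  dd if=/mountpoint/somefile of=/dev/null bs=4m count=100

If the filesystem read shows the disk near 100% busy at only a few MB/s,
with many small transfers per second, that is the signature of scattered
(fragmented) blocks rather than a dying disk.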
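
And a rough sketch of the two experiments -- the pool and device names
here (tank, ad12, ad14) are only examples, substitute your own:

  # 1) block-for-block copy onto a second disk, pool exported first
  zpool export tank
  dd if=/dev/ad12 of=/dev/ad14 bs=1m
  # physically detach the original ad12, then:
  zpool import tank

  # 2) replicate into a brand-new pool on the second disk
  zpool create newtank ad14
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -dF newtank

The dd copy preserves the on-disk layout, fragmentation and all, while
send/receive rewrites every block into the new pool sequentially, so
comparing the two afterwards should tell you whether layout is really
the problem.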