From: Adam Nowacki <nowakpl@platinum.linux.pl>
Date: Wed, 12 Jun 2013 19:54:36 +0200
To: freebsd-fs@freebsd.org
Subject: Re: An order of magnitude higher IOPS needed with ZFS than UFS
Message-ID: <51B8B5DC.2010703@platinum.linux.pl>
In-Reply-To: <20130612114937.GA13688@icarus.home.lan>

On 2013-06-12 13:49, Jeremy Chadwick wrote:
> On Wed, Jun 12, 2013 at 06:40:32AM -0500, Mark Felder wrote:
>> On Tue, 11 Jun 2013 16:01:23 -0500, Attila Nagy wrote:
>>
>>> BTW, the file systems are 77-78% full according to df (so ZFS
>>> holds more, because UFS is -m 8).
>>
>> ZFS write performance can begin to drop pretty badly when you get
>> around 80% full. I've not seen any benchmarks showing an improvement
>> with a very fast and large ZIL or tons of memory, but I'd expect
>> that would help significantly. Just note that you're right at the
>> edge where performance gets impacted.
>
> Mark, do you have any references for this? I'd love to learn/read more
> about this engineering/design aspect (I won't say flaw, I'll just say
> aspect) of ZFS, as it's the first I've heard of it.
>
> The reason I ask: (respectfully, not judgementally) I'm worried you
> might be referring to something that has to do with SSDs and not ZFS,
> specifically SSD wear-levelling performing better with lots of free
> space (i.e. a small FTL map; TRIM helps with this immensely), where
> the performance hit tends to begin around the 70-80% mark. (I can talk
> more about that if asked, but want to make sure the two things aren't
> being mistaken for one another.)

So I went hunting for some evidence and created this:
http://tepeserwery.pl/nowak/fillingzfs.png

Columns are groups of sectors; a new row is started every time a FLUSH
command is sent to the disk. The percentage is the amount of filled
space in the pool. Red means a write happened there. The pool is 1GB,
with 50MB written between each pair of black lines.

It looks like past 80% there simply isn't enough contiguous disk space,
and writes become more and more random. For some reason unknown to me
there is also a lot more flushing, which certainly doesn't help
performance. There is also an odd hole left untouched by any write;
reserved space of some sort?
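Since the picture points at free-space fragmentation as the likely culprit, here is a toy first-fit model that reproduces the trend. To be clear, this is not ZFS's actual metaslab allocator, and the random-scatter layout it uses exaggerates fragmentation at low fill levels; it only illustrates why a large write has to be split into more and more pieces as the pool fills:

```python
# Toy allocator model: as more of a fixed-size "pool" is occupied at
# random positions, a large write must be split across more and more
# separate free extents.  NOT ZFS's real metaslab allocator -- just a
# sketch of the effect visible in the image above.
import random

POOL = 1024   # allocation units in the "pool" (think 1MB each)
WRITE = 50    # one write needs 50 units, like the 50MB writes above

def free_extents(used):
    """Lengths of the contiguous runs of free units."""
    extents, run = [], 0
    for u in used:
        if u:
            if run:
                extents.append(run)
            run = 0
        else:
            run += 1
    if run:
        extents.append(run)
    return extents

def fragments_for_write(fill, seed=1):
    """With a `fill` fraction (0..1) of the pool used at random
    positions, count how many separate free extents a WRITE-sized
    allocation has to be split across (greedy, largest extents first)."""
    rng = random.Random(seed)
    used = [False] * POOL
    for i in rng.sample(range(POOL), int(POOL * fill)):
        used[i] = True
    pieces, remaining = 0, WRITE
    for length in sorted(free_extents(used), reverse=True):
        if remaining <= 0:
            break
        pieces += 1
        remaining -= length
    return pieces

for fill in (0.5, 0.7, 0.8, 0.9, 0.95):
    print(f"{fill:.0%} full: write split into "
          f"{fragments_for_write(fill)} extents")
```

Running it shows the fragment count climbing steeply at high fill levels, which matches the increasingly scattered red marks past the 80% rows in the image.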