From: "James R. Van Artsdalen" <james-freebsd-fs2@jrv.org>
Date: Fri, 19 Dec 2008 02:57:27 -0600
To: Matt Simerson
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS performance gains real or imaginary?
Message-ID: <494B61F7.3030904@jrv.org>
References: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com>
Matt Simerson wrote:
> I haven't benchmarked it with -HEAD, but with FreeBSD 7, using a ZFS
> mirror across two 12-disk hardware RAID arrays (Areca 1231ML) was
> significantly (not quite double) faster than using JBOD and raidz. I
> tested a few variations (four-disk pools, six-disk zpools, 8-disk
> zpools, etc).

A backup server is a *highly* specialized type of server. Its data is
likely only rarely updated, meaning there are very few partial
parity-stripe writes for the Areca to deal with. A database server
receiving many updates would have an entirely different pattern of
write I/O, possibly forcing many partial stripe updates. Since ZFS
(almost?) never does partial stripe writes in a RAIDZ, the performance
comparison between ZFS with JBOD and your hardware setup might change
considerably under a database workload. Not to mention the dominance
of sequential I/O on a backup server, etc.

For a backup server ZFS has other advantages. A client's backup
server recently ran low on space, so I brought over another 4x1TB
enclosure and added it to the pool with no downtime: a couple of
large-file writes to that pool were running when I arrived and were
still going when I left.

There's also the issue of cost: once SATA port multiplier support
works in FreeBSD, it will be very practical to build cheap ~15TB
servers for a small business using ZFS.
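
For reference, the no-downtime expansion described above is a single
zpool command. A minimal sketch, assuming a pool named "backup" and
the new enclosure's disks showing up as ada4 through ada7 (the pool
and device names here are hypothetical, not from the original mail):

```shell
# Add the four new disks as an additional raidz vdev; the pool grows
# immediately and in-flight writes to it continue uninterrupted.
zpool add backup raidz ada4 ada5 ada6 ada7

# Read-only checks: confirm the new vdev is part of the pool and that
# the extra capacity is visible.
zpool status backup
zpool list backup
```

Note that `zpool add` is one-way: vdevs cannot be removed from a pool
afterward, so it is worth double-checking the device list first (e.g.
with `zpool add -n` for a dry run).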