From owner-freebsd-fs@FreeBSD.ORG Mon Mar 26 16:32:43 2012
Message-ID: <4F709A2C.2080806@crashme.org>
Date: Mon, 26 Mar 2012 18:32:44 +0200
From: Sven Brandenburg <sven@crashme.org>
To: Bob Friesenhahn
Cc: freebsd-fs@freebsd.org
Subject: Re: NFSv3, ZFS, 10GE performance

On 03/26/2012 04:30 PM, Bob Friesenhahn wrote:
> How are you performing your testing?

The ZFS directory is mounted via NFS on the client. I created several
files (4, 8, and 16GB) of random data in the exported fs. Since I'm
currently only interested in the network saturation/NFS optimization
part of the equation, I simply dd'ed those files to /dev/null on the
client in one sequential pass (testing several block sizes), no random
reads.

On a tangent: files made entirely of zeroes are not a good benchmark
for measuring NFS performance. It knows(TM) :)

The zpool consists of only one slow disk, which makes it easy to tell
whether data comes out of the L1ARC or from disk. After the first dd
run, all subsequent runs are served from the ARC, just as expected
(the server has 96GB of RAM, so the files should fit).

On another tangent: at first I tried to use md(4) as the source for
the NFS exports before complicating things with ZFS. As it turns out,
md is rather slow: only about 0.7-1.0GB/s with UFS on it. Local reads
of my files once the ARC is 'seeded' are several times faster.

> Are you only interested in single threaded read performance to a
> single client?

The first item on my list is serving one client as fast as possible;
the next step is multiple client machines (maybe these are conflicting
goals?). However, I figure that I should at least be able to press the
right buttons to dial in "fast" for one client :-)

regards,
Sven
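
P.S. In case it helps, the test boils down to roughly the following (a
sketch, not my exact commands: the server name, export path, mount
point, file name, and sizes are all placeholders):

  # on the server: fill a test file with random data in the exported
  # fs, so compression/zero-detection can't flatter the numbers
  dd if=/dev/urandom of=/tank/export/testfile bs=1m count=4096

  # on the client: mount the export over NFSv3 and read the file back,
  # repeating with different block sizes
  mount -t nfs -o nfsv3 server:/tank/export /mnt
  dd if=/mnt/testfile of=/dev/null bs=32k
  dd if=/mnt/testfile of=/dev/null bs=128k
  dd if=/mnt/testfile of=/dev/null bs=1m

  # the first run is limited by the single slow disk; runs after that
  # should be served from the ARC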
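
P.P.S. The md(4) experiment mentioned above looked roughly like this
(again a sketch; the swap backing and the 8g size are just examples):

  # create a memory-backed disk, put UFS on it and mount it
  mdconfig -a -t swap -s 8g -u 0
  newfs /dev/md0
  mount /dev/md0 /mnt/md
  # /mnt/md was then exported via NFS like the ZFS dataset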