From: Ståle Kristoffersen <staalebk@ifi.uio.no>
To: freebsd-fs@freebsd.org
Date: Fri, 27 Apr 2007 22:22:22 +0200
Subject: ZFS performance
Message-ID: <20070427202222.GA26824@eschew.pusen.org>

I'm having trouble with performance using ZFS as the filesystem. I
earlier ran UFS and had no problems pushing 50 MB/s through the network.
Now with ZFS I even have problems reading 15 MB/s locally. From the
output of iostat and zpool iostat it looks like ZFS reads about twice as
much data from the disk as the program receives (using dd, ftp or
samba). Is there anything I could check?
I've tried some sysctl settings:

kern.ipc.shm_use_phys=1
vfs.vmiodirenable=1
vfs.hirunningspace=10485760
vfs.lorunningspace=10485760
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536
net.inet.tcp.delayed_ack=0

but they did not make much of a difference.

-- 
Ståle Kristoffersen
staalebk@ifi.uio.no