From: Paul Mather <paul@gromit.dlib.vt.edu>
To: freebsd-geom@freebsd.org
Date: Wed, 5 Aug 2009 11:33:36 -0400
Subject: ZFS slow write performance
I have a system I intend to use to back up a remote system via rsync. It is running FreeBSD/i386 7.2-STABLE and has a ZFS raidz1 pool consisting of four 1 TB SATA drives. The system has 768 MiB of RAM and a 2 GHz Pentium 4 CPU.

Currently, I am just trying to rsync data locally from a read-only, UFS2-mounted, USB-attached hard drive, and am getting (IMHO) poor write speeds of only about 5 MiB/sec. I can't figure out why this is so relatively low. Looking at gstat shows the source and destination drives cruising along at an average of 30%--50% busy (the destination drives averaging between 1800--2000 kBps each in the gstat display). Top shows an average of ~20% system time and ~70% idle (though when I changed "compression=on" to "compression=gzip-9" for the target file system, system CPU load shot up to ~70% utilization). Memory usage is pretty static, with ~165 MiB wired and 512--523 MiB inactive.

Given that nothing appears to be stressing the system, why isn't it making more use of the available resources, in particular disk bandwidth? A dd of a large file from the source USB drive reports a transfer rate of about 15 MiB/sec, so getting only about a third of that when rsyncing to an otherwise idle ZFS pool is disappointing when the source drive can obviously go faster. If I dd /dev/zero to a file on the target ZFS file system, I get about 15 MiB/sec write speed with "compression=off" set and about 18 MiB/sec with "compression=on" set, indicating that the target can go faster, too. Even though I am rsyncing from one local filesystem to another, could the problem lie with rsync overheads?
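(For reference, the dd throughput tests above were along these lines. This is a minimal sketch using placeholder paths under /tmp so it runs anywhere; on the real system, SRC would be a large file on the USB drive and DST a file on the ZFS pool.)

```shell
# Sketch of the sequential-throughput tests described above.
# SRC/DST are placeholder paths, not the real mountpoints.
SRC=/tmp/throughput-src
DST=/tmp/throughput-dst
dd if=/dev/zero of="$SRC" bs=1048576 count=64 2>/dev/null  # create a 64 MiB test file
dd if="$SRC" of="$DST" bs=1048576 count=64                 # sequential copy; dd reports bytes/sec
rm -f "$SRC" "$DST"
```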
Has anyone else encountered poor rsync performance with ZFS and can offer any tuning advice? Otherwise, does anyone have any advice for speeding up my local copy performance?

Here is some dmesg information about the attached hardware:

atapci0: port 0xecf8-0xecff,0xecf0-0xecf3,0xece0-0xece7,0xecd8-0xecdb,0xecc0-0xeccf mem 0xff8ffc00-0xff8fffff irq 16 at device 7.0 on pci1
atapci0: [ITHREAD]
ata2: on atapci0
ata2: [ITHREAD]
ata3: on atapci0
ata3: [ITHREAD]
ata4: on atapci0
ata4: [ITHREAD]
ata5: on atapci0
ata5: [ITHREAD]
[...]
ehci0: mem 0xffa00000-0xffa003ff irq 23 at device 29.7 on pci0
ehci0: [GIANT-LOCKED]
ehci0: [ITHREAD]
usb3: EHCI version 1.0
usb3: companion controllers, 2 ports each: usb0 usb1 usb2
usb3: on ehci0
usb3: USB revision 2.0
uhub3: on usb3
uhub3: 6 ports with 6 removable, self powered
umass0: on uhub3
[...]
ad4: 953869MB at ata2-master SATA150
ad6: 953869MB at ata3-master SATA150
ad8: 953869MB at ata4-master SATA150
ad10: 953869MB at ata5-master SATA150
da0 at umass-sim0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-4 device
da0: 40.000MB/s transfers
da0: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)

I have the following tuning in /boot/loader.conf:

vm.kmem_size="640M"
vm.kmem_size_max="640M"
vfs.zfs.arc_max="320M"
#vfs.zfs.vdev.cache.size="5M"
vfs.zfs.prefetch_disable="1"

Any help or advice is appreciated.

Cheers,

Paul.
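(One way to isolate rsync overhead from raw disk throughput is to time rsync against a plain cp of the same tree. Note that rsync defaults to whole-file transfers when both paths are local, so the delta algorithm itself should not be the cost. A sketch with placeholder /tmp paths; substitute the real USB-source and ZFS-pool paths.)

```shell
# Compare a local rsync against a plain cp of the same data.
# All paths here are placeholders, not the real mountpoints.
mkdir -p /tmp/copytest-src
dd if=/dev/zero of=/tmp/copytest-src/file bs=1048576 count=16 2>/dev/null
if command -v rsync >/dev/null; then          # skip if rsync is not installed
    time rsync -a /tmp/copytest-src/ /tmp/copytest-rsync/
fi
time cp -R /tmp/copytest-src /tmp/copytest-cp
rm -rf /tmp/copytest-src /tmp/copytest-rsync /tmp/copytest-cp
```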