From: Arnaud Houdelette <tzim@tzim.net>
Date: Wed, 19 May 2010 11:23:48 +0200
To: Matthias Gamsjager
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS Recordsize tuning & transmission (bittorrent daemon)
Message-ID: <4BF3AE24.9080605@tzim.net>

On 19/05/2010 10:46, Matthias Gamsjager wrote:
>> The ZFS recordsize on both pools is the default (128k). But as the
>> transmission bittorrent client has no write (nor read) cache, data
>> could be written in smaller chunks during download. Could this lead
>> to data being stored in many partially-filled records? Would those
>> partial records have to be read back as whole 128k records during
>> the move, which would explain the read/write difference above?
>>
>> I'm just making assumptions here, as my understanding of ZFS
>> internals is limited. Some insight would be appreciated.
>>
> Well, ZFS does not write immediately; it waits a couple of seconds
> and then commits the writes in a single operation.

That I understand. But bittorrent writes are really random... I'm not
sure ZFS is able to aggregate those writes.
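If partially-filled records really are the culprit, one thing I could
try is lowering the recordsize on the download dataset and re-running a
download to compare. As I understand it, recordsize only applies to
files written after the change, so existing files keep their 128k
layout. Just a sketch; the 16K value is a guess on my part, not a
recommendation:

  # show the current recordsize on the download dataset
  zfs get recordsize unsafe/dl
  # lower it so small random torrent writes map to smaller records
  # (16K is an arbitrary test value)
  zfs set recordsize=16K unsafe/dl
  # note: only files created from now on use the new recordsize;
  # files already on disk keep their 128K records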
> Could you give more info about which FreeBSD version you use, the
> hardware, the ZFS parameters in /boot/loader.conf, and zfs info like
> compression, etc.?

uname -a

FreeBSD carenath.tzim.net 8.0-STABLE FreeBSD 8.0-STABLE #0: Tue May 11
18:29:26 CEST 2010
tzim@carenath.tzim.net:/usr/obj/usr/src/sys/CARENATH  amd64

Hardware:

Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: AMD Athlon(tm) 64 Processor 3200+ (1995.24-MHz K8-class CPU)
  Origin = "AuthenticAMD"  Id = 0x40ff2  Family = f  Model = 4f
  Stepping = 2
  Features=0x78bfbff
  Features2=0x2001
  AMD Features=0xea500800
  AMD Features2=0x1d
real memory  = 1610612736 (1536 MB)
avail memory = 1508560896 (1438 MB)

- For unsafe:
atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xff00-0xff0f
at device 20.1 on pci0
ad0: 152627MB at ata0-master UDMA33

- For tank:
ahci0: port 0xb000-0xb007,0xa000-0xa003,0x9000-0x9007,0x8000-0x8003,
0x7000-0x700f mem 0xfe7ff800-0xfe7ffbff irq 22 at device 18.0 on pci0
ada0: ATA-7 SATA 2.x device
ada1: ATA-7 SATA 2.x device
ada2: ATA-7 SATA 2.x device
ada3: ATA-7 SATA 2.x device

zfs get all unsafe/dl

NAME       PROPERTY              VALUE                  SOURCE
unsafe/dl  type                  filesystem             -
unsafe/dl  creation              Thu Nov 26 10:25 2009  -
unsafe/dl  used                  26.2G                  -
unsafe/dl  available             106G                   -
unsafe/dl  referenced            12.6G                  -
unsafe/dl  compressratio         1.00x                  -
unsafe/dl  mounted               yes                    -
unsafe/dl  quota                 none                   default
unsafe/dl  reservation           1G                     local
unsafe/dl  recordsize            128K                   default
unsafe/dl  mountpoint            /store/dl              local
unsafe/dl  sharenfs              off                    default
unsafe/dl  checksum              on                     default
unsafe/dl  compression           off                    default
unsafe/dl  atime                 off                    local
unsafe/dl  devices               on                     default
unsafe/dl  exec                  on                     default
unsafe/dl  setuid                on                     default
unsafe/dl  readonly              off                    default
unsafe/dl  jailed                off                    default
unsafe/dl  snapdir               hidden                 default
unsafe/dl  aclmode               groupmask              default
unsafe/dl  aclinherit            restricted             default
unsafe/dl  canmount              on                     default
unsafe/dl  shareiscsi            off                    default
unsafe/dl  xattr                 off                    temporary
unsafe/dl  copies                1                      default
unsafe/dl  version               3                      -
unsafe/dl  utf8only              off                    -
unsafe/dl  normalization         none                   -
unsafe/dl  casesensitivity       sensitive              -
unsafe/dl  vscan                 off                    default
unsafe/dl  nbmand                off                    default
unsafe/dl  sharesmb              off                    default
unsafe/dl  refquota              none                   default
unsafe/dl  refreservation        none                   default
unsafe/dl  primarycache          all                    default
unsafe/dl  secondarycache        all                    default
unsafe/dl  usedbysnapshots       13.6G                  -
unsafe/dl  usedbydataset         12.6G                  -
unsafe/dl  usedbychildren        0                      -
unsafe/dl  usedbyrefreservation  0                      -

cat /boot/loader.conf

ahci_load="YES"
zfs_load="YES"
vfs.root.mountfrom="zfs:unsafe/root"
#vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="150M"
vfs.zfs.arc_min="64M"
vfs.zfs.vdev.cache.size="10M"
vfs.zfs.prefetch_disable="0"

Bad performance IS expected on this hardware. This is a home NAS, and
the "unsafe" pool is on a laptop 2.5" IDE drive. Still, bad performance
wouldn't explain the discrepancy between read and write stats (in both
zpool iostat and gstat).

> Did you test your pool with iozone to see if it performs as it
> should?

I did not. I just installed the port. What test should I run to get
relevant data?

Thanks.
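PS: partly answering myself after skimming iozone(1): I'd guess the
random read/write test with a small record size comes closest to what
transmission does. Maybe something like this (the record and file
sizes are just a first stab, and the path is on the dataset in
question):

  # -i 0 creates the test file (write/rewrite); -i 2 then runs the
  # random read/write test on it. 16k records on a 1 GB file, which
  # is well above my 150M arc_max; -e includes flush in the timings.
  iozone -e -i 0 -i 2 -r 16k -s 1g -f /store/dl/iozone.tmp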