Date: Sun, 27 Feb 2011 13:18:02 +0100
From: Damien Fleuriot <ml@my.gd>
To: Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc: "freebsd-stable@freebsd.org" <freebsd-stable@freebsd.org>
Subject: Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE
Message-ID: <AANLkTindQv=fTC-vMesjqpJhRbh533uqZSc_C47UqNBO@mail.gmail.com>
In-Reply-To: <20110224075517.GA18146@icarus.home.lan>
References: <4D660909.6090202@my.gd> <20110224075517.GA18146@icarus.home.lan>
On 24 February 2011 08:55, Jeremy Chadwick <freebsd@jdc.parodius.com> wrote:
> On Thu, Feb 24, 2011 at 08:30:17AM +0100, Damien Fleuriot wrote:
>> Hello list,
>>
>> I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and since
>> then I've been experiencing *abysmal* performance with samba.
>>
>> We're talking transfer rates of say 50 kbytes/s here, and I'm the only
>> client on the box.
>
> I have a similar system with significantly fewer disks (two pools, one
> disk each; yes, no redundancy).  The system can push about 65-70 MB/s
> across the network via SMB/CIFS, and 80-90 MB/s via FTP.  I'll share
> with you my tunings for Samba, ZFS, and the system.  I spent quite some
> time trying different values in Samba and FreeBSD to find out what got
> me the "best" performance without destroying the system horribly.
>
> Please note that the amount of memory matters greatly here, so don't go
> blindly setting these if your system has an unusually small amount of
> physical RAM installed.
>
> Before getting into what my system has, I also want to make clear that
> there have been cases in the past where people were seeing abysmal
> performance from ZFS, only to find out it was a *single disk* in their
> pool which was causing all of the problems (meaning a single disk was
> performing horribly, impacting everything).  I can try to find the
> mailing list post, but I believe the user offlined the disk (and later
> replaced it) and everything was fast again.  Just an FYI.
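[The single-slow-disk scenario above can be checked quickly. A minimal sketch, not from the original thread: the device names and millisecond service times below are hypothetical sample data of the kind `iostat -x` or `gstat` would report per disk; a healthy pool member usually sits in the single-digit-ms range, so one disk an order of magnitude slower stands out.]

```shell
#!/bin/sh
# Hypothetical per-disk stats (device, average service time in ms), as you
# might collect from "iostat -x" or "gstat" on FreeBSD.  On a live box you
# would substitute real measurements for this heredoc.
cat <<'EOF' |
ada1 6.2
ada2 5.8
ada3 74.5
ada4 6.0
EOF
awk '
  # Remember each disk, accumulate the pool-wide average.
  { t[$1] = $2; sum += $2; n++ }
  END {
    avg = sum / n
    # Flag any disk whose service time is far above the pool average;
    # the 3x threshold is an arbitrary illustrative cutoff.
    for (d in t)
      if (t[d] > 3 * avg)
        printf "%s looks suspect: %.1f ms vs pool average %.1f ms\n", d, t[d], avg
  }'
```

With the sample data, only ada3 is flagged; offlining or replacing such a disk is exactly the fix described in the paragraph above.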
>
>
> System specifications
> =======================
> * Case - Supermicro SC733T-645B
> *   MB - Supermicro X7SBA
> *  CPU - Intel Core 2 Duo E8400
> *  RAM - CT2KIT25672AA800, 4GB ECC
> *  RAM - CT2KIT25672AA80E, 4GB ECC
> * Disk - Intel X25-V SSD (ada0, boot)
> * Disk - WD1002FAEX (ada1, ZFS "data" pool)
> * Disk - WD2001FASS (ada2, ZFS "backups" pool)
>
>
> Samba
> =======================
> Rebuild the port (ports/net/samba35) with AIO_SUPPORT enabled.  To use
> AIO you will need to load the aio.ko kernel module (kldload aio) first.
>
> Relevant smb.conf tunings:
>
>   [global]
>   socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
>   use sendfile = no
>   min receivefile size = 16384
>   aio read size = 16384
>   aio write size = 16384
>   aio write behind = yes
>
>
> ZFS pools
> =======================
>   pool: backups
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         backups     ONLINE       0     0     0
>           ada2      ONLINE       0     0     0
>
> errors: No known data errors
>
>   pool: data
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         data        ONLINE       0     0     0
>           ada1      ONLINE       0     0     0
>
> errors: No known data errors
>
>
> ZFS tunings
> =======================
> Your tunings here are "wild" (meaning all over the place).
> Your use of vfs.zfs.txg.synctime="1" is probably hurting you quite
> badly, in addition to your choice to enable prefetching (every ZFS
> FreeBSD system I've used has benefited tremendously from having
> prefetching disabled, even on systems with 8GB RAM and more).  You do
> not need to specify vm.kmem_size_max, so please remove that.  Keeping
> vm.kmem_size is fine.  Also get rid of your vdev tunings; I'm not sure
> why you have those.
>
> My relevant /boot/loader.conf tunings for 8.2-RELEASE (note to readers:
> the version of FreeBSD you're running, and the build date, matter
> greatly here, so do not just blindly apply these without thinking
> first):
>
>   # We use Samba built with AIO support; we need this module!
>   aio_load="yes"
>
>   # Increase vm.kmem_size to allow the ZFS ARC to utilise more memory.
>   vm.kmem_size="8192M"
>   vfs.zfs.arc_max="6144M"
>
>   # Disable ZFS prefetching
>   # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
>   # Increases overall speed of ZFS, but when disk flushing/writes occur,
>   # the system is less responsive (due to extreme disk I/O).
>   # NOTE: Systems with 8GB of RAM or more have prefetch enabled by
>   # default.
>   vfs.zfs.prefetch_disable="1"
>
>   # Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
>   # should increase throughput and decrease the "bursty" stalls that
>   # happen during immense I/O with ZFS.
>   # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
>   # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
>   vfs.zfs.txg.timeout="5"
>
>
> sysctl tunings
> =======================
> Please note that the kern.maxvnodes tuning below is based on my system
> usage, and yours may vary, so you can remove or comment out this option
> if you wish.  The same goes for vfs.ufs.dirhash_maxmem.
> As for vfs.zfs.txg.write_limit_override, I strongly suggest you keep
> this commented out for starters; it effectively "rate limits" ZFS I/O,
> and this smooths out overall performance (otherwise I was seeing what
> appeared to be incredible network transfer speed, then the system would
> churn hard for quite some time on physical I/O, then fast network speed,
> physical I/O, etc... very "bursty", which I didn't want).
>
>   # Increase send/receive buffer maximums from 256KB to 16MB.
>   # FreeBSD 7.x and later will auto-tune the size, but only up to the max.
>   net.inet.tcp.sendbuf_max=16777216
>   net.inet.tcp.recvbuf_max=16777216
>
>   # Double send/receive TCP datagram memory allocation.  This defines the
>   # amount of memory taken up by default *per socket*.
>   net.inet.tcp.sendspace=65536
>   net.inet.tcp.recvspace=131072
>
>   # dirhash_maxmem defaults to 2097152 (2048KB).  dirhash_mem has reached
>   # this limit a few times, so we should increase dirhash_maxmem to
>   # something like 16MB (16384*1024).
>   vfs.ufs.dirhash_maxmem=16777216
>
>   #
>   # ZFS tuning parameters
>   # NOTE: Be sure to see /boot/loader.conf for additional tunings
>   #
>
>   # Increase number of vnodes; we've seen vfs.numvnodes reach 115,000
>   # at times.  Default max is a little over 200,000.  Playing it safe...
>   kern.maxvnodes=250000
>
>   # Set TXG write limit to a lower threshold.  This helps "level out"
>   # the throughput rate (see "zpool iostat").  A value of 256MB works well
>   # for systems with 4GB of RAM, while 1GB works well for us w/ 8GB on
>   # disks which have 64MB cache.
>   vfs.zfs.txg.write_limit_override=1073741824
>
>
> Good luck.
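[A note on applying these: the sysctl values above belong in /etc/sysctl.conf (the loader-time ones in /boot/loader.conf, which requires a reboot), and `sysctl -f` applies a file of runtime settings without rebooting. A minimal sketch, staging to a scratch path rather than touching the live files:]

```shell
#!/bin/sh
# Hedged sketch: write the runtime sysctl tunings from this mail to a
# staging file instead of /etc/sysctl.conf directly.  Values are the ones
# quoted above for an 8GB box; scale them for your RAM before using.
conf=/tmp/sysctl.conf.staged
cat > "$conf" <<'EOF'
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=131072
vfs.ufs.dirhash_maxmem=16777216
kern.maxvnodes=250000
EOF
# On the live box, review the staged file, merge it into /etc/sysctl.conf,
# and apply immediately (no reboot needed) with:
#   sysctl -f /tmp/sysctl.conf.staged
wc -l < "$conf"
```

Staging first makes it easy to diff against the current /etc/sysctl.conf before committing anything.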
>
> --
> | Jeremy Chadwick                                   jdc@parodius.com |
> | Parodius Networking                       http://www.parodius.com/ |
> | UNIX Systems Administrator                  Mountain View, CA, USA |
> | Making life hard for others since 1977.              PGP 4BD6C0CB  |
>

Hi again Jeremy, list,

I've been able to test with the settings you recommended, and they
obviously did the trick.  I'm now getting much better overall
performance; see below the zpool iostat output during the transfer of a
~5GB file through samba:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rtank       5.17T  7.08T      1      1   146K   188K
rtank       5.17T  7.08T    539      0  67.0M      0
rtank       5.17T  7.08T    564      0  69.9M      0
rtank       5.17T  7.08T    530      0  65.8M      0
rtank       5.17T  7.08T    570      0  70.8M      0
rtank       5.17T  7.08T    567      0  70.3M      0
rtank       5.17T  7.08T    546      0  67.8M      0
rtank       5.17T  7.08T    546      0  67.9M      0
rtank       5.17T  7.08T    571      0  70.9M      0
rtank       5.17T  7.08T    549      0  68.2M      0
rtank       5.17T  7.08T    608      0  75.4M      0
rtank       5.17T  7.08T    565      0  70.3M      0
rtank       5.17T  7.08T    534      0  66.4M      0
rtank       5.17T  7.08T    557      0  69.2M      0
rtank       5.17T  7.08T    557      0  69.1M      0
rtank       5.17T  7.08T    538      0  66.8M      0

Not only am I getting much higher speed (~65 MB/s as opposed to the
~20-40 I saw previously, before the abysmal drop), but the load on the
ZFS pool is now leveled.

Thank you again, the settings definitely did it :)
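[The steady ~65-70 MB/s above can be summarized from `zpool iostat` output with a short pipeline. A sketch: the heredoc holds a few rows abbreviated from the transfer above; on a live system you would pipe `zpool iostat rtank 1` in instead, and rates other than the M suffix would need extra handling.]

```shell
#!/bin/sh
# Average the read-bandwidth column (field 6) of "zpool iostat" output.
# Sample rows are from the ~5GB samba transfer quoted above; all values
# here carry an M (MB/s) suffix, which keeps the parsing trivial.
cat <<'EOF' |
rtank       5.17T  7.08T    539      0  67.0M      0
rtank       5.17T  7.08T    564      0  69.9M      0
rtank       5.17T  7.08T    530      0  65.8M      0
rtank       5.17T  7.08T    570      0  70.8M      0
EOF
awk '{ sub(/M$/, "", $6); sum += $6; n++ } END { printf "avg read: %.1f MB/s\n", sum/n }'
```

For the four sample rows this prints an average of 68.4 MB/s, matching the rough ~65-70 MB/s figure reported in the reply.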