From: Pierre Lemazurier <pierre@lemazurier.fr>
To: freebsd-fs@freebsd.org
Date: Fri, 07 Jun 2013 17:07:18 +0200
Subject: [ZFS] Raid 10 performance issues
In-Reply-To: <51B1EBD1.9010207@gmail.com>

Hi,

I think I am suffering from write and read performance issues on my zpool.

About my system and hardware:

uname -a
FreeBSD bsdnas 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec 4 09:23:10 UTC 2012 root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64

sysinfo -a: http://www.privatepaste.com/b32f34c938

- 24 GB (6x 4 GB) DDR3 ECC: http://www.ec.kingston.com/ecom/configurator_new/partsinfo.asp?ktcpartno=KVR16R11D8/4HC
- 14x this drive: http://www.wdc.com/global/products/specs/?driveID=1086&language=1
- Server: http://www.supermicro.com/products/system/1u/5017/sys-5017r-wrf.cfm?parts=show
- CPU: http://ark.intel.com/fr/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2_00-GHz-7_20-GTs-Intel-QPI
- Chassis: http://www.supermicro.com/products/chassis/4u/847/sc847e16-rjbod1.cfm
- SAS HBA: http://www.lsi.com/products/storagecomponents/Pages/LSISAS9200-8e.aspx
- Cable between chassis and server: http://www.provantage.com/supermicro-cbl-0166l~7SUPA01R.htm

I use this command to test write speed: dd if=/dev/zero of=test.dd bs=2M count=10000
I use this command to test read speed: dd if=test.dd of=/dev/null bs=2M count=10000

Of course there is no compression on the ZFS dataset.

Test on one of these disks, formatted with UFS:

Write:
gstat output: http://www.privatepaste.com/dd31fafaa6
Speed is around 140 MB/s at something like 1100 IOPS.
dd result: 20971520000 bytes transferred in 146.722126 secs (142933589 bytes/sec)

Read:
I think I am reading from RAM (20971520000 bytes transferred in 8.813298 secs (2379531480 bytes/sec)).
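One way to get a cold-cache read number on UFS, I think, is to unmount and remount the filesystem between the write and the read. A minimal sketch, assuming the filesystem lives on /dev/gpt/disk14 and is mounted on /mnt/ufs (both names are examples, not my real setup):

    dd if=/dev/zero of=/mnt/ufs/test.dd bs=2M count=10000
    umount /mnt/ufs                  # drop the pages cached for this filesystem
    mount /dev/gpt/disk14 /mnt/ufs   # remount so the next read hits the disk
    dd if=/mnt/ufs/test.dd of=/dev/null bs=2M count=10000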
Then I ran the read test against the whole drive (dd if=/dev/gpt/disk14.nop of=/dev/null bs=2M count=10000).
gstat output: http://www.privatepaste.com/d022b7c480
Speed is around 140 MB/s again, at nearly 1100+ IOPS.
dd result: 20971520000 bytes transferred in 142.895212 secs (146761530 bytes/sec)

ZFS

I built my zpool this way: http://www.privatepaste.com/e74d9cc3b9

zpool status: http://www.privatepaste.com/0276801ef6
zpool get all: http://www.privatepaste.com/74b37a2429
zfs get all: http://www.privatepaste.com/e56f4a33f8
zfs-stats -a: http://www.privatepaste.com/f017890aa1
zdb: http://www.privatepaste.com/7d723c5556

With this setup (14 disks in a RAID 10 layout, i.e. seven striped 2-disk mirrors), I hoped for nearly 7x the write speed and nearly 14x the read speed of the single UFS disk, so realistically something like 850 MB/s for writes and 1700 MB/s for reads.

ZFS test:

Write:
gstat output: http://www.privatepaste.com/7cefb9393a
zpool iostat -v 1 of one of the fastest tries: http://www.privatepaste.com/8ade4defbe
dd result: 20971520000 bytes transferred in 54.326509 secs (386027381 bytes/sec)
386 MB/s is less than half of what I expected.

Read:
I exported and imported the pool to limit the effect of the ARC. I don't know how to do better; I hope that is sufficient.
gstat output: http://www.privatepaste.com/130ce43af1
zpool iostat -v 1: http://privatepaste.com/eb5f9d3432
dd result: 20971520000 bytes transferred in 30.347214 secs (691052563 bytes/sec)
690 MB/s is 2.5x less than I expected.

It does not appear to be a hardware issue: when I run a dd test on every whole disk at the same time, with the command dd if=/dev/gpt/diskX of=/dev/null bs=1M count=10000 on each one (see the sketch at the end of this mail), I get this gstat output: http://privatepaste.com/df9f63fd4d
That is nearly 130 MB/s for each device, roughly what I expected.

In your opinion, where does the problem come from?

Forgive my English, and please keep the language simple; I am not really comfortable with English. I can give you more information if you need it.

Many thanks for your help.
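P.S. For reference, the parallel whole-disk test mentioned above was run roughly like this. A minimal sketch, assuming the 14 disks carry the GPT labels /dev/gpt/disk1 through /dev/gpt/disk14:

    # start one sequential reader per disk, all in the background
    for i in $(seq 1 14); do
        dd if=/dev/gpt/disk$i of=/dev/null bs=1M count=10000 &
    done
    wait   # let all 14 readers finish before reading the gstat numbers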