From owner-freebsd-fs@FreeBSD.ORG Mon Mar 18 16:13:21 2013
From: Davide D'Amico <davide.damico@contactlab.com>
To: Steven Hartland
Subject: Re: FreeBSD 9.1 and ZFS v28 performances
Date: Mon, 18 Mar 2013 17:13:17 +0100
Message-ID: <51473D1D.3050306@contactlab.com>
In-Reply-To: <810E5C08C2D149DBAC94E30678234995@multiplay.co.uk>
References: <514729BD.2000608@contactlab.com> <810E5C08C2D149DBAC94E30678234995@multiplay.co.uk>
Cc: freebsd-fs@freebsd.org

On 18/03/13 16:31, Steven Hartland wrote:
>
> ----- Original Message ----- From: "Davide D'Amico"
> To:
> Sent: Monday, March 18, 2013 2:50 PM
> Subject: FreeBSD 9.1 and ZFS v28 performances
>
>
>> Hi all,
>> I'm trying to use ZFS on a DELL R720 with 2x6-core CPUs, 32GB RAM, an
>> H710 controller (no JBOD) and 15K rpm SAS disks. I will use it for a
>> MySQL 5.6 server, so I am trying to use ZFS to get L2ARC and ZIL
>> benefits.
>>
>> I created a RAID10 volume and used zpool to create a pool on top:
>>
>> # zpool create DATA mfid3
>> # zpool add DATA cache mfid1 log mfid2
>>
>> I have a question about ZFS performance. Using:
>>
>> dd if=/dev/zero of=file.out bs=16k count=1M
>>
>> I cannot go faster than 400MB/s, so I think I'm missing something; I
>> tried removing the ZIL and the L2ARC, but everything stays the same.
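As a side note, a smaller self-contained version of that write test can be sketched as follows; the 256 MiB size and the temp-file path are illustrative choices, not from the original post:

```shell
# Sketch of the same sequential write pattern with 16k blocks.
# 16384 blocks * 16384 bytes = 256 MiB, small enough to run anywhere.
OUT=$(mktemp)
dd if=/dev/zero of="$OUT" bs=16k count=16384 2>/dev/null
wc -c < "$OUT"    # expect 268435456 bytes
rm -f "$OUT"
```

Note that a /dev/zero stream compresses almost to nothing, so with compression enabled dd can report unrealistically high figures; the pool here has compression=off, so that caveat does not apply.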
>>
>> Here are my configuration details:
>>
>> OS: FreeBSD 9.1 amd64 GENERIC
>>
>> /boot/loader.conf:
>> vfs.zfs.arc_min="4096M"
>> vfs.zfs.arc_max="15872M"
>> vm.kmem_size_max="64G"
>> vm.kmem_size="49152M"
>> vfs.zfs.write_limit_override=1073741824
>>
>> /etc/sysctl.conf:
>> kern.ipc.somaxconn=32768
>> kern.threads.max_threads_per_proc=16384
>> kern.maxfiles=262144
>> kern.maxfilesperproc=131072
>> kern.ipc.nmbclusters=65536
>> kern.corefile="/var/coredumps/%U.%N.%P.core"
>> vfs.zfs.prefetch_disable="1"
>> kern.maxvnodes=250000
>>
>> mfiutil show volumes:
>> mfi0 Volumes:
>>   Id     Size    Level   Stripe  State   Cache   Name
>>  mfid0 (  278G) RAID-1      64k OPTIMAL Disabled
>>  mfid1 (  118G) RAID-0      64k OPTIMAL Disabled
>>  mfid2 (  118G) RAID-0      64k OPTIMAL Disabled
>>  mfid3 ( 1116G) RAID-10     64k OPTIMAL Disabled
>>
>> zpool status:
>>   pool: DATA
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         DATA        ONLINE       0     0     0
>>           mfid3     ONLINE       0     0     0
>>         logs
>>           mfid2     ONLINE       0     0     0
>>         cache
>>           mfid1     ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> zfs get all DATA:
>> NAME  PROPERTY              VALUE                  SOURCE
>> DATA  type                  filesystem             -
>> DATA  creation              Mon Mar 18 13:41 2013  -
>> DATA  used                  53.0G                  -
>> DATA  available             1.02T                  -
>> DATA  referenced            53.0G                  -
>> DATA  compressratio         1.00x                  -
>> DATA  mounted               yes                    -
>> DATA  quota                 none                   default
>> DATA  reservation           none                   default
>> DATA  recordsize            16K                    local
>> DATA  mountpoint            /DATA                  default
>> DATA  sharenfs              off                    default
>> DATA  checksum              on                     default
>> DATA  compression           off                    default
>> DATA  atime                 off                    local
>> DATA  devices               on                     default
>> DATA  exec                  on                     default
>> DATA  setuid                on                     default
>> DATA  readonly              off                    default
>> DATA  jailed                off                    default
>> DATA  snapdir               hidden                 default
>> DATA  aclmode               discard                default
>> DATA  aclinherit            restricted             default
>> DATA  canmount              on                     default
>> DATA  xattr                 off                    temporary
>> DATA  copies                1                      default
>> DATA  version               5                      -
>> DATA  utf8only              off                    -
>> DATA  normalization         none                   -
>> DATA  casesensitivity       sensitive              -
>> DATA  vscan                 off                    default
>> DATA  nbmand                off                    default
>> DATA  sharesmb              off                    default
>> DATA  refquota              none                   default
>> DATA  refreservation        none                   default
>> DATA  primarycache          metadata               local
>> DATA  secondarycache        all                    default
>> DATA  usedbysnapshots       0                      -
>> DATA  usedbydataset         53.0G                  -
>> DATA  usedbychildren        242K                   -
>> DATA  usedbyrefreservation  0                      -
>> DATA  logbias               latency                default
>> DATA  dedup                 off                    default
>> DATA  mlslabel                                     -
>> DATA  sync                  standard               default
>> DATA  refcompressratio      1.00x                  -
>> DATA  written               53.0G                  -
>> DATA  zfs:zfs_nocacheflush  1                      local
>>
>>
>> I'm using recordsize=16k because of MySQL.
>>
>> I am trying to use sysbench (0.5, not in the ports tree yet) with the
>> OLTP test suite, and my performance is not so good.
>
> First off, ideally you shouldn't use RAID controllers for ZFS; let it
> have the raw disks and use a JBOD controller, e.g. mps, not a HW RAID
> controller like mfi.

I tried removing the hardware RAID10, leaving 4 disks unconfigured, and then:

# mfiutil create jbod mfid3 mfid4 mfid5 mfid6

Same behaviour/performance (probably because the PERC H710 'sees' them
as single-disk RAID-0 devices). Here are my controller details:

mfi0 Firmware Package Version: 21.0.2-0001
mfi0 Firmware Images:
Name  Version                        Date         Time      Status
BIOS  5.30.00_4.12.05.00_0x05110000  1/ 7/2012    1/ 7/2012 active
CTLR  4.00-0014                      Aug 04 2011  12:49:17  active
PCLI  05.00-03:#%00008               Feb 17 2011  14:03:12  active
APP   3.130.05-1587                  Apr 03 2012  09:36:13  active
NVDT  2.1108.03-0076                 Dec 02 2011  22:55:02  active
BTBL  2.03.00.00-0003                Dec 16 2010  17:31:28  active
BOOT  06.253.57.219                  9/9/2010     15:32:25  active

> HEAD has some significant changes for the mfi driver, specifically:
> http://svnweb.freebsd.org/base?view=revision&revision=247369
>
> This fixes lots of bugs but also enables full queue support on TBOLT
> cards, so if your mfi is a TBOLT card you may see some speed-up in
> random IO, not that this would affect your test here.
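On the recordsize point: recordsize only affects files written after it is set, and data files and logs are commonly given separate datasets. A minimal sketch, assuming the pool name DATA from above; the dataset names and the logbias choice are illustrative, not from the post:

```shell
# Hypothetical dataset layout for MySQL (names are illustrative).
# 16k matches the InnoDB page size; the redo/binary logs are written
# sequentially, so they keep the 128k default recordsize.
zfs create -o recordsize=16k -o primarycache=metadata DATA/innodb
zfs create DATA/innodb-logs
# logbias=throughput steers large writes away from a separate log
# device, which is often suggested for InnoDB data files.
zfs set logbias=throughput DATA/innodb
```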
>
> While having a separate ZIL disk is good, your benefits may well be
> limited if said disk is a traditional HD; better to look at enterprise
> SSDs for this. The same and then some applies to your L2ARC disks.

I'm using SSD disks for the ZFS cache and log:

mfi0 Physical Drives:
 0 ( 279G) ONLINE SAS  E1:S0
 1 ( 279G) ONLINE SAS  E1:S1
 2 ( 558G) ONLINE SAS  E1:S2
 3 ( 558G) ONLINE SAS  E1:S3
 4 ( 558G) ONLINE SAS  E1:S4
 5 ( 558G) ONLINE SAS  E1:S5
 6 ( 119G) ONLINE SATA E1:S6
 7 ( 119G) ONLINE SATA E1:S7

Thanks,
d.
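For completeness, the "removing zil, removing l2arc" experiment from the first mail can be repeated online on a v28 pool; a sketch using the device names from the posts above:

```shell
# Log and cache vdevs can be detached and re-attached without downtime
# (pool version 28 supports log device removal).
zpool remove DATA mfid2                # drop the separate log (ZIL) device
zpool remove DATA mfid1                # drop the L2ARC cache device
zpool add DATA cache mfid1 log mfid2   # re-add both, as in the first mail
zpool iostat -v DATA 5                 # watch per-vdev traffic while re-testing
```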