From: Ivan Voras
To: freebsd-stable@freebsd.org
Date: Sun, 03 Oct 2010 20:19:52 +0200
Subject: Re: MySQL performance concern

On 10/02/10 22:18, Rumen Telbizov wrote:
>   pool: tank
>  config:
>
>         NAME            STATE     READ WRITE CKSUM
>         tank            ONLINE       0     0     0
>           mirror        ONLINE       0     0     0
>             gpt/tank0   ONLINE       0     0     0
>             gpt/tank1   ONLINE       0     0     0
>           mirror        ONLINE       0     0     0
>             gpt/tank2   ONLINE       0     0     0
>             gpt/tank3   ONLINE       0     0     0
>         logs            ONLINE       0     0     0
>           mirror        ONLINE       0     0     0
>             gpt/zil0    ONLINE       0     0     0
>             gpt/zil1    ONLINE       0     0     0
>         cache
>           gpt/l2arc0    ONLINE       0     0     0
>           gpt/l2arc1    ONLINE       0     0     0
>
>   pool: zroot
>  config:
>
>         NAME            STATE     READ WRITE CKSUM
>         zroot           ONLINE       0     0     0
>           mirror        ONLINE       0     0     0
>             gpt/zroot0  ONLINE       0     0     0
>             gpt/zroot1  ONLINE       0     0     0
>
> zroot is a couple of small partitions from two of the same SAS disks. zil
> and l2arc are 8 and 22G partitions from 32G SSDs

This looks a bit overly complex (your recovery procedure if one of the
drives goes bad will include re-creating the partition layout), but it
probably shouldn't affect performance. Just to check, mapped to physical
drives this looks like the following ("gpt/" prefix omitted for brevity):

* tank0..tank3 : on SAS drives
* zroot0, zroot1 : on some of the same SAS drives as above
* zil0, zil1 : on SSD drives
* l2arc0, l2arc1 : on the same SSD drives as above

ARC and ZIL have very different IO characteristics, and I don't know
whether they interfere with each other here. Can you spend some time
looking at the output of "gstat" while the database task is running and
see if there's anything odd, like the "%busy" column going near 100% for
some of the providers? What IO bandwidth and ops/s are you getting?
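For example, something along these lines should limit the display to
just those partitions and refresh about once a second (flags quoted from
memory, so please double-check against gstat(8); the regular expression
is only an illustration):

    # watch only the gpt/tank*, gpt/zil* and gpt/l2arc* providers,
    # updating roughly once per second
    gstat -f 'gpt/(tank|zil|l2arc)' -I 1s

The interesting columns are r/s, w/s, kBps and %busy per partition.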
> I pretty much have no zfs tuning done since from what I've found there
> shouldn't be any needed since I'm running 8.1 on a 64bit machine.
> Let me know if you'd like me to experiment with any ...
>
> Some additional information:
> # sysctl vm.kmem_size
> vm.kmem_size: 5539958784
> # sysctl vm.kmem_size_max
> vm.kmem_size_max: 329853485875
> # sysctl vfs.zfs.arc_max
> vfs.zfs.arc_max: 4466216960

I have done some digging myself and it seems that two settings have a
noticeable impact on MySQL load (see the sketch at the end of this
message):

* the ZFS record size (the "recordsize" property) - you need to
  re-create all the MySQL files to change this; set it to 8 KiB (or
  whatever MyISAM uses for its block size)
* reducing vfs.zfs.txg.timeout to about 5 seconds

Are you using ZFS compression?

See http://jp.planet.mysql.com/entry/?id=19489 for more ideas.

Other than that, your CPUs are:

New: 2 x dual-core Xeon E5502 @ 1.87 GHz
Old: 2 x quad-core Xeon E5410 @ 2.33 GHz

You can see here how different they are:
http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors

Specifically, as you are using a single-threaded client, you *need* the
additional GHz of the old server. You are quoting 30% CPU usage on the
new server - I assume this is the "total" CPU as reported by utilities
like "top", "iostat", "vmstat", etc. - meaning that if the system has
four CPU cores, one of them is 100% busy (i.e. 25% of the total) and
another is about 20% used.
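If you want to experiment with the two ZFS settings above, a rough
sketch follows. The dataset name "tank/mysql" is only an example -
substitute whatever dataset actually holds the MySQL data - and note
that recordsize only applies to files written after the change, so the
data needs to be copied back in or reloaded from a dump afterwards:

    # use 8 KiB records on the dataset holding the MySQL data
    # ("tank/mysql" is just an example name)
    zfs set recordsize=8K tank/mysql

    # shorter transaction group timeout; if I remember correctly this
    # is a boot-time tunable on 8.1, so it goes in /boot/loader.conf:
    vfs.zfs.txg.timeout="5"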