From: Miroslav Lachman <000.fbsd@quip.cz>
Date: Mon, 15 Feb 2010 19:28:39 +0100
To: Jeremy Chadwick
Cc: freebsd-stable@freebsd.org
Subject: Re: More zfs benchmarks
Message-ID: <4B799257.9080304@quip.cz>
In-Reply-To: <20100215170156.GA64731@icarus.home.lan>
References: <20100215170156.GA64731@icarus.home.lan>

Jeremy Chadwick wrote:
> On Sun, Feb 14, 2010 at 05:28:28PM +0000, Jonathan Belson wrote:
>> Hiya
>>
>> After reading some earlier threads about zfs performance, I decided to
>> test my own server. I found the results rather surprising...
>
> Below are my results from my home machine. Note that my dd size and
> count differ from what the OP provided.
>
> I should note that powerd(8) is in effect on this box; I probably should
> have disabled it and forced the CPU frequency to be at max before doing
> these tests.

I did the same tests as you on my backup storage server, an HP ML110 G5
with 4x 1TB Samsung drives in RAIDZ.
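Each run below boils down to the same before/after sequence; scripted, it
could look roughly like this (a minimal sketch only: the pool name matches
the output below, the snapshot file names under /tmp are purely
illustrative):

#!/bin/sh
# Sketch of the test sequence: snapshot the ARC counters, run a
# sequential write with dd, then snapshot the counters again.
# Pool name and snapshot file names are illustrative.
POOL=tank
for count in 5000 50000 100000 150000; do
    sysctl kstat.zfs.misc.arcstats > /tmp/arcstats.before.$count
    dd if=/dev/zero of=/$POOL/test.$count bs=64k count=$count
    sysctl kstat.zfs.misc.arcstats > /tmp/arcstats.after.$count
done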
Unfortunately, there is no kstat.zfs.misc.arcstats.memory_throttle_count
on FreeBSD 7.2.

I can run this test on a Sun Fire X2100 with 4GB RAM and 2x 500GB Hitachi
drives in a ZFS mirror on FreeBSD 7.2 (let me know if somebody is
interested in the results for comparison).

root@kiwi ~/# uname -a
FreeBSD kiwi.codelab.cz 7.2-RELEASE-p4 FreeBSD 7.2-RELEASE-p4 #0: Fri Oct 2 08:22:32 UTC 2009 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64

root@kiwi ~/# uptime
 6:46PM up 6 days, 7:30, 1 user, load averages: 0.00, 0.00, 0.00

root@kiwi ~/# sysctl hw.machine hw.model hw.ncpu hw.physmem hw.usermem hw.realmem hw.pagesizes
hw.machine: amd64
hw.model: Intel(R) Pentium(R) Dual CPU E2160 @ 1.80GHz
hw.ncpu: 2
hw.physmem: 5219966976
hw.usermem: 801906688
hw.realmem: 5637144576
sysctl: unknown oid 'hw.pagesizes'

root@kiwi ~/# sysctl vm.kmem_size vm.kmem_size_min vm.kmem_size_max vm.kmem_size_scale
vm.kmem_size: 1684733952
vm.kmem_size_min: 0
vm.kmem_size_max: 3865468109
vm.kmem_size_scale: 3

root@kiwi ~/# dmesg | egrep '(ata[01]|atapci0)'
atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1c10-0x1c1f,0x1c00-0x1c0f at device 31.2 on pci0
ata0: on atapci0
ata0: [ITHREAD]
ata1: on atapci0
ata1: [ITHREAD]
ad0: 953869MB at ata0-master SATA300
ad1: 953869MB at ata0-slave SATA300
ad2: 953869MB at ata1-master SATA300
ad3: 953869MB at ata1-slave SATA300

root@kiwi ~/# egrep '^[a-z]' /boot/loader.conf
hw.bge.allow_asf="1"

root@kiwi ~/# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad0     ONLINE       0     0     0
            ad1     ONLINE       0     0     0
            ad2     ONLINE       0     0     0
            ad3     ONLINE       0     0     0

errors: No known data errors

before tests
============
root@kiwi ~/# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.hits: 350294273
kstat.zfs.misc.arcstats.misses: 8369056
kstat.zfs.misc.arcstats.demand_data_hits: 4336959
kstat.zfs.misc.arcstats.demand_data_misses: 135936
kstat.zfs.misc.arcstats.demand_metadata_hits: 267825050
kstat.zfs.misc.arcstats.demand_metadata_misses: 6177625
kstat.zfs.misc.arcstats.prefetch_data_hits: 138128
kstat.zfs.misc.arcstats.prefetch_data_misses: 400434
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 77994136
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1655061
kstat.zfs.misc.arcstats.mru_hits: 158218094
kstat.zfs.misc.arcstats.mru_ghost_hits: 9777
kstat.zfs.misc.arcstats.mfu_hits: 114654575
kstat.zfs.misc.arcstats.mfu_ghost_hits: 244807
kstat.zfs.misc.arcstats.deleted: 9904481
kstat.zfs.misc.arcstats.recycle_miss: 2855906
kstat.zfs.misc.arcstats.mutex_miss: 9362
kstat.zfs.misc.arcstats.evict_skip: 1483848
kstat.zfs.misc.arcstats.hash_elements: 77770
kstat.zfs.misc.arcstats.hash_elements_max: 553646
kstat.zfs.misc.arcstats.hash_collisions: 8012499
kstat.zfs.misc.arcstats.hash_chains: 15382
kstat.zfs.misc.arcstats.hash_chain_max: 16
kstat.zfs.misc.arcstats.p: 1107222849
kstat.zfs.misc.arcstats.c: 1263550464
kstat.zfs.misc.arcstats.c_min: 52647936
kstat.zfs.misc.arcstats.c_max: 1263550464
kstat.zfs.misc.arcstats.size: 1263430144

test #1 (327,680,000 bytes) [~412MB/s - buffered]
=================================================
root@kiwi ~/# dd if=/dev/zero of=/tank/test01 bs=64k count=5000
5000+0 records in
5000+0 records out
327680000 bytes transferred in 0.758220 secs (432170107 bytes/sec)

test #1 (kstat.zfs.misc.arcstats)
=================================
root@kiwi ~/# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.hits: 350294422
kstat.zfs.misc.arcstats.misses: 8369059
kstat.zfs.misc.arcstats.demand_data_hits: 4337042
kstat.zfs.misc.arcstats.demand_data_misses: 135936
kstat.zfs.misc.arcstats.demand_metadata_hits: 267825116
kstat.zfs.misc.arcstats.demand_metadata_misses: 6177628
kstat.zfs.misc.arcstats.prefetch_data_hits: 138128
kstat.zfs.misc.arcstats.prefetch_data_misses: 400434
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 77994136
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1655061
kstat.zfs.misc.arcstats.mru_hits: 158218145
kstat.zfs.misc.arcstats.mru_ghost_hits: 9777
kstat.zfs.misc.arcstats.mfu_hits: 114654673
kstat.zfs.misc.arcstats.mfu_ghost_hits: 244807
kstat.zfs.misc.arcstats.deleted: 9942641
kstat.zfs.misc.arcstats.recycle_miss: 2856395
kstat.zfs.misc.arcstats.mutex_miss: 9362
kstat.zfs.misc.arcstats.evict_skip: 1483848
kstat.zfs.misc.arcstats.hash_elements: 42137
kstat.zfs.misc.arcstats.hash_elements_max: 553646
kstat.zfs.misc.arcstats.hash_collisions: 8013282
kstat.zfs.misc.arcstats.hash_chains: 5506
kstat.zfs.misc.arcstats.hash_chain_max: 16
kstat.zfs.misc.arcstats.p: 1042257654
kstat.zfs.misc.arcstats.c: 1185812496
kstat.zfs.misc.arcstats.c_min: 52647936
kstat.zfs.misc.arcstats.c_max: 1263550464
kstat.zfs.misc.arcstats.size: 1185738752

test #2 (3,276,800,000 bytes) [~126MB/s]
========================================
root@kiwi ~/# dd if=/dev/zero of=/tank/test02 bs=64k count=50000
50000+0 records in
50000+0 records out
3276800000 bytes transferred in 24.713113 secs (132593575 bytes/sec)

test #2 (kstat.zfs.misc.arcstats)
=================================
root@kiwi ~/# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.hits: 350294793
kstat.zfs.misc.arcstats.misses: 8369070
kstat.zfs.misc.arcstats.demand_data_hits: 4337253
kstat.zfs.misc.arcstats.demand_data_misses: 135940
kstat.zfs.misc.arcstats.demand_metadata_hits: 267825276
kstat.zfs.misc.arcstats.demand_metadata_misses: 6177635
kstat.zfs.misc.arcstats.prefetch_data_hits: 138128
kstat.zfs.misc.arcstats.prefetch_data_misses: 400434
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 77994136
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1655061
kstat.zfs.misc.arcstats.mru_hits: 158218309
kstat.zfs.misc.arcstats.mru_ghost_hits: 9777
kstat.zfs.misc.arcstats.mfu_hits: 114654880
kstat.zfs.misc.arcstats.mfu_ghost_hits: 244807
kstat.zfs.misc.arcstats.deleted: 9982199
kstat.zfs.misc.arcstats.recycle_miss: 2857105
kstat.zfs.misc.arcstats.mutex_miss: 9375
kstat.zfs.misc.arcstats.evict_skip: 1483848
kstat.zfs.misc.arcstats.hash_elements: 27799
kstat.zfs.misc.arcstats.hash_elements_max: 553646
kstat.zfs.misc.arcstats.hash_collisions: 8018034
kstat.zfs.misc.arcstats.hash_chains: 2604
kstat.zfs.misc.arcstats.hash_chain_max: 16
kstat.zfs.misc.arcstats.p: 993103671
kstat.zfs.misc.arcstats.c: 1112857236
kstat.zfs.misc.arcstats.c_min: 52647936
kstat.zfs.misc.arcstats.c_max: 1263550464
kstat.zfs.misc.arcstats.size: 1112830464

test #3 (6,553,600,000 bytes) [~115MB/s]
========================================
root@kiwi ~/# dd if=/dev/zero of=/tank/test03 bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 54.284611 secs (120726664 bytes/sec)

root@kiwi ~/# iostat -x -w 5 ad{0,1,2,3}
                        extended device statistics
device     r/s    w/s    kr/s     kw/s wait svc_t  %b
ad0        0.0  948.3     0.0  39875.9   60  43.2  70
ad1        0.0  948.3     0.0  39863.3   58  43.2  70
ad2        0.0  936.1     0.0  39801.2   60  44.4  69
ad3        0.0  935.3     0.0  39799.2   67  49.3  70
                        extended device statistics
device     r/s    w/s    kr/s     kw/s wait svc_t  %b
ad0        0.0  974.0     0.0  41074.1    0  45.5  71
ad1        0.0  974.3     0.0  41074.1    0  45.5  71
ad2        0.0  964.1     0.0  40861.7   50  40.9  71
ad3        0.0  962.5     0.0  40836.3   42  47.6  71
                        extended device statistics
device     r/s    w/s    kr/s     kw/s wait svc_t  %b
ad0        0.0 1024.0     0.0  43083.7   64  41.9  75
ad1        0.0 1024.0     0.0  43079.5   63  42.0  75
ad2        0.0 1021.4     0.0  43140.5   64  42.1  75
ad3        0.0 1022.4     0.0  43112.4   68  47.6  75

test #3 (kstat.zfs.misc.arcstats)
=================================
root@kiwi ~/# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.hits: 350318200
kstat.zfs.misc.arcstats.misses: 8369077
kstat.zfs.misc.arcstats.demand_data_hits: 4353290
kstat.zfs.misc.arcstats.demand_data_misses: 135943
kstat.zfs.misc.arcstats.demand_metadata_hits: 267825457
kstat.zfs.misc.arcstats.demand_metadata_misses: 6177639
kstat.zfs.misc.arcstats.prefetch_data_hits: 145317
kstat.zfs.misc.arcstats.prefetch_data_misses: 400434
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 77994136
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1655061
kstat.zfs.misc.arcstats.mru_hits: 158234399
kstat.zfs.misc.arcstats.mru_ghost_hits: 9777
kstat.zfs.misc.arcstats.mfu_hits: 114655008
kstat.zfs.misc.arcstats.mfu_ghost_hits: 244807
kstat.zfs.misc.arcstats.deleted: 10034672
kstat.zfs.misc.arcstats.recycle_miss: 2857257
kstat.zfs.misc.arcstats.mutex_miss: 9375
kstat.zfs.misc.arcstats.evict_skip: 1483848
kstat.zfs.misc.arcstats.hash_elements: 25839
kstat.zfs.misc.arcstats.hash_elements_max: 553646
kstat.zfs.misc.arcstats.hash_collisions: 8026271
kstat.zfs.misc.arcstats.hash_chains: 2265
kstat.zfs.misc.arcstats.hash_chain_max: 16
kstat.zfs.misc.arcstats.p: 978217976
kstat.zfs.misc.arcstats.c: 1078080448
kstat.zfs.misc.arcstats.c_min: 52647936
kstat.zfs.misc.arcstats.c_max: 1263550464
kstat.zfs.misc.arcstats.size: 1077974016

test #4 (9,830,400,000 bytes) [~111MB/s]
========================================
root@kiwi ~/# dd if=/dev/zero of=/tank/test04 bs=64k count=150000
150000+0 records in
150000+0 records out
9830400000 bytes transferred in 84.240802 secs (116694046 bytes/sec)

test #4 (kstat.zfs.misc.arcstats)
=================================
root@kiwi ~/# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.hits: 350339957
kstat.zfs.misc.arcstats.misses: 8369627
kstat.zfs.misc.arcstats.demand_data_hits: 4368343
kstat.zfs.misc.arcstats.demand_data_misses: 135948
kstat.zfs.misc.arcstats.demand_metadata_hits: 267827004
kstat.zfs.misc.arcstats.demand_metadata_misses: 6177699
kstat.zfs.misc.arcstats.prefetch_data_hits: 150148
kstat.zfs.misc.arcstats.prefetch_data_misses: 400434
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 77994462
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1655546
kstat.zfs.misc.arcstats.mru_hits: 158249785
kstat.zfs.misc.arcstats.mru_ghost_hits: 9777
kstat.zfs.misc.arcstats.mfu_hits: 114656222
kstat.zfs.misc.arcstats.mfu_ghost_hits: 244808
kstat.zfs.misc.arcstats.deleted: 10114638
kstat.zfs.misc.arcstats.recycle_miss: 2857637
kstat.zfs.misc.arcstats.mutex_miss: 9384
kstat.zfs.misc.arcstats.evict_skip: 1483848
kstat.zfs.misc.arcstats.hash_elements: 22056
kstat.zfs.misc.arcstats.hash_elements_max: 553646
kstat.zfs.misc.arcstats.hash_collisions: 8038056
kstat.zfs.misc.arcstats.hash_chains: 1631
kstat.zfs.misc.arcstats.hash_chain_max: 16
kstat.zfs.misc.arcstats.p: 1009315376
kstat.zfs.misc.arcstats.c: 1078080448
kstat.zfs.misc.arcstats.c_min: 52647936
kstat.zfs.misc.arcstats.c_max: 1263550464
kstat.zfs.misc.arcstats.size: 1078049280

root@kiwi ~/# ~/bin/arc_summary.pl
System Memory:
         Physical RAM:  4978 MB
         Free Memory :  0 MB

ARC Size:
         Current Size:             1028 MB (arcsize)
         Target Size (Adaptive):   1028 MB (c)
         Min Size (Hard Limit):    50 MB (zfs_arc_min)
         Max Size (Hard Limit):    1205 MB (zfs_arc_max)

ARC Size Breakdown:
         Most Recently Used Cache Size:          93%    962 MB (p)
         Most Frequently Used Cache Size:         6%    65 MB (c-p)

ARC Efficency:
         Cache Access Total:             358711136
         Cache Hit Ratio:      97%       350341492      [Defined State for buffer]
         Cache Miss Ratio:      2%       8369644        [Undefined State for Buffer]
         REAL Hit Ratio:       76%       272907542      [MRU/MFU Hits Only]

         Data Demand   Efficiency:    96%
         Data Prefetch Efficiency:    27%

        CACHE HITS BY CACHE LIST:
          Anon:                       22%    77179357       [ New Customer, First Cache Hit ]
          Most Recently Used:         45%    158250219 (mru)       [ Return Customer ]
          Most Frequently Used:       32%    114657323 (mfu)       [ Frequent Customer ]
          Most Recently Used Ghost:    0%    9777 (mru_ghost)      [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:  0%    244816 (mfu_ghost)    [ Frequent Customer Evicted, Now Back ]
        CACHE HITS BY DATA TYPE:
          Demand Data:                 1%    4369362
          Prefetch Data:               0%    150148
          Demand Metadata:            76%    267827520
          Prefetch Metadata:          22%    77994462
        CACHE MISSES BY DATA TYPE:
          Demand Data:                 1%    135954
          Prefetch Data:               4%    400434
          Demand Metadata:            73%    6177710
          Prefetch Metadata:          19%    1655546
---------------------------------------------
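For comparison with the arc_summary.pl figures above, the overall hit
ratio can be recomputed directly from the raw counters, e.g. like this
(a minimal sketch; it uses only the kstat.zfs.misc.arcstats sysctls
already shown):

#!/bin/sh
# Sketch: overall ARC hit ratio = hits / (hits + misses), the same
# figure arc_summary.pl reports as "Cache Hit Ratio".
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
echo "$hits $misses" | awk '{ printf "ARC hit ratio: %.2f%%\n", 100 * $1 / ($1 + $2) }'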