From: Dmitrijs <war@dim.lv>
To: freebsd-questions@freebsd.org
Subject: zfs performance degradation
Date: Tue, 22 Sep 2015 20:38:25 +0300

Good afternoon,

I've encountered strange ZFS behaviour: serious performance degradation over a few days. Right after setup, on a fresh ZFS pool (2 HDDs in a mirror), I ran a read test on a 30 GB file with dd:

dd if=test.mkv of=/dev/null bs=64k

and got 150+ MB/s. Today I get only ~90 MB/s. I've tested with different block sizes, many times; the speed seems stable within +-5%:

nas4free: divx# dd if=test.mkv of=/dev/null bs=64k
484486+1 records in
484486+1 records out
31751303111 bytes transferred in 349.423294 secs (90867734 bytes/sec)

System details:

nas4free: /mnt# uname -a
FreeBSD nas4free.local 10.2-RELEASE-p2 FreeBSD 10.2-RELEASE-p2 #0 r287260M: Fri Aug 28 18:38:18 CEST 2015 root@dev.nas4free.org:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64 amd64

RAM: 4 GB

Pool data4 is on two brand-new HGST HDN724040ALE640 drives (4 TB, 7200 rpm; ada0, ada1). Another pool, data2, performs slightly better (up to 99 MB/s) even on older/cheaper WD Green 5400 rpm HDDs.
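To rule out the ARC serving (or failing to serve) these reads from cache, I could compare the hit/miss counters before and after a dd run, and watch per-pool throughput while it runs. A sketch, assuming the standard FreeBSD ZFS kstat sysctls and zpool iostat; I can post the actual values on request:

nas4free: /mnt# sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
nas4free: /mnt# zpool iostat -v data4 1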
nas4free: /mnt# zpool status
  pool: data2
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data2       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

  pool: data4
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data4       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0

errors: No known data errors

While dd is running, gstat shows something like:

dT: 1.002s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    366    366  46648    1.1      0      0    0.0   39.6| ada0
    1    432    432  54841    1.0      0      0    0.0   45.1| ada1

so the IOPS are quite high, while %busy stays fairly low: it averages about 50%, with rare peaks to 85-90%.

Even top shows no significant load:

last pid: 61983;  load averages:  0.44, 0.34, 0.37   up 11+07:51:31  16:44:56
40 processes:  1 running, 39 sleeping
CPU:  0.3% user,  0.0% nice,  6.4% system,  1.1% interrupt, 92.1% idle
Mem: 21M Active, 397M Inact, 2101M Wired, 56M Cache, 94M Buf, 1044M Free
ARC: 1024M Total, 232M MFU, 692M MRU, 160K Anon, 9201K Header, 91M Other
Swap: 4096M Total, 4096M Free

Not displaying idle processes.

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
61981 root        1  30    0 12364K  2084K zio->i  3   0:09  18.80% dd
61966 root        1  22    0 58392K  7144K select  3   0:24   3.86% proftpd

zpool list:

nas4free: /mnt# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data2  1.81T   578G  1.25T         -    11%    31%  1.00x  ONLINE  -
data4  3.62T  2.85T   797G         -    36%    78%  1.00x  ONLINE  -

Could this happen because the pool is 78% full? Does that mean I can't fill pools up? Can anyone please advise how I could fix the situation, or is this normal? I've googled a lot about maxvnodes/minvnodes tuning, but the advice is mostly contradictory and didn't help. I can provide additional system output on request.

best regards,
Dmitry
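P.S. If more data would help with the "78% full / fragmentation" theory, I can also post per-vdev capacity and fragmentation figures, plus the dataset properties that affect sequential reads. A sketch, assuming the standard zpool/zfs commands:

nas4free: /mnt# zpool list -v data4
nas4free: /mnt# zfs get recordsize,compression,atime data4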