From: Adam Vande More <amvandemore@gmail.com>
To: Dmitrijs
Cc: FreeBSD Questions <freebsd-questions@freebsd.org>
Date: Thu, 24 Sep 2015 01:07:31 -0500
Subject: Re: zfs performance degradation

On Tue, Sep 22, 2015 at 12:38 PM, Dmitrijs wrote:

> Good afternoon,
>
> I've encountered strange ZFS behavior - serious performance degradation
> over a few days. Right after setup, on a fresh ZFS pool (2 HDDs in a
> mirror), I ran a test on a 30 GB file with dd, like
>
> dd if=test.mkv of=/dev/null bs=64k
>
> and got 150+ MB/s.
>
> Today I got only 90 MB/s; I tested with different block sizes, many
> times, and the speed seems to be stable within +-5%.

I doubt that. Block sizes have a large impact on dd read efficiency
regardless of the filesystem. So unless you were testing the speed of
cached data, there would have been a significant difference between runs
with different block sizes.

> nas4free: divx# dd if=test.mkv of=/dev/null bs=64k
> 484486+1 records in
> 484486+1 records out
> 31751303111 bytes transferred in 349.423294 secs (90867734 bytes/sec)

Perfectly normal for the parameters you've imposed. What happens if you
use bs=1m?
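For comparison, something along these lines would show the block-size
effect (just a sketch; the path below is a guess built from your prompt,
pool name and file name, so adjust it to wherever test.mkv actually
lives):

  # same file, two block sizes; drop cached data between runs (reboot, or
  # export and re-import the pool) so ARC doesn't skew the second pass
  zpool export data4 && zpool import data4
  dd if=/mnt/data4/test.mkv of=/dev/null bs=64k
  zpool export data4 && zpool import data4
  dd if=/mnt/data4/test.mkv of=/dev/null bs=1m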
> Computer/system details:
>
> nas4free: /mnt# uname -a
> FreeBSD nas4free.local 10.2-RELEASE-p2 FreeBSD 10.2-RELEASE-p2 #0
> r287260M: Fri Aug 28 18:38:18 CEST 2015
> root@dev.nas4free.org:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64 amd64
>
> RAM: 4 GB
> I've got brand new 2x HGST HDN724040ALE640, 4 TB, 7200 rpm (ada0, ada1)
> for pool data4.
> Another pool, data2, performs slightly better even on older/cheaper WD
> Green 5400 rpm HDDs, up to 99 MB/s.

What parameters for both are you using here to make this claim?

> While dd is running, gstat shows something like:
>
> dT: 1.002s  w: 1.000s
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    366    366  46648    1.1      0      0    0.0   39.6| ada0
>     1    432    432  54841    1.0      0      0    0.0   45.1| ada1
>
> so iops are very high, while %busy is quite low.

%busy is a misunderstood stat. Do not use it to evaluate whether your
drive is being utilized efficiently. L(q), ops/s and seek times are what
is interesting.

> It averages about 50%, with rare peaks to 85-90%.

Basically as close to perfect as you'll ever get considering how you
invoked dd. ZFS doesn't split sequential reads across a vdev, only across
a pool, and only if multiple vdevs were in the pool when the file was
written.

Your testing methodology is poorly thought out and implemented, or at
least that is how it was presented to us. Testing needs to be a
methodical, repeatable, testable process accounting for all the variables
involved. All I saw was a bunch of haphazard and scattered attempts to
test the sequential read speed of a ZFS mirror. Is that really an accurate
test of the pool's workload? Did you clear caches between tests? Why are
there other daemons like proftpd running during the testing? Etc., ad
nauseam.

--
Adam
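P.S. If you do want a repeatable comparison, the bare-bones procedure
looks roughly like this (a sketch only; data4 and test.mkv come from your
mail, the mount path is a guess, and exporting the pool assumes nothing
else has it busy):

  # quiesce anything else touching the pool first, e.g.
  service proftpd stop

  # for each block size you want to compare, repeat several times:
  # 1) drop cached data by exporting/importing the pool (or reboot)
  zpool export data4
  zpool import data4
  # 2) run the identical read and record the reported throughput
  dd if=/mnt/data4/test.mkv of=/dev/null bs=1m
  # 3) in a second terminal, watch queue depth and per-op service times
  #    rather than %busy
  gstat -f 'ada[01]'

Then compare the medians per block size instead of single runs.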