From: Paul Kraus <paul@kraus-haus.org>
To: Dmitrijs, FreeBSD Questions <freebsd-questions@freebsd.org>
Date: Wed, 23 Sep 2015 16:08:32 -0400
Subject: Re: zfs performance degradation

On Sep 22, 2015, at 13:38, Dmitrijs wrote:

> I've encountered strange ZFS behavior - serious performance degradation over a few days. Right after setup on a fresh ZFS pool (2 HDDs in a mirror) I ran a test on a 30GB file with dd, like
> dd if=test.mkv of=/dev/null bs=64k
> and got 150+ MB/s.
> I've got brand new 2x HGST HDN724040ALE640, 4TB, 7200rpm (ada0, ada1) for pool data4.
> Another pool, data2, performs slightly better even on older/cheaper WD Green 5400rpm HDDs, up to 99 MB/s.
> Zpool list:
>
> nas4free: /mnt# zpool list
> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> data2  1.81T   578G  1.25T         -    11%    31%  1.00x  ONLINE  -
> data4  3.62T  2.85T   797G         -    36%    78%  1.00x  ONLINE  -
>
> Could it happen because of the pool being 78% full? So I cannot fill the pool full?
> Can anyone please advise how I could fix the situation - or is it normal?
ZFS write performance degrades very steeply when you reach a certain point in terms of zpool capacity. The exact threshold depends on many factors, including your specific workload. This is essentially due to the "Copy on Write" (CoW) nature of ZFS. When you write to an existing file, ZFS needs to find space for that write operation, as it does not overwrite the existing data. As the zpool fills, it becomes harder and harder to find contiguous free space, and the write operation ends up fragmenting the data.

But you are seeing READ performance drop. If the file was written when the zpool was new (it was one of the first files written), then it is certainly un-fragmented. But if you ran the READ test shortly after writing the file, then some of it will still be in the ARC (Adaptive Replacement Cache). If there is other activity on the system, then that other activity will also be using the ARC.

If you are rewriting the test file and then reading it, the test file will be fragmented, and that will be part of the performance difference.

For my systems (generally VMs using VBox) I have found that 80% is a good threshold, because when I get to 85% capacity the performance drops to the point where VM I/O starts timing out.

So the short answer (way too late for that) is that you cannot, in fact, use all of the capacity of a zpool unless the data is written once, never modified, and you do not have any snapshots, clones, or the like.

P.S. I assume you are not using DeDupe? You do not have anywhere near enough RAM for that.
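P.P.S. If you want to check the numbers behind all of this yourself, something along these lines should work on nas4free / FreeBSD (untested here, so treat it as a sketch and adjust the pool names to match yours):

    # capacity and fragmentation per pool
    zpool list -o name,size,allocated,free,fragmentation,capacity data2 data4

    # current and maximum ARC size, in bytes
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

    # confirm dedup really is off on the pool in question
    zfs get -r dedup data4

If the 30GB test file is not much larger than the ARC, re-reading it right after writing it tells you more about your RAM than about the disks; reading a file well larger than the ARC, or re-running the test after an export/import of the pool, gives a more honest number.

--
Paul Kraus
paul@kraus-haus.org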