From: Paul Kraus <paul@kraus-haus.org>
Subject: Re: zfs performance degradation
Date: Thu, 24 Sep 2015 10:40:33 -0400
To: Dmitrijs, FreeBSD Questions <freebsd-questions@freebsd.org>

On Sep 24, 2015, at 9:57, Dmitrijs wrote:

>> So a zpool made up of one single vdev, no matter how many drives, will average the performance of one of those drives. It does not really matter if it is a 2-way mirror vdev, a 3-way mirror vdev, a RAIDz2 vdev, a RAIDz3 vdev, etc. This is more true for write operations than for reads (mirrors can achieve higher performance by reading from multiple copies at once).
>
> Thanks! Now I understand. Although it is strange that you did not mention how RAM and/or CPU matter. Or do they? I have started observing that my 4-core Celeron J1900 is throttling writes.

Do you have compression turned on? I have only seen ZFS limited by CPU (assuming a relatively modern CPU) when using compression. If you are using compression, make sure it is lz4 and not just "on".

RAM affects performance in that pending (async) writes are cached in the ARC. The ARC also caches both demand read data and prefetched read data. There are a number of utilities out there to give you visibility into the ARC; `sysctl -a | grep arcstats` will get you the raw data :-)

When you benchmark you _must_ use a test set of data that is larger than your RAM, or you will not be testing all the way to / from the drives :-) That, or artificially reduce the size of the ARC (set vfs.zfs.arc_max="" in /boot/loader.conf).
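For example, to cap the ARC for a test run, something like this in /boot/loader.conf (the 4 GB figure is just an illustration, pick a value smaller than your test data set) and then reboot:

    # example only: cap the ARC at 4 GB (value is in bytes)
    vfs.zfs.arc_max="4294967296"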
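A few of the raw arcstats counters worth watching while a benchmark runs (sysctl names as found on a FreeBSD 10.x box):

    sysctl kstat.zfs.misc.arcstats.size    # current ARC size
    sysctl kstat.zfs.misc.arcstats.c_max   # configured ARC ceiling
    sysctl kstat.zfs.misc.arcstats.hits    # reads served from the ARC
    sysctl kstat.zfs.misc.arcstats.misses  # reads that went to disk

If hits dwarf misses during your test, you are benchmarking RAM, not the drives.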
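And back on the compression question, checking and changing it per dataset is just this ("tank/data" is a placeholder, substitute your own pool/dataset):

    zfs get compression tank/data
    zfs set compression=lz4 tank/data

Keep in mind that `zfs set compression` only affects blocks written after the change; existing data stays as it was written.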
> Still haven't found at least an approximate specification/recommendation as simple as "if you need a zfs mirror of 2 drives, take at least a Core i3 or E3 processor; for 10 drives, go for an E5 Xeon; etc.". I did not notice a CPU impact on the Windows machine, yet I still get "load averages: 1.60, 1.43, 1.24" on writes to zfs.

How many cores / threads? As long as you have more cores / threads than the load value you are NOT out of CPU resources, but you may be saturating ONE CPU with compression or some other single-threaded function.

I have been using HP ProLiant MicroServer N36L, N40L, and N54L boxes as small file servers and I am only occasionally CPU limited. But my workload on those boxes is very different from yours.

My backup server is a SuperMicro with dual Xeon E5520 (16 total threads) and 12 GB RAM. I can handily saturate my single 1 Gbps network. I have compression (lz4) enabled on all datasets.

--
Paul Kraus
paul@kraus-haus.org