From owner-freebsd-questions@freebsd.org Thu Sep 24 15:11:51 2015
Subject: Re: zfs performance degradation
From: Dmitrijs <war@dim.lv>
To: Paul Kraus, FreeBSD Questions
Date: Thu, 24 Sep 2015 18:11:46 +0300

2015.09.24. 17:40, Paul Kraus wrote:
> Do you have compression turned on? I have only seen ZFS limited by CPU
> (assuming a relatively modern CPU) when using compression. If you are using
> compression, make sure it is lz4 and not just "on". RAM affects performance
> in that pending (async) writes are cached in the ARC. The ARC also caches
> both demand-read data and prefetched read data. There are a number of
> utilities out there to give you visibility into the ARC;
> `sysctl -a | grep arcstats` will get you the raw data :-) When you
> benchmark you _must_ use a test set of data that is larger than your RAM,
> or you will not be testing all the way to/from the drives :-) That, or
> artificially reduce the size of the ARC (set vfs.zfs.arc_max="" in
> /boot/loader.conf).

Nope: no compression, no deduplication, just plain ZFS.
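For anyone reading along in the archives, the kind of test Paul describes would look roughly like this (the path and sizes below are only placeholders; the point is that on the real pool the file must be larger than RAM, e.g. 40 GB on my 4 GB box, so reads cannot be served from the ARC):

```shell
# Sequential write, then read back -- sketch only; path and size are placeholders.
# On the real pool the file must exceed RAM (e.g. count=40960 for ~40 GB),
# or artificially cap the ARC via vfs.zfs.arc_max in /boot/loader.conf as Paul suggests.
dd if=/dev/zero of=/tmp/zfs_bench.dat bs=1M count=8    # write throughput
dd if=/tmp/zfs_bench.dat of=/dev/null bs=1M            # read throughput
```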
Prefetch is disabled too, as it is not recommended for machines with 4 GB of
RAM or less. I've tested performance with a 40 GB file on the 4 GB RAM
machine, so the cache should not count for much. I had really hoped to get at
least 1.5x the read performance of a single HDD out of a two-disk mirror, but
it is trickier than that, as you explained.

Now I'm not sure which configuration gives better performance for 4 HDDs:
raid10 (striped mirrors), raidz2, or two separate mirrors? I need some
direction for scaling things up in the future.

>> Still haven't found even an approximate specification/recommendation as
>> simple as "if you need a two-drive ZFS mirror, take at least a Core i3 or
>> E3 processor; for 10 drives, go for an E5 Xeon", etc. I did not notice any
>> CPU impact on the Windows machine, yet I get "load averages: 1.60, 1.43,
>> 1.24" during writes on ZFS.

> How many cores / threads? As long as you have more cores/threads than the
> load value you are NOT out of CPU resources, but you may be saturating ONE
> CPU with compression or some other function.
>
> I have been using HP ProLiant MicroServer N36L, N40L, and N54L boxes as
> small file servers and I am only occasionally CPU limited. But my workload
> on these boxes is very different from yours.
>
> My backup server is a SuperMicro with dual Xeon E5520 (16 threads total)
> and 12 GB RAM. I can handily saturate my single 1 Gbps network link. I have
> compression (lz4) enabled on all datasets.

I've got
http://ark.intel.com/products/78867/Intel-Celeron-Processor-J1900-2M-Cache-up-to-2_42-GHz
and 4 GB of RAM. I thought it would be sufficient, but now I'm in doubt. I
can live with reduced performance on my first NAS, but it would be nice to
have clear performance requirements in mind when planning future storage
boxes. I see QNAP and Synology NASes that use something like a 1 GHz CPU and
1 GB of RAM for 4 HDDs, so either I'm doing it wrong, or those NASes have no
performance (or safety?) at all.
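For reference, the 4-disk layouts I am weighing would be created roughly like this (pool and device names here are just placeholders, not my actual setup):

```shell
# Option 1: striped mirrors ("raid10") -- 50% usable space, best random I/O
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# Option 2: raidz2 -- any two disks may fail; with 4 disks also 50% usable space
zpool create tank raidz2 ada0 ada1 ada2 ada3

# Option 3: two independent mirror pools
zpool create tank1 mirror ada0 ada1
zpool create tank2 mirror ada2 ada3
```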
The HP ProLiant MicroServer is nice, but I built my diskless system 2-3 times
cheaper (200 EUR vs. 530-650 EUR), so I need a reason or recommendation to
spend 2-3x the money on a machine whose specification looks the same.

best regards,
Dmitriy