From: Linda Kateley <lkateley@kateley.com>
Date: Tue, 23 Jun 2015 10:17:12 -0500
To: freebsd-fs@freebsd.org
Subject: Re: ZFS raid write performance?
Message-ID: <55897878.30708@kateley.com>

Is it possible that the suggestion for the "landing pad" could be
recommending a smaller SSD pool, then replicating back to a slower
pool? I actually do that kind of architecture once in a while,
especially for uses like large CAD drawings, where there is a tendency
to work on one big file at a time. With the lower costs and higher
densities of SSDs, this is a nice way to use them.
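Something along these lines, for instance: snapshot the working dataset
on the SSD pool and send it back to the slower pool on a schedule. This
is only a rough sketch with made-up pool and dataset names (fastpool/cad
and slowpool/cad), not a tested script:

    #!/bin/sh
    # Rough sketch of the "landing pad" idea: active work lives on a
    # small, fast SSD pool (fastpool) and is replicated back to a
    # larger, slower pool (slowpool). Names here are hypothetical.

    STAMP=$(date +%Y%m%d-%H%M)

    # Snapshot the working dataset on the SSD pool.
    zfs snapshot fastpool/cad@"$STAMP"

    # Find the previous snapshot (if any) so we can send incrementally.
    PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 fastpool/cad \
        | tail -2 | head -1)

    if [ -n "$PREV" ] && [ "$PREV" != "fastpool/cad@$STAMP" ]; then
        # Incremental send of only the changes since the last snapshot.
        zfs send -i "$PREV" fastpool/cad@"$STAMP" | zfs recv -Fu slowpool/cad
    else
        # First run: no earlier snapshot exists, so do a full send.
        zfs send fastpool/cad@"$STAMP" | zfs recv -Fu slowpool/cad
    fi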
On 6/23/15 8:32 AM, Bob Friesenhahn wrote:
> On Tue, 23 Jun 2015, kpneal@pobox.com wrote:
>>
>> When I was testing read speeds I tarred up a tree that was 700+GB in
>> size on a server with 64GB of memory.
>
> Tar (and cpio) are only single-threaded. They open and read input
> files one by one. Zfs's read-ahead algorithm ramps up the amount of
> read-ahead each time the program goes to read data that is not
> already in memory. Due to this ramp-up, input file size has a
> significant impact on the apparent read performance; the ramp-up
> occurs on a per-file basis. Large files (still much smaller than RAM)
> will produce a higher data rate than small files. If read requests
> are pending for several files at once (or several read requests for
> different parts of the same file), then the observed data rate will
> be higher.
>
> Tar/cpio read tests are often impacted more by disk latencies and zfs
> read-ahead algorithms than by the peak performance of the data path.
> A very large server with many disks may produce timings similar to
> those of a very small server.
>
> Long ago I wrote a test script
> (http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh)
> which was intended to expose a zfs bug that existed at the time, but
> it is still a very useful test of zfs caching and read-ahead because
> it measures initial sequential read performance from a filesystem.
> The script was written for Solaris and might need some small
> adaptation to be used on FreeBSD.
>
> Extracting a tar file (particularly on a network client) is a very
> interesting test of network server write performance.
>
> Bob

-- 
Linda Kateley
Kateley Company
Skype ID-kateleyco
http://kateleyco.com