Date: Tue, 23 Jun 2015 10:17:12 -0500
From: Linda Kateley <lkateley@kateley.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS raid write performance?
Message-ID: <55897878.30708@kateley.com>
In-Reply-To: <alpine.GSO.2.01.1506230812550.4186@freddy.simplesystems.org>
References: <5587C3FF.9070407@sneakertech.com> <5587C97F.2000407@delphij.net>
 <55887810.3080301@sneakertech.com> <20150622221422.GA71520@neutralgood.org>
 <55888E0D.6040704@sneakertech.com> <20150623002854.GB96928@neutralgood.org>
 <5588D291.4030806@sneakertech.com> <20150623042234.GA66734@neutralgood.org>
 <alpine.GSO.2.01.1506230812550.4186@freddy.simplesystems.org>
Is it possible that the suggestion for the "landing pad" could be
recommending a smaller SSD pool, then replicating back to a slower pool?
I actually use that kind of architecture once in a while, especially for
uses like large CAD drawings, where there is a tendency to work on one
big file at a time. With lower costs and higher densities of SSDs, this
is a nice way to use them.

On 6/23/15 8:32 AM, Bob Friesenhahn wrote:
> On Tue, 23 Jun 2015, kpneal@pobox.com wrote:
>>
>> When I was testing read speeds I tarred up a tree that was 700+GB in
>> size on a server with 64GB of memory.
>
> Tar (and cpio) are only single-threaded. They open and read input
> files one by one. ZFS's read-ahead algorithm ramps up the amount of
> read-ahead each time the program goes to read data and it is not
> already in memory. Due to this ramp-up, input file size has a
> significant impact on the apparent read performance. The ramp-up
> occurs on a per-file basis. Large files (still much smaller than RAM)
> will produce a higher data rate than small files. If read requests
> are pending for several files at once (or several read requests for
> different parts of the same file), then the observed data rate will
> be higher.
>
> Tar/cpio read tests are often impacted more by disk latencies and ZFS
> read-ahead algorithms than by the peak performance of the data path. A
> very large server with many disks may produce timings similar to those
> of a very small server.
>
> Long ago I wrote a test script
> (http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh)
> which was intended to expose a ZFS bug existing at that time, but it is
> still a very useful test of ZFS caching and read-ahead, since it measures
> initial sequential read performance from a filesystem. This script was
> written for Solaris and might need some small adaptation to be used
> on FreeBSD.
>
> Extracting a tar file (particularly on a network client) is a very
> interesting test of network server write performance.
>
> Bob

-- 
Linda Kateley
Kateley Company
Skype ID-kateleyco
http://kateleyco.com
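[A minimal sketch of the "landing pad" replication Linda describes: active work
on a small SSD pool, periodically replicated back to a larger, slower pool with
zfs send/receive. The pool and dataset names (fastpool, slowpool, cad) and the
snapshot naming are assumptions for illustration, not from this thread.]

```shell
#!/bin/sh
# Landing-pad sketch: snapshot the SSD-backed working dataset, then
# replicate it incrementally to the slow archive pool.
# Pool/dataset names here are hypothetical.
# ZFS defaults to "echo zfs" so the commands only print; set ZFS=zfs
# (or the full path to zfs(8)) to actually run them against live pools.
ZFS="${ZFS:-echo zfs}"

SNAP="cad@landing-$(date +%Y%m%d)"

# 1. Snapshot the working dataset on the fast SSD pool.
$ZFS snapshot "fastpool/$SNAP"

# 2. Incrementally send it to the slow pool (the very first replication
#    would be a full send, without -i).
$ZFS send -i "fastpool/cad@landing-prev" "fastpool/$SNAP" |
  $ZFS receive -F "slowpool/cad-archive"
```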
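[Bob's point about tar being single-threaded suggests a crude way to observe
per-file read-ahead ramp-up: time a single-stream tar of a tree to /dev/null.
This is a rough stand-in in the spirit of his zfs-cache-test.ksh, not that
script itself; the pool and path names in the usage comment are hypothetical.]

```shell
#!/bin/sh
# Single-stream sequential read test: tar opens and reads files one by
# one, so per-file read-ahead ramp-up dominates the result rather than
# aggregate disk bandwidth.
seqread_test() {
    # Read the whole tree sequentially, discarding the archive.
    tar cf /dev/null "$1"
}

# Example usage (hypothetical pool): export and re-import the pool first
# so the ARC is cold, otherwise you mostly measure cache, then time it:
#   zpool export tank && zpool import tank
#   time seqread_test /tank/bigtree
```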