Date: Mon, 4 Jan 2016 22:06:22 -0500
From: Paul Kraus <paul@kraus-haus.org>
To: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Cc: "Mikhail T." <mi+thun@aldan.algebra.com>, Tom Curry <thomasrcurry@gmail.com>
Subject: Re: NFS reads vs. writes
Message-ID: <D0AC7351-25DC-478C-981E-E32B1F5E353F@kraus-haus.org>
In-Reply-To: <CAGtEZUD28UZDYyHtHtzXgys+rpv_37u4fotwR+qZLc1+tK0dwA@mail.gmail.com>
References: <8291bb85-bd01-4c8c-80f7-2adcf9947366@email.android.com> <5688D3C1.90301@aldan.algebra.com> <495055121.147587416.1451871433217.JavaMail.zimbra@uoguelph.ca> <568A047B.1010000@aldan.algebra.com> <CAGtEZUD28UZDYyHtHtzXgys+rpv_37u4fotwR+qZLc1+tK0dwA@mail.gmail.com>
On Jan 4, 2016, at 18:58, Tom Curry <thomasrcurry@gmail.com> wrote:

> SSDs are so fast for three main reasons: low latency, large DRAM buffers,
> and parallel workloads. Only one of these (latency) is of any benefit as a
> SLOG. Unfortunately that particular metric is not usually advertised in
> consumer SSDs, where the benchmarks used to tout 90,000 random write
> IOPS consist of massively concurrent, highly compressible, short-lived
> bursts of data. Add that drive as a SLOG and the onboard DRAM may as well
> not even exist, and queue depths count for nothing. It will be lucky to
> pull 2,000 IOPS. Once you start adding in ZFS features like checksums and
> compression, or network latency in the case of NFS, that 2,000 number
> starts to drop even more.

I have a file server that I am in the process of optimizing for NFS traffic (to store VM images). My first attempt, because I knew about the need for an SSD-based SLOG for the ZIL, was using a pair of Intel 535 series SSDs. The performance with the SLOG/ZIL on the SSD was _worse_. It turns out that those SSDs have poor small-block (8 KB) random write performance (not well advertised). So I asked for advice on choosing a _fast_ SSD on the OpenZFS list, and a number of people recommended the Intel DC-Sxxxx series of SSDs.

Based on the very thorough data sheets, I am going with a pair of DC-S3710 200 GB SSDs. Once I get them in and configured, I'll post results.

Note that my zpool consists of 5 top-level vdevs, each made up of a 3-way mirror, so I am striping writes across 5 columns. I am using 500 GB WD RE series drives. Leaving the ZIL on the primary vdevs was _faster_ than adding the consumer SSD as a SLOG for NFS writes.

--
Paul Kraus
paul@kraus-haus.org
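For anyone trying to reproduce a layout like the one described above (five 3-way-mirror top-level vdevs, plus a mirrored SLOG pair for the ZIL), the zpool commands would look roughly like the sketch below. The pool name and the da*/ada* device names are illustrative placeholders, not the actual devices in the server being discussed:

```shell
# Five top-level vdevs, each a 3-way mirror of 500 GB drives.
# ZFS stripes writes across all five mirrors.
zpool create tank \
  mirror da0  da1  da2  \
  mirror da3  da4  da5  \
  mirror da6  da7  da8  \
  mirror da9  da10 da11 \
  mirror da12 da13 da14

# Add a mirrored SLOG pair (e.g. the two DC-S3710s); synchronous
# writes (such as NFS COMMITs) then land on this log device instead
# of the ZIL embedded in the primary vdevs.
zpool add tank log mirror ada0 ada1

# Verify the layout, including the separate log vdev.
zpool status tank
```

If the SLOG devices turn out to be slower at QD1 small-block sync writes than the striped primary vdevs (as happened with the Intel 535s here), removing the log vdev returns the ZIL to the pool's main disks.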