From: krad <kraduk@gmail.com>
To: kpneal@pobox.com
Cc: Quartz, FreeBSD FS <freebsd-fs@freebsd.org>
Date: Mon, 22 Jun 2015 13:21:55 +0100
Subject: Re: ZFS raid write performance?

Also ask yourself how big the data transfer is going to be. If it's only a
few gigs or tens of gigs at a time, and not streaming, you could well find
it's all dumped to the RAM on the box anyhow before it's committed to the
disk. With regard to 10k disks, be careful there: newer 7200 rpm disks with
higher-capacity platters might give better throughput due to the higher
data density.

On 22 June 2015 at 13:13, <kpneal@pobox.com> wrote:

> On Mon, Jun 22, 2015 at 04:14:55AM -0400, Quartz wrote:
> > What's sequential write performance like these days for ZFS raidzX?
> > Someone suggested to me that I set up a single non-RAID disk to act as
> > a fast 'landing pad' for receiving files, then move them to the pool
> > later in the background. Is that actually necessary? (Assume generic
> > SATA drives, 250 MB-4 GB files, and transfers across a LAN using a
> > single unbonded GigE link.)
>
> Tests were posted to ZFS lists a few years ago. That was a while ago, but
> at a fundamental level ZFS hasn't changed since then, so the results
> should still be valid.
>
> For both reads and writes, all levels of raidz* perform slightly faster
> than the speed of a single drive. _Slightly_ faster, like the speed of
> a single drive * 1.1 or so, roughly speaking.
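
For a rough feel for those numbers on a specific box, one simple sanity
check is to write a big file with compression off and watch the per-disk
throughput. This is only a sketch; the pool, dataset and disk names below
are made up, and the vdev layout should match whatever you actually want
to test:

    # zpool create tank raidz2 da1 da2 da3 da4 da5 da6
    # zfs create -o compression=off tank/scratch
    # dd if=/dev/zero of=/tank/scratch/bigfile bs=1m count=8192
    # zpool iostat -v tank 5

Compression has to stay off for this to mean anything: with it on,
/dev/zero compresses away to nearly nothing and the test mostly measures
memory bandwidth rather than the disks.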
> For mirrors, writes perform about the same as a single drive, and as more
> drives are added they get slightly worse. But reads scale pretty well as
> you add drives, because reads can be spread across all the drives in the
> mirror in parallel.
>
> Having multiple vdevs helps because ZFS does striping across the vdevs.
> However, this striping only happens with writes that are done _after_ the
> new vdevs are added. There is no rebalancing of data after new vdevs are
> added, so adding new vdevs won't change the read performance of data
> already on disk.
>
> ZFS does try to stripe across vdevs, but if your old vdevs are nearly
> full then adding new ones results in data mostly going to the new, nearly
> empty vdevs. So if you only added a single new vdev to expand the pool,
> then you'll see write performance roughly equal to the performance of
> that single vdev.
>
> Rebalancing can be done roughly with "zfs send | zfs receive". If you do
> this enough times, and destroy the old, sent datasets after each
> iteration, then you can to some extent rebalance a pool. You won't
> achieve a perfect rebalance, though.
>
> We can thank Oracle for the destruction of the archives at sun.com, which
> made it pretty darn difficult to find those posts.
>
> Finally, single GigE is _slow_. I see no point in a "landing pad" when
> using unbonded GigE.
>
> --
> Kevin P. Neal                                http://www.pobox.com/~kpn/
>
> Seen on bottom of IBM part number 1887724:
> DO NOT EXPOSE MOUSE PAD TO DIRECT SUNLIGHT FOR EXTENDED PERIODS OF TIME.
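
To put the "zfs send | zfs receive" rebalancing idea above into concrete
terms, a minimal sketch might look like the following. The pool and
dataset names are placeholders, it assumes nothing is writing to the
dataset while the copy runs, and it needs enough free space to hold a
second copy of the data until the old one is destroyed:

    # zfs snapshot tank/data@rebalance
    # zfs send tank/data@rebalance | zfs receive tank/data.new
    # zfs destroy -r tank/data
    # zfs rename tank/data.new tank/data
    # zfs destroy tank/data@rebalance

Because the receive rewrites every block, the copy is laid out across the
vdevs as they exist now, and destroying the old copy frees space on the
old vdevs for later passes. Repeating this per dataset gets the pool
closer to an even spread, though, as noted above, never a perfect one.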