From owner-freebsd-fs@freebsd.org Mon Oct 12 18:39:18 2015
Date: Mon, 12 Oct 2015 13:27:48 -0500 (CDT)
From: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To: Quartz
Cc: FreeBSD FS <freebsd-fs@freebsd.org>
Subject: Re: A couple ZFS questions
In-Reply-To: <56174374.1040609@sneakertech.com>
References: <56174374.1040609@sneakertech.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

On Fri, 9 Oct 2015, Quartz wrote:

> Inside a thread on -questions, it was asked if it was a bad idea to have
> a ZFS array that spanned different controllers (i.e., motherboard SATA +
> PCIe SATA). I answered that AFAIK it was OK as long as the speed of the
> onboard ports+drives and card+drives isn't drastically different and the
> drives are the same. But it occurred to me that maybe that's not true
> [anymore]. Can anyone with more hardware knowledge chime in?

Different controllers should not be a problem. Keep in mind that vdev
performance is driven by the slowest device in the vdev. If you have
multiple vdevs, overall performance is improved by putting devices of
similar performance in each vdev, since zfs will load-share across the
vdevs, taking pending requests, observed performance, and how full each
vdev is into account.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
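
[Editor's note: a minimal sketch of the kind of layout discussed above.
The device names (ada0/ada1 on the onboard SATA ports, da0/da1 on a PCIe
HBA) and the pool name "tank" are hypothetical; the point is only that
each vdev pairs devices of similar performance, even though the pool as a
whole spans two controllers.]

    # Hypothetical pool: two mirror vdevs, each pairing one onboard-SATA
    # disk with one disk on the PCIe HBA. Devices of similar speed share
    # a vdev, and ZFS load-shares writes across the two mirrors.
    zpool create tank mirror ada0 da0 mirror ada1 da1

    # Watch how I/O is actually distributed per vdev and per disk.
    zpool iostat -v tank 5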