From: Warren Block <wblock@wonkity.com>
To: aurfalien
Cc: freebsd-questions@freebsd.org, Shane Ambler
Date: Tue, 16 Jul 2013 12:42:14 -0600 (MDT)
Subject: Re: to gmirror or to ZFS

On Tue, 16 Jul 2013, aurfalien wrote:

> On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:
>>
>> I doubt that you would save any RAM by putting the OS on a non-ZFS
>> drive. Since you will already be using ZFS, chances are that non-ZFS
>> drives would only increase RAM usage by adding a second cache.
ZFS uses its own cache
>> system (the ARC) and isn't going to share its cache with other
>> system-managed drives. I'm not actually certain whether the system
>> cache still sits above the ZFS cache or not; I think I read that it
>> bypasses the traditional drive cache.
>>
>> For the ZFS cache you can set the maximum usage by adjusting
>> vfs.zfs.arc_max. That is a system-wide setting and isn't going to
>> increase if you have two zpools.
>>
>> Tip: set the arc_max value. By default ZFS will use all physical RAM
>> for cache; set it to be sure you have enough RAM left for any
>> services you want running.
>>
>> Have you considered using one or both SSD drives with ZFS? They can
>> be added as cache or log devices to help performance.
>> See man zpool under Intent Log and Cache Devices.
>
> This is a very interesting point.
>
> In terms of SSDs for cache, I was planning on using a pair of Samsung
> Pro 512GB SSDs for this purpose (which I haven't bought yet).
>
> But I tire of buying stuff, so I have a pair of 40GB Intel SSDs for
> use as system disks and several Intel 160GB SSDs lying around that I
> can combine with the existing 256GB SSDs for a cache.
>
> Then use my 36x3TB for the beasty NAS.

Agreed that 256G mirrored SSDs are kind of wasted as system drives.
The 40G mirror sounds ideal.
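For reference, a quick sketch of the tunable and zpool commands
discussed above. The pool name "tank", the device names, and the 4 GB
cap are placeholders; see zpool(8) and loader.conf(5) for details.

```shell
# /boot/loader.conf -- cap the ARC (example value; takes effect on reboot):
#   vfs.zfs.arc_max="4G"

# Add one SSD as an L2ARC cache device to an existing pool "tank":
zpool add tank cache /dev/da1

# Add a pair of SSDs as a mirrored intent log (ZIL):
zpool add tank log mirror /dev/da2 /dev/da3

# Verify the new vdev layout:
zpool status tank
```

Losing a cache device is harmless, but a non-mirrored log device can
cost you the most recent synchronous writes, which is why the log
example above uses a mirror.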