From owner-freebsd-fs@FreeBSD.ORG Tue Jun 15 13:21:47 2010
Message-ID: <4C177E69.3020204@gibfest.dk>
Date: Tue, 15 Jun 2010 15:21:45 +0200
From: Thomas Steen Rasmussen <thomas@gibfest.dk>
To: freebsd-fs@freebsd.org
Subject: ZFS l2arc and HAST ? newbie question

Hello list,

I am playing with HAST in order to build some redundant storage for a
mailserver, using ZFS as the filesystem.
I have the following zpool layout before starting the HAST experiments:

	NAME                STATE     READ WRITE CKSUM
	tank                ONLINE       0     0     0
	  raidz2            ONLINE       0     0     0
	    label/hd4       ONLINE       0     0     0
	    label/hd5       ONLINE       0     0     0
	    label/hd6       ONLINE       0     0     0
	    label/hd7       ONLINE       0     0     0
	logs                ONLINE       0     0     0
	  mirror            ONLINE       0     0     0
	    label/ssd0s1    ONLINE       0     0     0
	    label/ssd1s1    ONLINE       0     0     0
	cache
	  label/ssd0s2      ONLINE       0     0     0
	  label/ssd1s2      ONLINE       0     0     0

As I understand it, to accomplish this with HAST I will need to make a
HAST resource for each physical disk, like so:

	NAME                STATE     READ WRITE CKSUM
	tank                ONLINE       0     0     0
	  raidz2            ONLINE       0     0     0
	    hast/hahd4      ONLINE       0     0     0
	    hast/hahd5      ONLINE       0     0     0
	    hast/hahd6      ONLINE       0     0     0
	    hast/hahd7      ONLINE       0     0     0

But what about the slog and cache devices, which are currently on SSDs
for performance reasons? It doesn't really make sense to synchronize a
cache disk over the network, does it? Could I build the zpool with the
SSDs directly (without HAST), and would ZFS survive an export/import on
the other host when the cache disks are suddenly different? I am
thinking cache only here, not slog.

Do SSD l2arc / slog devices even make sense when I am "deliberately"
slowing down the filesystem with network redundancy anyway?

Oh, and are there any problems with using labels for HAST devices? My
controller likes to give new device names to disks now and then, and it
has been a blessing to use labels instead of device names, so I'd like
to continue doing that when using HAST.

If needed, any testing on my part will unfortunately have to wait a
couple of days for the MFC of the HAST fix from yesterday, as the SEQ
issue is preventing me from further experiments with HAST for now.

Thank you for any input, and _THANK YOU_ for the work on both ZFS and
HAST; their combined awesomeness is reaching epic proportions.

Best regards,
Thomas Steen Rasmussen
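P.S. For the record, here is a minimal sketch of what I imagine the
per-disk HAST configuration would look like (one resource per physical
disk, keyed on the labels). The hostnames and addresses (hosta/hostb,
10.0.0.1/10.0.0.2) are placeholders, and I have only shown one of the
four resources:

```
# /etc/hast.conf -- one resource per physical disk, referenced by label
resource hahd4 {
	on hosta {
		local /dev/label/hd4
		remote 10.0.0.2
	}
	on hostb {
		local /dev/label/hd4
		remote 10.0.0.1
	}
}
# ... hahd5, hahd6, hahd7 defined the same way ...
```

The idea being: run `hastctl create hahd4` (etc.) on both nodes, set
`hastctl role primary` on the active node, and then build the pool from
the providers with something like
`zpool create tank raidz2 hast/hahd4 hast/hahd5 hast/hahd6 hast/hahd7`,
while the log and cache devices would be added directly from the local
SSD labels (without HAST) with `zpool add tank log mirror label/ssd0s1
label/ssd1s1` and `zpool add tank cache label/ssd0s2 label/ssd1s2` --
if that is even a sane thing to do, which is what I am asking above.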