From: Daniel Kalchev <daniel@digsys.bg>
Date: Tue, 05 Apr 2011 18:00:50 +0300
To: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject: ZFS HAST config preference

This is more of a proof-of-concept question: I am building a redundant
cluster of blade servers and am toying with the idea of using HAST and
ZFS for the storage. The blades will work in pairs, and each pair will
provide various services, from SQL databases to hosting virtual
machines (jails and otherwise). Each pair will use CARP for redundancy.

My original idea was to set the blades up so that they run HAST on
pairs of disks, with ZFS as a number of mirror vdevs on top of the
HAST devices. The ZFS pool would exist only on the master HAST node.
Let's call this setup1.

Alternatively, I could use ZFS volumes and run HAST on top of those.
This means that each blade would have a local ZFS pool. Let's call
this setup2.

A third idea would be to make the blades completely diskless: they
boot from a separate boot/storage server and mount filesystems or
iSCSI volumes as needed from it. HAST might not be necessary here. The
ZFS pool would exist on the storage server only. Let's call this
setup3.

While setup1 is the most straightforward, it has some drawbacks:

- disks handled by HAST need to be either identical or have matching
  partitions created;
- the 'spare' blade would do nothing, as its disk subsystem is
  unavailable for as long as it is the HAST slave. As the blades are
  quite powerful (4x 8-core AMD), that would be wasteful, at least in
  the beginning.

With setup2, I can get away with disks of different sizes in each
blade. All blades can also be used for whatever additional processing;
only the shared data for the "important" services is presented by HAST
to whichever node needs it. One drawback here:

- I can't just pull one of the blades without first stopping or
  transferring all of its services.

It seems that at a larger scale setup3 would be best. I am not there
yet, although close (the storage server is missing). HAST replication
speed should not be an issue; there is a 10Gbit network between the
blade servers.

Has anyone already set up something similar? What was the experience?
There were recently some bugs that plagued setup1, but these seem to
be resolved now.

I have appended rough, untested configuration sketches for the three
setups and the CARP failover glue below my signature.

Daniel
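P.S. Here is roughly what I have in mind for setup1. This is a sketch
only: the hostnames (blade-a/blade-b), disk devices, resource names
and pool name are placeholders, and I have not tested these exact
files.

    # /etc/hast.conf, identical on both blades; each resource pairs a
    # local disk with the same disk on the other node
    resource disk0 {
            on blade-a {
                    local /dev/da0
                    remote blade-b
            }
            on blade-b {
                    local /dev/da0
                    remote blade-a
            }
    }
    resource disk1 {
            on blade-a {
                    local /dev/da1
                    remote blade-b
            }
            on blade-b {
                    local /dev/da1
                    remote blade-a
            }
    }

    # once on each node: initialize HAST metadata and start hastd
    # (with hastd_enable="YES" in /etc/rc.conf)
    hastctl create disk0
    hastctl create disk1
    service hastd start

    # on the designated master only: become primary, then build the
    # pool out of the /dev/hast/* providers (one mirror vdev here;
    # more disk pairs would simply add more mirror vdevs)
    hastctl role primary all
    zpool create tank mirror hast/disk0 hast/disk1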
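For setup2, each blade keeps a local pool built from whatever disks it
happens to have, and HAST replicates a zvol carved out of that pool.
Again only a sketch; the pool name, volume size and resource name are
made up, and the zvols must be the same size on both blades:

    # on each blade: a local pool and a zvol to hand to HAST
    zpool create lpool mirror da0 da1
    zfs create -V 200G lpool/hastvol

    # /etc/hast.conf: the zvol is the local provider for HAST
    resource shared0 {
            on blade-a {
                    local /dev/zvol/lpool/hastvol
                    remote blade-b
            }
            on blade-b {
                    local /dev/zvol/lpool/hastvol
                    remote blade-a
            }
    }

    # on the current master: the replicated device appears as
    # /dev/hast/shared0; since ZFS already sits underneath, I would
    # probably put UFS on top of it rather than another pool
    hastctl role primary shared0
    newfs -U /dev/hast/shared0
    mount /dev/hast/shared0 /shared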
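For setup3, the storage server would export zvols over iSCSI. On 8.x I
assume that means a target from ports (net/istgt, say) on the storage
server and the iscsi_initiator(4)/iscontrol(8) pair on the blades. A
sketch of the initiator side only; the target name and address are
invented:

    # /etc/iscsi.conf on a blade
    blade1disk {
            targetaddress = 10.0.0.100
            targetname    = iqn.2011-04.bg.digsys:storage0.blade1
    }

    # attach the session; the LUN shows up as a new da(4) device
    kldload iscsi_initiator
    iscontrol -c /etc/iscsi.conf -n blade1disk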
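Finally, the CARP glue for setup1: the carp(4) cloned interface in
rc.conf, plus a small failover script I would hang off devd(8) on CARP
state transitions. The script path, password, addresses and pool name
are placeholders:

    # /etc/rc.conf on blade-a (same on blade-b, but with a higher
    # advskew so it prefers the backup role)
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 advskew 0 pass secret 10.0.0.50/24"

    #!/bin/sh
    # /usr/local/sbin/carp-hast-switch (sketch): move the HAST role
    # and the ZFS pool along with the CARP master role
    case "$1" in
    master)
            hastctl role primary all
            sleep 2                    # let /dev/hast/* appear
            zpool import -f tank
            ;;
    slave)
            zpool export tank
            hastctl role secondary all
            ;;
    esac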