From owner-freebsd-stable@FreeBSD.ORG Wed Mar 24 17:12:56 2010
Message-ID: <4BAA4812.8070307@ionic.co.uk>
Date: Wed, 24 Mar 2010 17:12:50 +0000
From: Michal <michal@ionic.co.uk>
To: freebsd-stable@freebsd.org
References: <4BAA3409.6080406@ionic.co.uk>
Subject: Re: Multi node storage, ZFS

On 24/03/2010 16:20, Freddie Cash wrote:
> Horribly, horribly, horribly complex. But, then, that's the Linux world.
> :)

Yes, I know it's not very clean, but I was trying to gather ideas and that
is what I found.

> Server 1: bunch of disks exported via iSCSI
> Server 2: bunch of disks exported via iSCSI
> Server 3: bunch of disks exported via iSCSI
>
> "SAN" box: uses all those iSCSI exports to create a ZFS pool
>
> Use 1 iSCSI export from each server to create a raidz vdev. Or multiple
> mirror vdevs. When you need more storage, just add another server full of
> disks, export them via iSCSI to the "SAN" box, and expand the ZFS pool.
>
> And, if you need fail-over, on your "SAN" box, you can use HAST at the
> lower layers (currently only available in 9-CURRENT) to mirror the storage
> across two systems, and use CARP to provide a single IP for the two boxes.

This is pretty much what I have been looking for. I don't mind using a SAN
controller server to deal with all of this; in fact I expected that. What I
wanted was to present the disks from a server full of HDDs (which in effect
is just a storage device) and then join them up. I've briefly looked over
raidz and will give it a proper read later. I'm thinking 6 disks in each
server, and two raidz vdevs created from 3 disks in each server; I can then
serve them out to the network. I've never used iSCSI on FreeBSD, but I have
played with AoE on other *nixes, so I will give iSCSI a good looking over.
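If I've understood the suggestion correctly, the pool creation on the "SAN"
box would look something like the sketch below. This is completely untested
on my side; the pool name "tank" and the da* names are only placeholders for
however the iSCSI exports end up appearing. The first raidz vdev takes one
disk from each of the three servers, and the second takes the next disk from
each:

  # zpool create tank raidz da2 da3 da4
  # zpool add tank raidz da5 da6 da7
  # zpool status tank

Growing it later would then just be another "zpool add tank raidz ..." using
the exports from the new server.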
> Yes, you save space, but your throughput will be horribly horribly horribly
> low. RAID arrays should be narrow (1-9 disks), not wide (30+ disks), and
> then combined into a larger array (multiple small RAID6 arrays joined into
> a RAID0 stripe).

Oh yes, I agree. I was doing some very crude calculations and the difference
in space was quite a lot, but no, I would never do that in reality.

> If you were to do something like this, I'd make sure to have a fast
> local ZIL (log) device on the head node. That would reduce latency
> for writes, you might also do the same for reads. Then your bulk
> storage comes from the iSCSI boxes.
>
> Just a thought.

I've not come across the ZIL before, so I think I will have to do my
research.

> At least in theory you could use geom_gate and ZFS I suppose, never
> tried it though.
> ggatec(8), ggated(8) are your friends for that.
>
> Vince

Just had a look at ggatec; I've not seen or heard of it before, so I will
continue looking through it.

Many thanks to all. If I get something solid working I will be sure to
update the list with what will hopefully be a very cheap (other than the
HDDs) SAN.
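P.S. From a quick read of the man pages, the ggate route Vince mentions
looks roughly like the following. Everything here is untested, and the
network, host name and device names are only examples. On each storage
server you list what may be exported in /etc/gg.exports and start the
daemon:

  192.168.2.0/24 RW /dev/da1
  192.168.2.0/24 RW /dev/da2

  # ggated

On the "SAN" box each remote disk is then attached with ggatec, and it shows
up as a local /dev/ggate* device that ZFS can use like any other disk:

  # ggatec create -o rw storage1.example.org /dev/da1

And if I add a fast local disk on the head node for the ZIL, it looks like
it is just (da8 being whatever the local log device shows up as):

  # zpool add tank log da8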