From: "hiroshi@soupacific.com"
Date: Thu, 17 Jun 2010 20:19:03 +0900
To: Mikolaj Golub
Cc: freebsd-fs@freebsd.org, Pawel Jakub Dawidek
Subject: Re: FreeBSD 8.1 and HAST

Huuum ....

> I didn't. But as I wrote earlier I worked with HAST on CURRENT. So I just
> checked if there is some issue with ZFS pools on HAST 8-STABLE.
> I created a 400MB md device, configured HAST to use it and created a
> zpool. Manual switching to failover:
>
> host1:
> zpool export -f storage
> hastctl role secondary storage
>
> host2:
> hastctl role primary all
> zpool import -f storage
>
> There were no issues.

I did it in almost the same manner, too. So I have now added a debug
print to zpool to check the version's value. It might take a while to
finish.

> Just a note, as I don't have many 8-STABLE boxes this HAST has been
> created between my host in the office and my laptop at home. Very nice
> to see that it works via WAN :-)

Sounds great! After compilation I will check.

> So, can you try recreating the ZFS pool to see if the "zpool import"
> issue has gone?
>
> Please provide here hastctl status on both nodes and hastctl.cfg. Then
> we might ask you about logs :-).
>
> I would recommend playing at first without ucarp -- switching to
> failover manually. In this way you can be sure that HAST works and also
> become more familiar with how it works. After this you can try ucarp. I
> have not used ucarp myself -- I have been using our own application for
> failure detection and initiating switching to failover.

I have been using CARP until now.

Thanks
Hiroshi
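Since the configuration file is asked for above, here is a minimal
sketch of what an /etc/hast.conf for a single resource backed by an md
device might look like. The hostnames (host1, host2), addresses, and the
md device path are placeholders, not taken from the thread; the "on"
names must match each machine's hostname.

```
resource storage {
	on host1 {
		local /dev/md0
		remote 192.168.1.2
	}
	on host2 {
		local /dev/md0
		remote 192.168.1.1
	}
}
```

The same file is installed on both nodes; hastd picks the matching "on"
section for the host it runs on.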
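For reference, the manual failover steps quoted above can be wrapped in a
small sh script so the same sequence is run the same way on both nodes.
This is only a sketch: the resource/pool name "storage" and the
demote/promote function names are assumptions, and a DRY_RUN switch is
added so the commands can be printed without actually touching HAST or
ZFS.

```shell
# run either executes a command or, with DRY_RUN=1, just prints it.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# Demote this node: export the pool first, then hand the HAST
# resource over to the secondary role (order matters -- the pool
# must be exported before the provider goes away).
demote() {
    run zpool export -f storage
    run hastctl role secondary storage
}

# Promote this node: take the primary role first, then import the pool
# from the now-writable HAST provider.
promote() {
    run hastctl role primary storage
    run zpool import -f storage
}

# Dry-run demonstration (no HAST or ZFS needed):
DRY_RUN=1
demote
promote
```

Running demote on the old primary and then promote on the other node
reproduces the host1/host2 sequence from the quoted test.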