From owner-freebsd-fs@FreeBSD.ORG Wed Sep 7 20:35:07 2011
Date: Wed, 07 Sep 2011 22:35:02 +0200
From: Johan Hendriks <joh.hendriks@gmail.com>
To: Pawel Jakub Dawidek
Cc: freebsd-fs@freebsd.org
Message-ID: <4E67D576.9030100@gmail.com>
In-Reply-To: <20110905084934.GC1662@garage.freebsd.pl>
References: <4E60D992.3030802@gmail.com> <20110905084934.GC1662@garage.freebsd.pl>
Subject: Re: ZFS on HAST and reboot.

On Monday, 5 September 2011 10:49:37, Pawel Jakub Dawidek wrote:
> On Fri, Sep 02, 2011 at 03:26:42PM +0200, Johan Hendriks wrote:
>> Hello all.
>>
>> I just started using ZFS on top of HAST.
>>
>> What I did was first glabel my disks as disk1 to disk3,
>> then I created my HAST resources in /etc/hast.conf.
>>
>> /etc/hast.conf looks like this:
>>
>> resource disk1 {
>>         on srv1 {
>>                 local /dev/label/disk1
>>                 remote 192.168.5.41
>>         }
>>         on srv2 {
>>                 local /dev/label/disk1
>>                 remote 192.168.5.40
>>         }
>> }
>> resource disk2 {
>>         on srv1 {
>>                 local /dev/label/disk2
>>                 remote 192.168.5.41
>>         }
>>         on srv2 {
>>                 local /dev/label/disk2
>>                 remote 192.168.5.40
>>         }
>> }
>> resource disk3 {
>>         on srv1 {
>>                 local /dev/label/disk3
>>                 remote 192.168.5.41
>>         }
>>         on srv2 {
>>                 local /dev/label/disk3
>>                 remote 192.168.5.40
>>         }
>> }
>>
>> This works.
>> I can set srv1 to primary and srv2 to secondary and vice versa with
>> "hastctl role primary all" and "hastctl role secondary all".
>>
>> Then I created the raidz pool on the master, srv1:
>> # zpool create storage raidz1 hast/disk1 hast/disk2 hast/disk3
>>
>> All looks good.
>> # zpool status
>>   pool: storage
>>  state: ONLINE
>>   scan: scrub repaired 0 in 0h0m with 0 errors on Wed Aug 31 20:49:19 2011
>> config:
>>
>>         NAME            STATE     READ WRITE CKSUM
>>         storage         ONLINE       0     0     0
>>           raidz1-0      ONLINE       0     0     0
>>             hast/disk1  ONLINE       0     0     0
>>             hast/disk2  ONLINE       0     0     0
>>             hast/disk3  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> Then I created the mountpoint and created a ZFS file system on it:
>> # mkdir /usr/local/virtual
>> # zfs create storage/virtual
>> # zfs list
>> # zfs set mountpoint=/usr/local/virtual storage/virtual
>>
>> # /etc/rc.d/zfs start, and whoop, there is my /usr/local/virtual ZFS
>> file system.
>> # mount
>> /dev/ada0p2 on / (ufs, local, journaled soft-updates)
>> devfs on /dev (devfs, local, multilabel)
>> storage on /storage (zfs, local, nfsv4acls)
>> storage/virtual on /usr/local/virtual (zfs, local, nfsv4acls)
>>
>> If I do a "zpool export storage" on srv1, change the HAST role to
>> secondary, then set the HAST role on srv2 to primary and do a
>> "zpool import -f storage", I can see the files on srv2.
>>
>> I am a happy camper :D
>>
>> So it works as advertised.
>> Then I rebooted both machines.
>> All is working fine.
>>
>> But if I reboot the server srv1 again, I can not import the pool
>> anymore; it tells me the pool is already imported.
>> I do load the carp-hast-switch master file with ifstated.
>> This does set the HAST role to primary,
>> but it can not import the pool.
>> Now this can be true, because I did not export it.
>> If I do a /etc/rc.d/zfs start, then it gets mounted and the pool is
>> available again.
>>
>> Is there a way I can do this automatically?
>> In my understanding, after a reboot ZFS tries to start, but fails
>> because my HAST providers are not yet ready.
>> Or am I doing something wrong, and should I not do it this way?
>> Can I tell ZFS to start only after the HAST providers are primary at
>> reboot?
>
> You see the message that the pool is already imported because, when you
> reboot the primary, there is still info about the pool in
> /boot/zfs/zpool.cache. Pools that are mentioned in this file are
> automatically imported on boot (by the kernel), so importing such a pool
> will fail. You should still be able to mount the file systems
> (zfs mount -a).
>
> What I'd recommend is not to use /etc/rc.d/zfs to mount file systems
> from pools managed by HAST. Instead, such pools should be imported by a
> script executed from the HA software when it decides it should be
> primary.
>
> I'd also recommend avoiding adding info about HAST pools to the
> /boot/zfs/zpool.cache file. You can do that by adding the '-c none'
> option to 'zpool import'. This will tell ZFS not to cache info about
> the pool in zpool.cache.

Thanks for your answer.

One thing I can not seem to get done is the '-c none' option:

# zpool import -c none storage
failed to open cache file: No such file or directory

It looks like it is looking for a cache file named "none", rather than,
as advertised, skipping the cache so the pool is not recorded.

Regards,
Johan
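
For readers hitting the same wall: 'zpool import -c' takes the path of
an alternate cache file to read during pool discovery, which is why the
command above tries to open a file literally named "none". If the ZFS
version in use does not special-case '-c none', the same effect can be
reached through the cachefile pool property instead. A minimal sketch,
assuming the pool name from this thread:

    # -c <file> points "zpool import" at an alternate cache file to
    # read; it is not a switch that disables caching.  Setting the
    # cachefile pool property to "none" at import time keeps the pool
    # out of /boot/zfs/zpool.cache:
    zpool import -o cachefile=none storage

    # For a pool that is already imported, the property can also be
    # set afterwards:
    zpool set cachefile=none storage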
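
And a rough sketch of the "become primary" script Pawel describes,
meant to be run by the HA software (for example from the
carp-hast-switch hook mentioned above) instead of relying on
/etc/rc.d/zfs. The resource names disk1..disk3 and the pool name
storage come from this thread; everything else is an assumption, not a
tested implementation:

    #!/bin/sh
    # Sketch of a "become master" hook for HAST + ZFS failover.

    POOL="storage"
    RESOURCES="disk1 disk2 disk3"

    # Switch every HAST resource to primary so /dev/hast/* appears.
    for res in ${RESOURCES}; do
            hastctl role primary "${res}" || exit 1
    done

    # Wait for each HAST provider to show up in devfs.
    for res in ${RESOURCES}; do
            while [ ! -c "/dev/hast/${res}" ]; do
                    sleep 1
            done
    done

    # Import the pool without recording it in /boot/zfs/zpool.cache,
    # so the kernel will not auto-import it on the next boot.  -f is
    # needed because the pool was last active on the other node.
    zpool import -f -o cachefile=none "${POOL}"

On the node that gives up the master role, the reverse order would
apply: export the pool first, then switch the resources to secondary.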