Date:      Mon, 5 Sep 2011 10:49:37 +0200
From:      Pawel Jakub Dawidek <pjd@FreeBSD.org>
To:        Johan Hendriks <joh.hendriks@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS on HAST and reboot.
Message-ID:  <20110905084934.GC1662@garage.freebsd.pl>
In-Reply-To: <4E60D992.3030802@gmail.com>
References:  <4E60D992.3030802@gmail.com>

On Fri, Sep 02, 2011 at 03:26:42PM +0200, Johan Hendriks wrote:
> Hello all.
>
> I just started using ZFS on top of HAST.
>
> What I did first was glabel my disks, disk1 through disk3.
> Then I created my HAST devices in /etc/hast.conf.
>
> /etc/hast.conf looks like this:
> resource disk1 {
>     on srv1 {
>         local /dev/label/disk1
>         remote 192.168.5.41
>     }
>     on srv2 {
>         local /dev/label/disk1
>         remote 192.168.5.40
>     }
> }
> resource disk2 {
>     on srv1 {
>         local /dev/label/disk2
>         remote 192.168.5.41
>     }
>     on srv2 {
>         local /dev/label/disk2
>         remote 192.168.5.40
>     }
> }
> resource disk3 {
>     on srv1 {
>         local /dev/label/disk3
>         remote 192.168.5.41
>     }
>     on srv2 {
>         local /dev/label/disk3
>         remote 192.168.5.40
>     }
> }
>
> This works.
> I can set srv1 to primary and srv2 to secondary and vice versa, with
> hastctl role primary all and hastctl role secondary all.
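>
> (A sketch of the one-time bring-up, for completeness: hastctl create
> initializes each resource's local metadata, and hastd must be running
> on both nodes before the roles can be set.)
>
> # hastctl create disk1
> # hastctl create disk2
> # hastctl create disk3
> # /etc/rc.d/hastd onestart
> # hastctl role primary all      (on srv1)
> # hastctl role secondary all    (on srv2)
> # hastctl status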
>
> Then I created the raidz on the master, srv1:
> zpool create storage raidz1 hast/disk1 hast/disk2 hast/disk3
>
> All looks good:
> zpool status
>    pool: storage
>   state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Wed Aug 31 20:49:19 2011
> config:
>
>          NAME            STATE     READ WRITE CKSUM
>          storage         ONLINE       0     0     0
>            raidz1-0      ONLINE       0     0     0
>              hast/disk1  ONLINE       0     0     0
>              hast/disk2  ONLINE       0     0     0
>              hast/disk3  ONLINE       0     0     0
>
> errors: No known data errors
>
> Then I created the mountpoint and a ZFS file system on it:
> # mkdir /usr/local/virtual
> # zfs create storage/virtual
> # zfs list
> # zfs set mountpoint=/usr/local/virtual storage/virtual
>
> # /etc/rc.d/zfs start, and whoop, there is my /usr/local/virtual ZFS
> filesystem.
> # mount
> /dev/ada0p2 on / (ufs, local, journaled soft-updates)
> devfs on /dev (devfs, local, multilabel)
> storage on /storage (zfs, local, nfsv4acls)
> storage/virtual on /usr/local/virtual (zfs, local, nfsv4acls)
>
> If I do a zpool export -f storage on srv1, change the hast role to
> secondary, then set the hast role on srv2 to primary and do zpool
> import -f storage, I can see the files on srv2.
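>
> That is, the manual failover boils down to:
>
> # zpool export -f storage        (on srv1)
> # hastctl role secondary all     (on srv1)
> # hastctl role primary all       (on srv2)
> # zpool import -f storage        (on srv2)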
>
> I am a happy camper :D
>
> So it works as advertised.
> Then I rebooted both machines, and all was still working fine.
>
> But if I reboot the server srv1 again, I cannot import the pool
> anymore; it tells me the pool is already imported.
> I do run the carp-hast-switch master script via ifstated.
> This does set the hast role to primary,
> but it cannot import the pool.
> Now this may be expected, because I did not export it.
> If I do a /etc/rc.d/zfs start, then it gets mounted and the pool is
> available again.
>
> Is there a way I can do this automatically?
> In my understanding, after a reboot ZFS tries to start, but fails
> because my hast providers are not yet ready.
> Or am I doing something wrong, and should I not do it this way?
> Can I tell ZFS to start only after the hast providers are primary at
> reboot?

You see the message that the pool is already imported because, when you
reboot the primary, info about the pool is still present in
/boot/zfs/zpool.cache. Pools mentioned in this file are imported
automatically on boot (by the kernel), so importing such a pool again
will fail. You should still be able to mount its file systems
(zfs mount -a).
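
For example, on the freshly rebooted old primary:

    # zpool list storage    <- succeeds, the kernel already imported it
    # zfs mount -a          <- mounts its file systems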

What I'd recommend is not to use /etc/rc.d/zfs to mount file systems
from pools managed by HAST. Instead, such pools should be imported by a
script executed by the HA software when it decides this node should be
primary.
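
A minimal sketch of such a script, using the resource and pool names
from this thread (the wait loop and the way your HA software invokes it
are assumptions; adapt as needed):

    #!/bin/sh
    # Run on the node that should become primary.

    hastctl role primary all || exit 1

    # Wait until the HAST providers show up in /dev/hast/.
    for disk in disk1 disk2 disk3; do
        while [ ! -c /dev/hast/${disk} ]; do
            sleep 1
        done
    done

    # Import the pool; -f is needed because the other node had no
    # chance to export it cleanly.
    zpool import -f storage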

I'd also recommend keeping info about HAST pools out of the
/boot/zfs/zpool.cache file. You can do that by adding the '-c none'
option to 'zpool import', which tells ZFS not to cache info about the
pool in zpool.cache.
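
So the import in the failover script becomes (on ZFS versions that do
not accept '-c none', the property spelling '-o cachefile=none' has the
same effect):

    # zpool import -c none -f storage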

-- 
Pawel Jakub Dawidek                       http://www.wheelsystems.com
FreeBSD committer                         http://www.FreeBSD.org
Am I Evil? Yes, I Am!                     http://yomoli.com
