Date:      Fri, 07 Jun 2013 09:48:24 +0200
From:      "Ronald Klop" <ronald-freebsd8@klop.yi.org>
To:        freebsd-fs@freebsd.org
Subject:   Re: zpool export/import on failover - The pool metadata is corrupted
Message-ID:  <op.wyataypr8527sy@ronaldradial.versatec.local>
In-Reply-To: <D7F099CB-855F-43F8-ACB5-094B93201B4B@alumni.chalmers.se>
References:  <D7F099CB-855F-43F8-ACB5-094B93201B4B@alumni.chalmers.se>

On Thu, 06 Jun 2013 21:24:34 +0200, mxb <mxb@alumni.chalmers.se> wrote:

>
> Hello list,
>
> I have a two-head ZFS setup with an external disk enclosure connected
> over a SAS expander.
> This is a failover setup with CARP and devd triggering pool
> export/import.
> One of the two nodes is the preferred master.
>
> When the master is rebooted, devd kicks in as CARP on the second node
> becomes master, and the second node picks up the ZFS disks from the
> external enclosure.
> When the master comes back, it becomes CARP master again, devd kicks in,
> and the pool gets exported from the second node and imported on the
> first one.
>
> However, I have experienced metadata corruption several times with this
> setup.
> Note that the ZIL (mirrored) resides on the external enclosure. Only the
> L2ARC is both local and external: da1, da2, da13s2, da14s2.
>
> root@nfs2:/root # zpool import
>    pool: jbod
>      id: 17635654860276652744
>   state: FAULTED
>  status: The pool metadata is corrupted.
>  action: The pool cannot be imported due to damaged devices or data.
>    see: http://illumos.org/msg/ZFS-8000-72
>  config:
>
> 	jbod        FAULTED  corrupted data
> 	  raidz3-0  ONLINE
> 	    da3     ONLINE
> 	    da4     ONLINE
> 	    da5     ONLINE
> 	    da6     ONLINE
> 	    da7     ONLINE
> 	    da8     ONLINE
> 	    da9     ONLINE
> 	    da10    ONLINE
> 	    da11    ONLINE
> 	    da12    ONLINE
> 	cache
> 	  da1
> 	  da2
> 	  da13s2
> 	  da14s2
> 	logs
> 	  mirror-1  ONLINE
> 	    da13s1  ONLINE
> 	    da14s1  ONLINE
>
> Any ideas what is going on?
>
> //mxb

I know that the Oracle ZFS Appliance you can buy with clustering reboots
the node that is supposed to release the pool.
The mechanism is called STONITH: http://en.wikipedia.org/wiki/STONITH
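
For reference, the devd side of such a CARP-driven failover is usually
wired up roughly like the sketch below. This is only an illustration:
the vhid/interface pair (1@igb0), the script path and the hard-coded
pool name are placeholders, and on FreeBSD 9.x, where carp(4) is still
a pseudo-interface, the transitions arrive as IFNET link events on
carp0 rather than as "CARP" system events.

    # /usr/local/etc/devd/zfs-failover.conf (sketch only)
    notify 30 {
        match "system"      "CARP";
        match "subsystem"   "1@igb0";    # vhid@interface, placeholder
        match "type"        "MASTER";
        action "/usr/local/sbin/zfs-failover.sh import";
    };

    notify 30 {
        match "system"      "CARP";
        match "subsystem"   "1@igb0";
        match "type"        "BACKUP";
        action "/usr/local/sbin/zfs-failover.sh export";
    };

    #!/bin/sh
    # /usr/local/sbin/zfs-failover.sh -- hypothetical helper called above
    case "$1" in
    import) zpool import -f jbod ;;   # -f: pool may still look active
    export) zpool export jbod ;;
    esac

Note that the 'zpool import -f' is exactly the dangerous step if there is
no fencing: when the other head still has the pool open (hung or only
half-rebooted), forcing the import on shared disks can corrupt the pool
metadata, which is what STONITH is meant to prevent.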

Ronald.


