Date:      Thu, 4 Oct 2012 22:33:59 +0300
From:      Nikolay Denev <ndenev@gmail.com>
To:        "<freebsd-fs@freebsd.org>" <freebsd-fs@FreeBSD.ORG>
Subject:   zpool scrub on pool from geli devices offlines the pool?
Message-ID:  <5A5FE35F-7D68-4E83-A88D-3002B51F2E00@gmail.com>

Hi,

I have a zfs pool from 24 disks encrypted with geli.

I just did a zpool scrub tank, and that probably reopened all of the devices,
but this caused geli's "detach on last close" to kick in, which left the pool
offline with all devices UNAVAILABLE.
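
For reference, something like this should show whether a provider is currently
marked to detach on last close (I'm going from geli(8) here, exact geli list
output format from memory):

    # check the flags on one of the attached providers
    geli list mfid1.eli | grep -i flags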

  pool: tank
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://illumos.org/msg/ZFS-8000-HC
  scan: scrub in progress since Thu Oct  4 21:19:15 2012
        1 scanned out of 8.29T at 1/s, (scan is slow, no estimated time)
        0 repaired, 0.00% done
config:

        NAME                      STATE     READ WRITE CKSUM
        tank                      UNAVAIL      0     0     0
          raidz2-0                UNAVAIL      0     0     0
            4340223731536330140   UNAVAIL      0     0     0  was /dev/mfid1.eli
            5260313034754791769   UNAVAIL      0     0     0  was /dev/mfid2.eli
            3388275563832205054   UNAVAIL      0     0     0  was /dev/mfid3.eli
            4279885200356306835   UNAVAIL      0     0     0  was /dev/mfid4.eli
            17520568003934998783  UNAVAIL      0     0     0  was /dev/mfid5.eli
            14683427064986614232  UNAVAIL      0     0     0  was /dev/mfid6.eli
            5604251825626821      UNAVAIL      0     0     0  was /dev/mfid7.eli
            2878395114688866721   UNAVAIL      0     0     0  was /dev/mfid8.eli
          raidz2-1                UNAVAIL      0     0     0
            1560240233906009318   UNAVAIL      0     0     0  was /dev/mfid9.eli
            17390515268955717943  UNAVAIL      0     0     0  was /dev/mfid10.eli
            16346219034888442254  UNAVAIL      0     0     0  was /dev/mfid11.eli
            16181936453927970171  UNAVAIL      0     0     0  was /dev/mfid12.eli
            13672668419715232053  UNAVAIL      0     0     0  was /dev/mfid13.eli
            8576569675278017750   UNAVAIL      0     0     0  was /dev/mfid14.eli
            7122599902867613575   UNAVAIL      0     0     0  was /dev/mfid15.eli
            6165832151020850637   UNAVAIL      0     0     0  was /dev/mfid16.eli
          raidz2-2                UNAVAIL      0     0     0
            2529143736541278973   UNAVAIL      0     0     0  was /dev/mfid17.eli
            5815783978070201610   UNAVAIL      0     0     0  was /dev/mfid18.eli
            10521963168174464672  UNAVAIL      0     0     0  was /dev/mfid19.eli
            17880694802593963336  UNAVAIL      0     0     0  was /dev/mfid20.eli
            2868521416175385324   UNAVAIL      0     0     0  was /dev/mfid21.eli
            16369604825508697024  UNAVAIL      0     0     0  was /dev/mfid22.eli
            10849928960759331453  UNAVAIL      0     0     0  was /dev/mfid23.eli
            7128010358193490217   UNAVAIL      0     0     0  was /dev/mfid24.eli

errors: 1 data errors, use '-v' for a list

Dmesg shows :

GEOM_ELI: Detached mfid1.eli on last close.
...
GEOM_ELI: Detached mfid24.eli on last close.

I then did /etc/rc.d/geli restart and zpool clear tank, and it is back
online, but it shows permanent metadata errors...
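
For the record, the recovery was roughly this (pool name tank; assuming the
keys/passphrases configured in rc.conf re-attach cleanly):

    /etc/rc.d/geli restart     # re-attach all the .eli providers
    zpool clear tank           # clear the error counters and UNAVAIL state
    zpool status -v tank       # list the files/objects with permanent errors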

Any ideas why this happened from a simple zpool scrub, and how it can be
prevented?
Just disable "detach on last close" for the geli devices?
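
If that's the way to go, here is a rough sketch of what I have in mind,
assuming rc.d/geli2 is what marks the providers for detach on last close and
that it honors the geli_autodetach knob from rc.conf(5) (names from memory):

    # /etc/rc.conf -- don't mark the .eli providers "detach on last close" at boot
    geli_autodetach="NO"

    # for a provider that is already attached with the flag set, re-attach it
    # without -d (the keyfile path below is just a placeholder):
    geli detach mfid1.eli
    geli attach -k /path/to/mfid1.key mfid1.eli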



