Date:      Tue, 20 Mar 2018 08:09:29 +0100 (CET)
From:      Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
To:        KIRIYAMA Kazuhiko <kiri@kx.openedu.org>
Cc:        freebsd-current@freebsd.org
Subject:   Re: ZFS i/o error in recent 12.0
Message-ID:  <alpine.BSF.2.21.1803200759260.66427@mail.fig.ol.no>
In-Reply-To: <201803192300.w2JN04fx007127@kx.openedu.org>
References:  <201803192300.w2JN04fx007127@kx.openedu.org>

On Tue, 20 Mar 2018 08:00+0900, KIRIYAMA Kazuhiko wrote:

> Hi,
> 
> I've encountered a sudden death of a ZFS full volume
> machine (r330434) about 10 days after installation[1]:
> 
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
> gptzfsboot: failed to mount default pool zroot
> 
> FreeBSD/x86 boot
> ZFS: i/o error - all block copies unavailable
> ZFS: can't find dataset u
> Default: zroot/<0x0>:
> boot: 
> 
> The partition layout is below:
> 
>  # gpart show /dev/mfid0
> =>         40  31247564720  mfid0  GPT  (15T)
>            40       409600      1  efi  (200M)
>        409640         1024      2  freebsd-boot  (512K)
>        410664          984         - free -  (492K)
>        411648    268435456      3  freebsd-swap  (128G)
>     268847104  30978715648      4  freebsd-zfs  (14T)
>   31247562752         2008         - free -  (1.0M)
> 
> # 
> 
> But nothing like this had happened on an older -CURRENT ZFS full
> volume machine (r327038M). According to [2] the cause is an
> inconsistent boot/zfs/zpool.cache. I've tried to cope with this by
> repairing /boot [3] from a bootable rescue USB as follows:
> 
> # kldload zfs
> # zpool import 
>    pool: zroot
>      id: 17762298124265859537
>   state: ONLINE
>  action: The pool can be imported using its name or numeric identifier.
>  config:
> 
>         zroot       ONLINE
>           mfid0p4   ONLINE
> # zpool import -fR /mnt zroot
> # df -h
> Filesystem                  Size    Used   Avail Capacity  Mounted on
> /dev/da0p2                   14G    1.6G     11G    13%    /
> devfs                       1.0K    1.0K      0B   100%    /dev
> zroot/.dake                  14T     18M     14T     0%    /mnt/.dake
> zroot/ds                     14T     96K     14T     0%    /mnt/ds
> zroot/ds/backup              14T     88K     14T     0%    /mnt/ds/backup
> zroot/ds/backup/kazu.pis     14T     31G     14T     0%    /mnt/ds/backup/kazu.pis
> zroot/ds/distfiles           14T    7.9M     14T     0%    /mnt/ds/distfiles
> zroot/ds/obj                 14T     10G     14T     0%    /mnt/ds/obj
> zroot/ds/packages            14T    4.0M     14T     0%    /mnt/ds/packages
> zroot/ds/ports               14T    1.3G     14T     0%    /mnt/ds/ports
> zroot/ds/src                 14T    2.6G     14T     0%    /mnt/ds/src
> zroot/tmp                    14T     88K     14T     0%    /mnt/tmp
> zroot/usr/home               14T    136K     14T     0%    /mnt/usr/home
> zroot/usr/local              14T     10M     14T     0%    /mnt/usr/local
> zroot/var/audit              14T     88K     14T     0%    /mnt/var/audit
> zroot/var/crash              14T     88K     14T     0%    /mnt/var/crash
> zroot/var/log                14T    388K     14T     0%    /mnt/var/log
> zroot/var/mail               14T     92K     14T     0%    /mnt/var/mail
> zroot/var/ports              14T     11M     14T     0%    /mnt/var/ports
> zroot/var/tmp                14T    6.0M     14T     0%    /mnt/var/tmp
> zroot/vm                     14T    2.8G     14T     0%    /mnt/vm
> zroot/vm/tbedfc              14T    1.6G     14T     0%    /mnt/vm/tbedfc
> zroot                        14T     88K     14T     0%    /mnt/zroot
> # zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> zroot                     51.1G  13.9T    88K  /mnt/zroot
> zroot/.dake               18.3M  13.9T  18.3M  /mnt/.dake
> zroot/ROOT                1.71G  13.9T    88K  none
> zroot/ROOT/default        1.71G  13.9T  1.71G  /mnt/mnt
> zroot/ds                  45.0G  13.9T    96K  /mnt/ds
> zroot/ds/backup           30.8G  13.9T    88K  /mnt/ds/backup
> zroot/ds/backup/kazu.pis  30.8G  13.9T  30.8G  /mnt/ds/backup/kazu.pis
> zroot/ds/distfiles        7.88M  13.9T  7.88M  /mnt/ds/distfiles
> zroot/ds/obj              10.4G  13.9T  10.4G  /mnt/ds/obj
> zroot/ds/packages         4.02M  13.9T  4.02M  /mnt/ds/packages
> zroot/ds/ports            1.26G  13.9T  1.26G  /mnt/ds/ports
> zroot/ds/src              2.56G  13.9T  2.56G  /mnt/ds/src
> zroot/tmp                   88K  13.9T    88K  /mnt/tmp
> zroot/usr                 10.4M  13.9T    88K  /mnt/usr
> zroot/usr/home             136K  13.9T   136K  /mnt/usr/home
> zroot/usr/local           10.2M  13.9T  10.2M  /mnt/usr/local
> zroot/var                 17.4M  13.9T    88K  /mnt/var
> zroot/var/audit             88K  13.9T    88K  /mnt/var/audit
> zroot/var/crash             88K  13.9T    88K  /mnt/var/crash
> zroot/var/log              388K  13.9T   388K  /mnt/var/log
> zroot/var/mail              92K  13.9T    92K  /mnt/var/mail
> zroot/var/ports           10.7M  13.9T  10.7M  /mnt/var/ports
> zroot/var/tmp             5.98M  13.9T  5.98M  /mnt/var/tmp
> zroot/vm                  4.33G  13.9T  2.75G  /mnt/vm
> zroot/vm/tbedfc           1.58G  13.9T  1.58G  /mnt/vm/tbedfc
> # zfs mount zroot/ROOT/default
> # cd /mnt/mnt/
> # mv boot boot.bak
> # cp -RPp boot.bak boot
> # gpart show /dev/mfid0
> =>         40  31247564720  mfid0  GPT  (15T)
>            40       409600      1  efi  (200M)
>        409640         1024      2  freebsd-boot  (512K)
>        410664          984         - free -  (492K)
>        411648    268435456      3  freebsd-swap  (128G)
>     268847104  30978715648      4  freebsd-zfs  (14T)
>   31247562752         2008         - free -  (1.0M)
> 
> # gpart bootcode -b /mnt/mnt/boot/pmbr -p /boot/gptzfsboot -i 2 mfid0
> partcode written to mfid0p2
> bootcode written to mfid0
> # cd

> # zpool export zroot

This step has been a big no-no in the past. Never leave your 
bootpool/rootpool in an exported state if you intend to boot from it. 
For all I know, this advice might be superstition with current 
versions of FreeBSD.
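
If the pool is currently exported, importing it again from the rescue 
environment is enough to clear that state; a minimal sketch, reusing 
the altroot from your transcript:

zpool import -fR /mnt zroot

and then leave it imported when you reboot.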

From what I can tell from the above, you never created a new 
zpool.cache and copied it to its rightful place.

If you suspect your zpool.cache is out of date, then this should do 
the trick:

zpool import -o cachefile=/tmp/zpool.cache -fR /mnt zroot
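
(Here -f forces the import, -R /mnt sets the altroot so the datasets 
mount under /mnt, and -o cachefile=/tmp/zpool.cache has zpool record 
the pool's configuration in a fresh cache file.)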

If you have additional pools, you may want to treat them the same way.
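
For example, a hypothetical second pool named tank would be imported 
with the same cache file:

zpool import -o cachefile=/tmp/zpool.cache -fR /mnt tank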

cp -p /tmp/zpool.cache /mnt/mnt/boot/zfs/zpool.cache
reboot
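
Before rebooting, you can also sanity check the new cache file with 
zdb (assuming zdb is available in your rescue environment); this 
should print the pool configuration recorded in it:

zdb -C -U /tmp/zpool.cache zroot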

> # 
> 
> But it still cannot boot:
> 
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
> gptzfsboot: failed to mount default pool zroot
> 
> FreeBSD/x86 boot
> 
> Any ideas?
> 
> Best regards
> 
> [1] http://ds.truefc.org/~kiri/freebsd/current/zfs/messages
> [2] https://lists.freebsd.org/pipermail/freebsd-questions/2016-February/270505.html
> [3] https://forums.freebsd.org/threads/zfs-i-o-error-all-block-copies-unavailable-invalid-format.55227/#post-312830

-- 
Trond.


