Date:      Tue, 6 Nov 2018 13:16:40 -0500
From:      Alejandro Imass <aimass@yabarana.com>
To:        freebsd-en@lists.vlassakakis.de
Cc:        FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: ZFS re attaching failed device to pool
Message-ID:  <CAHieY7QXk7OH=Py9H+DpF5utztjCLypWzot-y8TKab+0zjAnCg@mail.gmail.com>
In-Reply-To: <4C7379FD-8646-4246-9CD0-AC3B281B32C9@lists.vlassakakis.de>
References:  <CAHieY7RP9hjZ8TDqz8PFNoeo2gVY5+A2icA288mmvh__e1j5XA@mail.gmail.com> <CAHieY7R+CNNzPQwCGJW8ugy266EXXBZmF1EPs2gvacJ3=56eiA@mail.gmail.com> <4C7379FD-8646-4246-9CD0-AC3B281B32C9@lists.vlassakakis.de>

On Tue, Nov 6, 2018 at 9:05 AM Philipp Vlassakakis <
freebsd-en@lists.vlassakakis.de> wrote:

> Hi Alex,
>
> Did you try „zpool online zroot NAME-OF-DEGRADED-DISK“ and „zpool clear
> zroot“?
>
> Regards,
> Philipp
>

Hey Philipp, thanks for the suggestion.

I just tried it and it says:

Device xxx onlined but remains in faulted state
and the "action" line suggests running replace. Then I tried clear and
waited for the scrub to finish, but the device still says UNAVAIL.

So I went ahead and RTFM again and did "detach" and then "add" like the
handbook suggests.

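For reference, the full sequence I ran was the following (same disk ID as in the status and history output below; these obviously need root and a real pool, so treat it as a transcript, not something to paste blindly):

```shell
# First attempt: bring the faulted disk back online and clear the errors
# (this did not help -- the device stayed UNAVAIL after the scrub)
zpool online zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
zpool clear zroot

# Second attempt, per the handbook: remove the failed member, then re-add it
zpool detach zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
zpool add zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
```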
Now the pool says ONLINE. BUT, why is the first disk labeled as "p3" and
not the second one???

# zpool status -v
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h6m with 0 errors on Tue Nov  6 07:24:56 2018
config:

        NAME                             STATE     READ WRITE CKSUM
        zroot                            ONLINE       0     0     0
          diskid/DISK-WD-WCC4N2YTRX40p3  ONLINE       0     0     0
          diskid/DISK-WD-WCC4N6XZY8C2    ONLINE       0     0     0

# zpool history
History for 'zroot':
2017-06-30.21:38:33 zpool create -o altroot=/mnt -O compress=lz4 -O
atime=off -m none -f zroot mirror ada0p3 ada1p3
2017-06-30.21:38:33 zfs create -o mountpoint=none zroot/ROOT
2017-06-30.21:38:33 zfs create -o mountpoint=/ zroot/ROOT/default
2017-06-30.21:38:33 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off
zroot/tmp
2017-06-30.21:38:33 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2017-06-30.21:38:33 zfs create zroot/usr/home
2017-06-30.21:38:34 zfs create -o setuid=off zroot/usr/ports
2017-06-30.21:38:34 zfs create zroot/usr/src
2017-06-30.21:38:34 zfs create -o mountpoint=/var -o canmount=off zroot/var
2017-06-30.21:38:34 zfs create -o exec=off -o setuid=off zroot/var/audit
2017-06-30.21:38:34 zfs create -o exec=off -o setuid=off zroot/var/crash
2017-06-30.21:38:34 zfs create -o exec=off -o setuid=off zroot/var/log
2017-06-30.21:38:35 zfs create -o atime=on zroot/var/mail
2017-06-30.21:38:35 zfs create -o setuid=off zroot/var/tmp
2017-06-30.21:38:35 zfs set mountpoint=/zroot zroot
2017-06-30.21:38:35 zpool set bootfs=zroot/ROOT/default zroot
2017-06-30.21:38:35 zpool export zroot
2017-06-30.21:38:37 zpool import -o altroot=/mnt zroot
2017-06-30.21:38:42 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2018-11-06.05:18:34 zpool clear zroot
2018-11-06.07:13:34 zpool online zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
2018-11-06.07:18:01 zpool clear zroot
2018-11-06.07:35:55 zpool detach zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
2018-11-06.07:36:24 zpool add zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
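Side note, now that I stare at the history: the original create used partitions (ada0p3, i.e. GPT partition 3), which would explain the "p3" suffix on the first disk, while my "add" used the whole disk. I also suspect "add" was the wrong verb, since it creates a new top-level vdev (a stripe) rather than re-mirroring onto the surviving disk. A hedged sketch of what I now think the handbook intended, with device names taken from the status output above (not yet tested here):

```shell
# Attach the returning disk as a mirror member of the surviving device,
# instead of adding it as an independent top-level vdev:
zpool attach zroot diskid/DISK-WD-WCC4N2YTRX40p3 /dev/diskid/DISK-WD-WCC4N6XZY8C2

# zpool status should then show a mirror-0 vdev containing both disks,
# resilvering onto the newly attached one.
zpool status zroot
```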





> > On 6. Nov 2018, at 14:53, Alejandro Imass <aimass@yabarana.com> wrote:
> >
> >> On Tue, Nov 6, 2018 at 8:50 AM Alejandro Imass <aimass@yabarana.com>
> wrote:
> >>
> >> Dear Beasties,
> >>
> >> I have a simple 2 disk pool and one disk started failing and zfs put
> >> it in
> >


[...]

>
>


