Date:      Fri, 2 Nov 2018 10:16:51 -0400
From:      Rich <rincebrain@gmail.com>
To:        oshogbo@freebsd.org
Cc:        Miroslav Lachman <000.fbsd@quip.cz>, freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: zpool scrub 9TB finishes in ~40 mins
Message-ID:  <CAOeNLurzJc6j6Hea3=bV8OWPXwbK-EeUnev9wqwuo8SqjUfiPQ@mail.gmail.com>
In-Reply-To: <20181102133254.GA35599@jarvis>
References:  <5BDC3DF5.9020501@andyit.com.au> <d073e2c5-38f8-c152-b37a-f7e6e08f520b@quip.cz> <20181102133254.GA35599@jarvis>

Unless FreeBSD merged it incompletely, that's very unlikely, because
the zpool status output during resilvers/scrubs changed with the
sequential resilver work:

from:
(void) printf(gettext("\t%s scanned out of %s at %s/s")
to:
(void) printf(gettext("\t%s scanned at %s/s, %s issued at %s/s, %s total\n"),

So those status messages are older than that work, which is also
confirmed by the running version (r331113) predating the merge of that
work (r334844).
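The two printf strings above make the formats easy to tell apart from a
pasted status line alone. A minimal sketch (the helper name is my own,
not anything shipped with ZFS) that classifies a "scan:" line by which
of the two format strings it matches:

```shell
# Hypothetical helper: decide whether a zpool status scan line came from
# the old scrub code or the post-sequential-scrub code, keyed on the two
# gettext format strings quoted above.
classify_scan() {
    case "$1" in
        *"issued at"*)      echo "post-r334844 (sequential scrub)" ;;
        *"scanned out of"*) echo "pre-r334844 (old scrub code)" ;;
        *)                  echo "unknown" ;;
    esac
}

classify_scan "365G scanned out of 11.9T at 228M/s, 14h47m to go"
```

Running it on the status line quoted downthread prints
"pre-r334844 (old scrub code)", consistent with r331113.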

I would _guess_ that it encountered so many metadata errors it
couldn't get to a lot of the pool, so it vacuously completed faster.

Unless large swathes of the disks were unreadable, that's far more
errors than I'd expect from a couple of unreadable blocks each. You
and ddrescue might want to be friends, though that might make things
worse since the pool has continued being some value of "in use" since
the failing disks were last in it.
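If you do go the ddrescue route, clone the failing disk to a healthy
one of at least the same size before touching the pool further. A
sketch only -- the device names and map path below are hypothetical,
and the block just prints the invocation rather than running it:

```shell
# Sketch, NOT run against real devices here: device names and map path
# are placeholders.  ddrescue's map file lets interrupted runs resume,
# and -n copies the easy (readable) areas first, skipping slow retries.
FAILING=/dev/da2        # hypothetical failing source disk
CLONE=/dev/da5          # hypothetical healthy target, >= source size
MAPFILE=/root/da2.map   # ddrescue map file, kept on a *different* disk

echo "ddrescue -f -n $FAILING $CLONE $MAPFILE"
# A follow-up pass that retries the bad areas a few times would be:
#   ddrescue -f -r3 $FAILING $CLONE $MAPFILE
```

The caveat stands, though: blocks rewritten since the disk was last in
the pool make the clone less useful, so the earlier this happens the
better.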

- Rich
On Fri, Nov 2, 2018 at 9:36 AM Mariusz Zaborski <oshogbo@freebsd.org> wrote:
>
> Probably because:
> https://github.com/zfsonlinux/zfs/pull/6256/commits/1c6275b1fcdacd734bb4eefd02a123b6b610ca48
>
> FreeBSD commit - r334844.
>
>
> On Fri, Nov 02, 2018 at 02:21:28PM +0100, Miroslav Lachman wrote:
> > Andy Farkas wrote on 2018/11/02 13:07:
> >
> > > # zpool status z
> > >    pool: z
> > >   state: ONLINE
> > > status: One or more devices has experienced an error resulting in data
> > >      corruption.  Applications may be affected.
> > > action: Restore the file in question if possible.  Otherwise restore the
> > >      entire pool from backup.
> > >     see: http://illumos.org/msg/ZFS-8000-8A
> > >    scan: scrub in progress since Fri Nov  2 16:59:24 2018
> > >      365G scanned out of 11.9T at 228M/s, 14h47m to go
> > >          2.47M repaired, 2.99% done
> >
> > I definitely need this speed of scrub! 228M/s is awesome.
> > I have RAIDZ from 4x ST4000VN000-1H4168 SC43 and the speed of scrub is
> > about 20MB/s.
> >
> > Scrub takes more than week to finish:
> >
> >    pool: tank0
> >   state: ONLINE
> > status: Some supported features are not enabled on the pool. The pool can
> >          still be used, but some features are unavailable.
> > action: Enable all features using 'zpool upgrade'. Once this is done,
> >          the pool may no longer be accessible by software that does
> >          not support the features. See zpool-features(7) for details.
> >    scan: scrub repaired 0 in 262h56m with 0 errors on Sun Sep 16 02:04:25 2018
> > config:
> >
> >          NAME                STATE     READ WRITE CKSUM
> >          tank0               ONLINE       0     0     0
> >            raidz1-0          ONLINE       0     0     0
> >              gpt/disk0tank0  ONLINE       0     0     0
> >              gpt/disk1tank0  ONLINE       0     0     0
> >              gpt/disk2tank0  ONLINE       0     0     0
> >              gpt/disk3tank0  ONLINE       0     0     0
> >
> > This is on HP ProLiant ML 110 G5 (very old machine) with only 5GB of RAM.
> >
> > Miroslav Lachman
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> --
> Mariusz Zaborski
> oshogbo//vx             | http://oshogbo.vexillium.org
> FreeBSD committer       | https://freebsd.org
> Software developer      | http://wheelsystems.com
> If it's not broken, let's fix it till it is!!1


