Date:      Sun, 14 Feb 2016 20:20:00 -0500
From:      Tom Curry <thomasrcurry@gmail.com>
To:        David Adam <zanchey@ucc.gu.uwa.edu.au>
Cc:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: Poor ZFS+NFSv3 read/write performance and panic
Message-ID:  <CAGtEZUCBapbAEUQawVnFS+UuUYGSrhyk=i3VEkQaKV4zRQuhJA@mail.gmail.com>
In-Reply-To: <alpine.DEB.2.11.1602141625300.1862@motsugo.ucc.gu.uwa.edu.au>
References:  <alpine.DEB.2.11.1601292153420.26396@motsugo.ucc.gu.uwa.edu.au> <alpine.DEB.2.11.1602080056390.17583@motsugo.ucc.gu.uwa.edu.au> <CAGtEZUDCAENGcUjpZDjUBg93F_MWQO40Q4WScm1BogAOUjgEaA@mail.gmail.com> <alpine.DEB.2.11.1602141625300.1862@motsugo.ucc.gu.uwa.edu.au>

On Sun, Feb 14, 2016 at 3:40 AM, David Adam <zanchey@ucc.gu.uwa.edu.au>
wrote:

> On Mon, 8 Feb 2016, Tom Curry wrote:
> > On Sun, Feb 7, 2016 at 11:58 AM, David Adam <zanchey@ucc.gu.uwa.edu.au>
> > wrote:
> >
> > > Just wondering if anyone has any idea how to identify which devices are
> > > implicated in ZFS' vdev_deadman(). I have updated the firmware on the
> > > mps(4) card that has our disks attached but that hasn't helped.
> >
> > I too ran into this problem and spent quite some time troubleshooting
> > hardware. For me it turned out not to be hardware at all, but software:
> > specifically, the ZFS ARC. Looking at your stack I see some ARC reclaim
> > up top, so it's possible you're running into the same issue. There is a
> > monster of a PR detailing this here:
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
> >
> > If you would like to test this theory, the fastest way is to limit the
> > ARC by adding the following to /boot/loader.conf and rebooting:
> > vfs.zfs.arc_max="24G"
> >
> > Replace 24G with whatever makes sense for your system; aim for 3/4 of
> > total memory for starters. If this solves the problem, there are more
> > scientific routes to a permanent fix: one is applying the patch in the
> > PR above, another is a more finely tuned arc_max value.
>
> Thanks Tom - this certainly did sound promising, but setting the ARC to
> 11G of our 16G of RAM didn't help. `zfs-stats` confirmed that the ARC was
> the expected size and that there was still 461 MB of RAM free.
>
> We'll keep looking!
>
> David Adam
> zanchey@ucc.gu.uwa.edu.au
>
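For anyone following the thread, the ARC cap discussed above can be verified after reboot with sysctl. A minimal sketch (the 12G value below is only an example; pick a figure suited to your RAM, and note these OIDs are FreeBSD-specific):

```
# In /boot/loader.conf (example value; tune for your system):
#   vfs.zfs.arc_max="12G"

# After rebooting, confirm the cap took effect and watch the live ARC size:
sysctl vfs.zfs.arc_max                 # configured maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size    # current ARC size, in bytes
```

If arcstats.size keeps pressing against arc_max while the deadlocks recur, the cap alone may not be the fix.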


Did the system still panic, or did it merely degrade in performance? When
performance heads south, is the system swapping?
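One quick way to answer the swapping question, using standard FreeBSD tools (a sketch; run while the slowdown is happening):

```
swapinfo -h     # how much swap space is currently in use
vmstat 1 5      # watch the pi/po (page-in/page-out) columns under "page";
                # sustained nonzero values there indicate active paging
```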
