Date:      Tue, 14 Jul 2015 15:49:04 +0000
From:      Matt Churchyard <matt.churchyard@userve.net>
To:        Sean Chittenden <seanc@groupon.com>, Adrian Gschwend <ml-ktk@netlabs.org>
Cc:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   RE: FreeBSD 10.1 Memory Exhaustion
Message-ID:  <b148a97fef04403e990dd02970738187@SERVER.ad.usd-group.com>
In-Reply-To: <CACfj5vJvAz9StvjTrA1TzfS+Mhi_qSrOc_qBNHr8qXbiAj81xw@mail.gmail.com>
References:  <CAB2_NwCngPqFH4q-YZk00RO_aVF9JraeSsVX3xS0z5EV3YGa1Q@mail.gmail.com> <55A3A800.5060904@denninger.net> <55A4D5B7.2030603@freebsd.org> <55A4E5AB.8060909@netlabs.org> <CACfj5vJvAz9StvjTrA1TzfS+Mhi_qSrOc_qBNHr8qXbiAj81xw@mail.gmail.com>

Yes, I'm one of those, and I suspect it's very common.
I generally just use official FreeBSD releases and let the devs decide which patches are/aren't applied.
However, I limit the max ARC on all my ZFS systems to leave a few GB free (my latest system has a limit of 28G out of 32GB total).
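
For reference, the relevant /boot/loader.conf line on that box looks something like this (the exact value is per-machine, of course):

vfs.zfs.arc_max="28G"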

The last time I didn't limit the ARC was on a new system about a year ago, and it panicked due to memory exhaustion after a few days.
I don't bother letting it near all my RAM anymore.

For me (and probably many other ZFS users who want their machines to stay up for more than a few days) it's much easier to do this than to run manually patched kernels that *might* fix it but *might* also cause other problems. It also gives me control over how much memory ZFS gets and how much I leave for my other applications.

It would be nice if it 'just worked', but I'm very reluctant to take the limits off.

-Matt

-----Original Message-----
From: owner-freebsd-fs@freebsd.org [mailto:owner-freebsd-fs@freebsd.org] On Behalf Of Sean Chittenden
Sent: 14 July 2015 16:10
To: Adrian Gschwend
Cc: FreeBSD Filesystems
Subject: Re: FreeBSD 10.1 Memory Exhaustion

I think the reason this is not seen more often is that people frequently put limits on the ARC in /boot/loader.conf:

vfs.zfs.arc_min="18G"
vfs.zfs.arc_max="149G"

The ZFS ARC *should* not require those settings, but currently does for mixed workloads (e.g. databases) in order to be "stable".  By setting fixed sizes on the ARC, UMA and the ARC are much more cooperative, in that they each have their own memory regions to manage, so this behavior is not seen as often.
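
If you want to watch the two of them, both the ARC size and the per-zone UMA usage are visible from the base system on 10.x:

sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes
vmstat -z                             # per-zone UMA allocation statistics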

To be clear, however, it should not be necessary to set parameters like these in /boot/loader.conf in order to obtain consistent operational behavior.  I'd be curious to know whether someone running 10.2-BETA without patches is able to trigger this behavior.  There was work done between 10.1 and now that reportedly helped with this.  To what extent it helped, however, I can't yet say.
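
For anyone willing to try, something as crude as the following should show whether the pressure builds (the file path is just a placeholder; use anything large enough to not fit in RAM):

freebsd-version -ku                              # confirm kernel/userland versions first
dd if=/pool/some/large/file of=/dev/null bs=1m   # stream data to grow the ARC
top -b | head -8                                 # watch Wired memory from another shell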

-sc



On Tue, Jul 14, 2015 at 3:34 AM, Adrian Gschwend <ml-ktk@netlabs.org> wrote:

> On 14.07.15 11:26, Matthew Seaman wrote:
>
>
> > On 07/13/15 12:58, Karl Denninger wrote:
> >> Put this on your box and see if the problem goes away.... :-)
>
> [...]
>
> > I know that you, Karl, and a number of others have been advocating
> > to get this patch set committed.  Having now personally run into the
> > sort of problems that this addresses, I can say that I would very
> > much like to see this go in.  Conditional of course on this actually
> > solving the problems I and others have been experiencing without
> > introducing significant regressions elsewhere.  It's only had a day's
> > testing from me so far, but it's looking good.  If it survives a
> > week without the system locking up, I'll be convinced.
>
> I was the one who posted the message last year that triggered Karl
> to analyze this, as he was seeing similar issues:
>
> https://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019043.html
>
> https://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019057.html
>
> Since then I've been running Karl's patch and have never had an issue
> since.  Note that my boxes were basically unusable without the patch.
>
> So I've been hoping ever since that the patch will be committed soon.
>
> >    * The memory exhaustion effect or equivalent memory pressures can be
> >      triggered at will
> >    * The test doesn't require unfeasibly large resources to run
> >    * The behaviour provides a good model for real-world deployments
> >
> > Maybe these tests would be too large-scale to run every day in
> > Jenkins, but having them available as part of, say, the release
> > process seems like a no-brainer to me.
>
> I wouldn't consider my setup "unfeasibly large resources"; in fact
> I triggered it with a bunch of jails running on one machine,
> providing various Internet services for a small open-source community.
> I was always surprised that more people didn't run into this issue,
> as I'd had it since 8.x.
>
> regards
>
> Adrian



--
Sean Chittenden
_______________________________________________
freebsd-fs@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"


