Date:      Wed, 4 Jun 2008 23:53:30 -0700
From:      Jeremy Chadwick <koitsu@FreeBSD.org>
To:        Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc:        Dag-Erling Smørgrav <des@des.no>, Tz-Huan Huang <tzhuan@csie.org>, freebsd-hackers@freebsd.org
Subject:   Re: Is there any way to increase the KVM?
Message-ID:  <20080605065330.GA62591@eos.sc1.parodius.com>
In-Reply-To: <20080605062728.GA4278@garage.freebsd.pl>
References:  <6a7033710805302252v43a7b240x66ca3f5e3dd5fda4@mail.gmail.com> <20080603135308.GC3434@garage.freebsd.pl> <6a7033710806032317g4dbe8845h26a1196016b9c440@mail.gmail.com> <86zlq140x0.fsf@ds4.des.no> <6a7033710806041053g4a5c2fdftd7202b708bff363c@mail.gmail.com> <20080605062728.GA4278@garage.freebsd.pl>

On Thu, Jun 05, 2008 at 08:27:28AM +0200, Pawel Jakub Dawidek wrote:
> On Thu, Jun 05, 2008 at 01:53:37AM +0800, Tz-Huan Huang wrote:
> > > On Thu, Jun 5, 2008 at 12:31 AM, Dag-Erling Smørgrav <des@des.no> wrote:
> > > "Tz-Huan Huang" <tzhuan@csie.org> writes:
> > >> The vfs.zfs.arc_max was set to 512M originally, the machine survived for
> > >> 4 days and panicked this morning. Now the vfs.zfs.arc_max is set to 64M
> > >> by Oliver's suggestion, let's see how long it will survive. :-)
> > >
> > > des@ds4 ~% uname -a
> > > FreeBSD ds4.des.no 8.0-CURRENT FreeBSD 8.0-CURRENT #27: Sat Feb 23 01:24:32 CET 2008     des@ds4.des.no:/usr/obj/usr/src/sys/ds4  amd64
> > > des@ds4 ~% sysctl -h vm.kmem_size_min vm.kmem_size_max vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max
> > > vm.kmem_size_min: 1,073,741,824
> > > vm.kmem_size_max: 1,073,741,824
> > > vm.kmem_size: 1,073,741,824
> > > vfs.zfs.arc_min: 67,108,864
> > > vfs.zfs.arc_max: 536,870,912
> > > des@ds4 ~% zpool list
> > > NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> > > raid                   1.45T    435G   1.03T    29%  ONLINE     -
> > > des@ds4 ~% zfs list | wc -l
> > >     210
> > >
> > > Haven't had a single panic in over six months.
> > 
> > Thanks for the information; the major difference is that we run
> > on 7-stable and the size of our ZFS pool is much bigger.
> 
> I don't think the panics are related to pool size.  They're more
> related to the load and characteristics of your workload.

Not to add superfluous comments, but I agree.  It has little to do
with the actual pool size and much more to do with I/O activity.
However, multiple zpools will very likely make the panic happen
sooner, just by the nature of the beast: more zpools usually means
more overall I/O, since more things are being used via ZFS, so kmem
is exhausted more quickly.

> beast:root:~# zpool list
> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> tank                    732G    604G    128G    82%  ONLINE     -
> 
> but:
> 
> beast:root:~# zfs list | wc -l
>     1932
> 
> No panics.
> 
> PS. I'm quite sure the ZFS version I've in perforce will fix most if not
> all 'kmem_map too small' panics. It's not yet committed, but I do want
> to MFC it into RELENG_7.

That's great to hear, but the point I made regarding kmem_size not
being able to extend past 2GB (on i386 and amd64) still stands.  I've
looked at the code myself in an attempt to figure out where the actual
limitation is, and the code is beyond my understanding.  (It's somewhat
abstracted, but only to those who are completely unfamiliar with the VM
piece -- like me :-) ).

-- 
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
