Date:      Mon, 25 Jun 2001 19:22:52 -0500
From:      Alfred Perlstein <bright@sneakerz.org>
To:        Matt Dillon <dillon@earth.backplane.com>
Cc:        freebsd-stable@FreeBSD.ORG, tegge@FreeBSD.ORG
Subject:   Re: -stable weird panics
Message-ID:  <20010625192252.G64836@sneakerz.org>
In-Reply-To: <200106251740.f5PHeAY11356@earth.backplane.com>; from dillon@earth.backplane.com on Mon, Jun 25, 2001 at 10:40:10AM -0700
References:  <20010625145124.D64836@sneakerz.org> <200106251740.f5PHeAY11356@earth.backplane.com>

* Matt Dillon <dillon@earth.backplane.com> [010625 17:43] wrote:
> :
> :So why is zalloc dying when it looks like only about 90 megs of 
> :kernel memory is allocated?
> 
>     Are those active vnodes or cached vnodes?  What is kern.maxvnodes
>     set to?

(kgdb) print desiredvnodes
$1 = 132756
(kgdb) print wantfreevnodes
$2 = 25
(kgdb) print freevnodes
$3 = 24

It looks like we're just under the low watermark (freevnodes 24 <
wantfreevnodes 25):

        if (wantfreevnodes && freevnodes < wantfreevnodes) {
                vp = NULL;
...

which forces us to call zalloc, which is returning NULL.
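
For reference, the surrounding logic in getnewvnode() is roughly this
(paraphrased from vfs_subr.c, so take the exact shape with a grain of
salt):

        /* freevnodes (24) < wantfreevnodes (25): skip the free list. */
        if (wantfreevnodes && freevnodes < wantfreevnodes) {
                vp = NULL;
        } else {
                /* ... otherwise try to recycle one off the free list ... */
        }
        if (vp == NULL) {
                vp = (struct vnode *) zalloc(vnode_zone);
                /* no NULL check: the bzero() panics if zalloc failed */
                bzero((char *) vp, sizeof *vp);
        }

so once the free list is held back, every getnewvnode() call goes
straight at the zone.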

(kgdb) print *vnode_zone
$5 = {zlock = {lock_data = 0}, zitems = 0x0, zfreecnt = 0, zfreemin = 21, 
  znalloc = 91584, zkva = 0, zpagecount = 0, zpagemax = 0, zmax = 0, 
  ztotal = 91584, zsize = 192, zalloc = 5, zflags = 0, zallocflag = 2, 
  zobj = 0x0, zname = 0xc03578c8 "VNODE", znext = 0xc464ce80}
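
Annotating the fields that matter there (struct vm_zone from
vm/vm_zone.h; the comments are my reading of it):

        zitems   = 0x0          /* free list is empty */
        zfreecnt = 0            /* nothing to hand out without growing */
        zmax     = 0            /* no administrative cap on this zone */
        ztotal   = 91584        /* total items, all of them in use */
        zalloc   = 5            /* grows 5 pages at a time */
        zobj     = 0x0          /* no ZONE_INTERRUPT backing object, so
                                   growth has to come from kernel_map */

So the zone isn't clamped; it simply can't get more pages.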


>     Also, what's the full vmstat -m output on the crash dump?

See the bottom of this mail for the stats from the latest core.

> :Anyhow, I've added a check in getnewvnode to return ENOMEM if zalloc
> :fails, my concern is that other parts of the kernel are going to
> :blow up immediately after that is caught because it looks like
> :the majority of places don't expect zalloc to fail.
> :
> :Any suggestions will be helpful, any requests for more information
> :will happily be attempted.
> :
> :thanks,
> :-Alfred
> 
>     Well, there's definitely some kind of limit being hit here.  You
>     have to figure out what it is first.   Print out the zalloc zone
>     structure being used to see why it is returning NULL.  Maybe it
>     has hit its max count or something and the bug is that the zalloc
>     zone isn't scaled with kern.maxvnodes, or something like that.

There is no zone max; there's a malloc max, but AFAIK zalloc can't
hit that sort of limit.

Why is zalloc using kmem_alloc() instead of kmem_alloc_wait()?

I think this is what's getting hit. I don't really understand why
we're short on memory; it sure doesn't seem like we should be
failing in kmem_alloc() :(.
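
As far as I can tell, the growth path in _zget() (vm/vm_zone.c) for a
plain zone like VNODE boils down to this (again paraphrased):

        /* not ZONE_INTERRUPT: grab fresh pages straight from kernel_map */
        nbytes = z->zalloc * PAGE_SIZE;
        item = (void *) kmem_alloc(kernel_map, nbytes);
        if (item == NULL)
                return (NULL);  /* map exhausted -> zalloc returns NULL */

kmem_alloc() fails outright when the map has no free space, whereas
kmem_alloc_wait() would sleep until space shows up.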

Here are the stats from the latest crash. It's about the same place:
around 90 megs, and then getnewvnode's zalloc call stops behaving
properly.

Making getnewvnode() catch a NULL return from zalloc() and return
ENOMEM kept the system up, but it sure pissed off the userland
applications trying to open files.
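
The band-aid itself is small; in getnewvnode() it is essentially this
(reconstructed, not a verbatim diff):

        vp = (struct vnode *) zalloc(vnode_zone);
        if (vp == NULL)
                return (ENOMEM);        /* surfaces as a failed open */
        bzero((char *) vp, sizeof *vp);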

Memory statistics by bucket size
Size   In Use   Free   Requests  HighWater  Couldfree
  16      744    280     754299       0       1280
  32    90540    212     987589       0        640
  64    93864    216    4267909       0        320
 128     1575    185     959343       0        160
 256     2049     95     594972       0         80
 512      952     24       4167       0         40
  1K      210     70       6191       0         20
  2K       22      8        171       0         10
  4K       29      1        335       0          5
  8K        2      0         31       0          5
 16K       11      0         11       0          5
 32K        3      0         11       0          5
 64K        1      0          1       0          5
128K        1      0         15       0          5
256K        1      0          1       0          5
512K        7      0          7       0          5

Memory usage type by bucket size
Size  Type(s)
  16  uc_devlist, kld, MD disk, USB, p1003.1b, routetbl, ether_multi,
          vnodes, mount, pcb, soname, rman, bus, sysctl, temp, devbuf, atexit,
          proc-args
  32  atkbddev, kld, USB, tseg_qent, in_multi, routetbl, ether_multi,
          ifaddr, BPF, vnodes, cluster_save buffer, pcb, soname, taskqueue,
          SWAP, eventhandler, bus, sysctl, uidinfo, subproc, pgrp, temp,
          devbuf, proc-args, sigio
  64  AD driver, isadev, NFS req, in6_multi, routetbl, ether_multi, ifaddr,
          vnodes, cluster_save buffer, vfscache, pcb, rman, eventhandler, bus,
          subproc, session, ip6ndp, temp, devbuf, lockf, proc-args, file
 128  kld, USBdev, USB, ZONE, routetbl, vnodes, mount, vfscache, soname,
          ppbusdev, ttys, bus, cred, temp, devbuf, zombie, proc-args, dev_t,
          timecounter
 256  FFS node, newblk, NFS daemon, routetbl, ifaddr, vnodes, ttys, bus,
          subproc, temp, devbuf, proc-args, file desc
 512  ATA generic, USBdev, UFS mount, NFSV3 diroff, ifaddr, mount,
          BIO buffer, ptys, msg, ioctlops, bus, ip6ndp, temp, devbuf,
          file desc
  1K  MD disk, AD driver, NQNFS Lease, Export Host, ifaddr, BIO buffer,
          sem, ioctlops, bus, uidinfo, temp, devbuf
  2K  uc_devlist, UFS mount, BIO buffer, pcb, bus, temp, devbuf
  4K  memdesc, USB, UFS mount, sem, msg, bus, proc, temp, devbuf
  8K  mbuf, shm, bus
 16K  msg, devbuf
 32K  UFS mount, bus, devbuf
 64K  pagedep
128K  bus, temp
256K  MSDOSFS mount
512K  VM pgdata, UFS ihash, inodedep, NFS hash, vfscache, ISOFS mount,
          SWAP

Memory statistics by type                          Type  Kern
        Type  InUse MemUse HighUse  Limit Requests Limit Limit Size(s)
     atkbddev     2     1K      1K 204800K        2    0     0  32
   uc_devlist    30     3K      3K 204800K       30    0     0  16,2K
      memdesc     1     4K      4K 204800K        1    0     0  4K
         mbuf     1     8K      8K 204800K        1    0     0  8K
          kld     4     1K      1K 204800K       35    0     0  16,32,128
      MD disk     2     2K      2K 204800K        2    0     0  16,1K
    AD driver     1     1K      2K 204800K        6    0     0  64,1K
  ATA generic     0     1K      1K 204800K        1    0     0  512
       isadev    23     2K      2K 204800K       23    0     0  64
       USBdev     1     1K      1K 204800K        2    0     0  128,512
          USB    14    17K     17K 204800K       30    0     0  16,32,128,4K
         ZONE    15     2K      2K 204800K       15    0     0  128
    VM pgdata     1   512K    512K 204800K        1    0     0  512K
    UFS mount    15    47K     47K 204800K       15    0     0  512,2K,4K,32K
    UFS ihash     1   512K    512K 204800K        1    0     0  512K
     FFS node  1577   395K    396K 204800K   152351    0     0  256
       newblk     1     1K      1K 204800K        1    0     0  256
     inodedep     1   512K    512K 204800K        1    0     0  512K
      pagedep     1    64K     64K 204800K        1    0     0  64K
     p1003.1b     1     1K      1K 204800K        1    0     0  16
     NFS hash     1   512K    512K 204800K        1    0     0  512K
  NQNFS Lease     1     1K      1K 204800K        1    0     0  1K
 NFSV3 diroff   872   436K    436K 204800K      872    0     0  512
   NFS daemon     1     1K      1K 204800K        1    0     0  256
      NFS req     0     0K      2K 204800K   864655    0     0  64
    in6_multi     8     1K      1K 204800K        8    0     0  64
    tseg_qent     3     1K      2K 204800K    75610    0     0  32
  Export Host     8     8K      8K 204800K        8    0     0  1K
     in_multi     3     1K      1K 204800K        3    0     0  32
     routetbl   428    61K    252K 204800K    39047    0     0  16,32,64,128,256
  ether_multi    46     2K      2K 204800K       46    0     0  16,32,64
       ifaddr    39    10K     10K 204800K       39    0     0  32,64,256,512,1K
          BPF    11     1K      1K 204800K       11    0     0  32
MSDOSFS mount     1   256K    256K 204800K        1    0     0  256K
       vnodes 89998  2818K   2818K 204800K   444161    0     0  16,32,64,128,256
        mount    32    16K     16K 204800K       34    0     0  16,128,512
cluster_save buffer     0     0K      1K 204800K     6790    0     0  32,64
     vfscache 93127  6845K   6845K 204800K   690773    0     0  64,128,512K
   BIO buffer   172   175K    237K 204800K      550    0     0  512,1K,2K
  ISOFS mount     1   512K    512K 204800K        1    0     0  512K
          pcb    70    10K     11K 204800K   149922    0     0  16,32,64,2K
       soname    27     1K      1K 204800K   569923    0     0  16,32,128
     ppbusdev     3     1K      1K 204800K        3    0     0  128
         ptys     3     2K      2K 204800K        3    0     0  512
         ttys   451    58K     63K 204800K      958    0     0  128,256
          shm     1     8K      8K 204800K        1    0     0  8K
          sem     3     6K      6K 204800K        3    0     0  1K,4K
          msg     4    25K     25K 204800K        4    0     0  512,4K,16K
         rman    65     4K      4K 204800K      157    0     0  16,64
     ioctlops     0     0K      1K 204800K       16    0     0  512,1K
    taskqueue     1     1K      1K 204800K        1    0     0  32
         SWAP     2   549K    549K 204800K        2    0     0  32,512K
 eventhandler    13     1K      1K 204800K       13    0     0  32,64
          bus   769    68K    150K 204800K     1636    0     0  16,32,64,128,256,
512,1K,2K,4K,8K,32K,128K
       sysctl     0     0K      1K 204800K     1022    0     0  16,32
      uidinfo     8     2K      2K 204800K    19554    0     0  32,1K
         cred   193    25K     38K 204800K   314724    0     0  128
      subproc   169    11K     23K 204800K   646965    0     0  32,64,256
         proc     2     8K      8K 204800K        2    0     0  4K
      session    18     2K      2K 204800K      236    0     0  64
         pgrp    18     1K      1K 204800K      524    0     0  32
       ip6ndp     1     1K      1K 204800K        3    0     0  64,512
         temp   375    99K    128K 204800K   380712    0     0  16,32,64,128,256,
512,1K,2K,4K,128K
       devbuf   438   348K    348K 204800K    10670    0     0  16,32,64,128,256,
512,1K,2K,4K,16K,32K
        lockf     3     1K      1K 204800K    11617    0     0  64
       atexit     1     1K      1K 204800K        1    0     0  16
       zombie     5     1K      3K 204800K   323293    0     0  128
    proc-args    58     5K     15K 204800K   311493    0     0  16,32,64,128,256
        sigio     1     1K      1K 204800K        1    0     0  32
         file   160    10K     33K 204800K  2227982    0     0  64
    file desc    76    20K     90K 204800K   327851    0     0  256,512
        dev_t   619    78K     78K 204800K      619    0     0  128
  timecounter    10     2K      2K 204800K       10    0     0  128

Memory Totals:  In Use    Free    Requests
                15063K    174K     7575053




ZONE            used    total   mem-use 
PIPE            31      306        4/47K
SWAPMETA        0       0          0/0K
unpcb           4       128        0/8K
ripcb           0       21         0/3K
tcpcb           237     735      125/390K
udpcb           35      84         6/15K
tcpcb           0       0          0/0K
socket          276     798       51/149K
KNOTE           0       128        0/8K
NFSNODE         89972   90000   28116/28125K
NFSMOUNT        26      35        13/18K
VNODE           91584   91584   17172/17172K
NAMEI           0       32         0/32K
VMSPACE         75      256       14/48K
PROC            79      245       32/99K
DP fakepg       0       0          0/0K
PV ENTRY        28489   524263   778/14335K
MAP ENTRY       791     2253      37/105K
KMAP ENTRY      598     978       28/45K
MAP             7       10         0/1K
VM OBJECT       56500   57040   5296/5347K
------------------------------------------
TOTAL                           51679/65954K

Yup, about the same place: 90 megs.

vfs.vmiodirenable=1 is set.

-Alfred
