Date:      Fri, 13 Mar 2015 06:32:03 +0100
From:      Mateusz Guzik <mjguzik@gmail.com>
To:        alc@freebsd.org
Cc:        FreeBSD Current <freebsd-current@freebsd.org>, Ryan Stone <rysto32@gmail.com>
Subject:   Re: [PATCH] Convert the VFS cache lock to an rmlock
Message-ID:  <20150313053203.GC9153@dft-labs.eu>
In-Reply-To: <CAJUyCcMoRu7JMCWfYb3acBF=fNopKAV4Ge8-mhApjuJ7ujOqFg@mail.gmail.com>
References:  <CAFMmRNysnUezX9ozGrCpivPCTMYRJtoxm9ijR0yQO03LpXnwBQ@mail.gmail.com> <20150312173635.GB9153@dft-labs.eu> <CAJUyCcMoRu7JMCWfYb3acBF=fNopKAV4Ge8-mhApjuJ7ujOqFg@mail.gmail.com>

On Thu, Mar 12, 2015 at 06:13:00PM -0500, Alan Cox wrote:
> Below are partial results from a profile of a parallel (-j7) "buildworld" on
> a 6-core machine that I did after the introduction of pmap_advise, so this
> is not a new profile.  The results are sorted by total waiting time and
> only the top 20 entries are listed.
> 

Well, I ran stuff on lynx2 in the zoo on a fresh -head with debugging
disabled (MALLOC_PRODUCTION included) and got quite different results.

The machine is an Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
(2 package(s) x 10 core(s) x 2 SMT threads) with 32GB of RAM.

Everything was built in a chroot, with the world hosted on ZFS.
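
For reference, the numbers below come from the kernel lock profiler
(LOCK_PROFILING(9)); the kernel has to be built with "options
LOCK_PROFILING". Normally you just poke the debug.lock.prof.* sysctls
from the shell, but here is a minimal C sketch of the same steps via
sysctlbyname(3), with the workload left as a placeholder:

#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        int one = 1;
        size_t len;
        char *buf;

        /* Discard any old samples, then start profiling. */
        if (sysctlbyname("debug.lock.prof.reset", NULL, NULL, &one,
            sizeof(one)) != 0)
                err(1, "debug.lock.prof.reset");
        if (sysctlbyname("debug.lock.prof.enable", NULL, NULL, &one,
            sizeof(one)) != 0)
                err(1, "debug.lock.prof.enable");

        /* ... run the workload here, e.g. make -j 12 buildworld ... */

        /* Size, fetch and dump the accumulated per-call-site statistics. */
        if (sysctlbyname("debug.lock.prof.stats", NULL, &len, NULL, 0) != 0)
                err(1, "debug.lock.prof.stats");
        if ((buf = malloc(len)) == NULL)
                err(1, "malloc");
        if (sysctlbyname("debug.lock.prof.stats", buf, &len, NULL, 0) != 0)
                err(1, "debug.lock.prof.stats");
        fputs(buf, stdout);
        free(buf);
        return (0);
}

Resetting the buffers before each run is what makes the per-run tables
below comparable.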

>      max  wait_max       total  wait_total       count    avg wait_avg cnt_hold cnt_lock name
>
>     1027    208500    16292932  1658585700     5297163      3    313  0 3313855 kern/vfs_cache.c:629 (rw:Name Cache)
>   208564    186514 19080891106  1129189627   355575930     53      3  0 1323051 kern/vfs_subr.c:2099 (lockmgr:ufs)
>   169241    148057   193721142   419075449    13819553     14     30  0  110089 kern/vfs_subr.c:2210 (lockmgr:ufs)
>   187092    191775  1923061952   257319238   328416784      5      0  0 5106537 kern/vfs_cache.c:488 (rw:Name Cache)
> 

make -j 12 buildworld on a freshly booted system (i.e., the maximum number of namecache insertions):

      32       292     3042019    33400306     8419725      0      3  0 2578026 kern/sys_pipe.c:1438 (sleep mutex:pipe mutex)
  170608    152572   642385744    27054977   202605015      3      0  0 1306662 kern/vfs_subr.c:2176 (lockmgr:zfs)
      66       198    45170221    22523597   161976016      0      0  0 19988525 vm/vm_page.c:1502 (sleep mutex:vm page free queue)
      45       413    17804028    20770896   160786478      0      0  0 19368394 vm/vm_page.c:2294 (sleep mutex:vm page free queue)
      32       625    19406548     8414459   142554059      0      0  0 11198547 vm/vm_page.c:2053 (sleep mutex:vm active pagequeue)
      35      1704    19560396     7867435   142655646      0      0  0 9641161 vm/vm_page.c:2097 (sleep mutex:vm active pagequeue)
      14   6675994       27451     6677152       53550      0    124  0   2394 kern/sched_ule.c:2630 (spin mutex:sched lock 23)
    2121       879    19982319     4157880     7753007      2      0  0 235477 vm/vm_fault.c:785 (rw:vm object)
   27715      1104     9922805     3339829    12614622      0      0  0  83840 vm/vm_map.c:2883 (rw:vm object)
       6   2240335       26594     2833529       55057      0     51  0   2643 kern/sched_ule.c:2630 (spin mutex:sched lock 17)
      31     22617     1424889     2768798      368786      3      7  0  11555 kern/kern_exec.c:590 (lockmgr:zfs)
       7   2027019       26247     2218701       53980      0     41  0   2432 kern/sched_ule.c:2630 (spin mutex:sched lock 5)
   57942    153184    41616375     2120917      368786    112      5  0   9438 kern/imgact_elf.c:829 (lockmgr:zfs)
     184       557    65168745     1715154   214930217      0      0  0 2104013 kern/vfs_cache.c:487 (rw:Name Cache)
^^^^ name cache only here
       3   1695608       26302     1696719       56150      0     30  0   2377 kern/sched_ule.c:2630 (spin mutex:sched lock 18)
      52       176    49658348     1606545   199234071      0      0  0 2212598 kern/vfs_cache.c:668 (sleep mutex:vnode interlock)
       6   1497134       26337     1583199       55416      0     28  0   2096 kern/sched_ule.c:2630 (spin mutex:sched lock 13)
    1705      2155    55312677     1519894   142655701      0      0  0 435090 vm/vm_fault.c:997 (sleep mutex:vm page)
      14       721      187832     1449400     2126043      0      0  0  28314 vm/vm_object.c:646 (rw:vm object)
      74        62    31785614     1342727   268853124      0      0  0 2235545 kern/vfs_subr.c:2254 (sleep mutex:vnode interlock)

So even despite the large number of insertions, name cache contention was not
a big concern.

Here is another buildworld after clearing /usr/obj and resetting the stats (no
reboot, so the cache was already populated):

      31       378     3827573    40116363     8544224      0      4  0 2602464 kern/sys_pipe.c:1438 (sleep mutex:pipe mutex)
      53       680    45790806    26978449   161004693      0      0  0 21077331 vm/vm_page.c:1502 (sleep mutex:vm page free queue)
      39       210    18513457    25286194   160721062      0      0  0 20946513 vm/vm_page.c:2294 (sleep mutex:vm page free queue)
   19806     19377   596036045    19086474   202605527      2      0  0 1361883 kern/vfs_subr.c:2176 (lockmgr:zfs)
      40       810    19593696     9458254   142544401      0      0  0 11659059 vm/vm_page.c:2053 (sleep mutex:vm active pagequeue)
      45      1713    19955926     8883884   142638570      0      0  0 10061154 vm/vm_page.c:2097 (sleep mutex:vm active pagequeue)
      15   4702161       28765     4715991       59859      0     78  0   2659 kern/sched_ule.c:2630 (spin mutex:sched lock 6)
    1838      1213    20189264     4246013     7751227      2      0  0 243511 vm/vm_fault.c:785 (rw:vm object)
   34942       782    10815453     3461181    12611561      0      0  0  87884 vm/vm_map.c:2883 (rw:vm object)
       7   2111512       27390     3164572       55775      0     56  0   2239 kern/sched_ule.c:2630 (spin mutex:sched lock 7)
      18      2503     1417189     2849233      368785      3      7  0  12099 kern/kern_exec.c:590 (lockmgr:zfs)
      52       898    66378192     1861837   214861582      0      0  0 2221482 kern/vfs_cache.c:487 (rw:Name Cache)
      16        52    49359798     1685568   199202365      0      0  0 2288836 kern/vfs_cache.c:668 (sleep mutex:vnode interlock)
      13       811      190617     1527468     2125719      0      0  0  30154 vm/vm_object.c:646 (rw:vm object)
      38        39    31672997     1393102   268812945      0      0  0 2304916 kern/vfs_subr.c:2254 (sleep mutex:vnode interlock)
    1714      2111    56782095     1303511   142638594      0      0  0 199781 vm/vm_fault.c:997 (sleep mutex:vm page)
      15    765633       28820     1220541       59670      0     20  0   2805 kern/sched_ule.c:2630 (spin mutex:sched lock 8)
     177       143    59407392     1213817    58267983      1      0  0 377555 amd64/amd64/pmap.c:5362 (rw:pmap pv list)
      37        21    28518097     1199499   291370530      0      0  0 1372317 kern/subr_sleepqueue.c:251 (spin mutex:sleepq chain)
      15    809971       29102     1103478       59356      0     18  0   2737 kern/sched_ule.c:2630 (spin mutex:sched lock 19)

So, it may be that something is wrong with my test environment, but as it
stands I do not expect namecache lock contention to have a significant impact
on buildworld/buildkernel.
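
As an aside on the patch itself: the point of an rmlock is that read
acquisitions avoid all writes to shared cachelines, at the price of write
acquisitions that must synchronize with every CPU, so the conversion only
pays off if the lock is very read-mostly. For anyone unfamiliar with it,
a minimal sketch of the rmlock(9) API follows; the names are illustrative
and not taken from the actual patch:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/rmlock.h>

/* Hypothetical stand-in for the namecache lock. */
static struct rmlock example_cache_lock;

static void
example_init(void)
{

        rm_init(&example_cache_lock, "Name Cache");
}

static void
example_lookup(void)
{
        struct rm_priotracker tracker;  /* per-reader state, kept on the stack */

        /* Read path: no atomics on a shared cacheline in the common case. */
        rm_rlock(&example_cache_lock, &tracker);
        /* ... perform the lookup ... */
        rm_runlock(&example_cache_lock, &tracker);
}

static void
example_insert(void)
{

        /* Write path: synchronizes with all CPUs, far costlier than rw_wlock(). */
        rm_wlock(&example_cache_lock);
        /* ... insert or evict an entry ... */
        rm_wunlock(&example_cache_lock);
}

Given how insert-heavy the profile is right after boot, the more expensive
write path is the part worth watching.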

-- 
Mateusz Guzik <mjguzik gmail.com>


