Date:      Mon, 12 Nov 2012 13:28:02 -0800 (PST)
From:      Sushanth Rai <sushanth_rai@yahoo.com>
To:        alc@freebsd.org, Konstantin Belousov <kostikbel@gmail.com>
Cc:        pho@freebsd.org, "Sears, Steven" <Steven.Sears@netapp.com>, "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>
Subject:   Re: Memory reserves or lack thereof
Message-ID:  <1352755682.93266.YahooMailClassic@web181701.mail.ne1.yahoo.com>
In-Reply-To: <20121112133638.GZ73505@kib.kiev.ua>

This patch still doesn't address the issue of M_NOWAIT calls driving the
memory all the way down to 2 pages, right? It would be nice to have
M_NOWAIT just be the non-sleeping version of M_WAITOK, with the
M_USE_RESERVE flag to dig deep.
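
Something like this (an untested sketch, only to illustrate the mapping I
have in mind; VM_ALLOC_NORMAL is the shallowest class in the
vm_page_alloc() guard quoted below, stopping at v_free_reserved):

    /*
     * Sketch only: a plain M_NOWAIT allocation digs no deeper than
     * an ordinary allocation would; only an explicit M_USE_RESERVE
     * is allowed to drain the reserve.
     */
    if (wait & M_USE_RESERVE)
            pflags = VM_ALLOC_INTERRUPT;    /* may take the last pages */
    else
            pflags = VM_ALLOC_NORMAL;       /* stops at v_free_reserved */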

Sushanth

--- On Mon, 11/12/12, Konstantin Belousov <kostikbel@gmail.com> wrote:

> From: Konstantin Belousov <kostikbel@gmail.com>
> Subject: Re: Memory reserves or lack thereof
> To: alc@freebsd.org
> Cc: pho@freebsd.org, "Sears, Steven" <Steven.Sears@netapp.com>, "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>
> Date: Monday, November 12, 2012, 5:36 AM
> On Sun, Nov 11, 2012 at 03:40:24PM -0600, Alan Cox wrote:
> > On Sat, Nov 10, 2012 at 7:20 AM, Konstantin Belousov <kostikbel@gmail.com> wrote:
> >
> > > On Fri, Nov 09, 2012 at 07:10:04PM +0000, Sears, Steven wrote:
> > > > I have a memory subsystem design question that I'm hoping someone can
> > > > answer.
> > > >
> > > > I've been looking at a machine that is completely out of memory, as in
> > > >
> > > >     v_free_count = 0,
> > > >     v_cache_count = 0,
> > > >
> > > > I wondered how a machine could completely run out of memory like this,
> > > > especially after finding a lack of interrupt storms or other pathologies
> > > > that would tend to overcommit memory. So I started investigating.
> > > >
> > > > Most allocators come down to vm_page_alloc(), which has this guard:
> > > >
> > > >     if ((curproc == pageproc) && (page_req != VM_ALLOC_INTERRUPT)) {
> > > >             page_req = VM_ALLOC_SYSTEM;
> > > >     };
> > > >
> > > >     if (cnt.v_free_count + cnt.v_cache_count > cnt.v_free_reserved ||
> > > >         (page_req == VM_ALLOC_SYSTEM &&
> > > >         cnt.v_free_count + cnt.v_cache_count > cnt.v_interrupt_free_min) ||
> > > >         (page_req == VM_ALLOC_INTERRUPT &&
> > > >         cnt.v_free_count + cnt.v_cache_count > 0)) {
> > > >
> > > > The key observation is if VM_ALLOC_INTERRUPT is set, it will allocate
> > > > every last page.
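> > > >
> > > > In other words (my paraphrase of the guard above, not kernel text):
> > > >
> > > >     VM_ALLOC_NORMAL:    only while free + cache > v_free_reserved
> > > >     VM_ALLOC_SYSTEM:    may continue down to v_interrupt_free_min
> > > >     VM_ALLOC_INTERRUPT: may take every last page (free + cache > 0)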
> > > >
> > > > From the name one might expect VM_ALLOC_INTERRUPT to be somewhat rare,
> > > > perhaps only used from interrupt threads. Not so, see kmem_malloc() or
> > > > uma_small_alloc() which both contain this mapping:
> > > >
> > > >     if ((flags & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> > > >             pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> > > >     else
> > > >             pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> > > >
> > > > Note that M_USE_RESERVE has been deprecated and is used in just a
> > > > handful of places. Also note that lots of code paths come through these
> > > > routines.
> > > >
> > > > What this means is essentially _any_ allocation using M_NOWAIT will
> > > > bypass whatever reserves have been held back and will take every last
> > > > page available.
> > > >
> > > > There is no documentation stating M_NOWAIT has this side effect of
> > > > essentially being privileged, so any innocuous piece of code that can't
> > > > block will use it. And of course M_NOWAIT is literally used all over.
> > > >
> > > > It looks to me like the design goal of the BSD allocators is on
> > > > recovery; it will give all pages away knowing it can recover.
> > > >
> > > > Am I missing anything? I would have expected some small number of pages
> > > > to be held in reserve just in case. And I didn't expect M_NOWAIT to be
> > > > a sort of back door for grabbing memory.
> > > >
> > >
> > > Your analysis is right, there is nothing to add or correct.
> > > This is the reason to strongly prefer M_WAITOK.
> > >
> >
> > Agreed.  Once upon a time, before SMPng, M_NOWAIT was rarely used.  It was
> > well understood that it should only be used by interrupt handlers.
> >
> > The trouble is that M_NOWAIT conflates two orthogonal things.  The obvious
> > being that the allocation shouldn't sleep.  The other being how far we're
> > willing to deplete the cache/free page queues.
> >
> > When fine-grained locking got sprinkled throughout the kernel, we all too
> > often found ourselves wanting to do allocations without the possibility of
> > blocking.  So, M_NOWAIT became commonplace, where it wasn't before.
> >
> > This had the unintended consequence of introducing a lot of memory
> > allocations in the top-half of the kernel, i.e., non-interrupt handling
> > code, that were digging deep into the cache/free page queues.
> >
> > Also, ironically, in today's kernel an "M_NOWAIT | M_USE_RESERVE"
> > allocation is less likely to succeed than an "M_NOWAIT" allocation.
> > However, prior to FreeBSD 7.x, M_NOWAIT couldn't allocate a cached page;
> > it could only allocate a free page.  M_USE_RESERVE said that it was ok to
> > allocate a cached page even though M_NOWAIT was specified.  Consequently,
> > the system wouldn't dig as far into the free page queue if M_USE_RESERVE
> > was specified, because it was allowed to reclaim a cached page.
> >
> > In conclusion, I think it's time that we change M_NOWAIT so that it
> > doesn't dig any deeper into the cache/free page queues than M_WAITOK does
> > and reintroduce an M_USE_RESERVE-like flag that says to dig deep into the
> > cache/free page queues.  The trouble is that we then need to identify all
> > of those places that are implicitly depending on the current behavior of
> > M_NOWAIT also digging deep into the cache/free page queues so that we can
> > add an explicit M_USE_RESERVE.
> >
> > Alan
> >
> > P.S. I suspect that we should also increase the size of the "page reserve"
> > that is kept for VM_ALLOC_INTERRUPT allocations in vm_page_alloc*().  How
> > many legitimate users of a new M_USE_RESERVE-like flag in today's kernel
> > could actually be satisfied by two pages?
>
> I am almost sure that most people who put in the M_NOWAIT flag do not
> know about the 'allow a deeper drain of the free queue' effect. As such,
> I believe we should flip the meaning of M_NOWAIT/M_USE_RESERVE. My only
> expectation of problematic places would be the swapout path.
>
> I found a single explicit use of M_USE_RESERVE in the kernel,
> so the flip is relatively simple.
>
> Below is the patch, which I only compile-tested on amd64, and which
> booted fine.
>
> Peter, could you please give it a run, to see obvious deadlocks, if any?
>
> diff --git a/sys/amd64/amd64/uma_machdep.c b/sys/amd64/amd64/uma_machdep.c
> index dc9c307..ab1e869 100644
> --- a/sys/amd64/amd64/uma_machdep.c
> +++ b/sys/amd64/amd64/uma_machdep.c
> @@ -29,6 +29,7 @@ __FBSDID("$FreeBSD$");
>  
>  #include <sys/param.h>
>  #include <sys/lock.h>
> +#include <sys/malloc.h>
>  #include <sys/mutex.h>
>  #include <sys/systm.h>
>  #include <vm/vm.h>
> @@ -48,12 +49,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>      int pflags;
>  
>      *flags = UMA_SLAB_PRIV;
> -    if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
> -    if (wait & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> +    pflags = m2vm_flags(wait, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
>      for (;;) {
>          m = vm_page_alloc(NULL, 0, pflags);
>          if (m == NULL) {
> diff --git a/sys/arm/arm/vm_machdep.c b/sys/arm/arm/vm_machdep.c
> index f60cdb1..75366e3 100644
> --- a/sys/arm/arm/vm_machdep.c
> +++ b/sys/arm/arm/vm_machdep.c
> @@ -651,12 +651,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>              ret = ((void *)kmem_malloc(kmem_map, bytes, M_NOWAIT));
>              return (ret);
>          }
> -        if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -            pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> -        else
> -            pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> -        if (wait & M_ZERO)
> -            pflags |= VM_ALLOC_ZERO;
> +        pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
>          for (;;) {
>              m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
>              if (m == NULL) {
> diff --git a/sys/fs/devfs/devfs_devs.c b/sys/fs/devfs/devfs_devs.c
> index 71caa29..2ce1ca6 100644
> --- a/sys/fs/devfs/devfs_devs.c
> +++ b/sys/fs/devfs/devfs_devs.c
> @@ -121,7 +121,7 @@ devfs_alloc(int flags)
>      struct cdev *cdev;
>      struct timespec ts;
>  
> -    cdp = malloc(sizeof *cdp, M_CDEVP, M_USE_RESERVE | M_ZERO |
> +    cdp = malloc(sizeof *cdp, M_CDEVP, M_ZERO |
>          ((flags & MAKEDEV_NOWAIT) ? M_NOWAIT : M_WAITOK));
>      if (cdp == NULL)
>          return (NULL);
> diff --git a/sys/ia64/ia64/uma_machdep.c b/sys/ia64/ia64/uma_machdep.c
> index 37353ff..9f77762 100644
> --- a/sys/ia64/ia64/uma_machdep.c
> +++ b/sys/ia64/ia64/uma_machdep.c
> @@ -46,12 +46,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>      int pflags;
>  
>      *flags = UMA_SLAB_PRIV;
> -    if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> -    if (wait & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> +    pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
>  
>      for (;;) {
>          m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
> diff --git a/sys/mips/mips/uma_machdep.c b/sys/mips/mips/uma_machdep.c
> index 798e632..24baef0 100644
> --- a/sys/mips/mips/uma_machdep.c
> +++ b/sys/mips/mips/uma_machdep.c
> @@ -48,11 +48,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>      void *va;
>  
>      *flags = UMA_SLAB_PRIV;
> -
> -    if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT;
> -    else
> -        pflags = VM_ALLOC_SYSTEM;
> +    pflags = m2vm_flags(wait, 0);
>  
>      for (;;) {
>          m = pmap_alloc_direct_page(0, pflags);
> diff --git a/sys/powerpc/aim/mmu_oea64.c b/sys/powerpc/aim/mmu_oea64.c
> index a491680..3e320b9 100644
> --- a/sys/powerpc/aim/mmu_oea64.c
> +++ b/sys/powerpc/aim/mmu_oea64.c
> @@ -1369,12 +1369,7 @@ moea64_uma_page_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>      *flags = UMA_SLAB_PRIV;
>      needed_lock = !PMAP_LOCKED(kernel_pmap);
>  
> -    if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> -    if (wait & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> +    pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
>  
>      for (;;) {
>          m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
> diff --git a/sys/powerpc/aim/slb.c b/sys/powerpc/aim/slb.c
> index 162c7fb..3882bfa 100644
> --- a/sys/powerpc/aim/slb.c
> +++ b/sys/powerpc/aim/slb.c
> @@ -483,12 +483,7 @@ slb_uma_real_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>          realmax = platform_real_maxaddr();
>  
>      *flags = UMA_SLAB_PRIV;
> -    if ((wait & (M_NOWAIT | M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
> -    if (wait & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> +    pflags = m2vm_flags(wait, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
>  
>      for (;;) {
>          m = vm_page_alloc_contig(NULL, 0, pflags, 1, 0, realmax,
> diff --git a/sys/powerpc/aim/uma_machdep.c b/sys/powerpc/aim/uma_machdep.c
> index 39deb43..23a333f 100644
> --- a/sys/powerpc/aim/uma_machdep.c
> +++ b/sys/powerpc/aim/uma_machdep.c
> @@ -56,12 +56,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>      int pflags;
>  
>      *flags = UMA_SLAB_PRIV;
> -    if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> -    if (wait & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> +    pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
>  
>      for (;;) {
>          m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
> diff --git a/sys/sparc64/sparc64/vm_machdep.c b/sys/sparc64/sparc64/vm_machdep.c
> index cdb94c7..573ab3a 100644
> --- a/sys/sparc64/sparc64/vm_machdep.c
> +++ b/sys/sparc64/sparc64/vm_machdep.c
> @@ -501,14 +501,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
>      PMAP_STATS_INC(uma_nsmall_alloc);
>  
>      *flags = UMA_SLAB_PRIV;
> -
> -    if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> -
> -    if (wait & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> +    pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
>  
>      for (;;) {
>          m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
> diff --git a/sys/vm/vm_kern.c b/sys/vm/vm_kern.c
> index 46e7f1c..e4c3704 100644
> --- a/sys/vm/vm_kern.c
> +++ b/sys/vm/vm_kern.c
> @@ -222,12 +222,7 @@ kmem_alloc_attr(vm_map_t map, vm_size_t size, int flags, vm_paddr_t low,
>      vm_object_reference(object);
>      vm_map_insert(map, object, offset, addr, addr + size, VM_PROT_ALL,
>          VM_PROT_ALL, 0);
> -    if ((flags & (M_NOWAIT | M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_NOBUSY;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_NOBUSY;
> -    if (flags & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> +    pflags = m2vm_flags(flags, VM_ALLOC_NOBUSY);
>      VM_OBJECT_LOCK(object);
>      end_offset = offset + size;
>      for (; offset < end_offset; offset += PAGE_SIZE) {
> @@ -296,14 +291,7 @@ kmem_alloc_contig(vm_map_t map, vm_size_t size, int flags, vm_paddr_t low,
>      vm_object_reference(object);
>      vm_map_insert(map, object, offset, addr, addr + size, VM_PROT_ALL,
>          VM_PROT_ALL, 0);
> -    if ((flags & (M_NOWAIT | M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_NOBUSY;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_NOBUSY;
> -    if (flags & M_ZERO)
> -        pflags |= VM_ALLOC_ZERO;
> -    if (flags & M_NODUMP)
> -        pflags |= VM_ALLOC_NODUMP;
> +    pflags = m2vm_flags(flags, VM_ALLOC_NOBUSY);
>      VM_OBJECT_LOCK(object);
>      tries = 0;
>  retry:
> @@ -487,11 +475,7 @@ kmem_back(vm_map_t map, vm_offset_t addr, vm_size_t size, int flags)
>          entry->wired_count == 0 && (entry->eflags & MAP_ENTRY_IN_TRANSITION)
>          == 0, ("kmem_back: entry not found or misaligned"));
>  
> -    if ((flags & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> -        pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> -    else
> -        pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> -
> +    pflags = m2vm_flags(flags, VM_ALLOC_WIRED);
>      if (flags & M_ZERO)
>          pflags |= VM_ALLOC_ZERO;
>      if (flags & M_NODUMP)
> diff --git a/sys/vm/vm_page.h b/sys/vm/vm_page.h
> index 70b8416..0286a6d 100644
> --- a/sys/vm/vm_page.h
> +++ b/sys/vm/vm_page.h
> @@ -344,6 +344,24 @@ extern struct mtx_padalign vm_page_queue_mtx;
>  #define VM_ALLOC_COUNT_SHIFT    16
>  #define VM_ALLOC_COUNT(count)   ((count) << VM_ALLOC_COUNT_SHIFT)
>  
> +#ifdef M_NOWAIT
> +static inline int
> +m2vm_flags(int malloc_flags, int alloc_flags)
> +{
> +    int pflags;
> +
> +    if ((malloc_flags & (M_NOWAIT | M_USE_RESERVE)) == M_NOWAIT)
> +        pflags = VM_ALLOC_SYSTEM | alloc_flags;
> +    else
> +        pflags = VM_ALLOC_INTERRUPT | alloc_flags;
> +    if (malloc_flags & M_ZERO)
> +        pflags |= VM_ALLOC_ZERO;
> +    if (malloc_flags & M_NODUMP)
> +        pflags |= VM_ALLOC_NODUMP;
> +    return (pflags);
> +}
> +#endif
> +
>  void vm_page_busy(vm_page_t m);
>  void vm_page_flash(vm_page_t m);
>  void vm_page_io_start(vm_page_t m);
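>
> To illustrate the intended use after the flip (this example is not part
> of the patch): a rare caller that really must dig into the reserve, say
> somewhere in the swapout path, would now have to ask for it explicitly:
>
>     /* hypothetical non-sleeping caller that may drain the reserve */
>     p = malloc(sizeof(*p), M_TEMP, M_NOWAIT | M_USE_RESERVE);
>
> while a plain M_NOWAIT allocation maps to VM_ALLOC_SYSTEM and no longer
> takes the last pages.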


