Date:      Thu, 23 Jun 2011 10:36:06 +0800
From:      Adrian Chadd <adrian@freebsd.org>
To:        Alan Cox <alc@freebsd.org>
Cc:        svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org
Subject:   Re: svn commit: r223307 - head/sys/vm
Message-ID:  <BANLkTi=Ep2BSLmRd2xe9NNznvMDuUy1C0w@mail.gmail.com>
In-Reply-To: <201106191913.p5JJDOqJ006272@svn.freebsd.org>
References:  <201106191913.p5JJDOqJ006272@svn.freebsd.org>

Can this commit please be reverted whilst the kinks are worked out for MIPS?

I'm currently knee-deep in MIPS-related hackery, and I'd rather stay on
an unpatched -HEAD so the dogfood can be properly self-consumed.
Thanks,


Adrian

On 20 June 2011 03:13, Alan Cox <alc@freebsd.org> wrote:
> Author: alc
> Date: Sun Jun 19 19:13:24 2011
> New Revision: 223307
> URL: http://svn.freebsd.org/changeset/base/223307
>
> Log:
>   Precisely document the synchronization rules for the page's dirty field.
>   (Saying that the lock on the object that the page belongs to must be held
>   only represents one aspect of the rules.)
>
>   Eliminate the use of the page queues lock for atomically performing read-
>   modify-write operations on the dirty field when the underlying architecture
>   supports atomic operations on char and short types.
>
>   Document the fact that 32KB pages aren't really supported.
>
>   Reviewed by:  attilio, kib
>
> Modified:
>   head/sys/vm/vm_fault.c
>   head/sys/vm/vm_page.c
>   head/sys/vm/vm_page.h
>
> Modified: head/sys/vm/vm_fault.c
> ==============================================================================
> --- head/sys/vm/vm_fault.c      Sun Jun 19 18:34:49 2011        (r223306)
> +++ head/sys/vm/vm_fault.c      Sun Jun 19 19:13:24 2011        (r223307)
> @@ -1089,10 +1089,20 @@ vm_fault_quick_hold_pages(vm_map_t map,
>                          * caller's changes may go unnoticed because they are
>                          * performed through an unmanaged mapping or by a DMA
>                          * operation.
> +                        *
> +                        * The object lock is not held here.  Therefore, like
> +                        * a pmap operation, the page queues lock may be
> +                        * required in order to call vm_page_dirty().  See
> +                        * vm_page_clear_dirty_mask().
>                          */
> +#if defined(__amd64__) || defined(__i386__) || defined(__ia64__) || \
> +    defined(__mips__)
> +                       vm_page_dirty(*mp);
> +#else
>                         vm_page_lock_queues();
>                         vm_page_dirty(*mp);
>                         vm_page_unlock_queues();
> +#endif
>                 }
>         }
>         if (pmap_failed) {
>
> Modified: head/sys/vm/vm_page.c
> ==============================================================================
> --- head/sys/vm/vm_page.c       Sun Jun 19 18:34:49 2011        (r223306)
> +++ head/sys/vm/vm_page.c       Sun Jun 19 19:13:24 2011        (r223307)
> @@ -729,7 +729,12 @@ vm_page_sleep(vm_page_t m, const char *m
>  /*
>   *     vm_page_dirty:
>   *
> - *     make page all dirty
> + *     Set all bits in the page's dirty field.
> + *
> + *     The object containing the specified page must be locked if the call is
> + *     made from the machine-independent layer.  If, however, the call is
> + *     made from the pmap layer, then the page queues lock may be required.
> + *     See vm_page_clear_dirty_mask().
>   */
>  void
>  vm_page_dirty(vm_page_t m)
> @@ -2325,15 +2330,41 @@ vm_page_clear_dirty_mask(vm_page_t m, in
>         /*
>          * If the object is locked and the page is neither VPO_BUSY nor
>          * PG_WRITEABLE, then the page's dirty field cannot possibly be
> -        * modified by a concurrent pmap operation.
> +        * set by a concurrent pmap operation.
>          */
>         VM_OBJECT_LOCK_ASSERT(m->object, MA_OWNED);
>         if ((m->oflags & VPO_BUSY) == 0 && (m->flags & PG_WRITEABLE) == 0)
>                 m->dirty &= ~pagebits;
>         else {
> +#if defined(__amd64__) || defined(__i386__) || defined(__ia64__) || \
> +    defined(__mips__)
> +               /*
> +                * On the aforementioned architectures, the page queues lock
> +                * is not required by the following read-modify-write
> +                * operation.  The combination of the object's lock and an
> +                * atomic operation suffice.  Moreover, the pmap layer on
> +                * these architectures can call vm_page_dirty() without
> +                * holding the page queues lock.
> +                */
> +#if PAGE_SIZE == 4096
> +               atomic_clear_char(&m->dirty, pagebits);
> +#elif PAGE_SIZE == 8192
> +               atomic_clear_short(&m->dirty, pagebits);
> +#elif PAGE_SIZE == 16384
> +               atomic_clear_int(&m->dirty, pagebits);
> +#else
> +#error "PAGE_SIZE is not supported."
> +#endif
> +#else
> +               /*
> +                * Otherwise, the page queues lock is required to ensure that
> +                * a concurrent pmap operation does not set the page's dirty
> +                * field during the following read-modify-write operation.
> +                */
>                 vm_page_lock_queues();
>                 m->dirty &= ~pagebits;
>                 vm_page_unlock_queues();
> +#endif
>         }
>  }
>
>
> Modified: head/sys/vm/vm_page.h
> ==============================================================================
> --- head/sys/vm/vm_page.h       Sun Jun 19 18:34:49 2011        (r223306)
> +++ head/sys/vm/vm_page.h       Sun Jun 19 19:13:24 2011        (r223307)
> @@ -89,10 +89,26 @@
>   *     and offset to which this page belongs (for pageout),
>   *     and sundry status bits.
>   *
> - *     Fields in this structure are locked either by the lock on the
> - *     object that the page belongs to (O), its corresponding page lock (P),
> - *     or by the lock on the page queues (Q).
> - *
> + *     In general, operations on this structure's mutable fields are
> + *     synchronized using either one of or a combination of the lock on the
> + *     object that the page belongs to (O), the pool lock for the page (P),
> + *     or the lock for either the free or paging queues (Q).  If a field is
> + *     annotated below with two of these locks, then holding either lock is
> + *     sufficient for read access, but both locks are required for write
> + *     access.
> + *
> + *     In contrast, the synchronization of accesses to the page's dirty field
> + *     is machine dependent (M).  In the machine-independent layer, the lock
> + *     on the object that the page belongs to must be held in order to
> + *     operate on the field.  However, the pmap layer is permitted to set
> + *     all bits within the field without holding that lock.  Therefore, if
> + *     the underlying architecture does not support atomic read-modify-write
> + *     operations on the field's type, then the machine-independent layer
> + *     must also hold the page queues lock when performing read-modify-write
> + *     operations and the pmap layer must hold the page queues lock when
> + *     setting the field.  In the machine-independent layer, the
> + *     implementation of read-modify-write operations on the field is
> + *     encapsulated in vm_page_clear_dirty_mask().
>   */
>
>  TAILQ_HEAD(pglist, vm_page);
> @@ -120,18 +136,19 @@ struct vm_page {
>         u_char  busy;                   /* page busy count (O) */
>         /* NOTE that these must support one bit per DEV_BSIZE in a page!!! */
>         /* so, on normal X86 kernels, they must be at least 8 bits wide */
> +       /* In reality, support for 32KB pages is not fully implemented. */
>  #if PAGE_SIZE == 4096
>         u_char  valid;                  /* map of valid DEV_BSIZE chunks (O) */
> -       u_char  dirty;                  /* map of dirty DEV_BSIZE chunks (O) */
> +       u_char  dirty;                  /* map of dirty DEV_BSIZE chunks (M) */
>  #elif PAGE_SIZE == 8192
>         u_short valid;                  /* map of valid DEV_BSIZE chunks (O) */
> -       u_short dirty;                  /* map of dirty DEV_BSIZE chunks (O) */
> +       u_short dirty;                  /* map of dirty DEV_BSIZE chunks (M) */
>  #elif PAGE_SIZE == 16384
>         u_int valid;                    /* map of valid DEV_BSIZE chunks (O) */
> -       u_int dirty;                    /* map of dirty DEV_BSIZE chunks (O) */
> +       u_int dirty;                    /* map of dirty DEV_BSIZE chunks (M) */
>  #elif PAGE_SIZE == 32768
>         u_long valid;                   /* map of valid DEV_BSIZE chunks (O) */
> -       u_long dirty;                   /* map of dirty DEV_BSIZE chunks (O) */
> +       u_long dirty;                   /* map of dirty DEV_BSIZE chunks (M) */
>  #endif
>  };
>
>


