From: Zbigniew Bodek
Date: Tue, 19 Nov 2013 23:37:50 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r258359 - head/sys/arm/arm

Author: zbb
Date: Tue Nov 19 23:37:50 2013
New Revision: 258359
URL: http://svnweb.freebsd.org/changeset/base/258359

Log:
  Apply access flags for managed and unmanaged pages properly on ARMv6/v7

  When entering a mapping via pmap_enter(), unmanaged pages ought to be
  excluded from the "modified" and "referenced" bit emulation. For such
  pages, RW permission should be granted implicitly when requested;
  otherwise an unmanaged page would never recover from the permission
  fault, since there is no PV entry to indicate that the page may be
  written. In addition, only managed pages, which participate in the
  "modified" bit emulation, need to be marked as "dirty" and "writeable"
  when entered with RW permissions. The same applies to the "referenced"
  flag on managed pages; unmanaged ones must not be marked as such.

  Reviewed by:	cognet, gber

Modified:
  head/sys/arm/arm/pmap-v6.c

Modified: head/sys/arm/arm/pmap-v6.c
==============================================================================
--- head/sys/arm/arm/pmap-v6.c	Tue Nov 19 23:31:39 2013	(r258358)
+++ head/sys/arm/arm/pmap-v6.c	Tue Nov 19 23:37:50 2013	(r258359)
@@ -3079,36 +3079,38 @@ validate:
 	 * then continue setting mapping parameters
 	 */
 	if (m != NULL) {
-		if (prot & (VM_PROT_ALL)) {
-			if ((m->oflags & VPO_UNMANAGED) == 0)
+		if ((m->oflags & VPO_UNMANAGED) == 0) {
+			if (prot & (VM_PROT_ALL)) {
 				vm_page_aflag_set(m, PGA_REFERENCED);
-		} else {
-			/*
-			 * Need to do page referenced emulation.
-			 */
-			npte &= ~L2_S_REF;
+			} else {
+				/*
+				 * Need to do page referenced emulation.
+				 */
+				npte &= ~L2_S_REF;
+			}
 		}
 
 		if (prot & VM_PROT_WRITE) {
-			/*
-			 * Enable write permission if the access type
-			 * indicates write intention. Emulate modified
-			 * bit otherwise.
-			 */
-			if ((access & VM_PROT_WRITE) != 0)
-				npte &= ~(L2_APX);
-
 			if ((m->oflags & VPO_UNMANAGED) == 0) {
-				vm_page_aflag_set(m, PGA_WRITEABLE);
 				/*
-				 * The access type and permissions indicate
-				 * that the page will be written as soon as
-				 * returned from fault service.
-				 * Mark it dirty from the outset.
+				 * Enable write permission if the access type
+				 * indicates write intention. Emulate modified
+				 * bit otherwise.
 				 */
-				if ((access & VM_PROT_WRITE) != 0)
+				if ((access & VM_PROT_WRITE) != 0) {
+					npte &= ~(L2_APX);
+					vm_page_aflag_set(m, PGA_WRITEABLE);
+					/*
+					 * The access type and permissions
+					 * indicate that the page will be
+					 * written as soon as returned from
+					 * fault service.
+					 * Mark it dirty from the outset.
+					 */
 					vm_page_dirty(m);
-			}
+				}
+			} else
+				npte &= ~(L2_APX);
 		}
 		if (!(prot & VM_PROT_EXECUTE))
 			npte |= L2_XN;
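
To make the decision table of this change easier to follow outside the kernel, here is a minimal standalone C sketch of the branch logic the patch installs. It is not pmap code: pte_demo() is a hypothetical helper, the constant values are placeholders (only the names L2_S_REF, L2_APX, VM_PROT_* and the managed/unmanaged distinction come from the real sources), and the PGA_REFERENCED/PGA_WRITEABLE/vm_page_dirty() bookkeeping is omitted.

/*
 * Userspace sketch of the r258359 access-flag logic. Constants are
 * illustrative stand-ins for the real pmap-v6.c / vm definitions.
 */
#include <stdbool.h>
#include <stdio.h>

#define VM_PROT_READ	0x01
#define VM_PROT_WRITE	0x02
#define VM_PROT_ALL	0x07

#define L2_S_REF	0x010	/* placeholder: "referenced" bit */
#define L2_APX		0x200	/* placeholder: set => read-only */

/*
 * Compute the L2_S_REF/L2_APX state for a new mapping. 'managed'
 * mirrors (m->oflags & VPO_UNMANAGED) == 0. The PTE starts out
 * referenced and read-only, as assumed here for illustration.
 */
static unsigned
pte_demo(bool managed, int access, int prot)
{
	unsigned npte = L2_S_REF | L2_APX;

	if (managed && (prot & VM_PROT_ALL) == 0) {
		/* Managed page, no access requested: emulate "referenced". */
		npte &= ~L2_S_REF;
	}
	if (prot & VM_PROT_WRITE) {
		if (managed) {
			/*
			 * Grant write only on write intent; otherwise keep
			 * the page read-only to emulate the "modified" bit.
			 */
			if (access & VM_PROT_WRITE)
				npte &= ~L2_APX;
		} else {
			/*
			 * Unmanaged page: no PV entry exists to recover
			 * from a permission fault, so grant RW up front.
			 */
			npte &= ~L2_APX;
		}
	}
	return (npte);
}

int
main(void)
{
	printf("managed, read access, RW prot:   %#x\n",
	    pte_demo(true, VM_PROT_READ, VM_PROT_READ | VM_PROT_WRITE));
	printf("unmanaged, read access, RW prot: %#x\n",
	    pte_demo(false, VM_PROT_READ, VM_PROT_READ | VM_PROT_WRITE));
	return (0);
}

Running the sketch shows the commit's point: for the same RW protection and a read-only access type, a managed page stays read-only (so the first write faults and the "modified" bit can be emulated), while an unmanaged page becomes writeable immediately, since no PV entry would allow it to recover from that fault.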