From: Michal Meloun
Date: Thu, 28 Apr 2016 12:05:07 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r298740 - head/sys/arm/arm

Author: mmel
Date: Thu Apr 28 12:05:07 2016
New Revision: 298740
URL: https://svnweb.freebsd.org/changeset/base/298740

Log:
  ARM: Use kernel pmap as intermediate mapping in context switch.

  On ARM, we can directly switch between translation tables only when the
  size of the mapping for any given virtual address is the same in the old
  and new translation tables. The load of the new TTB and the subsequent
  TLB flush are not one atomic operation, so a speculative page table walk
  can load a TLB entry from the new mapping while the rest of the TLB still
  holds entries from the old one. In the worst case, the TLB can then
  contain multiple matching entries: an L2 entry for VA + 4k from the old
  mapping and an L1 entry for VA from the new one. Thus, we must switch to
  the kernel pmap translation table as an intermediate mapping, because for
  these two tables (old pmap and kernel pmap) the mapping size of every
  given address is the same (or the address is unmapped). The same holds
  for the switch from the kernel pmap translation table to the new pmap's
  table.

Modified:
  head/sys/arm/arm/swtch-v6.S

Modified: head/sys/arm/arm/swtch-v6.S
==============================================================================
--- head/sys/arm/arm/swtch-v6.S	Thu Apr 28 12:04:12 2016	(r298739)
+++ head/sys/arm/arm/swtch-v6.S	Thu Apr 28 12:05:07 2016	(r298740)
@@ -114,25 +114,37 @@ __FBSDID("$FreeBSD$");
 .Lblocked_lock:
 	.word	_C_LABEL(blocked_lock)
 
-ENTRY(cpu_context_switch)	/* QQQ: What about macro instead of function? */
+ENTRY(cpu_context_switch)
 	DSB
-	mcr	CP15_TTBR0(r0)		/* set the new TTB */
+	/*
+	 * We can directly switch between translation tables only when the
+	 * size of the mapping for any given virtual address is the same
+	 * in the old and new translation tables.
+	 * Thus, we must switch to kernel pmap translation table as
+	 * intermediate mapping because all sizes of these mappings are same
+	 * (or unmapped). The same is true for switch from kernel pmap
+	 * translation table to new pmap one.
+	 */
+	mov	r2, #(CPU_ASID_KERNEL)
+	ldr	r1, =(_C_LABEL(pmap_kern_ttb))
+	ldr	r1, [r1]
+	mcr	CP15_TTBR0(r1)		/* switch to kernel TTB */
+	ISB
+	mcr	CP15_TLBIASID(r2)	/* flush not global TLBs */
+	DSB
+	mcr	CP15_TTBR0(r0)		/* switch to new TTB */
 	ISB
-	mov	r0, #(CPU_ASID_KERNEL)
-	mcr	CP15_TLBIASID(r0)	/* flush not global TLBs */
+	/*
+	 * We must flush not global TLBs again because PT2MAP mapping
+	 * is different.
+	 */
+	mcr	CP15_TLBIASID(r2)	/* flush not global TLBs */
 	/*
 	 * Flush entire Branch Target Cache because of the branch predictor
 	 * is not architecturally invisible. See ARM Architecture Reference
 	 * Manual ARMv7-A and ARMv7-R edition, page B2-1264(65), Branch
 	 * predictors and Requirements for branch predictor maintenance
 	 * operations sections.
-	 *
-	 * QQQ: The predictor is virtually addressed and holds virtual target
-	 * addresses. Therefore, if mapping is changed, the predictor cache
-	 * must be flushed.The flush is part of entire i-cache invalidation
-	 * what is always called when code mapping is changed. So herein,
-	 * it's the only place where standalone predictor flush must be
-	 * executed in kernel (except self modifying code case).
 	 */
 	mcr	CP15_BPIALL		/* flush entire Branch Target Cache */
 	DSB
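
For reference, here is a rough C sketch of the MMU/TLB part of the sequence
that the new cpu_context_switch() performs. This is only an illustration of
the commit's approach, not FreeBSD code: the wrappers set_ttbr0(),
tlb_flush_asid(), bp_flush_all(), dsb() and isb(), the function name
cpu_context_switch_sketch(), and the value 0 for CPU_ASID_KERNEL are
assumptions standing in for the CP15_* macros and definitions used by the
real assembly; only the names pmap_kern_ttb and CPU_ASID_KERNEL come from
the commit itself.

#include <stdint.h>

extern uint32_t pmap_kern_ttb;		/* kernel L1 table root (set up by pmap init) */

#define	CPU_ASID_KERNEL	0		/* assumed value, for illustration only */

/* Hypothetical wrappers for the CP15 accesses done in swtch-v6.S. */
static inline void set_ttbr0(uint32_t ttb)		/* mcr CP15_TTBR0(ttb) */
{
	__asm__ __volatile__("mcr p15, 0, %0, c2, c0, 0" :: "r" (ttb));
}
static inline void tlb_flush_asid(uint32_t asid)	/* mcr CP15_TLBIASID(asid) */
{
	__asm__ __volatile__("mcr p15, 0, %0, c8, c7, 2" :: "r" (asid));
}
static inline void bp_flush_all(void)			/* mcr CP15_BPIALL */
{
	__asm__ __volatile__("mcr p15, 0, %0, c7, c5, 6" :: "r" (0));
}
static inline void dsb(void) { __asm__ __volatile__("dsb" ::: "memory"); }
static inline void isb(void) { __asm__ __volatile__("isb" ::: "memory"); }

static void
cpu_context_switch_sketch(uint32_t new_ttb)
{
	dsb();
	/*
	 * 1. Detour through the kernel translation table: for every address
	 *    its mapping size agrees with the old table (or the address is
	 *    unmapped), so a speculative walk cannot create a TLB entry
	 *    that conflicts with one already cached.
	 */
	set_ttbr0(pmap_kern_ttb);
	isb();
	tlb_flush_asid(CPU_ASID_KERNEL);	/* invalidate non-global entries for this ASID */
	dsb();
	/*
	 * 2. Only now install the new pmap's translation table.
	 */
	set_ttbr0(new_ttb);
	isb();
	tlb_flush_asid(CPU_ASID_KERNEL);	/* PT2MAP mapping differs, flush again */
	bp_flush_all();				/* predictor is not architecturally invisible */
	dsb();
	/* The rest of the real routine (register save/restore, etc.) is omitted. */
}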