From owner-svn-src-all@FreeBSD.ORG  Sat Oct 30 23:07:30 2010
Delivered-To: svn-src-all@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 9A753106564A;
	Sat, 30 Oct 2010 23:07:30 +0000 (UTC)
	(envelope-from nwhitehorn@FreeBSD.org)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:4f8:fff6::2c])
	by mx1.freebsd.org (Postfix) with ESMTP id 8733E8FC15;
	Sat, 30 Oct 2010 23:07:30 +0000 (UTC)
Received: from svn.freebsd.org (localhost [127.0.0.1])
	by svn.freebsd.org (8.14.3/8.14.3) with ESMTP id o9UN7UhA029766;
	Sat, 30 Oct 2010 23:07:30 GMT (envelope-from nwhitehorn@svn.freebsd.org)
Received: (from nwhitehorn@localhost)
	by svn.freebsd.org (8.14.3/8.14.3/Submit) id o9UN7U2u029753;
	Sat, 30 Oct 2010 23:07:30 GMT (envelope-from nwhitehorn@svn.freebsd.org)
Message-Id: <201010302307.o9UN7U2u029753@svn.freebsd.org>
From: Nathan Whitehorn
Date: Sat, 30 Oct 2010 23:07:30 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-head@freebsd.org
X-SVN-Group: head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc:
Subject: svn commit: r214574 - in head/sys/powerpc: aim include powerpc
X-BeenThere: svn-src-all@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: "SVN commit messages for the entire src tree (except for "user"
	and "projects")"
X-List-Received-Date: Sat, 30 Oct 2010 23:07:30 -0000

Author: nwhitehorn
Date: Sat Oct 30 23:07:30 2010
New Revision: 214574

URL: http://svn.freebsd.org/changeset/base/214574

Log:
  Restructure the way the copyin/copyout segment is stored to prevent a
  concurrency bug. Since all SLB/SR entries were invalidated during an
  exception, a decrementer exception could invalidate the user segment in
  the middle of a copyin()/copyout(). Because no thread switch had
  occurred, the segment was never restored from the PCB, and the
  operation could continue on invalid memory. This is now handled by
  explicit restoration of segment 12 from the PCB on 32-bit systems, and
  by a check in the Data Segment Exception handler on 64-bit systems.

  While here, make copyin()/copyout() check whether the requested user
  segment is already installed, saving some pipeline flushes, and fix the
  synchronization primitives around the mtsr and slbmte instructions to
  prevent access to stale segments.

  MFC after:	2 weeks
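[Editor's illustration] The heart of the change to set_user_sr() in the
diff below is a cached-VSID fast path: the user segment mapping is only
reinstalled when the requested VSID differs from the one recorded in the
PCB. A condensed, compilable sketch of that logic follows; the
pcb_usr_vsid variable and install_user_segment() helper are hypothetical
stand-ins for the real PCB field and the privileged inline assembly:

	#include <stdint.h>

	typedef uint64_t register_t;

	/*
	 * Hypothetical stand-ins for the PCB field and the privileged
	 * slbie/slbmte (64-bit) or mtsr (32-bit) sequence in the kernel.
	 */
	extern register_t pcb_usr_vsid;		/* last VSID installed */
	void install_user_segment(register_t vsid);

	static void
	set_user_segment(register_t vsid)
	{
		/*
		 * Fast path: the segment is already live, so skip both
		 * the update and the pipeline-flushing synchronization
		 * around it.
		 */
		if (pcb_usr_vsid == vsid)
			return;

		install_user_segment(vsid);	/* isync; slbie; slbmte; isync */
		pcb_usr_vsid = vsid;		/* remember for the next copyin/out */
	}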
Modified:
  head/sys/powerpc/aim/copyinout.c
  head/sys/powerpc/aim/slb.c
  head/sys/powerpc/aim/swtch64.S
  head/sys/powerpc/aim/trap.c
  head/sys/powerpc/aim/trap_subr32.S
  head/sys/powerpc/aim/trap_subr64.S
  head/sys/powerpc/aim/vm_machdep.c
  head/sys/powerpc/include/pcb.h
  head/sys/powerpc/include/slb.h
  head/sys/powerpc/include/sr.h
  head/sys/powerpc/powerpc/exec_machdep.c
  head/sys/powerpc/powerpc/genassym.c

Modified: head/sys/powerpc/aim/copyinout.c
==============================================================================
--- head/sys/powerpc/aim/copyinout.c	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/aim/copyinout.c	Sat Oct 30 23:07:30 2010	(r214574)
@@ -81,9 +81,7 @@ static __inline void
 set_user_sr(pmap_t pm, const void *addr)
 {
 	struct slb *slb;
-	register_t esid, vsid, slb1, slb2;
-
-	esid = USER_ADDR >> ADDR_SR_SHFT;
+	register_t slbv;
 
 	/* Try lockless look-up first */
 	slb = user_va_to_slb_entry(pm, (vm_offset_t)addr);
@@ -91,20 +89,21 @@ set_user_sr(pmap_t pm, const void *addr)
 	if (slb == NULL) {
 		/* If it isn't there, we need to pre-fault the VSID */
 		PMAP_LOCK(pm);
-		vsid = va_to_vsid(pm, (vm_offset_t)addr);
+		slbv = va_to_vsid(pm, (vm_offset_t)addr) << SLBV_VSID_SHIFT;
 		PMAP_UNLOCK(pm);
 	} else {
-		vsid = slb->slbv >> SLBV_VSID_SHIFT;
+		slbv = slb->slbv;
 	}
 
-	slb1 = vsid << SLBV_VSID_SHIFT;
-	slb2 = (esid << SLBE_ESID_SHIFT) | SLBE_VALID | USER_SR;
+	/* If we have already set this VSID, we can just return */
+	if (curthread->td_pcb->pcb_cpu.aim.usr_vsid == slbv)
+		return;
+
+	__asm __volatile ("isync; slbie %0; slbmte %1, %2; isync" ::
+	    "r"(USER_ADDR), "r"(slbv), "r"(USER_SLB_SLBE));
 
 	curthread->td_pcb->pcb_cpu.aim.usr_segm =
 	    (uintptr_t)addr >> ADDR_SR_SHFT;
-	__asm __volatile ("slbie %0; slbmte %1, %2" :: "r"(esid << 28),
-	    "r"(slb1), "r"(slb2));
-	isync();
+	curthread->td_pcb->pcb_cpu.aim.usr_vsid = slbv;
 }
 #else
 static __inline void
@@ -114,9 +113,13 @@ set_user_sr(pmap_t pm, const void *addr)
 
 	vsid = va_to_vsid(pm, (vm_offset_t)addr);
 
-	isync();
-	__asm __volatile ("mtsr %0,%1" :: "n"(USER_SR), "r"(vsid));
-	isync();
+	/* If we have already set this VSID, we can just return */
+	if (curthread->td_pcb->pcb_cpu.aim.usr_vsid == vsid)
+		return;
+
+	__asm __volatile ("sync; mtsr %0,%1; sync; isync" :: "n"(USER_SR),
+	    "r"(vsid));
+	curthread->td_pcb->pcb_cpu.aim.usr_vsid = vsid;
 }
 #endif

Modified: head/sys/powerpc/aim/slb.c
==============================================================================
--- head/sys/powerpc/aim/slb.c	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/aim/slb.c	Sat Oct 30 23:07:30 2010	(r214574)
@@ -200,7 +200,7 @@ kernel_va_to_slbv(vm_offset_t va)
 	esid = (uintptr_t)va >> ADDR_SR_SHFT;
 
 	/* Set kernel VSID to deterministic value */
-	slbv = va_to_vsid(kernel_pmap, va) << SLBV_VSID_SHIFT;
+	slbv = (KERNEL_VSID((uintptr_t)va >> ADDR_SR_SHFT)) << SLBV_VSID_SHIFT;
 
 	/* Figure out if this is a large-page mapping */
 	if (hw_direct_map && va < VM_MIN_KERNEL_ADDRESS) {
@@ -421,19 +421,19 @@ slb_insert_kernel(uint64_t slbe, uint64_
 
 	slbcache = PCPU_GET(slb);
 
-	/* Check for an unused slot, abusing the USER_SR slot as a full flag */
-	if (slbcache[USER_SR].slbe == 0) {
-		for (i = 0; i < USER_SR; i++) {
+	/* Check for an unused slot, abusing the user slot as a full flag */
+	if (slbcache[USER_SLB_SLOT].slbe == 0) {
+		for (i = 0; i < USER_SLB_SLOT; i++) {
 			if (!(slbcache[i].slbe & SLBE_VALID))
 				goto fillkernslb;
 		}
 
-		if (i == USER_SR)
-			slbcache[USER_SR].slbe = 1;
+		if (i == USER_SLB_SLOT)
+			slbcache[USER_SLB_SLOT].slbe = 1;
 	}
 
 	for (i = mftb() % 64, j = 0; j < 64; j++, i = (i+1) % 64) {
-		if (i == USER_SR)
+		if (i == USER_SLB_SLOT)
 			continue;
 
 		if (SLB_SPILLABLE(slbcache[i].slbe))

Modified: head/sys/powerpc/aim/swtch64.S
==============================================================================
--- head/sys/powerpc/aim/swtch64.S	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/aim/swtch64.S	Sat Oct 30 23:07:30 2010	(r214574)
@@ -110,13 +110,10 @@ ENTRY(cpu_switch)
 	std	%r1,PCB_SP(%r6)		/* Save the stack pointer */
 	std	%r2,PCB_TOC(%r6)	/* Save the TOC pointer */
 
-	li	%r14,0			/* Save USER_SR for copyin/out */
-	li	%r15,0
-	li	%r16,USER_SR
-	slbmfee	%r14, %r16
+	li	%r15,0			/* Save user segment for copyin/out */
+	li	%r16,USER_SLB_SLOT
 	slbmfev	%r15, %r16
 	isync
-	std	%r14,PCB_AIM_USR_ESID(%r6)
 	std	%r15,PCB_AIM_USR_VSID(%r6)
 
 	mr	%r14,%r3		/* Copy the old thread ptr... */
@@ -221,14 +218,17 @@ blocked_loop:
 	ld	%r1,PCB_SP(%r3)		/* Load the stack pointer */
 	ld	%r2,PCB_TOC(%r3)	/* Load the TOC pointer */
 
-	lis	%r5,USER_ADDR@highesta	/* Load the USER_SR segment reg */
+	lis	%r5,USER_ADDR@highesta	/* Load the copyin/out segment reg */
 	ori	%r5,%r5,USER_ADDR@highera
 	sldi	%r5,%r5,32
 	oris	%r5,%r5,USER_ADDR@ha
 	slbie	%r5
+
+	lis	%r6,USER_SLB_SLBE@highesta
+	ori	%r6,%r6,USER_SLB_SLBE@highera
+	sldi	%r6,%r6,32
+	oris	%r6,%r6,USER_SLB_SLBE@ha
+	ori	%r6,%r6,USER_SLB_SLBE@l
 	ld	%r5,PCB_AIM_USR_VSID(%r3)
-	ld	%r6,PCB_AIM_USR_ESID(%r3)
-	ori	%r6,%r6,USER_SR
 	slbmte	%r5,%r6
 	isync

Modified: head/sys/powerpc/aim/trap.c
==============================================================================
--- head/sys/powerpc/aim/trap.c	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/aim/trap.c	Sat Oct 30 23:07:30 2010	(r214574)
@@ -249,8 +249,16 @@ trap(struct trapframe *frame)
 				return;
 			break;
 #ifdef __powerpc64__
-		case EXC_ISE:
 		case EXC_DSE:
+			if ((frame->cpu.aim.dar & SEGMENT_MASK) == USER_ADDR) {
+				__asm __volatile ("slbmte %0, %1" ::
+				    "r"(td->td_pcb->pcb_cpu.aim.usr_vsid),
+				    "r"(USER_SLB_SLBE));
+				return;
+			}
+
+			/* FALLTHROUGH */
+		case EXC_ISE:
 			if (handle_slb_spill(kernel_pmap,
 			    (type == EXC_ISE) ? frame->srr0 :
 			    frame->cpu.aim.dar) != 0)

Modified: head/sys/powerpc/aim/trap_subr32.S
==============================================================================
--- head/sys/powerpc/aim/trap_subr32.S	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/aim/trap_subr32.S	Sat Oct 30 23:07:30 2010	(r214574)
@@ -54,7 +54,7 @@
 	lwz	sr,9*4(pmap);	mtsr	9,sr;			\
 	lwz	sr,10*4(pmap);	mtsr	10,sr;			\
 	lwz	sr,11*4(pmap);	mtsr	11,sr;			\
-	lwz	sr,12*4(pmap);	mtsr	12,sr;			\
+	/* Skip segment 12 (USER_SR), which is restored differently */ \
 	lwz	sr,13*4(pmap);	mtsr	13,sr;			\
 	lwz	sr,14*4(pmap);	mtsr	14,sr;			\
 	lwz	sr,15*4(pmap);	mtsr	15,sr;	isync;
@@ -66,7 +66,9 @@
 	GET_CPUINFO(pmap); \
 	lwz	pmap,PC_CURPMAP(pmap); \
 	lwzu	sr,PM_SR(pmap); \
-	RESTORE_SRS(pmap,sr)
+	RESTORE_SRS(pmap,sr) \
+	/* Restore SR 12 */ \
+	lwz	sr,12*4(pmap);	mtsr	12,sr
 
 /*
  * Kernel SRs are loaded directly from kernel_pmap_
@@ -537,6 +539,11 @@ u_trap:
  */
 k_trap:
 	FRAME_SETUP(PC_TEMPSAVE)
+/* Restore USER_SR */
+	GET_CPUINFO(%r30)
+	lwz	%r30,PC_CURPCB(%r30)
+	lwz	%r30,PCB_AIM_USR_VSID(%r30)
+	mtsr	USER_SR,%r30; sync; isync
 /* Call C interrupt dispatcher: */
 trapagain:
 	addi	%r3,%r1,8

Modified: head/sys/powerpc/aim/trap_subr64.S
==============================================================================
--- head/sys/powerpc/aim/trap_subr64.S	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/aim/trap_subr64.S	Sat Oct 30 23:07:30 2010	(r214574)
@@ -99,7 +99,7 @@ instkernslb:
 
 	addi	%r28, %r28, 16;	/* Advance pointer */
 	addi	%r29, %r29, 1;
-	cmpli	0, %r29, USER_SR;	/* Repeat if we are not at the end */
+	cmpli	0, %r29, USER_SLB_SLOT;	/* Repeat if we are not at the end */
 	blt	instkernslb;
 	blr;

Modified: head/sys/powerpc/aim/vm_machdep.c
==============================================================================
--- head/sys/powerpc/aim/vm_machdep.c	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/aim/vm_machdep.c	Sat Oct 30 23:07:30 2010	(r214574)
@@ -197,7 +197,6 @@ cpu_fork(struct thread *td1, struct proc
 	pcb->pcb_lr = (register_t)fork_trampoline;
 #endif
 	pcb->pcb_cpu.aim.usr_vsid = 0;
-	pcb->pcb_cpu.aim.usr_esid = 0;
 
 	/* Setup to release spin count in fork_exit(). */
 	td2->td_md.md_spinlock_count = 1;

Modified: head/sys/powerpc/include/pcb.h
==============================================================================
--- head/sys/powerpc/include/pcb.h	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/include/pcb.h	Sat Oct 30 23:07:30 2010	(r214574)
@@ -67,7 +67,6 @@ struct pcb {
 	union {
 		struct {
 			vm_offset_t	usr_segm;	/* Base address */
-			register_t	usr_esid;	/* USER_SR segment */
 			register_t	usr_vsid;	/* USER_SR segment */
 		} aim;
 		struct {

Modified: head/sys/powerpc/include/slb.h
==============================================================================
--- head/sys/powerpc/include/slb.h	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/include/slb.h	Sat Oct 30 23:07:30 2010	(r214574)
@@ -62,6 +62,13 @@
 #define	SLBE_ESID_MASK	0xfffffffff0000000UL	/* Effective segment ID mask */
 #define	SLBE_ESID_SHIFT	28
 
+/*
+ * User segment for copyin/out
+ */
+#define	USER_SLB_SLOT	63
+#define	USER_SLB_SLBE	(((USER_ADDR >> ADDR_SR_SHFT) << SLBE_ESID_SHIFT) | \
+			    SLBE_VALID | USER_SLB_SLOT)
+
 struct slb {
 	uint64_t	slbv;
 	uint64_t	slbe;

Modified: head/sys/powerpc/include/sr.h
==============================================================================
--- head/sys/powerpc/include/sr.h	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/include/sr.h	Sat Oct 30 23:07:30 2010	(r214574)
@@ -42,11 +42,7 @@
 #define	SR_VSID_MASK	0x00ffffff	/* Virtual Segment ID mask */
 
 /* Kernel segment register usage */
-#ifdef __powerpc64__
-#define	USER_SR		63
-#else
 #define	USER_SR		12
-#endif
 #define	KERNEL_SR	13
 #define	KERNEL2_SR	14
 #define	KERNEL3_SR	15

Modified: head/sys/powerpc/powerpc/exec_machdep.c
==============================================================================
--- head/sys/powerpc/powerpc/exec_machdep.c	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/powerpc/exec_machdep.c	Sat Oct 30 23:07:30 2010	(r214574)
@@ -986,7 +986,6 @@ cpu_set_upcall(struct thread *td, struct
 	pcb2->pcb_lr = (register_t)fork_trampoline;
 #endif
 	pcb2->pcb_cpu.aim.usr_vsid = 0;
-	pcb2->pcb_cpu.aim.usr_esid = 0;
 
 	/* Setup to release spin count in fork_exit(). */
 	td->td_md.md_spinlock_count = 1;

Modified: head/sys/powerpc/powerpc/genassym.c
==============================================================================
--- head/sys/powerpc/powerpc/genassym.c	Sat Oct 30 23:04:54 2010	(r214573)
+++ head/sys/powerpc/powerpc/genassym.c	Sat Oct 30 23:07:30 2010	(r214574)
@@ -103,13 +103,15 @@ ASSYM(TLBSAVE_BOOKE_R31, TLBSAVE_BOOKE_R
 ASSYM(MTX_LOCK, offsetof(struct mtx, mtx_lock));
 
 #if defined(AIM)
-ASSYM(USER_SR, USER_SR);
 ASSYM(USER_ADDR, USER_ADDR);
 #ifdef __powerpc64__
 ASSYM(PC_KERNSLB, offsetof(struct pcpu, pc_slb));
 ASSYM(PC_USERSLB, offsetof(struct pcpu, pc_userslb));
+ASSYM(USER_SLB_SLOT, USER_SLB_SLOT);
+ASSYM(USER_SLB_SLBE, USER_SLB_SLBE);
 #else
 ASSYM(PM_SR, offsetof(struct pmap, pm_sr));
+ASSYM(USER_SR, USER_SR);
 #endif
 #elif defined(E500)
 ASSYM(PM_PDIR, offsetof(struct pmap, pm_pdir));
@@ -187,7 +189,6 @@ ASSYM(PCB_FLAGS, offsetof(struct pcb, pc
 ASSYM(PCB_FPU, PCB_FPU);
 ASSYM(PCB_VEC, PCB_VEC);
 
-ASSYM(PCB_AIM_USR_ESID, offsetof(struct pcb, pcb_cpu.aim.usr_esid));
 ASSYM(PCB_AIM_USR_VSID, offsetof(struct pcb, pcb_cpu.aim.usr_vsid));
 
 ASSYM(PCB_BOOKE_CTR, offsetof(struct pcb, pcb_cpu.booke.ctr));
 ASSYM(PCB_BOOKE_XER, offsetof(struct pcb, pcb_cpu.booke.xer));
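[Editor's illustration] The new USER_SLB_SLBE constant in slb.h packs
three fields of the slbmte RB operand into a single word: the ESID of
the copyin/copyout window in the high bits, the valid bit, and the SLB
entry index (slot 63) in the low bits, which select which of the 64
hardware SLB entries the instruction writes. A minimal sketch of that
composition; the USER_ADDR and SLBE_VALID values here are assumed
example values, since their real definitions live elsewhere in the
tree:

	#include <stdint.h>
	#include <stdio.h>

	#define ADDR_SR_SHFT	28	/* 256 MB segments */
	#define SLBE_ESID_SHIFT	28
	/* Assumed values for illustration only; see the kernel headers
	 * for the authoritative definitions. */
	#define SLBE_VALID	0x0000000008000000UL
	#define USER_SLB_SLOT	63
	#define USER_ADDR	0xfffffff0000000UL

	int
	main(void)
	{
		/* Same composition as the USER_SLB_SLBE macro above:
		 * ESID in the high bits, then valid bit, then slot. */
		uint64_t slbe = ((USER_ADDR >> ADDR_SR_SHFT) <<
		    SLBE_ESID_SHIFT) | SLBE_VALID | USER_SLB_SLOT;

		printf("slbe = 0x%016jx\n", (uintmax_t)slbe);
		return (0);
	}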