From: Nathan Whitehorn <nwhitehorn@FreeBSD.org>
Date: Mon, 15 Jan 2018 06:46:33 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r327992 - in head/sys/powerpc: aim booke include powerpc
Message-Id: <201801150646.w0F6kXNa092574@repo.freebsd.org>

Author: nwhitehorn
Date: Mon Jan 15 06:46:33 2018
New Revision: 327992
URL: https://svnweb.freebsd.org/changeset/base/327992

Log:
  Move the pmap-specific code in copyinout.c that gets pointers to userland
  buffers into a new pmap-module function pmap_map_user_ptr() that can be
  implemented by the respective modules. This is required to implement
  non-segment-based AIM-ish MMU systems such as the radix-tree page tables
  introduced by POWER ISA 3.0 and present on POWER9.
  Reviewed by:	jhibbits

Modified:
  head/sys/powerpc/aim/mmu_oea.c
  head/sys/powerpc/aim/mmu_oea64.c
  head/sys/powerpc/booke/pmap.c
  head/sys/powerpc/include/pmap.h
  head/sys/powerpc/powerpc/copyinout.c
  head/sys/powerpc/powerpc/mmu_if.m
  head/sys/powerpc/powerpc/pmap_dispatch.c

Modified: head/sys/powerpc/aim/mmu_oea.c
==============================================================================
--- head/sys/powerpc/aim/mmu_oea.c	Mon Jan 15 05:00:26 2018	(r327991)
+++ head/sys/powerpc/aim/mmu_oea.c	Mon Jan 15 06:46:33 2018	(r327992)
@@ -320,7 +320,10 @@ void moea_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size_t
 void moea_scan_init(mmu_t mmu);
 vm_offset_t moea_quick_enter_page(mmu_t mmu, vm_page_t m);
 void moea_quick_remove_page(mmu_t mmu, vm_offset_t addr);
+static int moea_map_user_ptr(mmu_t mmu, pmap_t pm,
+    volatile const void *uaddr, void **kaddr, size_t ulen, size_t *klen);
+
 
 static mmu_method_t moea_methods[] = {
 	MMUMETHOD(mmu_clear_modify,	moea_clear_modify),
 	MMUMETHOD(mmu_copy_page,	moea_copy_page),
@@ -370,6 +373,7 @@ static mmu_method_t moea_methods[] = {
 	MMUMETHOD(mmu_dev_direct_mapped,moea_dev_direct_mapped),
 	MMUMETHOD(mmu_scan_init,	moea_scan_init),
 	MMUMETHOD(mmu_dumpsys_map,	moea_dumpsys_map),
+	MMUMETHOD(mmu_map_user_ptr,	moea_map_user_ptr),
 
 	{ 0, 0 }
 };
@@ -1542,6 +1546,45 @@ moea_kremove(mmu_t mmu, vm_offset_t va)
 {
 
 	moea_remove(mmu, kernel_pmap, va, va + PAGE_SIZE);
+}
+
+/*
+ * Provide a kernel pointer corresponding to a given userland pointer.
+ * The returned pointer is valid until the next time this function is
+ * called in this thread. This is used internally in copyin/copyout.
+ */
+int
+moea_map_user_ptr(mmu_t mmu, pmap_t pm, volatile const void *uaddr,
+    void **kaddr, size_t ulen, size_t *klen)
+{
+	size_t l;
+	register_t vsid;
+
+	*kaddr = (char *)USER_ADDR + ((uintptr_t)uaddr & ~SEGMENT_MASK);
+	l = ((char *)USER_ADDR + SEGMENT_LENGTH) - (char *)(*kaddr);
+	if (l > ulen)
+		l = ulen;
+	if (klen)
+		*klen = l;
+	else if (l != ulen)
+		return (EFAULT);
+
+	vsid = va_to_vsid(pm, (vm_offset_t)uaddr);
+
+	/* Mark segment no-execute */
+	vsid |= SR_N;
+
+	/* If we have already set this VSID, we can just return */
+	if (curthread->td_pcb->pcb_cpu.aim.usr_vsid == vsid)
+		return (0);
+
+	__asm __volatile("isync");
+	curthread->td_pcb->pcb_cpu.aim.usr_segm =
+	    (uintptr_t)uaddr >> ADDR_SR_SHFT;
+	curthread->td_pcb->pcb_cpu.aim.usr_vsid = vsid;
+	__asm __volatile("mtsr %0,%1; isync" :: "n"(USER_SR), "r"(vsid));
+
+	return (0);
 }
 
 /*

Modified: head/sys/powerpc/aim/mmu_oea64.c
==============================================================================
--- head/sys/powerpc/aim/mmu_oea64.c	Mon Jan 15 05:00:26 2018	(r327991)
+++ head/sys/powerpc/aim/mmu_oea64.c	Mon Jan 15 06:46:33 2018	(r327992)
@@ -284,7 +284,10 @@ void moea64_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size
 void moea64_scan_init(mmu_t mmu);
 vm_offset_t moea64_quick_enter_page(mmu_t mmu, vm_page_t m);
 void moea64_quick_remove_page(mmu_t mmu, vm_offset_t addr);
+static int moea64_map_user_ptr(mmu_t mmu, pmap_t pm,
+    volatile const void *uaddr, void **kaddr, size_t ulen, size_t *klen);
+
 
 static mmu_method_t moea64_methods[] = {
 	MMUMETHOD(mmu_clear_modify,	moea64_clear_modify),
 	MMUMETHOD(mmu_copy_page,	moea64_copy_page),
@@ -333,6 +336,7 @@ static mmu_method_t moea64_methods[] = {
 	MMUMETHOD(mmu_dev_direct_mapped,moea64_dev_direct_mapped),
 	MMUMETHOD(mmu_scan_init,	moea64_scan_init),
 	MMUMETHOD(mmu_dumpsys_map,	moea64_dumpsys_map),
+	MMUMETHOD(mmu_map_user_ptr,	moea64_map_user_ptr),
 
 	{ 0, 0 }
 };
@@ -1831,6 +1835,70 @@ void
 moea64_kremove(mmu_t mmu, vm_offset_t va)
 {
 
 	moea64_remove(mmu, kernel_pmap, va, va + PAGE_SIZE);
+}
+
+/*
+ * Provide a kernel pointer corresponding to a given userland pointer.
+ * The returned pointer is valid until the next time this function is
+ * called in this thread. This is used internally in copyin/copyout.
+ */
+static int
+moea64_map_user_ptr(mmu_t mmu, pmap_t pm, volatile const void *uaddr,
+    void **kaddr, size_t ulen, size_t *klen)
+{
+	size_t l;
+#ifdef __powerpc64__
+	struct slb *slb;
+#endif
+	register_t slbv;
+
+	*kaddr = (char *)USER_ADDR + ((uintptr_t)uaddr & ~SEGMENT_MASK);
+	l = ((char *)USER_ADDR + SEGMENT_LENGTH) - (char *)(*kaddr);
+	if (l > ulen)
+		l = ulen;
+	if (klen)
+		*klen = l;
+	else if (l != ulen)
+		return (EFAULT);
+
+#ifdef __powerpc64__
+	/* Try lockless look-up first */
+	slb = user_va_to_slb_entry(pm, (vm_offset_t)uaddr);
+
+	if (slb == NULL) {
+		/* If it isn't there, we need to pre-fault the VSID */
+		PMAP_LOCK(pm);
+		slbv = va_to_vsid(pm, (vm_offset_t)uaddr) << SLBV_VSID_SHIFT;
+		PMAP_UNLOCK(pm);
+	} else {
+		slbv = slb->slbv;
+	}
+
+	/* Mark segment no-execute */
+	slbv |= SLBV_N;
+#else
+	slbv = va_to_vsid(pm, (vm_offset_t)uaddr);
+
+	/* Mark segment no-execute */
+	slbv |= SR_N;
+#endif
+
+	/* If we have already set this VSID, we can just return */
+	if (curthread->td_pcb->pcb_cpu.aim.usr_vsid == slbv)
+		return (0);
+
+	__asm __volatile("isync");
+	curthread->td_pcb->pcb_cpu.aim.usr_segm =
+	    (uintptr_t)uaddr >> ADDR_SR_SHFT;
+	curthread->td_pcb->pcb_cpu.aim.usr_vsid = slbv;
+#ifdef __powerpc64__
+	__asm __volatile ("slbie %0; slbmte %1, %2; isync" ::
+	    "r"(USER_ADDR), "r"(slbv), "r"(USER_SLB_SLBE));
+#else
+	__asm __volatile("mtsr %0,%1; isync" :: "n"(USER_SR), "r"(slbv));
+#endif
+
+	return (0);
 }
 
 /*

Modified: head/sys/powerpc/booke/pmap.c
==============================================================================
--- head/sys/powerpc/booke/pmap.c	Mon Jan 15 05:00:26 2018	(r327991)
+++ head/sys/powerpc/booke/pmap.c	Mon Jan 15 06:46:33 2018	(r327992)
@@ -380,7 +380,10 @@ static vm_offset_t mmu_booke_quick_enter_page(mmu_t mm
 static void mmu_booke_quick_remove_page(mmu_t mmu, vm_offset_t addr);
 static int mmu_booke_change_attr(mmu_t mmu, vm_offset_t addr,
     vm_size_t sz, vm_memattr_t mode);
+static int mmu_booke_map_user_ptr(mmu_t mmu, pmap_t pm,
+    volatile const void *uaddr, void **kaddr, size_t ulen, size_t *klen);
+
 
 static mmu_method_t mmu_booke_methods[] = {
 	/* pmap dispatcher interface */
 	MMUMETHOD(mmu_clear_modify,	mmu_booke_clear_modify),
@@ -432,6 +435,7 @@ static mmu_method_t mmu_booke_methods[] = {
 	MMUMETHOD(mmu_kremove,		mmu_booke_kremove),
 	MMUMETHOD(mmu_unmapdev,		mmu_booke_unmapdev),
 	MMUMETHOD(mmu_change_attr,	mmu_booke_change_attr),
+	MMUMETHOD(mmu_map_user_ptr,	mmu_booke_map_user_ptr),
 
 	/* dumpsys() support */
 	MMUMETHOD(mmu_dumpsys_map,	mmu_booke_dumpsys_map),
@@ -2265,6 +2269,26 @@ mmu_booke_kremove(mmu_t mmu, vm_offset_t va)
 
 	tlb_miss_unlock();
 	mtx_unlock_spin(&tlbivax_mutex);
+}
+
+/*
+ * Provide a kernel pointer corresponding to a given userland pointer.
+ * The returned pointer is valid until the next time this function is
+ * called in this thread. This is used internally in copyin/copyout.
+ */
+int
+mmu_booke_map_user_ptr(mmu_t mmu, pmap_t pm, volatile const void *uaddr,
+    void **kaddr, size_t ulen, size_t *klen)
+{
+
+	if ((uintptr_t)uaddr + ulen > VM_MAXUSER_ADDRESS + PAGE_SIZE)
+		return (EFAULT);
+
+	*kaddr = (void *)(uintptr_t)uaddr;
+	if (klen)
+		*klen = ulen;
+
+	return (0);
 }
 
 /*

Modified: head/sys/powerpc/include/pmap.h
==============================================================================
--- head/sys/powerpc/include/pmap.h	Mon Jan 15 05:00:26 2018	(r327991)
+++ head/sys/powerpc/include/pmap.h	Mon Jan 15 06:46:33 2018	(r327992)
@@ -260,6 +260,8 @@ void	*pmap_mapdev_attr(vm_paddr_t, vm_size_t, vm_mema
 void	pmap_unmapdev(vm_offset_t, vm_size_t);
 void	pmap_page_set_memattr(vm_page_t, vm_memattr_t);
 int	pmap_change_attr(vm_offset_t, vm_size_t, vm_memattr_t);
+int	pmap_map_user_ptr(pmap_t pm, volatile const void *uaddr,
+	    void **kaddr, size_t ulen, size_t *klen);
 void	pmap_deactivate(struct thread *);
 vm_paddr_t pmap_kextract(vm_offset_t);
 int	pmap_dev_direct_mapped(vm_paddr_t, vm_size_t);

Modified: head/sys/powerpc/powerpc/copyinout.c
==============================================================================
--- head/sys/powerpc/powerpc/copyinout.c	Mon Jan 15 05:00:26 2018	(r327991)
+++ head/sys/powerpc/powerpc/copyinout.c	Mon Jan 15 06:46:33 2018	(r327992)
@@ -69,108 +69,8 @@ __FBSDID("$FreeBSD$");
 #include
 #include
-#include
-#include
 #include
 
-#ifdef AIM
-/*
- * Makes sure that the right segment of userspace is mapped in.
- */
-
-#ifdef __powerpc64__
-static __inline void
-set_user_sr(pmap_t pm, volatile const void *addr)
-{
-	struct slb *slb;
-	register_t slbv;
-
-	/* Try lockless look-up first */
-	slb = user_va_to_slb_entry(pm, (vm_offset_t)addr);
-
-	if (slb == NULL) {
-		/* If it isn't there, we need to pre-fault the VSID */
-		PMAP_LOCK(pm);
-		slbv = va_to_vsid(pm, (vm_offset_t)addr) << SLBV_VSID_SHIFT;
-		PMAP_UNLOCK(pm);
-	} else {
-		slbv = slb->slbv;
-	}
-
-	/* Mark segment no-execute */
-	slbv |= SLBV_N;
-
-	/* If we have already set this VSID, we can just return */
-	if (curthread->td_pcb->pcb_cpu.aim.usr_vsid == slbv)
-		return;
-
-	__asm __volatile("isync");
-	curthread->td_pcb->pcb_cpu.aim.usr_segm =
-	    (uintptr_t)addr >> ADDR_SR_SHFT;
-	curthread->td_pcb->pcb_cpu.aim.usr_vsid = slbv;
-	__asm __volatile ("slbie %0; slbmte %1, %2; isync" ::
-	    "r"(USER_ADDR), "r"(slbv), "r"(USER_SLB_SLBE));
-}
-#else
-static __inline void
-set_user_sr(pmap_t pm, volatile const void *addr)
-{
-	register_t vsid;
-
-	vsid = va_to_vsid(pm, (vm_offset_t)addr);
-
-	/* Mark segment no-execute */
-	vsid |= SR_N;
-
-	/* If we have already set this VSID, we can just return */
-	if (curthread->td_pcb->pcb_cpu.aim.usr_vsid == vsid)
-		return;
-
-	__asm __volatile("isync");
-	curthread->td_pcb->pcb_cpu.aim.usr_segm =
-	    (uintptr_t)addr >> ADDR_SR_SHFT;
-	curthread->td_pcb->pcb_cpu.aim.usr_vsid = vsid;
-	__asm __volatile("mtsr %0,%1; isync" :: "n"(USER_SR), "r"(vsid));
-}
-#endif
-
-static __inline int
-map_user_ptr(pmap_t pm, volatile const void *uaddr, void **kaddr, size_t ulen,
-    size_t *klen)
-{
-	size_t l;
-
-	*kaddr = (char *)USER_ADDR + ((uintptr_t)uaddr & ~SEGMENT_MASK);
-
-	l = ((char *)USER_ADDR + SEGMENT_LENGTH) - (char *)(*kaddr);
-	if (l > ulen)
-		l = ulen;
-	if (klen)
-		*klen = l;
-	else if (l != ulen)
-		return (EFAULT);
-
-	set_user_sr(pm, uaddr);
-
-	return (0);
-}
-#else /* Book-E uses a combined kernel/user mapping */
-static __inline int
-map_user_ptr(pmap_t pm, volatile const void *uaddr, void **kaddr, size_t ulen,
-    size_t *klen)
-{
-
-	if ((uintptr_t)uaddr + ulen > VM_MAXUSER_ADDRESS + PAGE_SIZE)
-		return (EFAULT);
-
-	*kaddr = (void *)(uintptr_t)uaddr;
-	if (klen)
-		*klen = ulen;
-
-	return (0);
-}
-#endif
-
 int
 copyout(const void *kaddr, void *udaddr, size_t len)
 {
@@ -194,7 +94,7 @@ copyout(const void *kaddr, void *udaddr, size_t len)
 	up = udaddr;
 
 	while (len > 0) {
-		if (map_user_ptr(pm, udaddr, (void **)&p, len, &l)) {
+		if (pmap_map_user_ptr(pm, udaddr, (void **)&p, len, &l)) {
 			td->td_pcb->pcb_onfault = NULL;
 			return (EFAULT);
 		}
@@ -233,7 +133,7 @@ copyin(const void *udaddr, void *kaddr, size_t len)
 	up = udaddr;
 
 	while (len > 0) {
-		if (map_user_ptr(pm, udaddr, (void **)&p, len, &l)) {
+		if (pmap_map_user_ptr(pm, udaddr, (void **)&p, len, &l)) {
 			td->td_pcb->pcb_onfault = NULL;
 			return (EFAULT);
 		}
@@ -299,7 +199,7 @@ subyte(volatile void *addr, int byte)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -328,7 +228,7 @@ suword32(volatile void *addr, int word)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -357,7 +257,7 @@ suword(volatile void *addr, long word)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -400,7 +300,7 @@ fubyte(volatile const void *addr)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -428,7 +328,7 @@ fuword16(volatile const void *addr)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -456,7 +356,7 @@ fueword32(volatile const void *addr, int32_t *val)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -485,7 +385,7 @@ fueword64(volatile const void *addr, int64_t *val)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -514,7 +414,7 @@ fueword(volatile const void *addr, long *val)
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
+	if (pmap_map_user_ptr(pm, addr, (void **)&p, sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -543,8 +443,8 @@ casueword32(volatile uint32_t *addr, uint32_t old, uin
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, (void *)(uintptr_t)addr, (void **)&p, sizeof(*p),
-	    NULL)) {
+	if (pmap_map_user_ptr(pm, (void *)(uintptr_t)addr, (void **)&p,
+	    sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}
@@ -595,8 +495,8 @@ casueword(volatile u_long *addr, u_long old, u_long *o
 		return (-1);
 	}
 
-	if (map_user_ptr(pm, (void *)(uintptr_t)addr, (void **)&p, sizeof(*p),
-	    NULL)) {
+	if (pmap_map_user_ptr(pm, (void *)(uintptr_t)addr, (void **)&p,
+	    sizeof(*p), NULL)) {
 		td->td_pcb->pcb_onfault = NULL;
 		return (-1);
 	}

Modified: head/sys/powerpc/powerpc/mmu_if.m
==============================================================================
--- head/sys/powerpc/powerpc/mmu_if.m	Mon Jan 15 05:00:26 2018	(r327991)
+++ head/sys/powerpc/powerpc/mmu_if.m	Mon Jan 15 06:46:33 2018	(r327992)
@@ -817,6 +817,27 @@ METHOD void unmapdev {
 	vm_size_t _size;
 };
 
+/**
+ * @brief Provide a kernel-space pointer that can be used to access the
+ * given userland address. The kernel accessible length returned in klen
+ * may be less than the requested length of the userland buffer (ulen). If
+ * so, retry with a higher address to get access to the later parts of the
+ * buffer. Returns EFAULT if no mapping can be made, else zero.
+ *
+ * @param _pm		PMAP for the user pointer.
+ * @param _uaddr	Userland address to map.
+ * @param _kaddr	Corresponding kernel address.
+ * @param _ulen		Length of user buffer.
+ * @param _klen		Available subset of ulen with _kaddr.
+ */
+METHOD int map_user_ptr {
+	mmu_t		_mmu;
+	pmap_t		_pm;
+	volatile const void *_uaddr;
+	void		**_kaddr;
+	size_t		_ulen;
+	size_t		*_klen;
+};
 
 /**
  * @brief Reverse-map a kernel virtual address

Modified: head/sys/powerpc/powerpc/pmap_dispatch.c
==============================================================================
--- head/sys/powerpc/powerpc/pmap_dispatch.c	Mon Jan 15 05:00:26 2018	(r327991)
+++ head/sys/powerpc/powerpc/pmap_dispatch.c	Mon Jan 15 06:46:33 2018	(r327992)
@@ -511,6 +511,15 @@ pmap_kremove(vm_offset_t va)
 	return (MMU_KREMOVE(mmu_obj, va));
 }
 
+int
+pmap_map_user_ptr(pmap_t pm, volatile const void *uaddr, void **kaddr,
+    size_t ulen, size_t *klen)
+{
+
+	CTR2(KTR_PMAP, "%s(%p)", __func__, uaddr);
+	return (MMU_MAP_USER_PTR(mmu_obj, pm, uaddr, kaddr, ulen, klen));
+}
+
 boolean_t
 pmap_dev_direct_mapped(vm_paddr_t pa, vm_size_t size)
 {
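
Usage note (not part of the diff): callers of the new interface are expected to follow the same loop that copyin()/copyout() use above, retrying whenever the kernel-visible window is shorter than the request. Below is a minimal sketch of that pattern; the function and local names (copy_from_user_sketch, uva, kva, seglen) are illustrative only, and the pcb_onfault fault-recovery setup done by the real routines is omitted.

/*
 * Illustrative sketch only: drive pmap_map_user_ptr() the way copyin()
 * does.  seglen may come back smaller than len, in which case we copy
 * what is visible and retry at a higher user address, as described in
 * the mmu_if.m comment.  Fault recovery (pcb_onfault) is omitted.
 */
static int
copy_from_user_sketch(pmap_t pm, const void *udaddr, void *kaddr, size_t len)
{
	const char *uva = udaddr;
	char *kva = kaddr;
	void *p;
	size_t seglen;

	while (len > 0) {
		/* Map (part of) the user buffer at a kernel-accessible address. */
		if (pmap_map_user_ptr(pm, uva, &p, len, &seglen))
			return (EFAULT);

		/* seglen <= len; copy the visible window and advance. */
		bcopy(p, kva, seglen);
		uva += seglen;
		kva += seglen;
		len -= seglen;
	}

	return (0);
}

On Book-E the returned pointer is the user address itself, so the loop body runs once; on the AIM implementations the window is at most one SEGMENT_LENGTH-sized slice mapped at USER_ADDR, so buffers crossing a segment boundary take multiple passes.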