From owner-svn-src-all@freebsd.org Fri Nov 10 04:14:50 2017
Message-Id: <201711100414.vAA4EnOd070425@repo.freebsd.org>
From: Justin Hibbits <jhibbits@FreeBSD.org>
Date: Fri, 10 Nov 2017 04:14:49 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r325628 - head/sys/powerpc/booke
X-SVN-Group: head
X-SVN-Commit-Author: jhibbits
X-SVN-Commit-Paths: head/sys/powerpc/booke
X-SVN-Commit-Revision: 325628
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Author: jhibbits
Date: Fri Nov 10 04:14:48 2017
New Revision: 325628
URL: 
https://svnweb.freebsd.org/changeset/base/325628

Log:
  Book-E pmap_mapdev_attr() improvements

  * Check TLB1 in all mapdev cases, in case the memattr matches an
    existing mapping (doesn't need to be MAP_DEFAULT).
  * Fix mapping where the starting address is not a multiple of the widest
    size base.  For instance, it will now properly map 0xffffef000, size
    0x11000 using 2 TLB entries, basing it at 0x****f000, instead of
    0x***00000.

  MFC after:	2 weeks

Modified:
  head/sys/powerpc/booke/pmap.c

Modified: head/sys/powerpc/booke/pmap.c
==============================================================================
--- head/sys/powerpc/booke/pmap.c	Fri Nov 10 02:09:37 2017	(r325627)
+++ head/sys/powerpc/booke/pmap.c	Fri Nov 10 04:14:48 2017	(r325628)
@@ -3471,16 +3471,17 @@ mmu_booke_mapdev_attr(mmu_t mmu, vm_paddr_t pa, vm_siz
 	 * check whether a sequence of TLB1 entries exist that match the
 	 * requirement, but now only checks the easy case.
 	 */
-	if (ma == VM_MEMATTR_DEFAULT) {
-		for (i = 0; i < TLB1_ENTRIES; i++) {
-			tlb1_read_entry(&e, i);
-			if (!(e.mas1 & MAS1_VALID))
-				continue;
-			if (pa >= e.phys &&
-			    (pa + size) <= (e.phys + e.size))
-				return (void *)(e.virt +
-				    (vm_offset_t)(pa - e.phys));
-		}
+	for (i = 0; i < TLB1_ENTRIES; i++) {
+		tlb1_read_entry(&e, i);
+		if (!(e.mas1 & MAS1_VALID))
+			continue;
+		if (pa >= e.phys &&
+		    (pa + size) <= (e.phys + e.size) &&
+		    (ma == VM_MEMATTR_DEFAULT ||
+		     tlb_calc_wimg(pa, ma) ==
+		     (e.mas2 & (MAS2_WIMGE_MASK & ~_TLB_ENTRY_SHARED))))
+			return (void *)(e.virt +
+			    (vm_offset_t)(pa - e.phys));
 	}
 
 	size = roundup(size, PAGE_SIZE);
@@ -3494,10 +3495,19 @@ mmu_booke_mapdev_attr(mmu_t mmu, vm_paddr_t pa, vm_siz
 	 * With a sparse mapdev, align to the largest starting region.  This
 	 * could feasibly be optimized for a 'best-fit' alignment, but that
 	 * calculation could be very costly.
+	 * Align to the smaller of:
+	 * - first set bit in overlap of (pa & size mask)
+	 * - largest size envelope
+	 *
+	 * It's possible the device mapping may start at a PA that's not larger
+	 * than the size mask, so we need to offset in to maximize the TLB entry
+	 * range and minimize the number of used TLB entries.
 	 */
 	do {
 		tmpva = tlb1_map_base;
-		va = roundup(tlb1_map_base, 1 << flsl(size));
+		sz = ffsl(((1 << flsl(size-1)) - 1) & pa);
+		sz = sz ? min(roundup(sz + 3, 4), flsl(size) - 1) : flsl(size) - 1;
+		va = roundup(tlb1_map_base, 1 << sz) | (((1 << sz) - 1) & pa);
 #ifdef __powerpc64__
 	} while (!atomic_cmpset_long(&tlb1_map_base, tmpva, va + size));
 #else
@@ -3514,6 +3524,13 @@ mmu_booke_mapdev_attr(mmu_t mmu, vm_paddr_t pa, vm_siz
 	do {
 		sz = 1 << (ilog2(size) & ~1);
+		/* Align size to PA */
+		if (pa % sz != 0) {
+			do {
+				sz >>= 2;
+			} while (pa % sz != 0);
+		}
+		/* Now align from there to VA */
 		if (va % sz != 0) {
 			do {
 				sz >>= 2;
@@ -3522,8 +3539,9 @@ mmu_booke_mapdev_attr(mmu_t mmu, vm_paddr_t pa, vm_siz
 		if (bootverbose)
 			printf("Wiring VA=%lx to PA=%jx (size=%lx)\n",
 			    va, (uintmax_t)pa, sz);
-		tlb1_set_entry(va, pa, sz,
-		    _TLB_ENTRY_SHARED | tlb_calc_wimg(pa, ma));
+		if (tlb1_set_entry(va, pa, sz,
+		    _TLB_ENTRY_SHARED | tlb_calc_wimg(pa, ma)) < 0)
+			return (NULL);
 		size -= sz;
 		pa += sz;
 		va += sz;