From: Alan Cox <alc@FreeBSD.org>
Date: Sat, 10 Jul 2010 18:22:44 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r209887 - in head/sys: amd64/amd64 i386/i386

Author: alc
Date: Sat Jul 10 18:22:44 2010
New Revision: 209887
URL: http://svn.freebsd.org/changeset/base/209887

Log:
  Reduce the number of global TLB shootdowns generated by pmap_qenter().
  Specifically, teach pmap_qenter() to recognize the case when it is being
  asked to replace a mapping with the very same mapping and not generate
  a shootdown.

  Unfortunately, the buffer cache commonly passes an entire buffer to
  pmap_qenter() when only a subset of the mappings is changing.  For the
  extension of buffers in allocbuf() this was resulting in unnecessary
  shootdowns.  The addition of new pages to the end of the buffer need not
  and did not trigger a shootdown, but overwriting the initial mappings
  with the very same mappings was seen as a change that necessitated a
  shootdown.  With this change, that is no longer so.

  For a "buildworld" on amd64, this change eliminates 14-15% of the
  pmap_invalidate_range() shootdowns, and about 4% of the overall
  shootdowns.

  MFC after:	3 weeks
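The logic of the change is easy to see in miniature.  The following is a
self-contained userland sketch, not FreeBSD kernel code: the PG_* values,
the pte[] array, and the qenter() and invalidations names are simplified
stand-ins for the real page-table machinery, kept only to show when the
shootdown is and is not issued.

    /*
     * Sketch of the r209887 idea: skip the PTE store when the new
     * mapping equals the current one, and issue a TLB invalidation
     * only if some valid entry was actually overwritten.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define PG_V     0x001ULL                  /* entry is valid */
    #define PG_RW    0x002ULL                  /* entry is writable */
    #define PG_FRAME 0x000ffffffffff000ULL     /* physical frame mask */

    static uint64_t pte[4];        /* toy page-table entries */
    static int invalidations;      /* counts "TLB shootdowns" */

    static void
    qenter(int idx, const uint64_t *pa, int count)
    {
            uint64_t oldpte = 0;
            int i;

            for (i = 0; i < count; i++) {
                    /* Touch the entry only if the frame actually changes. */
                    if ((pte[idx + i] & PG_FRAME) != pa[i]) {
                            oldpte |= pte[idx + i];
                            pte[idx + i] = pa[i] | PG_RW | PG_V;
                    }
            }
            /* Shoot down only if a valid entry was overwritten. */
            if ((oldpte & PG_V) != 0)
                    invalidations++;
    }

    int
    main(void)
    {
            uint64_t pa[2] = { 0x1000, 0x2000 };

            qenter(0, pa, 2);      /* fresh entries: no shootdown */
            qenter(0, pa, 2);      /* identical remap: still none */
            pa[1] = 0x3000;
            qenter(0, pa, 2);      /* one entry changes: one shootdown */
            printf("invalidations: %d\n", invalidations);  /* prints 1 */
            return (0);
    }

Before this commit, the second call above would have counted as a shootdown
too, because every pass overwrote valid entries unconditionally.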
Modified:
  head/sys/amd64/amd64/pmap.c
  head/sys/i386/i386/pmap.c

Modified: head/sys/amd64/amd64/pmap.c
==============================================================================
--- head/sys/amd64/amd64/pmap.c	Sat Jul 10 17:46:53 2010	(r209886)
+++ head/sys/amd64/amd64/pmap.c	Sat Jul 10 18:22:44 2010	(r209887)
@@ -1331,19 +1331,22 @@ pmap_map(vm_offset_t *virt, vm_paddr_t s
 void
 pmap_qenter(vm_offset_t sva, vm_page_t *ma, int count)
 {
-	pt_entry_t *endpte, oldpte, *pte;
+	pt_entry_t *endpte, oldpte, pa, *pte;
+	vm_page_t m;
 
 	oldpte = 0;
 	pte = vtopte(sva);
 	endpte = pte + count;
 	while (pte < endpte) {
-		oldpte |= *pte;
-		pte_store(pte, VM_PAGE_TO_PHYS(*ma) | PG_G |
-		    pmap_cache_bits((*ma)->md.pat_mode, 0) | PG_RW | PG_V);
+		m = *ma++;
+		pa = VM_PAGE_TO_PHYS(m) | pmap_cache_bits(m->md.pat_mode, 0);
+		if ((*pte & (PG_FRAME | PG_PTE_CACHE)) != pa) {
+			oldpte |= *pte;
+			pte_store(pte, pa | PG_G | PG_RW | PG_V);
+		}
 		pte++;
-		ma++;
 	}
-	if ((oldpte & PG_V) != 0)
+	if (__predict_false((oldpte & PG_V) != 0))
 		pmap_invalidate_range(kernel_pmap, sva, sva + count *
 		    PAGE_SIZE);
 }

Modified: head/sys/i386/i386/pmap.c
==============================================================================
--- head/sys/i386/i386/pmap.c	Sat Jul 10 17:46:53 2010	(r209886)
+++ head/sys/i386/i386/pmap.c	Sat Jul 10 18:22:44 2010	(r209887)
@@ -1461,19 +1461,22 @@ pmap_map(vm_offset_t *virt, vm_paddr_t s
 void
 pmap_qenter(vm_offset_t sva, vm_page_t *ma, int count)
 {
-	pt_entry_t *endpte, oldpte, *pte;
+	pt_entry_t *endpte, oldpte, pa, *pte;
+	vm_page_t m;
 
 	oldpte = 0;
 	pte = vtopte(sva);
 	endpte = pte + count;
 	while (pte < endpte) {
-		oldpte |= *pte;
-		pte_store(pte, VM_PAGE_TO_PHYS(*ma) | pgeflag |
-		    pmap_cache_bits((*ma)->md.pat_mode, 0) | PG_RW | PG_V);
+		m = *ma++;
+		pa = VM_PAGE_TO_PHYS(m) | pmap_cache_bits(m->md.pat_mode, 0);
+		if ((*pte & (PG_FRAME | PG_PTE_CACHE)) != pa) {
+			oldpte |= *pte;
+			pte_store(pte, pa | pgeflag | PG_RW | PG_V);
+		}
 		pte++;
-		ma++;
 	}
-	if ((oldpte & PG_V) != 0)
+	if (__predict_false((oldpte & PG_V) != 0))
 		pmap_invalidate_range(kernel_pmap, sva, sva + count *
 		    PAGE_SIZE);
 }
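A note on the annotation in the final hunks: in FreeBSD's sys/cdefs.h,
__predict_false() is, modulo compiler-version guards, a thin wrapper around
GCC's branch-prediction hint:

    #define __predict_false(exp)	__builtin_expect((exp), 0)

It asks the compiler to lay out the code so that skipping the invalidation
is the straight-line, fall-through path, which matches the point of the
commit: a pmap_qenter() call that re-establishes identical mappings leaves
oldpte's PG_V bit clear and never reaches the shootdown.  The i386 hunk is
otherwise identical to the amd64 one, except that it uses pgeflag, which is
set at boot only on CPUs that support global pages, in place of the
unconditional PG_G.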