From owner-p4-projects@FreeBSD.ORG Sat Jun 16 16:15:24 2012
Return-Path:
Delivered-To: p4-projects@freebsd.org
Received: by hub.freebsd.org (Postfix, from userid 32767)
	id 45B3F1065674; Sat, 16 Jun 2012 16:15:23 +0000 (UTC)
Delivered-To: perforce@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id DC90D1065673
	for ; Sat, 16 Jun 2012 16:15:22 +0000 (UTC)
	(envelope-from jhb@freebsd.org)
Received: from skunkworks.freebsd.org (skunkworks.freebsd.org [IPv6:2001:4f8:fff6::2d])
	by mx1.freebsd.org (Postfix) with ESMTP id 846BC8FC16
	for ; Sat, 16 Jun 2012 16:15:22 +0000 (UTC)
Received: from skunkworks.freebsd.org (localhost [127.0.0.1])
	by skunkworks.freebsd.org (8.14.4/8.14.4) with ESMTP id q5GGFMVk061176
	for ; Sat, 16 Jun 2012 16:15:22 GMT
	(envelope-from jhb@freebsd.org)
Received: (from perforce@localhost)
	by skunkworks.freebsd.org (8.14.4/8.14.4/Submit) id q5GGFMpm061173
	for perforce@freebsd.org; Sat, 16 Jun 2012 16:15:22 GMT
	(envelope-from jhb@freebsd.org)
Date: Sat, 16 Jun 2012 16:15:22 GMT
Message-Id: <201206161615.q5GGFMpm061173@skunkworks.freebsd.org>
X-Authentication-Warning: skunkworks.freebsd.org: perforce set sender to jhb@freebsd.org using -f
From: John Baldwin
To: Perforce Change Reviews
Precedence: bulk
Cc:
Subject: PERFORCE change 212958 for review
X-BeenThere: p4-projects@freebsd.org
X-Mailman-Version: 2.1.5
List-Id: p4 projects tree changes
List-Unsubscribe:
List-Archive:
List-Post:
List-Help:
List-Subscribe:
X-List-Received-Date: Sat, 16 Jun 2012 16:15:24 -0000

http://p4web.freebsd.org/@@212958?ac=10

Change 212958 by jhb@jhb_jhbbsd on 2012/06/16 16:14:19

	Debugging and test hacks for the problem of write(2) buffers
	reclaiming cache pages instead of free pages.

Affected files ...

.. //depot/projects/fadvise/sys/vm/vm_phys.c#6 edit

Differences ...

==== //depot/projects/fadvise/sys/vm/vm_phys.c#6 (text+ko) ====

@@ -128,6 +128,15 @@
 static void vm_phys_split_pages(vm_page_t m, int oind, struct vm_freelist *fl,
     int order);
 
+static int vm_phys_uncached;
+SYSCTL_INT(_vm, OID_AUTO, phys_uncached, CTLFLAG_RD, &vm_phys_uncached, 0, "");
+static int vm_phys_uc_alloc_pages;
+SYSCTL_INT(_vm, OID_AUTO, phys_uc_alloc_pages, CTLFLAG_RD,
+    &vm_phys_uc_alloc_pages, 0, "");
+static int vm_phys_uc_free_pages;
+SYSCTL_INT(_vm, OID_AUTO, phys_uc_free_pages, CTLFLAG_RD,
+    &vm_phys_uc_free_pages, 0, "");
+
 /*
  * Outputs the state of the physical memory allocator, specifically,
  * the amount of physical memory in each free list.
@@ -495,12 +504,21 @@
 				TAILQ_REMOVE(&alt[oind].pl, m, pageq);
 				alt[oind].lcnt--;
 				m->order = VM_NFREEORDER;
+				if (m->pool == VM_FREEPOOL_CACHE &&
+				    pool != VM_FREEPOOL_CACHE)
+					vm_phys_uc_alloc_pages++;
 				vm_phys_set_pool(pool, m, oind);
 				vm_phys_split_pages(m, oind, fl, order);
 				return (m);
 			}
 		}
 	}
+
+	/*
+	 * XXX: If we get here, do deferred merging of cache pages
+	 * with pages from another pool to satisfy the request and
+	 * try again.  This may be quite hard to do.
+	 */
 	return (NULL);
 }
 
@@ -681,8 +699,30 @@
 		TAILQ_REMOVE(&fl[order].pl, m_buddy, pageq);
 		fl[order].lcnt--;
 		m_buddy->order = VM_NFREEORDER;
-		if (m_buddy->pool != m->pool)
+		if (m_buddy->pool != m->pool) {
+#if 1
+#if 1
+			if (m_buddy->pool == VM_FREEPOOL_CACHE ||
+			    m->pool == VM_FREEPOOL_CACHE)
+				break;
+#endif
+			if (m_buddy->pool == VM_FREEPOOL_CACHE)
+				vm_phys_uc_free_pages++;
 			vm_phys_set_pool(m->pool, m_buddy, order);
+#else
+			if (m_buddy->pool < m->pool) {
+				if (m_buddy->pool == VM_FREEPOOL_CACHE)
+					vm_phys_uc_free_pages++;
+				vm_phys_set_pool(m->pool, m_buddy,
+				    order);
+			} else {
+				if (m->pool == VM_FREEPOOL_CACHE)
+					vm_phys_uc_free_pages++;
+				vm_phys_set_pool(m_buddy->pool, m,
+				    order);
+			}
+#endif
+		}
 		order++;
 		pa &= ~(((vm_paddr_t)1 << (PAGE_SHIFT + order)) - 1);
 		m = &seg->first_page[atop(pa - seg->start)];
@@ -743,8 +783,12 @@
 {
 	vm_page_t m_tmp;
 
-	for (m_tmp = m; m_tmp < &m[1 << order]; m_tmp++)
+	for (m_tmp = m; m_tmp < &m[1 << order]; m_tmp++) {
+		if (m_tmp->pool == VM_FREEPOOL_CACHE &&
+		    pool != VM_FREEPOOL_CACHE)
+			vm_phys_uncached++;
 		m_tmp->pool = pool;
+	}
 }
 
 /*
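
For reference, a minimal userland sketch of how the new read-only counters could be watched while reproducing the write(2) workload.  Only the sysctl names (vm.phys_uncached, vm.phys_uc_alloc_pages, vm.phys_uc_free_pages) follow from the SYSCTL_INT() declarations in the patch; the program itself is illustrative and not part of the change.

	#include <sys/types.h>
	#include <sys/sysctl.h>

	#include <stdio.h>
	#include <stdlib.h>

	/* Read one of the debug counters exported by the patch. */
	static int
	read_counter(const char *name)
	{
		int val;
		size_t len;

		len = sizeof(val);
		if (sysctlbyname(name, &val, &len, NULL, 0) == -1) {
			perror(name);
			exit(1);
		}
		return (val);
	}

	int
	main(void)
	{

		printf("vm.phys_uncached:       %d\n",
		    read_counter("vm.phys_uncached"));
		printf("vm.phys_uc_alloc_pages: %d\n",
		    read_counter("vm.phys_uc_alloc_pages"));
		printf("vm.phys_uc_free_pages:  %d\n",
		    read_counter("vm.phys_uc_free_pages"));
		return (0);
	}

The same values can also be read from the command line with sysctl(8), e.g. "sysctl vm.phys_uncached".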