From owner-svn-src-head@FreeBSD.ORG  Sat Mar 20 23:00:44 2010
Delivered-To: svn-src-head@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 367E01065673;
	Sat, 20 Mar 2010 23:00:44 +0000 (UTC) (envelope-from marius@FreeBSD.org)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:4f8:fff6::2c])
	by mx1.freebsd.org (Postfix) with ESMTP id 259818FC20;
	Sat, 20 Mar 2010 23:00:44 +0000 (UTC)
Received: from svn.freebsd.org (localhost [127.0.0.1])
	by svn.freebsd.org (8.14.3/8.14.3) with ESMTP id o2KN0igS013652;
	Sat, 20 Mar 2010 23:00:44 GMT (envelope-from marius@svn.freebsd.org)
Received: (from marius@localhost)
	by svn.freebsd.org (8.14.3/8.14.3/Submit) id o2KN0inE013650;
	Sat, 20 Mar 2010 23:00:44 GMT (envelope-from marius@svn.freebsd.org)
Message-Id: <201003202300.o2KN0inE013650@svn.freebsd.org>
From: Marius Strobl
Date: Sat, 20 Mar 2010 23:00:44 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-head@freebsd.org
X-SVN-Group: head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: svn commit: r205399 - head/sys/sparc64/sparc64
X-BeenThere: svn-src-head@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: SVN commit messages for the src tree for head/-current
X-List-Received-Date: Sat, 20 Mar 2010 23:00:44 -0000

Author: marius
Date: Sat Mar 20 23:00:43 2010
New Revision: 205399
URL: http://svn.freebsd.org/changeset/base/205399

Log:
  Improve the KVA space sizing of 186682; on machines with large dTLBs
  we can actually use all of the available lockable entries of the tiny
  dTLB for the kernel TSB. With this change the KVA space sizing happens
  to be more in line with the MI one so up to at least 24GB machines KVA
  doesn't need to be limited manually. This is just another stopgap
  though, the real solution is to take advantage of
  ASI_ATOMIC_QUAD_LDD_PHYS on CPUs providing it so we don't need to lock
  the kernel TSB pages into the dTLB in the first place.

Modified:
  head/sys/sparc64/sparc64/pmap.c

Modified: head/sys/sparc64/sparc64/pmap.c
==============================================================================
--- head/sys/sparc64/sparc64/pmap.c	Sat Mar 20 22:58:54 2010	(r205398)
+++ head/sys/sparc64/sparc64/pmap.c	Sat Mar 20 23:00:43 2010	(r205399)
@@ -236,6 +236,8 @@ PMAP_STATS_VAR(pmap_ncopy_page_soc);
 PMAP_STATS_VAR(pmap_nnew_thread);
 PMAP_STATS_VAR(pmap_nnew_thread_oc);
 
+static inline u_long dtlb_get_data(u_int slot);
+
 /*
  * Quick sort callout for comparing memory regions
  */
@@ -274,6 +276,18 @@ om_cmp(const void *a, const void *b)
 	return (0);
 }
 
+static inline u_long
+dtlb_get_data(u_int slot)
+{
+
+	/*
+	 * We read ASI_DTLB_DATA_ACCESS_REG twice in order to work
+	 * around errata of USIII and beyond.
+	 */
+	(void)ldxa(TLB_DAR_SLOT(slot), ASI_DTLB_DATA_ACCESS_REG);
+	return (ldxa(TLB_DAR_SLOT(slot), ASI_DTLB_DATA_ACCESS_REG));
+}
+
 /*
  * Bootstrap the system enough to run with virtual memory.
 */
@@ -287,11 +301,13 @@ pmap_bootstrap(u_int cpu_impl)
 	vm_paddr_t pa;
 	vm_size_t physsz;
 	vm_size_t virtsz;
+	u_long data;
 	phandle_t pmem;
 	phandle_t vmem;
-	int sz;
+	u_int dtlb_slots_avail;
 	int i;
 	int j;
+	int sz;
 
 	/*
 	 * Find out what physical memory is available from the PROM and
@@ -336,22 +352,30 @@ pmap_bootstrap(u_int cpu_impl)
 	/*
 	 * Calculate the size of kernel virtual memory, and the size and mask
 	 * for the kernel TSB based on the phsyical memory size but limited
-	 * by letting the kernel TSB take up no more than half of the dTLB
-	 * slots available for locked entries.
-	 */
+	 * by the amount of dTLB slots available for locked entries (given
+	 * that for spitfire-class CPUs all of the dt64 slots can hold locked
+	 * entries but there is no large dTLB for unlocked ones, we don't use
+	 * more than half of it for locked entries).
+	 */
+	dtlb_slots_avail = 0;
+	for (i = 0; i < dtlb_slots; i++) {
+		data = dtlb_get_data(i);
+		if ((data & (TD_V | TD_L)) != (TD_V | TD_L))
+			dtlb_slots_avail++;
+	}
+#ifdef SMP
+	dtlb_slots_avail -= PCPU_PAGES;
+#endif
+	if (cpu_impl >= CPU_IMPL_ULTRASPARCI &&
+	    cpu_impl < CPU_IMPL_ULTRASPARCIII)
+		dtlb_slots_avail /= 2;
 	virtsz = roundup(physsz, PAGE_SIZE_4M << (PAGE_SHIFT - TTE_SHIFT));
 	virtsz = MIN(virtsz,
-	    (dtlb_slots / 2 * PAGE_SIZE_4M) << (PAGE_SHIFT - TTE_SHIFT));
+	    (dtlb_slots_avail * PAGE_SIZE_4M) << (PAGE_SHIFT - TTE_SHIFT));
 	vm_max_kernel_address = VM_MIN_KERNEL_ADDRESS + virtsz;
 	tsb_kernel_size = virtsz >> (PAGE_SHIFT - TTE_SHIFT);
 	tsb_kernel_mask = (tsb_kernel_size >> TTE_SHIFT) - 1;
 
-	if (kernel_tlb_slots + PCPU_PAGES + tsb_kernel_size / PAGE_SIZE_4M +
-	    1 /* PROM page */ + 1 /* spare */ > dtlb_slots)
-		panic("pmap_bootstrap: insufficient dTLB entries");
-	if (kernel_tlb_slots + 1 /* PROM page */ + 1 /* spare */ > itlb_slots)
-		panic("pmap_bootstrap: insufficient iTLB entries");
-
 	/*
 	 * Allocate the kernel TSB and lock it in the TLB.
 	 */
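
The 24GB figure in the log follows from the sizing arithmetic in the hunk
above: each 4MB TSB page locked into the dTLB maps
PAGE_SIZE_4M << (PAGE_SHIFT - TTE_SHIFT) bytes of KVA, and virtsz is clamped
to what the remaining lockable slots can cover. The standalone sketch below
only reproduces that arithmetic outside the kernel; the constants (8KB base
pages, 16-byte TSB entries) and the twelve free lockable slots are
illustrative assumptions, not values quoted from the sparc64 headers.

#include <stdio.h>

/* Illustrative stand-ins for the sparc64 constants (assumed values). */
#define PAGE_SHIFT	13			/* 8KB base pages */
#define TTE_SHIFT	4			/* 16-byte TSB entries */
#define PAGE_SIZE_4M	(1ULL << 22)		/* 4MB superpage */

int
main(void)
{
	unsigned long long physsz = 24ULL << 30;	/* pretend 24GB of RAM */
	unsigned long long dtlb_slots_avail = 12;	/* free lockable slots (assumed) */
	unsigned long long span, virtsz;

	/* KVA mapped per locked 4MB TSB page. */
	span = PAGE_SIZE_4M << (PAGE_SHIFT - TTE_SHIFT);
	/* roundup(physsz, span), then clamp to what the free slots can hold. */
	virtsz = (physsz + span - 1) & ~(span - 1);
	if (virtsz > dtlb_slots_avail * span)
		virtsz = dtlb_slots_avail * span;

	printf("KVA per lockable dTLB slot: %llu GB\n", span >> 30);
	printf("usable KVA:                 %llu GB\n", virtsz >> 30);
	return (0);
}

With those assumed values each lockable slot covers 2GB of KVA, so roughly a
dozen free slots suffice for a 24GB machine, which is consistent with the
log's statement that KVA no longer needs to be limited manually up to at
least 24GB.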