From: Juli Mallett <jmallett@FreeBSD.org>
Date: Sun, 18 Apr 2010 22:32:08 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r206819 - in head/sys: mips/include mips/mips vm
Message-Id: <201004182232.o3IMW860013954@svn.freebsd.org>
List-Id: SVN commit messages for the src tree for head/-current

Author: jmallett
Date: Sun Apr 18 22:32:07 2010
New Revision: 206819

URL: http://svn.freebsd.org/changeset/base/206819

Log:
  o) Add a VM find-space option, VMFS_TLB_ALIGNED_SPACE, which searches the
     address space for an address as aligned by the new pmap_align_tlb()
     function, which is for constraints imposed by the TLB. [1]

  o) Add a kmem_alloc_nofault_space() function, which acts like
     kmem_alloc_nofault() but allows the caller to specify which find-space
     option to use.
     [1]

  o) Use kmem_alloc_nofault_space() with VMFS_TLB_ALIGNED_SPACE to allocate
     the kernel stack address on MIPS. [1]

  o) Make pmap_align_tlb() on MIPS align addresses so that they do not start
     on an odd boundary within the TLB, so that they are suitable for
     insertion as wired entries and do not have to share a TLB entry with
     another mapping, assuming they are appropriately-sized.

  o) Eliminate md_realstack now that the kstack will be appropriately-aligned
     on MIPS.

  o) Increase the number of guard pages to 2 so that we retain the proper
     alignment of the kstack address.

  Reviewed by:	[1] alc
  X-MFC-after:	Making sure alc has not come up with a better interface.

Modified:
  head/sys/mips/include/param.h
  head/sys/mips/include/proc.h
  head/sys/mips/mips/exception.S
  head/sys/mips/mips/genassym.c
  head/sys/mips/mips/machdep.c
  head/sys/mips/mips/pmap.c
  head/sys/mips/mips/swtch.S
  head/sys/mips/mips/vm_machdep.c
  head/sys/vm/pmap.h
  head/sys/vm/vm_extern.h
  head/sys/vm/vm_glue.c
  head/sys/vm/vm_kern.c
  head/sys/vm/vm_map.c
  head/sys/vm/vm_map.h

Modified: head/sys/mips/include/param.h
==============================================================================
--- head/sys/mips/include/param.h	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/include/param.h	Sun Apr 18 22:32:07 2010	(r206819)
@@ -113,12 +113,9 @@
 
 /*
  * The kernel stack needs to be aligned on a (PAGE_SIZE * 2) boundary.
- *
- * Although we allocate 3 pages for the kernel stack we end up using
- * only the 2 pages that are aligned on a (PAGE_SIZE * 2) boundary.
  */
-#define	KSTACK_PAGES		3	/* kernel stack*/
-#define	KSTACK_GUARD_PAGES	1	/* pages of kstack guard; 0 disables */
+#define	KSTACK_PAGES		2	/* kernel stack*/
+#define	KSTACK_GUARD_PAGES	2	/* pages of kstack guard; 0 disables */
 
 #define	UPAGES			2

Modified: head/sys/mips/include/proc.h
==============================================================================
--- head/sys/mips/include/proc.h	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/include/proc.h	Sun Apr 18 22:32:07 2010	(r206819)
@@ -44,7 +44,7 @@
  */
 struct mdthread {
 	int	md_flags;		/* machine-dependent flags */
-	int	md_upte[KSTACK_PAGES - 1];	/* ptes for mapping u pcb */
+	int	md_upte[KSTACK_PAGES];	/* ptes for mapping u pcb */
 	int	md_ss_addr;		/* single step address for ptrace */
 	int	md_ss_instr;		/* single step instruction for ptrace */
 	register_t md_saved_intr;
@@ -53,7 +53,6 @@ struct mdthread {
 	int	md_pc_ctrl;		/* performance counter control */
 	int	md_pc_count;		/* performance counter */
 	int	md_pc_spill;		/* performance counter spill */
-	vm_offset_t md_realstack;
 	void	*md_tls;
 };

Modified: head/sys/mips/mips/exception.S
==============================================================================
--- head/sys/mips/mips/exception.S	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/mips/exception.S	Sun Apr 18 22:32:07 2010	(r206819)
@@ -928,7 +928,7 @@ tlb_insert_random:
 	 */
 	GET_CPU_PCPU(k1)
 	lw	k0, PC_CURTHREAD(k1)
-	lw	k0, TD_REALKSTACK(k0)
+	lw	k0, TD_KSTACK(k0)
 	sltu	k0, k0, sp
 	bnez	k0, _C_LABEL(MipsKernGenException)
 	nop
@@ -975,7 +975,7 @@ tlb_insert_random:
 	 */
 	GET_CPU_PCPU(k1)
 	lw	k0, PC_CURTHREAD(k1)
-	sw	zero, TD_REALKSTACK(k0)
+	sw	zero, TD_KSTACK(k0)
 	move	a1, a0
 	PANIC("kernel stack overflow - trapframe at %p")

Modified: head/sys/mips/mips/genassym.c
==============================================================================
--- head/sys/mips/mips/genassym.c	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/mips/genassym.c	Sun Apr 18 22:32:07 2010	(r206819)
@@ -65,7 +65,7 @@
 __FBSDID("$FreeBSD$");
 ASSYM(TD_PCB, offsetof(struct thread, td_pcb));
 ASSYM(TD_UPTE, offsetof(struct thread, td_md.md_upte));
-ASSYM(TD_REALKSTACK, offsetof(struct thread, td_md.md_realstack));
+ASSYM(TD_KSTACK, offsetof(struct thread, td_kstack));
 ASSYM(TD_FLAGS, offsetof(struct thread, td_flags));
 ASSYM(TD_LOCK, offsetof(struct thread, td_lock));
 ASSYM(TD_FRAME, offsetof(struct thread, td_frame));

Modified: head/sys/mips/mips/machdep.c
==============================================================================
--- head/sys/mips/mips/machdep.c	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/mips/machdep.c	Sun Apr 18 22:32:07 2010	(r206819)
@@ -298,14 +298,13 @@ mips_proc0_init(void)
 	    (long)kstack0));
 	thread0.td_kstack = kstack0;
 	thread0.td_kstack_pages = KSTACK_PAGES;
-	thread0.td_md.md_realstack = roundup2(thread0.td_kstack, PAGE_SIZE * 2);
 	/*
 	 * Do not use cpu_thread_alloc to initialize these fields
 	 * thread0 is the only thread that has kstack located in KSEG0
 	 * while cpu_thread_alloc handles kstack allocated in KSEG2.
 	 */
-	thread0.td_pcb = (struct pcb *)(thread0.td_md.md_realstack +
-	    (thread0.td_kstack_pages - 1) * PAGE_SIZE) - 1;
+	thread0.td_pcb = (struct pcb *)(thread0.td_kstack +
+	    thread0.td_kstack_pages * PAGE_SIZE) - 1;
 	thread0.td_frame = &thread0.td_pcb->pcb_regs;
 
 	/* Steal memory for the dynamic per-cpu area. */

Modified: head/sys/mips/mips/pmap.c
==============================================================================
--- head/sys/mips/mips/pmap.c	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/mips/pmap.c	Sun Apr 18 22:32:07 2010	(r206819)
@@ -2813,6 +2813,21 @@ pmap_align_superpage(vm_object_t object,
 	*addr = ((*addr + SEGOFSET) & ~SEGOFSET) + superpage_offset;
 }
 
+/*
+ * Increase the starting virtual address of the given mapping so
+ * that it is aligned to not be the second page in a TLB entry.
+ * This routine assumes that the length is appropriately-sized so
+ * that the allocation does not share a TLB entry at all if required.
+ */
+void
+pmap_align_tlb(vm_offset_t *addr)
+{
+	if ((*addr & PAGE_SIZE) == 0)
+		return;
+	*addr += PAGE_SIZE;
+	return;
+}
+
 int pmap_pid_dump(int pid);
 
 int

Modified: head/sys/mips/mips/swtch.S
==============================================================================
--- head/sys/mips/mips/swtch.S	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/mips/swtch.S	Sun Apr 18 22:32:07 2010	(r206819)
@@ -339,7 +339,7 @@ blocked_loop:
 	sw	a1, PC_CURTHREAD(a3)
 	lw	a2, TD_PCB(a1)
 	sw	a2, PC_CURPCB(a3)
-	lw	v0, TD_REALKSTACK(a1)
+	lw	v0, TD_KSTACK(a1)
 	li	s0, MIPS_KSEG2_START	# If Uarea addr is below kseg2,
 	bltu	v0, s0, sw2		# no need to insert in TLB.
 	lw	a1, TD_UPTE+0(s7)	# t0 = first u. pte

Modified: head/sys/mips/mips/vm_machdep.c
==============================================================================
--- head/sys/mips/mips/vm_machdep.c	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/mips/mips/vm_machdep.c	Sun Apr 18 22:32:07 2010	(r206819)
@@ -217,13 +217,9 @@ cpu_thread_swapin(struct thread *td)
 	 * part of the thread struct so cpu_switch() can quickly map in
 	 * the pcb struct and kernel stack.
 	 */
-	if (!(pte = pmap_segmap(kernel_pmap, td->td_md.md_realstack)))
-		panic("cpu_thread_swapin: invalid segmap");
-	pte += ((vm_offset_t)td->td_md.md_realstack >> PAGE_SHIFT) & (NPTEPG - 1);
-
-	for (i = 0; i < KSTACK_PAGES - 1; i++) {
+	for (i = 0; i < KSTACK_PAGES; i++) {
+		pte = pmap_pte(kernel_pmap, td->td_kstack + i * PAGE_SIZE);
 		td->td_md.md_upte[i] = *pte & ~(PTE_RO|PTE_WIRED);
-		pte++;
 	}
 }
 
@@ -238,22 +234,14 @@ cpu_thread_alloc(struct thread *td)
 	pt_entry_t *pte;
 	int i;
 
-	if (td->td_kstack & (1 << PAGE_SHIFT))
-		td->td_md.md_realstack = td->td_kstack + PAGE_SIZE;
-	else
-		td->td_md.md_realstack = td->td_kstack;
-
-	td->td_pcb = (struct pcb *)(td->td_md.md_realstack +
-	    (td->td_kstack_pages - 1) * PAGE_SIZE) - 1;
+	KASSERT((td->td_kstack & (1 << PAGE_SHIFT)) == 0, ("kernel stack must be aligned."));
+	td->td_pcb = (struct pcb *)(td->td_kstack +
+	    td->td_kstack_pages * PAGE_SIZE) - 1;
 	td->td_frame = &td->td_pcb->pcb_regs;
 
-	if (!(pte = pmap_segmap(kernel_pmap, td->td_md.md_realstack)))
-		panic("cpu_thread_alloc: invalid segmap");
-	pte += ((vm_offset_t)td->td_md.md_realstack >> PAGE_SHIFT) & (NPTEPG - 1);
-
-	for (i = 0; i < KSTACK_PAGES - 1; i++) {
+	for (i = 0; i < KSTACK_PAGES; i++) {
+		pte = pmap_pte(kernel_pmap, td->td_kstack + i * PAGE_SIZE);
 		td->td_md.md_upte[i] = *pte & ~(PTE_RO|PTE_WIRED);
-		pte++;
 	}
 }

Modified: head/sys/vm/pmap.h
==============================================================================
--- head/sys/vm/pmap.h	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/vm/pmap.h	Sun Apr 18 22:32:07 2010	(r206819)
@@ -98,6 +98,9 @@ extern vm_offset_t kernel_vm_end;
 void		 pmap_align_superpage(vm_object_t, vm_ooffset_t, vm_offset_t *,
 		    vm_size_t);
+#if defined(__mips__)
+void		 pmap_align_tlb(vm_offset_t *);
+#endif
 void		 pmap_change_wiring(pmap_t, vm_offset_t, boolean_t);
 void		 pmap_clear_modify(vm_page_t m);
 void		 pmap_clear_reference(vm_page_t m);

Modified: head/sys/vm/vm_extern.h
==============================================================================
--- head/sys/vm/vm_extern.h	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/vm/vm_extern.h	Sun Apr 18 22:32:07 2010	(r206819)
@@ -47,6 +47,7 @@ vm_offset_t kmem_alloc_contig(vm_map_t m
     vm_paddr_t low, vm_paddr_t high, unsigned long alignment,
     unsigned long boundary, vm_memattr_t memattr);
 vm_offset_t kmem_alloc_nofault(vm_map_t, vm_size_t);
+vm_offset_t kmem_alloc_nofault_space(vm_map_t, vm_size_t, int);
 vm_offset_t kmem_alloc_wait(vm_map_t, vm_size_t);
 void kmem_free(vm_map_t, vm_offset_t, vm_size_t);
 void kmem_free_wakeup(vm_map_t, vm_offset_t, vm_size_t);

Modified: head/sys/vm/vm_glue.c
==============================================================================
--- head/sys/vm/vm_glue.c	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/vm/vm_glue.c	Sun Apr 18 22:32:07 2010	(r206819)
@@ -373,8 +373,17 @@ vm_thread_new(struct thread *td, int pag
 	/*
 	 * Get a kernel virtual address for this thread's kstack.
 	 */
+#if defined(__mips__)
+	/*
+	 * We need to align the kstack's mapped address to fit within
+	 * a single TLB entry.
+	 */
+	ks = kmem_alloc_nofault_space(kernel_map,
+	    (pages + KSTACK_GUARD_PAGES) * PAGE_SIZE, VMFS_TLB_ALIGNED_SPACE);
+#else
 	ks = kmem_alloc_nofault(kernel_map,
 	    (pages + KSTACK_GUARD_PAGES) * PAGE_SIZE);
+#endif
 	if (ks == 0) {
 		printf("vm_thread_new: kstack allocation failed\n");
 		vm_object_deallocate(ksobj);

Modified: head/sys/vm/vm_kern.c
==============================================================================
--- head/sys/vm/vm_kern.c	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/vm/vm_kern.c	Sun Apr 18 22:32:07 2010	(r206819)
@@ -119,6 +119,35 @@ kmem_alloc_nofault(map, size)
 }
 
 /*
+ * kmem_alloc_nofault_space:
+ *
+ *	Allocate a virtual address range with no underlying object and
+ *	no initial mapping to physical memory within the specified
+ *	address space.
+ *	Any mapping from this range to physical memory
+ *	must be explicitly created prior to its use, typically with
+ *	pmap_qenter().  Any attempt to create a mapping on demand
+ *	through vm_fault() will result in a panic.
+ */
+vm_offset_t
+kmem_alloc_nofault_space(map, size, find_space)
+	vm_map_t map;
+	vm_size_t size;
+	int find_space;
+{
+	vm_offset_t addr;
+	int result;
+
+	size = round_page(size);
+	addr = vm_map_min(map);
+	result = vm_map_find(map, NULL, 0, &addr, size, find_space,
+	    VM_PROT_ALL, VM_PROT_ALL, MAP_NOFAULT);
+	if (result != KERN_SUCCESS) {
+		return (0);
+	}
+	return (addr);
+}
+
+/*
  * Allocate wired-down memory in the kernel's address map
  * or a submap.
 */

Modified: head/sys/vm/vm_map.c
==============================================================================
--- head/sys/vm/vm_map.c	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/vm/vm_map.c	Sun Apr 18 22:32:07 2010	(r206819)
@@ -1394,9 +1394,20 @@ vm_map_find(vm_map_t map, vm_object_t ob
 			vm_map_unlock(map);
 			return (KERN_NO_SPACE);
 		}
-		if (find_space == VMFS_ALIGNED_SPACE)
+		switch (find_space) {
+		case VMFS_ALIGNED_SPACE:
 			pmap_align_superpage(object, offset, addr, length);
+			break;
+#ifdef VMFS_TLB_ALIGNED_SPACE
+		case VMFS_TLB_ALIGNED_SPACE:
+			pmap_align_tlb(addr);
+			break;
+#endif
+		default:
+			break;
+		}
+
 		start = *addr;
 	}
 	result = vm_map_insert(map, object, offset, start, start +

Modified: head/sys/vm/vm_map.h
==============================================================================
--- head/sys/vm/vm_map.h	Sun Apr 18 22:21:23 2010	(r206818)
+++ head/sys/vm/vm_map.h	Sun Apr 18 22:32:07 2010	(r206819)
@@ -326,6 +326,9 @@ long vmspace_wired_count(struct vmspace
 #define	VMFS_NO_SPACE		0	/* don't find; use the given range */
 #define	VMFS_ANY_SPACE		1	/* find a range with any alignment */
 #define	VMFS_ALIGNED_SPACE	2	/* find a superpage-aligned range */
+#if defined(__mips__)
+#define	VMFS_TLB_ALIGNED_SPACE	3	/* find a TLB entry aligned range */
+#endif
 
 /*
  * vm_map_wire and vm_map_unwire option flags