From owner-freebsd-arch@freebsd.org  Wed Aug  5 13:14:52 2015
Date: Wed, 5 Aug 2015 16:14:44 +0300
From: Konstantin Belousov <kostikbel@gmail.com>
To: arch@freebsd.org
Subject: The kern.kstack_pages tunable for some architectures
Message-ID: <20150805131444.GY2072@kib.kiev.ua>

The patch at the end of this message adds a kern.kstack_pages tunable for the
amd64, arm, i386, and powerpc architectures.  I tested it on amd64 and i386.
From visual inspection it should also work on arm and powerpc: on all listed
arches except i386, the thread0 kstack is initialized after init_param1() has
run.  For amd64 this ordering is ensured by the patch; for i386 the thread0
stack size comes from the TD0_KSTACK_PAGES define in param.h, since the kernel
environment cannot be used from locore.

What makes me wonder is the USPACE_SVC_STACK_TOP define for arm and the USPACE
define for powerpc.  They use the global value (KSTACK_PAGES before the patch,
kstack_pages after it) to calculate the address of the pcb, which is wrong for
a non-default stack size.

I gave up on arm64 and sparc64, because they size statically defined objects
from KSTACK_PAGES and I do not understand those architectures' bootstrap well
enough to touch the code.  Mips has an even more worrying use of KSTACK_PAGES:
it sizes the storage for the kstack PTEs in the thread's machine-dependent
struct, which should cause the same problems as USPACE_SVC_STACK_TOP and
USPACE.

Does anybody have an opinion on the change?  Could somebody test it, at least
on some arm boards and on powerpc?
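For testers: the value is fetched with TUNABLE_INT_FETCH() from init_param1(),
so it can be overridden from the loader, e.g. by putting kern.kstack_pages=6
into /boot/loader.conf; when nothing is set, the compiled-in KSTACK_PAGES
default is kept.  To make the pcb concern above concrete, here is a small
sketch.  It is not part of the patch, only an illustration; the two helper
names are invented for the example, and it assumes the usual layout with the
pcb sitting at the top of the kernel stack:

#include <sys/param.h>
#include <sys/proc.h>
#include <machine/pcb.h>

extern int kstack_pages;	/* kern/subr_param.c, replaces KSTACK_PAGES */

/*
 * What the USPACE/USPACE_SVC_STACK_TOP style macros effectively do: the
 * system-wide page count is used, so the result is only correct for a
 * thread whose stack really has the default size.
 */
static struct pcb *
pcb_from_global_size(struct thread *td)
{

	return ((struct pcb *)(td->td_kstack + kstack_pages * PAGE_SIZE) - 1);
}

/*
 * Using the per-thread page count is correct for any stack size; this is
 * what the i386 sys_machdep.c hunk below switches to.
 */
static struct pcb *
pcb_from_thread_size(struct thread *td)
{

	return ((struct pcb *)(td->td_kstack +
	    td->td_kstack_pages * PAGE_SIZE) - 1);
}
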
diff --git a/sys/amd64/amd64/genassym.c b/sys/amd64/amd64/genassym.c
index 5b1e089..d087fdc 100644
--- a/sys/amd64/amd64/genassym.c
+++ b/sys/amd64/amd64/genassym.c
@@ -93,7 +93,6 @@ ASSYM(TDP_KTHREAD, TDP_KTHREAD);
 ASSYM(V_TRAP, offsetof(struct vmmeter, v_trap));
 ASSYM(V_SYSCALL, offsetof(struct vmmeter, v_syscall));
 ASSYM(V_INTR, offsetof(struct vmmeter, v_intr));
-ASSYM(KSTACK_PAGES, KSTACK_PAGES);
 ASSYM(PAGE_SIZE, PAGE_SIZE);
 ASSYM(NPTEPG, NPTEPG);
 ASSYM(NPDEPG, NPDEPG);
diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index a571390..f579f98 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -1516,12 +1516,6 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	char *env;
 	size_t kstack0_sz;
 
-	thread0.td_kstack = physfree + KERNBASE;
-	thread0.td_kstack_pages = KSTACK_PAGES;
-	kstack0_sz = thread0.td_kstack_pages * PAGE_SIZE;
-	bzero((void *)thread0.td_kstack, kstack0_sz);
-	physfree += kstack0_sz;
-
 	/*
 	 * This may be done better later if it gets more high level
 	 * components in it. If so just link td->td_proc here.
@@ -1533,6 +1527,12 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	/* Init basic tunables, hz etc */
 	init_param1();
 
+	thread0.td_kstack = physfree + KERNBASE;
+	thread0.td_kstack_pages = kstack_pages;
+	kstack0_sz = thread0.td_kstack_pages * PAGE_SIZE;
+	bzero((void *)thread0.td_kstack, kstack0_sz);
+	physfree += kstack0_sz;
+
 	/*
 	 * make gdt memory segments
 	 */
diff --git a/sys/amd64/amd64/mp_machdep.c b/sys/amd64/amd64/mp_machdep.c
index a2ca9e2..0562ca4 100644
--- a/sys/amd64/amd64/mp_machdep.c
+++ b/sys/amd64/amd64/mp_machdep.c
@@ -348,7 +348,7 @@ native_start_all_aps(void)
 
 		/* allocate and set up an idle stack data page */
 		bootstacks[cpu] = (void *)kmem_malloc(kernel_arena,
-		    KSTACK_PAGES * PAGE_SIZE, M_WAITOK | M_ZERO);
+		    kstack_pages * PAGE_SIZE, M_WAITOK | M_ZERO);
 		doublefault_stack = (char *)kmem_malloc(kernel_arena,
 		    PAGE_SIZE, M_WAITOK | M_ZERO);
 		nmi_stack = (char *)kmem_malloc(kernel_arena, PAGE_SIZE,
@@ -356,7 +356,7 @@ native_start_all_aps(void)
 		dpcpu = (void *)kmem_malloc(kernel_arena, DPCPU_SIZE,
 		    M_WAITOK | M_ZERO);
 
-		bootSTK = (char *)bootstacks[cpu] + KSTACK_PAGES * PAGE_SIZE - 8;
+		bootSTK = (char *)bootstacks[cpu] + kstack_pages * PAGE_SIZE - 8;
 		bootAP = cpu;
 
 		/* attempt to start the Application Processor */
diff --git a/sys/arm/arm/machdep.c b/sys/arm/arm/machdep.c
index 67e081d..a664ac4 100644
--- a/sys/arm/arm/machdep.c
+++ b/sys/arm/arm/machdep.c
@@ -1066,7 +1066,7 @@ init_proc0(vm_offset_t kstack)
 	proc_linkup0(&proc0, &thread0);
 	thread0.td_kstack = kstack;
 	thread0.td_pcb = (struct pcb *)
-	    (thread0.td_kstack + KSTACK_PAGES * PAGE_SIZE) - 1;
+	    (thread0.td_kstack + kstack_pages * PAGE_SIZE) - 1;
 	thread0.td_pcb->pcb_flags = 0;
 	thread0.td_pcb->pcb_vfpcpu = -1;
 	thread0.td_pcb->pcb_vfpstate.fpscr = VFPSCR_DN | VFPSCR_FZ;
@@ -1360,7 +1360,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE * MAXCPU);
 	valloc_pages(abtstack, ABT_STACK_SIZE * MAXCPU);
 	valloc_pages(undstack, UND_STACK_SIZE * MAXCPU);
-	valloc_pages(kernelstack, KSTACK_PAGES * MAXCPU);
+	valloc_pages(kernelstack, kstack_pages * MAXCPU);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 
 	/*
@@ -1614,7 +1614,7 @@ initarm(struct arm_boot_params *abp)
 	irqstack = pmap_preboot_get_vpages(IRQ_STACK_SIZE * MAXCPU);
 	abtstack = pmap_preboot_get_vpages(ABT_STACK_SIZE * MAXCPU);
 	undstack = pmap_preboot_get_vpages(UND_STACK_SIZE * MAXCPU );
-	kernelstack = pmap_preboot_get_vpages(KSTACK_PAGES * MAXCPU);
+	kernelstack = pmap_preboot_get_vpages(kstack_pages * MAXCPU);
 
 	/* Allocate message buffer. */
 	msgbufp = (void *)pmap_preboot_get_vpages(
diff --git a/sys/arm/at91/at91_machdep.c b/sys/arm/at91/at91_machdep.c
index 62edfa6..2d5dda2 100644
--- a/sys/arm/at91/at91_machdep.c
+++ b/sys/arm/at91/at91_machdep.c
@@ -512,7 +512,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE * MAXCPU);
 	valloc_pages(abtstack, ABT_STACK_SIZE * MAXCPU);
 	valloc_pages(undstack, UND_STACK_SIZE * MAXCPU);
-	valloc_pages(kernelstack, KSTACK_PAGES * MAXCPU);
+	valloc_pages(kernelstack, kstack_pages * MAXCPU);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 
 	/*
@@ -553,7 +553,7 @@ initarm(struct arm_boot_params *abp)
 	pmap_map_chunk(l1pagetable, undstack.pv_va, undstack.pv_pa,
 	    UND_STACK_SIZE * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
 	pmap_map_chunk(l1pagetable, kernelstack.pv_va, kernelstack.pv_pa,
-	    KSTACK_PAGES * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
+	    kstack_pages * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
 	pmap_map_chunk(l1pagetable, kernel_l1pt.pv_va, kernel_l1pt.pv_pa,
 	    L1_TABLE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_PAGETABLE);
diff --git a/sys/arm/cavium/cns11xx/econa_machdep.c b/sys/arm/cavium/cns11xx/econa_machdep.c
index 1532cec..1591053 100644
--- a/sys/arm/cavium/cns11xx/econa_machdep.c
+++ b/sys/arm/cavium/cns11xx/econa_machdep.c
@@ -222,7 +222,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE);
 	valloc_pages(abtstack, ABT_STACK_SIZE);
 	valloc_pages(undstack, UND_STACK_SIZE);
-	valloc_pages(kernelstack, KSTACK_PAGES);
+	valloc_pages(kernelstack, kstack_pages);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 
 	/*
@@ -260,7 +260,7 @@ initarm(struct arm_boot_params *abp)
 	pmap_map_chunk(l1pagetable, undstack.pv_va, undstack.pv_pa,
 	    UND_STACK_SIZE * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
 	pmap_map_chunk(l1pagetable, kernelstack.pv_va, kernelstack.pv_pa,
-	    KSTACK_PAGES * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
+	    kstack_pages * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
 	pmap_map_chunk(l1pagetable, kernel_l1pt.pv_va, kernel_l1pt.pv_pa,
 	    L1_TABLE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_PAGETABLE);
diff --git a/sys/arm/include/param.h b/sys/arm/include/param.h
index 6267154..d3aa01b 100644
--- a/sys/arm/include/param.h
+++ b/sys/arm/include/param.h
@@ -131,7 +131,7 @@
 #define KSTACK_GUARD_PAGES 1
 #endif /* !KSTACK_GUARD_PAGES */
 
-#define USPACE_SVC_STACK_TOP (KSTACK_PAGES * PAGE_SIZE)
+#define USPACE_SVC_STACK_TOP (kstack_pages * PAGE_SIZE)
 
 /*
  * Mach derived conversion macros
diff --git a/sys/arm/samsung/s3c2xx0/s3c24x0_machdep.c b/sys/arm/samsung/s3c2xx0/s3c24x0_machdep.c
index bdd6cc6..bd3c230 100644
--- a/sys/arm/samsung/s3c2xx0/s3c24x0_machdep.c
+++ b/sys/arm/samsung/s3c2xx0/s3c24x0_machdep.c
@@ -271,7 +271,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE);
 	valloc_pages(abtstack, ABT_STACK_SIZE);
 	valloc_pages(undstack, UND_STACK_SIZE);
-	valloc_pages(kernelstack, KSTACK_PAGES);
+	valloc_pages(kernelstack, kstack_pages);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 	/*
 	 * Now we start construction of the L1 page table
@@ -307,7 +307,7 @@ initarm(struct arm_boot_params *abp)
 	pmap_map_chunk(l1pagetable, undstack.pv_va, undstack.pv_pa,
 	    UND_STACK_SIZE * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
 	pmap_map_chunk(l1pagetable, kernelstack.pv_va, kernelstack.pv_pa,
-	    KSTACK_PAGES * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
+	    kstack_pages * PAGE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE);
 	pmap_map_chunk(l1pagetable, kernel_l1pt.pv_va, kernel_l1pt.pv_pa,
 	    L1_TABLE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_PAGETABLE);
diff --git a/sys/arm/xscale/i80321/ep80219_machdep.c b/sys/arm/xscale/i80321/ep80219_machdep.c
index 9881371..d93ed74 100644
--- a/sys/arm/xscale/i80321/ep80219_machdep.c
+++ b/sys/arm/xscale/i80321/ep80219_machdep.c
@@ -225,7 +225,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE);
 	valloc_pages(abtstack, ABT_STACK_SIZE);
 	valloc_pages(undstack, UND_STACK_SIZE);
-	valloc_pages(kernelstack, KSTACK_PAGES);
+	valloc_pages(kernelstack, kstack_pages);
 	alloc_pages(minidataclean.pv_pa, 1);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 	/*
diff --git a/sys/arm/xscale/i80321/iq31244_machdep.c b/sys/arm/xscale/i80321/iq31244_machdep.c
index 0df3609..52d94af 100644
--- a/sys/arm/xscale/i80321/iq31244_machdep.c
+++ b/sys/arm/xscale/i80321/iq31244_machdep.c
@@ -226,7 +226,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE);
 	valloc_pages(abtstack, ABT_STACK_SIZE);
 	valloc_pages(undstack, UND_STACK_SIZE);
-	valloc_pages(kernelstack, KSTACK_PAGES);
+	valloc_pages(kernelstack, kstack_pages);
 	alloc_pages(minidataclean.pv_pa, 1);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 	/*
diff --git a/sys/arm/xscale/i8134x/crb_machdep.c b/sys/arm/xscale/i8134x/crb_machdep.c
index 568be9f..138ed09 100644
--- a/sys/arm/xscale/i8134x/crb_machdep.c
+++ b/sys/arm/xscale/i8134x/crb_machdep.c
@@ -225,7 +225,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE);
 	valloc_pages(abtstack, ABT_STACK_SIZE);
 	valloc_pages(undstack, UND_STACK_SIZE);
-	valloc_pages(kernelstack, KSTACK_PAGES);
+	valloc_pages(kernelstack, kstack_pages);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 	/*
 	 * Now we start construction of the L1 page table
diff --git a/sys/arm/xscale/ixp425/avila_machdep.c b/sys/arm/xscale/ixp425/avila_machdep.c
index f37aa29..0d5d9bb 100644
--- a/sys/arm/xscale/ixp425/avila_machdep.c
+++ b/sys/arm/xscale/ixp425/avila_machdep.c
@@ -295,7 +295,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE);
 	valloc_pages(abtstack, ABT_STACK_SIZE);
 	valloc_pages(undstack, UND_STACK_SIZE);
-	valloc_pages(kernelstack, KSTACK_PAGES);
+	valloc_pages(kernelstack, kstack_pages);
 	alloc_pages(minidataclean.pv_pa, 1);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
diff --git a/sys/arm/xscale/pxa/pxa_machdep.c b/sys/arm/xscale/pxa/pxa_machdep.c
index 4480c95..41e49c3 100644
--- a/sys/arm/xscale/pxa/pxa_machdep.c
+++ b/sys/arm/xscale/pxa/pxa_machdep.c
@@ -206,7 +206,7 @@ initarm(struct arm_boot_params *abp)
 	valloc_pages(irqstack, IRQ_STACK_SIZE);
 	valloc_pages(abtstack, ABT_STACK_SIZE);
 	valloc_pages(undstack, UND_STACK_SIZE);
-	valloc_pages(kernelstack, KSTACK_PAGES);
+	valloc_pages(kernelstack, kstack_pages);
 	alloc_pages(minidataclean.pv_pa, 1);
 	valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
 	/*
diff --git a/sys/ddb/db_ps.c b/sys/ddb/db_ps.c
index 553c22e..f38c89f 100644
--- a/sys/ddb/db_ps.c
+++ b/sys/ddb/db_ps.c
@@ -462,7 +462,7 @@ db_findstack_cmd(db_expr_t addr, bool have_addr, db_expr_t dummy3 __unused,
 	for (ks_ce = kstack_cache; ks_ce != NULL; ks_ce = ks_ce->next_ks_entry) {
 		if ((vm_offset_t)ks_ce <= saddr && saddr < (vm_offset_t)ks_ce +
-		    PAGE_SIZE * KSTACK_PAGES) {
+		    PAGE_SIZE * kstack_pages) {
 			db_printf("Cached stack %p\n", ks_ce);
 			return;
 		}
 	}
diff --git a/sys/i386/i386/genassym.c b/sys/i386/i386/genassym.c
index 6a00d23..3087834 100644
--- a/sys/i386/i386/genassym.c
+++ b/sys/i386/i386/genassym.c
@@ -101,8 +101,6 @@ ASSYM(TDF_NEEDRESCHED, TDF_NEEDRESCHED);
 ASSYM(V_TRAP, offsetof(struct vmmeter, v_trap));
 ASSYM(V_SYSCALL, offsetof(struct vmmeter, v_syscall));
 ASSYM(V_INTR, offsetof(struct vmmeter, v_intr));
-/* ASSYM(UPAGES, UPAGES);*/
-ASSYM(KSTACK_PAGES, KSTACK_PAGES);
 ASSYM(TD0_KSTACK_PAGES, TD0_KSTACK_PAGES);
 ASSYM(PAGE_SIZE, PAGE_SIZE);
 ASSYM(NPTEPG, NPTEPG);
diff --git a/sys/i386/i386/mp_machdep.c b/sys/i386/i386/mp_machdep.c
index 0942523..4812cb0 100644
--- a/sys/i386/i386/mp_machdep.c
+++ b/sys/i386/i386/mp_machdep.c
@@ -348,7 +348,7 @@ start_all_aps(void)
 
 		/* allocate and set up a boot stack data page */
 		bootstacks[cpu] =
-		    (char *)kmem_malloc(kernel_arena, KSTACK_PAGES * PAGE_SIZE,
+		    (char *)kmem_malloc(kernel_arena, kstack_pages * PAGE_SIZE,
 		    M_WAITOK | M_ZERO);
 		dpcpu = (void *)kmem_malloc(kernel_arena, DPCPU_SIZE,
 		    M_WAITOK | M_ZERO);
@@ -360,7 +360,8 @@ start_all_aps(void)
 		outb(CMOS_DATA, BIOS_WARM);	/* 'warm-start' */
 #endif
 
-		bootSTK = (char *)bootstacks[cpu] + KSTACK_PAGES * PAGE_SIZE - 4;
+		bootSTK = (char *)bootstacks[cpu] + kstack_pages *
+		    PAGE_SIZE - 4;
 		bootAP = cpu;
 
 		/* attempt to start the Application Processor */
diff --git a/sys/i386/i386/sys_machdep.c b/sys/i386/i386/sys_machdep.c
index 0928b72..dc367a6 100644
--- a/sys/i386/i386/sys_machdep.c
+++ b/sys/i386/i386/sys_machdep.c
@@ -275,7 +275,7 @@ i386_extend_pcb(struct thread *td)
 	ext = (struct pcb_ext *)kmem_malloc(kernel_arena, ctob(IOPAGES+1),
 	    M_WAITOK | M_ZERO);
 	/* -16 is so we can convert a trapframe into vm86trapframe inplace */
-	ext->ext_tss.tss_esp0 = td->td_kstack + ctob(KSTACK_PAGES) -
+	ext->ext_tss.tss_esp0 = td->td_kstack + ctob(td->td_kstack_pages) -
 	    sizeof(struct pcb) - 16;
 	ext->ext_tss.tss_ss0 = GSEL(GDATA_SEL, SEL_KPL);
 	/*
diff --git a/sys/i386/include/privatespace.h b/sys/i386/include/privatespace.h
deleted file mode 100644
index 5eb54c2..0000000
--- a/sys/i386/include/privatespace.h
+++ /dev/null
@@ -1,49 +0,0 @@
-/*-
- * Copyright (c) Peter Wemm
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
- * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- *
- * $FreeBSD$
- */
-
-#ifndef _MACHINE_PRIVATESPACE_H_
-#define _MACHINE_PRIVATESPACE_H_
-
-/*
- * This is the upper (0xff800000) address space layout that is per-cpu.
- * It is setup in locore.s and pmap.c for the BSP and in mp_machdep.c for
- * each AP. This is only applicable to the x86 SMP kernel.
- */
-struct privatespace {
-	/* page 0 - data page */
-	struct pcpu pcpu;
-	char __filler0[PAGE_SIZE - sizeof(struct pcpu)];
-
-	/* page 1 - idle stack (KSTACK_PAGES pages) */
-	char idlekstack[KSTACK_PAGES * PAGE_SIZE];
-	/* page 1+KSTACK_PAGES... */
-};
-
-extern struct privatespace SMP_prvspace[];
-
-#endif /* ! _MACHINE_PRIVATESPACE_H_ */
diff --git a/sys/kern/kern_fork.c b/sys/kern/kern_fork.c
index 85dbd94..4aa5314 100644
--- a/sys/kern/kern_fork.c
+++ b/sys/kern/kern_fork.c
@@ -832,7 +832,7 @@ fork1(struct thread *td, int flags, int pages, struct proc **procp,
 	mem_charged = 0;
 	vm2 = NULL;
 	if (pages == 0)
-		pages = KSTACK_PAGES;
+		pages = kstack_pages;
 	/* Allocate new proc. */
 	newproc = uma_zalloc(proc_zone, M_WAITOK);
 	td2 = FIRST_THREAD_IN_PROC(newproc);
diff --git a/sys/kern/subr_param.c b/sys/kern/subr_param.c
index 5043a57..36608d1 100644
--- a/sys/kern/subr_param.c
+++ b/sys/kern/subr_param.c
@@ -159,6 +159,9 @@
 void
 init_param1(void)
 {
+#if !defined(__mips__) && !defined(__arm64__) && !defined(__sparc64__)
+	TUNABLE_INT_FETCH("kern.kstack_pages", &kstack_pages);
+#endif
 	hz = -1;
 	TUNABLE_INT_FETCH("kern.hz", &hz);
 	if (hz == -1)
diff --git a/sys/pc98/include/privatespace.h b/sys/pc98/include/privatespace.h
deleted file mode 100644
index 5db57c3..0000000
--- a/sys/pc98/include/privatespace.h
+++ /dev/null
@@ -1,6 +0,0 @@
-/*-
- * This file is in the public domain.
- */
-/* $FreeBSD$ */
-
-#include <i386/privatespace.h>
diff --git a/sys/powerpc/aim/mmu_oea.c b/sys/powerpc/aim/mmu_oea.c
index 4734738..d45b34e 100644
--- a/sys/powerpc/aim/mmu_oea.c
+++ b/sys/powerpc/aim/mmu_oea.c
@@ -932,13 +932,13 @@ moea_bootstrap(mmu_t mmup, vm_offset_t kernelstart, vm_offset_t kernelend)
 	 * Allocate a kernel stack with a guard page for thread0 and map it
 	 * into the kernel page map.
 	 */
-	pa = moea_bootstrap_alloc(KSTACK_PAGES * PAGE_SIZE, PAGE_SIZE);
+	pa = moea_bootstrap_alloc(kstack_pages * PAGE_SIZE, PAGE_SIZE);
 	va = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE;
-	virtual_avail = va + KSTACK_PAGES * PAGE_SIZE;
+	virtual_avail = va + kstack_pages * PAGE_SIZE;
 	CTR2(KTR_PMAP, "moea_bootstrap: kstack0 at %#x (%#x)", pa, va);
 	thread0.td_kstack = va;
-	thread0.td_kstack_pages = KSTACK_PAGES;
-	for (i = 0; i < KSTACK_PAGES; i++) {
+	thread0.td_kstack_pages = kstack_pages;
+	for (i = 0; i < kstack_pages; i++) {
 		moea_kenter(mmup, va, pa);
 		pa += PAGE_SIZE;
 		va += PAGE_SIZE;
diff --git a/sys/powerpc/aim/mmu_oea64.c b/sys/powerpc/aim/mmu_oea64.c
index 44caec6..3766d86 100644
--- a/sys/powerpc/aim/mmu_oea64.c
+++ b/sys/powerpc/aim/mmu_oea64.c
@@ -917,13 +917,13 @@ moea64_late_bootstrap(mmu_t mmup, vm_offset_t kernelstart, vm_offset_t kernelend
 	 * Allocate a kernel stack with a guard page for thread0 and map it
 	 * into the kernel page map.
 	 */
-	pa = moea64_bootstrap_alloc(KSTACK_PAGES * PAGE_SIZE, PAGE_SIZE);
+	pa = moea64_bootstrap_alloc(kstack_pages * PAGE_SIZE, PAGE_SIZE);
 	va = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE;
-	virtual_avail = va + KSTACK_PAGES * PAGE_SIZE;
+	virtual_avail = va + kstack_pages * PAGE_SIZE;
 	CTR2(KTR_PMAP, "moea64_bootstrap: kstack0 at %#x (%#x)", pa, va);
 	thread0.td_kstack = va;
-	thread0.td_kstack_pages = KSTACK_PAGES;
-	for (i = 0; i < KSTACK_PAGES; i++) {
+	thread0.td_kstack_pages = kstack_pages;
+	for (i = 0; i < kstack_pages; i++) {
 		moea64_kenter(mmup, va, pa);
 		pa += PAGE_SIZE;
 		va += PAGE_SIZE;
diff --git a/sys/powerpc/booke/pmap.c b/sys/powerpc/booke/pmap.c
index 275ae8d..223500c 100644
--- a/sys/powerpc/booke/pmap.c
+++ b/sys/powerpc/booke/pmap.c
@@ -1207,7 +1207,7 @@ mmu_booke_bootstrap(mmu_t mmu, vm_offset_t start, vm_offset_t kernelend)
 	/* Steal physical memory for kernel stack from the end */
 	/* of the first avail region */
 	/*******************************************************/
-	kstack0_sz = KSTACK_PAGES * PAGE_SIZE;
+	kstack0_sz = kstack_pages * PAGE_SIZE;
 	kstack0_phys = availmem_regions[0].mr_start +
 	    availmem_regions[0].mr_size;
 	kstack0_phys -= kstack0_sz;
@@ -1312,7 +1312,7 @@ mmu_booke_bootstrap(mmu_t mmu, vm_offset_t start, vm_offset_t kernelend)
 	/* Enter kstack0 into kernel map, provide guard page */
 	kstack0 = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE;
 	thread0.td_kstack = kstack0;
-	thread0.td_kstack_pages = KSTACK_PAGES;
+	thread0.td_kstack_pages = kstack_pages;
 
 	debugf("kstack_sz = 0x%08x\n", kstack0_sz);
 	debugf("kstack0_phys at 0x%08x - 0x%08x\n",
@@ -1320,7 +1320,7 @@ mmu_booke_bootstrap(mmu_t mmu, vm_offset_t start, vm_offset_t kernelend)
 	debugf("kstack0 at 0x%08x - 0x%08x\n", kstack0, kstack0 + kstack0_sz);
 	virtual_avail += KSTACK_GUARD_PAGES * PAGE_SIZE + kstack0_sz;
 
-	for (i = 0; i < KSTACK_PAGES; i++) {
+	for (i = 0; i < kstack_pages; i++) {
 		mmu_booke_kenter(mmu, kstack0, kstack0_phys);
 		kstack0 += PAGE_SIZE;
 		kstack0_phys += PAGE_SIZE;
diff --git a/sys/powerpc/include/param.h b/sys/powerpc/include/param.h
index 5c25e8a..4780a68 100644
--- a/sys/powerpc/include/param.h
+++ b/sys/powerpc/include/param.h
@@ -111,7 +111,7 @@
 #endif
 #endif
 #define KSTACK_GUARD_PAGES 1 /* pages of kstack guard; 0 disables */
-#define USPACE (KSTACK_PAGES * PAGE_SIZE) /* total size of pcb */
+#define USPACE (kstack_pages * PAGE_SIZE) /* total size of pcb */
 
 /*
  * Mach derived conversion macros
diff --git a/sys/vm/vm_glue.c b/sys/vm/vm_glue.c
index 1ff17c2..92ee794 100644
--- a/sys/vm/vm_glue.c
+++ b/sys/vm/vm_glue.c
@@ -327,11 +327,11 @@ vm_thread_new(struct thread *td, int pages)
 
 	/* Bounds check */
 	if (pages <= 1)
-		pages = KSTACK_PAGES;
+		pages = kstack_pages;
 	else if (pages > KSTACK_MAX_PAGES)
 		pages = KSTACK_MAX_PAGES;
 
-	if (pages == KSTACK_PAGES) {
+	if (pages == kstack_pages) {
 		mtx_lock(&kstack_cache_mtx);
 		if (kstack_cache != NULL) {
 			ks_ce = kstack_cache;
@@ -340,7 +340,7 @@ vm_thread_new(struct thread *td, int pages)
 
 			td->td_kstack_obj = ks_ce->ksobj;
 			td->td_kstack = (vm_offset_t)ks_ce;
-			td->td_kstack_pages = KSTACK_PAGES;
+			td->td_kstack_pages = kstack_pages;
 			return (1);
 		}
 		mtx_unlock(&kstack_cache_mtx);
@@ -444,7 +444,7 @@ vm_thread_dispose(struct thread *td)
 	ks = td->td_kstack;
 	td->td_kstack = 0;
 	td->td_kstack_pages = 0;
-	if (pages == KSTACK_PAGES && kstacks <= kstack_cache_size) {
+	if (pages == kstack_pages && kstacks <= kstack_cache_size) {
 		ks_ce = (struct kstack_cache_entry *)ks;
 		ks_ce->ksobj = ksobj;
 		mtx_lock(&kstack_cache_mtx);
@@ -471,7 +471,7 @@ vm_thread_stack_lowmem(void *nulll)
 		ks_ce = ks_ce->next_ks_entry;
 
 		vm_thread_stack_dispose(ks_ce1->ksobj, (vm_offset_t)ks_ce1,
-		    KSTACK_PAGES);
+		    kstack_pages);
 	}
 }
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 6b913fb..50d2e76 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -215,7 +215,7 @@ start_xen_ap(int cpu)
 {
 	struct vcpu_guest_context *ctxt;
 	int ms, cpus = mp_naps;
-	const size_t stacksize = KSTACK_PAGES * PAGE_SIZE;
+	const size_t stacksize = kstack_pages * PAGE_SIZE;
 
 	/* allocate and set up an idle stack data page */
 	bootstacks[cpu] =
@@ -227,7 +227,7 @@ start_xen_ap(int cpu)
 	dpcpu = (void *)kmem_malloc(kernel_arena, DPCPU_SIZE,
 	    M_WAITOK | M_ZERO);
 
-	bootSTK = (char *)bootstacks[cpu] + KSTACK_PAGES * PAGE_SIZE - 8;
+	bootSTK = (char *)bootstacks[cpu] + kstack_pages * PAGE_SIZE - 8;
 	bootAP = cpu;
 
 	ctxt = malloc(sizeof(*ctxt), M_TEMP, M_WAITOK | M_ZERO);