Date: Sun, 14 Mar 2010 00:32:18 +0000 (UTC)
From: Nathan Whitehorn <nwhitehorn@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-projects@freebsd.org
Subject: svn commit: r205139 - projects/ppc64/sys/powerpc/aim
Message-ID: <201003140032.o2E0WIN0080337@svn.freebsd.org>
Author: nwhitehorn
Date: Sun Mar 14 00:32:18 2010
New Revision: 205139
URL: http://svn.freebsd.org/changeset/base/205139

Log:
  Don't spill existing SLB entries. We can't handle this yet, and it
  results in randomly demapping bits of the kernel. Which is bad.

  Reported by:	Andreas Tobler

Modified:
  projects/ppc64/sys/powerpc/aim/slb.c

Modified: projects/ppc64/sys/powerpc/aim/slb.c
==============================================================================
--- projects/ppc64/sys/powerpc/aim/slb.c	Sat Mar 13 22:53:17 2010	(r205138)
+++ projects/ppc64/sys/powerpc/aim/slb.c	Sun Mar 14 00:32:18 2010	(r205139)
@@ -91,12 +91,16 @@ allocate_vsid(pmap_t pm, uint64_t esid)
 	return (vsid);
 }
 
+#ifdef NOTYET /* We don't have a back-up list. Spills are a bad idea. */
 /* Lock entries mapping kernel text and stacks */
 #define SLB_SPILLABLE(slbe) \
 	(((slbe & SLBE_ESID_MASK) < VM_MIN_KERNEL_ADDRESS && \
 	    (slbe & SLBE_ESID_MASK) > SEGMENT_LENGTH) || \
 	    (slbe & SLBE_ESID_MASK) > VM_MAX_KERNEL_ADDRESS)
+#else
+#define SLB_SPILLABLE(slbe) 0
+#endif
 
 void
 slb_spill(pmap_t pm, uint64_t esid, uint64_t vsid)
 {
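For readers not steeped in the AIM pmap code: SLB_SPILLABLE() decides
whether a given Segment Lookaside Buffer entry may be evicted to make
room for a new segment mapping, and this commit forces it to 0 so that
nothing is ever evicted; per the NOTYET comment, the full version can
come back once a back-up list of kernel SLB entries exists. Below is a
minimal sketch, not the committed FreeBSD code, of the kind of
victim-selection loop such a macro gates. The slot count, the
slb_entry layout, the shadow array, and slb_pick_victim() itself are
all assumptions made for illustration.

#include <stdint.h>

#define SLB_SPILLABLE(slbe)	0	/* as of r205139: nothing is evictable */
#define N_SLB_SLOTS		64	/* assumed: POWER4-class SLBs have 64 slots */

struct slb_entry {
	uint64_t slbv;			/* VSID and protection bits */
	uint64_t slbe;			/* ESID, valid bit, slot index */
};

/* Hypothetical in-memory shadow of the hardware SLB. */
static struct slb_entry slb_shadow[N_SLB_SLOTS];

/*
 * Pick an SLB slot to overwrite, round-robin, skipping entries the
 * macro marks as unevictable.  Returns -1 when no victim exists.
 */
static int
slb_pick_victim(void)
{
	static int next = 0;
	int i, slot;

	for (i = 0; i < N_SLB_SLOTS; i++) {
		slot = (next + i) % N_SLB_SLOTS;
		if (SLB_SPILLABLE(slb_shadow[slot].slbe)) {
			next = slot + 1;
			return (slot);
		}
	}

	/*
	 * With SLB_SPILLABLE() hardwired to 0 we always land here, so
	 * the caller must fail the spill instead of demapping a live
	 * kernel segment, which is exactly the behavior this commit
	 * is after.
	 */
	return (-1);
}

With the #ifdef NOTYET version re-enabled, the same loop would evict
only user segments while kernel text and stacks stay pinned.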