Date: Mon, 6 Sep 2010 03:51:03 +0000
From: mdf@FreeBSD.org
To: Nathan Whitehorn <nwhitehorn@freebsd.org>
Cc: freebsd-hackers@freebsd.org
Subject: Re: UMA allocations from a specific physical range
Message-ID: <AANLkTik59AOwPNgxXfjZnp74NGXvEsUFSN41RPk0WFF9@mail.gmail.com>
In-Reply-To: <4C844609.9050505@freebsd.org>
References: <4C844609.9050505@freebsd.org>
On Mon, Sep 6, 2010 at 1:38 AM, Nathan Whitehorn <nwhitehorn@freebsd.org> wrote:
> PowerPC hypervisors typically provide a restricted range of memory when
> the MMU is disabled, as it is when initially handling exceptions. In
> order to restore virtual memory, the powerpc64 code needs to read a data
> structure called the SLB cache, which is currently allocated out of a
> UMA zone and must be mapped into wired memory, ideally with a 1:1
> physical-to-virtual mapping. Since this must be accessible in real mode,
> it must have a physical address in a certain range. I am trying to
> figure out the best way to do this.
>
> My first run at this code uses a custom UMA allocator that calls
> vm_phys_alloc_contig() to get a memory page. The trouble I have run into
> is that I cannot figure out a way to free the page. Marking the zone
> NOFREE is a bad solution, vm_page_free() panics the kernel due to
> inconsistent tracking of page wiring, and vm_phys_free_pages() causes
> panics in vm_page_alloc() later on ("page is not free"). What is the
> correct way to deallocate these pages? Or is there a different approach
> I should adopt?

I assume this is for the SLB flih? What AIX did was to have a simple 1:1
esid-to-vsid translation for kernel addresses, and to reserve the first 16
SLB entries for various uses, including one for the current process's
process-private segment. If the SLB miss was on a process address, we
would turn on translation and look up the answer; the tables holding the
answer lived in the process-private segment's effective address space, so
we would not take another SLB miss. This required one-level-deep recursion
in the SLB slih, in case there was a miss on kernel data with translation
on inside the SLB slih.

For historical reasons, due to the per-process segment table on POWER3, we
also had a one-page hashed lookup table per process whose real address we
stored in the process-private segment, so the assembly code in the flih
looked there before turning on MSR_DR, IIRC.
I was trying to find ways to kill this code when I left IBM, since we had
ended support for POWER3 a few years earlier.

I haven't had time to look at the FreeBSD ppc64 sources; how large are the
UMA-allocated SLB entries, and what is stored in them? The struct name and
filename are sufficient, though I won't have convenient access to the
sources until Tuesday.

V=R space is rather limited (well, depending on a lot of factors; for AIX
on POWER5 and later the hypervisor only gave us 128M, though for ppc64 on
a Mac G4 I assume all of memory can be mapped V=R if desired), so it is
best to find a non-V=R solution if possible. Turning on translation in the
flih, after some setup and recursion stopping, is one of the easier ways,
and it also avoids needing either separate code or macro access to data
structures used in both virtual and real modes.

Cheers,
matthew