Date: Sat, 27 Sep 2014 10:51:43 +0200
From: Svatopluk Kraus <onwahe@gmail.com>
To: alc@freebsd.org
Cc: FreeBSD Arch <freebsd-arch@freebsd.org>
Subject: Re: vm_page_array and VM_PHYSSEG_SPARSE
Message-ID: <CAFHCsPWq9WqeFnx1a%2BStfSxj=jwcE9GPyVsoyh0%2Bazr3HmM6vQ@mail.gmail.com>
In-Reply-To: <CAJUyCcPXBuLu0nvaCqpg8NJ6KzAX9BA1Rt%2BooD%2B3pzq%2BFV%2B%2BTQ@mail.gmail.com>
References: <CAFHCsPWkq09_RRDz7fy3UgsRFv8ZbNKdAH2Ft0x6aVSwLPi6BQ@mail.gmail.com> <CAJUyCcPXBuLu0nvaCqpg8NJ6KzAX9BA1Rt%2BooD%2B3pzq%2BFV%2B%2BTQ@mail.gmail.com>
On Fri, Sep 26, 2014 at 8:08 PM, Alan Cox <alan.l.cox@gmail.com> wrote:
>
> On Wed, Sep 24, 2014 at 7:27 AM, Svatopluk Kraus <onwahe@gmail.com> wrote:
>>
>> Hi,
>>
>> Michal and I are finishing the new ARM pmap-v6 code. There is one
>> problem we've dealt with somehow, but now we would like to do it
>> better. It concerns physical pages which are allocated before the vm
>> subsystem is initialized. While these pages can later be found in
>> vm_page_array when the VM_PHYSSEG_DENSE memory model is used, that is
>> not true for the VM_PHYSSEG_SPARSE memory model. And the ARM world
>> uses the VM_PHYSSEG_SPARSE model.
>>
>> It really would be nice to utilize vm_page_array for such preallocated
>> physical pages even when the VM_PHYSSEG_SPARSE memory model is used.
>> Things could be much easier then. In our case, it is about pages which
>> are used for level 2 page tables. In the VM_PHYSSEG_SPARSE model, we
>> have two sets of such pages: the first set is preallocated, and the
>> second is allocated after the vm subsystem has been initialized. We
>> must deal with each set differently, so the code is more complex, and
>> so is the debugging.
>>
>> Thus we need some method to say that a part of physical memory should
>> be included in vm_page_array, but that the pages from that region
>> should not be put on the free lists during initialization. We think
>> that such a possibility could be utilized in general. There could be a
>> need for some physical space which:
>>
>> (1) is needed only during boot, and can later be freed and given to
>> the vm subsystem, or
>>
>> (2) is needed for something else, where the vm_page_array code could
>> then be used without some kind of duplication.
>>
>> There is already some code in vm_page.c which deals with blacklisted
>> pages. So the easiest way to deal with the situation presented here is
>> to add a callback to that part of the code which would be able to
>> exclude either a whole (phys_avail[i], phys_avail[i+1]) region or
>> single pages.
>> As the biggest phys_avail region is used for vm subsystem
>> allocations, some more coding would be needed there. (However,
>> blacklisted pages are not dealt with on that part of the region.)
>>
>> We would like to know if there is any objection:
>>
>> (1) to dealing with the presented problem, and
>> (2) to dealing with the problem in the presented way.
>>
>> Any help is very much appreciated. Thanks.
>>
>
> As an experiment, try modifying vm_phys.c to use dump_avail instead of
> phys_avail when sizing vm_page_array. On amd64, where the same problem
> exists, this allowed me to use VM_PHYSSEG_SPARSE. Right now, this is
> probably my preferred solution. The catch is that not all
> architectures implement dump_avail, but my recollection is that arm
> does.
>

Frankly, I would prefer this too, but there is one big open question:
what is dump_avail for? Using it for vm_page_array initialization and
segmentation means that phys_avail must be a subset of it. This must be
stated and made visible enough; maybe it should even be checked in
code. I like the idea of thinking about dump_avail as something that
describes all memory in a system, but that is not how dump_avail is
defined in the archs now.

I will experiment with it on Monday, then. However, it is not only
about how memory segments are created in vm_phys.c; it is also about
how the vm_page_array size is computed in vm_page.c.

Svata