Date: Fri, 31 Jan 2020 14:31:44 +0200
From: Konstantin Belousov <kostikbel@gmail.com>
To: Hans Petter Selasky <hps@selasky.org>
Cc: Rick Macklem <rmacklem@uoguelph.ca>, "freebsd-current@FreeBSD.org" <freebsd-current@freebsd.org>
Subject: Re: easy way to work around a lack of a direct map on i386
Message-ID: <20200131123144.GW4808@kib.kiev.ua>
In-Reply-To: <f8551c4c-0447-4103-76f1-710c4885a2ec@selasky.org>
References: <YTBPR01MB3374AA25792499A796DB7CAADD040@YTBPR01MB3374.CANPRD01.PROD.OUTLOOK.COM> <20200130233734.GV4808@kib.kiev.ua> <f8551c4c-0447-4103-76f1-710c4885a2ec@selasky.org>
On Fri, Jan 31, 2020 at 10:13:58AM +0100, Hans Petter Selasky wrote:
> On 2020-01-31 00:37, Konstantin Belousov wrote:
> > On Thu, Jan 30, 2020 at 11:23:02PM +0000, Rick Macklem wrote:
> > > Hi,
> > >
> > > The current code for KERN_TLS uses PHYS_TO_DMAP()
> > > to access unmapped external pages on m_ext.ext_pgs
> > > mbufs.
> > > I also need to do this to implement RPC-over-TLS.
> > >
> > > The problem is that some arches, like i386, don't
> > > support PHYS_TO_DMAP().
> > >
> > > Since it appears that there will be at most 4 pages on
> > > one of these mbufs, my thinking was...
> > > - Acquire four pages of kva from the kernel_map during
> > >   booting.
> > > - Then just use pmap_qenter() to fill in the physical page
> > >   mappings for long enough to copy the data.
> > >
> > > Does this sound reasonable?
> > > Is there a better way?
> >
> > Use sfbufs; they should work on all arches.  In essence, they provide
> > an MI interface to the DMAP where possible.  I do not remember whether
> > I bumped the limit for i386 after 4/4 went in.
> >
> > There is currently no limit on sfbuf use per subsystem, but I think it
> > is not very likely to cause too much trouble.  The main rule is to not
> > sleep waiting for more sfbufs if you already own one.
>
> In the DRM-KMS LinuxKPI we have:
>
> void *
> kmap(vm_page_t page)
> {
> #ifdef LINUXKPI_HAVE_DMAP
> 	vm_offset_t daddr;
>
> 	daddr = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(page));
>
> 	return ((void *)daddr);
> #else
> 	struct sf_buf *sf;
>
> 	sched_pin();
> 	sf = sf_buf_alloc(page, SFB_NOWAIT | SFB_CPUPRIVATE);
> 	if (sf == NULL) {
> 		sched_unpin();
> 		return (NULL);
> 	}
> 	return ((void *)sf_buf_kva(sf));
> #endif
> }
>
> void
> kunmap(vm_page_t page)
> {
> #ifdef LINUXKPI_HAVE_DMAP
> 	/* NOP */
> #else
> 	struct sf_buf *sf;
>
> 	/* lookup SF buffer in list */
> 	sf = sf_buf_alloc(page, SFB_NOWAIT | SFB_CPUPRIVATE);
>
> 	/* double-free */
> 	sf_buf_free(sf);
> 	sf_buf_free(sf);
>
> 	sched_unpin();
> #endif
> }
>
> I think that is the fastest way to do this.
So the kmap address is only valid on the CPU that called the function?  This
is strange; I was not able to find any mention of this in the kmap references.
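[Editor's note: a minimal sketch of the sfbuf pattern suggested above, applied
to copying data out of one unmapped page.  This is not from the thread; the
helper name copy_from_unmapped_page() and its signature are hypothetical, and
the fragment is kernel-only and untested.  Because SFB_CPUPRIVATE mappings are
only valid on the current CPU, the thread stays pinned while the mapping is
in use.]

```c
#include <sys/param.h>
#include <sys/systm.h>	/* memcpy() in the kernel */
#include <sys/sched.h>	/* sched_pin(), sched_unpin() */
#include <sys/sf_buf.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

/*
 * Hypothetical helper: copy 'len' bytes starting at offset 'off' of an
 * unmapped page 'pg' into 'dst', using a transient per-CPU sf_buf
 * mapping instead of PHYS_TO_DMAP().
 */
static int
copy_from_unmapped_page(vm_page_t pg, vm_offset_t off, void *dst, size_t len)
{
	struct sf_buf *sf;

	sched_pin();
	sf = sf_buf_alloc(pg, SFB_NOWAIT | SFB_CPUPRIVATE);
	if (sf == NULL) {
		/* No sfbuf available; caller may retry or fall back. */
		sched_unpin();
		return (ENOMEM);
	}
	memcpy(dst, (char *)sf_buf_kva(sf) + off, len);
	sf_buf_free(sf);
	sched_unpin();
	return (0);
}
```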