From: wpaul@FreeBSD.ORG (Bill Paul)
To: mjacob@feral.com
Cc: hackers@freebsd.org, current@freebsd.org
Date: Mon, 20 Aug 2001 17:06:30 -0700 (PDT)
Subject: Re: Where to put new bus_dmamap_load_mbuf() code
In-Reply-To: from Matthew Jacob at "Aug 20, 2001 04:37:11 pm"

> Another thing- maybe I'm confused- but I still don't see why you want to
> require the creation of a map each time you want to load an mbuf chain.
> Wouldn't it be better and more efficient to let the driver decide when
> and where the map is created, and just use the common code for
> loads/unloads?

Ever hear the phrase "you get what you pay for"? The API isn't all that
clear, and we don't have a man page or document that describes in detail
how to use it properly. Rather than whining about that, I decided to
tinker with it and Use The Source, Luke (tm). This is the result.

My understanding is that you need a dmamap for every buffer that you want
to map into bus space. Each mbuf has a single data buffer associated with
it (either the data area in the mbuf itself, or external storage). We're
not allowed to make assumptions about where these buffers are. Also, a
single ethernet frame can be fragmented across multiple mbufs in a list.
So unless I'm mistaken, for each mbuf in an mbuf list, what we have to do
is this:

- create a bus_dmamap_t for the data area in the mbuf using
  bus_dmamap_create()
- do the physical-to-bus mapping with bus_dmamap_load()
- call bus_dmamap_sync() as needed before the transfer (this might handle
  copying if bounce buffers are required)
- do the post-DMA sync as needed (again, this might require bounce
  copying)
- call bus_dmamap_unload() to undo the bus mapping (which might free
  bounce buffers if some were allocated by bus_dmamap_load())
- destroy the bus_dmamap_t

One memory region, one DMA map. It seems to me that you can't use a
single dmamap for multiple memory buffers, unless you make certain
assumptions about where in physical memory those buffers reside, and I
thought the idea of busdma was to provide a consistent, opaque API so
that you would not have to make any assumptions.

Now if I've gotten any of this wrong, please tell me how I should be
doing it. Remember to show all work. I don't give partial credit, nor do
I grade on a curve.
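Just so it's clear what I mean, here's a rough sketch of those steps in
code. The bus_dmamap_*() calls are the actual API; everything else (the
names foo_load_mbuf() and foo_dma_cb(), and doing the create/destroy
around every single packet) is made up for illustration, and is exactly
the kind of boilerplate I'd like the common code to absorb:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <machine/bus.h>

/* Callback: squirrel away the bus address of the first segment. */
static void
foo_dma_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
        if (error == 0)
                *(bus_addr_t *)arg = segs[0].ds_addr;
}

static int
foo_load_mbuf(bus_dma_tag_t tag, struct mbuf *m_head)
{
        struct mbuf *m;
        bus_dmamap_t map;
        bus_addr_t addr;
        int error;

        for (m = m_head; m != NULL; m = m->m_next) {
                if (m->m_len == 0)
                        continue;
                /* One buffer, one map. */
                error = bus_dmamap_create(tag, 0, &map);
                if (error)
                        return (error);
                /* Do the physical-to-bus mapping of this mbuf's data. */
                error = bus_dmamap_load(tag, map, mtod(m, void *),
                    m->m_len, foo_dma_cb, &addr, BUS_DMA_NOWAIT);
                if (error) {
                        bus_dmamap_destroy(tag, map);
                        return (error);
                }
                /* Pre-DMA sync (might bounce-copy). */
                bus_dmamap_sync(tag, map, BUS_DMASYNC_PREWRITE);
                /* ... hand 'addr' to the chip and run the DMA ... */
                /* Post-DMA sync (might bounce-copy), then tear down. */
                bus_dmamap_sync(tag, map, BUS_DMASYNC_POSTWRITE);
                bus_dmamap_unload(tag, map);
                bus_dmamap_destroy(tag, map);
        }
        return (0);
}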
> > Yay!
> >
> > The current suggestion is fine except that each platform might have a
> > more efficient, or even required, actual h/w mechanism for mapping
> > mbufs.

It might, but right now, it doesn't. All I have to work with is the
existing API. I'm not here to stick my fingers in it and change it all
around. I just want to add a bit of code on top of it so that I don't
have to go through quite so many contortions when I use the API in
network adapter drivers.

> > I'd also be a little concerned with the way you're overloading stuff
> > into mbuf itself- but I'm a little shakier on this.

I thought about this. Like it says in the comments, at the device driver
level, you're almost never going to be using some of the pointers in the
mbuf header. On the RX side, *we* (i.e. the driver) are allocating the
mbufs, so we can do whatever the heck we want with them until such time
as we hand them off to ether_input(), and by then we will have put things
back the way they were. On the TX side, by the time we get the mbufs off
the send queue, we always know we're going to have just an mbuf list (and
not an mbuf chain), and we're going to toss the mbufs once we're done
with them, so we can trample on certain things that we know don't matter
to the OS or network stack anymore.

The alternatives are:

- Allocate some extra space in the DMA descriptor structures for the
  necessary bus_dmamap_t pointers. This is tricky with this particular
  NIC, and a little awkward.

- Allocate my own private arrays of bus_dmamap_t that mirror the DMA
  rings. This is yet more memory I need to allocate and free at device
  attach and detach time.

I've got space in the mbuf header. It's not being used. It's right where
I need it. Why not take advantage of it?

> > Finally- why not make this an inline?

Er... because that idea offended my delicate sensibilities? :)

-Bill

--
=============================================================================
-Bill Paul            (510) 749-2329 | Senior Engineer, Master of Unix-Fu
                 wpaul@windriver.com | Wind River Systems
=============================================================================
 "I like zees guys. Zey are fonny guys. Just keel one of zem." -- The 3 Amigos
=============================================================================