Date:      Wed, 22 Aug 2001 15:15:44 -0600
From:      "Justin T. Gibbs" <gibbs@scsiguy.com>
To:        wpaul@FreeBSD.ORG (Bill Paul)
Cc:        mjacob@feral.com, hackers@FreeBSD.ORG, current@FreeBSD.ORG
Subject:   Re: Where to put new bus_dmamap_load_mbuf() code 
Message-ID:  <200108222115.f7MLFiY16846@aslan.scsiguy.com>
In-Reply-To: Your message of "Wed, 22 Aug 2001 13:55:14 PDT." <20010822205514.52C0A37B42B@hub.freebsd.org> 

>> >My understanding is that you need a dmamap for every buffer that you want
>> >to map into bus space.
>> 
>> You need one dmamap for each independently manageable mapping.  A
>> single mapping may result in a long list of segments, regardless
>> of whether you have a single KVA buffer or multiple KVA buffers
>> that might contribute to the mapping.
>
>Yes yes, I understand that. But that's only if you want to map
>a buffer that's larger than PAGE_SIZE bytes, like, say, a 64K
>buffer being sent to a disk controller. What I want to make sure
>everyone understands here is that I'm not typically dealing with
>buffers this large: instead I have lots of small buffers that are
>smaller than PAGE_SIZE bytes. A single mbuf alone is only 256
>bytes, of which only a fraction is used for data. An mbuf cluster
>buffer is usually only 2048 bytes. Transmitted packets are typically
>fragmented across 2 or 3 mbufs: the first mbuf contains the header,
>and the other two contain data. (Or the first one contains part
>of the header, the second one contains additional header data,
>and the third contains data -- whatever.) At most I will have 1500
>bytes of data to send, which is less than PAGE_SIZE, and that 1500
>bytes will be fragmented across a bunch of smaller buffers that
>are also smaller than PAGE_SIZE. Therefore I will not have one
>dmamap with multiple segments: I will have a bunch of dmamaps
>with one segment each.

The fact that the data is less than a page in size matters little
to the bus dma concept.  In other words, how is this packet presented
to the hardware?  Does it care that all of the component pieces are
< PAGE_SIZE in length?  Probably not.  It just wants the list of
address/length pairs that compose that packet and there is no reason
that each chunk needs to have its own, and potentially expensive, dmamap.
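
To make that concrete, here is a minimal sketch of the consumer side,
assuming a bus_dmamap_load_mbuf() with the same callback convention as
bus_dmamap_load() (hypothetical, of course - where to put that routine
is what this thread is about - and foo_txring/foo_enqueue_desc are
made-up driver names):

#include <sys/types.h>
#include <machine/bus.h>

struct foo_txring;                      /* hypothetical driver state */
void foo_enqueue_desc(struct foo_txring *, bus_addr_t, bus_size_t, int);

static void
foo_txmap_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
        struct foo_txring *ring = arg;
        int i;

        if (error != 0)
                return;
        /* One address/length pair per discontiguous chunk of the packet. */
        for (i = 0; i < nseg; i++)
                foo_enqueue_desc(ring, segs[i].ds_addr, segs[i].ds_len,
                    i == nseg - 1);     /* mark the final descriptor */
}

One map, one callback, one segment list for the whole packet.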

>> Creating a dmamap, depending on the architecture, could be expensive.
>> You really want to create them in advance (or pool them), with at most
>> one dmamap per concurrent transaction you support in your driver.
>
>The only problem here is that I can't really predict how many transactions
>will be going at one time. I will have at least RX_DMA_RING maps (one for
>each mbuf in the RX DMA ring), and some fraction of TX_DMA_RING maps.
>I could have the TX DMA ring completely filled with packets waiting
>to be DMA'ed and transmitted, or I may have only one entry in the ring
>currently in use. So I guess I have to allocate RX_DMA_RING + TX_DMA_RING
>dmamaps in order to be safe.

Yes, or allocate them in chunks so that the total number is only as
large as the greatest demand your driver has ever seen.
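
A minimal sketch of that chunked approach, assuming a per-driver free
list (foo_softc, foo_mapent, and FOO_MAP_CHUNK are made-up names, and
cleanup on partial failure is omitted):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/queue.h>
#include <sys/malloc.h>
#include <machine/bus.h>

struct foo_mapent {
        bus_dmamap_t            map;
        SLIST_ENTRY(foo_mapent) link;
};

struct foo_softc {
        bus_dma_tag_t                   tag;
        SLIST_HEAD(, foo_mapent)        freemaps;
};

#define FOO_MAP_CHUNK   16

static int
foo_grow_maps(struct foo_softc *sc)
{
        struct foo_mapent *ent;
        int i, error;

        for (i = 0; i < FOO_MAP_CHUNK; i++) {
                ent = malloc(sizeof(*ent), M_DEVBUF, M_NOWAIT);
                if (ent == NULL)
                        return (ENOMEM);
                error = bus_dmamap_create(sc->tag, 0, &ent->map);
                if (error != 0) {
                        free(ent, M_DEVBUF);
                        return (error);
                }
                SLIST_INSERT_HEAD(&sc->freemaps, ent, link);
        }
        return (0);
}

Call it once at attach time and again whenever the free list runs dry,
so the pool only ever grows to the high-water mark.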

>> With the added complications of deferring the mapping if we're
>> out of space, issuing the callback, etc.
>
>Why can't I just call bus_dmamap_load() multiple times, once for
>each mbuf in the mbuf list?

Because of the cost of the dmamaps.  That cost is platform and bus-dma
implementation dependent - e.g. a map could correspond 1-1 to a
hardware resource.  Consider the case of having a full TX and RX
ring in your driver.  Instead of #TX + #RX dmamaps, you will now have
three or more times that number (one per mbuf instead of one per
packet).

There is also the issue of coalescing the discontiguous chunks if
there are too many chunks for your driver to handle.  Bus dma is
supposed to handle that for you (the x86 implementation doesn't
yet, but it should), but it can't if it doesn't know the segment
limit per transaction.  You've hidden that limit from bus dma by
using a map per segment.
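
That limit is something the driver states once, when it creates its
tag.  A sketch using the FreeBSD 4.x-era bus_dma_tag_create()
signature, reusing the made-up foo_softc from above (FOO_MAXSEGS and
the 32-bit address limit are just examples):

#define FOO_MAXSEGS     8       /* e.g. hardware takes 8 descriptors/packet */

static int
foo_create_tag(struct foo_softc *sc, bus_dma_tag_t parent)
{
        return (bus_dma_tag_create(parent,
            1,                          /* alignment */
            0,                          /* no boundary restriction */
            BUS_SPACE_MAXADDR_32BIT,    /* lowaddr: chip does 32-bit DMA */
            BUS_SPACE_MAXADDR,          /* highaddr */
            NULL, NULL,                 /* no address filter */
            MCLBYTES,                   /* maxsize of one mapping */
            FOO_MAXSEGS,                /* nsegments: per-transaction limit */
            MCLBYTES,                   /* maxsegsz */
            0,                          /* flags */
            &sc->tag));
}

With nsegments recorded in the tag, the implementation can tell when a
mapping must be coalesced or bounced rather than handing the driver
more segments than its ring can take.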

>(Note: for the record, an mbuf list usually contains one packet
>fragmented across multiple mbufs. An mbuf chain contains several
>mbuf lists, linked together via the m_nextpkt pointer in the
>header of the first mbuf in each list. By the time we get to
>the device driver, we always have mbuf lists only.)

Okay, so I haven't written a network driver yet, but you got the idea,
right? 8-)
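
(For anyone else following along, that distinction in standard mbuf(9)
terms - a sketch, nothing driver-specific:)

#include <sys/mbuf.h>

static void
foo_walk_packets(struct mbuf *chain)
{
        struct mbuf *pkt, *m;
        int plen;

        /* m_nextpkt links whole packets; m_next links the pieces of one. */
        for (pkt = chain; pkt != NULL; pkt = pkt->m_nextpkt) {
                plen = 0;
                for (m = pkt; m != NULL; m = m->m_next)
                        plen += m->m_len;   /* one address/length piece */
                /* plen now equals pkt->m_pkthdr.len for a pkthdr mbuf */
        }
}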

>> Chances are you are going to use the map again soon, so destroying
>> it on every transaction is a waste.
>
>Ok, I spent some more time on this. I updated the code at:
>
>http://www.freebsd.org/~wpaul/busdma

I'll take a look.

>The changes are:

...

>- Added routines to allocate a chunk of maps in a singly linked list,
>  from which the other routines can grab them as needed.

Are these hung off the dma tag or something?  dmamaps may hold settings
that are peculiar to the device that allocated them, so they cannot
be shared with other clients of bus_dmamap_load_mbuf.
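
One way to sidestep the sharing question entirely (a sketch reusing
the made-up foo_softc free list from above): keep the list per driver
instance, so every map on it was created from that instance's own tag.

static struct foo_mapent *
foo_get_map(struct foo_softc *sc)
{
        struct foo_mapent *ent;

        /* Every map here came from sc->tag, so its settings match. */
        if (SLIST_EMPTY(&sc->freemaps) && foo_grow_maps(sc) != 0)
                return (NULL);
        ent = SLIST_FIRST(&sc->freemaps);
        SLIST_REMOVE_HEAD(&sc->freemaps, link);
        return (ent);
}

static void
foo_put_map(struct foo_softc *sc, struct foo_mapent *ent)
{
        SLIST_INSERT_HEAD(&sc->freemaps, ent, link);
}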

--
Justin
