Date:      Wed, 11 Feb 1998 14:36:43 -0500
From:      Brian McGovern <bmcgover@cisco.com>
To:        dg@root.com
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Mapping physical memory into the PCI address range
Message-ID:  <199802111936.OAA02653@bmcgover-pc.cisco.com>

>>This question may be somewhat ill-formed. Hopefully, I'll make myself clear.
>>
>>I'm looking at taking the Cyclades driver and moving the I/O buffers, which
>>normally live in the card's on-board RAM, into the physical RAM of the PC.
>>
>>Cyclades supports this to some degree (although they never tried it).
>>Apparently, the big requirement is the ability to lock down the physical
>>memory for the buffers, and then manipulate this memory in such a way that
>>it can be seen by devices on the PCI bus, so that the card's processor can
>>DMA to it.
>>
>>The questions I have are:
>>
>>1 - Does FreeBSD support the ability to map system memory so it's available
>>to the PCI bus? Also, what is the proper procedure for determining the
>>physical address of this memory and locking it so that it is always
>>available to the card?
>
>   PCI devices have access to all of the PC's memory via DMA. The CPU can
>also access the PCI device's memory if it is mapped. I'm familiar with the
>PLX 9060; it's a bit quirky, but setting up DMA isn't that difficult. See
>the fxp driver for an example of a driver that does PCI DMA.
>

I'll take a peek. The docs I have on the PLX 9060 say much the same - that
it shouldn't be too hard to do.
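
For concreteness, here's a rough sketch of what I have in mind for the
host-buffer side. This is my guess at the right primitives, not tested
code; the cy_setup_dma_buf() name and CY_BUF_SIZE are made up. The idea is
that contigmalloc(9) hands back wired, physically contiguous memory, and
vtophys(9) gives the physical address the card's processor would DMA to:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/malloc.h>
#include <vm/vm.h>
#include <vm/pmap.h>

#define CY_BUF_SIZE	(64 * 1024)	/* made-up buffer size */

static caddr_t		cy_buf;		/* kernel virtual address */
static vm_offset_t	cy_buf_pa;	/* physical address for the card */

static int
cy_setup_dma_buf(void)
{
	/*
	 * contigmalloc() returns wired, physically contiguous memory,
	 * so the card can DMA to it without scatter/gather.  Keep it
	 * below 4GB and page-aligned to be safe.
	 */
	cy_buf = contigmalloc(CY_BUF_SIZE, M_DEVBUF, M_NOWAIT,
	    0, 0xffffffff, PAGE_SIZE, 0);
	if (cy_buf == NULL)
		return (ENOMEM);

	/*
	 * On the PC, PCI bus addresses and physical addresses are the
	 * same, so this is the DMA target to program into the card.
	 */
	cy_buf_pa = vtophys(cy_buf);
	return (0);
}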

>>2 - Would it really be worthwhile pursuing this endeavor? After all, a 1-2%
>>gain on moving a single character really isn't a big win. However, 25%+
>>very well might be.
>
>   Maybe, maybe not. DMA will likely be slower when dealing with a small
>number of characters, since there is a significant amount of work to do
>per DMA. I would guess that the fastest access would be to map the card's
>RAM via PCI space and access it directly with the CPU.

Let's make sure I'm following you here... :) There are too many components
labeled 'CPU' to throw the term around lightly. What I _think_ you're saying
(I'm starting to sound like a psychiatrist) is that it makes the most sense
to map the card's memory into the PCI address space, let the on-board CPU
access its (local) RAM, and then copy large chunks across the PCI bus, which
is how the card is manipulated in the current driver.
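
Again as a sketch, with made-up names: map the card's on-board RAM, whose
physical address comes out of the PCI base address register, into kernel
virtual space, and let the host CPU bcopy() chunks across the bus.
pmap_mapdev() is my guess at the right primitive for the mapping:

#include <sys/param.h>
#include <sys/systm.h>
#include <vm/vm.h>
#include <vm/pmap.h>

#define CY_WIN_SIZE	(128 * 1024)	/* assumed size of the card's RAM window */

static caddr_t	cy_win;			/* kernel VA of the card's RAM */

static void
cy_map_board(vm_offset_t bar_pa)	/* physical address from PCI config space */
{
	/* pmap_mapdev() builds an uncached kernel mapping of device memory. */
	cy_win = pmap_mapdev(bar_pa, CY_WIN_SIZE);
}

/* Host CPU pulls a block out of the card's RAM across the PCI bus. */
static void
cy_read_block(void *dst, u_int off, size_t len)
{
	bcopy(cy_win + off, dst, len);
}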

The "win" I'm going for is to keep the host CPU load to a bare minimum. To
be honest, barring overloading the PCI bus, I could care less about the card's
CPU having to work hard, so long as it has enough time to move the data. So,
to summarize:


Host CPU                 Board CPU                Is it a good thing? (tm)

BUSY DMA'ing data        Driving the UARTs via    It's OK. That's how we do
                         local RAM                it today.

Busy doing other         Busy driving UARTs and   So long as we don't lose
things, occasionally     DMAing into host         throughput, this is
moving data in/out of    memory                   optimal to me.
clists/buffers.

Busy moving data         Busy moving data         Not what I want.
around                   around



	-Brian



