Date:      Mon, 24 Dec 2012 22:13:10 -0500
From:      Scott Long <scott4long@yahoo.com>
To:        Ian Lepore <freebsd@damnhippie.dyndns.org>
Cc:        powerpc@freebsd.org, marcel@freebsd.org, mips@freebsd.org, "Alexander Motin" <mav@freebsd.org>, "Attilio Rao" <attilio@freebsd.org>, Jeff Roberson <jroberson@jroberson.net>, sparc64@freebsd.org, arm@freebsd.org, kib@freebsd.org
Subject:   Re: Call for testing and review, busdma changes
Message-ID:  <2D98F70D-4031-4860-BABB-1F4663896234@yahoo.com>
In-Reply-To: <1356390225.1129.217.camel@revolution.hippie.lan>
References:  <alpine.BSF.2.00.1212080841370.4081@desktop> <1355077061.87661.320.camel@revolution.hippie.lan> <alpine.BSF.2.00.1212090840080.4081@desktop> <1355085250.87661.345.camel@revolution.hippie.lan> <alpine.BSF.2.00.1212231418120.2005@desktop> <1356381775.1129.181.camel@revolution.hippie.lan> <alpine.BSF.2.00.1212241104040.2005@desktop> <1356390225.1129.217.camel@revolution.hippie.lan>


On Dec 24, 2012, at 6:03 PM, Ian Lepore <freebsd@damnhippie.dyndns.org>
wrote:

> 
> Yeah, I've done some low-level storage driver stuff myself (mmc/sd) and
> I can see how easy the deferred load solutions are to implement in that
> sort of driver that's already structured to operate asynchronously.  I'm
> not very familiar with how network hardware drivers interface with the
> rest of the network stack.  I have some idea, I'm just not sure of all
> the subtleties involved and whether there are any implications for
> something like a deferred load.
> 
> This is one of those situations where I tend to say to myself... the
> folks who designed this stuff and imposed the "no deferred load"
> restriction on mbufs and uio but not other cases were not stupid or
> lazy, so they must have had some other reason.  I'd want to know what
> that was before I went too far with trying to undo it.
> 

Deferring is expensive from a latency standpoint.  For disks, this
latency was comparatively small (until recent advances in SSD), so it
didn't matter, but it did matter with network devices.  Also, network
drivers already had the concept of dropping mbufs due to resource
shortages, and the strict requirement of guaranteed transactions with
storage didn't apply.  Deferring and freezing queues to guarantee
delivery order is a pain in the ass, so the decision was made that it
was cheaper to drop an mbuf on a resource shortage rather than defer.
As for uio's, they're the neglected part of the API and there's really
been no formal direction or master plan put into their evolution.
Anyways, that's my story and I'm sticking to it =-)
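
To put it concretely, the contrast looks roughly like this (a minimal
sketch -- the softc, command, and tag names are made up, but the
bus_dma(9) calls are the stock API):

    /* Storage-style load: completion may be deferred to a callback. */
    static void
    xxx_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
    {
            struct xxx_cmd *cmd = arg;

            if (error != 0) {
                    /* Fail the command. */
                    return;
            }
            /* Program the controller s/g list from segs[], start I/O. */
    }

            error = bus_dmamap_load(sc->buf_tag, cmd->map, cmd->data,
                cmd->datalen, xxx_load_cb, cmd, BUS_DMA_WAITOK);
            if (error == EINPROGRESS) {
                    /*
                     * Deferred: xxx_load_cb runs later, once resources
                     * free up.  Until then the queue has to stay frozen
                     * so younger requests can't pass this one -- the
                     * ordering pain described above.  A network driver
                     * would instead load with BUS_DMA_NOWAIT and just
                     * m_freem() the mbuf on failure.
                     */
            } else if (error != 0) {
                    /* Hard failure; unwind and report. */
            }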

Also, eliminating the concept of deferred load from mbufs then freed us
to look at ways to make the load operation cheaper.  There's a lot of
expensive code in _bus_dmamap_load_buffer(), but a big cost was the
indirect function pointer call for the callback in the load wrappers.
The extra storage for filling in the temporary s/g list was another.
Going with direct loads allowed me to remove these and reduce most of
the speed penalties.
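
Roughly, the direct path is (a sketch; XXX_NSEGS and the softc fields
are placeholders, the calls are the stock bus_dma(9) ones):

    bus_dma_segment_t segs[XXX_NSEGS];
    int error, nsegs;

    /*
     * Direct load: the segment list lands straight in the caller's
     * array and the call returns synchronously -- no callback pointer
     * to chase, no temporary s/g list to allocate.
     */
    error = bus_dmamap_load_mbuf_sg(sc->tx_tag, txb->map, m,
        segs, &nsegs, BUS_DMA_NOWAIT);
    if (error == EFBIG) {
            /* Too fragmented for the tag; compact and retry once. */
            struct mbuf *n = m_defrag(m, M_DONTWAIT);
            if (n == NULL) {
                    m_freem(m);
                    return (ENOBUFS);
            }
            m = n;
            error = bus_dmamap_load_mbuf_sg(sc->tx_tag, txb->map, m,
                segs, &nsegs, BUS_DMA_NOWAIT);
    }
    if (error != 0) {
            /* Resource shortage: drop, don't defer. */
            m_freem(m);
            return (error);
    }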

> 
>>> 
>>> Still unresolved is what to do about the remaining cases -- attempts to
>>> do dma in arbitrary buffers not obtained from bus_dmamem_alloc() which
>>> are not aligned and padded appropriately.  There was some discussion a
>>> while back, but no clear resolution.  I decided not to get bogged down
>>> by that fact and to fix the mbuf and allocated-buffer situations that we
>>> know how to deal with for now.
>> 

Why would these allocations not be handled with bus_dmamap_load(), the
same way normal dynamic buffers are?
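
That is, a sketch of the path I have in mind (the tag parameters here
are illustrative, not prescriptive):

    /* A tag describing what the hardware can actually address. */
    error = bus_dma_tag_create(bus_get_dma_tag(dev),
        1, 0,                       /* alignment, boundary */
        BUS_SPACE_MAXADDR_32BIT,    /* lowaddr */
        BUS_SPACE_MAXADDR,          /* highaddr */
        NULL, NULL,                 /* filter, filterarg */
        DFLTPHYS, 1, DFLTPHYS,      /* maxsize, nsegments, maxsegsz */
        0, NULL, NULL, &sc->buf_tag);

    /*
     * Any arbitrary malloc'd buffer can then be loaded; busdma bounces
     * whatever pages violate the tag's constraints.
     */
    error = bus_dmamap_load(sc->buf_tag, map, buf, buflen,
        xxx_load_cb, sc, BUS_DMA_NOWAIT);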

Scott



