Date:      Mon, 27 Aug 2012 10:08:36 -0600
From:      Ian Lepore <freebsd@damnhippie.dyndns.org>
To:        Warner Losh <imp@bsdimp.com>
Cc:        freebsd-arch@freebsd.org, freebsd-arm@freebsd.org, freebsd-mips@freebsd.org, Hans Petter Selasky <hans.petter.selasky@bitfrost.no>
Subject:   Re: Partial cacheline flush problems on ARM and MIPS
Message-ID:  <1346083716.1140.212.camel@revolution.hippie.lan>
In-Reply-To: <6D83AF9D-577B-4C83-84B7-C4E3B32695FC@bsdimp.com>
References:  <1345757300.27688.535.camel@revolution.hippie.lan> <3A08EB08-2BBF-4B0F-97F2-A3264754C4B7@bsdimp.com> <1345763393.27688.578.camel@revolution.hippie.lan> <FD8DC82C-AD3B-4EBC-A625-62A37B9ECBF1@bsdimp.com> <1345765503.27688.602.camel@revolution.hippie.lan> <CAJ-VmonOwgR7TNuYGtTOhAbgz-opti_MRJgc8G%2BB9xB3NvPFJQ@mail.gmail.com> <1345766109.27688.606.camel@revolution.hippie.lan> <CAJ-VmomFhqV5rTDf-kKQfbSuW7SSiSnqPEjGPtxWjaHFA046kQ@mail.gmail.com> <F8C9E811-8597-4ED0-9F9D-786EB2301D6F@bsdimp.com> <1346002922.1140.56.camel@revolution.hippie.lan> <6D83AF9D-577B-4C83-84B7-C4E3B32695FC@bsdimp.com>

On Sun, 2012-08-26 at 17:03 -0600, Warner Losh wrote:
> On Aug 26, 2012, at 11:42 AM, Ian Lepore wrote:
> > 
> > The busdma manpage currently has some vague words about the usage and
> > sequencing of sync ops, such as "If read and write operations are not
> > preceded and followed by the appropriate synchronization operations,
> > behavior is undefined."  I think we should more explicitly spell out
> > what the appropriate sequences are.  In particular:
> > 
> >      * The PRE and POST operations must occur in pairs; a PREREAD must
> >        be followed eventually by a POSTREAD and a PREWRITE must be
> >        followed by a POSTWRITE. 
> 
> PREREAD means "I am about to tell the device to put data here, have whatever things might be pending in the CPU complex get out of the way."  Usually this means 'invalidate the cache for that range', but not always.  POSTREAD means 'The device's DMA is done, I'd like to start accessing it now.'  If the memory will be thrown away without being looked at, then does the driver necessarily need to issue the POSTREAD?  I think so, but I don't know if that's a new requirement.
> 

One of the things that scares me most is the idea that driver writers
will glance at an existing implementation and think "Oh I have no need
to ever call POSTWRITE because it's implemented as a no-op and I can
save the call overhead."  In fact we have drivers coded like that now.

We've got an API here to support arbitrary hardware, some of which may
not have been designed yet.  I think it's really unsafe to let a driver
decide it can elide some calls in some situations just because the
elision happens to be harmless with the busdma implementations that
exist today.
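
For the man page, I'd like to spell the pairing out with an example
along these lines.  This is just a sketch; the softc layout and the
foo_* routine names are made up, and the point is only that every PRE
op eventually gets its matching POST op:

#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>

/*
 * Hypothetical softc, just enough for the example; assume the buffer
 * has already been allocated and bus_dmamap_load()ed.
 */
struct foo_softc {
	bus_dma_tag_t	buf_dtag;
	bus_dmamap_t	buf_dmap;
	void		*buf;
};

static void
foo_start_read(struct foo_softc *sc)
{
	/* Before telling the device to DMA into the buffer. */
	bus_dmamap_sync(sc->buf_dtag, sc->buf_dmap, BUS_DMASYNC_PREREAD);
	/* ... now program the device to write into sc->buf ... */
}

static void
foo_read_complete(struct foo_softc *sc)
{
	/*
	 * The device says the transfer is done.  POSTREAD is required
	 * before the CPU touches the data, and I'd argue it's required
	 * even if the driver intends to throw the data away.
	 */
	bus_dmamap_sync(sc->buf_dtag, sc->buf_dmap, BUS_DMASYNC_POSTREAD);
	/* ... the CPU may examine sc->buf from here on ... */
}

The same pairing applies to PREWRITE/POSTWRITE when the CPU is the
producer, and the no-op problem above is exactly why the man page
should require the calls rather than describe what today's
implementations happen to do inside them.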

> > We also need some rules about working with buffers obtained from
> > bus_dmamem_alloc() and external buffers passed to bus_dmamap_load().  I
> > think the rule should be that a buffer obtained from bus_dmamem_alloc(),
> > or more formally any region of memory mapped by a bus_dmamap_load(), is
> > a single logical object which can only be accessed by one entity at a
> > time.  That means that there cannot be two concurrent DMA operations
> > happening in different regions of the same buffer, nor can DMA and CPU
> > access be happening concurrently even if in different parts of the
> > buffer.  
> 
> There's something subtle that I'm missing.  Why would two DMA operations be disallowed?  The rest makes good sense.
> 

If two DMAs are going on concurrently in the same buffer, one is going
to finish before the other, leading to a POSTxxxx sync op happening for
one DMA operation while the other is still in progress.  The unit of
granularity for sync operations is the mapped region, so now you're
syncing access to a region which still has active DMA happening within
it.

While I think it's really an API definition issue, think about it in
terms of a potential implementation... What if the CPU had to access the
memory as part of the sync for the first DMA that completes, while the
second is still running?  Now you've got pretty much exactly the same
situation as when a driver subdivides a buffer without knowing about the
cache alignment; you end up with the CPU and DMA touching data in the
same cacheline, and no sequence of flush/invalidate can be guaranteed to
preserve all data correctly.
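
Or in driver terms, the pattern I'm saying the man page should forbid
looks something like this (reusing the made-up softc from the sketch
above; the 16-byte sizes and the foo_start_status_dma() routine are
hypothetical too):

static void
foo_bad_subdivide(struct foo_softc *sc)
{
	char *buf = sc->buf;	/* one bus_dmamap_load()ed region */

	/* Tell the device to DMA status into the first 16 bytes... */
	bus_dmamap_sync(sc->buf_dtag, sc->buf_dmap, BUS_DMASYNC_PREREAD);
	foo_start_status_dma(sc, buf, 16);	/* hypothetical */

	/*
	 * ...and while that's in flight, have the CPU build a command
	 * in the next 16 bytes.  If the two 16-byte pieces share a
	 * cache line, there is no sequence of flush/invalidate the
	 * sync ops can perform that preserves both the device's status
	 * bytes and the CPU's command bytes.
	 */
	buf[16] = 0x01;
}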

> > I've always thought that allocating a dma buffer feels like a big
> > hassle.  You sometimes have to create a tag for the sole purpose of
> > setting the maxsize to get the buffer size you need when you call
> > bus_dmamem_alloc().  If bus_dmamem_alloc() took a size parm you could
> > just use your parent tag, or a generic tag appropriate to all the IO
> > you're doing for a given device.  If you need a variety of buffers for
> > small control and command and status transfers of different sizes, you
> > end up having to manage up to a dozen tags and maps and buffers.  It's
> > all very clunky and inconvenient.  It's just the sort of thing that
> > makes you want to allocate a big buffer and subdivide it. Surely we
> > could do something to make it easier?
> 
> You'd wind up creating a quick tag on the fly for the bus_dmamem_alloc if you wanted to do this.  Cleanup then becomes unclear.
> 

My point is that the only piece of information in the tag that's
specific to the allocation is the maxsize.  If the allocation size were
passed to bus_dmamem_alloc(), then you wouldn't need a tag specific to
that buffer; a generic tag for the device would work, and you could
allocate a dozen different-sized buffers all using that one tag, with
the allocator just sanity-checking the allocation size against the
tag's maxsize.
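
To make that concrete, the difference I'm after is roughly this; the
_size variant and the field names are hypothetical, I'm only showing
the shape of the change, not proposing names:

/*
 * Today: the tag's maxsize is the only per-buffer detail, so a
 * 512-byte control buffer needs its own tag just to carry that one
 * number.
 */
bus_dma_tag_create(parent_tag, 1, 0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,
    NULL, NULL, 512, 1, 512, 0, NULL, NULL, &sc->ctrl_tag);
bus_dmamem_alloc(sc->ctrl_tag, &sc->ctrl_buf, BUS_DMA_WAITOK, &sc->ctrl_map);

/*
 * What I'm suggesting (hypothetical interface): pass the size at
 * allocation time, use one generic per-device tag for all the small
 * buffers, and let the allocator sanity-check size <= maxsize.
 */
bus_dmamem_alloc_size(sc->generic_tag, &sc->ctrl_buf, 512,
    BUS_DMA_WAITOK, &sc->ctrl_map);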

-- Ian
