From: Ian Lepore <ian@FreeBSD.org>
Date: Sun, 16 Nov 2014 21:39:56 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r274605 - head/sys/arm/arm
Message-Id: <201411162139.sAGLduHO017564@svn.freebsd.org>

Author: ian
Date: Sun Nov 16 21:39:56 2014
New Revision: 274605
URL: https://svnweb.freebsd.org/changeset/base/274605

Log:
  No functional changes.  Remove a couple outdated or inane comments and add
  new comment blocks describing why the cache maintenance sequences are done
  in the order they are for each case.

Modified:
  head/sys/arm/arm/busdma_machdep-v6.c

Modified: head/sys/arm/arm/busdma_machdep-v6.c
==============================================================================
--- head/sys/arm/arm/busdma_machdep-v6.c	Sun Nov 16 21:22:42 2014	(r274604)
+++ head/sys/arm/arm/busdma_machdep-v6.c	Sun Nov 16 21:39:56 2014	(r274605)
@@ -1314,17 +1314,20 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus
 	/*
 	 * If the buffer was from user space, it is possible that this is not
 	 * the same vm map, especially on a POST operation. It's not clear that
-	 * dma on userland buffers can work at all right now, certainly not if a
-	 * partial cacheline flush has to be handled. To be safe, until we're
-	 * able to test direct userland dma, panic on a map mismatch.
+	 * dma on userland buffers can work at all right now. To be safe, until
+	 * we're able to test direct userland dma, panic on a map mismatch.
 	 */
 	if ((bpage = STAILQ_FIRST(&map->bpages)) != NULL) {
 		if (!pmap_dmap_iscurrent(map->pmap))
 			panic("_bus_dmamap_sync: wrong user map for bounce sync.");
-		/* Handle data bouncing. */
+
 		CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x op 0x%x "
 		    "performing bounce", __func__, dmat, dmat->flags, op);
 
+		/*
+		 * For PREWRITE do a writeback. Clean the caches from the
+		 * innermost to the outermost levels.
+		 */
 		if (op & BUS_DMASYNC_PREWRITE) {
 			while (bpage != NULL) {
 				if (bpage->datavaddr != 0)
@@ -1345,6 +1348,17 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus
 			dmat->bounce_zone->total_bounced++;
 		}
 
+		/*
+		 * Do an invalidate for PREREAD unless a writeback was already
+		 * done above due to PREWRITE also being set. The reason for a
+		 * PREREAD invalidate is to prevent dirty lines currently in the
+		 * cache from being evicted during the DMA. If a writeback was
+		 * done due to PREWRITE also being set there will be no dirty
+		 * lines and the POSTREAD invalidate handles the rest. The
+		 * invalidate is done from the innermost to outermost level. If
+		 * L2 were done first, a dirty cacheline could be automatically
+		 * evicted from L1 before we invalidated it, re-dirtying the L2.
+		 */
 		if ((op & BUS_DMASYNC_PREREAD) && !(op & BUS_DMASYNC_PREWRITE)) {
 			bpage = STAILQ_FIRST(&map->bpages);
 			while (bpage != NULL) {
@@ -1356,6 +1370,16 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus
 				dmat->bounce_zone->total_bounced++;
 				bpage = STAILQ_NEXT(bpage, links);
 			}
 		}
+
+		/*
+		 * Re-invalidate the caches on a POSTREAD, even though they were
+		 * already invalidated at PREREAD time. Aggressive prefetching
+		 * due to accesses to other data near the dma buffer could have
+		 * brought buffer data into the caches which is now stale. The
+		 * caches are invalidated from the outermost to innermost; the
+		 * prefetches could be happening right now, and if L1 were
+		 * invalidated first, stale L2 data could be prefetched into L1.
+		 */
 		if (op & BUS_DMASYNC_POSTREAD) {
 			while (bpage != NULL) {
 				vm_offset_t startv;
@@ -1404,10 +1428,16 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus
 		return;
 	}
 
+	/*
+	 * Cache maintenance for normal (non-COHERENT non-bounce) buffers. All
+	 * the comments about the sequences for flushing cache levels in the
+	 * bounce buffer code above apply here as well. In particular, the fact
+	 * that the sequence is inner-to-outer for PREREAD invalidation and
+	 * outer-to-inner for POSTREAD invalidation is not a mistake.
+	 */
 	if (map->sync_count != 0) {
 		if (!pmap_dmap_iscurrent(map->pmap))
 			panic("_bus_dmamap_sync: wrong user map for sync.");
-		/* ARM caches are not self-snooping for dma */
 		sl = &map->slist[0];
 		end = &map->slist[map->sync_count];
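
As an illustration of the sequencing described in the new comments, here is a
small standalone C sketch. It is not the busdma code itself: the
dcache_*_range()/l2cache_*_range() helpers are hypothetical printf stubs
standing in for per-level clean/invalidate-by-range primitives, and the
BUS_DMASYNC_* flags are defined locally so the example builds on its own. Only
the inner-vs-outer ordering for each sync op is meant to mirror the comments
above.

/*
 * Standalone sketch, not the code from busdma_machdep-v6.c. The helpers are
 * hypothetical printf stubs for per-level cache maintenance by range
 * (L1 = innermost data cache, L2 = outermost cache).
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define	BUS_DMASYNC_PREREAD	0x01
#define	BUS_DMASYNC_POSTREAD	0x02
#define	BUS_DMASYNC_PREWRITE	0x04
#define	BUS_DMASYNC_POSTWRITE	0x08

static void
cache_op(const char *what, const char *level, uintptr_t va, size_t len)
{
	printf("  %-10s %s  va=0x%lx len=%zu\n", what, level,
	    (unsigned long)va, len);
}

/* Hypothetical per-level primitives (stubs that only log). */
static void dcache_wb_range(uintptr_t va, size_t len)   { cache_op("clean", "L1", va, len); }
static void dcache_inv_range(uintptr_t va, size_t len)  { cache_op("invalidate", "L1", va, len); }
static void l2cache_wb_range(uintptr_t va, size_t len)  { cache_op("clean", "L2", va, len); }
static void l2cache_inv_range(uintptr_t va, size_t len) { cache_op("invalidate", "L2", va, len); }

static void
dma_sync_range(uintptr_t va, size_t len, int op)
{

	if (op & BUS_DMASYNC_PREWRITE) {
		/* Writeback inner-to-outer so dirty L1 data reaches L2 and
		 * then memory before the device reads the buffer. */
		dcache_wb_range(va, len);
		l2cache_wb_range(va, len);
	}
	if ((op & BUS_DMASYNC_PREREAD) && !(op & BUS_DMASYNC_PREWRITE)) {
		/* Invalidate inner-to-outer; if L2 went first, a dirty L1
		 * line could later be evicted into it, re-dirtying L2 over
		 * the device's incoming data. */
		dcache_inv_range(va, len);
		l2cache_inv_range(va, len);
	}
	if (op & BUS_DMASYNC_POSTREAD) {
		/* Invalidate again, outer-to-inner; prefetching during the
		 * DMA may have pulled in stale data, and doing L1 first
		 * would let stale L2 lines be prefetched back into L1. */
		l2cache_inv_range(va, len);
		dcache_inv_range(va, len);
	}
}

int
main(void)
{
	uintptr_t buf = 0x80100000UL;	/* arbitrary example address */

	printf("PREWRITE:\n");
	dma_sync_range(buf, 4096, BUS_DMASYNC_PREWRITE);
	printf("PREREAD:\n");
	dma_sync_range(buf, 4096, BUS_DMASYNC_PREREAD);
	printf("POSTREAD:\n");
	dma_sync_range(buf, 4096, BUS_DMASYNC_POSTREAD);
	return (0);
}

Running the sketch prints the per-level operations in the order the comments
call for: L1 then L2 for the PREWRITE clean and the PREREAD invalidate, and L2
then L1 for the POSTREAD invalidate.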