From owner-freebsd-arch@FreeBSD.ORG Wed Apr 29 22:23:38 2015
Message-ID:
<1430346204.1157.107.camel@freebsd.org>
Subject: Re: bus_dmamap_sync() for bounced client buffers from user address space
From: Ian Lepore
To: Jason Harmening
Cc: Konstantin Belousov, Adrian Chadd, Svatopluk Kraus, freebsd-arch
Date: Wed, 29 Apr 2015 16:23:24 -0600

On Wed, 2015-04-29 at 14:59 -0500, Jason Harmening wrote:
> > Even without SMP, VIPT cache cannot hold two mappings of the same
> > page.  As I understand, sometimes it is more involved, e.g. if
> > mappings have the correct color (e.g. on ultrasparcs), then the
> > cache can deal with aliasing.  Otherwise pmap has to map the page
> > uncached for all mappings.
> 
> Yes, you are right.  Regardless of whatever logic the cache uses (or
> doesn't use), FreeBSD's page-coloring scheme should prevent that.
> 
> > I do not see what would make this case special for SMP after that.
> > Cache invalidation would be either not needed, or coherency domain
> > propagation of the virtual address does the right thing.
> 
> Since VIPT cache operations require a virtual address, I'm wondering
> about the case where different processes are running on different
> cores, and the same UVA corresponds to a completely different
> physical page for each of those processes.
> If the d-cache for each core contains that UVA, then what does it
> mean when one core issues a flush/invalidate instruction for that
> UVA?
> 
> Admittedly, there's a lot I don't know about how that's supposed to
> work in the arm/mips SMP world.  For all I know, the SMP targets
> could be fully-snooped and we don't need to worry about cache
> maintenance at all.

For what we call armv6 (which is mostly armv7)...

The cache maintenance operations require virtual addresses, which
means it looks a lot like a VIPT cache.  Under the hood the
implementation behaves as if it were a PIPT cache, so even in the
presence of multiple mappings of the same physical page into
different virtual addresses, the SMP coherency hardware works
correctly.

The ARM ARM says...

    [Stuff about ARMv6 and page coloring when a cache way exceeds
    4K.]

    ARMv7 does not support page coloring, and requires that all data
    and unified caches behave as Physically Indexed Physically
    Tagged (PIPT) caches.

The only true armv6 chip we support isn't SMP and has a 16K/4-way
cache that neatly sidesteps the aliasing problem that requires
page-coloring solutions.

So on modern arm chips we get to act like we've got PIPT data caches,
but with the quirk that cache ops are initiated by virtual address.
Basically, when you perform a cache maintenance operation, a
translation table walk is done on the core that issued the cache op,
then from that point on the physical address is used within the cache
hardware, and that's what gets broadcast to the other cores by the
snoop control unit or cache coherency fabric (depending on the chip).

Not that it's germane to this discussion, but an ARM instruction
cache really can be VIPT with no "behave as if" restrictions in the
spec.  That means that when doing i-cache maintenance on a virtual
address that could be multiply-mapped, our only option is a rather
expensive all-cores "invalidate entire i-cache and branch predictor
cache" operation.
For the older armv4/v5 world, which is VIVT, we have a restriction
that a page that is multiply-mapped cannot have caching enabled (it's
handled in pmap).  That's also probably not very germane to this
discussion, because it doesn't seem likely that anyone is going to
try to add physical IO or userspace DMA support to that old code.

-- Ian