From owner-svn-src-head@FreeBSD.ORG Sun Apr 27 00:46:02 2014
Delivered-To: svn-src-head@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id B1D68A78;
 Sun, 27 Apr 2014 00:46:02 +0000 (UTC)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 9EDCF16DE;
 Sun, 27 Apr 2014 00:46:02 +0000 (UTC)
Received: from svn.freebsd.org ([127.0.1.70])
 by svn.freebsd.org (8.14.8/8.14.8) with ESMTP id s3R0k2Kq039754;
 Sun, 27 Apr 2014 00:46:02 GMT (envelope-from ian@svn.freebsd.org)
Received: (from ian@localhost)
 by svn.freebsd.org (8.14.8/8.14.8/Submit) id s3R0k1w0039748;
 Sun, 27 Apr 2014 00:46:01 GMT (envelope-from ian@svn.freebsd.org)
Message-Id: <201404270046.s3R0k1w0039748@svn.freebsd.org>
From: Ian Lepore
Date: Sun, 27 Apr 2014 00:46:01 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r264994 - in head/sys/arm: arm include
X-SVN-Group: head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-head@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: SVN commit messages for the src tree for head/-current
X-List-Received-Date: Sun, 27 Apr 2014 00:46:02 -0000

Author: ian
Date: Sun Apr 27 00:46:01 2014
New Revision: 264994

URL: http://svnweb.freebsd.org/changeset/base/264994

Log:
  Provide a proper armv7 implementation of icache_sync_all rather than
  using armv7_idcache_wbinv_all, because wbinv_all doesn't broadcast the
  operation to other cores.
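[Editorial note: for reference, the local and broadcast forms of the full icache invalidate differ only in their CP15 encoding. A rough sketch of the two instructions (encodings per the ARMv7-A architecture manual; this snippet is illustrative and not part of the commit):]

```
	/* ICIALLU: invalidate entire icache to PoU, this core only */
	mcr	p15, 0, r0, c7, c5, 0

	/* ICIALLUIS: invalidate entire icache to PoU, broadcast to all
	 * cores in the Inner Shareable domain -- the form this commit
	 * uses so that module code is visible to every CPU */
	mcr	p15, 0, r0, c7, c1, 0
```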
  In elf_cpu_load_file() use icache_sync_all() and explain why it's needed
  (and why other sync operations aren't). As part of doing this, all
  callers of cpu_icache_sync_all() were inspected to ensure they weren't
  relying on the old side effect of doing a wbinv_all along with the
  icache work.

Modified:
  head/sys/arm/arm/cpufunc.c
  head/sys/arm/arm/cpufunc_asm_armv7.S
  head/sys/arm/arm/elf_machdep.c
  head/sys/arm/include/cpufunc.h

Modified: head/sys/arm/arm/cpufunc.c
==============================================================================
--- head/sys/arm/arm/cpufunc.c	Sun Apr 27 00:45:08 2014	(r264993)
+++ head/sys/arm/arm/cpufunc.c	Sun Apr 27 00:46:01 2014	(r264994)
@@ -769,7 +769,7 @@ struct cpu_functions cortexa_cpufuncs =
 
 	/* Cache operations */
 
-	armv7_idcache_wbinv_all,	/* icache_sync_all */
+	armv7_icache_sync_all,		/* icache_sync_all */
 	armv7_icache_sync_range,	/* icache_sync_range */
 
 	armv7_dcache_wbinv_all,		/* dcache_wbinv_all */

Modified: head/sys/arm/arm/cpufunc_asm_armv7.S
==============================================================================
--- head/sys/arm/arm/cpufunc_asm_armv7.S	Sun Apr 27 00:45:08 2014	(r264993)
+++ head/sys/arm/arm/cpufunc_asm_armv7.S	Sun Apr 27 00:46:01 2014	(r264994)
@@ -250,6 +250,13 @@ ENTRY(armv7_idcache_wbinv_range)
 	RET
 END(armv7_idcache_wbinv_range)
 
+ENTRY_NP(armv7_icache_sync_all)
+	mcr	p15, 0, r0, c7, c1, 0	/* Invalidate all I cache to PoU Inner Shareable */
+	isb				/* instruction synchronization barrier */
+	dsb				/* data synchronization barrier */
+	RET
+END(armv7_icache_sync_all)
+
 ENTRY_NP(armv7_icache_sync_range)
 	ldr	ip, .Larmv7_line_size
 .Larmv7_sync_next:

Modified: head/sys/arm/arm/elf_machdep.c
==============================================================================
--- head/sys/arm/arm/elf_machdep.c	Sun Apr 27 00:45:08 2014	(r264993)
+++ head/sys/arm/arm/elf_machdep.c	Sun Apr 27 00:46:01 2014	(r264994)
@@ -220,9 +220,19 @@ int
 elf_cpu_load_file(linker_file_t lf __unused)
 {
 
-	cpu_idcache_wbinv_all();
-	cpu_l2cache_wbinv_all();
-	cpu_tlb_flushID();
+	/*
+	 * The pmap code does not do an icache sync upon establishing executable
+	 * mappings in the kernel pmap.  It's an optimization based on the fact
+	 * that kernel memory allocations always have EXECUTABLE protection even
+	 * when the memory isn't going to hold executable code.  The only time
+	 * kernel memory holding instructions does need a sync is after loading
+	 * a kernel module, and that's when this function gets called.  Normal
+	 * data cache maintenance has already been done by the IO code, and TLB
+	 * maintenance has been done by the pmap code, so all we have to do here
+	 * is invalidate the instruction cache (which also invalidates the
+	 * branch predictor cache on platforms that have one).
+	 */
+	cpu_icache_sync_all();
 
 	return (0);
 }

Modified: head/sys/arm/include/cpufunc.h
==============================================================================
--- head/sys/arm/include/cpufunc.h	Sun Apr 27 00:45:08 2014	(r264993)
+++ head/sys/arm/include/cpufunc.h	Sun Apr 27 00:46:01 2014	(r264994)
@@ -411,6 +411,7 @@ void armv6_idcache_wbinv_range (vm_offse
 void	armv7_setttb		(u_int);
 void	armv7_tlb_flushID	(void);
 void	armv7_tlb_flushID_SE	(u_int);
+void	armv7_icache_sync_all	();
 void	armv7_icache_sync_range	(vm_offset_t, vm_size_t);
 void	armv7_idcache_wbinv_range (vm_offset_t, vm_size_t);
 void	armv7_idcache_inv_all	(void);