Date: Mon, 21 Aug 2017 18:12:32 +0000 (UTC)
From: Andrew Turner <andrew@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r322769 - head/sys/arm64/arm64
Message-ID: <201708211812.v7LICWEO039995@repo.freebsd.org>
Author: andrew
Date: Mon Aug 21 18:12:32 2017
New Revision: 322769
URL: https://svnweb.freebsd.org/changeset/base/322769

Log:
  Improve the performance of the arm64 thread switching code.

  The full system memory barrier around a TLB invalidation is stricter than
  required: it only needs to order accesses to main memory, and only stores
  need to complete before the invalidate. As such, use the dsb ishst, tlbi,
  dsb ish sequence already used in pmap.

  The tlbi instruction in this sequence is also unnecessarily using a
  broadcast invalidate when it only needs to invalidate the local CPU's TLB.
  Switch to a non-broadcast variant of this instruction.

  Sponsored by:	DARPA, AFRL

Modified:
  head/sys/arm64/arm64/swtch.S

Modified: head/sys/arm64/arm64/swtch.S
==============================================================================
--- head/sys/arm64/arm64/swtch.S	Mon Aug 21 18:00:26 2017	(r322768)
+++ head/sys/arm64/arm64/swtch.S	Mon Aug 21 18:12:32 2017	(r322769)
@@ -91,9 +91,9 @@ ENTRY(cpu_throw)
 	isb
 
 	/* Invalidate the TLB */
-	dsb	sy
-	tlbi	vmalle1is
-	dsb	sy
+	dsb	ishst
+	tlbi	vmalle1
+	dsb	ish
 	isb
 
 	/* If we are single stepping, enable it */
@@ -192,9 +192,9 @@ ENTRY(cpu_switch)
 	isb
 
 	/* Invalidate the TLB */
-	dsb	sy
-	tlbi	vmalle1is
-	dsb	sy
+	dsb	ishst
+	tlbi	vmalle1
+	dsb	ish
 	isb
 
 	/*
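For readers following along, the same sequence can be written as a C helper
using GCC/Clang-style inline assembly. This is a minimal sketch for
illustration only, not FreeBSD's actual pmap interface; the function name
local_tlb_flush_all is hypothetical:

	#include <stdint.h>

	/*
	 * Hypothetical helper (not the kernel's API): invalidate all
	 * stage 1 EL1&0 TLB entries on the local CPU only, using the
	 * same barrier sequence as r322769.
	 */
	static inline void
	local_tlb_flush_all(void)
	{

		__asm__ __volatile__(
		    "dsb	ishst\n"	/* complete earlier stores, inner shareable */
		    "tlbi	vmalle1\n"	/* invalidate EL1&0 entries, this CPU only */
		    "dsb	ish\n"		/* wait for the invalidate to finish */
		    "isb\n"			/* resynchronize the instruction stream */
		    : : : "memory");
	}

The design point is the same one the log makes: dsb sy orders every kind of
access across the full system, while dsb ishst only waits for stores within
the inner shareable domain, which is all that must complete before the
invalidate. Likewise, tlbi vmalle1is broadcasts the invalidate to every CPU
in the inner shareable domain, but a thread switch only needs to drop stale
entries on the CPU doing the switch, so the local tlbi vmalle1 suffices.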