From nobody Wed Jun 14 16:46:36 2023
X-Original-To: dev-commits-src-all@mlmmj.nyi.freebsd.org
Date: Wed, 14 Jun 2023 16:46:36 GMT
Message-Id: <202306141646.35EGkana028320@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
    dev-commits-src-main@FreeBSD.org
From: Mitchell Horne
Subject: git: 693cd30797b8 - main - hwpmc_mod.c: whitespace style cleanup
List-Id: Commit messages for all branches of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-all
List-Help:
List-Post:
List-Subscribe:
List-Unsubscribe:
Sender: owner-dev-commits-src-all@freebsd.org
X-BeenThere: dev-commits-src-all@freebsd.org
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: mhorne
X-Git-Repository: src
X-Git-Refname: refs/heads/main
X-Git-Reftype: branch
X-Git-Commit: 693cd30797b8e613436d6c1829a28d417ffb997e
Auto-Submitted: auto-generated
X-ThisMailContainsUnwantedMimeParts: N

The branch main has been updated by mhorne:

URL: https://cgit.FreeBSD.org/src/commit/?id=693cd30797b8e613436d6c1829a28d417ffb997e

commit 693cd30797b8e613436d6c1829a28d417ffb997e
Author:     Mitchell Horne
AuthorDate: 2023-06-14 16:31:15 +0000
Commit:     Mitchell Horne
CommitDate: 2023-06-14 16:34:20 +0000

    hwpmc_mod.c: whitespace style cleanup

    Handle a few things related to spacing:
    - Remove redundant/superfluous blank lines (and add a couple where
      helpful)
    - Add spacing around binary operators
    - Remove spacing after casts and before goto labels
    - Adjustments for line width of 80 chars
    - Tab/space character issues

    Reviewed by:    jkoshy
    MFC after:      2 weeks
    Sponsored by:   The FreeBSD Foundation
    Differential Revision:  https://reviews.freebsd.org/D40514
---
 sys/dev/hwpmc/hwpmc_mod.c | 460 +++++++++++++++++-----------------------------
 1 file changed, 169 insertions(+), 291 deletions(-)

diff --git a/sys/dev/hwpmc/hwpmc_mod.c b/sys/dev/hwpmc/hwpmc_mod.c
index 779f6bf4dc32..7b8950f3c024 100644
--- a/sys/dev/hwpmc/hwpmc_mod.c
+++ b/sys/dev/hwpmc/hwpmc_mod.c
@@ -29,7 +29,6 @@
  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
  * SUCH DAMAGE.
- *
  */
 
 #include
@@ -79,8 +78,12 @@ __FBSDID("$FreeBSD$");
 
 #include "hwpmc_soft.h"
 
-#define PMC_EPOCH_ENTER() struct epoch_tracker pmc_et; epoch_enter_preempt(global_epoch_preempt, &pmc_et)
-#define PMC_EPOCH_EXIT() epoch_exit_preempt(global_epoch_preempt, &pmc_et)
+#define PMC_EPOCH_ENTER() \
+	struct epoch_tracker pmc_et; \
+	epoch_enter_preempt(global_epoch_preempt, &pmc_et)
+
+#define PMC_EPOCH_EXIT() \
+	epoch_exit_preempt(global_epoch_preempt, &pmc_et)
 
 /*
  * Types
@@ -96,12 +99,12 @@ enum pmc_flags {
 /*
  * The offset in sysent where the syscall is allocated.
  */
-
 static int pmc_syscall_num = NO_SYSCALL;
+
 struct pmc_cpu **pmc_pcpu;		/* per-cpu state */
 pmc_value_t *pmc_pcpu_saved;		/* saved PMC values: CSW handling */
 
-#define PMC_PCPU_SAVED(C,R)	pmc_pcpu_saved[(R) + md->pmd_npmc*(C)]
+#define PMC_PCPU_SAVED(C, R)	pmc_pcpu_saved[(R) + md->pmd_npmc * (C)]
 
 struct mtx_pool *pmc_mtxpool;
 static int *pmc_pmcdisp;	/* PMC row dispositions */
@@ -140,7 +143,6 @@ static int *pmc_pmcdisp;	/* PMC row dispositions */
 	    __LINE__));							\
 } while (0)
 
-
 /* various event handlers */
 static eventhandler_tag pmc_exit_tag, pmc_fork_tag, pmc_kld_load_tag,
     pmc_kld_unload_tag;
@@ -148,41 +150,37 @@ static eventhandler_tag pmc_exit_tag, pmc_fork_tag, pmc_kld_load_tag,
 /* Module statistics */
 struct pmc_driverstats pmc_stats;
 
-
 /* Machine/processor dependent operations */
 static struct pmc_mdep *md;
 
 /*
  * Hash tables mapping owner processes and target threads to PMCs.
*/ - struct mtx pmc_processhash_mtx; /* spin mutex */ static u_long pmc_processhashmask; -static LIST_HEAD(pmc_processhash, pmc_process) *pmc_processhash; +static LIST_HEAD(pmc_processhash, pmc_process) *pmc_processhash; /* * Hash table of PMC owner descriptors. This table is protected by * the shared PMC "sx" lock. */ - static u_long pmc_ownerhashmask; -static LIST_HEAD(pmc_ownerhash, pmc_owner) *pmc_ownerhash; +static LIST_HEAD(pmc_ownerhash, pmc_owner) *pmc_ownerhash; /* * List of PMC owners with system-wide sampling PMCs. */ - -static CK_LIST_HEAD(, pmc_owner) pmc_ss_owners; +static CK_LIST_HEAD(, pmc_owner) pmc_ss_owners; /* * List of free thread entries. This is protected by the spin * mutex. */ static struct mtx pmc_threadfreelist_mtx; /* spin mutex */ -static LIST_HEAD(, pmc_thread) pmc_threadfreelist; -static int pmc_threadfreelist_entries=0; -#define THREADENTRY_SIZE \ -(sizeof(struct pmc_thread) + (md->pmd_npmc * sizeof(struct pmc_threadpmcstate))) +static LIST_HEAD(, pmc_thread) pmc_threadfreelist; +static int pmc_threadfreelist_entries = 0; +#define THREADENTRY_SIZE (sizeof(struct pmc_thread) + \ + (md->pmd_npmc * sizeof(struct pmc_threadpmcstate))) /* * Task to free thread descriptors @@ -198,13 +196,14 @@ static struct pmc_classdep **pmc_rowindex_to_classdep; * Prototypes */ -#ifdef HWPMC_DEBUG +#ifdef HWPMC_DEBUG static int pmc_debugflags_sysctl_handler(SYSCTL_HANDLER_ARGS); static int pmc_debugflags_parse(char *newstr, char *fence); #endif static int load(struct module *module, int cmd, void *arg); -static int pmc_add_sample(ring_type_t ring, struct pmc *pm, struct trapframe *tf); +static int pmc_add_sample(ring_type_t ring, struct pmc *pm, + struct trapframe *tf); static void pmc_add_thread_descriptors_from_proc(struct proc *p, struct pmc_process *pp); static int pmc_attach_process(struct proc *p, struct pmc *pm); @@ -214,7 +213,8 @@ static int pmc_attach_one_process(struct proc *p, struct pmc *pm); static int pmc_can_allocate_rowindex(struct proc *p, unsigned int ri, int cpu); static int pmc_can_attach(struct pmc *pm, struct proc *p); -static void pmc_capture_user_callchain(int cpu, int soft, struct trapframe *tf); +static void pmc_capture_user_callchain(int cpu, int soft, + struct trapframe *tf); static void pmc_cleanup(void); static int pmc_detach_process(struct proc *p, struct pmc *pm); static int pmc_detach_one_process(struct proc *p, struct pmc *pm, @@ -259,6 +259,7 @@ static void pmc_unlink_target_process(struct pmc *pmc, struct pmc_process *pp); static int generic_switch_in(struct pmc_cpu *pc, struct pmc_process *pp); static int generic_switch_out(struct pmc_cpu *pc, struct pmc_process *pp); + static struct pmc_mdep *pmc_generic_cpu_initialize(void); static void pmc_generic_cpu_finalize(struct pmc_mdep *md); static void pmc_post_callchain_callback(void); @@ -275,37 +276,49 @@ SYSCTL_DECL(_kern_hwpmc); SYSCTL_NODE(_kern_hwpmc, OID_AUTO, stats, CTLFLAG_RW | CTLFLAG_MPSAFE, 0, "HWPMC stats"); - /* Stats. 
*/ SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, intr_ignored, CTLFLAG_RW, - &pmc_stats.pm_intr_ignored, "# of interrupts ignored"); + &pmc_stats.pm_intr_ignored, + "# of interrupts ignored"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, intr_processed, CTLFLAG_RW, - &pmc_stats.pm_intr_processed, "# of interrupts processed"); + &pmc_stats.pm_intr_processed, + "# of interrupts processed"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, intr_bufferfull, CTLFLAG_RW, - &pmc_stats.pm_intr_bufferfull, "# of interrupts where buffer was full"); + &pmc_stats.pm_intr_bufferfull, + "# of interrupts where buffer was full"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, syscalls, CTLFLAG_RW, - &pmc_stats.pm_syscalls, "# of syscalls"); + &pmc_stats.pm_syscalls, + "# of syscalls"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, syscall_errors, CTLFLAG_RW, - &pmc_stats.pm_syscall_errors, "# of syscall_errors"); + &pmc_stats.pm_syscall_errors, + "# of syscall_errors"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, buffer_requests, CTLFLAG_RW, - &pmc_stats.pm_buffer_requests, "# of buffer requests"); -SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, buffer_requests_failed, CTLFLAG_RW, - &pmc_stats.pm_buffer_requests_failed, "# of buffer requests which failed"); + &pmc_stats.pm_buffer_requests, + "# of buffer requests"); +SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, buffer_requests_failed, + CTLFLAG_RW, &pmc_stats.pm_buffer_requests_failed, + "# of buffer requests which failed"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, log_sweeps, CTLFLAG_RW, - &pmc_stats.pm_log_sweeps, "# of times samples were processed"); + &pmc_stats.pm_log_sweeps, + "# of times samples were processed"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, merges, CTLFLAG_RW, - &pmc_stats.pm_merges, "# of times kernel stack was found for user trace"); + &pmc_stats.pm_merges, + "# of times kernel stack was found for user trace"); SYSCTL_COUNTER_U64(_kern_hwpmc_stats, OID_AUTO, overwrites, CTLFLAG_RW, - &pmc_stats.pm_overwrites, "# of times a sample was overwritten before being logged"); + &pmc_stats.pm_overwrites, + "# of times a sample was overwritten before being logged"); static int pmc_callchaindepth = PMC_CALLCHAIN_DEPTH; SYSCTL_INT(_kern_hwpmc, OID_AUTO, callchaindepth, CTLFLAG_RDTUN, - &pmc_callchaindepth, 0, "depth of call chain records"); + &pmc_callchaindepth, 0, + "depth of call chain records"); char pmc_cpuid[PMC_CPUID_LEN]; SYSCTL_STRING(_kern_hwpmc, OID_AUTO, cpuid, CTLFLAG_RD, - pmc_cpuid, 0, "cpu version string"); -#ifdef HWPMC_DEBUG + pmc_cpuid, 0, + "cpu version string"); + +#ifdef HWPMC_DEBUG struct pmc_debugflags pmc_debugflags = PMC_DEBUG_DEFAULT_FLAGS; char pmc_debugstr[PMC_DEBUG_STRSIZE]; TUNABLE_STR(PMC_SYSCTL_NAME_PREFIX "debugflags", pmc_debugstr, @@ -316,53 +329,48 @@ SYSCTL_PROC(_kern_hwpmc, OID_AUTO, debugflags, "debug flags"); #endif - /* * kern.hwpmc.hashrows -- determines the number of rows in the * of the hash table used to look up threads */ - static int pmc_hashsize = PMC_HASH_SIZE; SYSCTL_INT(_kern_hwpmc, OID_AUTO, hashsize, CTLFLAG_RDTUN, - &pmc_hashsize, 0, "rows in hash tables"); + &pmc_hashsize, 0, + "rows in hash tables"); /* * kern.hwpmc.nsamples --- number of PC samples/callchain stacks per CPU */ - static int pmc_nsamples = PMC_NSAMPLES; SYSCTL_INT(_kern_hwpmc, OID_AUTO, nsamples, CTLFLAG_RDTUN, - &pmc_nsamples, 0, "number of PC samples per CPU"); + &pmc_nsamples, 0, + "number of PC samples per CPU"); -static uint64_t pmc_sample_mask = PMC_NSAMPLES-1; +static uint64_t 
pmc_sample_mask = PMC_NSAMPLES - 1; /* * kern.hwpmc.mtxpoolsize -- number of mutexes in the mutex pool. */ - static int pmc_mtxpool_size = PMC_MTXPOOL_SIZE; SYSCTL_INT(_kern_hwpmc, OID_AUTO, mtxpoolsize, CTLFLAG_RDTUN, - &pmc_mtxpool_size, 0, "size of spin mutex pool"); - + &pmc_mtxpool_size, 0, + "size of spin mutex pool"); /* * kern.hwpmc.threadfreelist_entries -- number of free entries */ - SYSCTL_INT(_kern_hwpmc, OID_AUTO, threadfreelist_entries, CTLFLAG_RD, - &pmc_threadfreelist_entries, 0, "number of available thread entries"); - + &pmc_threadfreelist_entries, 0, + "number of available thread entries"); /* * kern.hwpmc.threadfreelist_max -- maximum number of free entries */ - static int pmc_threadfreelist_max = PMC_THREADLIST_MAX; SYSCTL_INT(_kern_hwpmc, OID_AUTO, threadfreelist_max, CTLFLAG_RW, &pmc_threadfreelist_max, 0, "maximum number of available thread entries before freeing some"); - /* * kern.hwpmc.mincount -- minimum sample count */ @@ -379,7 +387,6 @@ SYSCTL_INT(_kern_hwpmc, OID_AUTO, mincount, CTLFLAG_RWTUN, * if system-wide measurements need to be taken concurrently with other * per-process measurements. This feature is turned off by default. */ - static int pmc_unprivileged_syspmcs = 0; SYSCTL_INT(_security_bsd, OID_AUTO, unprivileged_syspmcs, CTLFLAG_RWTUN, &pmc_unprivileged_syspmcs, 0, @@ -390,7 +397,6 @@ SYSCTL_INT(_security_bsd, OID_AUTO, unprivileged_syspmcs, CTLFLAG_RWTUN, * these are always zero for our uses. The hash multiplier is * round((2^LONG_BIT) * ((sqrt(5)-1)/2)). */ - #if LONG_BIT == 64 #define _PMC_HM 11400714819323198486u #elif LONG_BIT == 32 @@ -433,7 +439,7 @@ DECLARE_MODULE(pmc, pmc_mod, SI_SUB_SMP, SI_ORDER_ANY); #endif MODULE_VERSION(pmc, PMC_VERSION); -#ifdef HWPMC_DEBUG +#ifdef HWPMC_DEBUG enum pmc_dbgparse_state { PMCDS_WS, /* in whitespace */ PMCDS_MAJOR, /* seen a major keyword */ @@ -448,7 +454,7 @@ pmc_debugflags_parse(char *newstr, char *fence) int error, found, *newbits, tmp; size_t kwlen; - tmpflags = malloc(sizeof(*tmpflags), M_PMC, M_WAITOK|M_ZERO); + tmpflags = malloc(sizeof(*tmpflags), M_PMC, M_WAITOK | M_ZERO); p = newstr; error = 0; @@ -564,8 +570,7 @@ pmc_debugflags_parse(char *newstr, char *fence) /* save the new flag set */ bcopy(tmpflags, &pmc_debugflags, sizeof(pmc_debugflags)); - - done: +done: free(tmpflags, M_PMC); return (error); } @@ -580,7 +585,7 @@ pmc_debugflags_sysctl_handler(SYSCTL_HANDLER_ARGS) (void) arg1; (void) arg2; /* unused parameters */ n = sizeof(pmc_debugstr); - newstr = malloc(n, M_PMC, M_WAITOK|M_ZERO); + newstr = malloc(n, M_PMC, M_WAITOK | M_ZERO); (void) strlcpy(newstr, pmc_debugstr, n); error = sysctl_handle_string(oidp, newstr, n, req); @@ -614,12 +619,10 @@ pmc_ri_to_classdep(struct pmc_mdep *md, int ri, int *adjri) ("[pmc,%d] illegal row-index %d", __LINE__, ri)); pcd = pmc_rowindex_to_classdep[ri]; - KASSERT(pcd != NULL, ("[pmc,%d] ri %d null pcd", __LINE__, ri)); *adjri = ri - pcd->pcd_ri; - KASSERT(*adjri >= 0 && *adjri < pcd->pcd_num, ("[pmc,%d] adjusted row-index %d", __LINE__, *adjri)); @@ -743,7 +746,6 @@ pmc_ri_to_classdep(struct pmc_mdep *md, int ri, int *adjri) /* * save the cpu binding of the current kthread */ - void pmc_save_cpu_binding(struct pmc_binding *pb) { @@ -759,7 +761,6 @@ pmc_save_cpu_binding(struct pmc_binding *pb) /* * restore the cpu binding of the current thread */ - void pmc_restore_cpu_binding(struct pmc_binding *pb) { @@ -777,7 +778,6 @@ pmc_restore_cpu_binding(struct pmc_binding *pb) /* * move execution over the specified cpu and bind it there. 
*/ - void pmc_select_cpu(int cpu) { @@ -807,7 +807,6 @@ pmc_select_cpu(int cpu) * We do this by pause'ing for 1 tick -- invoking mi_switch() is not * guaranteed to force a context switch. */ - static void pmc_force_context_switch(void) { @@ -832,7 +831,6 @@ pmc_rdtsc(void) * Get the file name for an executable. This is a simple wrapper * around vn_fullpath(9). */ - static void pmc_getfilename(struct vnode *v, char **fullpath, char **freepath) { @@ -845,7 +843,6 @@ pmc_getfilename(struct vnode *v, char **fullpath, char **freepath) /* * remove an process owning PMCs */ - void pmc_remove_owner(struct pmc_owner *po) { @@ -881,7 +878,6 @@ pmc_remove_owner(struct pmc_owner *po) /* * remove an owner process record if all conditions are met. */ - static void pmc_maybe_remove_owner(struct pmc_owner *po) { @@ -893,7 +889,6 @@ pmc_maybe_remove_owner(struct pmc_owner *po) * - this process does not own any PMCs * - this process has not allocated a system-wide sampling buffer */ - if (LIST_EMPTY(&po->po_pmcs) && ((po->po_flags & PMC_PO_OWNS_LOGFILE) == 0)) { pmc_remove_owner(po); @@ -904,7 +899,6 @@ pmc_maybe_remove_owner(struct pmc_owner *po) /* * Add an association between a target process and a PMC. */ - static void pmc_link_target_process(struct pmc *pm, struct pmc_process *pp) { @@ -915,7 +909,6 @@ pmc_link_target_process(struct pmc *pm, struct pmc_process *pp) #endif sx_assert(&pmc_sx, SX_XLOCKED); - KASSERT(pm != NULL && pp != NULL, ("[pmc,%d] Null pm %p or pp %p", __LINE__, pm, pp)); KASSERT(PMC_IS_VIRTUAL_MODE(PMC_TO_MODE(pm)), @@ -936,8 +929,7 @@ pmc_link_target_process(struct pmc *pm, struct pmc_process *pp) KASSERT(0, ("[pmc,%d] pp %p already in pmc %p targets", __LINE__, pp, pm)); #endif - - pt = malloc(sizeof(struct pmc_target), M_PMC, M_WAITOK|M_ZERO); + pt = malloc(sizeof(struct pmc_target), M_PMC, M_WAITOK | M_ZERO); pt->pt_process = pp; LIST_INSERT_HEAD(&pm->pm_targets, pt, pt_next); @@ -953,7 +945,6 @@ pmc_link_target_process(struct pmc *pm, struct pmc_process *pp) */ pp->pp_pmcs[ri].pp_pmcval = PMC_TO_MODE(pm) == PMC_MODE_TS ? pm->pm_sc.pm_reloadcount : 0; - pp->pp_refcnt++; #ifdef INVARIANTS @@ -973,7 +964,6 @@ pmc_link_target_process(struct pmc *pm, struct pmc_process *pp) /* * Removes the association between a target process and a PMC. */ - static void pmc_unlink_target_process(struct pmc *pm, struct pmc_process *pp) { @@ -1001,13 +991,13 @@ pmc_unlink_target_process(struct pmc *pm, struct pmc_process *pp) ri, pm, pp->pp_pmcs[ri].pp_pmc)); pp->pp_pmcs[ri].pp_pmc = NULL; - pp->pp_pmcs[ri].pp_pmcval = (pmc_value_t) 0; + pp->pp_pmcs[ri].pp_pmcval = (pmc_value_t)0; /* Clear the per-thread values at this row index. */ if (PMC_TO_MODE(pm) == PMC_MODE_TS) { mtx_lock_spin(pp->pp_tdslock); LIST_FOREACH(pt, &pp->pp_tds, pt_next) - pt->pt_pmcs[ri].pt_pmcval = (pmc_value_t) 0; + pt->pt_pmcs[ri].pt_pmcval = (pmc_value_t)0; mtx_unlock_spin(pp->pp_tdslock); } @@ -1037,8 +1027,7 @@ pmc_unlink_target_process(struct pmc *pm, struct pmc_process *pp) kern_psignal(p, SIGIO); PROC_UNLOCK(p); - PMCDBG2(PRC,SIG,2, "signalling proc=%p signal=%d", p, - SIGIO); + PMCDBG2(PRC,SIG,2, "signalling proc=%p signal=%d", p, SIGIO); } } @@ -1100,7 +1089,6 @@ pmc_can_attach(struct pmc *pm, struct proc *t) /* * Attach a process to a PMC. 
*/ - static int pmc_attach_one_process(struct proc *p, struct pmc *pm) { @@ -1168,7 +1156,7 @@ pmc_attach_one_process(struct proc *p, struct pmc *pm) } return (0); - fail: +fail: PROC_LOCK(p); p->p_flag &= ~P_HWPMC; PROC_UNLOCK(p); @@ -1178,7 +1166,6 @@ pmc_attach_one_process(struct proc *p, struct pmc *pm) /* * Attach a process and optionally its children */ - static int pmc_attach_process(struct proc *p, struct pmc *pm) { @@ -1190,12 +1177,10 @@ pmc_attach_process(struct proc *p, struct pmc *pm) PMCDBG5(PRC,ATT,1, "attach pm=%p ri=%d proc=%p (%d, %s)", pm, PMC_TO_ROWINDEX(pm), p, p->p_pid, p->p_comm); - /* * If this PMC successfully allowed a GETMSR operation * in the past, disallow further ATTACHes. */ - if ((pm->pm_flags & PMC_PP_ENABLE_MSR_ACCESS) != 0) return (EPERM); @@ -1206,11 +1191,9 @@ pmc_attach_process(struct proc *p, struct pmc *pm) * Traverse all child processes, attaching them to * this PMC. */ - sx_slock(&proctree_lock); top = p; - for (;;) { if ((error = pmc_attach_one_process(p, pm)) != 0) break; @@ -1228,9 +1211,9 @@ pmc_attach_process(struct proc *p, struct pmc *pm) } if (error) - (void) pmc_detach_process(top, pm); + (void)pmc_detach_process(top, pm); - done: +done: sx_sunlock(&proctree_lock); return (error); } @@ -1240,7 +1223,6 @@ pmc_attach_process(struct proc *p, struct pmc *pm) * this process, remove the process structure from its hash table. If * 'flags' contains PMC_FLAG_REMOVE, then free the process structure. */ - static int pmc_detach_one_process(struct proc *p, struct pmc *pm, int flags) { @@ -1296,7 +1278,6 @@ pmc_detach_one_process(struct proc *p, struct pmc *pm, int flags) /* * Detach a process and optionally its descendants from a PMC. */ - static int pmc_detach_process(struct proc *p, struct pmc *pm) { @@ -1315,13 +1296,11 @@ pmc_detach_process(struct proc *p, struct pmc *pm) * ignore errors since we could be detaching a PMC from a * partially attached proc tree. */ - sx_slock(&proctree_lock); top = p; - for (;;) { - (void) pmc_detach_one_process(p, pm, PMC_FLAG_REMOVE); + (void)pmc_detach_one_process(p, pm, PMC_FLAG_REMOVE); if (!LIST_EMPTY(&p->p_children)) p = LIST_FIRST(&p->p_children); @@ -1335,21 +1314,17 @@ pmc_detach_process(struct proc *p, struct pmc *pm) p = p->p_pptr; } } - - done: +done: sx_sunlock(&proctree_lock); - if (LIST_EMPTY(&pm->pm_targets)) pm->pm_flags &= ~PMC_F_ATTACH_DONE; return (0); } - /* * Thread context switch IN */ - static void pmc_process_csw_in(struct thread *td) { @@ -1383,25 +1358,21 @@ pmc_process_csw_in(struct thread *td) ("[pmc,%d] weird CPU id %d", __LINE__, cpu)); pc = pmc_pcpu[cpu]; - for (ri = 0; ri < md->pmd_npmc; ri++) { - if ((pm = pp->pp_pmcs[ri].pp_pmc) == NULL) continue; KASSERT(PMC_IS_VIRTUAL_MODE(PMC_TO_MODE(pm)), ("[pmc,%d] Target PMC in non-virtual mode (%d)", - __LINE__, PMC_TO_MODE(pm))); - + __LINE__, PMC_TO_MODE(pm))); KASSERT(PMC_TO_ROWINDEX(pm) == ri, ("[pmc,%d] Row index mismatch pmc %d != ri %d", - __LINE__, PMC_TO_ROWINDEX(pm), ri)); + __LINE__, PMC_TO_ROWINDEX(pm), ri)); /* * Only PMCs that are marked as 'RUNNING' need * be placed on hardware. */ - if (pm->pm_state != PMC_STATE_RUNNING) continue; @@ -1446,7 +1417,7 @@ pmc_process_csw_in(struct thread *td) /* * If we have a thread descriptor, use the per-thread * counter in the descriptor. If not, we will use - * a per-process counter. + * a per-process counter. 
* * TODO: Remove the per-process "safety net" once * we have thoroughly tested that we don't hit the @@ -1465,7 +1436,6 @@ pmc_process_csw_in(struct thread *td) * another thread from this process switches in * before any threads switch out. */ - newvalue = pp->pp_pmcs[ri].pp_pmcval; pp->pp_pmcs[ri].pp_pmcval = pm->pm_sc.pm_reloadcount; @@ -1505,17 +1475,14 @@ pmc_process_csw_in(struct thread *td) * perform any other architecture/cpu dependent thread * switch-in actions. */ - - (void) (*md->pmd_switch_in)(pc, pp); + (void)(*md->pmd_switch_in)(pc, pp); critical_exit(); - } /* * Thread context switch OUT. */ - static void pmc_process_csw_out(struct thread *td) { @@ -1545,14 +1512,9 @@ pmc_process_csw_out(struct thread *td) * found we still need to deconfigure any PMCs that * are currently running on hardware. */ - p = td->td_proc; pp = pmc_find_process_descriptor(p, PMC_FLAG_NONE); - /* - * save PMCs - */ - critical_enter(); cpu = PCPU_GET(cpuid); /* td->td_oncpu is invalid */ @@ -1575,12 +1537,10 @@ pmc_process_csw_out(struct thread *td) * the hardware to determine if a PMC is scheduled on * it. */ - for (ri = 0; ri < md->pmd_npmc; ri++) { - pcd = pmc_ri_to_classdep(md, ri, &adjri); pm = NULL; - (void) (*pcd->pcd_get_config)(cpu, adjri, &pm); + (void)(*pcd->pcd_get_config)(cpu, adjri, &pm); if (pm == NULL) /* nothing at this row index */ continue; @@ -1614,13 +1574,11 @@ pmc_process_csw_out(struct thread *td) * If this PMC is associated with this process, * save the reading. */ - if (pm->pm_state != PMC_STATE_DELETED && pp != NULL && pp->pp_pmcs[ri].pp_pmc != NULL) { KASSERT(pm == pp->pp_pmcs[ri].pp_pmc, ("[pmc,%d] pm %p != pp_pmcs[%d] %p", __LINE__, pm, ri, pp->pp_pmcs[ri].pp_pmc)); - KASSERT(pp->pp_refcnt > 0, ("[pmc,%d] pp refcnt = %d", __LINE__, pp->pp_refcnt)); @@ -1673,7 +1631,7 @@ pmc_process_csw_out(struct thread *td) } mtx_pool_unlock_spin(pmc_mtxpool, pm); } else { - tmp = newvalue - PMC_PCPU_SAVED(cpu,ri); + tmp = newvalue - PMC_PCPU_SAVED(cpu, ri); PMCDBG3(CSW,SWO,1,"cpu=%d ri=%d tmp=%jd (count)", cpu, ri, tmp); @@ -1688,7 +1646,7 @@ pmc_process_csw_out(struct thread *td) ("[pmc,%d] negative increment cpu=%d " "ri=%d newvalue=%jx saved=%jx " "incr=%jx", __LINE__, cpu, ri, - newvalue, PMC_PCPU_SAVED(cpu,ri), tmp)); + newvalue, PMC_PCPU_SAVED(cpu, ri), tmp)); mtx_pool_lock_spin(pmc_mtxpool, pm); pm->pm_gv.pm_savedvalue += tmp; @@ -1708,8 +1666,7 @@ pmc_process_csw_out(struct thread *td) * perform any other architecture/cpu dependent thread * switch out functions. */ - - (void) (*md->pmd_switch_out)(pc, pp); + (void)(*md->pmd_switch_out)(pc, pp); critical_exit(); } @@ -1755,7 +1712,6 @@ pmc_process_thread_userret(struct thread *td) /* * A mapping change for a process. */ - static void pmc_process_mmap(struct thread *td, struct pmckern_map_in *pkm) { @@ -1768,7 +1724,7 @@ pmc_process_mmap(struct thread *td, struct pmckern_map_in *pkm) freepath = fullpath = NULL; MPASS(!in_epoch(global_epoch_preempt)); - pmc_getfilename((struct vnode *) pkm->pm_file, &fullpath, &freepath); + pmc_getfilename((struct vnode *)pkm->pm_file, &fullpath, &freepath); pid = td->td_proc->p_pid; @@ -1790,17 +1746,15 @@ pmc_process_mmap(struct thread *td, struct pmckern_map_in *pkm) pmclog_process_map_in(pm->pm_owner, pid, pkm->pm_address, fullpath); - done: +done: if (freepath) free(freepath, M_TEMP); PMC_EPOCH_EXIT(); } - /* * Log an munmap request. 
*/ - static void pmc_process_munmap(struct thread *td, struct pmckern_map_out *pkm) { @@ -1832,7 +1786,6 @@ pmc_process_munmap(struct thread *td, struct pmckern_map_out *pkm) /* * Log mapping information about the kernel. */ - static void pmc_log_kernel_mappings(struct pmc *pm) { @@ -1848,16 +1801,18 @@ pmc_log_kernel_mappings(struct pmc *pm) if (po->po_flags & PMC_PO_INITIAL_MAPPINGS_DONE) return; + if (PMC_TO_MODE(pm) == PMC_MODE_SS) pmc_process_allproc(pm); + /* * Log the current set of kernel modules. */ kmbase = linker_hwpmc_list_objects(); for (km = kmbase; km->pm_file != NULL; km++) { - PMCDBG2(LOG,REG,1,"%s %p", (char *) km->pm_file, - (void *) km->pm_address); - pmclog_process_map_in(po, (pid_t) -1, km->pm_address, + PMCDBG2(LOG,REG,1,"%s %p", (char *)km->pm_file, + (void *)km->pm_address); + pmclog_process_map_in(po, (pid_t)-1, km->pm_address, km->pm_file); } free(kmbase, M_LINKER); @@ -1868,7 +1823,6 @@ pmc_log_kernel_mappings(struct pmc *pm) /* * Log the mappings for a single process. */ - static void pmc_log_process_mappings(struct pmc_owner *po, struct proc *p) { @@ -1884,7 +1838,7 @@ pmc_log_process_mappings(struct pmc_owner *po, struct proc *p) char *fullpath, *freepath; last_vp = NULL; - last_end = (vm_offset_t) 0; + last_end = (vm_offset_t)0; fullpath = freepath = NULL; if ((vm = vmspace_acquire_ref(p)) == NULL) @@ -1892,9 +1846,7 @@ pmc_log_process_mappings(struct pmc_owner *po, struct proc *p) map = &vm->vm_map; vm_map_lock_read(map); - VM_MAP_ENTRY_FOREACH(entry, map) { - if (entry == NULL) { PMCDBG2(LOG,OPS,2, "hwpmc: vm_map entry unexpectedly " "NULL! pid=%d vm_map=%p\n", p->p_pid, map); @@ -1929,7 +1881,8 @@ pmc_log_process_mappings(struct pmc_owner *po, struct proc *p) * At this point lobj is the base vm_object and it is locked. */ if (lobj == NULL) { - PMCDBG3(LOG,OPS,2, "hwpmc: lobj unexpectedly NULL! pid=%d " + PMCDBG3(LOG,OPS,2, + "hwpmc: lobj unexpectedly NULL! pid=%d " "vm_map=%p vm_obj=%p\n", p->p_pid, map, obj); VM_OBJECT_RUNLOCK(obj); continue; @@ -1974,7 +1927,6 @@ pmc_log_process_mappings(struct pmc_owner *po, struct proc *p) vref(vp); if (lobj != obj) VM_OBJECT_RUNLOCK(lobj); - VM_OBJECT_RUNLOCK(obj); freepath = NULL; @@ -1998,7 +1950,7 @@ pmc_log_process_mappings(struct pmc_owner *po, struct proc *p) * for this address range, vm_map_lookup_entry() will * return the previous one, so we always want to go to * the next entry on the next loop iteration. - * + * * There is an edge condition here that can occur if * there is no entry at or before this address. In * this situation, vm_map_lookup_entry returns @@ -2024,7 +1976,6 @@ pmc_log_process_mappings(struct pmc_owner *po, struct proc *p) /* * Log mappings for all processes in the system. */ - static void pmc_log_all_process_mappings(struct pmc_owner *po) { @@ -2040,7 +1991,6 @@ pmc_log_all_process_mappings(struct pmc_owner *po) sx_slock(&proctree_lock); top = p; - for (;;) { pmc_log_process_mappings(po, p); if (!LIST_EMPTY(&p->p_children)) @@ -2055,7 +2005,7 @@ pmc_log_all_process_mappings(struct pmc_owner *po) p = p->p_pptr; } } - done: +done: sx_sunlock(&proctree_lock); } @@ -2102,7 +2052,6 @@ pmc_hook_handler(struct thread *td, int function, void *arg) /* * Process exec() */ - case PMC_FN_PROCESS_EXEC: { char *fullpath, *freepath; @@ -2190,7 +2139,6 @@ pmc_hook_handler(struct thread *td, int function, void *arg) * than before, allow it to be the target of a PMC only if * the PMC's owner has sufficient privilege. 
*/ - for (ri = 0; ri < md->pmd_npmc; ri++) if ((pm = pp->pp_pmcs[ri].pp_pmc) != NULL) if (pmc_can_attach(pm, td->td_proc) != 0) @@ -2206,13 +2154,11 @@ pmc_hook_handler(struct thread *td, int function, void *arg) * PMCs, we can remove the process entry and free * up space. */ - if (pp->pp_refcnt == 0) { pmc_remove_process_descriptor(pp); pmc_destroy_process_descriptor(pp); break; } - } break; @@ -2234,7 +2180,6 @@ pmc_hook_handler(struct thread *td, int function, void *arg) * are being processed. */ case PMC_FN_DO_SAMPLES: - /* * Clear the cpu specific bit in the CPU mask before * do the rest of the processing. If the NMI handler @@ -2254,12 +2199,12 @@ pmc_hook_handler(struct thread *td, int function, void *arg) break; case PMC_FN_MMAP: - pmc_process_mmap(td, (struct pmckern_map_in *) arg); + pmc_process_mmap(td, (struct pmckern_map_in *)arg); break; case PMC_FN_MUNMAP: MPASS(in_epoch(global_epoch_preempt) || sx_xlocked(&pmc_sx)); - pmc_process_munmap(td, (struct pmckern_map_out *) arg); + pmc_process_munmap(td, (struct pmckern_map_out *)arg); break; case PMC_FN_PROC_CREATE_LOG: @@ -2274,10 +2219,10 @@ pmc_hook_handler(struct thread *td, int function, void *arg) __LINE__)); pmc_capture_user_callchain(PCPU_GET(cpuid), PMC_HR, - (struct trapframe *) arg); + (struct trapframe *)arg); KASSERT(td->td_pinned == 1, - ("[pmc,%d] invalid td_pinned value", __LINE__)); + ("[pmc,%d] invalid td_pinned value", __LINE__)); sched_unpin(); /* Can migrate safely now. */ td->td_pflags &= ~TDP_CALLCHAIN; @@ -2306,7 +2251,7 @@ pmc_hook_handler(struct thread *td, int function, void *arg) /* * Call soft PMC sampling intr. */ - pmc_soft_intr((struct pmckern_soft *) arg); + pmc_soft_intr((struct pmckern_soft *)arg); break; case PMC_FN_THR_CREATE: @@ -2332,13 +2277,11 @@ pmc_hook_handler(struct thread *td, int function, void *arg) __LINE__)); pmc_process_thread_userret(td); break; - default: -#ifdef HWPMC_DEBUG +#ifdef HWPMC_DEBUG KASSERT(0, ("[pmc,%d] unknown hook %d\n", __LINE__, function)); #endif break; - } return (0); @@ -2347,7 +2290,6 @@ pmc_hook_handler(struct thread *td, int function, void *arg) /* * allocate a 'struct pmc_owner' descriptor in the owner hash table. */ - static struct pmc_owner * pmc_allocate_owner_descriptor(struct proc *p) { @@ -2359,7 +2301,7 @@ pmc_allocate_owner_descriptor(struct proc *p) poh = &pmc_ownerhash[hindex]; /* allocate space for N pointers and one descriptor struct */ - po = malloc(sizeof(struct pmc_owner), M_PMC, M_WAITOK|M_ZERO); *** 936 LINES SKIPPED ***
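
As a quick illustration of the style(9) conventions the diff applies throughout -- spaces around binary operators, no space after casts, goto labels starting at the left margin, and lines kept within 80 columns -- here is a small, self-contained hypothetical example. The helper alloc_row_disp() is invented for illustration only and is not code from hwpmc_mod.c or any part of this commit.

#include <stdio.h>
#include <stdlib.h>

/*
 * Hypothetical helper written in the "after" style: spaces around the
 * binary operators '*' and '+', no space after the (size_t) cast, and
 * the error label placed flush against the margin.
 */
static int *
alloc_row_disp(unsigned int npmc, unsigned int ncpu)
{
	int *disp;

	disp = calloc((size_t)npmc * ncpu + 1, sizeof(*disp));
	if (disp == NULL)
		goto fail;
	return (disp);
fail:
	return (NULL);
}

int
main(void)
{
	int *disp = alloc_row_disp(4, 8);

	printf("%s\n", disp != NULL ? "allocated" : "failed");
	free(disp);
	return (0);
}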