From owner-svn-src-stable-8@FreeBSD.ORG Tue Nov 9 20:00:23 2010
Delivered-To: svn-src-stable-8@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 7A1941065679;
	Tue, 9 Nov 2010 20:00:23 +0000 (UTC) (envelope-from jhb@FreeBSD.org)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:4f8:fff6::2c])
	by mx1.freebsd.org (Postfix) with ESMTP id 640218FC25;
	Tue, 9 Nov 2010 20:00:23 +0000 (UTC)
Received: from svn.freebsd.org (localhost [127.0.0.1])
	by svn.freebsd.org (8.14.3/8.14.3) with ESMTP id oA9K0NSA040407;
	Tue, 9 Nov 2010 20:00:23 GMT (envelope-from jhb@svn.freebsd.org)
Received: (from jhb@localhost)
	by svn.freebsd.org (8.14.3/8.14.3/Submit) id oA9K0NA8040391;
	Tue, 9 Nov 2010 20:00:23 GMT (envelope-from jhb@svn.freebsd.org)
Message-Id: <201011092000.oA9K0NA8040391@svn.freebsd.org>
From: John Baldwin
Date: Tue, 9 Nov 2010 20:00:23 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-8@freebsd.org
X-SVN-Group: stable-8
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc:
Subject: svn commit: r215050 - in stable/8/sys: amd64/amd64 amd64/include
	i386/i386 i386/include i386/xen ia64/ia64 ia64/include kern
	powerpc/include powerpc/powerpc sparc64/include sun4v/include sun4v/sun4v
X-BeenThere: svn-src-stable-8@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: SVN commit messages for only the 8-stable src tree
X-List-Received-Date: Tue, 09 Nov 2010 20:00:23 -0000

Author: jhb
Date: Tue Nov 9 20:00:23 2010
New Revision: 215050
URL: http://svn.freebsd.org/changeset/base/215050

Log:
  MFC 210939:
  Add a new ipi_cpu() function to the MI IPI API that can be used to send
  an IPI to a specific CPU by its cpuid.  Replace calls to ipi_selected()
  that constructed a mask for a single CPU with calls to ipi_cpu() instead.

Modified:
  stable/8/sys/amd64/amd64/mp_machdep.c
  stable/8/sys/amd64/include/smp.h
  stable/8/sys/i386/i386/mp_machdep.c
  stable/8/sys/i386/include/smp.h
  stable/8/sys/i386/xen/mp_machdep.c
  stable/8/sys/ia64/ia64/mp_machdep.c
  stable/8/sys/ia64/include/smp.h
  stable/8/sys/kern/sched_4bsd.c
  stable/8/sys/kern/sched_ule.c
  stable/8/sys/kern/subr_smp.c
  stable/8/sys/powerpc/include/smp.h
  stable/8/sys/powerpc/powerpc/mp_machdep.c
  stable/8/sys/sparc64/include/smp.h
  stable/8/sys/sun4v/include/smp.h
  stable/8/sys/sun4v/sun4v/mp_machdep.c
Directory Properties:
  stable/8/sys/   (props changed)
  stable/8/sys/amd64/include/xen/   (props changed)
  stable/8/sys/cddl/contrib/opensolaris/   (props changed)
  stable/8/sys/contrib/dev/acpica/   (props changed)
  stable/8/sys/contrib/pf/   (props changed)
  stable/8/sys/dev/xen/xenpci/   (props changed)

Modified: stable/8/sys/amd64/amd64/mp_machdep.c
==============================================================================
--- stable/8/sys/amd64/amd64/mp_machdep.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/amd64/amd64/mp_machdep.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -1201,15 +1201,51 @@ ipi_selected(cpumask_t cpus, u_int ipi)
 			do {
 				old_pending = cpu_ipi_pending[cpu];
 				new_pending = old_pending | bitmap;
-			} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],old_pending, new_pending));
-
+			} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],
+			    old_pending, new_pending));
 			if (old_pending)
 				continue;
 		}
-
 		lapic_ipi_vectored(ipi, cpu_apic_ids[cpu]);
 	}
+}
+
+/*
+ * send an IPI to a specific CPU.
+ */
+void
+ipi_cpu(int cpu, u_int ipi)
+{
+	u_int bitmap = 0;
+	u_int old_pending;
+	u_int new_pending;
+
+	if (IPI_IS_BITMAPED(ipi)) {
+		bitmap = 1 << ipi;
+		ipi = IPI_BITMAP_VECTOR;
+	}
+	/*
+	 * IPI_STOP_HARD maps to a NMI and the trap handler needs a bit
+	 * of help in order to understand what is the source.
+	 * Set the mask of receiving CPUs for this purpose.
+	 */
+	if (ipi == IPI_STOP_HARD)
+		atomic_set_int(&ipi_nmi_pending, 1 << cpu);
+
+	CTR3(KTR_SMP, "%s: cpu: %d ipi: %x", __func__, cpu, ipi);
+	KASSERT(cpu_apic_ids[cpu] != -1, ("IPI to non-existent CPU %d", cpu));
+
+	if (bitmap) {
+		do {
+			old_pending = cpu_ipi_pending[cpu];
+			new_pending = old_pending | bitmap;
+		} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],
+		    old_pending, new_pending));
+		if (old_pending)
+			return;
+	}
+	lapic_ipi_vectored(ipi, cpu_apic_ids[cpu]);
 }
 
 /*

Modified: stable/8/sys/amd64/include/smp.h
==============================================================================
--- stable/8/sys/amd64/include/smp.h	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/amd64/include/smp.h	Tue Nov 9 20:00:23 2010	(r215050)
@@ -52,10 +52,11 @@ void	cpu_add(u_int apic_id, char boot_cp
 void	cpustop_handler(void);
 void	cpususpend_handler(void);
 void	init_secondary(void);
-int	ipi_nmi_handler(void);
-void	ipi_selected(cpumask_t cpus, u_int ipi);
 void	ipi_all_but_self(u_int ipi);
 void	ipi_bitmap_handler(struct trapframe frame);
+void	ipi_cpu(int cpu, u_int ipi);
+int	ipi_nmi_handler(void);
+void	ipi_selected(cpumask_t cpus, u_int ipi);
 u_int	mp_bootaddress(u_int);
 int	mp_grab_cpu_hlt(void);
 void	smp_cache_flush(void);

Modified: stable/8/sys/i386/i386/mp_machdep.c
==============================================================================
--- stable/8/sys/i386/i386/mp_machdep.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/i386/i386/mp_machdep.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -1362,15 +1362,51 @@ ipi_selected(cpumask_t cpus, u_int ipi)
 			do {
 				old_pending = cpu_ipi_pending[cpu];
 				new_pending = old_pending | bitmap;
-			} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],old_pending, new_pending));
-
+			} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],
+			    old_pending, new_pending));
 			if (old_pending)
 				continue;
 		}
-
 		lapic_ipi_vectored(ipi, cpu_apic_ids[cpu]);
 	}
+}
+
+/*
+ * send an IPI to a specific CPU.
+ */
+void
+ipi_cpu(int cpu, u_int ipi)
+{
+	u_int bitmap = 0;
+	u_int old_pending;
+	u_int new_pending;
+
+	if (IPI_IS_BITMAPED(ipi)) {
+		bitmap = 1 << ipi;
+		ipi = IPI_BITMAP_VECTOR;
+	}
+	/*
+	 * IPI_STOP_HARD maps to a NMI and the trap handler needs a bit
+	 * of help in order to understand what is the source.
+	 * Set the mask of receiving CPUs for this purpose.
+	 */
+	if (ipi == IPI_STOP_HARD)
+		atomic_set_int(&ipi_nmi_pending, 1 << cpu);
+
+	CTR3(KTR_SMP, "%s: cpu: %d ipi: %x", __func__, cpu, ipi);
+	KASSERT(cpu_apic_ids[cpu] != -1, ("IPI to non-existent CPU %d", cpu));
+
+	if (bitmap) {
+		do {
+			old_pending = cpu_ipi_pending[cpu];
+			new_pending = old_pending | bitmap;
+		} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],
+		    old_pending, new_pending));
+		if (old_pending)
+			return;
+	}
+	lapic_ipi_vectored(ipi, cpu_apic_ids[cpu]);
 }
 
 /*

Modified: stable/8/sys/i386/include/smp.h
==============================================================================
--- stable/8/sys/i386/include/smp.h	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/i386/include/smp.h	Tue Nov 9 20:00:23 2010	(r215050)
@@ -60,12 +60,13 @@ inthand_t
 void	cpu_add(u_int apic_id, char boot_cpu);
 void	cpustop_handler(void);
 void	init_secondary(void);
-int	ipi_nmi_handler(void);
-void	ipi_selected(cpumask_t cpus, u_int ipi);
 void	ipi_all_but_self(u_int ipi);
 #ifndef XEN
 void	ipi_bitmap_handler(struct trapframe frame);
 #endif
+void	ipi_cpu(int cpu, u_int ipi);
+int	ipi_nmi_handler(void);
+void	ipi_selected(cpumask_t cpus, u_int ipi);
 u_int	mp_bootaddress(u_int);
 int	mp_grab_cpu_hlt(void);
 void	smp_cache_flush(void);

Modified: stable/8/sys/i386/xen/mp_machdep.c
==============================================================================
--- stable/8/sys/i386/xen/mp_machdep.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/i386/xen/mp_machdep.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -1124,19 +1124,14 @@ ipi_selected(cpumask_t cpus, u_int ipi)
 		cpu--;
 		cpus &= ~(1 << cpu);
 
-		KASSERT(cpu_apic_ids[cpu] != -1,
-		    ("IPI to non-existent CPU %d", cpu));
-
 		if (bitmap) {
 			do {
 				old_pending = cpu_ipi_pending[cpu];
 				new_pending = old_pending | bitmap;
-			} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],old_pending, new_pending));
-
+			} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],
+			    old_pending, new_pending));
 			if (!old_pending)
 				ipi_pcpu(cpu, RESCHEDULE_VECTOR);
-			continue;
-
 		} else {
 			KASSERT(call_data != NULL, ("call_data not set"));
 			ipi_pcpu(cpu, CALL_FUNCTION_VECTOR);
@@ -1145,6 +1140,45 @@ ipi_selected(cpumask_t cpus, u_int ipi)
 }
 
 /*
+ * send an IPI to a specific CPU.
+ */
+void
+ipi_cpu(int cpu, u_int ipi)
+{
+	u_int bitmap = 0;
+	u_int old_pending;
+	u_int new_pending;
+
+	if (IPI_IS_BITMAPED(ipi)) {
+		bitmap = 1 << ipi;
+		ipi = IPI_BITMAP_VECTOR;
+	}
+
+	/*
+	 * IPI_STOP_HARD maps to a NMI and the trap handler needs a bit
+	 * of help in order to understand what is the source.
+	 * Set the mask of receiving CPUs for this purpose.
+	 */
+	if (ipi == IPI_STOP_HARD)
+		atomic_set_int(&ipi_nmi_pending, 1 << cpu);
+
+	CTR3(KTR_SMP, "%s: cpu: %d ipi: %x", __func__, cpu, ipi);
+
+	if (bitmap) {
+		do {
+			old_pending = cpu_ipi_pending[cpu];
+			new_pending = old_pending | bitmap;
+		} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],
+		    old_pending, new_pending));
+		if (!old_pending)
+			ipi_pcpu(cpu, RESCHEDULE_VECTOR);
+	} else {
+		KASSERT(call_data != NULL, ("call_data not set"));
+		ipi_pcpu(cpu, CALL_FUNCTION_VECTOR);
+	}
+}
+
+/*
  * send an IPI to all CPUs EXCEPT myself
  */
 void

Modified: stable/8/sys/ia64/ia64/mp_machdep.c
==============================================================================
--- stable/8/sys/ia64/ia64/mp_machdep.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/ia64/ia64/mp_machdep.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -405,6 +405,16 @@ ipi_selected(cpumask_t cpus, int ipi)
 }
 
 /*
+ * send an IPI to a specific CPU.
+ */
+void
+ipi_cpu(int cpu, u_int ipi)
+{
+
+	ipi_send(cpuid_to_pcpu[cpu], ipi);
+}
+
+/*
  * send an IPI to all CPUs EXCEPT myself.
  */
 void

Modified: stable/8/sys/ia64/include/smp.h
==============================================================================
--- stable/8/sys/ia64/include/smp.h	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/ia64/include/smp.h	Tue Nov 9 20:00:23 2010	(r215050)
@@ -25,6 +25,7 @@ extern int ia64_ipi_stop;
 extern int ia64_ipi_wakeup;
 
 void	ipi_all_but_self(int ipi);
+void	ipi_cpu(int cpu, u_int ipi);
 void	ipi_selected(cpumask_t cpus, int ipi);
 void	ipi_send(struct pcpu *, int ipi);

Modified: stable/8/sys/kern/sched_4bsd.c
==============================================================================
--- stable/8/sys/kern/sched_4bsd.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/kern/sched_4bsd.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -1154,7 +1154,7 @@ kick_other_cpu(int pri, int cpuid)
 	pcpu = pcpu_find(cpuid);
 	if (idle_cpus_mask & pcpu->pc_cpumask) {
 		forward_wakeups_delivered++;
-		ipi_selected(pcpu->pc_cpumask, IPI_AST);
+		ipi_cpu(cpuid, IPI_AST);
 		return;
 	}
 
@@ -1167,13 +1167,13 @@ kick_other_cpu(int pri, int cpuid)
 	if (pri <= PRI_MAX_ITHD)
 #endif /* ! FULL_PREEMPTION */
 	{
-		ipi_selected(pcpu->pc_cpumask, IPI_PREEMPT);
+		ipi_cpu(cpuid, IPI_PREEMPT);
 		return;
 	}
 #endif /* defined(IPI_PREEMPTION) && defined(PREEMPTION) */
 
 	pcpu->pc_curthread->td_flags |= TDF_NEEDRESCHED;
-	ipi_selected(pcpu->pc_cpumask, IPI_AST);
+	ipi_cpu(cpuid, IPI_AST);
 	return;
 }
 #endif /* SMP */
@@ -1670,7 +1670,7 @@ sched_affinity(struct thread *td)
 		td->td_flags |= TDF_NEEDRESCHED;
 		if (td != curthread)
-			ipi_selected(1 << cpu, IPI_AST);
+			ipi_cpu(cpu, IPI_AST);
 		break;
 	default:
 		break;

Modified: stable/8/sys/kern/sched_ule.c
==============================================================================
--- stable/8/sys/kern/sched_ule.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/kern/sched_ule.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -851,7 +851,7 @@ sched_balance_pair(struct tdq *high, str
 		 * IPI the target cpu to force it to reschedule with the new
 		 * workload.
 		 */
-		ipi_selected(1 << TDQ_ID(low), IPI_PREEMPT);
+		ipi_cpu(TDQ_ID(low), IPI_PREEMPT);
 	}
 	tdq_unlock_pair(high, low);
 	return (moved);
@@ -974,7 +974,7 @@ tdq_notify(struct tdq *tdq, struct threa
 		return;
 	}
 	tdq->tdq_ipipending = 1;
-	ipi_selected(1 << cpu, IPI_PREEMPT);
+	ipi_cpu(cpu, IPI_PREEMPT);
 }
 
 /*
@@ -2416,7 +2416,7 @@ sched_affinity(struct thread *td)
 	 */
 	td->td_flags |= TDF_NEEDRESCHED;
 	if (td != curthread)
-		ipi_selected(1 << ts->ts_cpu, IPI_PREEMPT);
+		ipi_cpu(ts->ts_cpu, IPI_PREEMPT);
 #endif
 }

Modified: stable/8/sys/kern/subr_smp.c
==============================================================================
--- stable/8/sys/kern/subr_smp.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/kern/subr_smp.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -181,7 +181,7 @@ forward_signal(struct thread *td)
 	id = td->td_oncpu;
 	if (id == NOCPU)
 		return;
-	ipi_selected(1 << id, IPI_AST);
+	ipi_cpu(id, IPI_AST);
 }
 
 /*

Modified: stable/8/sys/powerpc/include/smp.h
==============================================================================
--- stable/8/sys/powerpc/include/smp.h	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/powerpc/include/smp.h	Tue Nov 9 20:00:23 2010	(r215050)
@@ -40,6 +40,7 @@
 #ifndef LOCORE
 
 void	ipi_all_but_self(int ipi);
+void	ipi_cpu(int cpu, u_int ipi);
 void	ipi_selected(cpumask_t cpus, int ipi);
 
 struct cpuref {

Modified: stable/8/sys/powerpc/powerpc/mp_machdep.c
==============================================================================
--- stable/8/sys/powerpc/powerpc/mp_machdep.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/powerpc/powerpc/mp_machdep.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -336,6 +336,14 @@ ipi_selected(cpumask_t cpus, int ipi)
 	}
 }
 
+/* Send an IPI to a specific CPU. */
+void
+ipi_cpu(int cpu, u_int ipi)
+{
+
+	ipi_send(cpuid_to_pcpu[cpu], ipi);
+}
+
 /* Send an IPI to all CPUs EXCEPT myself. */
 void
 ipi_all_but_self(int ipi)

Modified: stable/8/sys/sparc64/include/smp.h
==============================================================================
--- stable/8/sys/sparc64/include/smp.h	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/sparc64/include/smp.h	Tue Nov 9 20:00:23 2010	(r215050)
@@ -145,6 +145,13 @@ ipi_selected(u_int cpus, u_int ipi)
 	cpu_ipi_selected(cpus, 0, (u_long)tl_ipi_level, ipi);
 }
 
+static __inline void
+ipi_cpu(int cpu, u_int ipi)
+{
+
+	cpu_ipi_single(cpu, 0, (u_long)tl_ipi_level, ipi);
+}
+
 #if defined(_MACHINE_PMAP_H_) && defined(_SYS_MUTEX_H_)
 
 static __inline void *

Modified: stable/8/sys/sun4v/include/smp.h
==============================================================================
--- stable/8/sys/sun4v/include/smp.h	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/sun4v/include/smp.h	Tue Nov 9 20:00:23 2010	(r215050)
@@ -82,8 +82,9 @@ void	cpu_ipi_ast(struct trapframe *tf);
 void	cpu_ipi_stop(struct trapframe *tf);
 void	cpu_ipi_preempt(struct trapframe *tf);
 
-void	ipi_selected(u_int cpus, u_int ipi);
 void	ipi_all_but_self(u_int ipi);
+void	ipi_cpu(int cpu, u_int ipi);
+void	ipi_selected(u_int cpus, u_int ipi);
 
 vm_offset_t mp_tramp_alloc(void);
 void	mp_set_tsb_desc_ra(vm_paddr_t);

Modified: stable/8/sys/sun4v/sun4v/mp_machdep.c
==============================================================================
--- stable/8/sys/sun4v/sun4v/mp_machdep.c	Tue Nov 9 19:45:29 2010	(r215049)
+++ stable/8/sys/sun4v/sun4v/mp_machdep.c	Tue Nov 9 20:00:23 2010	(r215050)
@@ -518,7 +518,6 @@ retry:
 	}
 }
 
-
 void
 ipi_selected(u_int icpus, u_int ipi)
 {
@@ -533,7 +532,6 @@ ipi_selected(u_int icpus, u_int ipi)
 	 *  4) handling 4-way threading vs 2-way threading should happen here
 	 *     and not in forward wakeup
 	 */
-
 	cpulist = PCPU_GET(cpulist);
 	cpus = (icpus & ~PCPU_GET(cpumask));
 
@@ -545,8 +543,32 @@ ipi_selected(u_int icpus, u_int ipi)
 		cpu_count++;
 	}
 
-	cpu_ipi_selected(cpu_count, cpulist, (u_long)tl_ipi_level, ipi, 0, &ackmask);
-
+	cpu_ipi_selected(cpu_count, cpulist, (u_long)tl_ipi_level, ipi, 0,
+	    &ackmask);
+}
+
+void
+ipi_cpu(int cpu, u_int ipi)
+{
+	int cpu_count;
+	uint16_t *cpulist;
+	uint64_t ackmask;
+
+	/*
+	 *
+	 *  3) forward_wakeup appears to abuse ASTs
+	 *  4) handling 4-way threading vs 2-way threading should happen here
+	 *     and not in forward wakeup
+	 */
+	cpulist = PCPU_GET(cpulist);
+	if (PCPU_GET(cpumask) & (1 << cpu))
+		cpu_count = 0;
+	else {
+		cpulist[0] = (uint16_t)cpu;
+		cpu_count = 1;
+	}
+	cpu_ipi_selected(cpu_count, cpulist, (u_long)tl_ipi_level, ipi, 0,
+	    &ackmask);
 }
 
 void
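
A short usage note for readers tracking the API change: the sketch below is a
minimal, self-contained model (plain userland C, not kernel code) of the
calling-convention difference described in the log message.  Callers that used
to build a one-bit cpumask for ipi_selected() now pass the cpu id directly to
ipi_cpu(), exactly as the sched_4bsd.c, sched_ule.c and subr_smp.c hunks above
do.  The cpumask_t typedef, the IPI_AST value and the stub function bodies are
hypothetical stand-ins, not the kernel's definitions.

/*
 * Illustrative sketch only -- models the ipi_selected() -> ipi_cpu()
 * calling-convention change from r215050; all definitions here are
 * placeholders.
 */
#include <stdio.h>

typedef unsigned int cpumask_t;		/* stand-in for the kernel type */
#define	IPI_AST		0x01		/* hypothetical vector number */

/* Old-style MI interface: takes a mask of target CPUs. */
static void
ipi_selected(cpumask_t cpus, unsigned int ipi)
{

	printf("ipi_selected: mask 0x%x, ipi 0x%x\n", cpus, ipi);
}

/* New-style MI interface: takes a single cpu id. */
static void
ipi_cpu(int cpu, unsigned int ipi)
{

	printf("ipi_cpu: cpu %d, ipi 0x%x\n", cpu, ipi);
}

int
main(void)
{
	int cpu = 3;

	/* Before r215050: construct a one-CPU mask by hand. */
	ipi_selected((cpumask_t)1 << cpu, IPI_AST);

	/* After r215050: pass the cpu id directly. */
	ipi_cpu(cpu, IPI_AST);
	return (0);
}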