Date: Sun, 9 Jan 2011 23:46:24 +0000 (UTC)
From: Juli Mallett <jmallett@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r217212 - head/sys/mips/cavium/octe
Message-ID: <201101092346.p09NkOqp060751@svn.freebsd.org>
Author: jmallett
Date: Sun Jan  9 23:46:24 2011
New Revision: 217212
URL: http://svn.freebsd.org/changeset/base/217212

Log:
  Now that we correctly enable rx interrupts on all cores, performance has
  gotten quite awful, because e.g. 4 packets will come in and get processed
  on 4 different cores at the same time, really battling with the TCP stack
  quite painfully.  For now, just run one task at a time.  This gets
  performance up in most cases to where it was before the correctness fixes
  that got interrupts to run on all cores (except in high-load TCP transmit
  cases where all we're handling receive for is ACKs) and in some cases it's
  better now.  What would be ideal would be to use a more advanced interrupt
  mitigation strategy and possibly to use different workqueue groups per
  port for multi-port systems, and so on, but this is a fine stopgap.

Modified:
  head/sys/mips/cavium/octe/ethernet-rx.c

Modified: head/sys/mips/cavium/octe/ethernet-rx.c
==============================================================================
--- head/sys/mips/cavium/octe/ethernet-rx.c	Sun Jan  9 23:20:01 2011	(r217211)
+++ head/sys/mips/cavium/octe/ethernet-rx.c	Sun Jan  9 23:46:24 2011	(r217212)
@@ -54,6 +54,8 @@ extern struct ifnet *cvm_oct_device[];
 static struct task cvm_oct_task;
 static struct taskqueue *cvm_oct_taskq;
 
+static int cvm_oct_rx_active;
+
 /**
  * Interrupt handler. The interrupt occurs whenever the POW
  * transitions from 0->1 packets in our group.
@@ -70,7 +72,13 @@ int cvm_oct_do_interrupt(void *dev_id)
 		cvmx_write_csr(CVMX_POW_WQ_INT, 1<<pow_receive_group);
 	else
 		cvmx_write_csr(CVMX_POW_WQ_INT, 0x10001<<pow_receive_group);
-	taskqueue_enqueue(cvm_oct_taskq, &cvm_oct_task);
+
+	/*
+	 * Schedule task if there isn't one running.
+	 */
+	if (atomic_cmpset_int(&cvm_oct_rx_active, 0, 1))
+		taskqueue_enqueue(cvm_oct_taskq, &cvm_oct_task);
+
 	return FILTER_HANDLED;
 }
 
@@ -353,6 +361,19 @@ void cvm_oct_tasklet_rx(void *context, i
 			cvm_oct_free_work(work);
 	}
 
+	/*
+	 * If we hit our limit, schedule another task while we clean up.
+	 */
+	if (INTERRUPT_LIMIT != 0 && rx_count == MAX_RX_PACKETS) {
+		taskqueue_enqueue(cvm_oct_taskq, &cvm_oct_task);
+	} else {
+		/*
+		 * No more packets, all done.
+		 */
+		if (!atomic_cmpset_int(&cvm_oct_rx_active, 1, 0))
+			panic("%s: inconsistent rx active state.", __func__);
+	}
+
 	/* Restore the original POW group mask */
 	cvmx_write_csr(CVMX_POW_PP_GRP_MSKX(coreid), old_group_mask);
 	if (USE_ASYNC_IOBDMA) {
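[Editor's note] The core of the change is a single-owner gate around the
receive task: the interrupt handler enqueues the task only if it wins an
atomic 0->1 transition on cvm_oct_rx_active, and the task either re-enqueues
itself (packet budget hit) or drops the gate back to 0.  Below is a minimal
userland sketch of that pattern, with C11 <stdatomic.h> standing in for the
kernel's atomic_cmpset_int and a stub standing in for taskqueue_enqueue();
the names rx_active, enqueue_rx_task, and rx_task are illustrative, not from
the driver.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int rx_active;		/* 0 = idle, 1 = task queued/running */

static void
enqueue_rx_task(void)
{
	/* Stands in for taskqueue_enqueue(cvm_oct_taskq, &cvm_oct_task). */
	printf("rx task enqueued\n");
}

/* Interrupt side: schedule the task only if no task is already active. */
static void
rx_interrupt(void)
{
	int expected = 0;

	if (atomic_compare_exchange_strong(&rx_active, &expected, 1))
		enqueue_rx_task();
	/* Otherwise a task is already pending; it will drain the new work. */
}

/* Task side: re-enqueue on budget exhaustion, else release the gate. */
static void
rx_task(bool hit_limit)
{
	if (hit_limit) {
		enqueue_rx_task();	/* keep the single task alive */
	} else {
		int expected = 1;

		if (!atomic_compare_exchange_strong(&rx_active, &expected, 0))
			fprintf(stderr, "inconsistent rx active state\n");
	}
}

int
main(void)
{
	rx_interrupt();		/* first interrupt schedules the task */
	rx_interrupt();		/* second interrupt is coalesced into it */
	rx_task(false);		/* task drains everything, drops the gate */
	return (0);
}

The trade-off is the one the log describes: the gate serializes receive
processing into one task at a time, coalescing concurrent interrupts instead
of racing multiple cores against the TCP stack, at the cost of giving up
parallel receive until a better mitigation strategy is in place.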