From: Alexander Motin <mav@FreeBSD.org>
Date: Wed, 25 May 2016 07:09:54 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-10@freebsd.org
Subject: svn commit: r300661 - in stable/10: share/man/man4 sys/conf sys/dev/ioat sys/modules sys/modules/ioat tools/tools/ioat
Message-Id: <201605250709.u4P79sUl018726@repo.freebsd.org>

Author: mav
Date: Wed May 25 07:09:54 2016
New Revision: 300661
URL: https://svnweb.freebsd.org/changeset/base/300661

Log:
  MFC ioat(4) driver in its present state.
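[For reference: the KPI merged here is consumed roughly as in the
illustrative sketch below. This is not part of the commit; it assumes the
declarations from sys/dev/ioat/ioat.h, and elides busdma setup and proper
completion synchronization.]

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/malloc.h>

	#include "ioat.h"

	/* Completion callback: 'error' is 0 or an errno from chanerr bits. */
	static void
	copy_done(void *arg, int error)
	{

		wakeup(arg);
	}

	static int
	copy_one(bus_addr_t dst, bus_addr_t src, bus_size_t len)
	{
		bus_dmaengine_t dmaengine;
		struct bus_dmadesc *desc;
		int waiter;

		/* Grab channel 0; M_WAITOK may sleep while it quiesces. */
		dmaengine = ioat_get_dmaengine(0, M_WAITOK);
		if (dmaengine == NULL)
			return (ENXIO);

		ioat_acquire(dmaengine);
		desc = ioat_copy(dmaengine, dst, src, len, copy_done, &waiter,
		    DMA_INT_EN);
		ioat_release(dmaengine);	/* writes the DMACOUNT doorbell */

		/* A real consumer would use a lock; a bare tsleep can race. */
		if (desc != NULL)
			(void)tsleep(&waiter, 0, "ioatcp", hz);

		ioat_put_dmaengine(dmaengine);
		return (desc == NULL ? ENOMEM : 0);
	}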
Added:
  stable/10/sys/dev/ioat/
  stable/10/sys/dev/ioat/ioat.c   (contents, props changed)
  stable/10/sys/dev/ioat/ioat.h   (contents, props changed)
  stable/10/sys/dev/ioat/ioat_hw.h   (contents, props changed)
  stable/10/sys/dev/ioat/ioat_internal.h   (contents, props changed)
  stable/10/sys/dev/ioat/ioat_test.c   (contents, props changed)
  stable/10/sys/dev/ioat/ioat_test.h   (contents, props changed)
  stable/10/sys/modules/ioat/
  stable/10/sys/modules/ioat/Makefile   (contents, props changed)
  stable/10/tools/tools/ioat/
  stable/10/tools/tools/ioat/Makefile   (contents, props changed)
  stable/10/tools/tools/ioat/ioatcontrol.8   (contents, props changed)
  stable/10/tools/tools/ioat/ioatcontrol.c   (contents, props changed)
Modified:
  stable/10/share/man/man4/Makefile
  stable/10/sys/conf/files.amd64
  stable/10/sys/modules/Makefile

Modified: stable/10/share/man/man4/Makefile
==============================================================================
--- stable/10/share/man/man4/Makefile	Wed May 25 06:55:53 2016	(r300660)
+++ stable/10/share/man/man4/Makefile	Wed May 25 07:09:54 2016	(r300661)
@@ -199,6 +199,7 @@ MAN=	aac.4 \
 	intpm.4 \
 	intro.4 \
 	${_io.4} \
+	${_ioat.4} \
 	ip.4 \
 	ip6.4 \
 	ipfirewall.4 \
@@ -797,6 +798,7 @@ MLINKS+=lindev.4 full.4
 .if ${MACHINE_CPUARCH} == "amd64"
 _if_ntb.4=	if_ntb.4
+_ioat.4=	ioat.4
 _ntb.4=		ntb.4
 _ntb_hw.4=	ntb_hw.4
 _qlxge.4=	qlxge.4

Modified: stable/10/sys/conf/files.amd64
==============================================================================
--- stable/10/sys/conf/files.amd64	Wed May 25 06:55:53 2016	(r300660)
+++ stable/10/sys/conf/files.amd64	Wed May 25 07:09:54 2016	(r300661)
@@ -203,6 +203,8 @@ dev/if_ndis/if_ndis_pccard.c	optional nd
 dev/if_ndis/if_ndis_pci.c	optional ndis cardbus | ndis pci
 dev/if_ndis/if_ndis_usb.c	optional ndis usb
 dev/io/iodev.c			optional io
+dev/ioat/ioat.c			optional ioat pci
+dev/ioat/ioat_test.c		optional ioat pci
 dev/ipmi/ipmi.c			optional ipmi
 dev/ipmi/ipmi_acpi.c		optional ipmi acpi
 dev/ipmi/ipmi_isa.c		optional ipmi isa

Added: stable/10/sys/dev/ioat/ioat.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ stable/10/sys/dev/ioat/ioat.c	Wed May 25 07:09:54 2016	(r300661)
@@ -0,0 +1,2091 @@
+/*-
+ * Copyright (C) 2012 Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/bus.h>
+#include <sys/conf.h>
+#include <sys/ioccom.h>
+#include <sys/kernel.h>
+#include <sys/lock.h>
+#include <sys/malloc.h>
+#include <sys/module.h>
+#include <sys/mutex.h>
+#include <sys/rman.h>
+#include <sys/sbuf.h>
+#include <sys/sysctl.h>
+#include <sys/taskqueue.h>
+#include <sys/time.h>
+#include <dev/pci/pcireg.h>
+#include <dev/pci/pcivar.h>
+#include <machine/bus.h>
+#include <machine/resource.h>
+#include <machine/stdarg.h>
+
+#include "ioat.h"
+#include "ioat_hw.h"
+#include "ioat_internal.h"
+
+#ifndef	BUS_SPACE_MAXADDR_40BIT
+#define	BUS_SPACE_MAXADDR_40BIT	0xFFFFFFFFFFULL
+#endif
+#define	IOAT_INTR_TIMO	(hz / 10)
+#define	IOAT_REFLK	(&ioat->submit_lock)
+
+static int ioat_probe(device_t device);
+static int ioat_attach(device_t device);
+static int ioat_detach(device_t device);
+static int ioat_setup_intr(struct ioat_softc *ioat);
+static int ioat_teardown_intr(struct ioat_softc *ioat);
+static int ioat3_attach(device_t device);
+static int ioat_start_channel(struct ioat_softc *ioat);
+static int ioat_map_pci_bar(struct ioat_softc *ioat);
+static void ioat_dmamap_cb(void *arg, bus_dma_segment_t *segs, int nseg,
+    int error);
+static void ioat_interrupt_handler(void *arg);
+static boolean_t ioat_model_resets_msix(struct ioat_softc *ioat);
+static int chanerr_to_errno(uint32_t);
+static void ioat_process_events(struct ioat_softc *ioat);
+static inline uint32_t ioat_get_active(struct ioat_softc *ioat);
+static inline uint32_t ioat_get_ring_space(struct ioat_softc *ioat);
+static void ioat_free_ring(struct ioat_softc *, uint32_t size,
+    struct ioat_descriptor **);
+static void ioat_free_ring_entry(struct ioat_softc *ioat,
+    struct ioat_descriptor *desc);
+static struct ioat_descriptor *ioat_alloc_ring_entry(struct ioat_softc *,
+    int mflags);
+static int ioat_reserve_space(struct ioat_softc *, uint32_t, int mflags);
+static struct ioat_descriptor *ioat_get_ring_entry(struct ioat_softc *ioat,
+    uint32_t index);
+static struct ioat_descriptor **ioat_prealloc_ring(struct ioat_softc *,
+    uint32_t size, boolean_t need_dscr, int mflags);
+static int ring_grow(struct ioat_softc *, uint32_t oldorder,
+    struct ioat_descriptor **);
+static int ring_shrink(struct ioat_softc *, uint32_t oldorder,
+    struct ioat_descriptor **);
+static void ioat_halted_debug(struct ioat_softc *, uint32_t);
+static void ioat_timer_callback(void *arg);
+static void dump_descriptor(void *hw_desc);
+static void ioat_submit_single(struct ioat_softc *ioat);
+static void ioat_comp_update_map(void *arg, bus_dma_segment_t *seg, int nseg,
+    int error);
+static int ioat_reset_hw(struct ioat_softc *ioat);
+static void ioat_reset_hw_task(void *, int);
+static void ioat_setup_sysctl(device_t device);
+static int sysctl_handle_reset(SYSCTL_HANDLER_ARGS);
+static inline struct ioat_softc *ioat_get(struct ioat_softc *,
+    enum ioat_ref_kind);
+static inline void ioat_put(struct ioat_softc *, enum ioat_ref_kind);
+static inline void _ioat_putn(struct ioat_softc *, uint32_t,
+    enum ioat_ref_kind, boolean_t);
+static inline void ioat_putn(struct ioat_softc *, uint32_t,
+    enum ioat_ref_kind);
+static inline void ioat_putn_locked(struct ioat_softc *, uint32_t,
+    enum ioat_ref_kind);
+static void ioat_drain_locked(struct ioat_softc *);
+
+#define	ioat_log_message(v, ...) do {					\
+	if ((v) <= g_ioat_debug_level) {				\
+		device_printf(ioat->device, __VA_ARGS__);		\
+	}								\
+} while (0)
+
+MALLOC_DEFINE(M_IOAT, "ioat", "ioat driver memory allocations");
+SYSCTL_NODE(_hw, OID_AUTO, ioat, CTLFLAG_RD, 0, "ioat node");
+
+static int g_force_legacy_interrupts;
+SYSCTL_INT(_hw_ioat, OID_AUTO, force_legacy_interrupts, CTLFLAG_RDTUN,
+    &g_force_legacy_interrupts, 0, "Set to non-zero to force MSI-X disabled");
+
+int g_ioat_debug_level = 0;
+SYSCTL_INT(_hw_ioat, OID_AUTO, debug_level, CTLFLAG_RWTUN, &g_ioat_debug_level,
+    0, "Set log level (0-3) for ioat(4). Higher is more verbose.");
+
+/*
+ * OS <-> Driver interface structures
+ */
+static device_method_t ioat_pci_methods[] = {
+	/* Device interface */
+	DEVMETHOD(device_probe,     ioat_probe),
+	DEVMETHOD(device_attach,    ioat_attach),
+	DEVMETHOD(device_detach,    ioat_detach),
+	DEVMETHOD_END
+};
+
+static driver_t ioat_pci_driver = {
+	"ioat",
+	ioat_pci_methods,
+	sizeof(struct ioat_softc),
+};
+
+static devclass_t ioat_devclass;
+DRIVER_MODULE(ioat, pci, ioat_pci_driver, ioat_devclass, 0, 0);
+MODULE_VERSION(ioat, 1);
+
+/*
+ * Private data structures
+ */
+static struct ioat_softc *ioat_channel[IOAT_MAX_CHANNELS];
+static int ioat_channel_index = 0;
+SYSCTL_INT(_hw_ioat, OID_AUTO, channels, CTLFLAG_RD, &ioat_channel_index, 0,
+    "Number of IOAT channels attached");
+
+static struct _pcsid
+{
+	u_int32_t   type;
+	const char  *desc;
+} pci_ids[] = {
+	{ 0x34308086, "TBG IOAT Ch0" },
+	{ 0x34318086, "TBG IOAT Ch1" },
+	{ 0x34328086, "TBG IOAT Ch2" },
+	{ 0x34338086, "TBG IOAT Ch3" },
+	{ 0x34298086, "TBG IOAT Ch4" },
+	{ 0x342a8086, "TBG IOAT Ch5" },
+	{ 0x342b8086, "TBG IOAT Ch6" },
+	{ 0x342c8086, "TBG IOAT Ch7" },
+
+	{ 0x37108086, "JSF IOAT Ch0" },
+	{ 0x37118086, "JSF IOAT Ch1" },
+	{ 0x37128086, "JSF IOAT Ch2" },
+	{ 0x37138086, "JSF IOAT Ch3" },
+	{ 0x37148086, "JSF IOAT Ch4" },
+	{ 0x37158086, "JSF IOAT Ch5" },
+	{ 0x37168086, "JSF IOAT Ch6" },
+	{ 0x37178086, "JSF IOAT Ch7" },
+	{ 0x37188086, "JSF IOAT Ch0 (RAID)" },
+	{ 0x37198086, "JSF IOAT Ch1 (RAID)" },
+
+	{ 0x3c208086, "SNB IOAT Ch0" },
+	{ 0x3c218086, "SNB IOAT Ch1" },
+	{ 0x3c228086, "SNB IOAT Ch2" },
+	{ 0x3c238086, "SNB IOAT Ch3" },
+	{ 0x3c248086, "SNB IOAT Ch4" },
+	{ 0x3c258086, "SNB IOAT Ch5" },
+	{ 0x3c268086, "SNB IOAT Ch6" },
+	{ 0x3c278086, "SNB IOAT Ch7" },
+	{ 0x3c2e8086, "SNB IOAT Ch0 (RAID)" },
+	{ 0x3c2f8086, "SNB IOAT Ch1 (RAID)" },
+
+	{ 0x0e208086, "IVB IOAT Ch0" },
+	{ 0x0e218086, "IVB IOAT Ch1" },
+	{ 0x0e228086, "IVB IOAT Ch2" },
+	{ 0x0e238086, "IVB IOAT Ch3" },
+	{ 0x0e248086, "IVB IOAT Ch4" },
+	{ 0x0e258086, "IVB IOAT Ch5" },
+	{ 0x0e268086, "IVB IOAT Ch6" },
+	{ 0x0e278086, "IVB IOAT Ch7" },
+	{ 0x0e2e8086, "IVB IOAT Ch0 (RAID)" },
+	{ 0x0e2f8086, "IVB IOAT Ch1 (RAID)" },
+
+	{ 0x2f208086, "HSW IOAT Ch0" },
+	{ 0x2f218086, "HSW IOAT Ch1" },
+	{ 0x2f228086, "HSW IOAT Ch2" },
+	{ 0x2f238086, "HSW IOAT Ch3" },
+	{ 0x2f248086, "HSW IOAT Ch4" },
+	{ 0x2f258086, "HSW IOAT Ch5" },
+	{ 0x2f268086, "HSW IOAT Ch6" },
+	{ 0x2f278086, "HSW IOAT Ch7" },
+	{ 0x2f2e8086, "HSW IOAT Ch0 (RAID)" },
+	{ 0x2f2f8086, "HSW IOAT Ch1 (RAID)" },
+
+	{ 0x0c508086, "BWD IOAT Ch0" },
+	{ 0x0c518086, "BWD IOAT Ch1" },
+	{ 0x0c528086, "BWD IOAT Ch2" },
+	{ 0x0c538086, "BWD IOAT Ch3" },
+
+	{ 0x6f508086, "BDXDE IOAT Ch0" },
+	{ 0x6f518086, "BDXDE IOAT Ch1" },
+	{ 0x6f528086, "BDXDE IOAT Ch2" },
+	{ 0x6f538086, "BDXDE IOAT Ch3" },
+
+	{ 0x6f208086, "BDX IOAT Ch0" },
+	{ 0x6f218086, "BDX IOAT Ch1" },
+	{ 0x6f228086, "BDX IOAT Ch2" },
+	{ 0x6f238086, "BDX IOAT Ch3" },
+	{ 0x6f248086, "BDX IOAT Ch4" },
+	{ 0x6f258086, "BDX IOAT Ch5" },
+	{ 0x6f268086, "BDX IOAT Ch6" },
+	{ 0x6f278086, "BDX IOAT Ch7" },
+	{ 0x6f2e8086, "BDX IOAT Ch0 (RAID)" },
+	{ 0x6f2f8086, "BDX IOAT Ch1 (RAID)" },
+
+	{ 0x00000000, NULL }
+};
+
+/*
+ * OS <-> Driver linkage functions
+ */
+static int
+ioat_probe(device_t device)
+{
+	struct _pcsid *ep;
+	u_int32_t type;
+
+	type = pci_get_devid(device);
+	for (ep = pci_ids; ep->type; ep++) {
+		if (ep->type == type) {
+			device_set_desc(device, ep->desc);
+			return (0);
+		}
+	}
+	return (ENXIO);
+}
+
+static int
+ioat_attach(device_t device)
+{
+	struct ioat_softc *ioat;
+	int error;
+
+	ioat = DEVICE2SOFTC(device);
+	ioat->device = device;
+
+	error = ioat_map_pci_bar(ioat);
+	if (error != 0)
+		goto err;
+
+	ioat->version = ioat_read_cbver(ioat);
+	if (ioat->version < IOAT_VER_3_0) {
+		error = ENODEV;
+		goto err;
+	}
+
+	error = ioat3_attach(device);
+	if (error != 0)
+		goto err;
+
+	error = pci_enable_busmaster(device);
+	if (error != 0)
+		goto err;
+
+	error = ioat_setup_intr(ioat);
+	if (error != 0)
+		goto err;
+
+	error = ioat_reset_hw(ioat);
+	if (error != 0)
+		goto err;
+
+	ioat_process_events(ioat);
+	ioat_setup_sysctl(device);
+
+	ioat->chan_idx = ioat_channel_index;
+	ioat_channel[ioat_channel_index++] = ioat;
+	ioat_test_attach();
+
+err:
+	if (error != 0)
+		ioat_detach(device);
+	return (error);
+}
+
+static int
+ioat_detach(device_t device)
+{
+	struct ioat_softc *ioat;
+
+	ioat = DEVICE2SOFTC(device);
+
+	ioat_test_detach();
+	taskqueue_drain(taskqueue_thread, &ioat->reset_task);
+
+	mtx_lock(IOAT_REFLK);
+	ioat->quiescing = TRUE;
+	ioat->destroying = TRUE;
+	wakeup(&ioat->quiescing);
+
+	ioat_channel[ioat->chan_idx] = NULL;
+
+	ioat_drain_locked(ioat);
+	mtx_unlock(IOAT_REFLK);
+
+	ioat_teardown_intr(ioat);
+	callout_drain(&ioat->timer);
+
+	pci_disable_busmaster(device);
+
+	if (ioat->pci_resource != NULL)
+		bus_release_resource(device, SYS_RES_MEMORY,
+		    ioat->pci_resource_id, ioat->pci_resource);
+
+	if (ioat->ring != NULL)
+		ioat_free_ring(ioat, 1 << ioat->ring_size_order, ioat->ring);
+
+	if (ioat->comp_update != NULL) {
+		bus_dmamap_unload(ioat->comp_update_tag, ioat->comp_update_map);
+		bus_dmamem_free(ioat->comp_update_tag, ioat->comp_update,
+		    ioat->comp_update_map);
+		bus_dma_tag_destroy(ioat->comp_update_tag);
+	}
+
+	bus_dma_tag_destroy(ioat->hw_desc_tag);
+
+	return (0);
+}
+
+static int
+ioat_teardown_intr(struct ioat_softc *ioat)
+{
+
+	if (ioat->tag != NULL)
+		bus_teardown_intr(ioat->device, ioat->res, ioat->tag);
+
+	if (ioat->res != NULL)
+		bus_release_resource(ioat->device, SYS_RES_IRQ,
+		    rman_get_rid(ioat->res), ioat->res);
+
+	pci_release_msi(ioat->device);
+	return (0);
+}
+
+static int
+ioat_start_channel(struct ioat_softc *ioat)
+{
+	uint64_t status;
+	uint32_t chanerr;
+	int i;
+
+	ioat_acquire(&ioat->dmaengine);
+	ioat_null(&ioat->dmaengine, NULL, NULL, 0);
+	ioat_release(&ioat->dmaengine);
+
+	for (i = 0; i < 100; i++) {
+		DELAY(1);
+		status = ioat_get_chansts(ioat);
+		if (is_ioat_idle(status))
+			return (0);
+	}
+
+	chanerr = ioat_read_4(ioat, IOAT_CHANERR_OFFSET);
+	ioat_log_message(0, "could not start channel: "
+	    "status = %#jx error = %b\n", (uintmax_t)status, (int)chanerr,
+	    IOAT_CHANERR_STR);
+	return (ENXIO);
+}
+
+/*
+ * Initialize Hardware
+ */
+static int
+ioat3_attach(device_t device)
+{
+	struct ioat_softc *ioat;
+	struct ioat_descriptor **ring;
+	struct ioat_descriptor *next;
+	struct ioat_dma_hw_descriptor *dma_hw_desc;
+	int i, num_descriptors;
+	int error;
+	uint8_t xfercap;
+
+	error = 0;
+	ioat = DEVICE2SOFTC(device);
+	ioat->capabilities = ioat_read_dmacapability(ioat);
+
+	ioat_log_message(1, "Capabilities: %b\n", (int)ioat->capabilities,
+	    IOAT_DMACAP_STR);
+
+	xfercap = ioat_read_xfercap(ioat);
+	ioat->max_xfer_size = 1 << xfercap;
+
+	ioat->intrdelay_supported = (ioat_read_2(ioat, IOAT_INTRDELAY_OFFSET) &
+	    IOAT_INTRDELAY_SUPPORTED) != 0;
+	if (ioat->intrdelay_supported)
+		ioat->intrdelay_max = IOAT_INTRDELAY_US_MASK;
+
+	/* TODO: need to check DCA here if we ever do XOR/PQ */
+
+	mtx_init(&ioat->submit_lock, "ioat_submit", NULL, MTX_DEF);
+	mtx_init(&ioat->cleanup_lock, "ioat_cleanup", NULL, MTX_DEF);
+	callout_init(&ioat->timer, 1);
+	TASK_INIT(&ioat->reset_task, 0, ioat_reset_hw_task, ioat);
+
+	/* Establish lock order for Witness */
+	mtx_lock(&ioat->submit_lock);
+	mtx_lock(&ioat->cleanup_lock);
+	mtx_unlock(&ioat->cleanup_lock);
+	mtx_unlock(&ioat->submit_lock);
+
+	ioat->is_resize_pending = FALSE;
+	ioat->is_completion_pending = FALSE;
+	ioat->is_reset_pending = FALSE;
+	ioat->is_channel_running = FALSE;
+
+	bus_dma_tag_create(bus_get_dma_tag(ioat->device), sizeof(uint64_t), 0x0,
+	    BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR, NULL, NULL,
+	    sizeof(uint64_t), 1, sizeof(uint64_t), 0, NULL, NULL,
+	    &ioat->comp_update_tag);
+
+	error = bus_dmamem_alloc(ioat->comp_update_tag,
+	    (void **)&ioat->comp_update, BUS_DMA_ZERO, &ioat->comp_update_map);
+	if (ioat->comp_update == NULL)
+		return (ENOMEM);
+
+	error = bus_dmamap_load(ioat->comp_update_tag, ioat->comp_update_map,
+	    ioat->comp_update, sizeof(uint64_t), ioat_comp_update_map, ioat,
+	    0);
+	if (error != 0)
+		return (error);
+
+	ioat->ring_size_order = IOAT_MIN_ORDER;
+
+	num_descriptors = 1 << ioat->ring_size_order;
+
+	bus_dma_tag_create(bus_get_dma_tag(ioat->device), 0x40, 0x0,
+	    BUS_SPACE_MAXADDR_40BIT, BUS_SPACE_MAXADDR, NULL, NULL,
+	    sizeof(struct ioat_dma_hw_descriptor), 1,
+	    sizeof(struct ioat_dma_hw_descriptor), 0, NULL, NULL,
+	    &ioat->hw_desc_tag);
+
+	ioat->ring = malloc(num_descriptors * sizeof(*ring), M_IOAT,
+	    M_ZERO | M_WAITOK);
+
+	ring = ioat->ring;
+	for (i = 0; i < num_descriptors; i++) {
+		ring[i] = ioat_alloc_ring_entry(ioat, M_WAITOK);
+		if (ring[i] == NULL)
+			return (ENOMEM);
+
+		ring[i]->id = i;
+	}
+
+	for (i = 0; i < num_descriptors - 1; i++) {
+		next = ring[i + 1];
+		dma_hw_desc = ring[i]->u.dma;
+
+		dma_hw_desc->next = next->hw_desc_bus_addr;
+	}
+
+	ring[i]->u.dma->next = ring[0]->hw_desc_bus_addr;
+
+	ioat->head = ioat->hw_head = 0;
+	ioat->tail = 0;
+	ioat->last_seen = 0;
+	return (0);
+}
+
+static int
+ioat_map_pci_bar(struct ioat_softc *ioat)
+{
+
+	ioat->pci_resource_id = PCIR_BAR(0);
+	ioat->pci_resource = bus_alloc_resource_any(ioat->device,
+	    SYS_RES_MEMORY, &ioat->pci_resource_id, RF_ACTIVE);
+
+	if (ioat->pci_resource == NULL) {
+		ioat_log_message(0, "unable to allocate pci resource\n");
+		return (ENODEV);
+	}
+
+	ioat->pci_bus_tag = rman_get_bustag(ioat->pci_resource);
+	ioat->pci_bus_handle = rman_get_bushandle(ioat->pci_resource);
+	return (0);
+}
+
+static void
+ioat_comp_update_map(void *arg, bus_dma_segment_t *seg, int nseg, int error)
+{
+	struct ioat_softc *ioat = arg;
+
+	KASSERT(error == 0, ("%s: error:%d", __func__, error));
+	ioat->comp_update_bus_addr = seg[0].ds_addr;
+}
+
+static void
+ioat_dmamap_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
+{
+	bus_addr_t *baddr;
+
+	KASSERT(error == 0, ("%s: error:%d", __func__, error));
+	baddr = arg;
+	*baddr = segs->ds_addr;
+}
+
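/*
 * [Editorial note, not part of the diff: ioat3_attach() above links the
 * hardware descriptors into a circle through their 'next' bus-address
 * fields, with the last entry wrapping to the first, so the channel can run
 * the ring indefinitely.  Software indices (head/tail) grow monotonically
 * and are wrapped on access, conceptually:
 *
 *	desc = ioat->ring[index % (1 << ioat->ring_size_order)];
 * ]
 */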
+/*
+ * Interrupt setup and handlers
+ */
+static int
+ioat_setup_intr(struct ioat_softc *ioat)
+{
+	uint32_t num_vectors;
+	int error;
+	boolean_t use_msix;
+	boolean_t force_legacy_interrupts;
+
+	use_msix = FALSE;
+	force_legacy_interrupts = FALSE;
+
+	if (!g_force_legacy_interrupts && pci_msix_count(ioat->device) >= 1) {
+		num_vectors = 1;
+		pci_alloc_msix(ioat->device, &num_vectors);
+		if (num_vectors == 1)
+			use_msix = TRUE;
+	}
+
+	if (use_msix) {
+		ioat->rid = 1;
+		ioat->res = bus_alloc_resource_any(ioat->device, SYS_RES_IRQ,
+		    &ioat->rid, RF_ACTIVE);
+	} else {
+		ioat->rid = 0;
+		ioat->res = bus_alloc_resource_any(ioat->device, SYS_RES_IRQ,
+		    &ioat->rid, RF_SHAREABLE | RF_ACTIVE);
+	}
+	if (ioat->res == NULL) {
+		ioat_log_message(0, "bus_alloc_resource failed\n");
+		return (ENOMEM);
+	}
+
+	ioat->tag = NULL;
+	error = bus_setup_intr(ioat->device, ioat->res, INTR_MPSAFE |
+	    INTR_TYPE_MISC, NULL, ioat_interrupt_handler, ioat, &ioat->tag);
+	if (error != 0) {
+		ioat_log_message(0, "bus_setup_intr failed\n");
+		return (error);
+	}
+
+	ioat_write_intrctrl(ioat, IOAT_INTRCTRL_MASTER_INT_EN);
+	return (0);
+}
+
+static boolean_t
+ioat_model_resets_msix(struct ioat_softc *ioat)
+{
+	u_int32_t pciid;
+
+	pciid = pci_get_devid(ioat->device);
+	switch (pciid) {
+	/* BWD: */
+	case 0x0c508086:
+	case 0x0c518086:
+	case 0x0c528086:
+	case 0x0c538086:
+	/* BDXDE: */
+	case 0x6f508086:
+	case 0x6f518086:
+	case 0x6f528086:
+	case 0x6f538086:
+		return (TRUE);
+	}
+
+	return (FALSE);
+}
+
+static void
+ioat_interrupt_handler(void *arg)
+{
+	struct ioat_softc *ioat = arg;
+
+	ioat->stats.interrupts++;
+	ioat_process_events(ioat);
+}
+
+static int
+chanerr_to_errno(uint32_t chanerr)
+{
+
+	if (chanerr == 0)
+		return (0);
+	if ((chanerr & (IOAT_CHANERR_XSADDERR | IOAT_CHANERR_XDADDERR)) != 0)
+		return (EFAULT);
+	if ((chanerr & (IOAT_CHANERR_RDERR | IOAT_CHANERR_WDERR)) != 0)
+		return (EIO);
+	/* This one is probably our fault: */
+	if ((chanerr & IOAT_CHANERR_NDADDERR) != 0)
+		return (EIO);
+	return (EIO);
+}
+
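/*
 * [Editorial note, not part of the diff: ioat_process_events() below is the
 * completion scan.  It walks the ring from 'tail', firing each descriptor's
 * callback, until it reaches the descriptor whose bus address matches the
 * hardware-written completion status (*comp_update masked with
 * IOAT_CHANSTS_COMPLETED_DESCRIPTOR_MASK), i.e. the last descriptor the
 * channel finished.]
 */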
+static void
+ioat_process_events(struct ioat_softc *ioat)
+{
+	struct ioat_descriptor *desc;
+	struct bus_dmadesc *dmadesc;
+	uint64_t comp_update, status;
+	uint32_t completed, chanerr;
+	int error;
+
+	mtx_lock(&ioat->cleanup_lock);
+
+	completed = 0;
+	comp_update = *ioat->comp_update;
+	status = comp_update & IOAT_CHANSTS_COMPLETED_DESCRIPTOR_MASK;
+
+	CTR0(KTR_IOAT, __func__);
+
+	if (status == ioat->last_seen) {
+		/*
+		 * If we landed in process_events and nothing has been
+		 * completed, check for a timeout due to channel halt.
+		 */
+		comp_update = ioat_get_chansts(ioat);
+		goto out;
+	}
+
+	while (1) {
+		desc = ioat_get_ring_entry(ioat, ioat->tail);
+		dmadesc = &desc->bus_dmadesc;
+		CTR1(KTR_IOAT, "completing desc %d", ioat->tail);
+
+		if (dmadesc->callback_fn != NULL)
+			dmadesc->callback_fn(dmadesc->callback_arg, 0);
+
+		completed++;
+		ioat->tail++;
+		if (desc->hw_desc_bus_addr == status)
+			break;
+	}
+
+	ioat->last_seen = desc->hw_desc_bus_addr;
+
+	if (ioat->head == ioat->tail) {
+		ioat->is_completion_pending = FALSE;
+		callout_reset(&ioat->timer, IOAT_INTR_TIMO,
+		    ioat_timer_callback, ioat);
+	}
+
+	ioat->stats.descriptors_processed += completed;
+
+out:
+	ioat_write_chanctrl(ioat, IOAT_CHANCTRL_RUN);
+	mtx_unlock(&ioat->cleanup_lock);
+
+	if (completed != 0) {
+		ioat_putn(ioat, completed, IOAT_ACTIVE_DESCR_REF);
+		wakeup(&ioat->tail);
+	}
+
+	if (!is_ioat_halted(comp_update) && !is_ioat_suspended(comp_update))
+		return;
+
+	ioat->stats.channel_halts++;
+
+	/*
+	 * Fatal programming error on this DMA channel.  Flush any outstanding
+	 * work with error status and restart the engine.
+	 */
+	ioat_log_message(0, "Channel halted due to fatal programming error\n");
+	mtx_lock(&ioat->submit_lock);
+	mtx_lock(&ioat->cleanup_lock);
+	ioat->quiescing = TRUE;
+
+	chanerr = ioat_read_4(ioat, IOAT_CHANERR_OFFSET);
+	ioat_halted_debug(ioat, chanerr);
+	ioat->stats.last_halt_chanerr = chanerr;
+
+	while (ioat_get_active(ioat) > 0) {
+		desc = ioat_get_ring_entry(ioat, ioat->tail);
+		dmadesc = &desc->bus_dmadesc;
+		CTR1(KTR_IOAT, "completing err desc %d", ioat->tail);
+
+		if (dmadesc->callback_fn != NULL)
+			dmadesc->callback_fn(dmadesc->callback_arg,
+			    chanerr_to_errno(chanerr));
+
+		ioat_putn_locked(ioat, 1, IOAT_ACTIVE_DESCR_REF);
+		ioat->tail++;
+		ioat->stats.descriptors_processed++;
+		ioat->stats.descriptors_error++;
+	}
+
+	/* Clear error status */
+	ioat_write_4(ioat, IOAT_CHANERR_OFFSET, chanerr);
+
+	mtx_unlock(&ioat->cleanup_lock);
+	mtx_unlock(&ioat->submit_lock);
+
+	ioat_log_message(0, "Resetting channel to recover from error\n");
+	error = taskqueue_enqueue(taskqueue_thread, &ioat->reset_task);
+	KASSERT(error == 0,
+	    ("%s: taskqueue_enqueue failed: %d", __func__, error));
+}
+
+static void
+ioat_reset_hw_task(void *ctx, int pending __unused)
+{
+	struct ioat_softc *ioat;
+	int error;
+
+	ioat = ctx;
+	ioat_log_message(1, "%s: Resetting channel\n", __func__);
+
+	error = ioat_reset_hw(ioat);
+	KASSERT(error == 0, ("%s: reset failed: %d", __func__, error));
+	(void)error;
+}
+
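/*
 * [Editorial note, not part of the diff: teardown is reference-driven.
 * Consumers hold IOAT_DMAENGINE_REF references taken in ioat_get_dmaengine()
 * and dropped in ioat_put_dmaengine(); each in-flight descriptor holds an
 * IOAT_ACTIVE_DESCR_REF.  ioat_detach() sets 'quiescing'/'destroying' and
 * ioat_drain_locked() sleeps under IOAT_REFLK until both kinds drain.]
 */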
+/*
+ * User API functions
+ */
+bus_dmaengine_t
+ioat_get_dmaengine(uint32_t index, int flags)
+{
+	struct ioat_softc *ioat;
+
+	KASSERT((flags & ~(M_NOWAIT | M_WAITOK)) == 0,
+	    ("invalid flags: 0x%08x", flags));
+	KASSERT((flags & (M_NOWAIT | M_WAITOK)) != (M_NOWAIT | M_WAITOK),
+	    ("invalid wait | nowait"));
+
+	if (index >= ioat_channel_index)
+		return (NULL);
+
+	ioat = ioat_channel[index];
+	if (ioat == NULL || ioat->destroying)
+		return (NULL);
+
+	if (ioat->quiescing) {
+		if ((flags & M_NOWAIT) != 0)
+			return (NULL);
+
+		mtx_lock(IOAT_REFLK);
+		while (ioat->quiescing && !ioat->destroying)
+			msleep(&ioat->quiescing, IOAT_REFLK, 0, "getdma", 0);
+		mtx_unlock(IOAT_REFLK);
+
+		if (ioat->destroying)
+			return (NULL);
+	}
+
+	/*
+	 * There's a race here between the quiescing check and HW reset or
+	 * module destroy.
+	 */
+	return (&ioat_get(ioat, IOAT_DMAENGINE_REF)->dmaengine);
+}
+
+void
+ioat_put_dmaengine(bus_dmaengine_t dmaengine)
+{
+	struct ioat_softc *ioat;
+
+	ioat = to_ioat_softc(dmaengine);
+	ioat_put(ioat, IOAT_DMAENGINE_REF);
+}
+
+int
+ioat_get_hwversion(bus_dmaengine_t dmaengine)
+{
+	struct ioat_softc *ioat;
+
+	ioat = to_ioat_softc(dmaengine);
+	return (ioat->version);
+}
+
+size_t
+ioat_get_max_io_size(bus_dmaengine_t dmaengine)
+{
+	struct ioat_softc *ioat;
+
+	ioat = to_ioat_softc(dmaengine);
+	return (ioat->max_xfer_size);
+}
+
+int
+ioat_set_interrupt_coalesce(bus_dmaengine_t dmaengine, uint16_t delay)
+{
+	struct ioat_softc *ioat;
+
+	ioat = to_ioat_softc(dmaengine);
+	if (!ioat->intrdelay_supported)
+		return (ENODEV);
+	if (delay > ioat->intrdelay_max)
+		return (ERANGE);
+
+	ioat_write_2(ioat, IOAT_INTRDELAY_OFFSET, delay);
+	ioat->cached_intrdelay =
+	    ioat_read_2(ioat, IOAT_INTRDELAY_OFFSET) & IOAT_INTRDELAY_US_MASK;
+	return (0);
+}
+
+uint16_t
+ioat_get_max_coalesce_period(bus_dmaengine_t dmaengine)
+{
+	struct ioat_softc *ioat;
+
+	ioat = to_ioat_softc(dmaengine);
+	return (ioat->intrdelay_max);
+}
+
+void
+ioat_acquire(bus_dmaengine_t dmaengine)
+{
+	struct ioat_softc *ioat;
+
+	ioat = to_ioat_softc(dmaengine);
+	mtx_lock(&ioat->submit_lock);
+	CTR0(KTR_IOAT, __func__);
+}
+
+int
+ioat_acquire_reserve(bus_dmaengine_t dmaengine, unsigned n, int mflags)
+{
+	struct ioat_softc *ioat;
+	int error;
+
+	ioat = to_ioat_softc(dmaengine);
+	ioat_acquire(dmaengine);
+
+	error = ioat_reserve_space(ioat, n, mflags);
+	if (error != 0)
+		ioat_release(dmaengine);
+	return (error);
+}
+
+void
+ioat_release(bus_dmaengine_t dmaengine)
+{
+	struct ioat_softc *ioat;
+
+	ioat = to_ioat_softc(dmaengine);
+	CTR0(KTR_IOAT, __func__);
+	ioat_write_2(ioat, IOAT_DMACOUNT_OFFSET, (uint16_t)ioat->hw_head);
+	mtx_unlock(&ioat->submit_lock);
+}
+
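/*
 * [Editorial sketch, not part of the diff: batched submission with the
 * reservation KPI above.  ioat_acquire_reserve() guarantees room for 'n'
 * descriptors up front, and ioat_release() writes the doorbell once for the
 * whole batch.  The 'dst', 'src', and 'len' arrays are assumed; error
 * handling is elided.]
 */
#if 0	/* illustration only */
	if (ioat_acquire_reserve(dmaengine, n, M_WAITOK) == 0) {
		for (i = 0; i < n; i++)
			(void)ioat_copy(dmaengine, dst[i], src[i], len[i],
			    NULL, NULL, (i == n - 1) ? DMA_INT_EN : 0);
		ioat_release(dmaengine);	/* one DMACOUNT write */
	}
#endif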
+static struct ioat_descriptor *
+ioat_op_generic(struct ioat_softc *ioat, uint8_t op,
+    uint32_t size, uint64_t src, uint64_t dst,
+    bus_dmaengine_callback_t callback_fn, void *callback_arg,
+    uint32_t flags)
+{
+	struct ioat_generic_hw_descriptor *hw_desc;
+	struct ioat_descriptor *desc;
+	int mflags;
+
+	mtx_assert(&ioat->submit_lock, MA_OWNED);
+
+	KASSERT((flags & ~_DMA_GENERIC_FLAGS) == 0,
+	    ("Unrecognized flag(s): %#x", flags & ~_DMA_GENERIC_FLAGS));
+	if ((flags & DMA_NO_WAIT) != 0)
+		mflags = M_NOWAIT;
+	else
+		mflags = M_WAITOK;
+
+	if (size > ioat->max_xfer_size) {
+		ioat_log_message(0, "%s: max_xfer_size = %d, requested = %u\n",
+		    __func__, ioat->max_xfer_size, (unsigned)size);
+		return (NULL);
+	}
+
+	if (ioat_reserve_space(ioat, 1, mflags) != 0)
+		return (NULL);
+
+	desc = ioat_get_ring_entry(ioat, ioat->head);
+	hw_desc = desc->u.generic;
+
+	hw_desc->u.control_raw = 0;
+	hw_desc->u.control_generic.op = op;
+	hw_desc->u.control_generic.completion_update = 1;
+
+	if ((flags & DMA_INT_EN) != 0)
+		hw_desc->u.control_generic.int_enable = 1;
+	if ((flags & DMA_FENCE) != 0)
+		hw_desc->u.control_generic.fence = 1;
+
+	hw_desc->size = size;
+	hw_desc->src_addr = src;
+	hw_desc->dest_addr = dst;
+
+	desc->bus_dmadesc.callback_fn = callback_fn;
+	desc->bus_dmadesc.callback_arg = callback_arg;
+	return (desc);
+}
+
+struct bus_dmadesc *
+ioat_null(bus_dmaengine_t dmaengine, bus_dmaengine_callback_t callback_fn,
+    void *callback_arg, uint32_t flags)
+{
+	struct ioat_dma_hw_descriptor *hw_desc;
+	struct ioat_descriptor *desc;
+	struct ioat_softc *ioat;
+
+	CTR0(KTR_IOAT, __func__);
+	ioat = to_ioat_softc(dmaengine);
+
+	desc = ioat_op_generic(ioat, IOAT_OP_COPY, 8, 0, 0, callback_fn,
+	    callback_arg, flags);
+	if (desc == NULL)
+		return (NULL);
+
+	hw_desc = desc->u.dma;
+	hw_desc->u.control.null = 1;
+	ioat_submit_single(ioat);
+	return (&desc->bus_dmadesc);
+}
+
+struct bus_dmadesc *
+ioat_copy(bus_dmaengine_t dmaengine, bus_addr_t dst,
+    bus_addr_t src, bus_size_t len, bus_dmaengine_callback_t callback_fn,
+    void *callback_arg, uint32_t flags)
+{
+	struct ioat_dma_hw_descriptor *hw_desc;
+	struct ioat_descriptor *desc;
+	struct ioat_softc *ioat;
+
+	CTR0(KTR_IOAT, __func__);
+	ioat = to_ioat_softc(dmaengine);
+
+	if (((src | dst) & (0xffffull << 48)) != 0) {
+		ioat_log_message(0, "%s: High 16 bits of src/dst invalid\n",
+		    __func__);
+		return (NULL);
+	}
+
+	desc = ioat_op_generic(ioat, IOAT_OP_COPY, len, src, dst, callback_fn,
+	    callback_arg, flags);
+	if (desc == NULL)
+		return (NULL);
+
+	hw_desc = desc->u.dma;
+	if (g_ioat_debug_level >= 3)
+		dump_descriptor(hw_desc);
+
*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***