From mboxrd@z Thu Jan 1 00:00:00 1970
From: Conor Walsh <conor.walsh@intel.com>
To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com,
 kevin.laatz@intel.com
Cc: dev@dpdk.org, Conor Walsh <conor.walsh@intel.com>
Date: Fri, 24 Sep 2021 14:33:30 +0000
Message-Id: <20210924143335.1092300-8-conor.walsh@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com>
References: <20210827172550.1522362-1-conor.walsh@intel.com>
 <20210924143335.1092300-1-conor.walsh@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v5 07/12] dma/ioat: add data path completion functions

Add the data path functions for gathering completed operations from IOAT
devices.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/dmadevs/ioat.rst    |  33 ++++++-
 drivers/dma/ioat/ioat_dmadev.c | 141 +++++++++++++++++++++++++++++++++
 2 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst
index ec8ce5a8e5..fc1b3131c7 100644
--- a/doc/guides/dmadevs/ioat.rst
+++ b/doc/guides/dmadevs/ioat.rst
@@ -94,7 +94,38 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 Refer to the :ref:`Enqueue / Dequeue APIs ` section of the dmadev library
-documentation for details on operation enqueue and submission API usage.
+documentation for details on operation enqueue, submission and completion API usage.
 
 It is expected that, for efficiency reasons, a burst of operations will be enqueued to the
 device via multiple enqueue calls between calls to the ``rte_dma_submit()`` function.
+
+When gathering completions, ``rte_dma_completed()`` should be used, up until the point an error
+occurs with an operation. If an error is encountered, ``rte_dma_completed_status()`` must be used
+to reset the device and continue processing operations. This function also gathers the status of
+each individual operation, filling in the ``status`` array provided as a parameter by the
+application.
+
+The status codes supported by IOAT are:
+
+* ``RTE_DMA_STATUS_SUCCESSFUL``: The operation was successful.
+* ``RTE_DMA_STATUS_INVALID_SRC_ADDR``: The operation failed due to an invalid source address.
+* ``RTE_DMA_STATUS_INVALID_DST_ADDR``: The operation failed due to an invalid destination address.
+* ``RTE_DMA_STATUS_INVALID_LENGTH``: The operation failed due to an invalid descriptor length.
+* ``RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR``: The device could not read the descriptor.
+* ``RTE_DMA_STATUS_ERROR_UNKNOWN``: The operation failed due to an unspecified error.
+
+The following code shows how to retrieve the number of successfully completed
+copies within a burst, and then how to use ``rte_dma_completed_status()`` to check
+which operation failed and to reset the device so it can continue processing operations:
+
+.. code-block:: C
+
+   enum rte_dma_status_code status[COMP_BURST_SZ];
+   uint16_t count, idx, status_count;
+   bool error = false;
+
+   count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+
+   if (error) {
+      status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, &idx, status);
+   }
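
To make the documented flow above concrete, here is a minimal standalone sketch
(not part of the patch) of a polling step built around these two calls. It
assumes dev_id and vchan identify an already-configured dmadev device and
virtual channel, COMP_BURST_SZ is an application-chosen burst size, and the
buffer bookkeeping is reduced to printf() calls.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_dmadev.h>

#define COMP_BURST_SZ 32

/* Poll one burst of completions on an already-configured vchan. */
static void
poll_completions(int16_t dev_id, uint16_t vchan)
{
	enum rte_dma_status_code status[COMP_BURST_SZ];
	uint16_t count, status_count, idx, i;
	bool error = false;

	/* Fast path: returns only a count and the index of the last completed op. */
	count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
	printf("%u operations up to index %u completed successfully\n", count, idx);

	if (!error)
		return;

	/* Slow path: gather per-operation status codes; on IOAT this call also
	 * resets the halted device so that processing can continue.
	 */
	status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ,
			&idx, status);
	for (i = 0; i < status_count; i++) {
		if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
			printf("operation %u failed, status code %d\n", i, (int)status[i]);
	}
}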
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index 0e92c80fb0..b2b7ebb3db 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 #include "ioat_internal.h"
 
@@ -355,6 +356,144 @@ ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f)
 	return 0;
 }
 
+/* Returns the index of the last completed operation. */
+static inline uint16_t
+__get_last_completed(const struct ioat_dmadev *ioat, int *state)
+{
+	/* Status register contains the address of the completed operation. */
+	uint64_t status = ioat->status;
+
+	/* Lower 3 bits indicate "transfer status": active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*state = status & IOAT_CHANSTS_STATUS;
+
+	/* If we are just after recovering from an error, the address returned by
+	 * status will be 0; in this case return the offset - 1 as the last
+	 * completed. If not, return the status value minus the chainaddr, which
+	 * gives us an offset into the ring. Right shifting by 6 (divide by 64)
+	 * gives the index of the completion from the HW point of view, and adding
+	 * the offset translates the ring index from the HW to the SW point of view.
+	 */
+	if ((status & ~IOAT_CHANSTS_STATUS) == 0)
+		return ioat->offset - 1;
+
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/* Translates IOAT ChanERRs to DMA error codes. */
+static inline enum rte_dma_status_code
+__translate_status_ioat_to_dma(uint32_t chanerr)
+{
+	if (chanerr & IOAT_CHANERR_INVALID_SRC_ADDR_MASK)
+		return RTE_DMA_STATUS_INVALID_SRC_ADDR;
+	else if (chanerr & IOAT_CHANERR_INVALID_DST_ADDR_MASK)
+		return RTE_DMA_STATUS_INVALID_DST_ADDR;
+	else if (chanerr & IOAT_CHANERR_INVALID_LENGTH_MASK)
+		return RTE_DMA_STATUS_INVALID_LENGTH;
+	else if (chanerr & IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK)
+		return RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR;
+	else
+		return RTE_DMA_STATUS_ERROR_UNKNOWN;
+}
+
+/* Returns details of operations that have been completed. */
+static uint16_t
+ioat_completed(struct rte_dma_dev *dev, uint16_t qid __rte_unused, const uint16_t max_ops,
+		uint16_t *last_idx, bool *has_error)
+{
+	struct ioat_dmadev *ioat = dev->dev_private;
+
+	const unsigned short mask = (ioat->qcfg.nb_desc - 1);
+	const unsigned short read = ioat->next_read;
+	unsigned short last_completed, count;
+	int state, fails = 0;
+
+	/* Do not do any work if there is an uncleared error. */
+	if (ioat->failure != 0) {
+		*has_error = true;
+		*last_idx = ioat->next_read - 2;
+		return 0;
+	}
+
+	last_completed = __get_last_completed(ioat, &state);
+	count = (last_completed + 1 - read) & mask;
+
+	/* Cap count at max_ops or set as last run in batch. */
+	if (count > max_ops)
+		count = max_ops;
+
+	if (count == max_ops || state != IOAT_CHANSTS_HALTED) {
+		ioat->next_read = read + count;
+		*last_idx = ioat->next_read - 1;
+	} else {
+		*has_error = true;
+		rte_errno = EIO;
+		ioat->failure = ioat->regs->chanerr;
+		ioat->next_read = read + count + 1;
+		if (__ioat_recover(ioat) != 0) {
+			IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+			ioat_dev_dump(dev, stdout);
+			return 0;
+		}
+		__submit(ioat);
+		fails++;
+		*last_idx = ioat->next_read - 2;
+	}
+
+	return count;
+}
+
+/* Returns detailed status information about operations that have been completed. */
+static uint16_t
+ioat_completed_status(struct rte_dma_dev *dev, uint16_t qid __rte_unused,
+		uint16_t max_ops, uint16_t *last_idx, enum rte_dma_status_code *status)
+{
+	struct ioat_dmadev *ioat = dev->dev_private;
+
+	const unsigned short mask = (ioat->qcfg.nb_desc - 1);
+	const unsigned short read = ioat->next_read;
+	unsigned short count, last_completed;
+	uint64_t fails = 0;
+	int state, i;
+
+	last_completed = __get_last_completed(ioat, &state);
+	count = (last_completed + 1 - read) & mask;
+
+	for (i = 0; i < RTE_MIN(count + 1, max_ops); i++)
+		status[i] = RTE_DMA_STATUS_SUCCESSFUL;
+
+	/* Cap count at max_ops or set as last run in batch. */
+	if (count > max_ops)
+		count = max_ops;
+
+	if (count == max_ops || state != IOAT_CHANSTS_HALTED) {
+		ioat->next_read = read + count;
+	} else {
+		rte_errno = EIO;
+		status[count] = __translate_status_ioat_to_dma(ioat->regs->chanerr);
+		count++;
+		ioat->next_read = read + count;
+		if (__ioat_recover(ioat) != 0) {
+			IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+			ioat_dev_dump(dev, stdout);
+			return 0;
+		}
+		__submit(ioat);
+		fails++;
+	}
+
+	if (ioat->failure > 0) {
+		status[0] = __translate_status_ioat_to_dma(ioat->failure);
+		count = RTE_MIN(count + 1, max_ops);
+		ioat->failure = 0;
+	}
+
+	*last_idx = ioat->next_read - 1;
+
+	return count;
+}
+
 /* Create a DMA device. */
 static int
 ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
@@ -391,6 +530,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
 
 	dmadev->dev_ops = &ioat_dmadev_ops;
 
+	dmadev->completed = ioat_completed;
+	dmadev->completed_status = ioat_completed_status;
 	dmadev->copy = ioat_enqueue_copy;
 	dmadev->fill = ioat_enqueue_fill;
 	dmadev->submit = ioat_submit;
-- 
2.25.1
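
The ring-index arithmetic used by __get_last_completed() and the wrap-around
count in ioat_completed() can be checked in isolation. The standalone sketch
below is illustrative only: the 64-byte descriptor size follows from the
right-shift by 6 noted in the driver comment, while the ring base address,
status value, ring size and read index are made-up values.

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Hypothetical setup: a 1024-entry ring of 64-byte descriptors. */
	const uint64_t ring_addr = 0x100000;           /* made-up ring base */
	const uint64_t status = 0x100000 + (5 * 64);   /* HW is at descriptor 5 */
	const uint16_t nb_desc = 1024;                 /* ring size (power of two) */
	const uint16_t mask = nb_desc - 1;
	const uint16_t next_read = 2;                  /* SW has consumed up to 2 */

	/* As in __get_last_completed(): byte offset / 64 = ring index. */
	uint16_t last_completed = (status - ring_addr) >> 6;

	/* As in ioat_completed(): wrap-around-safe count of newly completed ops. */
	uint16_t count = (last_completed + 1 - next_read) & mask;

	/* Prints: last completed index 5, newly completed ops 4. */
	printf("last completed index %u, newly completed ops %u\n",
			last_completed, count);
	return 0;
}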