From mboxrd@z Thu Jan  1 00:00:00 1970
From: Conor Walsh <conor.walsh@intel.com>
To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com,
	kevin.laatz@intel.com
Cc: dev@dpdk.org, Conor Walsh <conor.walsh@intel.com>
Date: Thu, 14 Oct 2021 09:48:57 +0000
Message-Id: <20211014094902.489159-8-conor.walsh@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211014094902.489159-1-conor.walsh@intel.com>
References: <20210827172550.1522362-1-conor.walsh@intel.com>
	<20211014094902.489159-1-conor.walsh@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v7 07/12] dma/ioat: add data path completion
	functions

Add the data path functions for gathering completed operations from IOAT
devices.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/dmadevs/ioat.rst    |  33 +++++++-
 drivers/dma/ioat/ioat_dmadev.c | 141 +++++++++++++++++++++++++++++++++
 2 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst
index 9ee4e372a8..9ac90e3108 100644
--- a/doc/guides/dmadevs/ioat.rst
+++ b/doc/guides/dmadevs/ioat.rst
@@ -90,7 +90,38 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 Refer to the :ref:`Enqueue / Dequeue APIs ` section of the dmadev library
-documentation for details on operation enqueue and submission API usage.
+documentation for details on operation enqueue, submission and completion API usage.
 
 It is expected that, for efficiency reasons, a burst of operations will be enqueued to the
 device via multiple enqueue calls between calls to the ``rte_dma_submit()`` function.
+
+When gathering completions, ``rte_dma_completed()`` should be used until an error occurs with
+an operation. If an error is encountered, ``rte_dma_completed_status()`` must be used to reset
+the device and continue processing operations. This function also gathers the status of each
+individual operation, which is filled into the ``status`` array provided as a parameter by
+the application.
+
+The status codes supported by IOAT are:
+
+* ``RTE_DMA_STATUS_SUCCESSFUL``: The operation was successful.
+* ``RTE_DMA_STATUS_INVALID_SRC_ADDR``: The operation failed due to an invalid source address.
+* ``RTE_DMA_STATUS_INVALID_DST_ADDR``: The operation failed due to an invalid destination address.
+* ``RTE_DMA_STATUS_INVALID_LENGTH``: The operation failed due to an invalid descriptor length.
+* ``RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR``: The device could not read the descriptor.
+* ``RTE_DMA_STATUS_ERROR_UNKNOWN``: The operation failed due to an unspecified error.
+
+The following code shows how to retrieve the number of successfully completed
+copies within a burst and then use ``rte_dma_completed_status()`` to check
+which operation failed and reset the device so that processing can continue:
+
+.. code-block:: C
+
+   enum rte_dma_status_code status[COMP_BURST_SZ];
+   uint16_t count, idx, status_count;
+   bool error = false;
+
+   count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
+
+   if (error) {
+      status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, &idx, status);
+   }
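+
+Once an error has been flagged, the position of each failed operation within the burst can be
+found by scanning the returned ``status`` array. The snippet below is an illustrative
+continuation of the example above, where ``handle_failed_copy()`` is a hypothetical
+application-defined function rather than part of the dmadev API:
+
+.. code-block:: C
+
+   uint16_t i;
+
+   for (i = 0; i < status_count; i++) {
+      if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
+         handle_failed_copy(idx - (status_count - 1) + i, status[i]);
+   }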
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index 4d00fec5c8..0318f67772 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -6,6 +6,7 @@
 #include 
 #include 
 #include 
+#include <rte_errno.h>
 
 #include "ioat_internal.h"
 
@@ -362,6 +363,144 @@ ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f)
 	return __dev_dump(dev->fp_obj->dev_private, f);
 }
 
+/* Returns the index of the last completed operation. */
+static inline uint16_t
+__get_last_completed(const struct ioat_dmadev *ioat, int *state)
+{
+	/* Status register contains the address of the completed operation. */
+	uint64_t status = ioat->status;
+
+	/* Lower 3 bits indicate "transfer status": active, idle, halted.
+	 * We can ignore bit 0.
+	 */
+	*state = status & IOAT_CHANSTS_STATUS;
+
+	/* If we are just after recovering from an error, the address returned by
+	 * status will be 0. In this case return the offset - 1 as the last
+	 * completed. If not, return the status value minus the chain address,
+	 * which gives an offset into the ring. Right shifting by 6 (divide by 64)
+	 * gives the index of the completion from the HW point of view and adding
+	 * the offset translates the ring index from HW to SW point of view.
+	 */
+	if ((status & ~IOAT_CHANSTS_STATUS) == 0)
+		return ioat->offset - 1;
+
+	return (status - ioat->ring_addr) >> 6;
+}
+
+/* Translates IOAT ChanERRs to DMA error codes. */
+static inline enum rte_dma_status_code
+__translate_status_ioat_to_dma(uint32_t chanerr)
+{
+	if (chanerr & IOAT_CHANERR_INVALID_SRC_ADDR_MASK)
+		return RTE_DMA_STATUS_INVALID_SRC_ADDR;
+	else if (chanerr & IOAT_CHANERR_INVALID_DST_ADDR_MASK)
+		return RTE_DMA_STATUS_INVALID_DST_ADDR;
+	else if (chanerr & IOAT_CHANERR_INVALID_LENGTH_MASK)
+		return RTE_DMA_STATUS_INVALID_LENGTH;
+	else if (chanerr & IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK)
+		return RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR;
+	else
+		return RTE_DMA_STATUS_ERROR_UNKNOWN;
+}
+
+/* Returns details of operations that have been completed. */
+static uint16_t
+ioat_completed(void *dev_private, uint16_t qid __rte_unused, const uint16_t max_ops,
+		uint16_t *last_idx, bool *has_error)
+{
+	struct ioat_dmadev *ioat = dev_private;
+
+	const unsigned short mask = (ioat->qcfg.nb_desc - 1);
+	const unsigned short read = ioat->next_read;
+	unsigned short last_completed, count;
+	int state;
+
+	/* Do not do any work if there is an uncleared error. */
+	if (ioat->failure != 0) {
+		*has_error = true;
+		*last_idx = ioat->next_read - 2;
+		return 0;
+	}
+
+	last_completed = __get_last_completed(ioat, &state);
+	count = (last_completed + 1 - read) & mask;
+
+	/* Cap count at max_ops or set as last run in batch. */
+	if (count > max_ops)
+		count = max_ops;
+
+	if (count == max_ops || state != IOAT_CHANSTS_HALTED) {
+		ioat->next_read = read + count;
+		*last_idx = ioat->next_read - 1;
+	} else {
+		*has_error = true;
+		rte_errno = EIO;
+		ioat->failure = ioat->regs->chanerr;
+		ioat->next_read = read + count + 1;
+		if (__ioat_recover(ioat) != 0) {
+			IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+			__dev_dump(dev_private, stdout);
+			return 0;
+		}
+		__submit(ioat);
+		*last_idx = ioat->next_read - 2;
+	}
+
+	return count;
+}
+
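+/* Worked example of the completion-count arithmetic used by ioat_completed()
+ * above and ioat_completed_status() below: with nb_desc = 1024, mask = 1023.
+ * If read = 1020 and the hardware reports last_completed = 3, then
+ * (3 + 1 - 1020) & 1023 = 8, i.e. eight operations have completed, wrapping
+ * from ring slot 1020 around to slot 3.
+ */
+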
+/* Returns detailed status information about operations that have been completed. */
+static uint16_t
+ioat_completed_status(void *dev_private, uint16_t qid __rte_unused,
+		uint16_t max_ops, uint16_t *last_idx, enum rte_dma_status_code *status)
+{
+	struct ioat_dmadev *ioat = dev_private;
+
+	const unsigned short mask = (ioat->qcfg.nb_desc - 1);
+	const unsigned short read = ioat->next_read;
+	unsigned short count, last_completed;
+	int state, i;
+
+	last_completed = __get_last_completed(ioat, &state);
+	count = (last_completed + 1 - read) & mask;
+
+	for (i = 0; i < RTE_MIN(count + 1, max_ops); i++)
+		status[i] = RTE_DMA_STATUS_SUCCESSFUL;
+
+	/* Cap count at max_ops or set as last run in batch. */
+	if (count > max_ops)
+		count = max_ops;
+
+	if (count == max_ops || state != IOAT_CHANSTS_HALTED)
+		ioat->next_read = read + count;
+	else {
+		rte_errno = EIO;
+		status[count] = __translate_status_ioat_to_dma(ioat->regs->chanerr);
+		count++;
+		ioat->next_read = read + count;
+		if (__ioat_recover(ioat) != 0) {
+			IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+			__dev_dump(dev_private, stdout);
+			return 0;
+		}
+		__submit(ioat);
+	}
+
+	if (ioat->failure > 0) {
+		status[0] = __translate_status_ioat_to_dma(ioat->failure);
+		count = RTE_MIN(count + 1, max_ops);
+		ioat->failure = 0;
+	}
+
+	*last_idx = ioat->next_read - 1;
+
+	return count;
+}
+
 /* Create a DMA device. */
 static int
 ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
@@ -398,6 +537,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
 
 	dmadev->dev_ops = &ioat_dmadev_ops;
 
+	dmadev->fp_obj->completed = ioat_completed;
+	dmadev->fp_obj->completed_status = ioat_completed_status;
 	dmadev->fp_obj->copy = ioat_enqueue_copy;
 	dmadev->fp_obj->fill = ioat_enqueue_fill;
 	dmadev->fp_obj->submit = ioat_submit;
-- 
2.25.1
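
For reference, the sketch below shows how an application might combine the
two completion calls added by this patch. It is illustrative only, not part
of the patch: dev_id and vchan are assumed to identify a configured and
started dmadev virtual channel, COMP_BURST_SZ is an arbitrary
application-chosen burst size, and the printf() calls stand in for real
error handling.

	#include <stdio.h>
	#include <stdbool.h>
	#include <rte_dmadev.h>

	#define COMP_BURST_SZ 32

	/* Gather one burst of completions on (dev_id, vchan). */
	static void
	gather_completions(int16_t dev_id, uint16_t vchan)
	{
		enum rte_dma_status_code status[COMP_BURST_SZ];
		uint16_t count, idx, status_count, i;
		bool error = false;

		/* Fast path: reports only how many operations succeeded. */
		count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error);
		printf("%u operations completed successfully, last index %u\n",
				count, idx);

		if (!error)
			return;

		/* Slow path: gather per-operation status codes. On IOAT this
		 * also resets the halted channel so processing can continue.
		 */
		status_count = rte_dma_completed_status(dev_id, vchan,
				COMP_BURST_SZ, &idx, status);
		for (i = 0; i < status_count; i++)
			if (status[i] != RTE_DMA_STATUS_SUCCESSFUL)
				printf("operation %u failed: status %d\n",
						idx - status_count + 1 + i,
						status[i]);
	}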