From: Amit Prakash Shukla
To: Vamsi Attunuru
Cc: Amit Prakash Shukla
Subject: [PATCH v1 6/7] dma/cnxk: support for DMA event enqueue dequeue
Date: Tue, 19 Sep 2023 19:12:21 +0530
Message-ID: <20230919134222.2500033-6-amitprakashs@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230919134222.2500033-1-amitprakashs@marvell.com>
References: <20230919134222.2500033-1-amitprakashs@marvell.com>
List-Id: DPDK patches and discussions

Added cnxk driver support for DMA event enqueue and dequeue.

Signed-off-by: Amit Prakash Shukla
---
 drivers/dma/cnxk/cnxk_dma_event_dp.h |  22 +++
 drivers/dma/cnxk/cnxk_dmadev.h       |   9 +-
 drivers/dma/cnxk/cnxk_dmadev_fp.c    | 209 +++++++++++++++++++++++++++
 drivers/dma/cnxk/meson.build         |   6 +-
 drivers/dma/cnxk/version.map         |   9 ++
 5 files changed, 253 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/cnxk/cnxk_dma_event_dp.h
 create mode 100644 drivers/dma/cnxk/version.map

diff --git a/drivers/dma/cnxk/cnxk_dma_event_dp.h b/drivers/dma/cnxk/cnxk_dma_event_dp.h
new file mode 100644
index 0000000000..bf9b01f8f1
--- /dev/null
+++ b/drivers/dma/cnxk/cnxk_dma_event_dp.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef _CNXK_DMA_EVENT_DP_H_
+#define _CNXK_DMA_EVENT_DP_H_
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_eventdev.h>
+
+__rte_internal
+uint16_t cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events);
+
+__rte_internal
+uint16_t cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events);
+
+__rte_internal
+uintptr_t cnxk_dma_adapter_dequeue(uintptr_t get_work1);
+
+#endif /* _CNXK_DMA_EVENT_DP_H_ */
diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h
index 75059b8843..9cba388d02 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.h
+++ b/drivers/dma/cnxk/cnxk_dmadev.h
@@ -40,6 +40,11 @@
  */
 #define CNXK_DPI_REQ_CDATA	0xFF
 
+/* Set completion data to 0xDEADBEEF when the request is submitted for SSO.
+ * This helps differentiate whether the dequeue is called after cnxk enqueue.
+ */
+#define CNXK_DPI_REQ_SSO_CDATA	0xDEADBEEF
+
 union cnxk_dpi_instr_cmd {
 	uint64_t u;
 	struct cn9k_dpi_instr_cmd {
@@ -85,7 +90,9 @@ union cnxk_dpi_instr_cmd {
 
 struct cnxk_dpi_compl_s {
 	uint64_t cdata;
-	void *cb_data;
+	void *op;
+	uint16_t dev_id;
+	uint16_t vchan;
 	uint32_t wqecs;
 };
 
diff --git a/drivers/dma/cnxk/cnxk_dmadev_fp.c b/drivers/dma/cnxk/cnxk_dmadev_fp.c
index 16d7b5426b..c7cd036a5b 100644
--- a/drivers/dma/cnxk/cnxk_dmadev_fp.c
+++ b/drivers/dma/cnxk/cnxk_dmadev_fp.c
@@ -5,6 +5,8 @@
 #include <rte_vect.h>
 
 #include "cnxk_dmadev.h"
+#include "cnxk_dma_event_dp.h"
+#include <rte_event_dma_adapter.h>
 
 static __plt_always_inline void
 __dpi_cpy_scalar(uint64_t *src, uint64_t *dst, uint8_t n)
@@ -434,3 +436,210 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 
 	return dpi_conf->desc_idx++;
 }
+
+uint16_t
+cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
+{
+	union rte_event_dma_metadata *dma_mdata;
+	struct rte_event_dma_request *req_info;
+	const struct rte_dma_sge *src, *dst;
+	struct rte_event_dma_adapter_op *op;
+	struct cnxk_dpi_compl_s *comp_ptr;
+	struct cnxk_dpi_conf *dpi_conf;
+	struct cnxk_dpi_vf_s *dpivf;
+	struct rte_event *rsp_info;
+	uint16_t nb_src, nb_dst;
+	struct rte_dma_dev *dev;
+	uint64_t hdr[4];
+	uint16_t count;
+	int rc;
+
+	PLT_SET_USED(ws);
+
+	for (count = 0; count < nb_events; count++) {
+		op = ev[count].event_ptr;
+		dma_mdata = (union rte_event_dma_metadata *)((uint8_t *)op +
+			     sizeof(struct rte_event_dma_adapter_op));
+		rsp_info = &dma_mdata->response_info;
+		req_info = &dma_mdata->request_info;
+		dev = rte_dma_pmd_dev_get(req_info->dma_dev_id);
+		dpivf = dev->data->dev_private;
+		dpi_conf = &dpivf->conf[req_info->vchan];
+
+		if (unlikely(((dpi_conf->c_desc.tail + 1) & dpi_conf->c_desc.max_cnt) ==
+			     dpi_conf->c_desc.head))
+			return count;
+
+		comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail];
+		CNXK_DPI_STRM_INC(dpi_conf->c_desc, tail);
+		comp_ptr->op = op;
+		comp_ptr->dev_id = req_info->dma_dev_id;
+		comp_ptr->vchan = req_info->vchan;
+		comp_ptr->cdata = CNXK_DPI_REQ_SSO_CDATA;
+
+		nb_src = op->nb_src & CNXK_DPI_MAX_POINTER;
+		nb_dst = op->nb_dst & CNXK_DPI_MAX_POINTER;
+
+		hdr[0] = dpi_conf->cmd.u | ((uint64_t)DPI_HDR_PT_WQP << 54);
+		hdr[0] |= (nb_dst << 6) | nb_src;
+		hdr[1] = ((uint64_t)comp_ptr);
+		hdr[2] = (RTE_EVENT_TYPE_DMADEV << 28 | (rsp_info->sub_event_type << 20) |
+			  rsp_info->flow_id);
+		hdr[2] |= ((uint64_t)(rsp_info->sched_type & DPI_HDR_TT_MASK)) << 32;
+		hdr[2] |= ((uint64_t)(rsp_info->queue_id & DPI_HDR_GRP_MASK)) << 34;
+
+		src = &op->src_seg[0];
+		dst = &op->dst_seg[0];
+
+		rc = __dpi_queue_write_sg(dpivf, hdr, src, dst, nb_src, nb_dst);
+		if (unlikely(rc)) {
+			CNXK_DPI_STRM_DEC(dpi_conf->c_desc, tail);
+			return rc;
+		}
+
+		if (op->flags & RTE_DMA_OP_FLAG_SUBMIT) {
+			rte_wmb();
+			plt_write64(dpi_conf->pnum_words + CNXK_DPI_CMD_LEN(nb_src, nb_dst),
+				    dpivf->rdpi.rbase + DPI_VDMA_DBELL);
+			dpi_conf->stats.submitted += dpi_conf->pending + 1;
+			dpi_conf->pnum_words = 0;
+			dpi_conf->pending = 0;
+		} else {
+			dpi_conf->pnum_words += CNXK_DPI_CMD_LEN(nb_src, nb_dst);
+			dpi_conf->pending++;
+		}
+	}
+
+	return count;
+}
+
+uint16_t
+cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
+{
+	union rte_event_dma_metadata *dma_mdata;
+	struct rte_event_dma_request *req_info;
+	const struct rte_dma_sge *fptr, *lptr;
+	struct rte_event_dma_adapter_op *op;
+	struct cnxk_dpi_compl_s *comp_ptr;
+	struct cnxk_dpi_conf *dpi_conf;
+	struct cnxk_dpi_vf_s *dpivf;
+	struct rte_event *rsp_info;
+	uint16_t nb_src, nb_dst;
+	struct rte_dma_dev *dev;
+	uint64_t hdr[4];
+	uint16_t count;
+	int rc;
+
+	PLT_SET_USED(ws);
+
+	for (count = 0; count < nb_events; count++) {
+		op = ev[count].event_ptr;
+		dma_mdata = (union rte_event_dma_metadata *)((uint8_t *)op +
+			     sizeof(struct rte_event_dma_adapter_op));
+		rsp_info = &dma_mdata->response_info;
+		req_info = &dma_mdata->request_info;
+		dev = rte_dma_pmd_dev_get(req_info->dma_dev_id);
+		dpivf = dev->data->dev_private;
+		dpi_conf = &dpivf->conf[req_info->vchan];
+
+		if (unlikely(((dpi_conf->c_desc.tail + 1) & dpi_conf->c_desc.max_cnt) ==
+			     dpi_conf->c_desc.head))
+			return count;
+
+		comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail];
+		CNXK_DPI_STRM_INC(dpi_conf->c_desc, tail);
+		comp_ptr->op = op;
+		comp_ptr->dev_id = req_info->dma_dev_id;
+		comp_ptr->vchan = req_info->vchan;
+		comp_ptr->cdata = CNXK_DPI_REQ_SSO_CDATA;
+
+		hdr[1] = dpi_conf->cmd.u | ((uint64_t)DPI_HDR_PT_WQP << 36);
+		hdr[2] = (uint64_t)comp_ptr;
+
+		nb_src = op->nb_src & CNXK_DPI_MAX_POINTER;
+		nb_dst = op->nb_dst & CNXK_DPI_MAX_POINTER;
+		/*
+		 * For inbound case, src pointers are last pointers.
+		 * For all other cases, src pointers are first pointers.
+		 */
+		if (((dpi_conf->cmd.u >> 48) & DPI_HDR_XTYPE_MASK) == DPI_XTYPE_INBOUND) {
+			fptr = &op->dst_seg[0];
+			lptr = &op->src_seg[0];
+			RTE_SWAP(nb_src, nb_dst);
+		} else {
+			fptr = &op->src_seg[0];
+			lptr = &op->dst_seg[0];
+		}
+
+		hdr[0] = ((uint64_t)nb_dst << 54) | (uint64_t)nb_src << 48;
+		hdr[0] |= (RTE_EVENT_TYPE_DMADEV << 28 | (rsp_info->sub_event_type << 20) |
+			   rsp_info->flow_id);
+		hdr[0] |= ((uint64_t)(rsp_info->sched_type & DPI_HDR_TT_MASK)) << 32;
+		hdr[0] |= ((uint64_t)(rsp_info->queue_id & DPI_HDR_GRP_MASK)) << 34;
+
+		rc = __dpi_queue_write_sg(dpivf, hdr, fptr, lptr, nb_src, nb_dst);
+		if (unlikely(rc)) {
+			CNXK_DPI_STRM_DEC(dpi_conf->c_desc, tail);
+			return rc;
+		}
+
+		if (op->flags & RTE_DMA_OP_FLAG_SUBMIT) {
+			rte_wmb();
+			plt_write64(dpi_conf->pnum_words + CNXK_DPI_CMD_LEN(nb_src, nb_dst),
+				    dpivf->rdpi.rbase + DPI_VDMA_DBELL);
+			dpi_conf->stats.submitted += dpi_conf->pending + 1;
+			dpi_conf->pnum_words = 0;
+			dpi_conf->pending = 0;
+		} else {
+			dpi_conf->pnum_words += CNXK_DPI_CMD_LEN(nb_src, nb_dst);
+			dpi_conf->pending++;
+		}
+	}
+
+	return count;
+}
+
+uintptr_t
+cnxk_dma_adapter_dequeue(uintptr_t get_work1)
+{
+	struct rte_event_dma_adapter_op *op;
+	struct cnxk_dpi_compl_s *comp_ptr;
+	struct cnxk_dpi_conf *dpi_conf;
+	struct cnxk_dpi_vf_s *dpivf;
+	struct rte_dma_dev *dev;
+	uint8_t *wqecs;
+
+	comp_ptr = (struct cnxk_dpi_compl_s *)get_work1;
+
+	/* Dequeue can be called without calling cnxk enqueue in case of
+	 * DMA adapter. When it is called from the adapter, the DMA op will
+	 * not be embedded in the completion pointer. In those cases, return op.
+	 */
+	if (comp_ptr->cdata != CNXK_DPI_REQ_SSO_CDATA)
+		return (uintptr_t)comp_ptr;
+
+	dev = rte_dma_pmd_dev_get(comp_ptr->dev_id);
+	dpivf = dev->data->dev_private;
+	dpi_conf = &dpivf->conf[comp_ptr->vchan];
+
+	wqecs = (uint8_t *)&comp_ptr->wqecs;
+	if (__atomic_load_n(wqecs, __ATOMIC_RELAXED) != 0)
+		dpi_conf->stats.errors++;
+
+	op = (struct rte_event_dma_adapter_op *)comp_ptr->op;
+
+	/* We are done here. Reset completion buffer. */
+	comp_ptr->wqecs = ~0;
+	comp_ptr->op = NULL;
+	comp_ptr->dev_id = ~0;
+	comp_ptr->vchan = ~0;
+	comp_ptr->cdata = CNXK_DPI_REQ_CDATA;
+
+	CNXK_DPI_STRM_INC(dpi_conf->c_desc, head);
+	/* Take into account errors also. This is similar to
+	 * cnxk_dmadev_completed_status().
+	 */
+	dpi_conf->stats.completed++;
+
+	return (uintptr_t)op;
+}
diff --git a/drivers/dma/cnxk/meson.build b/drivers/dma/cnxk/meson.build
index e557349368..9cf5453b0b 100644
--- a/drivers/dma/cnxk/meson.build
+++ b/drivers/dma/cnxk/meson.build
@@ -8,6 +8,10 @@ foreach flag: error_cflags
     endif
 endforeach
 
-deps += ['bus_pci', 'common_cnxk', 'dmadev']
+driver_sdk_headers = files(
+        'cnxk_dma_event_dp.h',
+)
+
+deps += ['bus_pci', 'common_cnxk', 'dmadev', 'eventdev']
 sources = files('cnxk_dmadev.c', 'cnxk_dmadev_fp.c')
 
 require_iova_in_mbuf = false
diff --git a/drivers/dma/cnxk/version.map b/drivers/dma/cnxk/version.map
new file mode 100644
index 0000000000..6cc1c6aaa5
--- /dev/null
+++ b/drivers/dma/cnxk/version.map
@@ -0,0 +1,9 @@
+INTERNAL {
+	global:
+
+	cn10k_dma_adapter_enqueue;
+	cn9k_dma_adapter_enqueue;
+	cnxk_dma_adapter_dequeue;
+
+	local: *;
+};
-- 
2.25.1