From: Amit Prakash Shukla <amitprakashs@marvell.com>
To: Vamsi Attunuru
CC: Amit Prakash Shukla <amitprakashs@marvell.com>, Radha Mohan Chintakuntla
Subject: [PATCH v2 4/7] dma/cnxk: increase vchan per queue to max 4
Date: Mon, 31 Jul 2023 17:42:22 +0530
Message-ID: <20230731121225.1545318-4-amitprakashs@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230731121225.1545318-1-amitprakashs@marvell.com>
References: <20230628171834.771431-1-amitprakashs@marvell.com> <20230731121225.1545318-1-amitprakashs@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

To support multiple transfer directions on the same queue, use multiple
vchans per queue. Each vchan can be configured for its own direction and
used independently.

Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
Signed-off-by: Radha Mohan Chintakuntla

---
v2:
- Fixed bugs observed in v1.
- Squashed a few commits.
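For context, a minimal usage sketch (not part of the patch) of how an application could drive the enlarged vchan count through the generic rte_dma API: query max_vchans, configure several vchans on one dmadev, and set each vchan up with its own direction. The device id, descriptor count and the helper name setup_multi_vchan are assumptions for illustration; MEM_TO_DEV/DEV_TO_MEM vchans would additionally need src_port/dst_port parameters, which are omitted here.

/* Illustrative sketch only: one dmadev, several vchans, one direction each. */
#include <rte_common.h>
#include <rte_dmadev.h>

static int
setup_multi_vchan(int16_t dev_id)
{
	struct rte_dma_info info;
	struct rte_dma_conf dev_conf = { 0 };
	struct rte_dma_vchan_conf m2m = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024, /* assumed value for the sketch */
	};
	int ret;

	ret = rte_dma_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	/* With this patch the driver reports up to MAX_VCHANS_PER_QUEUE (4). */
	dev_conf.nb_vchans = RTE_MIN(info.max_vchans, 4);
	ret = rte_dma_configure(dev_id, &dev_conf);
	if (ret < 0)
		return ret;

	/* vchan 0: mem-to-mem copies. The remaining vchans could be set up
	 * the same way with RTE_DMA_DIR_MEM_TO_DEV / RTE_DMA_DIR_DEV_TO_MEM,
	 * which also require the PCIe src_port/dst_port fields to be filled.
	 */
	ret = rte_dma_vchan_setup(dev_id, 0, &m2m);
	if (ret < 0)
		return ret;

	return rte_dma_start(dev_id);
}

Each vchan keeps its own cnxk_dpi_conf (instruction header template and completion descriptor ring) in the driver, so differently configured directions on the same queue do not interfere with each other.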
 drivers/dma/cnxk/cnxk_dmadev.c | 68 +++++++++++++++-------------------
 drivers/dma/cnxk/cnxk_dmadev.h | 11 +++---
 2 files changed, 36 insertions(+), 43 deletions(-)

diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c
index d8cfb98cd7..7d83b70e8b 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.c
+++ b/drivers/dma/cnxk/cnxk_dmadev.c
@@ -22,8 +22,8 @@ cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_inf
 	RTE_SET_USED(dev);
 	RTE_SET_USED(size);
 
-	dev_info->max_vchans = 1;
-	dev_info->nb_vchans = 1;
+	dev_info->max_vchans = MAX_VCHANS_PER_QUEUE;
+	dev_info->nb_vchans = MAX_VCHANS_PER_QUEUE;
 	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV |
 			     RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV |
 			     RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG;
@@ -65,13 +65,12 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 			const struct rte_dma_vchan_conf *conf, uint32_t conf_sz)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private;
-	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
 	union dpi_instr_hdr_s *header = &dpi_conf->hdr;
 	uint16_t max_desc;
 	uint32_t size;
 	int i;
 
-	RTE_SET_USED(vchan);
 	RTE_SET_USED(conf_sz);
 
 	if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG)
@@ -149,13 +148,12 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 			 const struct rte_dma_vchan_conf *conf, uint32_t conf_sz)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private;
-	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
 	union dpi_instr_hdr_s *header = &dpi_conf->hdr;
 	uint16_t max_desc;
 	uint32_t size;
 	int i;
 
-	RTE_SET_USED(vchan);
 	RTE_SET_USED(conf_sz);
 
 	if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG)
@@ -360,18 +358,17 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d
 		 uint64_t flags)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev_private;
-	union dpi_instr_hdr_s *header = &dpivf->conf.hdr;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
+	union dpi_instr_hdr_s *header = &dpi_conf->hdr;
 	struct cnxk_dpi_compl_s *comp_ptr;
 	uint64_t cmd[DPI_MAX_CMD_SIZE];
 	rte_iova_t fptr, lptr;
 	int num_words = 0;
 	int rc;
 
-	RTE_SET_USED(vchan);
-
-	comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail];
+	comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail];
 	header->cn9k.ptr = (uint64_t)comp_ptr;
-	STRM_INC(dpivf->conf.c_desc, tail);
+	STRM_INC(dpi_conf->c_desc, tail);
 
 	header->cn9k.nfst = 1;
 	header->cn9k.nlst = 1;
@@ -400,7 +397,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d
 
 	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
 	if (unlikely(rc)) {
-		STRM_DEC(dpivf->conf.c_desc, tail);
+		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
 	}
 
@@ -421,18 +418,17 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 		    const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev_private;
-	union dpi_instr_hdr_s *header = &dpivf->conf.hdr;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
+	union dpi_instr_hdr_s *header = &dpi_conf->hdr;
 	const struct rte_dma_sge *fptr, *lptr;
 	struct cnxk_dpi_compl_s *comp_ptr;
 	uint64_t cmd[DPI_MAX_CMD_SIZE];
 	int num_words = 0;
 	int i, rc;
 
-	RTE_SET_USED(vchan);
-
-	comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail];
+	comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail];
 	header->cn9k.ptr = (uint64_t)comp_ptr;
-	STRM_INC(dpivf->conf.c_desc, tail);
+	STRM_INC(dpi_conf->c_desc, tail);
 
 	/*
 	 * For inbound case, src pointers are last pointers.
@@ -468,7 +464,7 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 
 	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
 	if (unlikely(rc)) {
-		STRM_DEC(dpivf->conf.c_desc, tail);
+		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
 	}
 
@@ -489,18 +485,17 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t
 		  uint32_t length, uint64_t flags)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev_private;
-	union dpi_instr_hdr_s *header = &dpivf->conf.hdr;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
+	union dpi_instr_hdr_s *header = &dpi_conf->hdr;
 	struct cnxk_dpi_compl_s *comp_ptr;
 	uint64_t cmd[DPI_MAX_CMD_SIZE];
 	rte_iova_t fptr, lptr;
 	int num_words = 0;
 	int rc;
 
-	RTE_SET_USED(vchan);
-
-	comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail];
+	comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail];
 	header->cn10k.ptr = (uint64_t)comp_ptr;
-	STRM_INC(dpivf->conf.c_desc, tail);
+	STRM_INC(dpi_conf->c_desc, tail);
 
 	header->cn10k.nfst = 1;
 	header->cn10k.nlst = 1;
@@ -520,7 +515,7 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t
 
 	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
 	if (unlikely(rc)) {
-		STRM_DEC(dpivf->conf.c_desc, tail);
+		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
 	}
 
@@ -542,18 +537,17 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 		     uint64_t flags)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev_private;
-	union dpi_instr_hdr_s *header = &dpivf->conf.hdr;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
+	union dpi_instr_hdr_s *header = &dpi_conf->hdr;
 	const struct rte_dma_sge *fptr, *lptr;
 	struct cnxk_dpi_compl_s *comp_ptr;
 	uint64_t cmd[DPI_MAX_CMD_SIZE];
 	int num_words = 0;
 	int i, rc;
 
-	RTE_SET_USED(vchan);
-
-	comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail];
+	comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail];
 	header->cn10k.ptr = (uint64_t)comp_ptr;
-	STRM_INC(dpivf->conf.c_desc, tail);
+	STRM_INC(dpi_conf->c_desc, tail);
 
 	header->cn10k.nfst = nb_src & DPI_MAX_POINTER;
 	header->cn10k.nlst = nb_dst & DPI_MAX_POINTER;
@@ -579,7 +573,7 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 
 	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
 	if (unlikely(rc)) {
-		STRM_DEC(dpivf->conf.c_desc, tail);
+		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
 	}
 
@@ -600,12 +594,11 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls,
 		      bool *has_error)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev_private;
-	struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
+	struct cnxk_dpi_cdesc_data_s *c_desc = &dpi_conf->c_desc;
 	struct cnxk_dpi_compl_s *comp_ptr;
 	int cnt;
 
-	RTE_SET_USED(vchan);
-
 	for (cnt = 0; cnt < nb_cpls; cnt++) {
 		comp_ptr = c_desc->compl_ptr[c_desc->head];
 
@@ -633,11 +626,11 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n
 			     uint16_t *last_idx, enum rte_dma_status_code *status)
 {
 	struct cnxk_dpi_vf_s *dpivf = dev_private;
-	struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc;
+	struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
+	struct cnxk_dpi_cdesc_data_s *c_desc = &dpi_conf->c_desc;
 	struct cnxk_dpi_compl_s *comp_ptr;
 	int cnt;
 
-	RTE_SET_USED(vchan);
 	RTE_SET_USED(last_idx);
 
 	for (cnt = 0; cnt < nb_cpls; cnt++) {
@@ -663,11 +656,10 @@ static uint16_t
 cnxk_damdev_burst_capacity(const void *dev_private, uint16_t vchan)
 {
 	const struct cnxk_dpi_vf_s *dpivf = (const struct cnxk_dpi_vf_s *)dev_private;
+	const struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan];
 	uint16_t burst_cap;
 
-	RTE_SET_USED(vchan);
-
-	burst_cap = dpivf->conf.c_desc.max_cnt -
+	burst_cap = dpi_conf->c_desc.max_cnt -
 		((dpivf->stats.submitted - dpivf->stats.completed) + dpivf->pending) + 1;
 
 	return burst_cap;
diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h
index 9563295af0..4693960a19 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.h
+++ b/drivers/dma/cnxk/cnxk_dmadev.h
@@ -6,10 +6,11 @@
 
 #include <roc_api.h>
 
-#define DPI_MAX_POINTER	 15
-#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt)
-#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1))
-#define DPI_MAX_DESC	 1024
+#define DPI_MAX_POINTER	     15
+#define STRM_INC(s, var)     ((s).var = ((s).var + 1) & (s).max_cnt)
+#define STRM_DEC(s, var)     ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1))
+#define DPI_MAX_DESC	     1024
+#define MAX_VCHANS_PER_QUEUE 4
 
 /* Set Completion data to 0xFF when request submitted,
  * upon successful request completion engine reset to completion status
@@ -39,7 +40,7 @@ struct cnxk_dpi_conf {
 
 struct cnxk_dpi_vf_s {
 	struct roc_dpi rdpi;
-	struct cnxk_dpi_conf conf;
+	struct cnxk_dpi_conf conf[MAX_VCHANS_PER_QUEUE];
 	struct rte_dma_stats stats;
 	uint16_t pending;
 	uint16_t pnum_words;
-- 
2.25.1
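For reference, the STRM_INC/STRM_DEC macros above implement the head/tail wrap of each vchan's completion descriptor ring. Below is a small standalone illustration; the demo_cdesc struct and the 1023 mask value are assumptions for the demonstration only, and the scheme relies on max_cnt being a power of two minus one.

/* Standalone demo of the ring-index wrap; macros copied from cnxk_dmadev.h. */
#include <stdio.h>

#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt)
#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1))

struct demo_cdesc {
	int head;
	int tail;
	int max_cnt; /* ring size - 1, used as the wrap mask */
};

int main(void)
{
	struct demo_cdesc d = { .head = 0, .tail = 1023, .max_cnt = 1023 };

	STRM_INC(d, tail); /* 1023 -> 0: wraps forward via the mask */
	STRM_DEC(d, head); /* 0 -> 1023: wraps back on underflow */
	printf("tail=%d head=%d\n", d.tail, d.head);
	return 0;
}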