From: Amit Prakash Shukla <amitprakashs@marvell.com>
To: Vamsi Attunuru <vattunuru@marvell.com>
CC: <dev@dpdk.org>, <jerinj@marvell.com>, <ndabilpuram@marvell.com>,
 <anoobj@marvell.com>, Amit Prakash Shukla <amitprakashs@marvell.com>
Subject: [PATCH v1] dma/cnxk: offload source buffer free
Date: Thu, 7 Sep 2023 13:54:43 +0530
Message-ID: <20230907082443.1002665-1-amitprakashs@marvell.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Add support in the driver to offload freeing of the source buffer to
hardware on completion of a DMA transfer.

Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
---
Depends-on: series-29427 ("use mempool for DMA chunk pool")
Depends-on: series-29442 ("offload support to free dma source buffer")

v1:
- Driver implementation from RFC.
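
Usage sketch (illustrative only, not part of the patch): with this series
and the dependent dmadev series applied, an application passes the source
mempool via mem_to_dev_src_buf_pool at vchan setup and requests the free
offload per operation with RTE_DMA_OP_FLAG_FREE_SBUF. The pool must use
the cn9k/cn10k mempool ops; otherwise vchan setup fails with -EINVAL.
The function name and error handling below are hypothetical and trimmed
for brevity.

#include <rte_dmadev.h>
#include <rte_mempool.h>

static int
setup_and_copy(int16_t dev_id, struct rte_mempool *src_mp,
	       rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
	struct rte_dma_vchan_conf vconf = {
		.direction = RTE_DMA_DIR_MEM_TO_DEV,
		.nb_desc = 1024,
		.dst_port = {
			.port_type = RTE_DMA_PORT_PCIE,
			.pcie = { .coreid = 0 },
		},
		/* Pool backing the source buffers; hardware returns the
		 * buffers to this pool's aura when the transfer completes.
		 */
		.mem_to_dev_src_buf_pool = src_mp,
	};

	if (rte_dma_configure(dev_id, &dev_conf) < 0 ||
	    rte_dma_vchan_setup(dev_id, 0, &vconf) < 0 ||
	    rte_dma_start(dev_id) < 0)
		return -1;

	/* Per-op flag asks the driver to free the source buffer on
	 * completion instead of the application doing it.
	 */
	if (rte_dma_copy(dev_id, 0, src, dst, len,
			 RTE_DMA_OP_FLAG_FREE_SBUF) < 0)
		return -1;

	return rte_dma_submit(dev_id, 0);
}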

 drivers/dma/cnxk/cnxk_dmadev.c    | 48 +++++++++++++++++++++++++++----
 drivers/dma/cnxk/cnxk_dmadev_fp.c |  8 +++---
 2 files changed, 46 insertions(+), 10 deletions(-)

diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c
index 588b3783a9..3be1547793 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.c
+++ b/drivers/dma/cnxk/cnxk_dmadev.c
@@ -16,7 +16,8 @@ cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_inf
 	dev_info->nb_vchans = dpivf->num_vchans;
 	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV |
 			     RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV |
-			     RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG;
+			     RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG |
+			     RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE;
 	dev_info->max_desc = DPI_MAX_DESC;
 	dev_info->min_desc = DPI_MIN_DESC;
 	dev_info->max_sges = DPI_MAX_POINTER;
@@ -159,9 +160,26 @@ cnxk_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf,
 	return rc;
 }
 
-static void
+static int
+dmadev_src_buf_aura_get(struct rte_mempool *sb_mp, const char *mp_ops_name)
+{
+	struct rte_mempool_ops *ops;
+
+	if (sb_mp == NULL)
+		return 0;
+
+	ops = rte_mempool_get_ops(sb_mp->ops_index);
+	if (strcmp(ops->name, mp_ops_name) != 0)
+		return -EINVAL;
+
+	return roc_npa_aura_handle_to_aura(sb_mp->pool_id);
+}
+
+static int
 cn9k_dmadev_setup_hdr(union cnxk_dpi_instr_cmd *header, const struct rte_dma_vchan_conf *conf)
 {
+	int aura;
+
 	header->cn9k.pt = DPI_HDR_PT_ZBW_CA;
 
 	switch (conf->direction) {
@@ -184,6 +202,11 @@ cn9k_dmadev_setup_hdr(union cnxk_dpi_instr_cmd *header, const struct rte_dma_vch
 			header->cn9k.func = conf->dst_port.pcie.pfid << 12;
 			header->cn9k.func |= conf->dst_port.pcie.vfid;
 		}
+		aura = dmadev_src_buf_aura_get(conf->mem_to_dev_src_buf_pool, "cn9k_mempool_ops");
+		if (aura < 0)
+			return aura;
+		header->cn9k.aura = aura;
+		header->cn9k.ii = 1;
 		break;
 	case RTE_DMA_DIR_MEM_TO_MEM:
 		header->cn9k.xtype = DPI_XTYPE_INTERNAL_ONLY;
@@ -197,11 +220,15 @@ cn9k_dmadev_setup_hdr(union cnxk_dpi_instr_cmd *header, const struct rte_dma_vch
 		header->cn9k.fport = conf->dst_port.pcie.coreid;
 		header->cn9k.pvfe = 0;
 	};
+
+	return 0;
 }
 
-static void
+static int
 cn10k_dmadev_setup_hdr(union cnxk_dpi_instr_cmd *header, const struct rte_dma_vchan_conf *conf)
 {
+	int aura;
+
 	header->cn10k.pt = DPI_HDR_PT_ZBW_CA;
 
 	switch (conf->direction) {
@@ -224,6 +251,10 @@ cn10k_dmadev_setup_hdr(union cnxk_dpi_instr_cmd *header, const struct rte_dma_vc
 			header->cn10k.func = conf->dst_port.pcie.pfid << 12;
 			header->cn10k.func |= conf->dst_port.pcie.vfid;
 		}
+		aura = dmadev_src_buf_aura_get(conf->mem_to_dev_src_buf_pool, "cn10k_mempool_ops");
+		if (aura < 0)
+			return aura;
+		header->cn10k.aura = aura;
 		break;
 	case RTE_DMA_DIR_MEM_TO_MEM:
 		header->cn10k.xtype = DPI_XTYPE_INTERNAL_ONLY;
@@ -237,6 +268,8 @@ cn10k_dmadev_setup_hdr(union cnxk_dpi_instr_cmd *header, const struct rte_dma_vc
 		header->cn10k.fport = conf->dst_port.pcie.coreid;
 		header->cn10k.pvfe = 0;
 	};
+
+	return 0;
 }
 
 static int
@@ -248,7 +281,7 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 	union cnxk_dpi_instr_cmd *header;
 	uint16_t max_desc;
 	uint32_t size;
-	int i;
+	int i, ret;
 
 	RTE_SET_USED(conf_sz);
 
@@ -257,9 +290,12 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 		return 0;
 
 	if (dpivf->is_cn10k)
-		cn10k_dmadev_setup_hdr(header, conf);
+		ret = cn10k_dmadev_setup_hdr(header, conf);
 	else
-		cn9k_dmadev_setup_hdr(header, conf);
+		ret = cn9k_dmadev_setup_hdr(header, conf);
+
+	if (ret)
+		return ret;
 
 	/* Free up descriptor memory before allocating. */
 	cnxk_dmadev_vchan_free(dpivf, vchan);
diff --git a/drivers/dma/cnxk/cnxk_dmadev_fp.c b/drivers/dma/cnxk/cnxk_dmadev_fp.c
index d1f27ba2a6..5049ad503d 100644
--- a/drivers/dma/cnxk/cnxk_dmadev_fp.c
+++ b/drivers/dma/cnxk/cnxk_dmadev_fp.c
@@ -271,7 +271,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d
 	STRM_INC(dpi_conf->c_desc, tail);
 
 	cmd[0] = (1UL << 54) | (1UL << 48);
-	cmd[1] = dpi_conf->cmd.u;
+	cmd[1] = dpi_conf->cmd.u | ((flags & RTE_DMA_OP_FLAG_FREE_SBUF) << 37);
 	cmd[2] = (uint64_t)comp_ptr;
 	cmd[4] = length;
 	cmd[6] = length;
@@ -327,7 +327,7 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 	comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail];
 	STRM_INC(dpi_conf->c_desc, tail);
 
-	hdr[1] = dpi_conf->cmd.u;
+	hdr[1] = dpi_conf->cmd.u | ((flags & RTE_DMA_OP_FLAG_FREE_SBUF) << 37);
 	hdr[2] = (uint64_t)comp_ptr;
 
 	/*
@@ -384,7 +384,7 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t
 
 	cmd[0] = dpi_conf->cmd.u | (1U << 6) | 1U;
 	cmd[1] = (uint64_t)comp_ptr;
-	cmd[2] = 0;
+	cmd[2] = (1UL << 47) | ((flags & RTE_DMA_OP_FLAG_FREE_SBUF) << 43);
 	cmd[4] = length;
 	cmd[5] = src;
 	cmd[6] = length;
@@ -431,7 +431,7 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 
 	hdr[0] = dpi_conf->cmd.u | (nb_dst << 6) | nb_src;
 	hdr[1] = (uint64_t)comp_ptr;
-	hdr[2] = 0;
+	hdr[2] = (1UL << 47) | ((flags & RTE_DMA_OP_FLAG_FREE_SBUF) << 43);
 
 	rc = __dpi_queue_write_sg(dpivf, hdr, src, dst, nb_src, nb_dst);
 	if (unlikely(rc)) {
-- 
2.25.1