From mboxrd@z Thu Jan 1 00:00:00 1970
From: Amit Prakash Shukla <amitprakashs@marvell.com>
To: Chengwen Feng, Kevin Laatz, Bruce Richardson
Cc: dev@dpdk.org, Amit Prakash Shukla
Subject: [RFC PATCH] dmadev: offload to free source buffer
Date: Wed, 9 Aug 2023 11:38:35 +0530
Message-ID: <20230809060835.2030833-1-amitprakashs@marvell.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>

This changeset adds support in the DMA library for freeing the source DMA
buffer in hardware. On supported hardware, the application can pass the
mempool information as part of the vchan config when the DMA transfer
direction is configured as RTE_DMA_DIR_MEM_TO_DEV.

Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
---
 lib/dmadev/rte_dmadev.h | 45 +++++++++++++++++++++++++++++------------
 1 file changed, 32 insertions(+), 13 deletions(-)

diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index e61d71959e..98539e5830 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -241,26 +241,26 @@ int16_t rte_dma_next_dev(int16_t start_dev_id);
  * @see struct rte_dma_info::dev_capa
  */
 /** Support memory-to-memory transfer */
-#define RTE_DMA_CAPA_MEM_TO_MEM RTE_BIT64(0)
+#define RTE_DMA_CAPA_MEM_TO_MEM			RTE_BIT64(0)
 /** Support memory-to-device transfer. */
-#define RTE_DMA_CAPA_MEM_TO_DEV RTE_BIT64(1)
+#define RTE_DMA_CAPA_MEM_TO_DEV			RTE_BIT64(1)
 /** Support device-to-memory transfer. */
-#define RTE_DMA_CAPA_DEV_TO_MEM RTE_BIT64(2)
+#define RTE_DMA_CAPA_DEV_TO_MEM			RTE_BIT64(2)
 /** Support device-to-device transfer. */
-#define RTE_DMA_CAPA_DEV_TO_DEV RTE_BIT64(3)
+#define RTE_DMA_CAPA_DEV_TO_DEV			RTE_BIT64(3)
 /** Support SVA which could use VA as DMA address.
  * If device support SVA then application could pass any VA address like memory
  * from rte_malloc(), rte_memzone(), malloc, stack memory.
  * If device don't support SVA, then application should pass IOVA address which
  * from rte_malloc(), rte_memzone().
  */
-#define RTE_DMA_CAPA_SVA RTE_BIT64(4)
+#define RTE_DMA_CAPA_SVA			RTE_BIT64(4)
 /** Support work in silent mode.
  * In this mode, application don't required to invoke rte_dma_completed*()
  * API.
  * @see struct rte_dma_conf::silent_mode
  */
-#define RTE_DMA_CAPA_SILENT RTE_BIT64(5)
+#define RTE_DMA_CAPA_SILENT			RTE_BIT64(5)
 /** Supports error handling
  *
  * With this bit set, invalid input addresses will be reported as operation failures
@@ -268,16 +268,18 @@ int16_t rte_dma_next_dev(int16_t start_dev_id);
  * Without this bit set, invalid data is not handled by either HW or driver, so user
  * must ensure that all memory addresses are valid and accessible by HW.
  */
-#define RTE_DMA_CAPA_HANDLES_ERRORS RTE_BIT64(6)
+#define RTE_DMA_CAPA_HANDLES_ERRORS		RTE_BIT64(6)
 /** Support copy operation.
  * This capability start with index of 32, so that it could leave gap between
  * normal capability and ops capability.
  */
-#define RTE_DMA_CAPA_OPS_COPY RTE_BIT64(32)
+#define RTE_DMA_CAPA_OPS_COPY			RTE_BIT64(32)
 /** Support scatter-gather list copy operation. */
-#define RTE_DMA_CAPA_OPS_COPY_SG RTE_BIT64(33)
+#define RTE_DMA_CAPA_OPS_COPY_SG		RTE_BIT64(33)
 /** Support fill operation. */
-#define RTE_DMA_CAPA_OPS_FILL RTE_BIT64(34)
+#define RTE_DMA_CAPA_OPS_FILL			RTE_BIT64(34)
+/** Support for source buffer free for mem to dev transfer. */
+#define RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE RTE_BIT64(35)
 /**@}*/

 /**
@@ -582,6 +584,16 @@ struct rte_dma_vchan_conf {
	 * @see struct rte_dma_port_param
	 */
	struct rte_dma_port_param dst_port;
+	/** mempool from which source buffer is allocated. mempool info is used
+	 * for freeing source buffer by hardware when configured direction is
+	 * RTE_DMA_DIR_MEM_TO_DEV. To free the source buffer by hardware,
+	 * RTE_DMA_OP_FLAG_FREE_SBUF must be set while calling rte_dma_copy and
+	 * rte_dma_copy_sg().
+	 *
+	 * @see RTE_DMA_OP_FLAG_FREE_SBUF
+	 */
+	struct rte_mempool *mem_to_dev_src_buf_pool;
+
 };

 /**
@@ -808,17 +820,24 @@ struct rte_dma_sge {
  * If the specify DMA HW works in-order (it means it has default fence between
  * operations), this flag could be NOP.
  */
-#define RTE_DMA_OP_FLAG_FENCE RTE_BIT64(0)
+#define RTE_DMA_OP_FLAG_FENCE		RTE_BIT64(0)
 /** Submit flag.
  * It means the operation with this flag must issue doorbell to hardware after
  * enqueued jobs.
  */
-#define RTE_DMA_OP_FLAG_SUBMIT RTE_BIT64(1)
+#define RTE_DMA_OP_FLAG_SUBMIT		RTE_BIT64(1)
 /** Write data to low level cache hint.
  * Used for performance optimization, this is just a hint, and there is no
  * capability bit for this, driver should not return error if this flag was set.
  */
-#define RTE_DMA_OP_FLAG_LLC RTE_BIT64(2)
+#define RTE_DMA_OP_FLAG_LLC		RTE_BIT64(2)
+/** Mem to dev source buffer free flag.
+ * Used for freeing source DMA buffer by hardware when the transfer direction is
+ * configured as RTE_DMA_DIR_MEM_TO_DEV.
+ *
+ * @see struct rte_dma_vchan_conf::mem_to_dev_src_buf_pool
+ */
+#define RTE_DMA_OP_FLAG_FREE_SBUF	RTE_BIT64(3)
 /**@}*/

 /**
--
2.25.1
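For context, a usage sketch (not part of the patch) of how an application might drive the proposed API. The helper names `setup_free_sbuf_vchan()` and `copy_and_free()` are hypothetical, the PCIe `dst_port` coordinates are device-specific and omitted, and error handling is abbreviated; only `RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE`, `mem_to_dev_src_buf_pool`, and `RTE_DMA_OP_FLAG_FREE_SBUF` come from this patch, the rest is the existing dmadev API:

```c
#include <errno.h>
#include <rte_dmadev.h>
#include <rte_mempool.h>

/* Configure a MEM_TO_DEV vchan whose source buffers the hardware may
 * return to "pool" on completion. dev_id is assumed to have been
 * probed and rte_dma_configure()d elsewhere. */
static int
setup_free_sbuf_vchan(int16_t dev_id, uint16_t vchan, struct rte_mempool *pool)
{
	struct rte_dma_info info;
	struct rte_dma_vchan_conf conf = {
		.direction = RTE_DMA_DIR_MEM_TO_DEV,
		.nb_desc = 1024,
		/* .dst_port would carry the device-specific PCIe
		 * coordinates; omitted in this sketch. */
	};

	if (rte_dma_info_get(dev_id, &info) != 0)
		return -EINVAL;

	/* Only request hardware buffer free when the capability bit
	 * added by this patch is advertised. */
	if (!(info.dev_capa & RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE))
		return -ENOTSUP;

	conf.mem_to_dev_src_buf_pool = pool;
	return rte_dma_vchan_setup(dev_id, vchan, &conf);
}

/* Enqueue a copy; the new op flag asks hardware to free the source
 * buffer back to the configured mempool once the transfer completes. */
static int
copy_and_free(int16_t dev_id, uint16_t vchan,
	      rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	return rte_dma_copy(dev_id, vchan, src, dst, len,
			    RTE_DMA_OP_FLAG_FREE_SBUF |
			    RTE_DMA_OP_FLAG_SUBMIT);
}
```

Note the capability check: on hardware without the bit set, a driver is free to reject or ignore `RTE_DMA_OP_FLAG_FREE_SBUF`, so the application should fall back to freeing source buffers itself after `rte_dma_completed()`.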