From: fengchengwen
To: Gagandeep Singh, thomas@monjalon.net, ferruh.yigit@intel.com,
 bruce.richardson@intel.com, jerinj@marvell.com, jerinjacobk@gmail.com,
 andrew.rybchenko@oktetlabs.ru
CC: dev@dpdk.org, mb@smartsharesystems.com, Nipun Gupta, Hemant Agrawal,
 maxime.coquelin@redhat.com, honnappa.nagarahalli@arm.com,
 david.marchand@redhat.com, sburla@marvell.com, pkapoor@marvell.com,
 konstantin.ananyev@intel.com, conor.walsh@intel.com
Subject: Re: [dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device library
 public APIs
Date: Sat, 4 Sep 2021 09:31:40 +0800
Message-ID: <86ab7cee-0adb-0e44-94f5-1931f1f8082b@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <1630588395-2804-1-git-send-email-fengchengwen@huawei.com>
 <1630588395-2804-2-git-send-email-fengchengwen@huawei.com>

On 2021/9/3 19:42, Gagandeep Singh wrote:
> Hi,
>
>
>> +
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice.
>> + *
>> + * Close a DMA device.
>> + *
>> + * The device cannot be restarted after this call.
>> + *
>> + * @param dev_id
>> + *   The identifier of the device.
>> + *
>> + * @return
>> + *   0 on success. Otherwise negative value is returned.
>> + */
>> +__rte_experimental
>> +int
>> +rte_dmadev_close(uint16_t dev_id);
>> +
>> +/**
>> + * rte_dma_direction - DMA transfer direction defines.
>> + */
>> +enum rte_dma_direction {
>> +	RTE_DMA_DIR_MEM_TO_MEM,
>> +	/**< DMA transfer direction - from memory to memory.
>> +	 *
>> +	 * @see struct rte_dmadev_vchan_conf::direction
>> +	 */
>> +	RTE_DMA_DIR_MEM_TO_DEV,
>> +	/**< DMA transfer direction - from memory to device.
>> +	 * In a typical scenario, the SoCs are installed on host servers as
>> +	 * iNICs through the PCIe interface. In this case, the SoC works in
>> +	 * EP (endpoint) mode and can initiate a DMA move request from
>> +	 * memory (SoC memory) to the device (host memory).
>> +	 *
>> +	 * @see struct rte_dmadev_vchan_conf::direction
>> +	 */
>> +	RTE_DMA_DIR_DEV_TO_MEM,
>> +	/**< DMA transfer direction - from device to memory.
>> +	 * In a typical scenario, the SoCs are installed on host servers as
>> +	 * iNICs through the PCIe interface. In this case, the SoC works in
>> +	 * EP (endpoint) mode and can initiate a DMA move request from the
>> +	 * device (host memory) to memory (SoC memory).
>> +	 *
>> +	 * @see struct rte_dmadev_vchan_conf::direction
>> +	 */
>> +	RTE_DMA_DIR_DEV_TO_DEV,
>> +	/**< DMA transfer direction - from device to device.
>> +	 * In a typical scenario, the SoCs are installed on host servers as
>> +	 * iNICs through the PCIe interface. In this case, the SoC works in
>> +	 * EP (endpoint) mode and can initiate a DMA move request from a
>> +	 * device (host memory) to another device (another host memory).
>> +	 *
>> +	 * @see struct rte_dmadev_vchan_conf::direction
>> +	 */
>> +};
>> +
>> +/**
>> ..
> The enum rte_dma_direction must have a member RTE_DMA_DIR_ANY for a
> channel that supports all 4 directions.

We've discussed this issue before. The earliest proposal was to let one
channel support multiple directions, but no hardware/driver actually used
this (at least at that time): drivers such as octeontx2_dma and dpaa all
set up one logical channel to serve a single transfer direction.

So, does your driver actually need this? If there is a strong need, we
could consider the following options.

Once a channel is set up, there is no other parameter to indicate a copy
request's transfer direction, so I think defining RTE_DMA_DIR_ANY alone is
not enough. We could also add RTE_DMA_OP_xxx macros
(RTE_DMA_OP_FLAG_M2M/M2D/D2M/D2D) which would be passed as the flags
parameter of the enqueue API, so that the enqueue API knows which transfer
direction each request corresponds to.

We can easily extend the existing framework as follows:
a. Define the capability RTE_DMADEV_CAPA_DIR_ANY, which devices that
   support the feature can declare.
b. Define the direction macro RTE_DMA_DIR_ANY.
c. Define the op flags RTE_DMA_OP_FLAG_DIR_M2M/M2D/D2M/D2D, passed via the
   flags parameter of the enqueue API.

A driver which doesn't support the feature simply doesn't declare it; the
framework then ensures that RTE_DMA_DIR_ANY is never passed down, and the
driver can ignore the RTE_DMA_OP_FLAG_DIR_xxx flags in its enqueue path.

A driver which does support it lets the application create a channel with
either RTE_DMA_DIR_ANY or RTE_DMA_DIR_MEM_TO_MEM. If created with
RTE_DMA_DIR_ANY, the driver must honor the RTE_DMA_OP_FLAG_DIR_xxx flags;
if created with RTE_DMA_DIR_MEM_TO_MEM, those flags can be ignored.
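To make a-c concrete, here is a rough sketch of how the pieces could fit
together. Only the macro names come from the points above; the bit
positions and values are placeholders, and the enqueue signature follows
the v19 series only approximately -- none of this exists in the patch set.

#include <rte_dmadev.h>

/* Proposed additions (points a-c above); placeholder values only,
 * not part of the v19 patch set. */
#define RTE_DMADEV_CAPA_DIR_ANY   (1ull << 7)   /* a. device capability     */
#define RTE_DMA_DIR_ANY           4             /* b. extra direction value */
#define RTE_DMA_OP_FLAG_DIR_M2M   (1ull << 16)  /* c. per-request direction */
#define RTE_DMA_OP_FLAG_DIR_M2D   (1ull << 17)
#define RTE_DMA_OP_FLAG_DIR_D2M   (1ull << 18)
#define RTE_DMA_OP_FLAG_DIR_D2D   (1ull << 19)

/*
 * Application view: the vchan was created with a conf whose direction is
 * RTE_DMA_DIR_ANY, e.g.
 *   struct rte_dmadev_vchan_conf conf = { .direction = RTE_DMA_DIR_ANY };
 * so each request must carry its own direction in the enqueue flags.
 */
static inline int
copy_dev_to_mem(uint16_t dev_id, uint16_t vchan,
		rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	return rte_dmadev_copy(dev_id, vchan, src, dst, len,
			       RTE_DMA_OP_FLAG_DIR_D2M);
}

A driver that does not declare RTE_DMADEV_CAPA_DIR_ANY never sees such a
channel, so its enqueue fast path can keep ignoring the direction bits and
the common mem-to-mem case is unchanged.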