From: Bruce Richardson
To: Jerin Jacob
Cc: fengchengwen, Jerin Jacob, Morten Brørup, Nipun Gupta,
 Thomas Monjalon, Ferruh Yigit, dpdk-dev, Hemant Agrawal,
 Maxime Coquelin, Honnappa Nagarahalli, David Marchand,
 Satananda Burla, Prasun Kapoor
Date: Thu, 1 Jul 2021 17:33:30 +0100
Subject: Re: [dpdk-dev] dmadev discussion summary

On Thu, Jul 01, 2021 at 08:31:00PM +0530, Jerin Jacob wrote:
> On Sat, Jun 26, 2021 at 9:29 AM fengchengwen wrote:
> >
> > Hi, all
> >   I analyzed the current DPDK DMA drivers and drew up this summary in
> > conjunction with the previous discussion; it will serve as the basis
> > for the V2 implementation. Feedback is welcome, thanks.
>
> Thanks for the write-up.
>
> > [queue_setup]: creates one virt-queue, with the following main
> > parameters:
> >     HW-queue-index: the HW-queue index used
> >     nb_desc: the number of HW descriptors
> >     opaque: driver-specific info
> >     Note1: this API returns a virt-queue index which is used in the
> >            later APIs. If the user wants to create multiple
> >            virt-queues on the same HW-queue, they can do so by
> >            calling queue_setup with the same HW-queue-index.
> >     Note2: I think it's hard to define the queue_setup config
> >            parameters, and since this is a control API, I think it's
> >            OK to use an opaque pointer to implement it.
> > [dma_copy/memset/sg]: all have a vq_id input parameter.
> >     Note: I notice dpaa can't support single and sg in one
> >           virt-queue, and I think that is maybe a software
> >           implementation policy rather than a HW restriction, because
> >           virt-queues can share the same HW-queue. Here we use vq_id
> >           to handle different scenarios, like local-to-local,
> >           local-to-host, etc.
>
> IMO, the index representation has an additional overhead, as one needs
> to translate it to a memory pointer. I prefer to avoid that by having
> an object handle, and using a _lookup() API to make it work in
> multi-process cases, avoiding the additional indirection - like
> mempool objects.
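
For concreteness, the two styles being compared would look roughly like
this (a sketch only - none of these functions exist yet, and the
handle-style names are invented purely for illustration):

    /* index style: queue_setup returns a small integer id, which the
     * app passes to every data-plane call on that device */
    int vq_id = rte_dmadev_queue_setup(dev, &cfg);
    cookie = rte_dmadev_memcpy(dev, vq_id, src, dst, len, flags);

    /* handle style: the app obtains an object pointer once - via a
     * mempool-like _lookup() in secondary processes - and passes it
     * directly to the data-plane calls */
    struct rte_dmadev_queue *q = rte_dmadev_queue_lookup(dev, "vq0");
    cookie = rte_dmadev_memcpy(q, src, dst, len, flags);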

While it doesn't matter that much to me which is chosen, an index seems
cleaner to me and more consistent with other device types in DPDK.
Couldn't the objects pointed to by the memory pointers you refer to
just be stored in an internal array in your driver and accessed
directly by index, saving that layer of indirection?

> > 5) And the dmadev public data-plane API (just a prototype):
> >    dma_cookie_t rte_dmadev_memset(dev, vq_id, pattern, dst, len,
> >                                   flags)
> >      -- flags: used as an extended parameter; it could be uint32_t
> >    dma_cookie_t rte_dmadev_memcpy(dev, vq_id, src, dst, len, flags)
> >    dma_cookie_t rte_dmadev_memcpy_sg(dev, vq_id, sg, sg_len, flags)
> >      -- sg: struct dma_scatterlist array
> >    uint16_t rte_dmadev_completed(dev, vq_id, dma_cookie_t *cookie,
> >                                  uint16_t nb_cpls, bool *has_error)
> >      -- nb_cpls: indicates the max number of operations to process
> >      -- has_error: indicates if there is an error
> >      -- return value: the number of successfully completed
> >         operations
> >      -- examples:
> >         1) If there are already 32 completed ops, the 4th is an
> >            error, and nb_cpls is 32, then the return value will be 3
> >            (because the 1st/2nd/3rd are OK), and has_error will be
> >            true.
> >         2) If there are already 32 completed ops, all successfully
> >            completed, then the return value will be min(32, nb_cpls),
> >            and has_error will be false.
> >         3) If there are already 32 completed ops, and all failed,
> >            then the return value will be 0, and has_error will be
> >            true.
>
> +1. IMO, it is better to call it ring_idx instead of a cookie, to
> enforce that it is the ring index.
>
+1, I like that name better too.

> >    uint16_t rte_dmadev_completed_status(dev_id, vq_id,
> >                                         dma_cookie_t *cookie,
> >                                         uint16_t nb_status,
> >                                         uint32_t *status)
> >      -- return value: the number of failed completed operations
>
> See above. Here we are assuming it is an index; otherwise we need to
> pass an array of cookies.
>
> > And here I agree with Morten: we should design an API which adapts
> > to DPDK service scenarios. So we don't support some sound-card DMA,
> > or the 2D memory copy which is mainly used in video scenarios.
> > 6) The dma_cookie_t is a signed int type; when <0 it means error. It
> > increases monotonically based on the HW-queue (rather than the
> > virt-queue). The driver needs to ensure this, because the dmadev
> > framework doesn't manage the dma_cookie's creation.
>
> +1 and see above.
>
> > 7) Because the data-plane APIs are not thread-safe, and the user can
> > determine the virt-queue to HW-queue mapping (at the queue-setup
> > stage), it is the user's duty to ensure thread safety.
>
> +1. But I am not sure how easy it is for the fast-path application to
> have this logic. Instead, I think it is better for the driver to
> advertise the capability per queue, and in the channel configuration
> the application can request what it requires (whether or not multiple
> producers enqueue to the same HW queue). Based on the request, the
> implementation can pick the correct function pointer for enqueue (a
> lock vs lockless version, if the HW does not support lockless).
>
Non-thread-safety is the norm in DPDK for all other queue resources;
however, having multi-thread safety as a capability sounds reasonable.
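
To make that concrete, the driver could select the enqueue path at
configure time along these lines (the flag, capability and function
names here are all invented for the sketch):

    /* app requested MT-safe enqueue, but the HW has no lockless MT
     * support: fall back to a locked wrapper, else use the fast path */
    if ((conf->flags & VQ_F_MT_ENQ) && !(info.capa & CAPA_HW_MT_ENQ))
        dev->enqueue = dmadev_enqueue_locked; /* spinlock per enqueue */
    else
        dev->enqueue = dmadev_enqueue;        /* lockless fast path */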

> > 8) One example:
> >      vq_id = rte_dmadev_queue_setup(dev,
> >                  config.{HW-queue-index=x, opaque});
> >      if (vq_id < 0) {
> >          // creating the virt-queue failed
> >          return;
> >      }
> >      // submit a memcpy task
> >      cookie = rte_dmadev_memcpy(dev, vq_id, src, dst, len, flags);
> >      if (cookie < 0) {
> >          // submit failed
> >          return;
> >      }
> >      // poll for a completed task
> >      ret = rte_dmadev_completed(dev, vq_id, &cookie, 1, &has_error);
> >      if (!has_error && ret == 1) {
> >          // the memcpy completed successfully
> >      }
>
> +1
>
> > 9) As octeontx2_dma supports a sg-list which has many valid buffers
> > in dpi_dma_buf_ptr_s, it could call the rte_dmadev_memcpy_sg API.
>
> +1
>
> > 10) As for ioat, it could declare support for one HW-queue at the
> > dev_configure stage, and only support creating one virt-queue.
> > 11) As for dpaa2_qdma, I think it could migrate to the new
> > framework, but we are still waiting for feedback from the dpaa2_qdma
> > guys.
> > 12) About the prototype src/dst parameters of the rte_dmadev_memcpy
> > API, we have two candidates, iova and void *; how about introducing
> > a dma_addr_t type which could be a VA or an IOVA?
>
> I think the conversion looks ugly; better to have void * and share the
> constraints of void * as a limitation/capability using a flag, so that
> the driver can update it.
>
I'm OK with either rte_iova_t or void * as the parameter type. Let's
not define a new type though, and +1 to just using capabilities to
define what kinds of addressing are supported by the device instances.

/Bruce
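
P.S. From the application side, the capability-based addressing could
look something like the below (the info struct, capability flag and
info_get call are invented for this sketch; only rte_mem_virt2iova()
is an existing DPDK function):

    struct rte_dmadev_info info;   /* hypothetical info struct */

    rte_dmadev_info_get(dev, &info);
    if (info.capa & RTE_DMADEV_CAPA_VA) {
        /* device can dereference virtual addresses directly */
        cookie = rte_dmadev_memcpy(dev, vq_id, src, dst, len, flags);
    } else {
        /* device needs IO addresses; translate before submitting */
        cookie = rte_dmadev_memcpy(dev, vq_id,
                     (void *)(uintptr_t)rte_mem_virt2iova(src),
                     (void *)(uintptr_t)rte_mem_virt2iova(dst),
                     len, flags);
    }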