From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 16 Jun 2021 20:10:03 +0100
From: Bruce Richardson
To: Honnappa Nagarahalli
Cc: David Marchand, Chengwen Feng, "thomas@monjalon.net", "Yigit, Ferruh", dev, "Nipun.gupta@nxp.com", "hemant.agrawal@nxp.com", Maxime Coquelin, "jerinj@marvell.com", Jerin Jacob, nd
Message-ID: 
References: <1623763327-30987-1-git-send-email-fengchengwen@huawei.com>
In-Reply-To: 
Subject: Re: [dpdk-dev] [RFC PATCH] dmadev: introduce DMA device library
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Wed, Jun 16, 2021 at 04:48:59PM +0000, Honnappa Nagarahalli wrote:
> >
> >
> > On Wed, Jun 16, 2021 at 02:14:54PM +0200, David Marchand wrote:
> > > On Tue, Jun 15, 2021 at 3:25 PM Chengwen Feng wrote:
> > > > +
> > > > +#define RTE_DMADEV_NAME_MAX_LEN (64)
> > > > +/**< @internal Max length of name of DMA PMD */
> > > > +
> > > > +/** @internal
> > > > + * The data structure associated with each DMA device.
> > > > + */
> > > > +struct rte_dmadev {
> > > > +	/**< Device ID for this instance */
> > > > +	uint16_t dev_id;
> > > > +	/**< Functions exported by PMD */
> > > > +	const struct rte_dmadev_ops *dev_ops;
> > > > +	/**< Device info. supplied during device initialization */
> > > > +	struct rte_device *device;
> > > > +	/**< Driver info. supplied by probing */
> > > > +	const char *driver_name;
> > > > +
> > > > +	/**< Device name */
> > > > +	char name[RTE_DMADEV_NAME_MAX_LEN];
> > > > +} __rte_cache_aligned;
> > > > +
> > >
> > > I see no queue/channel notion.
> > > How does a rte_dmadev object relate to a physical hw engine?
> > >
> > One queue, one device.
> > When looking to update the ioat driver for the 20.11 release, when I added
> > the idxd part, I considered adding a queue parameter to the API to look
> > like one device with multiple queues. However, since each queue acts
> > completely independently of the others, there was no benefit to doing so.
> > It's just easier to have a single id to identify a device queue.
>
> Does it mean the queue is multi-thread safe? Do we need queues per core to
> avoid locking?

The design is for each queue to be like a queue on a NIC: not thread-safe.
However, if the hardware supports thread-safe queues too, that can be
supported. Either way, the API should be like other data-plane APIs and be
lock-free.

For the DMA devices that I am working on, the number of queues is not very
large, and in most cases each queue appears as a separate entity, e.g. for
ioat each queue/channel appears as a separate PCI ID, and when using the
idxd kernel driver each queue is a separate dev node to mmap. For the other
cases, right now we just create one rawdev instance per queue in software.

/Bruce