Date: Fri, 18 Jun 2021 11:03:58 +0100
From: Bruce Richardson
To: Jerin Jacob
Cc: fengchengwen, Thomas Monjalon, Ferruh Yigit, dpdk-dev, Nipun Gupta,
 Hemant Agrawal, Maxime Coquelin, Honnappa Nagarahalli, Jerin Jacob,
 David Marchand
Subject: Re: [dpdk-dev] [RFC PATCH] dmadev: introduce DMA device library

On Fri, Jun 18, 2021 at 10:46:08AM +0530, Jerin Jacob wrote:
> On Thu, Jun 17, 2021 at 1:30 PM Bruce Richardson wrote:
> >
> > On Thu, Jun 17, 2021 at 01:12:22PM +0530, Jerin Jacob wrote:
> > > On Thu, Jun 17, 2021 at 12:43 AM Bruce Richardson wrote:
> > > >
> > > > On Wed, Jun 16, 2021 at 11:38:08PM +0530, Jerin Jacob wrote:
> > > > > On Wed, Jun 16, 2021 at 11:01 PM Bruce Richardson wrote:
> > > > > >
> > > > > > On Wed, Jun 16, 2021 at 05:41:45PM +0800, fengchengwen wrote:
> > > > > > > On 2021/6/16 0:38, Bruce Richardson wrote:
> > > > > > > > On Tue, Jun 15, 2021 at 09:22:07PM +0800, Chengwen Feng wrote:
> > > > > > > >> This patch introduces 'dmadevice', which is a generic type of DMA
> > > > > > > >> device.
> > > > > > > >>
> > > > > > > >> The APIs of the dmadev library expose some generic operations which
> > > > > > > >> can enable configuration and I/O with the DMA devices.
> > > > > > > >>
> > > > > > > >> Signed-off-by: Chengwen Feng
> > > > > > > >> ---
> > > > > > > > Thanks for sending this.
> > > > > > > >
> > > > > > > > Of most interest to me right now are the key data-plane APIs. While we are
> > > > > > > > still in the prototyping phase, below is a draft of what we are thinking
> > > > > > > > for the key enqueue/perform_ops/completed_ops APIs.
> > > > > > > >
> > > > > > > > Some key differences I note below vs your original RFC:
> > > > > > > > * Use of void pointers rather than iova addresses. While using iovas makes
> > > > > > > >   sense in the general case when using hardware, in that they can work with
> > > > > > > >   both physical addresses and virtual addresses, if we change the APIs to use
> > > > > > > >   void pointers instead they will still work for DPDK in VA mode, while at
> > > > > > > >   the same time allowing the use of software fallbacks in error cases, and
> > > > > > > >   also a stub driver that uses memcpy in the background. Finally, using
> > > > > > > >   iovas makes the APIs a lot more awkward to use with anything but mbufs or
> > > > > > > >   similar buffers where we already have a pre-computed physical address.
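For illustration, a rough sketch of how the same copy call looks in the two
styles under discussion; the dmadev_* names below are hypothetical, made up
for this example, and not taken from the RFC or from the draft referred to
above.

/* Sketch only: hypothetical names, not the RFC API. */
#include <stdint.h>
#include <rte_common.h>     /* rte_iova_t */
#include <rte_mbuf.h>

/* Style A: iova-based, as in the original RFC. The caller must already hold
 * (or compute) IO addresses for source and destination, e.g. for a pointer
 * into the middle of an mbuf:
 *     src = rte_mbuf_data_iova(m) + offset;
 */
int dmadev_copy_iova(uint16_t dev_id, rte_iova_t src, rte_iova_t dst,
                     uint32_t length);

/* Style B: pointer-based, usable when the device works with the process'
 * virtual addresses (VA mode, kernel-managed IOMMU, SVA). A pointer can be
 * passed directly:
 *     src = rte_pktmbuf_mtod_offset(m, void *, offset);
 * and a software fallback can simply memcpy().
 */
int dmadev_copy_ptr(uint16_t dev_id, const void *src, void *dst,
                    uint32_t length);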
> > > > > > > >
> > > > > > >
> > > > > > > The iova is a hint to the application, and is widely used in DPDK.
> > > > > > > If we switch to void *, how do we pass the address (iova or just va?)
> > > > > > > This may introduce implementation dependencies here.
> > > > > > >
> > > > > > > Or always pass the va, and the driver performs address translation, but
> > > > > > > this translation may cost too much cpu, I think.
> > > > > > >
> > > > > >
> > > > > > On the latter point, about the driver doing address translation, I would
> > > > > > agree. However, we probably need more discussion about the use of iova vs
> > > > > > just virtual addresses. My thinking on this is that if we specify the API
> > > > > > using iovas it will severely hurt usability of the API, since it forces the
> > > > > > user to take more inefficient codepaths in a large number of cases. Given a
> > > > > > pointer to the middle of an mbuf, one cannot just pass that straight as an
> > > > > > iova but must instead do a translation into an offset from the mbuf pointer
> > > > > > and then re-add the offset to the mbuf base address.
> > > > > >
> > > > > > My preference therefore is to require the use of an IOMMU when using a
> > > > > > dmadev, so that it can be a much closer analog of memcpy. Once an iommu is
> > > > > > present, DPDK will run in VA mode, allowing virtual addresses to our
> > > > > > hugepage memory to be sent directly to hardware. Also, when using
> > > > > > dmadevs on top of an in-kernel driver, that kernel driver may do all iommu
> > > > > > management for the app, further removing the restrictions on what memory
> > > > > > can be addressed by hardware.
> > > > >
> > > > > One issue with keeping void * is that the memory can come from the stack or
> > > > > heap, which HW cannot really operate on.
> > > >
> > > > When the kernel driver is managing the IOMMU, all process memory can be
> > > > worked on, not just hugepage memory, so using iova is wrong in these cases.
> > >
> > > But not for stack and heap memory. Right?
> > >
> > Yes, even stack and heap can be accessed.
>
> The HW device cannot, as that memory is NOT mapped to the IOMMU. It will
> result in a transaction fault.
>

Not if the kernel driver rather than DPDK is managing the IOMMU:

https://www.kernel.org/doc/html/latest/x86/sva.html

"Shared Virtual Addressing (SVA) allows the processor and device to use the
same virtual addresses avoiding the need for software to translate virtual
addresses to physical addresses. SVA is what PCIe calls Shared Virtual
Memory (SVM)."

> At least in Octeon, the DMA HW job descriptor will have a pointer (IOVA)
> which will be updated by _HW_ upon copy job completion. That memory cannot
> be from the heap (malloc()) or stack, as those are not mapped by the IOMMU.
>
> >
> > > >
> > > > As I previously said, using iova prevents the creation of a pure software
> > > > dummy driver too, one using memcpy in the background.
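As a concrete (and purely hypothetical) sketch of that point: with
pointer-based parameters a software-only backend is essentially just memcpy,
with no address translation anywhere, whereas with iova-based parameters it
would first need an iova-to-va lookup before it could copy anything.

#include <stdint.h>
#include <string.h>

/* Hypothetical stub backend for a pointer-based copy API: no hardware,
 * no IOMMU, no translation -- the CPU performs the copy directly. */
static int
dummy_dma_copy(const void *src, void *dst, uint32_t length)
{
        memcpy(dst, src, length);
        return 0;   /* copy is complete as soon as the call returns */
}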
> > >
> > > Why? The memory allocated using rte_alloc/rte_memzone etc. can be touched
> > > by the CPU.
> > >
> > Yes, but it can't be accessed using a physical address, so again only VA
> > mode, where iovas are "void *", makes sense.
>
> I agree that it should be a physical address. My only concern is that
> void * does not express that it cannot be from the stack/heap. If the API
> says the memory needs to be allocated by rte_alloc() or rte_memzone() etc.,
> that is fine with me.
>

That could be a capability field too. Hardware supporting SVA/SVM does not
have this limitation, so it can specify that any virtual address may be used.

I suppose it really doesn't matter whether the APIs are written to take
pointers or iovas so long as the restrictions are clear. Since iova is the
default for other HW ops, I'm ok for functions to take params as iovas and
have the capability definitions provide the info to the user that in some
cases virtual addresses can be used.
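To make the capability idea concrete, the application-side check could be as
simple as the sketch below; the flag and struct names here are purely
illustrative, since nothing like this has been defined yet.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only -- not a proposed API. A single capability bit could
 * tell the application which addressing rules apply to a device. */
#define DMADEV_CAPA_SVA  (1ULL << 0)   /* device shares the process page
                                        * tables (SVA/SVM): any virtual
                                        * address, including stack or heap,
                                        * may be passed */

struct dmadev_info {
        uint64_t dev_capa;      /* bitmask of DMADEV_CAPA_* flags */
};

static bool
dmadev_accepts_any_va(const struct dmadev_info *info)
{
        /* Without SVA, the caller must stick to IOMMU-mapped memory such as
         * rte_malloc()/rte_memzone() allocations and pass iovas. */
        return (info->dev_capa & DMADEV_CAPA_SVA) != 0;
}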