Hi Feng,

Thanks for depicting the feature use case.

From the application's perspective, inter-VM/process communication is required to exchange the src & dst buffer details; however, the specifics of this communication mechanism are out of scope in this context.

Regarding the address translations, these buffer addresses can be either IOVA as PA or IOVA as VA. The DMA hardware must use the appropriate IOMMU stream IDs when initiating the DMA transfers. For example, in the use case shown in the diagram, dmadev-1 and dmadev-2 would join an access group managed by the kernel DMA controller driver. This controller driver will configure the access group on the DMA hardware, enabling the hardware to select the correct stream IDs for read/write operations.

New rte_dma APIs could be introduced to join or leave the access group or to query the access group details (see the rough usage sketch at the end of this mail). Additionally, a secure token mechanism (similar to the vfio-pci token) can be implemented to validate any dmadev attempting to join the access group.

Regards.

From: fengchengwen
Sent: Tuesday, July 15, 2025 6:29 AM
To: Vamsi Krishna Attunuru; dev@dpdk.org; Pavan Nikhilesh Bhagavatula; kevin.laatz@intel.com; bruce.richardson@intel.com; mb@smartsharesystems.com
Cc: Jerin Jacob; thomas@monjalon.net
Subject: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA capability support in dmadev

Hi Vamsi,

From the commit log, I guess this commit mainly wants to address the following case:

  ---------------      ----------------
  |  Container  |      | VirtMachine  |
  |             |      |              |
  |  dmadev-1   |      |   dmadev2    |
  ---------------      ----------------
         |                     |
         -----------------------

An app running in the container could launch a DMA transfer from a local buffer to the VirtMachine by configuring dmadev-1/2 (dmadev-1/2 are passed through to different OS domains).

Could you explain how to use it from the application's perspective (for example, address translation) and the application & hardware restrictions?

BTW: In this case there is communication between two OS domains, and I remember there is also an inter-process DMA RFC, so maybe we could design a more generic solution if you provide more info.

Thanks

On 2025/7/10 16:51, Vamsi Krishna wrote:
> From: Vamsi Attunuru
>
> Modern DMA hardware supports data transfer between multiple
> DMA devices, enabling data communication across isolated domains or
> containers. To facilitate this, the ``dmadev`` library requires changes
> to allow devices to register with or unregister from DMA groups for
> inter-device communication. This feature is planned for inclusion
> in DPDK 25.11.
>
> Signed-off-by: Vamsi Attunuru
>
> ---
>  doc/guides/rel_notes/deprecation.rst | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index e2d4125308..46836244dd 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -152,3 +152,10 @@ Deprecation Notices
>  * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in
>    ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK.
>    Those API functions are used internally by DPDK core and netvsc PMD.
> +
> +* dmadev: a new capability flag ``RTE_DMA_CAPA_INTER_DEV`` will be added
> +  to advertise DMA device's inter-device DMA copy capability. To enable
> +  this functionality, a few dmadev APIs will be added to configure the DMA
> +  access groups, facilitating coordinated data communication between devices.
> +  A new ``dev_idx`` field will be added to the ``struct rte_dma_vchan_conf``
> +  structure to configure a vchan for data transfers between any two DMA devices.
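
For discussion, below is a rough sketch of how an application might combine the proposed access-group API with the new ``dev_idx`` vchan field. The names rte_dma_access_group_join(), its group_id/token arguments and vconf.dev_idx are placeholders invented here to illustrate the idea; only RTE_DMA_CAPA_INTER_DEV and dev_idx are named in the notice above. The other calls (rte_dma_info_get, rte_dma_configure, rte_dma_vchan_setup, rte_dma_start) are existing dmadev APIs.

/*
 * Sketch only: rte_dma_access_group_join(), the group_id/token arguments
 * and the vconf.dev_idx field are proposals from this thread and do not
 * exist in dmadev today. RTE_DMA_CAPA_INTER_DEV is the capability flag
 * announced in the deprecation notice above.
 */
#include <errno.h>
#include <rte_dmadev.h>

static int
setup_inter_dev_vchan(int16_t local_dev, int16_t peer_dev,
                      uint32_t group_id, uint64_t token)
{
        struct rte_dma_info info;
        struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
        struct rte_dma_vchan_conf vconf = {
                .direction = RTE_DMA_DIR_MEM_TO_MEM,
                .nb_desc = 1024,
        };
        int ret;

        ret = rte_dma_info_get(local_dev, &info);
        if (ret != 0)
                return ret;

        /* Proposed capability: device can DMA to/from another dmadev. */
        if ((info.dev_capa & RTE_DMA_CAPA_INTER_DEV) == 0)
                return -ENOTSUP;

        /*
         * Proposed API (placeholder name): join the access group managed
         * by the kernel DMA controller driver; the token validates the
         * caller, similar to the vfio-pci token.
         */
        ret = rte_dma_access_group_join(local_dev, group_id, token);
        if (ret != 0)
                return ret;

        ret = rte_dma_configure(local_dev, &dev_conf);
        if (ret != 0)
                return ret;

        /*
         * Proposed field: dev_idx identifies the peer dmadev within the
         * access group, so this vchan moves data between the two devices'
         * OS domains.
         */
        vconf.dev_idx = peer_dev;

        ret = rte_dma_vchan_setup(local_dev, 0, &vconf);
        if (ret != 0)
                return ret;

        return rte_dma_start(local_dev);
}

After a similar setup in both domains, transfers would use the existing rte_dma_copy()/rte_dma_submit() data path on this vchan, with the src/dst IOVAs exchanged over whatever inter-VM/process channel the application chooses, as noted at the top of this mail.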