DPDK patches and discussions
From: Vamsi Krishna Attunuru <vattunuru@marvell.com>
To: fengchengwen <fengchengwen@huawei.com>,
	"bruce.richardson@intel.com" <bruce.richardson@intel.com>,
	Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
	Anatoly Burakov <anatoly.burakov@intel.com>
Cc: Jerin Jacob <jerinj@marvell.com>,
	"thomas@monjalon.net" <thomas@monjalon.net>,
	"dev@dpdk.org" <dev@dpdk.org>,
	Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>,
	"kevin.laatz@intel.com" <kevin.laatz@intel.com>,
	Vamsi Krishna Attunuru <vattunuru@marvell.com>
Subject: RE: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA capability support in dmadev
Date: Mon, 28 Jul 2025 05:35:57 +0000	[thread overview]
Message-ID: <SJ4PPFEA6F74CA23C5196C41F03ED52284CA65AA@SJ4PPFEA6F74CA2.namprd18.prod.outlook.com> (raw)
In-Reply-To: <SJ4PPFEA6F74CA2D1893CA98B40B00BB5E4A650A@SJ4PPFEA6F74CA2.namprd18.prod.outlook.com>

Hi Bruce, Vladimir, Anatoly,

Regarding the inter-device (inter-domain) DMA capability, could you please clarify whether the Intel idxd driver will support this feature?
I believe the changes Feng has suggested here are in line with the earlier "[PATCH v1 0/3] Add support for inter-domain
DMA operations" proposal. We are planning to implement support for this feature in DPDK 25.11.

Your feedback would be appreciated, as we are aiming for a more generic solution.
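
To make this concrete, here is a rough sketch of how a vchan could be
configured for an inter-device transfer, assuming the announced ``dev_idx``
field lands in ``struct rte_dma_vchan_conf`` as proposed (the field name and
exact semantics are illustrative until the RFC is posted):

    #include <rte_dmadev.h>

    /* Sketch only: dev_idx is the proposed new field, not yet in DPDK. */
    static int
    setup_inter_dev_vchan(int16_t dev_id, uint16_t vchan, uint16_t peer_dev_idx)
    {
            struct rte_dma_vchan_conf conf = {
                    .direction = RTE_DMA_DIR_MEM_TO_MEM,
                    .nb_desc = 1024,
                    .dev_idx = peer_dev_idx, /* hypothetical: remote end of the copy */
            };

            /* rte_dma_vchan_setup() is the existing dmadev API. */
            return rte_dma_vchan_setup(dev_id, vchan, &conf);
    }

Copies enqueued on such a vchan would then move data between dev_id's domain
and the domain owning peer_dev_idx.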

Regards
Vamsi


>>On 2025/7/16 18:59, Vamsi Krishna Attunuru wrote:
>>>
>>>>
>>>> Thanks for the explanation.
>>>>
>>>> Let me tell you what I understand:
>>>> 1\ Two dmadev (must they belong to the same DMA controller?) are each
>>>> passed through to a different domain (VM or container).
>>>> 2\ The kernel DMA controller driver can configure access groups ---
>>>> there is a secure mechanism (like Intel IDPTE) --- and the two dmadev
>>>> can communicate if the kernel DMA controller driver has put them in
>>>> the same access group.
>>>> 3\ The application sets up an access group and gets a handle (maybe the
>>>> new 'dev_idx' which you announce in this commit), then sets up one
>>>> vchan configured with the handle, and later launches copy requests on
>>>> this vchan.
>>>> 4\ The driver will pass the request to the dmadev-1 hardware, the
>>>> dmadev-1 hardware will do some verification, and maybe use the
>>>> dmadev-2 stream ID for read/write operations?
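>>>>
>>>> In code terms, the sequence I understand would be roughly as follows
>>>> (all names hypothetical, just to pin down the flow):
>>>>
>>>>     /* steps 1-2: the kernel driver has grouped the passed-through dmadevs */
>>>>     /* step 3: get the peer handle and bind it to a vchan */
>>>>     uint16_t peer = get_access_group_handle();  /* hypothetical helper */
>>>>     conf.dev_idx = peer;                        /* proposed new field  */
>>>>     rte_dma_vchan_setup(dev_id, vchan, &conf);
>>>>     /* step 4: enqueue; the hardware verifies group membership and uses
>>>>      * the peer's stream ID for the remote side of the copy */
>>>>     rte_dma_copy(dev_id, vchan, src, dst, len, RTE_DMA_OP_FLAG_SUBMIT);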
>>>>
>>>> A few questions about this:
>>>> 1\ What is the prototype of 'dev_idx'? Is it uint16_t?
>>> Yes, it can be uint16_t, and two different dev_idx values (src_dev_idx &
>>> dest_dev_idx) are used for read & write.
>>>
>>>> 2\ How is read/write implemented between two dmadev? Using two
>>>> different dev_idx, the first for read and the second for write?
>>> Yes, two different dev_idx will be used.
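>>>
>>> For illustration, the two handles could sit alongside the existing vchan
>>> conf fields like this (field names purely hypothetical):
>>>
>>>     /* hypothetical additions to struct rte_dma_vchan_conf */
>>>     uint16_t src_dev_idx;   /* device whose stream ID is used for reads  */
>>>     uint16_t dest_dev_idx;  /* device whose stream ID is used for writes */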
>>>
>>>>
>>>>
>>>> I also re-read the patchset "[PATCH v1 0/3] Add support for
>>>> inter-domain DMA operations". It introduces:
>>>> 1\ One 'int controller-id' in rte_dma_info, which may be used in a
>>>> vendor-specific secure mechanism.
>>>> 2\ Two new OP_FLAGs and two new datapath APIs.
>>>> The reason why that patchset didn't continue (I guess) is the question
>>>> of whether to set up one new vchan. Yes, vchan was designed to
>>>> represent different transfer contexts. But each vchan has its own
>>>> enqueue/dequeue/ring, so it acts more like one logical dmadev; some
>>>> hardware can fit this model well, some may not (like Intel in this
>>>> case).
>>>>
>>>>
>>>> So how about the following scheme:
>>>> 1\ Add inter-domain capability bits, for example:
>>>>    RTE_DMA_CAPA_INTER_PROCESS_DOMAIN, RTE_DMA_CAPA_INTER_OS_DOMAIN
>>>> 2\ Add one domain_controller_id in rte_dma_info, which may be used in a
>>>>    vendor-specific secure mechanism.
>>>> 3\ Add four OP_FLAGs:
>>>>    RTE_DMA_OP_FLAG_SRC_INTER_PROCESS_DOMAIN_HANDLE,
>>>>    RTE_DMA_OP_FLAG_DST_INTER_PROCESS_DOMAIN_HANDLE,
>>>>    RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE,
>>>>    RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE
>>>> 4\ Reserve 32 bits of the flag parameter (which all enqueue APIs
>>>>    support) as the src and dst handles, or reserve only 16 bits of the
>>>>    flag parameter if we restrict it to not support third-party transfers.
>>>
>>> Yes, the above approach seems acceptable to me. I believe the src & dst
>>> handles require 16-bit values. Reserving 32 bits of the flag parameter
>>> would leave 32 flags available, which should be fine.
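>>>
>>> For illustration, packing the two 16-bit handles into the upper 32 bits
>>> of the 64-bit flags argument of rte_dma_copy() might look like this
>>> (the macro names are placeholders; the OP_FLAG names are the ones
>>> proposed above):
>>>
>>>     #define DMA_FLAG_SRC_HANDLE(h) ((uint64_t)(uint16_t)(h) << 32)
>>>     #define DMA_FLAG_DST_HANDLE(h) ((uint64_t)(uint16_t)(h) << 48)
>>>
>>>     uint64_t flags = RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE |
>>>                      RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE |
>>>                      DMA_FLAG_SRC_HANDLE(src_handle) |
>>>                      DMA_FLAG_DST_HANDLE(dst_handle);
>>>
>>>     ret = rte_dma_copy(dev_id, vchan, src_iova, dst_iova, length, flags);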
>>
>>Great.
>>Tip: there are still 24 flag bits reserved after applying this scheme.
>>
>>I would like more comments.
>>
>
>If there are no major comments at this time, can we proceed with accepting
>and merging this notice in this release? Further review can continue once the
>RFC is available next month.
>
>Thanks & Regards
>Vamsi
>
>>>
>>>>
>>>> Thanks
>>>>
>>>> On 2025/7/15 13:35, Vamsi Krishna Attunuru wrote:
>>>>> Hi Feng,
>>>>>
>>>>> Thanks for depicting the feature use case.
>>>>>
>>>>> From the application's perspective, inter-VM/process communication is
>>>>> required to exchange the src & dst buffer details; however, the
>>>>> specifics of this communication mechanism are outside the scope of
>>>>> this context. Regarding address translations, these buffer addresses
>>>>> can be either IOVA as PA or IOVA as VA. The DMA hardware must use the
>>>>> appropriate IOMMU stream IDs when initiating the DMA transfers. For
>>>>> example, in the use case shown in the diagram, dmadev-1 and dmadev-2
>>>>> would join an access group managed by the kernel DMA controller
>>>>> driver. This controller driver will configure the access group on the
>>>>> DMA hardware, enabling the hardware to select the correct stream IDs
>>>>> for read/write operations. New rte_dma APIs could be introduced to
>>>>> join or leave the access group or to query the access group details.
>>>>> Additionally, a secure token mechanism (similar to the vfio-pci token)
>>>>> can be implemented to validate any dmadev attempting to join the
>>>>> access group.
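>>>>>
>>>>> For example, the group management APIs could look something like the
>>>>> following (names and signatures are hypothetical):
>>>>>
>>>>>     /* hypothetical sketch of the proposed access-group APIs */
>>>>>     int rte_dma_access_group_join(int16_t dev_id, uint32_t group_id,
>>>>>                                   uint64_t token); /* secure token, cf. vfio-pci */
>>>>>     int rte_dma_access_group_leave(int16_t dev_id, uint32_t group_id);
>>>>>     int rte_dma_access_group_query(int16_t dev_id, uint32_t group_id,
>>>>>                                    uint16_t *members, uint16_t *nb_members);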
>>>>>
>>>>> Regards.
>>>>>
>>>>> From: fengchengwen <fengchengwen@huawei.com>
>>>>> Sent: Tuesday, July 15, 2025 6:29 AM
>>>>> To: Vamsi Krishna Attunuru <vattunuru@marvell.com>; dev@dpdk.org;
>>>>> Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
>>>>> kevin.laatz@intel.com; bruce.richardson@intel.com;
>>>>> mb@smartsharesystems.com
>>>>> Cc: Jerin Jacob <jerinj@marvell.com>; thomas@monjalon.net
>>>>> Subject: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device
>>>>> DMA capability support in dmadev
>>>>>
>>>>> Hi Vamsi,
>>>>>
>>>>> From the commit log, I guess this commit mainly wants to address the
>>>>> following case:
>>>>>
>>>>>      ---------------             ----------------
>>>>>      |  Container  |             |  VirtMachine |
>>>>>      |             |             |              |
>>>>>      |  dmadev-1   |             |   dmadev-2   |
>>>>>      ---------------             ----------------
>>>>>            |                            |
>>>>>            ------------------------------
>>>>>
>>>>> An app running in the container could launch a DMA transfer from a
>>>>> local buffer to the VirtMachine by configuring dmadev-1/2 (dmadev-1/2
>>>>> are passed through to different OS domains).
>>>>>
>>>>> Could you explain how to use it from the application perspective (for
>>>>> example, address translation) and the application & hardware
>>>>> restrictions?
>>>>>
>>>>> BTW: In this case there are two OS domains communicating, and I
>>>>> remember there is also an inter-process DMA RFC, so maybe we could
>>>>> design a more generic solution if you provide more info.
>>>>>
>>>>> Thanks
>>>>>
>>>>> On 2025/7/10 16:51, Vamsi Krishna wrote:
>>>>>> From: Vamsi Attunuru <vattunuru@marvell.com>
>>>>>>
>>>>>> Modern DMA hardware supports data transfer between multiple
>>>>>> DMA devices, enabling data communication across isolated domains or
>>>>>> containers. To facilitate this, the ``dmadev`` library requires changes
>>>>>> to allow devices to register with or unregister from DMA groups for
>>>>>> inter-device communication. This feature is planned for inclusion
>>>>>> in DPDK 25.11.
>>>>>>
>>>>>> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
>>>>>> ---
>>>>>>  doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>>>  1 file changed, 7 insertions(+)
>>>>>>
>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>>>>>> index e2d4125308..46836244dd 100644
>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>> @@ -152,3 +152,10 @@ Deprecation Notices
>>>>>>  * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in
>>>>>>    ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK.
>>>>>>    Those API functions are used internally by DPDK core and netvsc PMD.
>>>>>> +
>>>>>> +* dmadev: a new capability flag ``RTE_DMA_CAPA_INTER_DEV`` will be added
>>>>>> +  to advertise a DMA device's inter-device DMA copy capability. To enable
>>>>>> +  this functionality, a few dmadev APIs will be added to configure the DMA
>>>>>> +  access groups, facilitating coordinated data communication between devices.
>>>>>> +  A new ``dev_idx`` field will be added to the ``struct rte_dma_vchan_conf``
>>>>>> +  structure to configure a vchan for data transfers between any two DMA
>>>>>> +  devices.
>>>


Thread overview: 15+ messages
2025-07-10  8:51 Vamsi Krishna
2025-07-11 15:05 ` Vamsi Krishna Attunuru
2025-07-14 10:32   ` Vamsi Krishna Attunuru
2025-07-14  5:24 ` [EXTERNAL] " Anoob Joseph
2025-07-14  5:30   ` Pavan Nikhilesh Bhagavatula
2025-07-14 10:17 ` Medvedkin, Vladimir
2025-07-14 10:53   ` [EXTERNAL] " Vamsi Krishna Attunuru
2025-07-15  0:59 ` fengchengwen
2025-07-15  5:35   ` [EXTERNAL] " Vamsi Krishna Attunuru
2025-07-16  4:14     ` fengchengwen
2025-07-16 10:59       ` Vamsi Krishna Attunuru
2025-07-17  1:40         ` fengchengwen
2025-07-18  2:29           ` Vamsi Krishna Attunuru
2025-07-28  5:35             ` Vamsi Krishna Attunuru [this message]
2025-07-21 17:52 ` Thomas Monjalon
