From: fengchengwen <fengchengwen@huawei.com>
To: Jerin Jacob <jerinj@marvell.com>,
Vamsi Krishna Attunuru <vattunuru@marvell.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"anatoly.burakov@intel.com" <anatoly.burakov@intel.com>
Cc: "thomas@monjalon.net" <thomas@monjalon.net>,
"bruce.richardson@intel.com" <bruce.richardson@intel.com>,
"vladimir.medvedkin@intel.com" <vladimir.medvedkin@intel.com>,
"kevin.laatz@intel.com" <kevin.laatz@intel.com>
Subject: Re: [EXTERNAL] Re: [RFC] lib/dma: introduce inter-process and inter-OS DMA
Date: Thu, 9 Oct 2025 19:08:15 +0800 [thread overview]
Message-ID: <a9555f5b-9051-4ed9-94c2-ffe014cf5168@huawei.com> (raw)
In-Reply-To: <BY3PR18MB478537EA13F30E686068F557C8E6A@BY3PR18MB4785.namprd18.prod.outlook.com>
On 10/1/2025 1:57 PM, Jerin Jacob wrote:
>
>
>> -----Original Message-----
>> From: Vamsi Krishna Attunuru <vattunuru@marvell.com>
>> Sent: Wednesday, September 24, 2025 9:45 AM
>> To: fengchengwen <fengchengwen@huawei.com>; dev@dpdk.org;
>> anatoly.burakov@intel.com
>> Cc: thomas@monjalon.net; bruce.richardson@intel.com;
>> vladimir.medvedkin@intel.com; anatoly.burakov@intel.com;
>> kevin.laatz@intel.com; Jerin Jacob <jerinj@marvell.com>
>> Subject: RE: [EXTERNAL] Re: [RFC] lib/dma: introduce inter-process and inter-OS
>> DMA
>>
>> Hi Feng, Anatoly,
>>
>> Gentle ping for below review.
>>
>>> -----Original Message-----
>>> From: Vamsi Krishna Attunuru
>>> Sent: Monday, September 22, 2025 5:19 PM
>>> To: fengchengwen <fengchengwen@huawei.com>; dev@dpdk.org;
>>> anatoly.burakov@intel.com
>>> Cc: thomas@monjalon.net; bruce.richardson@intel.com;
>>> vladimir.medvedkin@intel.com; anatoly.burakov@intel.com;
>>> kevin.laatz@intel.com; Jerin Jacob <jerinj@marvell.com>
>>> Subject: RE: [EXTERNAL] Re: [RFC] lib/dma: introduce inter-process and
>>> inter- OS DMA
>>>
>>> Hi Feng,
>>>
>>>> Hi Vamsi,
>>>>
>>>> This commit changes more than was discussed; it adds control APIs
>>>> for group management.
>>>>
>>>> 1. Control API: I checked this commit and the Intel commit [1], and
>>>> they seem quite different.
>>>> I hope the Intel folks can share their views. I would prefer not to
>>>> add this part if there is no response.
>>>
>>> This new feature needs to be securely managed through control APIs. It
>>> would be extremely helpful if the folks at Intel and you as well could
>>> provide support or inputs on this.
>
>
> Beyond adding Intel folks to this thread, I don't see any further steps we can take to drive review at this stage.
>
> That said, the table-based concept used in the current API may not be portable, and we may need improvements here.
>
> Based on my understanding, DMA devices used for inter-process copies can be classified into three categories:
>
> Class A: Requires a pair of DMA devices (one on each end of process/domain) for data transfer. Marvell DMA devices fall into this category.
> Class B: Requires only a single DMA device (one process/domain has a DMA device, the other process does not). Intel DMA devices fall here.
> Class C: Other types of devices that we are not yet aware of.
>
> Abstracting all of these under a single API will be challenging. Linux and other OSes do not provide control-plane APIs for this,
> so DPDK must provide control plane mechanisms to support Class A, Class B, and Class C devices.
>
> Proposal: Split development into separate sets:
> -----------------------------------------------
> Set A: Focus only on the datapath. Assume uint16_t *src_handle and uint16_t *dst_handle come from elsewhere (Class C).
> Set B: Introduce capabilities for Class A devices with portable APIs (proposal below, without table concept).
> Set C: Introduce capabilities for Class B devices and relevant APIs, to be added when needed.
>
> We can merge Set A in the current release and move Set B to the next release _if_ review or support for Class A devices requires more time.
>
> @fengchengwen Thoughts?
okay
>
> Class A API Proposal:
> ---------------------
> These APIs are based on a new capability flag for inter-process or inter-OS DMA transfers for Class A devices.
>
>
> /** Creates an access group for pair-type inter-process or inter-OS DMA transfers. */
> int rte_dma_access_pair_group_create(const struct rte_dma_dev *dev,
> rte_uuid_t process_id,
> rte_uuid_t token,
> uint16_t *group_id);
How about rte_dma_access_group_create(), with the pair type as a parameter, and also renaming process_id to domain_id?
>
> /** Destroys an access group once all participating devices have exited. */
> int rte_dma_access_pair_group_destroy(const struct rte_dma_dev *dev,
> uint16_t group_id);
rte_dma_access_group_destroy()
>
> /** Allows a device to join an existing access group using a device handle and token. */
> int rte_dma_access_pair_group_join(const struct rte_dma_dev *dev,
> uint16_t group_id,
> rte_uuid_t process_id,
> rte_uuid_t token,
> rte_dma_access_pair_leave_cb_t leave_cb);
rte_dma_access_group_join()
>
> /** Removes a device from an access group. */
> int rte_dma_access_pair_group_leave(const struct rte_dma_dev *dev,
> uint16_t group_id);
>
> /** Retrieves the source and destination handles for a given device within the group. */
> int rte_dma_access_pair_group_src_dst_handles_get(const struct rte_dma_dev *dev,
> uint16_t group_id,
> rte_uuid_t src_process_id,
> rte_uuid_t dst_process_id,
> uint16_t *src_handle,
> uint16_t *dst_handle);
rte_dma_access_group_handle_get(const struct rte_dma_dev *dev,
uint16_t group_id,
rte_uuid_t domain_id,
uint16_t *handle);
so the user can invoke it multiple times to get the handle for each domain_id.
>
>
> Parameters that need explanation:
> --------------------------------
> process_id: Unique ID for the process, generated via the rte_uuid_* APIs.
> token: Provided by an administrative actor to grant access, similar to the VF token used by the VFIO PF driver.
> leave_cb: Callback invoked to notify a process when the other side leaves the group.
>
>
> Example Workflow for Class A Inter-Domain DMA Transfer:
> -------------------------------------------------------
>
> This example demonstrates how three processes — p0, p1, and p2 — coordinate inter-domain DMA transfers using pair-type (Class A) DMA devices.
>
> Step 1: Group Creation (p0)
> Process p0 calls rte_dma_access_pair_group_create() with a unique process handle and token. A group_id is returned.
I would prefer group_id to be an int so that it could hold, for example, a file descriptor.
>
> Step 2: Group Sharing
> group_id and token are shared with p1 and p2 via IPC or shared memory.
>
> Step 3: Group Joining (p1 & p2)
> Processes p1 and p2 call rte_dma_access_pair_group_join() with their process ID and the token shared by the admin.
>
> Step 4: Handle Discovery
> Each process uses rte_dma_access_pair_group_src_dst_handles_get() to retrieve source and destination handles for the other processes.
>
> Step 5: Transfer Coordination
> Using the handles, each process configures a virtual channel (vchan) and initiates DMA transfers.
>
> Step 6: Group Teardown
> When a process no longer needs to participate, it calls rte_dma_access_pair_group_leave(). Other processes are notified via the callback registered with rte_dma_access_pair_group_join().
> Once all devices have exited, p0 calls rte_dma_access_pair_group_destroy() to clean up.
>
>
> For Class B: we can add a new capability flag and a new set of APIs (rte_dma_access_master_* or similar) when such devices appear or when Intel wants to add them.
>
>
>
>
>
Thread overview: 13+ messages
2025-09-01 12:33 Vamsi Krishna
2025-09-18 11:06 ` Vamsi Krishna Attunuru
2025-09-19 9:02 ` fengchengwen
2025-09-22 11:48 ` [EXTERNAL] " Vamsi Krishna Attunuru
2025-09-24 4:14 ` Vamsi Krishna Attunuru
2025-09-25 1:34 ` fengchengwen
2025-10-01 5:57 ` Jerin Jacob
2025-10-06 13:59 ` Vamsi Krishna Attunuru
2025-10-09 2:27 ` Vamsi Krishna Attunuru
2025-10-09 11:08 ` fengchengwen [this message]
2025-10-10 10:40 ` Jerin Jacob
2025-09-25 2:06 ` fengchengwen
2025-10-10 14:46 ` [PATCH v2 1/1] " Vamsi Krishna