From: Satananda Burla <sburla@marvell.com>
To: Anatoly Burakov <anatoly.burakov@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "bruce.richardson@intel.com" <bruce.richardson@intel.com>
Subject: RE: [EXT] [PATCH v1 0/3] Add support for inter-domain DMA operations
Date: Tue, 15 Aug 2023 19:20:41 +0000
Message-ID: <SJ0PR18MB4477617ED4A3E45C41C4BE6EC314A@SJ0PR18MB4477.namprd18.prod.outlook.com>
In-Reply-To: <cover.1691768109.git.anatoly.burakov@intel.com>

Hi Anatoly

> -----Original Message-----
> From: Anatoly Burakov <anatoly.burakov@intel.com>
> Sent: Friday, August 11, 2023 9:15 AM
> To: dev@dpdk.org
> Cc: bruce.richardson@intel.com
> Subject: [EXT] [PATCH v1 0/3] Add support for inter-domain DMA
> operations
> 
> This patchset adds inter-domain DMA operations, and implements driver
> support for them in Intel(R) IDXD driver.
> 
> Inter-domain DMA operations are similar to regular DMA operations,
> except that source and/or destination addresses will be in virtual
> address space of another process. In this patchset, DMA device is
> extended to support two new data plane operations: inter-domain copy,
> and inter-domain fill. No control plane API is provided for dmadev to
> set up inter-domain communication (see below for more info).
Thanks for posting this.
Do you have use cases where a process from a 3rd domain sets up a
transfer between memories from 2 domains? I.e., process 1 is the
source, process 2 is the destination, and process 3 executes the
transfer. The SDXI spec also defines this kind of transfer.
Have you considered extending rte_dma_port_param and rte_dma_vchan_conf
to represent inter-domain memory transfer setup as a separate port
type, like RTE_DMA_PORT_INTER_DOMAIN? We could then have a separate
vchan dedicated to this transfer. The rte_dma_vchan could be set up
with a separate struct rte_dma_port_param each for the source and the
destination. The union could be extended to provide the necessary
information to the PMD; this could be the set of fields needed by
different architectures, such as controller id, PASID, SMMU stream id
and substream id, etc. If an opaque handle is needed, it could also be
accommodated in the union.
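
A rough sketch of what this could look like; note that everything
named "inter_domain" below, including the RTE_DMA_PORT_INTER_DOMAIN
port type itself, is a hypothetical extension (only the pcie member
exists in dmadev today), and the exact field list is just an
illustration:

enum rte_dma_port_type {
	RTE_DMA_PORT_NONE,
	RTE_DMA_PORT_PCIE,
	RTE_DMA_PORT_INTER_DOMAIN,	/* proposed */
};

struct rte_dma_port_param {
	enum rte_dma_port_type port_type;
	union {
		struct {
			/* existing PCIe fields, unchanged */
			uint64_t coreid : 4;
			uint64_t pfid : 8;
			uint64_t vfen : 1;
			uint64_t vfid : 16;
			uint64_t pasid : 20;
			uint64_t attr : 3;
			uint64_t ph : 2;
			uint64_t st : 16;
		} pcie;
		struct {
			/*
			 * Proposed: identifies one side of an
			 * inter-domain transfer. Which fields a PMD
			 * consumes is architecture specific.
			 */
			uint16_t controller_id;
			uint32_t pasid;
			uint32_t smmu_streamid;
			uint32_t smmu_substreamid;
			uint64_t handle;	/* opaque, PMD-defined */
		} inter_domain;
	};
	uint64_t reserved[2];
};

A vchan dedicated to such transfers could then be configured through
the existing rte_dma_vchan_setup(), e.g. (dev_id, vchan_id, src_pasid
and dst_pasid are placeholders obtained from an out-of-band control
plane):

struct rte_dma_vchan_conf conf = {
	.direction = RTE_DMA_DIR_MEM_TO_MEM,
	.nb_desc = 1024,
	.src_port = {
		.port_type = RTE_DMA_PORT_INTER_DOMAIN,
		.inter_domain = { .pasid = src_pasid },
	},
	.dst_port = {
		.port_type = RTE_DMA_PORT_INTER_DOMAIN,
		.inter_domain = { .pasid = dst_pasid },
	},
};
ret = rte_dma_vchan_setup(dev_id, vchan_id, &conf);

The data path would then stay identical to the existing copy/fill
operations, with the sharing control plane kept outside dmadev.
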
These transfers could also be initiated between 2 processes each
holding dmadev VFs from the same PF; Marvell hardware supports this
mode. Since the control plane for this can differ between PMDs, it is
better to set up the memory sharing outside dmadev and only pass the
fields of interest to the PMD for completing the transfer. For
instance, for PCIe EP-to-host DMA transactions (MEM_TO_DEV and
DEV_TO_MEM), the process of setting up shared memory from the PCIe
host is not part of dmadev; see the sketch after this paragraph. If we
wish to make the memory sharing interface a part of dmadev, then
preferably the control plane has to be abstracted to work for all the
modes and architectures.
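
For reference, a minimal sketch of how a MEM_TO_DEV vchan is already
configured today (dev_id, vchan_id and the PF/VF routing values below
are placeholders); the PMD only ever sees the PCIe addressing fields:

#include <rte_dmadev.h>

/*
 * EP-to-host vchan: dmadev carries only the PCIe routing fields;
 * establishing the shared memory window on the host side is outside
 * dmadev's scope.
 */
static int
setup_mem_to_dev_vchan(int16_t dev_id, uint16_t vchan_id)
{
	struct rte_dma_vchan_conf conf = {
		.direction = RTE_DMA_DIR_MEM_TO_DEV,
		.nb_desc = 1024,
		.dst_port = {
			.port_type = RTE_DMA_PORT_PCIE,
			/* example routing: VF 2 of PF 0 */
			.pcie = { .coreid = 0, .pfid = 0,
				  .vfen = 1, .vfid = 2 },
		},
	};

	return rte_dma_vchan_setup(dev_id, vchan_id, &conf);
}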

Regards
Satananda

