DPDK patches and discussions
From: Anatoly Burakov <anatoly.burakov@intel.com>
To: dev@dpdk.org
Cc: bruce.richardson@intel.com
Subject: [PATCH v1 0/3] Add support for inter-domain DMA operations
Date: Fri, 11 Aug 2023 16:14:43 +0000
Message-ID: <cover.1691768109.git.anatoly.burakov@intel.com>

This patchset adds inter-domain DMA operations and implements driver support
for them in the Intel(R) IDXD driver.

Inter-domain DMA operations are similar to regular DMA operations, except that
the source and/or destination address is in the virtual address space of
another process. In this patchset, the DMA device is extended to support two
new data plane operations: inter-domain copy and inter-domain fill. No control
plane API is provided for dmadev to set up inter-domain communication (see
below for more info).
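
To make the intended usage concrete, below is a minimal sketch of the
submitter's data path. The function name, handle arguments and op flag are
assumptions based on this cover letter (patch 1/3 has the actual
definitions), not a final API:

#include <rte_dmadev.h>

/*
 * Sketch only: rte_dma_copy_inter_dom() and RTE_DMA_OP_FLAG_SRC_HANDLE are
 * assumed names. The call mirrors rte_dma_copy(), with extra "inter-domain
 * handle" arguments identifying the other process's address space and op
 * flags selecting which of src/dst is remote.
 */
static inline int
enqueue_remote_to_local_copy(int16_t dev_id, uint16_t vchan,
        rte_iova_t src, rte_iova_t dst, uint32_t length,
        uint16_t src_handle)
{
    /* Source is in another process; destination is local. */
    return rte_dma_copy_inter_dom(dev_id, vchan, src, dst, length,
            src_handle, 0 /* no dst handle */,
            RTE_DMA_OP_FLAG_SRC_HANDLE);
}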

The DMA device API is extended with the inter-domain operations, along with a
corresponding capability flag. Two new op flags are also added, allowing an
inter-domain operation to select whether the source and/or the destination
address is in the address space of another process. Finally, the
`rte_dma_info` struct is extended with a "controller ID" value (set to -1 by
default for all drivers that don't implement it), representing a hardware DMA
controller ID. This is needed because under the current IDXD implementation,
the IDPTE (Inter-Domain Permission Table Entry) table is global to each
hardware device: even though there may be multiple dmadev devices backed by
the IDXD driver, they will all share their IDPTE entries if they belong to the
same hardware controller, so a value indicating which controller each dmadev
belongs to was needed.
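
As an illustration, a submitter could use the extended `rte_dma_info` to
group dmadevs by controller. In the sketch below, the capability flag and
the "controller_id" field name are assumptions based on this description,
while rte_dma_info_get() is the existing dmadev query API:

#include <stdbool.h>
#include <rte_dmadev.h>

/*
 * Sketch: check whether a dmadev advertises the (assumed) inter-domain
 * capability, and report which hardware controller it belongs to.
 * Devices that share a controller ID share their IDPTE entries.
 */
static bool
supports_inter_domain(int16_t dev_id, int16_t *controller_id)
{
    struct rte_dma_info info;

    if (rte_dma_info_get(dev_id, &info) != 0)
        return false;

    /* Assumed capability flag for the new inter-domain ops. */
    if ((info.dev_capa & RTE_DMA_CAPA_OPS_INTER_DOM) == 0)
        return false;

    /* -1 means the driver does not implement controller IDs. */
    *controller_id = info.controller_id;
    return true;
}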

Similarly, the IDXD driver is extended to support the new dmadev API, as well
as to use the new "controller ID" value. The IDXD driver also gains a private
API for control-plane operations related to creating, and attaching to, memory
regions that are shared between processes.
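
For illustration, the owner's side of that control plane might look like the
sketch below. The window-creation call is a hypothetical stand-in for the
private API in rte_idxd_inter_dom.h (patch 3/3), and the fd-passing helper
is user-supplied and not shown:

#include <stddef.h>
#include <rte_idxd_inter_dom.h>

/*
 * Owner-side sketch with assumed names: create a "window" over a local
 * memory region, then hand the resulting file descriptor to the submitter
 * over IPC (e.g. SCM_RIGHTS on a Unix domain socket). The submitter would
 * then attach to the window to obtain the inter-domain handle used by the
 * data-plane ops.
 */
static int
share_region(void *base, size_t len, int ipc_sock)
{
    /* Hypothetical call exposing [base, base + len) to other processes. */
    int fd = rte_idxd_window_create(base, len, 0 /* flags */);

    if (fd < 0)
        return -1;

    /* send_fd() is a user-supplied SCM_RIGHTS helper, not shown here. */
    return send_fd(ipc_sock, fd);
}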

In the current implementation, the control-plane operations are exposed as a
private API rather than as an extension of the DMA device API. This is
because, technically, only the submitter (the process using the IDXD driver to
perform inter-domain operations) has to have a DMA device available, while the
owner (the process sharing its memory regions with the submitter) does not
have to manage a DMA device in order to give another process access to its
memory. Another consideration is that this API is currently Linux*-specific
and relies on passing file descriptors over IPC; if implemented on other
vendors' hardware, that scheme may not map cleanly.

NOTE: currently, no publicly released hardware is available to test this
feature or this patchset.

We are seeking community review on the following aspects of the patchset:
- The fact that the control-plane API is intended to be private to specific
  drivers
- The design of the inter-domain data-plane operations API, with respect to
  how "inter-domain handles" are used and whether it's possible to make the
  API more vendor-neutral
- The new data-plane ops in dmadev extend the data plane struct into the
  second cache line - this should not be an issue, since non-inter-domain
  operations are still in the first cache line, and thus the existing fast
  path is not affected
- Any other feedback is welcome as well!

Anatoly Burakov (3):
  dmadev: add inter-domain operations
  dma/idxd: implement inter-domain operations
  dma/idxd: add API to create and attach to window

 doc/guides/dmadevs/idxd.rst           |  52 ++++++++
 doc/guides/prog_guide/dmadev.rst      |  22 ++++
 drivers/dma/idxd/idxd_bus.c           |  35 ++++++
 drivers/dma/idxd/idxd_common.c        | 123 ++++++++++++++++---
 drivers/dma/idxd/idxd_hw_defs.h       |  14 ++-
 drivers/dma/idxd/idxd_inter_dom.c     | 166 ++++++++++++++++++++++++++
 drivers/dma/idxd/idxd_internal.h      |   7 ++
 drivers/dma/idxd/meson.build          |   7 +-
 drivers/dma/idxd/rte_idxd_inter_dom.h |  79 ++++++++++++
 drivers/dma/idxd/version.map          |  11 ++
 lib/dmadev/rte_dmadev.c               |   2 +
 lib/dmadev/rte_dmadev.h               | 133 +++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h          |  12 ++
 13 files changed, 644 insertions(+), 19 deletions(-)
 create mode 100644 drivers/dma/idxd/idxd_inter_dom.c
 create mode 100644 drivers/dma/idxd/rte_idxd_inter_dom.h
 create mode 100644 drivers/dma/idxd/version.map

-- 
2.37.2



Thread overview: 12+ messages
2023-08-11 16:14 Anatoly Burakov [this message]
2023-08-11 16:14 ` [PATCH v1 1/3] dmadev: add inter-domain operations Anatoly Burakov
2023-08-18  8:08   ` [EXT] " Anoob Joseph
2023-10-08  2:33   ` fengchengwen
2023-10-09  5:05     ` Jerin Jacob
2023-10-27 13:46       ` Medvedkin, Vladimir
2023-11-23  5:24         ` Jerin Jacob
2023-08-11 16:14 ` [PATCH v1 2/3] dma/idxd: implement " Anatoly Burakov
2023-08-11 16:14 ` [PATCH v1 3/3] dma/idxd: add API to create and attach to window Anatoly Burakov
2023-08-14  4:39   ` Jerin Jacob
2023-08-14  9:55     ` Burakov, Anatoly
2023-08-15 19:20 ` [EXT] [PATCH v1 0/3] Add support for inter-domain DMA operations Satananda Burla
