From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: Bruce Richardson <bruce.richardson@intel.com>,
Kevin Laatz <kevin.laatz@intel.com>,
Anatoly Burakov <anatoly.burakov@intel.com>
Subject: [PATCH] dma/idxd: add support for multi-process when using VFIO
Date: Mon, 15 May 2023 17:29:07 +0100
Message-ID: <20230515162907.8456-1-bruce.richardson@intel.com>

When using vfio-pci/uio for hardware access, we need to avoid
reinitializing the hardware when mapping the device from a secondary
process. Instead, just configure the function pointers and reuse the
data mappings set up by the primary process.

Along with the code change, update the driver documentation to note
that vfio-pci can be used for multi-process support, and to state
explicitly that multi-process support is unavailable when using the
idxd kernel driver.
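The split described above follows the usual DPDK primary/secondary
pattern: the secondary process only wires up the fast-path function
pointers to state the primary already placed in shared memory, and skips
hardware initialization entirely. A minimal self-contained sketch of
that pattern (the process-type enum and device structures here are mock
stand-ins, not the actual idxd or rte_dma types):

```c
#include <assert.h>

/* Mock of rte_eal_process_type(): the PRIMARY process initializes
 * hardware; SECONDARY processes only attach to existing state. */
enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

struct dev_private { int hw_initialized; };

struct dmadev {
	int (*copy)(void);              /* fast-path function pointer */
	struct dev_private *dev_private;
};

static int do_copy(void) { return 0; }

/* Sketch of the create path: both process types set the function
 * pointers and dev_private; only the primary touches the hardware. */
static int
dmadev_create(struct dmadev *dev, struct dev_private *shared,
		enum proc_type type)
{
	dev->copy = do_copy;
	dev->dev_private = shared;  /* set before the secondary early-return */

	if (type != PROC_PRIMARY)
		return 0;           /* secondary: reuse primary's mappings */

	shared->hw_initialized = 1; /* primary: one-time hardware init */
	return 0;
}
```

The ordering detail mirrored from the patch is that `dev_private` is
assigned before the secondary-process early return, so a secondary gets
a valid fast-path private pointer without re-running the initialization
code below that point.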
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
doc/guides/dmadevs/idxd.rst | 5 +++++
drivers/dma/idxd/idxd_common.c | 6 ++++--
drivers/dma/idxd/idxd_pci.c | 30 ++++++++++++++++++++++++++++++
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/dmadevs/idxd.rst b/doc/guides/dmadevs/idxd.rst
index bdfd3e78ad..f75d1d0a85 100644
--- a/doc/guides/dmadevs/idxd.rst
+++ b/doc/guides/dmadevs/idxd.rst
@@ -35,6 +35,11 @@ Device Setup
Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
such as ``vfio-pci``. Both are supported by the IDXD PMD.
+.. note::
+ To use Intel\ |reg| DSA devices in DPDK multi-process applications,
+ the devices should be bound to the vfio-pci driver.
+ Multi-process is not supported when using the kernel IDXD driver.
+
Intel\ |reg| DSA devices using IDXD kernel driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 6fe8ad4884..83d53942eb 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -599,6 +599,10 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
dmadev->fp_obj->completed = idxd_completed;
dmadev->fp_obj->completed_status = idxd_completed_status;
dmadev->fp_obj->burst_capacity = idxd_burst_capacity;
+ dmadev->fp_obj->dev_private = dmadev->data->dev_private;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
idxd = dmadev->data->dev_private;
*idxd = *base_idxd; /* copy over the main fields already passed in */
@@ -619,8 +623,6 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
idxd->batch_idx_ring = (void *)&idxd->batch_comp_ring[idxd->max_batches+1];
idxd->batch_iova = rte_mem_virt2iova(idxd->batch_comp_ring);
- dmadev->fp_obj->dev_private = idxd;
-
idxd->dmadev->state = RTE_DMA_DEV_READY;
return 0;
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 781fa02db3..5fe9314d01 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -309,6 +309,36 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
IDXD_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
dev->device.driver = &drv->driver;
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ char qname[32];
+ int max_qid;
+
+ /* look up queue 0 to get the pci structure */
+ snprintf(qname, sizeof(qname), "%s-q0", name);
+ IDXD_PMD_INFO("Looking up %s\n", qname);
+ ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+ if (ret != 0) {
+ IDXD_PMD_ERR("Failed to create dmadev %s", name);
+ return ret;
+ }
+ qid = rte_dma_get_dev_id_by_name(qname);
+ max_qid = rte_atomic16_read(
+ &((struct idxd_dmadev *)rte_dma_fp_objs[qid].dev_private)->u.pci->ref_count);
+
+ /* we have queue 0 done, now configure the rest of the queues */
+ for (qid = 1; qid < max_qid; qid++) {
+ /* add the queue number to each device name */
+ snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+ IDXD_PMD_INFO("Looking up %s\n", qname);
+ ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+ if (ret != 0) {
+ IDXD_PMD_ERR("Failed to create dmadev %s", name);
+ return ret;
+ }
+ }
+ return 0;
+ }
+
if (dev->device.devargs && dev->device.devargs->args[0] != '\0') {
/* if the number of devargs grows beyond just 1, use rte_kvargs */
if (sscanf(dev->device.devargs->args,
--
2.39.2
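In the secondary-process probe path added above, the driver first
attaches to queue 0 of the device, reads the queue count from the shared
state the primary set up (the `ref_count` field), and then attaches to
the remaining queues by constructing each per-queue name. A mocked,
self-contained sketch of that discovery loop (the `"%s-q%d"` name format
is from the patch; the toy registry below is an assumption standing in
for `rte_dma_get_dev_id_by_name()` and the shared idxd state):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MAX_DEVS 8

/* Toy stand-in for the dmadev registry the primary process populated. */
static char registry[MAX_DEVS][32];
static int n_devs;

/* Return a "device id" if the name exists; a real secondary would map
 * the primary's shared device state here. */
static int
attach_by_name(const char *qname)
{
	for (int i = 0; i < n_devs; i++)
		if (strcmp(registry[i], qname) == 0)
			return i;
	return -1;
}

/* Secondary probe: attach queue 0 first, then attach the remaining
 * queues 1..max_qid-1 by constructed name. In the patch, max_qid is
 * discovered from queue 0's shared state rather than passed in. */
static int
secondary_probe(const char *name, int max_qid)
{
	char qname[32];

	snprintf(qname, sizeof(qname), "%s-q0", name);
	if (attach_by_name(qname) < 0)
		return -1;

	for (int qid = 1; qid < max_qid; qid++) {
		snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
		if (attach_by_name(qname) < 0)
			return -1;
	}
	return 0;
}
```

Looking up queue 0 first is what makes the scheme work without any extra
IPC: every queue's dmadev name is derivable from the PCI device name, so
the only unknown, the queue count, comes from the shared state behind
the first lookup.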
Thread overview: 3+ messages
2023-05-15 16:29 Bruce Richardson [this message]
2023-05-17 10:17 ` Burakov, Anatoly
2023-05-24 19:14 ` Thomas Monjalon