From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Kevin Laatz, Anatoly Burakov
Subject: [PATCH] dma/idxd: add support for multi-process when using VFIO
Date: Mon, 15 May 2023 17:29:07 +0100
Message-Id: <20230515162907.8456-1-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

When using vfio-pci/uio for hardware access, we need to avoid
reinitializing the hardware when mapping from a secondary process.
Instead, just configure the function pointers and reuse the data
mappings from the primary process.

Along with the code change, update the driver documentation to note that
vfio-pci can be used for multi-process support, and to state explicitly
that multi-process support is not available when using the idxd kernel
driver.

Signed-off-by: Bruce Richardson
---
 doc/guides/dmadevs/idxd.rst    |  5 +++++
 drivers/dma/idxd/idxd_common.c |  6 ++++--
 drivers/dma/idxd/idxd_pci.c    | 30 ++++++++++++++++++++++++++++++
 3 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/doc/guides/dmadevs/idxd.rst b/doc/guides/dmadevs/idxd.rst
index bdfd3e78ad..f75d1d0a85 100644
--- a/doc/guides/dmadevs/idxd.rst
+++ b/doc/guides/dmadevs/idxd.rst
@@ -35,6 +35,11 @@ Device Setup
 Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
 such as ``vfio-pci``. Both are supported by the IDXD PMD.
 
+.. note::
+   To use Intel\ |reg| DSA devices in DPDK multi-process applications,
+   the devices should be bound to the vfio-pci driver.
+   Multi-process is not supported when using the kernel IDXD driver.
+
 Intel\ |reg| DSA devices using IDXD kernel driver
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 6fe8ad4884..83d53942eb 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -599,6 +599,10 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
 	dmadev->fp_obj->completed = idxd_completed;
 	dmadev->fp_obj->completed_status = idxd_completed_status;
 	dmadev->fp_obj->burst_capacity = idxd_burst_capacity;
+	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
 
 	idxd = dmadev->data->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
@@ -619,8 +623,6 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
 	idxd->batch_idx_ring = (void *)&idxd->batch_comp_ring[idxd->max_batches+1];
 	idxd->batch_iova = rte_mem_virt2iova(idxd->batch_comp_ring);
 
-	dmadev->fp_obj->dev_private = idxd;
-
 	idxd->dmadev->state = RTE_DMA_DEV_READY;
 
 	return 0;
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 781fa02db3..5fe9314d01 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -309,6 +309,36 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 	IDXD_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		char qname[32];
+		int max_qid;
+
+		/* look up queue 0 to get the pci structure */
+		snprintf(qname, sizeof(qname), "%s-q0", name);
+		IDXD_PMD_INFO("Looking up %s\n", qname);
+		ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+		if (ret != 0) {
+			IDXD_PMD_ERR("Failed to create dmadev %s", name);
+			return ret;
+		}
+		qid = rte_dma_get_dev_id_by_name(qname);
+		max_qid = rte_atomic16_read(
+			&((struct idxd_dmadev *)rte_dma_fp_objs[qid].dev_private)->u.pci->ref_count);
+
+		/* we have queue 0 done, now configure the rest of the queues */
+		for (qid = 1; qid < max_qid; qid++) {
+			/* add the queue number to each device name */
+			snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+			IDXD_PMD_INFO("Looking up %s\n", qname);
+			ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+			if (ret != 0) {
+				IDXD_PMD_ERR("Failed to create dmadev %s", name);
+				return ret;
+			}
+		}
+		return 0;
+	}
+
 	if (dev->device.devargs && dev->device.devargs->args[0] != '\0') {
 		/* if the number of devargs grows beyond just 1, use rte_kvargs */
 		if (sscanf(dev->device.devargs->args,
-- 
2.39.2
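
For illustration only, and not part of the patch: with the change above, a
secondary process can look up the dmadevs that the primary process already
configured and drive them through the regular rte_dma fast-path API. The
sketch below assumes the primary has configured and started the device and
its vchan 0; the device name "0000:6a:01.0-q0" is a made-up example of the
"<pci-address>-q<N>" naming the driver uses.

/* Sketch of a secondary-process user of an idxd dmadev bound to vfio-pci.
 * Assumes the primary process has already configured and started the
 * device; the device name and vchan 0 are example assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#include <rte_eal.h>
#include <rte_dmadev.h>
#include <rte_malloc.h>

int
main(int argc, char **argv)
{
	uint16_t last_idx;
	bool has_error;
	char *src, *dst;
	int dev_id;

	/* run with the EAL option --proc-type=secondary */
	if (rte_eal_init(argc, argv) < 0 ||
			rte_eal_process_type() != RTE_PROC_SECONDARY)
		return -1;

	/* find the dmadev created by the primary process by its name */
	dev_id = rte_dma_get_dev_id_by_name("0000:6a:01.0-q0");
	if (dev_id < 0)
		return -1;

	/* buffers in hugepage memory shared with the primary process */
	src = rte_malloc(NULL, 4096, 0);
	dst = rte_malloc(NULL, 4096, 0);
	if (src == NULL || dst == NULL)
		return -1;
	memset(src, 0xab, 4096);

	/* enqueue one copy on vchan 0 and submit it immediately */
	if (rte_dma_copy(dev_id, 0, rte_malloc_virt2iova(src),
			rte_malloc_virt2iova(dst), 4096,
			RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;

	/* poll until the copy completes */
	while (rte_dma_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
		;

	return rte_eal_cleanup();
}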