From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: kevin.laatz@intel.com, sunil.pai.g@intel.com, jiayu.hu@intel.com,
 Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 4 May 2021 14:14:48 +0100
Message-Id: <20210504131458.593429-3-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504131458.593429-1-bruce.richardson@intel.com>
References: <20210318182042.43658-1-bruce.richardson@intel.com>
 <20210504131458.593429-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v5 02/12] raw/ioat: support limiting queues for idxd PCI device
List-Id: DPDK patches and discussions
When using a full device instance via vfio, allow the user to specify a
maximum number of queues to configure rather than always using the max
number of supported queues.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst |  8 ++++++++
 drivers/raw/ioat/idxd_pci.c | 28 ++++++++++++++++++++++++++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 250cfc48a6..60438cc3bc 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -106,6 +106,14 @@ For devices bound to a suitable DPDK-supported VFIO/UIO driver, the HW devices will
 be found as part of the device scan done at application initialization time without
 the need to pass parameters to the application.
 
+For Intel\ |reg| DSA devices, DPDK will automatically configure the device with the
+maximum number of workqueues available on it, partitioning all resources equally
+among the queues.
+If fewer workqueues are required, then the ``max_queues`` parameter may be passed to
+the device driver on the EAL commandline, via the ``allowlist`` or ``-a`` flag e.g.::
+
+    $ dpdk-test -a <b:d:f>,max_queues=4
+
 If the device is bound to the IDXD kernel driver (and previously configured with sysfs),
 then a specific work queue needs to be passed to the application via a vdev parameter.
 This vdev parameter take the driver name and work queue name as parameters.
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 01623f33f6..b48e565b4c 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -4,6 +4,7 @@
 
 #include <rte_bus_pci.h>
 #include <rte_memzone.h>
+#include <rte_devargs.h>
 
 #include "ioat_private.h"
 #include "ioat_spec.h"
@@ -123,7 +124,8 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 #define IDXD_PORTAL_SIZE (4096 * 4)
 
 static int
-init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd,
+		unsigned int max_queues)
 {
 	struct idxd_pci_common *pci;
 	uint8_t nb_groups, nb_engines, nb_wqs;
@@ -179,6 +181,16 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
 	for (i = 0; i < nb_wqs; i++)
 		idxd_get_wq_cfg(pci, i)[0] = 0;
 
+	/* limit queues if necessary */
+	if (max_queues != 0 && nb_wqs > max_queues) {
+		nb_wqs = max_queues;
+		if (nb_engines > max_queues)
+			nb_engines = max_queues;
+		if (nb_groups > max_queues)
+			nb_groups = max_queues;
+		IOAT_PMD_DEBUG("Limiting queues to %u", nb_wqs);
+	}
+
 	/* put each engine into a separate group to avoid reordering */
 	if (nb_groups > nb_engines)
 		nb_groups = nb_engines;
@@ -242,12 +254,23 @@ idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 	uint8_t nb_wqs;
 	int qid, ret = 0;
 	char name[PCI_PRI_STR_SIZE];
+	unsigned int max_queues = 0;
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	ret = init_pci_device(dev, &idxd);
+	if (dev->device.devargs && dev->device.devargs->args[0] != '\0') {
+		/* if the number of devargs grows beyond just 1, use rte_kvargs */
+		if (sscanf(dev->device.devargs->args,
+				"max_queues=%u", &max_queues) != 1) {
+			IOAT_PMD_ERR("Invalid device parameter: '%s'",
+					dev->device.devargs->args);
+			return -1;
+		}
+	}
+
+	ret = init_pci_device(dev, &idxd, max_queues);
 	if (ret < 0) {
 		IOAT_PMD_ERR("Error initializing PCI hardware");
 		return ret;
@@ -353,3 +376,4 @@ RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
 RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
 RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
 			  "* igb_uio | uio_pci_generic | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(rawdev_idxd_pci, "max_queues=0");
-- 
2.30.2