From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: kevin.laatz@intel.com, sunil.pai.g@intel.com, jiayu.hu@intel.com,
	Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 30 Apr 2021 12:17:17 +0100
Message-Id: <20210430111727.12203-3-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210430111727.12203-1-bruce.richardson@intel.com>
References: <20210318182042.43658-1-bruce.richardson@intel.com>
	<20210430111727.12203-1-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH v3 02/12] raw/ioat: support limiting queues for
	idxd PCI device

When using a full device instance via vfio, allow the user to specify a
maximum number of queues to configure rather than always using the
maximum number of supported queues.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rawdevs/ioat.rst |  8 ++++++++
 drivers/raw/ioat/idxd_pci.c | 28 ++++++++++++++++++++++++++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 250cfc48a6..60438cc3bc 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -106,6 +106,14 @@ For devices bound to a suitable DPDK-supported VFIO/UIO driver, the HW devices will
 be found as part of the device scan done at application initialization time without
 the need to pass parameters to the application.
 
+For Intel\ |reg| DSA devices, DPDK will automatically configure the device with the
+maximum number of workqueues available on it, partitioning all resources equally
+among the queues.
+If fewer workqueues are required, then the ``max_queues`` parameter may be passed to
+the device driver on the EAL commandline, via the ``allowlist`` or ``-a`` flag e.g.::
+
+  $ dpdk-test -a <b:d:f>,max_queues=4
+
 If the device is bound to the IDXD kernel driver (and previously configured with sysfs),
 then a specific work queue needs to be passed to the application via a vdev parameter.
 This vdev parameter takes the driver name and work queue name as parameters.
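
As a concrete illustration of the new parameter (the PCI addresses below are
invented for the example), the following would cap one DSA instance at four
work queues while leaving a second instance fully provisioned:

  $ dpdk-test -a 6a:01.0,max_queues=4 -a 6f:01.0
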
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 01623f33f6..b48e565b4c 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -4,6 +4,7 @@
 
 #include <rte_bus_pci.h>
 #include <rte_memzone.h>
+#include <rte_devargs.h>
 
 #include "ioat_private.h"
 #include "ioat_spec.h"
@@ -123,7 +124,8 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 #define IDXD_PORTAL_SIZE (4096 * 4)
 
 static int
-init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
+init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd,
+		unsigned int max_queues)
 {
 	struct idxd_pci_common *pci;
 	uint8_t nb_groups, nb_engines, nb_wqs;
@@ -179,6 +181,16 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_rawdev *idxd)
 	for (i = 0; i < nb_wqs; i++)
 		idxd_get_wq_cfg(pci, i)[0] = 0;
 
+	/* limit queues if necessary */
+	if (max_queues != 0 && nb_wqs > max_queues) {
+		nb_wqs = max_queues;
+		if (nb_engines > max_queues)
+			nb_engines = max_queues;
+		if (nb_groups > max_queues)
+			nb_groups = max_queues;
+		IOAT_PMD_DEBUG("Limiting queues to %u", nb_wqs);
+	}
+
 	/* put each engine into a separate group to avoid reordering */
 	if (nb_groups > nb_engines)
 		nb_groups = nb_engines;
@@ -242,12 +254,23 @@ idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 	uint8_t nb_wqs;
 	int qid, ret = 0;
 	char name[PCI_PRI_STR_SIZE];
+	unsigned int max_queues = 0;
 
 	rte_pci_device_name(&dev->addr, name, sizeof(name));
 	IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
-	ret = init_pci_device(dev, &idxd);
+	if (dev->device.devargs && dev->device.devargs->args[0] != '\0') {
+		/* if the number of devargs grows beyond just 1, use rte_kvargs */
+		if (sscanf(dev->device.devargs->args,
+				"max_queues=%u", &max_queues) != 1) {
+			IOAT_PMD_ERR("Invalid device parameter: '%s'",
+					dev->device.devargs->args);
+			return -1;
+		}
+	}
+
+	ret = init_pci_device(dev, &idxd, max_queues);
 	if (ret < 0) {
 		IOAT_PMD_ERR("Error initializing PCI hardware");
 		return ret;
@@ -353,3 +376,4 @@
 RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
 RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
 RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI, "* igb_uio | uio_pci_generic | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(rawdev_idxd_pci, "max_queues=0");
-- 
2.30.2
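
As noted in the probe-function comment above, parsing should move to
rte_kvargs once the driver takes more than one devarg. A minimal sketch of
what that could look like, using the real rte_kvargs API but with hypothetical
helper names (max_queues_handler, idxd_parse_devargs) that are not part of
this patch:

#include <errno.h>
#include <limits.h>
#include <stdlib.h>

#include <rte_common.h>
#include <rte_devargs.h>
#include <rte_kvargs.h>

/* Parse the value given for the "max_queues" key as an unsigned int. */
static int
max_queues_handler(const char *key __rte_unused, const char *value, void *opaque)
{
	unsigned int *max_queues = opaque;
	char *end = NULL;
	unsigned long val;

	val = strtoul(value, &end, 0);
	if (end == NULL || *end != '\0' || val > UINT_MAX)
		return -EINVAL;
	*max_queues = (unsigned int)val;
	return 0;
}

/* Process all recognized keys in the device's devargs string. */
static int
idxd_parse_devargs(struct rte_devargs *devargs, unsigned int *max_queues)
{
	static const char * const valid_keys[] = { "max_queues", NULL };
	struct rte_kvargs *kvlist;
	int ret;

	if (devargs == NULL || devargs->args[0] == '\0')
		return 0; /* no devargs given; keep defaults */

	kvlist = rte_kvargs_parse(devargs->args, valid_keys);
	if (kvlist == NULL)
		return -EINVAL; /* unknown key or malformed string */

	ret = rte_kvargs_process(kvlist, "max_queues",
			max_queues_handler, max_queues);
	rte_kvargs_free(kvlist);
	return ret;
}

The handler/valid-keys split means each new devarg only needs one more entry
in valid_keys and one more rte_kvargs_process() call, rather than a growing
sscanf format string.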