From: Remy Horton
To: dev@dpdk.org
Cc: John McNamara, Wenzhuo Lu, Jingjing Wu, Qi Zhang, Beilei Xing, Shreyansh Jain, Thomas Monjalon
Date: Wed, 21 Mar 2018 14:27:46 +0000
Message-Id: <20180321142749.27520-2-remy.horton@intel.com>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20180321142749.27520-1-remy.horton@intel.com>
References: <20180321142749.27520-1-remy.horton@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/4] ethdev: add support for PMD-tuned Tx/Rx parameters

The optimal values of several transmission & reception related
parameters, such as burst sizes, descriptor ring sizes, and number of
queues, vary between different network interface devices. This patch
allows individual PMDs to specify preferred parameter values.

Signed-off-by: Remy Horton
---
 doc/guides/rel_notes/deprecation.rst | 13 -----------
 lib/librte_ether/rte_ethdev.c        | 44 ++++++++++++++++++++++++++++--------
 lib/librte_ether/rte_ethdev.h        | 23 +++++++++++++++++++
 3 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 74c18ed..803038b 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -115,19 +115,6 @@ Deprecation Notices
   The new API add rss_level field to ``rte_eth_rss_conf`` to enable a choice
   of RSS hash calculation on outer or inner header of tunneled packet.
 
-* ethdev: Currently, if the rte_eth_rx_burst() function returns a value less
-  than *nb_pkts*, the application will assume that no more packets are present.
-  Some of the hw queue based hardware can only support smaller burst for RX
-  and TX and thus break the expectation of the rx_burst API. Similar is the
-  case for TX burst as well as ring sizes. ``rte_eth_dev_info`` will be added
-  with following new parameters so as to support semantics for drivers to
-  define a preferred size for Rx/Tx burst and rings.
-
-  - Member ``struct preferred_size`` would be added to enclose all preferred
-    size to be fetched from driver/implementation.
-  - Members ``uint16_t rx_burst``, ``uint16_t tx_burst``, ``uint16_t rx_ring``,
-    and ``uint16_t tx_ring`` would be added to ``struct preferred_size``.
-
 * ethdev: A work is being planned for 18.05 to expose VF port representors
   as a mean to perform control and data path operation on the different VFs.
   As VF representor is an ethdev port, new fields are needed in order to map
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0590f0c..e5fba7c 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1045,6 +1045,26 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
+	dev = &rte_eth_devices[port_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	/* If number of queues specified by application for both Rx and Tx is
+	 * zero, use driver preferred values. This cannot be done individually
+	 * as it is valid for either Tx or Rx (but not both) to be zero.
+	 * If driver does not provide any preferred values, fall back on
+	 * EAL defaults.
+	 */
+	if (nb_rx_q == 0 && nb_tx_q == 0) {
+		nb_rx_q = dev_info.default_rxportconf.nb_queues;
+		if (nb_rx_q == 0)
+			nb_rx_q = RTE_ETH_DEV_FALLBACK_RX_NBQUEUES;
+		nb_tx_q = dev_info.default_txportconf.nb_queues;
+		if (nb_tx_q == 0)
+			nb_tx_q = RTE_ETH_DEV_FALLBACK_TX_NBQUEUES;
+	}
+
 	if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) {
 		RTE_PMD_DEBUG_TRACE(
 			"Number of RX queues requested (%u) is greater than max supported(%d)\n",
@@ -1059,8 +1079,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EINVAL;
 	}
 
-	dev = &rte_eth_devices[port_id];
-
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
 
@@ -1090,13 +1108,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * than the maximum number of RX and TX queues supported by the
 	 * configured device.
 	 */
-	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
-
-	if (nb_rx_q == 0 && nb_tx_q == 0) {
-		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d both rx and tx queue cannot be 0\n", port_id);
-		return -EINVAL;
-	}
-
 	if (nb_rx_q > dev_info.max_rx_queues) {
 		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
 				port_id, nb_rx_q, dev_info.max_rx_queues);
@@ -1461,6 +1472,14 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		return -EINVAL;
 	}
 
+	/* Use default specified by driver, if nb_rx_desc is zero */
+	if (nb_rx_desc == 0) {
+		nb_rx_desc = dev_info.default_rxportconf.ring_size;
+		/* If driver default is also zero, fall back on EAL default */
+		if (nb_rx_desc == 0)
+			nb_rx_desc = RTE_ETH_DEV_FALLBACK_RX_RINGSIZE;
+	}
+
 	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
 			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
 			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
@@ -1584,6 +1603,13 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 
 	rte_eth_dev_info_get(port_id, &dev_info);
 
+	/* Use default specified by driver, if nb_tx_desc is zero */
+	if (nb_tx_desc == 0) {
+		nb_tx_desc = dev_info.default_txportconf.ring_size;
+		/* If driver default is zero, fall back on EAL default */
+		if (nb_tx_desc == 0)
+			nb_tx_desc = RTE_ETH_DEV_FALLBACK_TX_RINGSIZE;
+	}
 	if (nb_tx_desc > dev_info.tx_desc_lim.nb_max ||
 			nb_tx_desc < dev_info.tx_desc_lim.nb_min ||
 			nb_tx_desc % dev_info.tx_desc_lim.nb_align != 0) {
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0361533..72d138d 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -988,6 +988,25 @@ struct rte_eth_conf {
 
 struct rte_pci_device;
 
+/*
+ * Fallback default preferred Rx/Tx port parameters.
+ * These are used if an application requests default parameters
+ * but the PMD does not provide preferred values.
+ */
+#define RTE_ETH_DEV_FALLBACK_RX_RINGSIZE 512
+#define RTE_ETH_DEV_FALLBACK_TX_RINGSIZE 512
+#define RTE_ETH_DEV_FALLBACK_RX_NBQUEUES 1
+#define RTE_ETH_DEV_FALLBACK_TX_NBQUEUES 1
+
+/*
+ * Preferred Rx/Tx port parameters.
+ */
+struct rte_eth_dev_portconf {
+	uint16_t burst_size;
+	uint16_t ring_size;
+	uint16_t nb_queues;
+};
+
 /**
  * Ethernet device information
  */
@@ -1029,6 +1048,10 @@ struct rte_eth_dev_info {
 	/** Configured number of rx/tx queues */
 	uint16_t nb_rx_queues; /**< Number of RX queues. */
 	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	/** Tx/Rx parameter recommendations */
+	struct rte_eth_dev_portconf default_rxportconf;
+	struct rte_eth_dev_portconf default_txportconf;
 };
 
 /**
-- 
2.9.5
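
For context, a minimal usage sketch (not part of the patch) of how an
application might pick up the new defaults once this series is applied:
passing zero queue counts to rte_eth_dev_configure() and zero descriptor
counts to the queue setup calls makes ethdev substitute the PMD's
default_rxportconf/default_txportconf values, or the
RTE_ETH_DEV_FALLBACK_* constants when the PMD supplies none. The helper
name setup_port_with_pmd_defaults and the pre-created mbuf pool are
assumptions for illustration only, and error handling is trimmed.

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Illustrative sketch only: bring up a port using PMD-preferred Tx/Rx
 * parameters. Zero queue and descriptor counts are replaced by the
 * driver's default_rxportconf/default_txportconf values, or by the
 * RTE_ETH_DEV_FALLBACK_* constants if the driver reports none.
 * 'mb_pool' is assumed to be an mbuf pool created elsewhere.
 */
static int
setup_port_with_pmd_defaults(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_dev_info dev_info;
	uint16_t q;
	int ret;

	/* nb_rx_q == 0 && nb_tx_q == 0: use PMD-preferred queue counts */
	ret = rte_eth_dev_configure(port_id, 0, 0, &port_conf);
	if (ret != 0)
		return ret;

	/* Query how many queues were actually configured */
	rte_eth_dev_info_get(port_id, &dev_info);

	for (q = 0; q < dev_info.nb_rx_queues; q++) {
		/* nb_rx_desc == 0: use PMD-preferred Rx ring size */
		ret = rte_eth_rx_queue_setup(port_id, q, 0,
				rte_eth_dev_socket_id(port_id), NULL, mb_pool);
		if (ret < 0)
			return ret;
	}

	for (q = 0; q < dev_info.nb_tx_queues; q++) {
		/* nb_tx_desc == 0: use PMD-preferred Tx ring size */
		ret = rte_eth_tx_queue_setup(port_id, q, 0,
				rte_eth_dev_socket_id(port_id), NULL);
		if (ret < 0)
			return ret;
	}

	return rte_eth_dev_start(port_id);
}

Note that before this patch rte_eth_dev_configure() rejected a zero
queue count for both Rx and Tx with -EINVAL, which is why that check is
removed by the diff above.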