From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: xiaoyun.li@intel.com, anoobj@marvell.com, jerinj@marvell.com,
 ndabilpuram@marvell.com, adwivedi@marvell.com,
 shepard.siegel@atomicrules.com, ed.czeck@atomicrules.com,
 john.miller@atomicrules.com, irusskikh@marvell.com,
 ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com,
 rahul.lakkireddy@chelsio.com, hemant.agrawal@nxp.com,
 sachin.saxena@oss.nxp.com, haiyue.wang@intel.com, johndale@cisco.com,
 hyonkim@cisco.com, qi.z.zhang@intel.com, xiao.w.wang@intel.com,
 humin29@huawei.com, yisen.zhuang@huawei.com, oulijun@huawei.com,
 beilei.xing@intel.com, jingjing.wu@intel.com, qiming.yang@intel.com,
 matan@nvidia.com, viacheslavo@nvidia.com, sthemmin@microsoft.com,
 longli@microsoft.com, heinrich.kuhn@corigine.com, kirankumark@marvell.com,
 andrew.rybchenko@oktetlabs.ru, mczekaj@marvell.com,
 jiawenwu@trustnetic.com, jianwang@trustnetic.com,
 maxime.coquelin@redhat.com, chenbo.xia@intel.com, thomas@monjalon.net,
 ferruh.yigit@intel.com, mdr@ashroe.eu, jay.jayatheerthan@intel.com,
 Konstantin Ananyev <konstantin.ananyev@intel.com>
Date: Mon,  4 Oct 2021 14:55:57 +0100
Message-Id: <20211004135603.20593-2-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com>
References: <20211001140255.5726-1-konstantin.ananyev@intel.com>
 <20211004135603.20593-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal
 queue array

At the queue configure stage, always allocate space for the maximum
possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
That allows 'fast' inline functions (eth_rx_burst, etc.) to reference
internal queue data without extra checks against the current number of
configured queues.
It will also make it easier to hide rte_eth_dev and related structures
in the future.
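
For illustration, a minimal sketch of the fast-path pattern this
enables (hypothetical and simplified, not the actual rte_eth_rx_burst()
implementation): since rx_queues[] now always holds
RTE_MAX_QUEUES_PER_PORT entries and unused slots stay NULL, any
queue_id below that bound indexes valid memory, so a NULL check on the
per-queue pointer replaces the check against nb_rx_queues:

    #include <rte_ethdev.h>

    /*
     * Hypothetical, simplified receive helper: rx_queues[] is always
     * allocated with RTE_MAX_QUEUES_PER_PORT entries, so indexing by
     * any queue_id < RTE_MAX_QUEUES_PER_PORT stays in bounds; slots
     * for unconfigured queues are NULL rather than out of range.
     */
    static inline uint16_t
    sketch_rx_burst(struct rte_eth_dev *dev, uint16_t queue_id,
                    struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
    {
        void *qd = dev->data->rx_queues[queue_id]; /* always in bounds */

        if (qd == NULL) /* queue not configured */
            return 0;

        return (*dev->rx_pkt_burst)(qd, rx_pkts, nb_pkts);
    }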

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
 1 file changed, 9 insertions(+), 27 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..424bc260fa 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -898,7 +898,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) * nb_queues,
+				sizeof(dev->data->rx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
 				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
@@ -909,21 +910,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		rxq = dev->data->rx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (rxq == NULL)
-			return -(ENOMEM);
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(rxq + old_nb_queues, 0,
-				sizeof(rxq[0]) * new_qs);
+			rxq[i] = NULL;
 		}
 
-		dev->data->rx_queues = rxq;
-
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
@@ -1138,8 +1129,9 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-						   sizeof(dev->data->tx_queues[0]) * nb_queues,
-						   RTE_CACHE_LINE_SIZE);
+				sizeof(dev->data->tx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -1149,21 +1141,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		txq = dev->data->tx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				  RTE_CACHE_LINE_SIZE);
-		if (txq == NULL)
-			return -ENOMEM;
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(txq + old_nb_queues, 0,
-			       sizeof(txq[0]) * new_qs);
+			txq[i] = NULL;
 		}
 
-		dev->data->tx_queues = txq;
-
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
-- 
2.26.3