From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin
Subject: [PATCH v1 08/21] net/ixgbe: convert Tx queue context cache field to ptr
Date: Mon, 2 Dec 2024 11:24:28 +0000
Message-ID: <20241202112444.1517416-9-bruce.richardson@intel.com>
In-Reply-To: <20241202112444.1517416-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
 <20241202112444.1517416-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Rather than having a two-element array of context cache values inside
the Tx queue structure, convert it to a pointer to a cache at the end of
the structure. This makes future merging of the structures easier, as we
don't need the "ixgbe_advctx_info" struct defined when defining a
combined queue structure.

Signed-off-by: Bruce Richardson
---
 drivers/net/ixgbe/ixgbe_rxtx.c | 7 ++++---
 drivers/net/ixgbe/ixgbe_rxtx.h | 4 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index f7ddbba1b6..2ca26cd132 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
 	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->ctx_curr = 0;
-	memset((void *)&txq->ctx_cache, 0,
-			IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+	memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
 }
 
 static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	/* First allocate the tx queue data structure */
-	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue)
+			+ sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
 			RTE_CACHE_LINE_SIZE, socket_id);
 	if (txq == NULL)
 		return -ENOMEM;
+	txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
 
 	/*
 	 * Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index f6bae37cf3..847cacf7b5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
 	uint8_t wthresh;   /**< Write-back threshold reg. */
 	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t ctx_curr; /**< Hardware context states. */
-	/** Hardware context0 history. */
-	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+	/** Hardware context history. */
+	struct ixgbe_advctx_info *ctx_cache;
 	const struct ixgbe_txq_ops *ops; /**< txq ops */
 	bool tx_deferred_start; /**< not in global dev start. */
 #ifdef RTE_LIB_SECURITY
-- 
2.43.0