From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin
Subject: [PATCH v4 08/24] net/ixgbe: convert Tx queue context cache field to ptr
Date: Fri, 20 Dec 2024 14:39:05 +0000
Message-ID: <20241220143925.609044-9-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241220143925.609044-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
 <20241220143925.609044-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

Rather than having a two-element array of context cache values inside
the Tx queue structure, convert it to a pointer to a cache placed at the
end of the structure. This makes future merging of the structure easier,
as we don't need the "ixgbe_advctx_info" struct defined when defining a
combined queue structure.
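For reference, this is the usual "struct plus trailing storage in a single
allocation" idiom. The following is a minimal, standalone sketch of that
idiom, using plain calloc() and made-up type names rather than the driver's
real structures, just to illustrate why a pointer member backed by one
oversized allocation behaves the same as the old embedded array:

	/* Illustrative sketch only: hypothetical types, not the ixgbe ones. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define CTX_NUM 2

	struct ctx_info {
		unsigned long flags; /* stand-in for cached context fields */
	};

	struct tx_queue {
		unsigned int ctx_curr;
		struct ctx_info *ctx_cache; /* points into the same allocation */
	};

	static struct tx_queue *
	tx_queue_alloc(void)
	{
		/* one allocation covers the queue struct plus the context cache */
		struct tx_queue *txq = calloc(1, sizeof(*txq) +
				sizeof(struct ctx_info) * CTX_NUM);
		if (txq == NULL)
			return NULL;
		/* the cache sits immediately after the queue structure */
		txq->ctx_cache = (struct ctx_info *)(txq + 1);
		return txq;
	}

	int
	main(void)
	{
		struct tx_queue *txq = tx_queue_alloc();
		if (txq == NULL)
			return 1;
		/* queue reset now clears through the pointer, not an embedded array */
		memset(txq->ctx_cache, 0, CTX_NUM * sizeof(struct ctx_info));
		printf("ctx_cache[0].flags = %lu\n", txq->ctx_cache[0].flags);
		free(txq); /* a single free releases both struct and cache */
		return 0;
	}

Because queue struct and cache come from one allocation, teardown is
unchanged, and the queue struct itself no longer needs the cache type's
definition, only a forward-declarable pointer.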
Signed-off-by: Bruce Richardson
---
 drivers/net/ixgbe/ixgbe_rxtx.c            | 7 ++++---
 drivers/net/ixgbe/ixgbe_rxtx.h            | 4 ++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 3 +--
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index f7ddbba1b6..2ca26cd132 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
 	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->ctx_curr = 0;
-	memset((void *)&txq->ctx_cache, 0,
-			IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+	memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
 }
 
 static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	/* First allocate the tx queue data structure */
-	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+			sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
 			RTE_CACHE_LINE_SIZE, socket_id);
 	if (txq == NULL)
 		return -ENOMEM;
+	txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));
 
 	/*
 	 * Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index f6bae37cf3..847cacf7b5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
 	uint8_t wthresh; /**< Write-back threshold reg. */
 	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t ctx_curr; /**< Hardware context states. */
-	/** Hardware context0 history. */
-	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+	/** Hardware context history. */
+	struct ixgbe_advctx_info *ctx_cache;
 	const struct ixgbe_txq_ops *ops; /**< txq ops */
 	bool tx_deferred_start; /**< not in global dev start. */
 #ifdef RTE_LIB_SECURITY
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index cc51bf6eed..ec334b5f65 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -176,8 +176,7 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
 	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->ctx_curr = 0;
-	memset((void *)&txq->ctx_cache, 0,
-			IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+	memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
 }
 
 static inline int
-- 
2.43.0