From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: david.marchand@redhat.com, anatoly.burakov@intel.com,
 vladimir.medvedkin@intel.com, ian.stokes@intel.com, praveen.shetty@intel.com,
 Bruce Richardson <bruce.richardson@intel.com>
Subject: [PATCH v6 09/25] net/ixgbe: convert Tx queue context cache field to ptr
Date: Fri, 24 Jan 2025 16:29:04 +0000
Message-ID: <20250124162921.1406103-10-bruce.richardson@intel.com>
In-Reply-To: <20250124162921.1406103-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
 <20250124162921.1406103-1-bruce.richardson@intel.com>

Rather than having a two-element array of context cache values inside
the Tx queue structure, convert it to a pointer to a cache placed at the
end of the structure. This makes future merging of the structure easier,
as we don't need the "ixgbe_advctx_info" struct defined when defining a
combined queue structure.
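As a side note for readers less familiar with this single-allocation
idiom, the standalone sketch below mirrors the pattern in plain C with
made-up names (demo_tx_queue, demo_ctx_info, DEMO_CTX_NUM); it is not
the driver's code, which uses rte_zmalloc_socket() and RTE_PTR_ADD()
as shown in the diff.

    /*
     * Illustrative sketch only: carve a queue structure and its context
     * cache out of one zeroed allocation, then point the cache member at
     * the bytes immediately after the structure.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DEMO_CTX_NUM 2 /* number of hardware context slots cached */

    struct demo_ctx_info {
            uint64_t flags;
            uint32_t cmp_mask;
    };

    struct demo_tx_queue {
            uint16_t nb_tx_desc;
            uint32_t ctx_curr;
            struct demo_ctx_info *ctx_cache; /* points into the same allocation */
    };

    int main(void)
    {
            /* one allocation covers the queue struct plus the trailing cache */
            struct demo_tx_queue *txq = calloc(1, sizeof(*txq) +
                            sizeof(struct demo_ctx_info) * DEMO_CTX_NUM);

            if (txq == NULL)
                    return 1;

            /* the cache lives immediately after the queue structure */
            txq->ctx_cache = (struct demo_ctx_info *)(txq + 1);
            txq->ctx_cache[0].flags = 1;

            printf("ctx_cache starts %zu bytes after txq\n",
                            (size_t)((char *)txq->ctx_cache - (char *)txq));

            free(txq);
            return 0;
    }

Because both pieces come from the one allocation, freeing the queue also
frees the cache, so no separate cleanup path is needed.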
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_rxtx.c            | 7 ++++---
 drivers/net/intel/ixgbe/ixgbe_rxtx.h            | 4 ++--
 drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h | 3 +--
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index f7ddbba1b6..2ca26cd132 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
 	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->ctx_curr = 0;
-	memset((void *)&txq->ctx_cache, 0,
-			IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+	memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
 }

 static const struct ixgbe_txq_ops def_txq_ops = {
@@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	}

 	/* First allocate the tx queue data structure */
-	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue),
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) +
+			sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM,
 				 RTE_CACHE_LINE_SIZE, socket_id);
 	if (txq == NULL)
 		return -ENOMEM;
+	txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue));

 	/*
 	 * Allocate TX ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 581676d01c..b8cbbe4d24 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -215,8 +215,8 @@ struct ixgbe_tx_queue {
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
 	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
-	/** Hardware context0 history. */
-	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
+	/** Hardware context history. */
+	struct ixgbe_advctx_info *ctx_cache;
 	const struct ixgbe_txq_ops *ops;          /**< txq ops */
 	bool                tx_deferred_start; /**< not in global dev start. */
 #ifdef RTE_LIB_SECURITY
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h
index 6e6b24e5fd..5c38ba13d0 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_common.h
@@ -176,8 +176,7 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq)
 	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
 	txq->ctx_curr = 0;
-	memset((void *)&txq->ctx_cache, 0,
-			IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
+	memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info));
 }

 static inline int
--
2.43.0