From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, David Christensen, Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin, Anatoly Burakov
Subject: [RFC PATCH 02/21] common/intel_eth: provide common Tx entry structures
Date: Fri, 22 Nov 2024 12:53:55 +0000
Message-ID: <20241122125418.2857301-3-bruce.richardson@intel.com>
In-Reply-To: <20241122125418.2857301-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

The Tx entry structures, both vector and scalar, are common across
Intel drivers, so provide a single definition to be used everywhere.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/common/intel_eth/ieth_rxtx.h          | 29 +++++++++++++++++++
 .../net/i40e/i40e_recycle_mbufs_vec_common.c  |  2 +-
 drivers/net/i40e/i40e_rxtx.c                  | 18 ++++++------
 drivers/net/i40e/i40e_rxtx.h                  | 14 ++-------
 drivers/net/i40e/i40e_rxtx_vec_altivec.c      |  2 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  2 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  6 ++--
 drivers/net/i40e/i40e_rxtx_vec_common.h       |  4 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c         |  2 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |  2 +-
 drivers/net/iavf/iavf_rxtx.c                  | 12 ++++----
 drivers/net/iavf/iavf_rxtx.h                  | 14 ++-------
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         |  2 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       | 10 +++----
 drivers/net/iavf/iavf_rxtx_vec_common.h       |  4 +--
 drivers/net/iavf/iavf_rxtx_vec_sse.c          |  2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  2 +-
 drivers/net/ice/ice_rxtx.c                    | 16 +++++-----
 drivers/net/ice/ice_rxtx.h                    | 13 ++-------
 drivers/net/ice/ice_rxtx_vec_avx2.c           |  2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |  6 ++--
 drivers/net/ice/ice_rxtx_vec_common.h         |  6 ++--
 drivers/net/ice/ice_rxtx_vec_sse.c            |  2 +-
 .../ixgbe/ixgbe_recycle_mbufs_vec_common.c    |  2 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 16 +++++-----
 drivers/net/ixgbe/ixgbe_rxtx.h                | 22 +++-----------
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h     |  8 ++---
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c       |  2 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c        |  2 +-
 29 files changed, 107 insertions(+), 117 deletions(-)
 create mode 100644 drivers/common/intel_eth/ieth_rxtx.h

diff --git a/drivers/common/intel_eth/ieth_rxtx.h b/drivers/common/intel_eth/ieth_rxtx.h
new file mode 100644
index 0000000000..95a3cff048
--- /dev/null
+++ b/drivers/common/intel_eth/ieth_rxtx.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef IETH_RXTX_H_
+#define IETH_RXTX_H_
+
+#include
+#include
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ieth_tx_entry
+{
+	struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+	uint16_t next_id; /* Index of next descriptor in ring. */
+	uint16_t last_id; /* Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue in vector Tx.
+ */
+struct ieth_vec_tx_entry
+{
+	struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any.
*/
+};
+
+#endif /* IETH_RXTX_H_ */

diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 14424c9921..5a23adc6a4 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -56,7 +56,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
 	struct rte_eth_recycle_rxq_info *recycle_rxq_info)
 {
 	struct i40e_tx_queue *txq = tx_queue;
-	struct i40e_tx_entry *txep;
+	struct ieth_tx_entry *txep;
 	struct rte_mbuf **rxep;
 	int i, n;
 	uint16_t nb_recycle_mbufs;

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 839c8a5442..b628d83a42 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -378,7 +378,7 @@ i40e_build_ctob(uint32_t td_cmd,
 static inline int
 i40e_xmit_cleanup(struct i40e_tx_queue *txq)
 {
-	struct i40e_tx_entry *sw_ring = txq->sw_ring;
+	struct ieth_tx_entry *sw_ring = txq->sw_ring;
 	volatile struct i40e_tx_desc *txd = txq->tx_ring;
 	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
 	uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -1081,8 +1081,8 @@ uint16_t
 i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct i40e_tx_queue *txq;
-	struct i40e_tx_entry *sw_ring;
-	struct i40e_tx_entry *txe, *txn;
+	struct ieth_tx_entry *sw_ring;
+	struct ieth_tx_entry *txe, *txn;
 	volatile struct i40e_tx_desc *txd;
 	volatile struct i40e_tx_desc *txr;
 	struct rte_mbuf *tx_pkt;
@@ -1331,7 +1331,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 static __rte_always_inline int
 i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 {
-	struct i40e_tx_entry *txep;
+	struct ieth_tx_entry *txep;
 	uint16_t tx_rs_thresh = txq->tx_rs_thresh;
 	uint16_t i = 0, j = 0;
 	struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
@@ -1418,7 +1418,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq,
 		uint16_t nb_pkts)
 {
 	volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
-	struct
i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+	struct ieth_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
 	const int N_PER_LOOP = 4;
 	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
 	int mainpart, leftover;
@@ -2555,7 +2555,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	/* Allocate software ring */
 	txq->sw_ring =
 		rte_zmalloc_socket("i40e tx sw ring",
-				   sizeof(struct i40e_tx_entry) * nb_desc,
+				   sizeof(struct ieth_tx_entry) * nb_desc,
 				   RTE_CACHE_LINE_SIZE,
 				   socket_id);
 	if (!txq->sw_ring) {
@@ -2723,7 +2723,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	 */
 #ifdef CC_AVX512_SUPPORT
 	if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
-		struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring;
+		struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring;

 		i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
 		if (txq->tx_tail < i) {
@@ -2768,7 +2768,7 @@ static int
 i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
 			uint32_t free_cnt)
 {
-	struct i40e_tx_entry *swr_ring = txq->sw_ring;
+	struct ieth_tx_entry *swr_ring = txq->sw_ring;
 	uint16_t i, tx_last, tx_id;
 	uint16_t nb_tx_free_last;
 	uint16_t nb_tx_to_clean;
@@ -2874,7 +2874,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
-	struct i40e_tx_entry *txe;
+	struct ieth_tx_entry *txe;
 	uint16_t i, prev, size;

 	if (!txq) {

diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 33fc9770d9..47ece1eb7d 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -5,6 +5,8 @@
 #ifndef _I40E_RXTX_H_
 #define _I40E_RXTX_H_

+#include
+
 #define RTE_PMD_I40E_RX_MAX_BURST 32
 #define RTE_PMD_I40E_TX_MAX_BURST 32

@@ -122,16 +124,6 @@ struct i40e_rx_queue {
 	const struct rte_memzone *mz;
 };

-struct i40e_tx_entry {
-	struct rte_mbuf *mbuf;
-	uint16_t next_id;
-	uint16_t last_id;
-};
-
-struct i40e_vec_tx_entry {
-	struct rte_mbuf *mbuf;
-};
-
 /*
  * Structure associated with each TX queue.
*/ @@ -139,7 +131,7 @@ struct i40e_tx_queue { uint16_t nb_tx_desc; /**< number of TX descriptors */ uint64_t tx_ring_phys_addr; /**< TX ring DMA address */ volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */ - struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */ + struct ieth_tx_entry *sw_ring; /**< virtual address of SW ring */ uint16_t tx_tail; /**< current value of tail register */ volatile uint8_t *qtx_tail; /**< register address of tail */ uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */ diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c index 526355f61d..382a4d9305 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c +++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c @@ -553,7 +553,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c index 231c5f6d4b..48909d6230 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c @@ -745,7 +745,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c index 30ce24634a..25ed4c78a7 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c @@ -757,7 +757,7 @@ i40e_recv_scattered_pkts_vec_avx512(void 
*rx_queue, static __rte_always_inline int i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) { - struct i40e_vec_tx_entry *txep; + struct ieth_vec_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -920,7 +920,7 @@ vtx(volatile struct i40e_tx_desc *txdp, } static __rte_always_inline void -tx_backlog_entry_avx512(struct i40e_vec_tx_entry *txep, +tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -935,7 +935,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_vec_tx_entry *txep; + struct ieth_vec_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index 7cefbc98ef..3f6319ee65 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -19,7 +19,7 @@ static __rte_always_inline int i40e_tx_free_bufs(struct i40e_tx_queue *txq) { - struct i40e_tx_entry *txep; + struct ieth_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry(struct i40e_tx_entry *txep, +tx_backlog_entry(struct ieth_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c index ab0d4f1a15..09f52d0409 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_neon.c +++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c @@ -681,7 +681,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags 
= I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index 03fb9eb59b..cff33343e7 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -700,7 +700,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index 6a093c6746..1db34fd12f 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -284,7 +284,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq) static inline void reset_tx_queue(struct iavf_tx_queue *txq) { - struct iavf_tx_entry *txe; + struct ieth_tx_entry *txe; uint32_t i, size; uint16_t prev; @@ -860,7 +860,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket("iavf tx sw ring", - sizeof(struct iavf_tx_entry) * nb_desc, + sizeof(struct ieth_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (!txq->sw_ring) { @@ -2379,7 +2379,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue, static inline int iavf_xmit_cleanup(struct iavf_tx_queue *txq) { - struct iavf_tx_entry *sw_ring = txq->sw_ring; + struct ieth_tx_entry *sw_ring = txq->sw_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; uint16_t desc_to_clean_to; @@ -2797,8 +2797,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct iavf_tx_queue *txq = tx_queue; volatile struct iavf_tx_desc *txr = txq->tx_ring; - struct iavf_tx_entry *txe_ring = txq->sw_ring; - struct iavf_tx_entry *txe, *txn; + struct ieth_tx_entry *txe_ring = txq->sw_ring; + struct 
ieth_tx_entry *txe, *txn; struct rte_mbuf *mb, *mb_seg; uint64_t buf_dma_addr; uint16_t desc_idx, desc_idx_last; @@ -4268,7 +4268,7 @@ static int iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq, uint32_t free_cnt) { - struct iavf_tx_entry *swr_ring = txq->sw_ring; + struct ieth_tx_entry *swr_ring = txq->sw_ring; uint16_t tx_last, tx_id; uint16_t nb_tx_free_last; uint16_t nb_tx_to_clean; diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index 7b56076d32..63abe1cdb3 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -5,6 +5,8 @@ #ifndef _IAVF_RXTX_H_ #define _IAVF_RXTX_H_ +#include + /* In QLEN must be whole number of 32 descriptors. */ #define IAVF_ALIGN_RING_DESC 32 #define IAVF_MIN_RING_DESC 64 @@ -271,22 +273,12 @@ struct iavf_rx_queue { uint64_t hw_time_update; }; -struct iavf_tx_entry { - struct rte_mbuf *mbuf; - uint16_t next_id; - uint16_t last_id; -}; - -struct iavf_tx_vec_entry { - struct rte_mbuf *mbuf; -}; - /* Structure associated with each TX queue. 
*/ struct iavf_tx_queue { const struct rte_memzone *mz; /* memzone for Tx ring */ volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */ - struct iavf_tx_entry *sw_ring; /* address array of SW ring */ + struct ieth_tx_entry *sw_ring; /* address array of SW ring */ uint16_t nb_tx_desc; /* ring length */ uint16_t tx_tail; /* current value of tail */ volatile uint8_t *qtx_tail; /* register address of tail */ diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index a05494891b..79c6b2027a 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -1736,7 +1736,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, { struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; /* bit2 is reserved and must be set to 1 according to Spec */ uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c index 20ce9e2a3a..91f42670db 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c @@ -1847,7 +1847,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue, static __rte_always_inline int iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) { - struct iavf_tx_vec_entry *txep; + struct ieth_vec_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -1960,7 +1960,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry_avx512(struct iavf_tx_vec_entry *txep, +tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -2313,7 +2313,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, { struct 
iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_vec_entry *txep; + struct ieth_vec_tx_entry *txep; uint16_t n, nb_commit, tx_id; /* bit2 is reserved and must be set to 1 according to Spec */ uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; @@ -2380,7 +2380,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, { struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_vec_entry *txep; + struct ieth_vec_tx_entry *txep; uint16_t n, nb_commit, nb_mbuf, tx_id; /* bit2 is reserved and must be set to 1 according to Spec */ uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; @@ -2478,7 +2478,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq) const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */ const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */ - struct iavf_tx_vec_entry *swr = (void *)txq->sw_ring; + struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring; if (!txq->sw_ring || txq->nb_free == max_desc) return; diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index 874e10fd59..b237c9ab93 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -19,7 +19,7 @@ static __rte_always_inline int iavf_tx_free_bufs(struct iavf_tx_queue *txq) { - struct iavf_tx_entry *txep; + struct ieth_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -74,7 +74,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry(struct iavf_tx_entry *txep, +tx_backlog_entry(struct ieth_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index 7c1a1b8fa9..48028c2e32 
100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -1368,7 +1368,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */ uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 91f4943a11..f37dd2fdc1 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -389,7 +389,7 @@ reset_rx_queue(struct ice_rx_queue *rxq) static inline void reset_tx_queue(struct ice_tx_queue *txq) { - struct ice_tx_entry *txe; + struct ieth_tx_entry *txe; uint32_t i, size; uint16_t prev; diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 0c7106c7e0..9faa878caf 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -1028,7 +1028,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq) static void ice_reset_tx_queue(struct ice_tx_queue *txq) { - struct ice_tx_entry *txe; + struct ieth_tx_entry *txe; uint16_t i, prev, size; if (!txq) { @@ -1509,7 +1509,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket(NULL, - sizeof(struct ice_tx_entry) * nb_desc, + sizeof(struct ieth_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (!txq->sw_ring) { @@ -2837,7 +2837,7 @@ ice_txd_enable_checksum(uint64_t ol_flags, static inline int ice_xmit_cleanup(struct ice_tx_queue *txq) { - struct ice_tx_entry *sw_ring = txq->sw_ring; + struct ieth_tx_entry *sw_ring = txq->sw_ring; volatile struct ice_tx_desc *txd = txq->tx_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; @@ -2961,8 +2961,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf 
**tx_pkts, uint16_t nb_pkts) struct ice_tx_queue *txq; volatile struct ice_tx_desc *tx_ring; volatile struct ice_tx_desc *txd; - struct ice_tx_entry *sw_ring; - struct ice_tx_entry *txe, *txn; + struct ieth_tx_entry *sw_ring; + struct ieth_tx_entry *txe, *txn; struct rte_mbuf *tx_pkt; struct rte_mbuf *m_seg; uint32_t cd_tunneling_params; @@ -3184,7 +3184,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) static __rte_always_inline int ice_tx_free_bufs(struct ice_tx_queue *txq) { - struct ice_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t i; if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & @@ -3221,7 +3221,7 @@ static int ice_tx_done_cleanup_full(struct ice_tx_queue *txq, uint32_t free_cnt) { - struct ice_tx_entry *swr_ring = txq->sw_ring; + struct ieth_tx_entry *swr_ring = txq->sw_ring; uint16_t i, tx_last, tx_id; uint16_t nb_tx_free_last; uint16_t nb_tx_to_clean; @@ -3361,7 +3361,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail]; - struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; + struct ieth_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP - 1; int mainpart, leftover; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index 45f25b3609..615bed8a60 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -5,6 +5,7 @@ #ifndef _ICE_RXTX_H_ #define _ICE_RXTX_H_ +#include #include "ice_ethdev.h" #define ICE_ALIGN_RING_DESC 32 @@ -144,21 +145,11 @@ struct ice_rx_queue { bool ts_enable; /* if rxq timestamp is enabled */ }; -struct ice_tx_entry { - struct rte_mbuf *mbuf; - uint16_t next_id; - uint16_t last_id; -}; - -struct ice_vec_tx_entry { - struct rte_mbuf *mbuf; -}; - struct ice_tx_queue { uint16_t nb_tx_desc; /* number of TX descriptors */ rte_iova_t tx_ring_dma; /* TX ring DMA address */ volatile struct 
ice_tx_desc *tx_ring; /* TX ring virtual address */ - struct ice_tx_entry *sw_ring; /* virtual address of SW ring */ + struct ieth_tx_entry *sw_ring; /* virtual address of SW ring */ uint16_t tx_tail; /* current value of tail register */ volatile uint8_t *qtx_tail; /* register address of tail */ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */ diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c index 1a3df29503..190e80a34e 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx2.c +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c @@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; - struct ice_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = ICE_TD_CMD; uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD; diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c index 5e18f23204..5ba6d15ef0 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx512.c +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c @@ -862,7 +862,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue, static __rte_always_inline int ice_tx_free_bufs_avx512(struct ice_tx_queue *txq) { - struct ice_vec_tx_entry *txep; + struct ieth_vec_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -1040,7 +1040,7 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt, } static __rte_always_inline void -ice_tx_backlog_entry_avx512(struct ice_vec_tx_entry *txep, +ice_tx_backlog_entry_avx512(struct ieth_vec_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -1055,7 +1055,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; - struct ice_vec_tx_entry *txep; + struct ieth_vec_tx_entry *txep; uint16_t n, nb_commit, tx_id; 
uint64_t flags = ICE_TD_CMD; uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD; diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 89e45939e7..5c30ecb674 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -15,7 +15,7 @@ static __rte_always_inline int ice_tx_free_bufs_vec(struct ice_tx_queue *txq) { - struct ice_tx_entry *txep; + struct ieth_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -70,7 +70,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq) } static __rte_always_inline void -ice_tx_backlog_entry(struct ice_tx_entry *txep, +ice_tx_backlog_entry(struct ieth_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -135,7 +135,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq) if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 || dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) { - struct ice_vec_tx_entry *swr = (void *)txq->sw_ring; + struct ieth_vec_tx_entry *swr = (void *)txq->sw_ring; if (txq->tx_tail < i) { for (; i < txq->nb_tx_desc; i++) { diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index 9fcd975ed2..1bfed8f310 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; - struct ice_tx_entry *txep; + struct ieth_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = ICE_TD_CMD; uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD; diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c index d451562269..4c8f6b7b64 100644 --- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c +++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c @@ -52,7 +52,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue, struct 
rte_eth_recycle_rxq_info *recycle_rxq_info) { struct ixgbe_tx_queue *txq = tx_queue; - struct ixgbe_tx_entry *txep; + struct ieth_tx_entry *txep; struct rte_mbuf **rxep; int i, n; uint32_t status; diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index 0d42fd8a3b..28dca3fb7b 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -100,7 +100,7 @@ static __rte_always_inline int ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) { - struct ixgbe_tx_entry *txep; + struct ieth_tx_entry *txep; uint32_t status; int i, nb_free = 0; struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ]; @@ -199,7 +199,7 @@ ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]); - struct ixgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]); + struct ieth_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]); const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP-1; int mainpart, leftover; @@ -563,7 +563,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags) static inline int ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) { - struct ixgbe_tx_entry *sw_ring = txq->sw_ring; + struct ieth_tx_entry *sw_ring = txq->sw_ring; volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; @@ -624,8 +624,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct ixgbe_tx_queue *txq; - struct ixgbe_tx_entry *sw_ring; - struct ixgbe_tx_entry *txe, *txn; + struct ieth_tx_entry *sw_ring; + struct ieth_tx_entry *txe, *txn; volatile union ixgbe_adv_tx_desc *txr; volatile union ixgbe_adv_tx_desc *txd, *txp; struct rte_mbuf *tx_pkt; @@ -2352,7 +2352,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq) static int ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt) { - struct ixgbe_tx_entry *swr_ring = 
txq->sw_ring; + struct ieth_tx_entry *swr_ring = txq->sw_ring; uint16_t i, tx_last, tx_id; uint16_t nb_tx_free_last; uint16_t nb_tx_to_clean; @@ -2490,7 +2490,7 @@ static void __rte_cold ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq) { static const union ixgbe_adv_tx_desc zeroed_desc = {{0}}; - struct ixgbe_tx_entry *txe = txq->sw_ring; + struct ieth_tx_entry *txe = txq->sw_ring; uint16_t prev, i; /* Zero out HW ring memory */ @@ -2795,7 +2795,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket("txq->sw_ring", - sizeof(struct ixgbe_tx_entry) * nb_desc, + sizeof(struct ieth_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (txq->sw_ring == NULL) { ixgbe_tx_queue_release(txq); diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 0550c1da60..552dd2b340 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -5,6 +5,8 @@ #ifndef _IXGBE_RXTX_H_ #define _IXGBE_RXTX_H_ +#include + /* * Rings setup and release. * @@ -75,22 +77,6 @@ struct ixgbe_scattered_rx_entry { struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */ }; -/** - * Structure associated with each descriptor of the TX ring of a TX queue. - */ -struct ixgbe_tx_entry { - struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */ - uint16_t next_id; /**< Index of next descriptor in ring. */ - uint16_t last_id; /**< Index of last scattered descriptor. */ -}; - -/** - * Structure associated with each descriptor of the TX ring of a TX queue. - */ -struct ixgbe_tx_entry_v { - struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */ -}; - /** * Structure associated with each RX queue. */ @@ -202,8 +188,8 @@ struct ixgbe_tx_queue { volatile union ixgbe_adv_tx_desc *tx_ring; uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */ union { - struct ixgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. 
*/ - struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */ + struct ieth_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */ + struct ieth_vec_tx_entry *sw_ring_v; /**< address of SW ring for vector PMD */ }; volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */ uint16_t nb_tx_desc; /**< number of TX descriptors. */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h index 275af944f7..d25875935e 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h @@ -14,7 +14,7 @@ static __rte_always_inline int ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) { - struct ixgbe_tx_entry_v *txep; + struct ieth_vec_tx_entry *txep; uint32_t status; uint32_t n; uint32_t i; @@ -69,7 +69,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry(struct ixgbe_tx_entry_v *txep, +tx_backlog_entry(struct ieth_vec_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -82,7 +82,7 @@ static inline void _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq) { unsigned int i; - struct ixgbe_tx_entry_v *txe; + struct ieth_vec_tx_entry *txe; const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc) @@ -149,7 +149,7 @@ static inline void _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq) { static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } }; - struct ixgbe_tx_entry_v *txe = txq->sw_ring_v; + struct ieth_vec_tx_entry *txe = txq->sw_ring_v; uint16_t i; /* Zero out HW ring memory */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index 91ba8036cf..b8edef5228 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -573,7 +573,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ixgbe_tx_queue *txq = 
(struct ixgbe_tx_queue *)tx_queue; volatile union ixgbe_adv_tx_desc *txdp; - struct ixgbe_tx_entry_v *txep; + struct ieth_vec_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = DCMD_DTYP_FLAGS; uint64_t rs = IXGBE_ADVTXD_DCMD_RS | DCMD_DTYP_FLAGS; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index a108a718a8..0a9d21eaf3 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -695,7 +695,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; volatile union ixgbe_adv_tx_desc *txdp; - struct ixgbe_tx_entry_v *txep; + struct ieth_vec_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = DCMD_DTYP_FLAGS; uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS; -- 2.43.0
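For reference, the sketch below illustrates how the common entry types introduced by this patch are used by the drivers' Tx paths. It is a minimal, self-contained example, not code from the patch: `struct rte_mbuf` is stubbed out (the real type comes from `rte_mbuf.h`), and `tx_backlog_entry()` mirrors the per-driver helper pattern that the diff converts to the shared `struct ieth_tx_entry`.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the real DPDK mbuf; illustrative only. */
struct rte_mbuf {
	int refcnt;
};

/* Common scalar Tx entry, as defined in drivers/common/intel_eth/ieth_rxtx.h:
 * one entry per descriptor in the Tx software ring. */
struct ieth_tx_entry {
	struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
	uint16_t next_id;      /* Index of next descriptor in ring. */
	uint16_t last_id;      /* Index of last scattered descriptor. */
};

/* Common vector Tx entry: the vector paths only need the mbuf pointer. */
struct ieth_vec_tx_entry {
	struct rte_mbuf *mbuf;
};

/* The tx_backlog_entry() helper pattern shared by the converted drivers:
 * after a burst is written to the hardware ring, record each packet's mbuf
 * in the software ring so it can be freed once the descriptor completes. */
void
tx_backlog_entry(struct ieth_tx_entry *txep, struct rte_mbuf **tx_pkts,
		uint16_t nb_pkts)
{
	for (uint16_t i = 0; i < nb_pkts; ++i)
		txep[i].mbuf = tx_pkts[i];
}
```

Because every driver now stores the same three fields (and the vector paths just the mbuf pointer), helpers like the one above can eventually live in the common directory instead of being duplicated per driver.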