DPDK patches and discussions
* [RFC 0/4] remove use of VLA
@ 2024-05-23 16:26 Konstantin Ananyev
  2024-05-23 16:26 ` [RFC 1/4] gro: fix overwrite unprocessed packets Konstantin Ananyev
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Konstantin Ananyev @ 2024-05-23 16:26 UTC (permalink / raw)
  To: dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev

From: Konstantin Ananyev <konstantin.ananyev@huawei.com>

To continue the VLA replacement work from this series:
https://patches.dpdk.org/project/dpdk/list/?series=31887
here are a few extra patches that remove VLAs from the gro lib and the
ixgbe and ice PMDs.
DISCLAIMER: I don't have ice or ixgbe HW available on my box, so I
couldn't properly test patches #3 and #4.

Konstantin Ananyev (4):
  gro: fix overwrite unprocessed packets
  gro: remove use of VLAs
  net/ixgbe: remove use of VLAs
  net/ice: remove use of VLAs

 drivers/net/ice/ice_rxtx.c                | 18 ++++++----
 drivers/net/ice/ice_rxtx.h                |  2 ++
 drivers/net/ixgbe/ixgbe_ethdev.c          | 21 +++++++-----
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h |  4 ++-
 lib/gro/rte_gro.c                         | 42 ++++++++---------------
 5 files changed, 45 insertions(+), 42 deletions(-)

-- 
2.35.3


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFC 1/4] gro: fix overwrite unprocessed packets
  2024-05-23 16:26 [RFC 0/4] remove use of VLA Konstantin Ananyev
@ 2024-05-23 16:26 ` Konstantin Ananyev
  2024-06-12  0:48   ` Ferruh Yigit
  2024-05-23 16:26 ` [RFC 2/4] gro: remove use of VLAs Konstantin Ananyev
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Konstantin Ananyev @ 2024-05-23 16:26 UTC (permalink / raw)
  To: dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev, stable

From: Konstantin Ananyev <konstantin.ananyev@huawei.com>

gro_vxlan_tcp4_tbl_timeout_flush() is called without taking into account
that the first entries in pkts[] may already be occupied by
unprocessed packets.
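
For illustration, the fix amounts to writing flushed entries after the
entries already held at the front of the output array, instead of at
index 0. Below is a minimal standalone sketch of that pattern
(hypothetical names, not DPDK code):

```c
#include <assert.h>
#include <string.h>

/* Copy up to 'n' flushed entries into out[], starting AFTER the 'held'
 * unprocessed entries already stored at the front. Writing at &out[0]
 * instead would overwrite them, which is the bug being fixed. */
static unsigned int
flush_into(int *out, unsigned int held, unsigned int cap,
	   const int *flushed, unsigned int n)
{
	unsigned int room = cap - held;

	if (n > room)
		n = room;
	/* write at &out[held], not &out[0], to preserve held entries */
	memcpy(&out[held], flushed, n * sizeof(out[0]));
	return held + n;
}
```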

Fixes: 74080d7dcf31 ("gro: support IPv6 for TCP")
Cc: stable@dpdk.org

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 lib/gro/rte_gro.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
index d824eebd93..db86117609 100644
--- a/lib/gro/rte_gro.c
+++ b/lib/gro/rte_gro.c
@@ -327,7 +327,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 		/* Flush all packets from the tables */
 		if (do_vxlan_tcp_gro) {
 			i += gro_vxlan_tcp4_tbl_timeout_flush(&vxlan_tcp_tbl,
-					0, pkts, nb_pkts);
+					0, &pkts[i], nb_pkts - i);
 		}
 
 		if (do_vxlan_udp_gro) {
-- 
2.35.3


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFC 2/4] gro: remove use of VLAs
  2024-05-23 16:26 [RFC 0/4] remove use of VLA Konstantin Ananyev
  2024-05-23 16:26 ` [RFC 1/4] gro: fix overwrite unprocessed packets Konstantin Ananyev
@ 2024-05-23 16:26 ` Konstantin Ananyev
  2024-06-12  0:48   ` Ferruh Yigit
  2024-05-23 16:26 ` [RFC 3/4] net/ixgbe: " Konstantin Ananyev
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Konstantin Ananyev @ 2024-05-23 16:26 UTC (permalink / raw)
  To: dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev

From: Konstantin Ananyev <konstantin.ananyev@huawei.com>

../lib/gro/rte_gro.c:182:34: warning: variable length array used [-Wvla]
../lib/gro/rte_gro.c:363:34: warning: variable length array used [-Wvla]

In both cases the pattern is the same: we use unprocess_pkts[nb_pkts] to
collect the packets that GRO could not process, and then copy them to
the start of the input/output pkts[] array.
In both cases we can safely copy pkts[i] into an already-processed
entry of the same array, i.e. into pkts[unprocess_num].
This change eliminates the need for the temporary VLA unprocess_pkts[nb_pkts].
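
For illustration, the in-place compaction pattern can be sketched as a
standalone example (hypothetical names, not DPDK code). It is safe
because the write index unprocess_num never gets ahead of the read
index i, so every overwritten slot has already been examined:

```c
#include <assert.h>

/* Compact "unprocessed" entries to the front of the same array,
 * removing the need for a temporary unprocess_pkts[nb_pkts] VLA.
 * Odd values stand in for "could not be merged by GRO". */
static unsigned int
compact_unprocessed(int *pkts, unsigned int nb_pkts)
{
	unsigned int i, unprocess_num = 0;

	for (i = 0; i < nb_pkts; i++) {
		if (pkts[i] % 2 != 0)	/* "reassemble failed" case */
			pkts[unprocess_num++] = pkts[i];
	}
	return unprocess_num;
}
```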

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 lib/gro/rte_gro.c | 40 ++++++++++++++--------------------------
 1 file changed, 14 insertions(+), 26 deletions(-)

diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
index db86117609..6d5aadf32a 100644
--- a/lib/gro/rte_gro.c
+++ b/lib/gro/rte_gro.c
@@ -179,7 +179,6 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 	struct gro_vxlan_udp4_item vxlan_udp_items[RTE_GRO_MAX_BURST_ITEM_NUM]
 			= {{{0}} };
 
-	struct rte_mbuf *unprocess_pkts[nb_pkts];
 	uint32_t item_num;
 	int32_t ret;
 	uint16_t i, unprocess_num = 0, nb_after_gro = nb_pkts;
@@ -275,7 +274,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 				/* Merge successfully */
 				nb_after_gro--;
 			else if (ret < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV4_VXLAN_UDP4_PKT(pkts[i]->packet_type) &&
 				do_vxlan_udp_gro) {
 			ret = gro_vxlan_udp4_reassemble(pkts[i],
@@ -284,7 +283,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 				/* Merge successfully */
 				nb_after_gro--;
 			else if (ret < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
 				do_tcp4_gro) {
 			ret = gro_tcp4_reassemble(pkts[i], &tcp_tbl, 0);
@@ -292,7 +291,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 				/* merge successfully */
 				nb_after_gro--;
 			else if (ret < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV4_UDP_PKT(pkts[i]->packet_type) &&
 				do_udp4_gro) {
 			ret = gro_udp4_reassemble(pkts[i], &udp_tbl, 0);
@@ -300,7 +299,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 				/* merge successfully */
 				nb_after_gro--;
 			else if (ret < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV6_TCP_PKT(pkts[i]->packet_type) &&
 				do_tcp6_gro) {
 			ret = gro_tcp6_reassemble(pkts[i], &tcp6_tbl, 0);
@@ -308,21 +307,15 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 				/* merge successfully */
 				nb_after_gro--;
 			else if (ret < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else
-			unprocess_pkts[unprocess_num++] = pkts[i];
+			pkts[unprocess_num++] = pkts[i];
 	}
 
 	if ((nb_after_gro < nb_pkts)
 		 || (unprocess_num < nb_pkts)) {
-		i = 0;
-		/* Copy unprocessed packets */
-		if (unprocess_num > 0) {
-			memcpy(&pkts[i], unprocess_pkts,
-					sizeof(struct rte_mbuf *) *
-					unprocess_num);
-			i = unprocess_num;
-		}
+
+		i = unprocess_num;
 
 		/* Flush all packets from the tables */
 		if (do_vxlan_tcp_gro) {
@@ -360,7 +353,6 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
 		uint16_t nb_pkts,
 		void *ctx)
 {
-	struct rte_mbuf *unprocess_pkts[nb_pkts];
 	struct gro_ctx *gro_ctx = ctx;
 	void *tcp_tbl, *udp_tbl, *vxlan_tcp_tbl, *vxlan_udp_tbl, *tcp6_tbl;
 	uint64_t current_time;
@@ -396,33 +388,29 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
 				do_vxlan_tcp_gro) {
 			if (gro_vxlan_tcp4_reassemble(pkts[i], vxlan_tcp_tbl,
 						current_time) < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV4_VXLAN_UDP4_PKT(pkts[i]->packet_type) &&
 				do_vxlan_udp_gro) {
 			if (gro_vxlan_udp4_reassemble(pkts[i], vxlan_udp_tbl,
 						current_time) < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
 				do_tcp4_gro) {
 			if (gro_tcp4_reassemble(pkts[i], tcp_tbl,
 						current_time) < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV4_UDP_PKT(pkts[i]->packet_type) &&
 				do_udp4_gro) {
 			if (gro_udp4_reassemble(pkts[i], udp_tbl,
 						current_time) < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else if (IS_IPV6_TCP_PKT(pkts[i]->packet_type) &&
 				do_tcp6_gro) {
 			if (gro_tcp6_reassemble(pkts[i], tcp6_tbl,
 						current_time) < 0)
-				unprocess_pkts[unprocess_num++] = pkts[i];
+				pkts[unprocess_num++] = pkts[i];
 		} else
-			unprocess_pkts[unprocess_num++] = pkts[i];
-	}
-	if (unprocess_num > 0) {
-		memcpy(pkts, unprocess_pkts, sizeof(struct rte_mbuf *) *
-				unprocess_num);
+			pkts[unprocess_num++] = pkts[i];
 	}
 
 	return unprocess_num;
-- 
2.35.3


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFC 3/4] net/ixgbe: remove use of VLAs
  2024-05-23 16:26 [RFC 0/4] remove use of VLA Konstantin Ananyev
  2024-05-23 16:26 ` [RFC 1/4] gro: fix overwrite unprocessed packets Konstantin Ananyev
  2024-05-23 16:26 ` [RFC 2/4] gro: remove use of VLAs Konstantin Ananyev
@ 2024-05-23 16:26 ` Konstantin Ananyev
  2024-06-12  1:00   ` Ferruh Yigit
  2024-05-23 16:26 ` [RFC 4/4] net/ice: " Konstantin Ananyev
  2024-06-12  1:14 ` [RFC 0/4] remove use of VLA Ferruh Yigit
  4 siblings, 1 reply; 15+ messages in thread
From: Konstantin Ananyev @ 2024-05-23 16:26 UTC (permalink / raw)
  To: dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev

From: Konstantin Ananyev <konstantin.ananyev@huawei.com>

1) ../drivers/net/ixgbe/ixgbe_ethdev.c:3556:46: warning: variable length array used [-Wvla]
2) ../drivers/net/ixgbe/ixgbe_ethdev.c:3739:23: warning: variable length array used [-Wvla]
3) ../drivers/net/ixgbe/ixgbe_rxtx_vec_common.h:17:24: warning: variable length array used [-Wvla]

For the first two cases: ixgbe_xstats_calc_num() always returns a
constant value, so it should be safe to replace that function invocation
with a macro that performs the same calculation.
For case #3: reassemble_packets() is invoked only by
ixgbe_recv_scattered_burst_vec(), which in turn operates on a fixed
maximum number of packets per call: RTE_IXGBE_MAX_RX_BURST.
So it should be safe to replace the VLA with a fixed-size array.
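
For illustration, the fixed-size-array-plus-assert pattern used for
case #3 can be sketched as follows (hypothetical names; ARRAY_DIM
stands in for RTE_DIM, MAX_RX_BURST for RTE_IXGBE_MAX_RX_BURST):

```c
#include <assert.h>

#define MAX_RX_BURST	32	/* stand-in for RTE_IXGBE_MAX_RX_BURST */
#define ARRAY_DIM(a)	(sizeof(a) / sizeof((a)[0]))	/* like RTE_DIM */

/* Uses a fixed-size array instead of pkts[nb_bufs]; the assert
 * documents and enforces the caller contract on nb_bufs. */
static int
sum_burst(const int *bufs, unsigned int nb_bufs)
{
	int pkts[MAX_RX_BURST];	/* fixed size replaces the VLA */
	unsigned int i;
	int sum = 0;

	assert(nb_bufs <= ARRAY_DIM(pkts));

	for (i = 0; i < nb_bufs; i++) {
		pkts[i] = bufs[i];
		sum += pkts[i];
	}
	return sum;
}
```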

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c          | 21 +++++++++++++--------
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h |  4 +++-
 2 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index bbdc996f31..7843cd41d3 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3432,11 +3432,16 @@ ixgbe_dev_stats_reset(struct rte_eth_dev *dev)
 }
 
 /* This function calculates the number of xstats based on the current config */
+
+#define IXGBE_XSTATS_CALC_NUM	\
+	(IXGBE_NB_HW_STATS + IXGBE_NB_MACSEC_STATS + \
+	(IXGBE_NB_RXQ_PRIO_STATS * IXGBE_NB_RXQ_PRIO_VALUES) + \
+	(IXGBE_NB_TXQ_PRIO_STATS * IXGBE_NB_TXQ_PRIO_VALUES))
+
 static unsigned
-ixgbe_xstats_calc_num(void) {
-	return IXGBE_NB_HW_STATS + IXGBE_NB_MACSEC_STATS +
-		(IXGBE_NB_RXQ_PRIO_STATS * IXGBE_NB_RXQ_PRIO_VALUES) +
-		(IXGBE_NB_TXQ_PRIO_STATS * IXGBE_NB_TXQ_PRIO_VALUES);
+ixgbe_xstats_calc_num(void)
+{
+	return IXGBE_XSTATS_CALC_NUM;
 }
 
 static int ixgbe_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
@@ -3552,8 +3557,8 @@ static int ixgbe_dev_xstats_get_names_by_id(
 	}
 
 	uint16_t i;
-	uint16_t size = ixgbe_xstats_calc_num();
-	struct rte_eth_xstat_name xstats_names_copy[size];
+	struct rte_eth_xstat_name xstats_names_copy[IXGBE_XSTATS_CALC_NUM];
+	const uint16_t size = RTE_DIM(xstats_names_copy);
 
 	ixgbe_dev_xstats_get_names_by_id(dev, NULL, xstats_names_copy,
 			size);
@@ -3735,8 +3740,8 @@ ixgbe_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	}
 
 	uint16_t i;
-	uint16_t size = ixgbe_xstats_calc_num();
-	uint64_t values_copy[size];
+	uint64_t values_copy[IXGBE_XSTATS_CALC_NUM];
+	const uint16_t size = RTE_DIM(values_copy);
 
 	ixgbe_dev_xstats_get_by_id(dev, NULL, values_copy, size);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index a4d9ec9b08..c1cf0a581a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -14,11 +14,13 @@ static inline uint16_t
 reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
 		   uint16_t nb_bufs, uint8_t *split_flags)
 {
-	struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
+	struct rte_mbuf *pkts[RTE_IXGBE_MAX_RX_BURST]; /*finished pkts*/
 	struct rte_mbuf *start = rxq->pkt_first_seg;
 	struct rte_mbuf *end =  rxq->pkt_last_seg;
 	unsigned int pkt_idx, buf_idx;
 
+	RTE_ASSERT(nb_bufs <= RTE_DIM(pkts));
+
 	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
 		if (end != NULL) {
 			/* processing a split packet */
-- 
2.35.3


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFC 4/4] net/ice: remove use of VLAs
  2024-05-23 16:26 [RFC 0/4] remove use of VLA Konstantin Ananyev
                   ` (2 preceding siblings ...)
  2024-05-23 16:26 ` [RFC 3/4] net/ixgbe: " Konstantin Ananyev
@ 2024-05-23 16:26 ` Konstantin Ananyev
  2024-06-12  1:12   ` Ferruh Yigit
  2024-06-12  1:14 ` [RFC 0/4] remove use of VLA Ferruh Yigit
  4 siblings, 1 reply; 15+ messages in thread
From: Konstantin Ananyev @ 2024-05-23 16:26 UTC (permalink / raw)
  To: dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev

From: Konstantin Ananyev <konstantin.ananyev@huawei.com>

../drivers/net/ice/ice_rxtx.c:1871:29: warning: variable length array used [-Wvla]

Here a VLA is used as a temporary array for mbufs that will be used as
split RX data buffers.
Since at any given time only one thread can do RX from a particular
queue, we can allocate extra space for that array at rx_queue_setup()
and then safely use it in the RX fast-path.
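
For illustration, the setup-time scratch allocation can be sketched as
a standalone example (hypothetical names and plain calloc(), not the
actual driver code, which carves the space out of sw_ring):

```c
#include <assert.h>
#include <stdlib.h>

struct rxq {
	unsigned int rx_free_thresh;
	void **sw_split_buf;	/* scratch area; NULL when split is off */
};

/* Allocate the per-queue scratch array once at setup time, so the
 * single-threaded RX fast-path can reuse it instead of a VLA. */
static int
rxq_setup(struct rxq *q, unsigned int rx_free_thresh, int split_enabled)
{
	q->rx_free_thresh = rx_free_thresh;
	q->sw_split_buf = NULL;
	if (split_enabled) {
		q->sw_split_buf = calloc(rx_free_thresh,
					 sizeof(*q->sw_split_buf));
		if (q->sw_split_buf == NULL)
			return -1;	/* -ENOMEM in a real driver */
	}
	return 0;
}
```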

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 drivers/net/ice/ice_rxtx.c | 18 ++++++++++++------
 drivers/net/ice/ice_rxtx.h |  2 ++
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 95a2db3432..6395a3b50a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1171,7 +1171,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 	struct ice_vsi *vsi = pf->main_vsi;
 	struct ice_rx_queue *rxq;
 	const struct rte_memzone *rz;
-	uint32_t ring_size;
+	uint32_t ring_size, tlen;
 	uint16_t len;
 	int use_def_burst_func = 1;
 	uint64_t offloads;
@@ -1279,9 +1279,14 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 	/* always reserve more for bulk alloc */
 	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
 
+	/* allocate extra entries for SW split buffer */
+	tlen = ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) ?
+		rxq->rx_free_thresh : 0;
+	tlen += len;
+
 	/* Allocate the software ring. */
 	rxq->sw_ring = rte_zmalloc_socket(NULL,
-					  sizeof(struct ice_rx_entry) * len,
+					  sizeof(struct ice_rx_entry) * tlen,
 					  RTE_CACHE_LINE_SIZE,
 					  socket_id);
 	if (!rxq->sw_ring) {
@@ -1290,6 +1295,8 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
+	rxq->sw_split_buf = (tlen == len) ? NULL : rxq->sw_ring + len;
+
 	ice_reset_rx_queue(rxq);
 	rxq->q_set = true;
 	dev->data->rx_queues[queue_idx] = rxq;
@@ -1868,7 +1875,6 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
 	uint64_t dma_addr;
 	int diag, diag_pay;
 	uint64_t pay_addr;
-	struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh];
 
 	/* Allocate buffers in bulk */
 	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
@@ -1883,7 +1889,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
 
 	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
 		diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp,
-				(void *)mbufs_pay, rxq->rx_free_thresh);
+				(void *)rxq->sw_split_buf, rxq->rx_free_thresh);
 		if (unlikely(diag_pay != 0)) {
 			PMD_RX_LOG(ERR, "Failed to get payload mbufs in bulk");
 			return -ENOMEM;
@@ -1908,8 +1914,8 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
 			rxdp[i].read.hdr_addr = 0;
 			rxdp[i].read.pkt_addr = dma_addr;
 		} else {
-			mb->next = mbufs_pay[i];
-			pay_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbufs_pay[i]));
+			mb->next = rxq->sw_split_buf[i].mbuf;
+			pay_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb->next));
 			rxdp[i].read.hdr_addr = dma_addr;
 			rxdp[i].read.pkt_addr = pay_addr;
 		}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index f7276cfc9f..d0f0b6c1d2 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -139,6 +139,8 @@ struct ice_rx_queue {
 	uint32_t hw_time_high; /* high 32 bits of timestamp */
 	uint32_t hw_time_low; /* low 32 bits of timestamp */
 	uint64_t hw_time_update; /* SW time of HW record updating */
+	struct ice_rx_entry *sw_split_buf;
+	/* address of temp buffer for RX split mbufs */
 	struct rte_eth_rxseg_split rxseg[ICE_RX_MAX_NSEG];
 	uint32_t rxseg_nb;
 	bool ts_enable; /* if rxq timestamp is enabled */
-- 
2.35.3


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 1/4] gro: fix overwrite unprocessed packets
  2024-05-23 16:26 ` [RFC 1/4] gro: fix overwrite unprocessed packets Konstantin Ananyev
@ 2024-06-12  0:48   ` Ferruh Yigit
  0 siblings, 0 replies; 15+ messages in thread
From: Ferruh Yigit @ 2024-06-12  0:48 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev, stable, kumaraparamesh92

On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 
> gro_vxlan_tcp4_tbl_timeout_flush() is called without taking into account
> that the first entries in pkts[] may already be occupied by
> unprocessed packets.
> 
> Fixes: 74080d7dcf31 ("gro: support IPv6 for TCP")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> ---
>  lib/gro/rte_gro.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
> index d824eebd93..db86117609 100644
> --- a/lib/gro/rte_gro.c
> +++ b/lib/gro/rte_gro.c
> @@ -327,7 +327,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  		/* Flush all packets from the tables */
>  		if (do_vxlan_tcp_gro) {
>  			i += gro_vxlan_tcp4_tbl_timeout_flush(&vxlan_tcp_tbl,
> -					0, pkts, nb_pkts);
> +					0, &pkts[i], nb_pkts - i);
>  		}
>  
>  		if (do_vxlan_udp_gro) {

Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 2/4] gro: remove use of VLAs
  2024-05-23 16:26 ` [RFC 2/4] gro: remove use of VLAs Konstantin Ananyev
@ 2024-06-12  0:48   ` Ferruh Yigit
  2024-06-13 10:20     ` Konstantin Ananyev
  0 siblings, 1 reply; 15+ messages in thread
From: Ferruh Yigit @ 2024-06-12  0:48 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev, kumaraparamesh92

On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 
> ../lib/gro/rte_gro.c:182:34: warning: variable length array used [-Wvla]
> ../lib/gro/rte_gro.c:363:34: warning: variable length array used [-Wvla]
> 
> In both cases the pattern is the same: we use unprocess_pkts[nb_pkts] to
> collect the packets that GRO could not process, and then copy them to
> the start of the input/output pkts[] array.
> In both cases we can safely copy pkts[i] into an already-processed
> entry of the same array, i.e. into pkts[unprocess_num].
> This change eliminates the need for the temporary VLA unprocess_pkts[nb_pkts].
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> ---
>  lib/gro/rte_gro.c | 40 ++++++++++++++--------------------------
>  1 file changed, 14 insertions(+), 26 deletions(-)
> 
> diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
> index db86117609..6d5aadf32a 100644
> --- a/lib/gro/rte_gro.c
> +++ b/lib/gro/rte_gro.c
> @@ -179,7 +179,6 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  	struct gro_vxlan_udp4_item vxlan_udp_items[RTE_GRO_MAX_BURST_ITEM_NUM]
>  			= {{{0}} };
>  
> -	struct rte_mbuf *unprocess_pkts[nb_pkts];
>  	uint32_t item_num;
>  	int32_t ret;
>  	uint16_t i, unprocess_num = 0, nb_after_gro = nb_pkts;
> @@ -275,7 +274,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  				/* Merge successfully */
>  				nb_after_gro--;
>  			else if (ret < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV4_VXLAN_UDP4_PKT(pkts[i]->packet_type) &&
>  				do_vxlan_udp_gro) {
>  			ret = gro_vxlan_udp4_reassemble(pkts[i],
> @@ -284,7 +283,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  				/* Merge successfully */
>  				nb_after_gro--;
>  			else if (ret < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
>  				do_tcp4_gro) {
>  			ret = gro_tcp4_reassemble(pkts[i], &tcp_tbl, 0);
> @@ -292,7 +291,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  				/* merge successfully */
>  				nb_after_gro--;
>  			else if (ret < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV4_UDP_PKT(pkts[i]->packet_type) &&
>  				do_udp4_gro) {
>  			ret = gro_udp4_reassemble(pkts[i], &udp_tbl, 0);
> @@ -300,7 +299,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  				/* merge successfully */
>  				nb_after_gro--;
>  			else if (ret < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV6_TCP_PKT(pkts[i]->packet_type) &&
>  				do_tcp6_gro) {
>  			ret = gro_tcp6_reassemble(pkts[i], &tcp6_tbl, 0);
> @@ -308,21 +307,15 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  				/* merge successfully */
>  				nb_after_gro--;
>  			else if (ret < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else
> -			unprocess_pkts[unprocess_num++] = pkts[i];
> +			pkts[unprocess_num++] = pkts[i];
>  	}
>  
>  	if ((nb_after_gro < nb_pkts)
>  		 || (unprocess_num < nb_pkts)) {
> -		i = 0;
> -		/* Copy unprocessed packets */
> -		if (unprocess_num > 0) {
> -			memcpy(&pkts[i], unprocess_pkts,
> -					sizeof(struct rte_mbuf *) *
> -					unprocess_num);
> -			i = unprocess_num;
> -		}
> +
> +		i = unprocess_num;
>  
>  		/* Flush all packets from the tables */
>  		if (do_vxlan_tcp_gro) {
>

Ack to reusing the 'pkts[]' buffer for unprocessed packets, that should work.

But as a more general GRO question: the 'rte_gro_reassemble_burst()'
function above seems to return 'nb_after_gro', and as far as I can see
that many mbufs sit in 'pkts[]'.
When packets are flushed from the tables, the flushed packets are placed
into 'pkts[]', but 'nb_after_gro' is still returned, so there is no way
for the application to know that more than 'nb_after_gro' mbufs are
available in 'pkts[]'. Shouldn't the return value be increased for each
flushed packet?

Ahh, I can see this was the case before, but it was changed (perhaps
broken) in commit:
74080d7dcf31 ("gro: support IPv6 for TCP")


I wonder when GRO was last tested!
@Jiayu, did you have a chance to test GRO recently?


> @@ -360,7 +353,6 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
>  		uint16_t nb_pkts,
>  		void *ctx)
>  {
> -	struct rte_mbuf *unprocess_pkts[nb_pkts];
>  	struct gro_ctx *gro_ctx = ctx;
>  	void *tcp_tbl, *udp_tbl, *vxlan_tcp_tbl, *vxlan_udp_tbl, *tcp6_tbl;
>  	uint64_t current_time;
> @@ -396,33 +388,29 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
>  				do_vxlan_tcp_gro) {
>  			if (gro_vxlan_tcp4_reassemble(pkts[i], vxlan_tcp_tbl,
>  						current_time) < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV4_VXLAN_UDP4_PKT(pkts[i]->packet_type) &&
>  				do_vxlan_udp_gro) {
>  			if (gro_vxlan_udp4_reassemble(pkts[i], vxlan_udp_tbl,
>  						current_time) < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
>  				do_tcp4_gro) {
>  			if (gro_tcp4_reassemble(pkts[i], tcp_tbl,
>  						current_time) < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV4_UDP_PKT(pkts[i]->packet_type) &&
>  				do_udp4_gro) {
>  			if (gro_udp4_reassemble(pkts[i], udp_tbl,
>  						current_time) < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else if (IS_IPV6_TCP_PKT(pkts[i]->packet_type) &&
>  				do_tcp6_gro) {
>  			if (gro_tcp6_reassemble(pkts[i], tcp6_tbl,
>  						current_time) < 0)
> -				unprocess_pkts[unprocess_num++] = pkts[i];
> +				pkts[unprocess_num++] = pkts[i];
>  		} else
> -			unprocess_pkts[unprocess_num++] = pkts[i];
> -	}
> -	if (unprocess_num > 0) {
> -		memcpy(pkts, unprocess_pkts, sizeof(struct rte_mbuf *) *
> -				unprocess_num);
> +			pkts[unprocess_num++] = pkts[i];
> 

ack


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 3/4] net/ixgbe: remove use of VLAs
  2024-05-23 16:26 ` [RFC 3/4] net/ixgbe: " Konstantin Ananyev
@ 2024-06-12  1:00   ` Ferruh Yigit
  0 siblings, 0 replies; 15+ messages in thread
From: Ferruh Yigit @ 2024-06-12  1:00 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev

On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 
> 1) ../drivers/net/ixgbe/ixgbe_ethdev.c:3556:46: warning: variable length array used [-Wvla]
> 2) ../drivers/net/ixgbe/ixgbe_ethdev.c:3739:23: warning: variable length array used [-Wvla]
> 3) ../drivers/net/ixgbe/ixgbe_rxtx_vec_common.h:17:24: warning: variable length array used [-Wvla]
> 
> For the first two cases: ixgbe_xstats_calc_num() always returns a
> constant value, so it should be safe to replace that function invocation
> with a macro that performs the same calculation.
>

ack


> For case #3: reassemble_packets() is invoked only by
> ixgbe_recv_scattered_burst_vec(), which in turn operates on a fixed
> maximum number of packets per call: RTE_IXGBE_MAX_RX_BURST.
> So it should be safe to replace the VLA with a fixed-size array.
>

Ack

I see you already added an assert, +1.
Do you think adding a comment that the caller calls with at most
RTE_IXGBE_MAX_RX_BURST packets would help?

> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c          | 21 +++++++++++++--------
>  drivers/net/ixgbe/ixgbe_rxtx_vec_common.h |  4 +++-
>  2 files changed, 16 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index bbdc996f31..7843cd41d3 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -3432,11 +3432,16 @@ ixgbe_dev_stats_reset(struct rte_eth_dev *dev)
>  }
>  
>  /* This function calculates the number of xstats based on the current config */
> +
> +#define IXGBE_XSTATS_CALC_NUM	\
> +	(IXGBE_NB_HW_STATS + IXGBE_NB_MACSEC_STATS + \
> +	(IXGBE_NB_RXQ_PRIO_STATS * IXGBE_NB_RXQ_PRIO_VALUES) + \
> +	(IXGBE_NB_TXQ_PRIO_STATS * IXGBE_NB_TXQ_PRIO_VALUES))
> +
>  static unsigned
> -ixgbe_xstats_calc_num(void) {
> -	return IXGBE_NB_HW_STATS + IXGBE_NB_MACSEC_STATS +
> -		(IXGBE_NB_RXQ_PRIO_STATS * IXGBE_NB_RXQ_PRIO_VALUES) +
> -		(IXGBE_NB_TXQ_PRIO_STATS * IXGBE_NB_TXQ_PRIO_VALUES);
> +ixgbe_xstats_calc_num(void)
> +{
> +	return IXGBE_XSTATS_CALC_NUM;
>  }
>  
>  static int ixgbe_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
> @@ -3552,8 +3557,8 @@ static int ixgbe_dev_xstats_get_names_by_id(
>  	}
>  
>  	uint16_t i;
> -	uint16_t size = ixgbe_xstats_calc_num();
> -	struct rte_eth_xstat_name xstats_names_copy[size];
> +	struct rte_eth_xstat_name xstats_names_copy[IXGBE_XSTATS_CALC_NUM];
> +	const uint16_t size = RTE_DIM(xstats_names_copy);
>  
>  	ixgbe_dev_xstats_get_names_by_id(dev, NULL, xstats_names_copy,
>  			size);
> @@ -3735,8 +3740,8 @@ ixgbe_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
>  	}
>  
>  	uint16_t i;
> -	uint16_t size = ixgbe_xstats_calc_num();
> -	uint64_t values_copy[size];
> +	uint64_t values_copy[IXGBE_XSTATS_CALC_NUM];
> +	const uint16_t size = RTE_DIM(values_copy);
>  
>  	ixgbe_dev_xstats_get_by_id(dev, NULL, values_copy, size);
>  
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
> index a4d9ec9b08..c1cf0a581a 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
> @@ -14,11 +14,13 @@ static inline uint16_t
>  reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>  		   uint16_t nb_bufs, uint8_t *split_flags)
>  {
> -	struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
> +	struct rte_mbuf *pkts[RTE_IXGBE_MAX_RX_BURST]; /*finished pkts*/
>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>  	struct rte_mbuf *end =  rxq->pkt_last_seg;
>  	unsigned int pkt_idx, buf_idx;
>  
> +	RTE_ASSERT(nb_bufs <= RTE_DIM(pkts));
> +
>  	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>  		if (end != NULL) {
>  			/* processing a split packet */


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 4/4] net/ice: remove use of VLAs
  2024-05-23 16:26 ` [RFC 4/4] net/ice: " Konstantin Ananyev
@ 2024-06-12  1:12   ` Ferruh Yigit
  2024-06-13 10:32     ` Konstantin Ananyev
  0 siblings, 1 reply; 15+ messages in thread
From: Ferruh Yigit @ 2024-06-12  1:12 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, Konstantin Ananyev

On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 
> ../drivers/net/ice/ice_rxtx.c:1871:29: warning: variable length array used [-Wvla]
> 
> Here a VLA is used as a temporary array for mbufs that will be used as
> split RX data buffers.
> Since at any given time only one thread can do RX from a particular
> queue, we can allocate extra space for that array at rx_queue_setup()
> and then safely use it in the RX fast-path.
> 

Is there a reason to allocate extra space in sw_ring and use part of it
for the split buffer, instead of allocating a new buffer for it?

> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> ---
>  drivers/net/ice/ice_rxtx.c | 18 ++++++++++++------
>  drivers/net/ice/ice_rxtx.h |  2 ++
>  2 files changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 95a2db3432..6395a3b50a 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -1171,7 +1171,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>  	struct ice_vsi *vsi = pf->main_vsi;
>  	struct ice_rx_queue *rxq;
>  	const struct rte_memzone *rz;
> -	uint32_t ring_size;
> +	uint32_t ring_size, tlen;
>  	uint16_t len;
>  	int use_def_burst_func = 1;
>  	uint64_t offloads;
> @@ -1279,9 +1279,14 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>  	/* always reserve more for bulk alloc */
>  	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
>  
> +	/* allocate extra entries for SW split buffer */
> +	tlen = ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) ?
> +		rxq->rx_free_thresh : 0;
> +	tlen += len;
> +
>  	/* Allocate the software ring. */
>  	rxq->sw_ring = rte_zmalloc_socket(NULL,
> -					  sizeof(struct ice_rx_entry) * len,
> +					  sizeof(struct ice_rx_entry) * tlen,
>  					  RTE_CACHE_LINE_SIZE,
>  					  socket_id);
>  	if (!rxq->sw_ring) {
> @@ -1290,6 +1295,8 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>  		return -ENOMEM;
>  	}
>  
> +	rxq->sw_split_buf = (tlen == len) ? NULL : rxq->sw_ring + len;
> +
>  	ice_reset_rx_queue(rxq);
>  	rxq->q_set = true;
>  	dev->data->rx_queues[queue_idx] = rxq;
> @@ -1868,7 +1875,6 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
>  	uint64_t dma_addr;
>  	int diag, diag_pay;
>  	uint64_t pay_addr;
> -	struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh];
>  
>  	/* Allocate buffers in bulk */
>  	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
> @@ -1883,7 +1889,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
>  
>  	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
>  		diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp,
> -				(void *)mbufs_pay, rxq->rx_free_thresh);
> +				(void *)rxq->sw_split_buf, rxq->rx_free_thresh);
>

Are we allowed to call 'rte_mempool_get_bulk()' with NULL object_table,
as 'rxq->sw_split_buf' can be NULL?
Perhaps we can allocate 'rxq->sw_split_buf' even when buffer split
offload is not enabled?


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 0/4] remove use of VLA
  2024-05-23 16:26 [RFC 0/4] remove use of VLA Konstantin Ananyev
                   ` (3 preceding siblings ...)
  2024-05-23 16:26 ` [RFC 4/4] net/ice: " Konstantin Ananyev
@ 2024-06-12  1:14 ` Ferruh Yigit
  2024-06-13 10:43   ` Konstantin Ananyev
  4 siblings, 1 reply; 15+ messages in thread
From: Ferruh Yigit @ 2024-06-12  1:14 UTC (permalink / raw)
  To: Konstantin Ananyev, dev, bruce.richardson, Mcnamara, John
  Cc: hujiayu.hu, roretzla, anatoly.burakov, vladimir.medvedkin,
	Konstantin Ananyev

On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 
> To continue further work on VLA replacement for these series:
> https://patches.dpdk.org/project/dpdk/list/?series=31887
> extra few patches to remove VLA for gro lib and ixgbe and ice PMDs.
> DISCLAIMER: I don't have ice and ixgbe HW available on my box, so
> didn't make a proper testing for patches #3,4.
> 
> Konstantin Ananyev (4):
>   gro: fix overwrite unprocessed packets
>   gro: remove use of VLAs
>   net/ixgbe: remove use of VLAs
>   net/ice: remove use of VLAs
>

Hi Konstantin,

I guess this is sent as an RFC because the set is not tested?
If so, @Bruce, @John, can you please help testing this set?

Btw, it fixes some GRO issues, so additional GRO tests would also help if possible.

Thanks,
ferruh


^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [RFC 2/4] gro: remove use of VLAs
  2024-06-12  0:48   ` Ferruh Yigit
@ 2024-06-13 10:20     ` Konstantin Ananyev
  2024-06-14 15:11       ` Ferruh Yigit
  0 siblings, 1 reply; 15+ messages in thread
From: Konstantin Ananyev @ 2024-06-13 10:20 UTC (permalink / raw)
  To: Ferruh Yigit, Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, kumaraparamesh92



> 
> On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> > From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> >
> > ../lib/gro/rte_gro.c:182:34: warning: variable length array used [-Wvla]
> > ../lib/gro/rte_gro.c:363:34: warning: variable length array used [-Wvla]
> >
> > In both cases the pattern is the same: we use unprocess_pkts[nb_pkts] to
> > collect un-used by GRO packets, and then copy them to the start of
> > input/output pkts[] array.
> > In both cases, we can safely copy pkts[i] into already
> > processed entry at the same array, i.e. into pkts[unprocess_num].
> > Such change eliminates need of temporary VLA: unprocess_pkts[nb_pkts].
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> > ---
> >  lib/gro/rte_gro.c | 40 ++++++++++++++--------------------------
> >  1 file changed, 14 insertions(+), 26 deletions(-)
> >
> > diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
> > index db86117609..6d5aadf32a 100644
> > --- a/lib/gro/rte_gro.c
> > +++ b/lib/gro/rte_gro.c
> > @@ -179,7 +179,6 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
> >  	struct gro_vxlan_udp4_item vxlan_udp_items[RTE_GRO_MAX_BURST_ITEM_NUM]
> >  			= {{{0}} };
> >
> > -	struct rte_mbuf *unprocess_pkts[nb_pkts];
> >  	uint32_t item_num;
> >  	int32_t ret;
> >  	uint16_t i, unprocess_num = 0, nb_after_gro = nb_pkts;
> > @@ -275,7 +274,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
> >  				/* Merge successfully */
> >  				nb_after_gro--;
> >  			else if (ret < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV4_VXLAN_UDP4_PKT(pkts[i]->packet_type) &&
> >  				do_vxlan_udp_gro) {
> >  			ret = gro_vxlan_udp4_reassemble(pkts[i],
> > @@ -284,7 +283,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
> >  				/* Merge successfully */
> >  				nb_after_gro--;
> >  			else if (ret < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
> >  				do_tcp4_gro) {
> >  			ret = gro_tcp4_reassemble(pkts[i], &tcp_tbl, 0);
> > @@ -292,7 +291,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
> >  				/* merge successfully */
> >  				nb_after_gro--;
> >  			else if (ret < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV4_UDP_PKT(pkts[i]->packet_type) &&
> >  				do_udp4_gro) {
> >  			ret = gro_udp4_reassemble(pkts[i], &udp_tbl, 0);
> > @@ -300,7 +299,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
> >  				/* merge successfully */
> >  				nb_after_gro--;
> >  			else if (ret < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV6_TCP_PKT(pkts[i]->packet_type) &&
> >  				do_tcp6_gro) {
> >  			ret = gro_tcp6_reassemble(pkts[i], &tcp6_tbl, 0);
> > @@ -308,21 +307,15 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
> >  				/* merge successfully */
> >  				nb_after_gro--;
> >  			else if (ret < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else
> > -			unprocess_pkts[unprocess_num++] = pkts[i];
> > +			pkts[unprocess_num++] = pkts[i];
> >  	}
> >
> >  	if ((nb_after_gro < nb_pkts)
> >  		 || (unprocess_num < nb_pkts)) {
> > -		i = 0;
> > -		/* Copy unprocessed packets */
> > -		if (unprocess_num > 0) {
> > -			memcpy(&pkts[i], unprocess_pkts,
> > -					sizeof(struct rte_mbuf *) *
> > -					unprocess_num);
> > -			i = unprocess_num;
> > -		}
> > +
> > +		i = unprocess_num;
> >
> >  		/* Flush all packets from the tables */
> >  		if (do_vxlan_tcp_gro) {
> >
> 
> ack to re-use 'pkts[]' buffer for unprocessed packets, that should work.
> 
> But as a more general GRO question, the 'rte_gro_reassemble_burst()'
> function above seems to return 'nb_after_gro', and as far as I can see
> that amount of mbufs sits in the 'pkts[]'.
> When packets are flushed from the tables, the flushed packets are
> placed into 'pkts[]' but still 'nb_after_gro' is returned, so there is
> no way for the application to know that more than 'nb_after_gro' mbufs
> are available in the 'pkts[]'. Shouldn't the return value be increased
> per flushed packet?
> 
> Ahh, I can see it was the case before, but it is updated (perhaps
> broken) in commit:
> 74080d7dcf31 ("gro: support IPv6 for TCP")

Actually, my first thought was that we should return 'i' here,
but then, looking at the code more carefully, I realized that it is correct:
nb_after_gro would contain the valid number of packets
(at least I wasn't able to find a case when it wouldn't).
Though it wasn't very obvious to me at first, so an extra
comment wouldn't hurt here.

> 
> 
> I wonder when GRO was last tested!
> @Jiayu, did you have a chance to test GRO recently?
> 
> 
> > @@ -360,7 +353,6 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
> >  		uint16_t nb_pkts,
> >  		void *ctx)
> >  {
> > -	struct rte_mbuf *unprocess_pkts[nb_pkts];
> >  	struct gro_ctx *gro_ctx = ctx;
> >  	void *tcp_tbl, *udp_tbl, *vxlan_tcp_tbl, *vxlan_udp_tbl, *tcp6_tbl;
> >  	uint64_t current_time;
> > @@ -396,33 +388,29 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
> >  				do_vxlan_tcp_gro) {
> >  			if (gro_vxlan_tcp4_reassemble(pkts[i], vxlan_tcp_tbl,
> >  						current_time) < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV4_VXLAN_UDP4_PKT(pkts[i]->packet_type) &&
> >  				do_vxlan_udp_gro) {
> >  			if (gro_vxlan_udp4_reassemble(pkts[i], vxlan_udp_tbl,
> >  						current_time) < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
> >  				do_tcp4_gro) {
> >  			if (gro_tcp4_reassemble(pkts[i], tcp_tbl,
> >  						current_time) < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV4_UDP_PKT(pkts[i]->packet_type) &&
> >  				do_udp4_gro) {
> >  			if (gro_udp4_reassemble(pkts[i], udp_tbl,
> >  						current_time) < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else if (IS_IPV6_TCP_PKT(pkts[i]->packet_type) &&
> >  				do_tcp6_gro) {
> >  			if (gro_tcp6_reassemble(pkts[i], tcp6_tbl,
> >  						current_time) < 0)
> > -				unprocess_pkts[unprocess_num++] = pkts[i];
> > +				pkts[unprocess_num++] = pkts[i];
> >  		} else
> > -			unprocess_pkts[unprocess_num++] = pkts[i];
> > -	}
> > -	if (unprocess_num > 0) {
> > -		memcpy(pkts, unprocess_pkts, sizeof(struct rte_mbuf *) *
> > -				unprocess_num);
> > +			pkts[unprocess_num++] = pkts[i];
> >
> 
> ack


^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [RFC 4/4] net/ice: remove use of VLAs
  2024-06-12  1:12   ` Ferruh Yigit
@ 2024-06-13 10:32     ` Konstantin Ananyev
  2024-06-14 15:31       ` Ferruh Yigit
  0 siblings, 1 reply; 15+ messages in thread
From: Konstantin Ananyev @ 2024-06-13 10:32 UTC (permalink / raw)
  To: Ferruh Yigit, Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin



> 
> On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> > From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> >
> > ../drivers/net/ice/ice_rxtx.c:1871:29: warning: variable length array used [-Wvla]
> >
> > Here VLA is used as a temp array for mbufs that will be used as a split
> > RX data buffers.
> > As at any given time only one thread can do RX from particular queue,
> > at rx_queue_setup() we can allocate extra space for that array, and then
> > safely use it at RX fast-path.
> >
> 
> Is there a reason to allocate extra space in sw_ring and use some part
> of it for the split buffer, instead of allocating a new buffer for it?

Fewer allocations - fewer points of failure, fewer checks to do.
Again, having it close to sw_ring is probably a good thing too,
possibly less pressure on the MMU, etc. - even though I don't think it is really critical.
But yes, it could be a separate rte_zmalloc(), even though
I don't see a good reason for that.
 
> 
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> > ---
> >  drivers/net/ice/ice_rxtx.c | 18 ++++++++++++------
> >  drivers/net/ice/ice_rxtx.h |  2 ++
> >  2 files changed, 14 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> > index 95a2db3432..6395a3b50a 100644
> > --- a/drivers/net/ice/ice_rxtx.c
> > +++ b/drivers/net/ice/ice_rxtx.c
> > @@ -1171,7 +1171,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
> >  	struct ice_vsi *vsi = pf->main_vsi;
> >  	struct ice_rx_queue *rxq;
> >  	const struct rte_memzone *rz;
> > -	uint32_t ring_size;
> > +	uint32_t ring_size, tlen;
> >  	uint16_t len;
> >  	int use_def_burst_func = 1;
> >  	uint64_t offloads;
> > @@ -1279,9 +1279,14 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
> >  	/* always reserve more for bulk alloc */
> >  	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
> >
> > +	/* allocate extra entries for SW split buffer */
> > +	tlen = ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) ?
> > +		rxq->rx_free_thresh : 0;
> > +	tlen += len;
> > +
> >  	/* Allocate the software ring. */
> >  	rxq->sw_ring = rte_zmalloc_socket(NULL,
> > -					  sizeof(struct ice_rx_entry) * len,
> > +					  sizeof(struct ice_rx_entry) * tlen,
> >  					  RTE_CACHE_LINE_SIZE,
> >  					  socket_id);
> >  	if (!rxq->sw_ring) {
> > @@ -1290,6 +1295,8 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
> >  		return -ENOMEM;
> >  	}
> >
> > +	rxq->sw_split_buf = (tlen == len) ? NULL : rxq->sw_ring + len;
> > +
> >  	ice_reset_rx_queue(rxq);
> >  	rxq->q_set = true;
> >  	dev->data->rx_queues[queue_idx] = rxq;
> > @@ -1868,7 +1875,6 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
> >  	uint64_t dma_addr;
> >  	int diag, diag_pay;
> >  	uint64_t pay_addr;
> > -	struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh];
> >
> >  	/* Allocate buffers in bulk */
> >  	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
> > @@ -1883,7 +1889,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
> >
> >  	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> >  		diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp,
> > -				(void *)mbufs_pay, rxq->rx_free_thresh);
> > +				(void *)rxq->sw_split_buf, rxq->rx_free_thresh);
> >
> 
> Are we allowed to call 'rte_mempool_get_bulk()' with NULL object_table,

Nope.

> as 'rxq->sw_split_buf' can be NULL?
> Perhaps we can allocate 'rxq->sw_split_buf' even when buffer split
> offload is not enabled?

No, if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0, then
rxq->sw_split_buf should not be NULL.
If it is, then there is a bug in my changes, though right now I don't see
how that can happen: in ice_rx_queue_setup() we always allocate space
for rxq->sw_split_buf when RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is set.





^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [RFC 0/4] remove use of VLA
  2024-06-12  1:14 ` [RFC 0/4] remove use of VLA Ferruh Yigit
@ 2024-06-13 10:43   ` Konstantin Ananyev
  0 siblings, 0 replies; 15+ messages in thread
From: Konstantin Ananyev @ 2024-06-13 10:43 UTC (permalink / raw)
  To: Ferruh Yigit, Konstantin Ananyev, dev, bruce.richardson, Mcnamara, John
  Cc: hujiayu.hu, roretzla, anatoly.burakov, vladimir.medvedkin


Hi Ferruh,

> On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
> > From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> >
> > To continue further work on VLA replacement for these series:
> > https://patches.dpdk.org/project/dpdk/list/?series=31887
> > extra few patches to remove VLA for gro lib and ixgbe and ice PMDs.
> > DISCLAIMER: I don't have ice and ixgbe HW available on my box, so
> > didn't make a proper testing for patches #3,4.
> >
> > Konstantin Ananyev (4):
> >   gro: fix overwrite unprocessed packets
> >   gro: remove use of VLAs
> >   net/ixgbe: remove use of VLAs
> >   net/ice: remove use of VLAs
> >
> 
> Hi Konstantin,
> 
> I guess this is sent as an RFC because the set is not tested?

Two reasons:
 - I wasn't able to test all Intel PMD changes: 
   don't have a box with ixgbe HW.
   Tested ice changes just a bit (test-pmd rx_only mode).
   In fact, there don't seem to be that many use-cases for RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT.
 - I expect it will become part of greater VLA removal series:
   https://patchwork.dpdk.org/project/dpdk/list/?series=31887 

> If so, @Bruce, @John, can you please help testing this set?
> Btw, it fixes some GRO issues, so additional GRO tests would also help if possible.

That would be good, but right now I don't see any unit-tests for it.
Have to do some manual testing with scapy+testpmd.
I presume there are some GRO tests in DTS, but I have never looked inside it so far.

> 
> Thanks,
> ferruh


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 2/4] gro: remove use of VLAs
  2024-06-13 10:20     ` Konstantin Ananyev
@ 2024-06-14 15:11       ` Ferruh Yigit
  0 siblings, 0 replies; 15+ messages in thread
From: Ferruh Yigit @ 2024-06-14 15:11 UTC (permalink / raw)
  To: Konstantin Ananyev, Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin, kumaraparamesh92

On 6/13/2024 11:20 AM, Konstantin Ananyev wrote:
> 
> 
>>
>> On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
>>> From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
>>>
>>> ../lib/gro/rte_gro.c:182:34: warning: variable length array used [-Wvla]
>>> ../lib/gro/rte_gro.c:363:34: warning: variable length array used [-Wvla]
>>>
>>> In both cases the pattern is the same: we use unprocess_pkts[nb_pkts] to
>>> collect un-used by GRO packets, and then copy them to the start of
>>> input/output pkts[] array.
>>> In both cases, we can safely copy pkts[i] into already
>>> processed entry at the same array, i.e. into pkts[unprocess_num].
>>> Such change eliminates need of temporary VLA: unprocess_pkts[nb_pkts].
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
>>> ---
>>>  lib/gro/rte_gro.c | 40 ++++++++++++++--------------------------
>>>  1 file changed, 14 insertions(+), 26 deletions(-)
>>>
>>> diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
>>> index db86117609..6d5aadf32a 100644
>>> --- a/lib/gro/rte_gro.c
>>> +++ b/lib/gro/rte_gro.c
>>> @@ -179,7 +179,6 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>>>  	struct gro_vxlan_udp4_item vxlan_udp_items[RTE_GRO_MAX_BURST_ITEM_NUM]
>>>  			= {{{0}} };
>>>
>>> -	struct rte_mbuf *unprocess_pkts[nb_pkts];
>>>  	uint32_t item_num;
>>>  	int32_t ret;
>>>  	uint16_t i, unprocess_num = 0, nb_after_gro = nb_pkts;
>>> @@ -275,7 +274,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>>>  				/* Merge successfully */
>>>  				nb_after_gro--;
>>>  			else if (ret < 0)
>>> -				unprocess_pkts[unprocess_num++] = pkts[i];
>>> +				pkts[unprocess_num++] = pkts[i];
>>>  		} else if (IS_IPV4_VXLAN_UDP4_PKT(pkts[i]->packet_type) &&
>>>  				do_vxlan_udp_gro) {
>>>  			ret = gro_vxlan_udp4_reassemble(pkts[i],
>>> @@ -284,7 +283,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>>>  				/* Merge successfully */
>>>  				nb_after_gro--;
>>>  			else if (ret < 0)
>>> -				unprocess_pkts[unprocess_num++] = pkts[i];
>>> +				pkts[unprocess_num++] = pkts[i];
>>>  		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
>>>  				do_tcp4_gro) {
>>>  			ret = gro_tcp4_reassemble(pkts[i], &tcp_tbl, 0);
>>> @@ -292,7 +291,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>>>  				/* merge successfully */
>>>  				nb_after_gro--;
>>>  			else if (ret < 0)
>>> -				unprocess_pkts[unprocess_num++] = pkts[i];
>>> +				pkts[unprocess_num++] = pkts[i];
>>>  		} else if (IS_IPV4_UDP_PKT(pkts[i]->packet_type) &&
>>>  				do_udp4_gro) {
>>>  			ret = gro_udp4_reassemble(pkts[i], &udp_tbl, 0);
>>> @@ -300,7 +299,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>>>  				/* merge successfully */
>>>  				nb_after_gro--;
>>>  			else if (ret < 0)
>>> -				unprocess_pkts[unprocess_num++] = pkts[i];
>>> +				pkts[unprocess_num++] = pkts[i];
>>>  		} else if (IS_IPV6_TCP_PKT(pkts[i]->packet_type) &&
>>>  				do_tcp6_gro) {
>>>  			ret = gro_tcp6_reassemble(pkts[i], &tcp6_tbl, 0);
>>> @@ -308,21 +307,15 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>>>  				/* merge successfully */
>>>  				nb_after_gro--;
>>>  			else if (ret < 0)
>>> -				unprocess_pkts[unprocess_num++] = pkts[i];
>>> +				pkts[unprocess_num++] = pkts[i];
>>>  		} else
>>> -			unprocess_pkts[unprocess_num++] = pkts[i];
>>> +			pkts[unprocess_num++] = pkts[i];
>>>  	}
>>>
>>>  	if ((nb_after_gro < nb_pkts)
>>>  		 || (unprocess_num < nb_pkts)) {
>>> -		i = 0;
>>> -		/* Copy unprocessed packets */
>>> -		if (unprocess_num > 0) {
>>> -			memcpy(&pkts[i], unprocess_pkts,
>>> -					sizeof(struct rte_mbuf *) *
>>> -					unprocess_num);
>>> -			i = unprocess_num;
>>> -		}
>>> +
>>> +		i = unprocess_num;
>>>
>>>  		/* Flush all packets from the tables */
>>>  		if (do_vxlan_tcp_gro) {
>>>
>>
>> ack to re-use 'pkts[]' buffer for unprocessed packets, that should work.
>>
>> But as a more general GRO question, the 'rte_gro_reassemble_burst()'
>> function above seems to return 'nb_after_gro', and as far as I can see
>> that amount of mbufs sits in the 'pkts[]'.
>> When packets are flushed from the tables, the flushed packets are
>> placed into 'pkts[]' but still 'nb_after_gro' is returned, so there is
>> no way for the application to know that more than 'nb_after_gro' mbufs
>> are available in the 'pkts[]'. Shouldn't the return value be increased
>> per flushed packet?
>>
>> Ahh, I can see it was the case before, but it is updated (perhaps
>> broken) in commit:
>> 74080d7dcf31 ("gro: support IPv6 for TCP")
> 
> Actually, my first thought was that we should return 'i' here,
> but then, looking at the code more carefully, I realized that it is correct:
> nb_after_gro would contain the valid number of packets
> (at least I wasn't able to find a case when it wouldn't).
> Though it wasn't very obvious to me at first, so an extra
> comment wouldn't hurt here.
> 

In the first half of the function, 'nb_after_gro' is the number of packets
not reassembled and passed back to the user via the 'pkts' buffer.

In the second half, timed-out packets are returned to the user (flushed),
as they were not reassembled, and these packets are added to the 'pkts'
array for the user, but 'nb_after_gro' is not increased. So how can the
user know about them?

Basically, I think we should return 'i'. What am I missing? Can you
please give more detail?


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 4/4] net/ice: remove use of VLAs
  2024-06-13 10:32     ` Konstantin Ananyev
@ 2024-06-14 15:31       ` Ferruh Yigit
  0 siblings, 0 replies; 15+ messages in thread
From: Ferruh Yigit @ 2024-06-14 15:31 UTC (permalink / raw)
  To: Konstantin Ananyev, Konstantin Ananyev, dev
  Cc: hujiayu.hu, roretzla, bruce.richardson, anatoly.burakov,
	vladimir.medvedkin

On 6/13/2024 11:32 AM, Konstantin Ananyev wrote:
> 
> 
>>
>> On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
>>> From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
>>>
>>> ../drivers/net/ice/ice_rxtx.c:1871:29: warning: variable length array used [-Wvla]
>>>
>>> Here VLA is used as a temp array for mbufs that will be used as a split
>>> RX data buffers.
>>> As at any given time only one thread can do RX from particular queue,
>>> at rx_queue_setup() we can allocate extra space for that array, and then
>>> safely use it at RX fast-path.
>>>
>>
>> Is there a reason to allocate extra space in sw_ring and use some part
>> of it for the split buffer, instead of allocating a new buffer for it?
> 
> Fewer allocations - fewer points of failure, fewer checks to do.
> Again, having it close to sw_ring is probably a good thing too,
> possibly less pressure on the MMU, etc. - even though I don't think it is really critical.
> But yes, it could be a separate rte_zmalloc(), even though
> I don't see a good reason for that.
>  

ack

>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
>>> ---
>>>  drivers/net/ice/ice_rxtx.c | 18 ++++++++++++------
>>>  drivers/net/ice/ice_rxtx.h |  2 ++
>>>  2 files changed, 14 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
>>> index 95a2db3432..6395a3b50a 100644
>>> --- a/drivers/net/ice/ice_rxtx.c
>>> +++ b/drivers/net/ice/ice_rxtx.c
>>> @@ -1171,7 +1171,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>>>  	struct ice_vsi *vsi = pf->main_vsi;
>>>  	struct ice_rx_queue *rxq;
>>>  	const struct rte_memzone *rz;
>>> -	uint32_t ring_size;
>>> +	uint32_t ring_size, tlen;
>>>  	uint16_t len;
>>>  	int use_def_burst_func = 1;
>>>  	uint64_t offloads;
>>> @@ -1279,9 +1279,14 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>>>  	/* always reserve more for bulk alloc */
>>>  	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
>>>
>>> +	/* allocate extra entries for SW split buffer */
>>> +	tlen = ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) ?
>>> +		rxq->rx_free_thresh : 0;
>>> +	tlen += len;
>>> +
>>>  	/* Allocate the software ring. */
>>>  	rxq->sw_ring = rte_zmalloc_socket(NULL,
>>> -					  sizeof(struct ice_rx_entry) * len,
>>> +					  sizeof(struct ice_rx_entry) * tlen,
>>>  					  RTE_CACHE_LINE_SIZE,
>>>  					  socket_id);
>>>  	if (!rxq->sw_ring) {
>>> @@ -1290,6 +1295,8 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>>>  		return -ENOMEM;
>>>  	}
>>>
>>> +	rxq->sw_split_buf = (tlen == len) ? NULL : rxq->sw_ring + len;
>>> +
>>>  	ice_reset_rx_queue(rxq);
>>>  	rxq->q_set = true;
>>>  	dev->data->rx_queues[queue_idx] = rxq;
>>> @@ -1868,7 +1875,6 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
>>>  	uint64_t dma_addr;
>>>  	int diag, diag_pay;
>>>  	uint64_t pay_addr;
>>> -	struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh];
>>>
>>>  	/* Allocate buffers in bulk */
>>>  	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
>>> @@ -1883,7 +1889,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
>>>
>>>  	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
>>>  		diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp,
>>> -				(void *)mbufs_pay, rxq->rx_free_thresh);
>>> +				(void *)rxq->sw_split_buf, rxq->rx_free_thresh);
>>>
>>
>> Are we allowed to call 'rte_mempool_get_bulk()' with NULL object_table,
> 
> Nope.
> 
>> as 'rxq->sw_split_buf' can be NULL?
>> Perhaps we can allocate 'rxq->sw_split_buf' even when buffer split
>> offload is not enabled?
> 
> No, if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0, then
> rxq->sw_split_buf should not be NULL.
> If it is, then there is a bug in my changes, though right now I don't see
> how that can happen: in ice_rx_queue_setup() we always allocate space
> for rxq->sw_split_buf when RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is set.
> 

Ack, I see logical condition now.
This logic assumes 'rxq->rx_free_thresh' is not '0', although
practically this is true, technically can configure it 0 and crash the
application.


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2024-06-14 15:31 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-23 16:26 [RFC 0/4] remove use of VLA Konstantin Ananyev
2024-05-23 16:26 ` [RFC 1/4] gro: fix overwrite unprocessed packets Konstantin Ananyev
2024-06-12  0:48   ` Ferruh Yigit
2024-05-23 16:26 ` [RFC 2/4] gro: remove use of VLAs Konstantin Ananyev
2024-06-12  0:48   ` Ferruh Yigit
2024-06-13 10:20     ` Konstantin Ananyev
2024-06-14 15:11       ` Ferruh Yigit
2024-05-23 16:26 ` [RFC 3/4] net/ixgbe: " Konstantin Ananyev
2024-06-12  1:00   ` Ferruh Yigit
2024-05-23 16:26 ` [RFC 4/4] net/ice: " Konstantin Ananyev
2024-06-12  1:12   ` Ferruh Yigit
2024-06-13 10:32     ` Konstantin Ananyev
2024-06-14 15:31       ` Ferruh Yigit
2024-06-12  1:14 ` [RFC 0/4] remove use of VLA Ferruh Yigit
2024-06-13 10:43   ` Konstantin Ananyev
