patches for DPDK stable branches
* [dpdk-stable] [PATCH 20.11 1/2] net/mlx5: fix TSO multi-segment inline length
From: Ali Alnubani @ 2021-07-12 11:41 UTC
  To: stable; +Cc: Viacheslav Ovsiienko

From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

[ upstream commit 52e1ece50aaf526b900120283284834b0a59e3ce ]

The inline data length for the TSO Ethernet segment should be
calculated from the TSO header length instead of the inline size
configured by the txq_inline_min devarg or reported by the NIC.
This requirement follows from the nature of the TSO offload: the
inlined header is duplicated into every output TCP packet.
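
As a rough sketch of the intended calculation (standalone helper
with illustrative names, not the PMD code):

    /*
     * Sketch only: how many more bytes must be copied inline
     * into the WQE.  For TSO the whole packet header ("inlen",
     * taken from the packet itself) has to be inlined, because
     * the NIC replicates it into every generated TCP segment;
     * otherwise only the configured inline-mode minimum applies.
     * "tlen" is the number of bytes already inlined.
     */
    static unsigned int
    tso_inline_to_copy(unsigned int tso, unsigned int inlen,
                       unsigned int inlen_mode, unsigned int tlen)
    {
        unsigned int want = tso ? inlen : inlen_mode;

        return tlen >= want ? 0 : want - tlen;
    }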

Fixes: cacb44a09962 ("net/mlx5: add no-inline Tx flag")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index c765314068..752357e342 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -2743,7 +2743,8 @@ mlx5_tx_eseg_mdat(struct mlx5_txq_data *__rte_restrict txq,
 		 * Copying may be interrupted inside the routine
 		 * if run into no inline hint flag.
 		 */
-		copy = tlen >= txq->inlen_mode ? 0 : (txq->inlen_mode - tlen);
+		copy = tso ? inlen : txq->inlen_mode;
+		copy = tlen >= copy ? 0 : (copy - tlen);
 		copy = mlx5_tx_mseg_memcpy(pdst, loc, part, copy, olx);
 		tlen += copy;
 		if (likely(inlen <= tlen) || copy < part) {
-- 
2.25.1



* [dpdk-stable] [PATCH 20.11 2/2] net/mlx5: fix multi-segment inline for the first segments
From: Ali Alnubani @ 2021-07-12 11:41 UTC
  To: stable; +Cc: Viacheslav Ovsiienko

From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

[ upstream commit ec837ad0fc7c6df4912cc2706b9cd54b225f4a34 ]

Before the 19.08 release, the Tx burst routines of the mlx5 PMD
inlined the data of the first short segments of multi-segment
packets. In the 19.08 release the mlx5 Tx datapath was refactored
and this behavior was broken, degrading performance.

For example, the T-Rex traffic generator may use small leading
segments to hold the packet headers, and a performance degradation
was noticed with such packets.

If the first segments of a multi-segment packet are short and the
overall length is below the inline threshold, they should be
inlined into the WQE to restore the performance.
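
A minimal sketch of the restored behavior (illustrative helper,
not the actual datapath code):

    #include <rte_mbuf.h>

    /*
     * Sketch only: walk the leading chain of short segments and
     * count how many bytes can be copied inline while staying
     * within the send-inline threshold; the remaining segments
     * are attached to the WQE by pointer.
     */
    static uint32_t
    leading_inline_len(const struct rte_mbuf *seg, uint32_t inlen_send)
    {
        uint32_t inl = 0;

        while (seg != NULL &&
               inl + rte_pktmbuf_data_len(seg) <= inlen_send) {
            inl += rte_pktmbuf_data_len(seg);
            seg = seg->next;
        }
        return inl;
    }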

Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 752357e342..d562afde1f 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -3457,6 +3457,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		unsigned int nxlen;
 		uintptr_t start;
 
+		mbuf = loc->mbuf;
+		nxlen = rte_pktmbuf_data_len(mbuf);
 		/*
 		 * Packet length exceeds the allowed inline
 		 * data length, check whether the minimal
@@ -3467,28 +3469,23 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 				    MLX5_ESEG_MIN_INLINE_SIZE);
 			MLX5_ASSERT(txq->inlen_mode <= txq->inlen_send);
 			inlen = txq->inlen_mode;
-		} else {
-			if (loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
-			    !vlan || txq->vlan_en) {
+		} else if (vlan && !txq->vlan_en) {
 				/*
-				 * VLAN insertion will be done inside by HW.
-				 * It is not utmost effective - VLAN flag is
-				 * checked twice, but we should proceed the
-				 * inlining length correctly and take into
-				 * account the VLAN header being inserted.
+				 * VLAN insertion is requested and hardware does not
+				 * support the offload, will do with software inline.
 				 */
-				return mlx5_tx_packet_multi_send
-							(txq, loc, olx);
-			}
 			inlen = MLX5_ESEG_MIN_INLINE_SIZE;
+		} else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
+			   nxlen > txq->inlen_send) {
+			return mlx5_tx_packet_multi_send(txq, loc, olx);
+		} else {
+			goto do_first;
 		}
 		/*
 		 * Now we know the minimal amount of data is requested
 		 * to inline. Check whether we should inline the buffers
 		 * from the chain beginning to eliminate some mbufs.
 		 */
-		mbuf = loc->mbuf;
-		nxlen = rte_pktmbuf_data_len(mbuf);
 		if (unlikely(nxlen <= txq->inlen_send)) {
 			/* We can inline first mbuf at least. */
 			if (nxlen < inlen) {
@@ -3510,6 +3507,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 					goto do_align;
 				}
 			}
+do_first:
 			do {
 				inlen = nxlen;
 				mbuf = NEXT(mbuf);
-- 
2.25.1

