DPDK patches and discussions
* [PATCH 1/1] net/mlx5: fix inline data length for multisegment packets
@ 2023-11-10  9:49 Viacheslav Ovsiienko
  2023-11-12 14:41 ` Raslan Darawsheh
  0 siblings, 1 reply; 2+ messages in thread
From: Viacheslav Ovsiienko @ 2023-11-10  9:49 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, suanmingm, stable

If the packet data length exceeds the configured limit for packets
to be inlined into the queue descriptor, the driver checks whether
the hardware requires minimal data inlining, or whether VLAN
insertion offload is requested but not supported by the hardware
(meaning the VLAN insertion must be done in software together with
data inlining). The driver then scans the mbuf chain to find the
minimal number of segments needed to satisfy the minimal inline
data amount.
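
A minimal sketch of the kind of chain scan described above, with a
hypothetical helper name and simplified logic (this is not the actual
mlx5_tx_packet_multi_inline() code path); 'vlan' is the VLAN header
length only when the tag has to be inserted in software, otherwise 0:

  #include <rte_mbuf.h>

  /*
   * Count how many segments must be inlined to reach 'inlen_min'
   * bytes of data. The software-inserted VLAN bytes count toward
   * the first segment only.
   */
  static unsigned int
  scan_min_inline_segs(const struct rte_mbuf *mbuf,
                       unsigned int inlen_min, unsigned int vlan)
  {
          unsigned int nseg = 0;
          unsigned int nxlen = 0;

          while (mbuf != NULL && nxlen < inlen_min) {
                  nxlen += rte_pktmbuf_data_len(mbuf) +
                           (nseg == 0 ? vlan : 0);
                  nseg++;
                  mbuf = mbuf->next;
          }
          return nseg;
  }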

The inline data length of the first segment was calculated without
accounting for the VLAN header being inserted, which could lead to a
segmentation fault while scanning the mbuf chain, for example:

  packet:
    mbuf0 pkt_len = 288, data_len = 156
    mbuf1 pkt_len = 132, data_len = 132

  txq->inlen_send = 290

The driver was trying to reach the inlen_send inline data length
without adding the missing VLAN header length and ran off the end of
the mbuf chain (there was simply not enough data in the packet to
meet the criterion).
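
For the example above, and assuming the 4-byte 802.1Q tag that
software VLAN insertion adds, the scan without the fix accumulates
only 156 + 132 = 288 bytes and never reaches inlen_send = 290, so it
follows mbuf->next past the last segment. With the VLAN length
counted in the first segment it accumulates (156 + 4) + 132 = 292,
which is >= 290, and the scan stops within the chain.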

Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

---
 drivers/net/mlx5/mlx5_tx.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 264cc192dc..e59ce37667 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -2046,7 +2046,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		uintptr_t start;
 
 		mbuf = loc->mbuf;
-		nxlen = rte_pktmbuf_data_len(mbuf);
+		nxlen = rte_pktmbuf_data_len(mbuf) + vlan;
 		/*
 		 * Packet length exceeds the allowed inline data length,
 		 * check whether the minimal inlining is required.
-- 
2.18.1



* RE: [PATCH 1/1] net/mlx5: fix inline data length for multisegment packets
  2023-11-10  9:49 [PATCH 1/1] net/mlx5: fix inline data length for multisegment packets Viacheslav Ovsiienko
@ 2023-11-12 14:41 ` Raslan Darawsheh
  0 siblings, 0 replies; 2+ messages in thread
From: Raslan Darawsheh @ 2023-11-12 14:41 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Suanming Mou, stable

Hi,

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Friday, November 10, 2023 11:50 AM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>;
> stable@dpdk.org
> Subject: [PATCH 1/1] net/mlx5: fix inline data length for multisegment packets
> 
> If the packet data length exceeds the configured limit for packets
> to be inlined into the queue descriptor, the driver checks whether
> the hardware requires minimal data inlining, or whether VLAN
> insertion offload is requested but not supported by the hardware
> (meaning the VLAN insertion must be done in software together with
> data inlining). The driver then scans the mbuf chain to find the
> minimal number of segments needed to satisfy the minimal inline
> data amount.
> 
> The inline data length of the first segment was calculated without
> accounting for the VLAN header being inserted, which could lead to a
> segmentation fault while scanning the mbuf chain, for example:
> 
>   packet:
>     mbuf0 pkt_len = 288, data_len = 156
>     mbuf1 pkt_len = 132, data_len = 132
> 
>   txq->inlen_send = 290
> 
> The driver was trying to reach the inlen_send inline data length
> without adding the missing VLAN header length and ran off the end of
> the mbuf chain (there was simply not enough data in the packet to
> meet the criterion).
> 
> Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
> Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first
> segments")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Suanming Mou <suanmingm@nvidia.com>

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

