patches for DPDK stable branches
* [PATCH 2/2] doc: update MLX5 LRO limitation
       [not found] <20221117143901.27957-1-getelson@nvidia.com>
@ 2022-11-17 14:39 ` Gregory Etelson
  2022-11-21 15:23   ` Thomas Monjalon
       [not found] ` <20221122051308.194-1-getelson@nvidia.com>
  1 sibling, 1 reply; 5+ messages in thread
From: Gregory Etelson @ 2022-11-17 14:39 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, rasland, stable, Viacheslav Ovsiienko

The maximal LRO message size must be a multiple of 256.
Otherwise, the TCP payload may not fit into a single WQE.

Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/nics/mlx5.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f0db21dde..98e0b24be4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -278,6 +278,9 @@ Limitations
 - No Tx metadata go to the E-Switch steering domain for the Flow group 0.
   The flows within group 0 and set metadata action are rejected by hardware.
 
+- The driver rounds down the ``max_lro_pkt_size`` value in the port
+  configuration to a multiple of 256 due to a hardware limitation.
+
 .. note::
 
    MAC addresses not already present in the bridge table of the associated
-- 
2.34.1


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 2/2] doc: update MLX5 LRO limitation
  2022-11-17 14:39 ` [PATCH 2/2] doc: update MLX5 LRO limitation Gregory Etelson
@ 2022-11-21 15:23   ` Thomas Monjalon
  2022-11-22  5:17     ` Gregory Etelson
  0 siblings, 1 reply; 5+ messages in thread
From: Thomas Monjalon @ 2022-11-21 15:23 UTC (permalink / raw)
  To: getelson; +Cc: dev, stable, matan, rasland, Viacheslav Ovsiienko

17/11/2022 15:39, Gregory Etelson:
> The maximal LRO message size must be a multiple of 256.
> Otherwise, the TCP payload may not fit into a single WQE.
> 
> Cc: stable@dpdk.org
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>

Why is the doc update not in the same patch as the code change?

> @@ -278,6 +278,9 @@ Limitations
>  - No Tx metadata go to the E-Switch steering domain for the Flow group 0.
>    The flows within group 0 and set metadata action are rejected by hardware.
>  
> +- The driver rounds down the ``max_lro_pkt_size`` value in the port
> +  configuration to a multiple of 256 due to a hardware limitation.
> +
>  .. note::
>  
>     MAC addresses not already present in the bridge table of the associated

If you would like to read the doc, I guess you'd prefer to find this info
in the section dedicated to LRO, not in a random place.

* [PATCH v2 2/2] doc: update MLX5 LRO limitation
       [not found] ` <20221122051308.194-1-getelson@nvidia.com>
@ 2022-11-22  5:13   ` Gregory Etelson
  0 siblings, 0 replies; 5+ messages in thread
From: Gregory Etelson @ 2022-11-22  5:13 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, rasland, stable, Viacheslav Ovsiienko

The maximal LRO message size must be a multiple of 256.
Otherwise, the TCP payload may not fit into a single WQE.

Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
v2: move the patch to the LRO section.
---
 doc/guides/nics/mlx5.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f0db21dde..e77d79774b 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -411,6 +411,8 @@ Limitations
   - LRO packet aggregation is performed by HW only for packet size larger than
     ``lro_min_mss_size``. This value is reported on device start, when debug
     mode is enabled.
+  - The driver rounds down the ``max_lro_pkt_size`` value in the port configuration
+    to a multiple of 256 due to a hardware limitation.
 
 - CRC:
 
-- 
2.34.1
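For context, an application requests LRO through the ethdev port configuration, which is where the rounded ``max_lro_pkt_size`` value comes from. A minimal sketch against the public ethdev API (the port id, queue counts, and 9000-byte size are placeholder values, and error handling is omitted):

```c
#include <rte_ethdev.h>

/* Sketch: enable TCP LRO on port 0 and cap aggregated packets
 * at 9000 bytes. Per the limitation above, the mlx5 PMD may
 * round 9000 down to 8960 (35 * 256). */
static int
configure_lro_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.offloads = RTE_ETH_RX_OFFLOAD_TCP_LRO,
			.max_lro_pkt_size = 9000,
		},
	};

	/* 1 Rx queue, 1 Tx queue; check the return value in real code. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}
```

This fragment assumes the DPDK 22.11-era ``RTE_ETH_``-prefixed offload names; older releases used ``DEV_RX_OFFLOAD_TCP_LRO``.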



* RE: [PATCH 2/2] doc: update MLX5 LRO limitation
  2022-11-21 15:23   ` Thomas Monjalon
@ 2022-11-22  5:17     ` Gregory Etelson
  2022-11-22  8:25       ` Thomas Monjalon
  0 siblings, 1 reply; 5+ messages in thread
From: Gregory Etelson @ 2022-11-22  5:17 UTC (permalink / raw)
  To: NBU-Contact-Thomas Monjalon (EXTERNAL)
  Cc: dev, stable, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko

Hello Thomas,

> >  .. note::
> >
> >     MAC addresses not already present in the bridge table of the
> associated
> 
> If you would like to read the doc, I guess you'd prefer to find this info
> in the section dedicated to LRO, not in a random place.
> 
I moved the patch location in v2

Regards,
Gregory


* Re: [PATCH 2/2] doc: update MLX5 LRO limitation
  2022-11-22  5:17     ` Gregory Etelson
@ 2022-11-22  8:25       ` Thomas Monjalon
  0 siblings, 0 replies; 5+ messages in thread
From: Thomas Monjalon @ 2022-11-22  8:25 UTC (permalink / raw)
  To: Gregory Etelson
  Cc: dev, stable, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko

22/11/2022 06:17, Gregory Etelson:
> Hello Thomas,
> 
> > >  .. note::
> > >
> > >     MAC addresses not already present in the bridge table of the
> > associated
> > 
> > If you would like to read the doc, I guess you'd prefer to find this info
> > in the section dedicated to LRO, not in a random place.
> > 
> I moved the patch location in v2

I've fixed v1 and merged yesterday. No need for v2.
