* [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size
@ 2022-11-17 14:39 Gregory Etelson
2022-11-17 14:39 ` [PATCH 2/2] doc: update MLX5 LRO limitation Gregory Etelson
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Gregory Etelson @ 2022-11-17 14:39 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
The PMD analyzes the maximal LRO size of each Rx queue and selects one
that fits all queues, to configure the TIR LRO attribute.
The TIR LRO attribute is the number of 256-byte chunks that matches the
selected maximal LRO size.
The PMD mistakenly used `priv->max_lro_msg_size` both for the selected
maximal LRO size in bytes and for the number of TIR chunks.
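For illustration only (not code from the patch), the byte-versus-chunk distinction above can be sketched as follows; the names `LRO_SEG_CHUNK_SIZE` and `lro_bytes_to_tir_chunks` are hypothetical stand-ins for the driver's `MLX5_LRO_SEG_CHUNK_SIZE` and TIR attribute setup:

```c
#include <stdint.h>

/* Hypothetical stand-in for MLX5_LRO_SEG_CHUNK_SIZE (256 bytes). */
#define LRO_SEG_CHUNK_SIZE 256u

/*
 * After the fix, the private field keeps the size in bytes (widened
 * to uint32_t), and the division into 256-byte chunks happens only
 * where the hardware expects chunks: when the TIR attribute is set.
 */
static uint32_t
lro_bytes_to_tir_chunks(uint32_t max_lro_msg_size_bytes)
{
	return max_lro_msg_size_bytes / LRO_SEG_CHUNK_SIZE;
}
```

Keeping a single unit (bytes) in the shared field avoids mixing the stored value and the derived hardware encoding.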
Fixes: 9f1035b5f71c ("net/mlx5: fix port initialization with small LRO")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_devx.c | 3 ++-
drivers/net/mlx5/mlx5_rxq.c | 4 +---
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 02bee5808d..31982002ee 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1711,7 +1711,7 @@ struct mlx5_priv {
uint32_t refcnt; /**< Reference counter. */
/**< Verbs modify header action object. */
uint8_t ft_type; /**< Flow table type, Rx or Tx. */
- uint8_t max_lro_msg_size;
+ uint32_t max_lro_msg_size;
uint32_t link_speed_capa; /* Link speed capabilities. */
struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */
struct mlx5_stats_ctrl stats_ctrl; /* Stats control. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c1305836cf..02deaac612 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -870,7 +870,8 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
if (lro) {
MLX5_ASSERT(priv->sh->config.lro_allowed);
tir_attr->lro_timeout_period_usecs = priv->config.lro_timeout;
- tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
+ tir_attr->lro_max_msg_sz =
+ priv->max_lro_msg_size / MLX5_LRO_SEG_CHUNK_SIZE;
tir_attr->lro_enable_mask =
MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 724cd6c7e6..81aa3f074a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1533,7 +1533,6 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
MLX5_MAX_TCP_HDR_OFFSET)
max_lro_size -= MLX5_MAX_TCP_HDR_OFFSET;
max_lro_size = RTE_MIN(max_lro_size, MLX5_MAX_LRO_SIZE);
- max_lro_size /= MLX5_LRO_SEG_CHUNK_SIZE;
if (priv->max_lro_msg_size)
priv->max_lro_msg_size =
RTE_MIN((uint32_t)priv->max_lro_msg_size, max_lro_size);
@@ -1541,8 +1540,7 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
priv->max_lro_msg_size = max_lro_size;
DRV_LOG(DEBUG,
"port %u Rx Queue %u max LRO message size adjusted to %u bytes",
- dev->data->port_id, idx,
- priv->max_lro_msg_size * MLX5_LRO_SEG_CHUNK_SIZE);
+ dev->data->port_id, idx, priv->max_lro_msg_size);
}
/**
--
2.34.1
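As a reading aid (not part of the patch), the per-queue clamping that `mlx5_max_lro_msg_size_adjust()` performs after this change can be sketched like this; the standalone helper name and form are assumptions:

```c
#include <stdint.h>

/*
 * Sketch of the port-wide selection: each Rx queue proposes its own
 * maximal LRO size in bytes, and the port keeps the smallest value
 * seen so far, so the programmed TIR value fits every queue.
 * 0 means "no queue configured yet".
 */
static uint32_t
port_max_lro_adjust(uint32_t port_max, uint32_t queue_max)
{
	if (port_max == 0)
		return queue_max;
	return port_max < queue_max ? port_max : queue_max;
}
```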
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH 2/2] doc: update MLX5 LRO limitation
2022-11-17 14:39 [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size Gregory Etelson
@ 2022-11-17 14:39 ` Gregory Etelson
2022-11-21 15:23 ` Thomas Monjalon
2022-11-20 15:32 ` [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size Raslan Darawsheh
2022-11-22 5:13 ` [PATCH v2 1/2] net/mlx5: fix port private max LRO msg size Gregory Etelson
2 siblings, 1 reply; 8+ messages in thread
From: Gregory Etelson @ 2022-11-17 14:39 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, stable, Viacheslav Ovsiienko
The maximal LRO message size must be a multiple of 256.
Otherwise, the TCP payload may not fit into a single WQE.
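To illustrate the documented limitation (hypothetical helper, not code from the patch): rounding down to a multiple of 256 can be done with a bit mask, since 256 is a power of two:

```c
#include <stdint.h>

/* 256 mirrors the 256-byte LRO chunk granularity described above. */
#define LRO_CHUNK 256u

/* Round max_lro_pkt_size down to a multiple of 256 by clearing the
 * low 8 bits; valid only because LRO_CHUNK is a power of two. */
static uint32_t
round_down_lro_size(uint32_t max_lro_pkt_size)
{
	return max_lro_pkt_size & ~(LRO_CHUNK - 1u);
}
```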
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
doc/guides/nics/mlx5.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f0db21dde..98e0b24be4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -278,6 +278,9 @@ Limitations
- No Tx metadata go to the E-Switch steering domain for the Flow group 0.
The flows within group 0 and set metadata action are rejected by hardware.
+- The driver rounds down the ``max_lro_pkt_size`` value in the port
+ configuration to a multiple of 256 due to HW limitation.
+
.. note::
MAC addresses not already present in the bridge table of the associated
--
2.34.1
* RE: [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size
2022-11-17 14:39 [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size Gregory Etelson
2022-11-17 14:39 ` [PATCH 2/2] doc: update MLX5 LRO limitation Gregory Etelson
@ 2022-11-20 15:32 ` Raslan Darawsheh
2022-11-22 5:13 ` [PATCH v2 1/2] net/mlx5: fix port private max LRO msg size Gregory Etelson
2 siblings, 0 replies; 8+ messages in thread
From: Raslan Darawsheh @ 2022-11-20 15:32 UTC (permalink / raw)
To: Gregory Etelson, dev; +Cc: Matan Azrad, Slava Ovsiienko
Hi,
> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Thursday, November 17, 2022 4:39 PM
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>
> Subject: [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size
>
> The PMD analyzes each Rx queue maximal LRO size and selects one that
> fits all queues to configure TIR LRO attribute.
> TIR LRO attribute is number of 256 bytes chunks that match the
> selected maximal LRO size.
>
> PMD used `priv->max_lro_msg_size` for selected maximal LRO size and
> number of TIR chunks.
>
> Fixes: 9f1035b5f71c ("net/mlx5: fix port initialization with small LRO")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
Series applied to next-net-mlx,
With small fixes to the title.
Kindest regards,
Raslan Darawsheh
* Re: [PATCH 2/2] doc: update MLX5 LRO limitation
2022-11-17 14:39 ` [PATCH 2/2] doc: update MLX5 LRO limitation Gregory Etelson
@ 2022-11-21 15:23 ` Thomas Monjalon
2022-11-22 5:17 ` Gregory Etelson
0 siblings, 1 reply; 8+ messages in thread
From: Thomas Monjalon @ 2022-11-21 15:23 UTC (permalink / raw)
To: getelson; +Cc: dev, stable, matan, rasland, Viacheslav Ovsiienko
17/11/2022 15:39, Gregory Etelson:
> Maximal LRO message size must be multiply of 256.
> Otherwise, TCP payload may not fit into a single WQE.
>
> Cc: stable@dpdk.org
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
Why is the doc update not in the same patch as the code change?
> @@ -278,6 +278,9 @@ Limitations
> - No Tx metadata go to the E-Switch steering domain for the Flow group 0.
> The flows within group 0 and set metadata action are rejected by hardware.
>
> +- The driver rounds down the ``max_lro_pkt_size`` value in the port
> + configuration to a multiple of 256 due to HW limitation.
> +
> .. note::
>
> MAC addresses not already present in the bridge table of the associated
If you would like to read the doc, I guess you'd prefer to find this info
in the section dedicated to LRO, not in a random place.
* [PATCH v2 1/2] net/mlx5: fix port private max LRO msg size
2022-11-17 14:39 [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size Gregory Etelson
2022-11-17 14:39 ` [PATCH 2/2] doc: update MLX5 LRO limitation Gregory Etelson
2022-11-20 15:32 ` [PATCH 1/2] net/mlx5: fix port private max_lro_msg_size Raslan Darawsheh
@ 2022-11-22 5:13 ` Gregory Etelson
2022-11-22 5:13 ` [PATCH v2 2/2] doc: update MLX5 LRO limitation Gregory Etelson
2 siblings, 1 reply; 8+ messages in thread
From: Gregory Etelson @ 2022-11-22 5:13 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
The PMD analyzes the maximal LRO size of each Rx queue and selects one
that fits all queues, to configure the TIR LRO attribute.
The TIR LRO attribute is the number of 256-byte chunks that matches the
selected maximal LRO size.
The PMD mistakenly used `priv->max_lro_msg_size` both for the selected
maximal LRO size in bytes and for the number of TIR chunks.
Fixes: b9f1f4c239 ("net/mlx5: fix port initialization with small LRO")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_devx.c | 3 ++-
drivers/net/mlx5/mlx5_rxq.c | 4 +---
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 02bee5808d..31982002ee 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1711,7 +1711,7 @@ struct mlx5_priv {
uint32_t refcnt; /**< Reference counter. */
/**< Verbs modify header action object. */
uint8_t ft_type; /**< Flow table type, Rx or Tx. */
- uint8_t max_lro_msg_size;
+ uint32_t max_lro_msg_size;
uint32_t link_speed_capa; /* Link speed capabilities. */
struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */
struct mlx5_stats_ctrl stats_ctrl; /* Stats control. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c1305836cf..02deaac612 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -870,7 +870,8 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
if (lro) {
MLX5_ASSERT(priv->sh->config.lro_allowed);
tir_attr->lro_timeout_period_usecs = priv->config.lro_timeout;
- tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
+ tir_attr->lro_max_msg_sz =
+ priv->max_lro_msg_size / MLX5_LRO_SEG_CHUNK_SIZE;
tir_attr->lro_enable_mask =
MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 724cd6c7e6..81aa3f074a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1533,7 +1533,6 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
MLX5_MAX_TCP_HDR_OFFSET)
max_lro_size -= MLX5_MAX_TCP_HDR_OFFSET;
max_lro_size = RTE_MIN(max_lro_size, MLX5_MAX_LRO_SIZE);
- max_lro_size /= MLX5_LRO_SEG_CHUNK_SIZE;
if (priv->max_lro_msg_size)
priv->max_lro_msg_size =
RTE_MIN((uint32_t)priv->max_lro_msg_size, max_lro_size);
@@ -1541,8 +1540,7 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
priv->max_lro_msg_size = max_lro_size;
DRV_LOG(DEBUG,
"port %u Rx Queue %u max LRO message size adjusted to %u bytes",
- dev->data->port_id, idx,
- priv->max_lro_msg_size * MLX5_LRO_SEG_CHUNK_SIZE);
+ dev->data->port_id, idx, priv->max_lro_msg_size);
}
/**
--
2.34.1
* [PATCH v2 2/2] doc: update MLX5 LRO limitation
2022-11-22 5:13 ` [PATCH v2 1/2] net/mlx5: fix port private max LRO msg size Gregory Etelson
@ 2022-11-22 5:13 ` Gregory Etelson
0 siblings, 0 replies; 8+ messages in thread
From: Gregory Etelson @ 2022-11-22 5:13 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, stable, Viacheslav Ovsiienko
The maximal LRO message size must be a multiple of 256.
Otherwise, the TCP payload may not fit into a single WQE.
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
v2: move the doc addition to the LRO section.
---
doc/guides/nics/mlx5.rst | 2 ++
1 file changed, 2 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f0db21dde..e77d79774b 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -411,6 +411,8 @@ Limitations
- LRO packet aggregation is performed by HW only for packet size larger than
``lro_min_mss_size``. This value is reported on device start, when debug
mode is enabled.
+ - The driver rounds down the ``max_lro_pkt_size`` value in the port configuration
+ to a multiple of 256 due to HW limitation.
- CRC:
--
2.34.1
* RE: [PATCH 2/2] doc: update MLX5 LRO limitation
2022-11-21 15:23 ` Thomas Monjalon
@ 2022-11-22 5:17 ` Gregory Etelson
2022-11-22 8:25 ` Thomas Monjalon
0 siblings, 1 reply; 8+ messages in thread
From: Gregory Etelson @ 2022-11-22 5:17 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: dev, stable, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko
Hello Thomas,
> > .. note::
> >
> > MAC addresses not already present in the bridge table of the
> associated
>
> If you would like to read the doc, I guess you'd prefer to find this info
> in the section dedicated to LRO, not in a random place.
>
I moved the addition to the LRO section in v2.
Regards,
Gregory
* Re: [PATCH 2/2] doc: update MLX5 LRO limitation
2022-11-22 5:17 ` Gregory Etelson
@ 2022-11-22 8:25 ` Thomas Monjalon
0 siblings, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2022-11-22 8:25 UTC (permalink / raw)
To: Gregory Etelson
Cc: dev, stable, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko
22/11/2022 06:17, Gregory Etelson:
> Hello Thomas,
>
> > > .. note::
> > >
> > > MAC addresses not already present in the bridge table of the
> > associated
> >
> > If you would like to read the doc, I guess you'd prefer to find this info
> > in the section dedicated to LRO, not in a random place.
> >
> I moved the patch location in v2
I've fixed v1 and merged yesterday. No need for v2.