* [PATCH 1/2] net/mlx5: fix missing LRO validation in RxQ setup
From: Michael Baum @ 2022-04-25 9:30 UTC
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, matan, stable
The mlx5_rx_queue_setup() function receives the LRO offload request
from the user. When LRO is configured, the LRO flag in rxq_data is set
to 1.
This patch adds validation to make sure LRO is actually supported by
the device before the queue setup is accepted.
Fixes: 17ed314 ("net/mlx5: allow LRO per Rx queue")
Cc: matan@mellanox.com
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
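Note (illustration only, not part of the patch): a minimal sketch of how
an application would now observe the failure, assuming "port_id" is an
mlx5 port probed without LRO support (dev_cap.lro_supported == 0) and
"pool" is an initialized mbuf mempool. Depending on how capabilities are
reported, the ethdev layer may already reject the offload earlier; the
driver-level check makes the failure explicit in any case.

#include <rte_ethdev.h>
#include <rte_lcore.h>

static int
try_lro_rxq(uint16_t port_id, struct rte_mempool *pool)
{
	struct rte_eth_conf conf = {0};
	int ret;

	/* Request LRO as a port-wide Rx offload. */
	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;
	/*
	 * With this patch, mlx5 rejects the queue with -EINVAL and logs
	 * "LRO is configured but not supported" instead of creating a
	 * misconfigured queue.
	 */
	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      NULL, pool);
}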
drivers/net/mlx5/mlx5_rxq.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 981c296f29..a2d03f9f67 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -842,6 +842,14 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			    dev->data->dev_conf.rxmode.offloads;
 	bool is_extmem = false;
 
+	if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+	    !priv->sh->dev_cap.lro_supported) {
+		DRV_LOG(ERR,
+			"Port %u queue %u LRO is configured but not supported.",
+			dev->data->port_id, idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
 	if (mp) {
 		/*
 		 * The parameters should be checked on rte_eth_dev layer.
--
2.25.1
* [PATCH 2/2] net/mlx5: fix LRO configuration in drop RxQ
From: Michael Baum @ 2022-04-25 9:30 UTC
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, dkozlyuk, stable
The driver wrongly applied the LRO configuration to the TIR of the DevX
drop queue even when LRO is not supported, although the LRO
configuration is not relevant to the drop queue at all.
On devices without LRO support, this caused device initialization to
fail at the point where the drop queue is created.
The likely cause is that the DevX drop queue creation missed the fact
that LRO is enabled by default in the TIR creation function and, unlike
the other queue types that cannot use LRO, never disabled it.
Make LRO disabled by default instead, and enable it only when all the
queues of the TIR are configured with LRO.
Fixes: bc5bee028ebc ("net/mlx5: create drop queue using DevX")
Cc: dkozlyuk@nvidia.com
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
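Note (illustration only, not part of the patch): a condensed view of the
resulting control flow in mlx5_devx_tir_attr_set(), with the
hairpin/RSS details elided; queue_has_lro() is a hypothetical stand-in
for the per-queue rxq_data LRO check in the real loop. The essence of
the fix is flipping the flag from "default on, selectively cleared" to
"default off, set only on the regular-queue path", so the drop queue
never inherits an LRO configuration:

	bool lro = false;	/* default off: drop and external queues */

	if (ind_tbl->queues == NULL) {
		/* Drop queue: LRO is irrelevant, the flag stays false. */
	} else if (mlx5_is_external_rxq(dev, ind_tbl->queues[0])) {
		/* External RxQ supports neither hairpin nor LRO. */
		is_hairpin = false;
	} else {
		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
		lro = true;	/* tentatively on... */
		/* ...cleared unless every queue is configured with LRO. */
		for (i = 0; i < ind_tbl->queues_n; ++i)
			if (!queue_has_lro(dev, ind_tbl->queues[i]))
				lro = false;
	}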
drivers/net/mlx5/mlx5_devx.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 5ab092a259..03c0fac32f 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -715,7 +715,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	bool is_hairpin;
-	bool lro = true;
+	bool lro = false;
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
@@ -724,9 +724,9 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 	} else if (mlx5_is_external_rxq(dev, ind_tbl->queues[0])) {
 		/* External RxQ supports neither Hairpin nor LRO. */
 		is_hairpin = false;
-		lro = false;
 	} else {
 		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
+		lro = true;
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
 			struct mlx5_rxq_data *rxq_i =
@@ -776,6 +776,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 	if (dev->data->dev_conf.lpbk_mode)
 		tir_attr->self_lb_block = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
 	if (lro) {
+		MLX5_ASSERT(priv->sh->dev_cap.lro_supported);
 		tir_attr->lro_timeout_period_usecs = priv->config.lro_timeout;
 		tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
 		tir_attr->lro_enable_mask =
--
2.25.1