[PATCH] net/mlx5: fix queue counter error check
From: Dariusz Sosnowski
Date: 2025-02-25 9:08 UTC
To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
Matan Azrad, Shani Peretz
Cc: dev, Raslan Darawsheh
Whenever queue counter allocation fails, the FW error syndrome
should be checked to determine whether the maximum number of queue
counters has been reached. Before this fix, the allocation call was
passed a NULL syndrome pointer and the limit was checked against the
unrelated return value of the hairpin counter capability query, so
the limit condition was never detected.
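
For illustration, the corrected pattern looks roughly as follows.
This is a minimal sketch only: mlx5_devx_cmd_queue_counter_alloc()
and MLX5_Q_COUNTERS_LIMIT_REACHED are taken from the patch below,
while the ctx and obj variables stand in for driver state and the
surrounding mlx5 headers are assumed to be in scope.

    int syndrome = 0;
    struct mlx5_devx_obj *obj;

    /* Pass a syndrome out-parameter instead of NULL so the FW
     * failure reason is available when no object is returned. */
    obj = mlx5_devx_cmd_queue_counter_alloc(ctx, &syndrome);
    if (obj == NULL) {
            /* Check the FW syndrome, not an unrelated value. */
            if (syndrome == MLX5_Q_COUNTERS_LIMIT_REACHED)
                    return -ENOSPC; /* counter limit reached */
            return -ENOMEM; /* any other allocation failure */
    }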
Fixes: f0c0731b6d40 ("net/mlx5: add counters for hairpin drop")
Cc: shperetz@nvidia.com
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 91fd9346a9..cbedb2606e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -3471,6 +3471,7 @@ mlx5_enable_port_level_hairpin_counter(struct rte_eth_dev *dev, uint64_t id __rt
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_rxq_priv *rxq;
+ int syndrome = 0;
unsigned int i;
int ret = mlx5_hairpin_queue_counter_supported(priv);
@@ -3487,15 +3488,15 @@ mlx5_enable_port_level_hairpin_counter(struct rte_eth_dev *dev, uint64_t id __rt
}
/* Alloc global hairpin queue counter. */
- priv->q_counter_hairpin = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx, NULL);
+ priv->q_counter_hairpin = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx, &syndrome);
if (!priv->q_counter_hairpin) {
- if (ret == MLX5_Q_COUNTERS_LIMIT_REACHED) {
- DRV_LOG(WARNING, "Maximum number of queue counters reached. "
- "Unable to create counter object for Port %d using DevX.",
- priv->dev_data->port_id);
+ if (syndrome == MLX5_Q_COUNTERS_LIMIT_REACHED) {
+ DRV_LOG(ERR, "Maximum number of queue counters reached. "
+ "Unable to create counter object for Port %d using DevX.",
+ priv->dev_data->port_id);
return -ENOSPC;
}
- DRV_LOG(WARNING, "Port %d global hairpin queue counter object cannot be created "
+ DRV_LOG(ERR, "Port %d global hairpin queue counter object cannot be created "
"by DevX.", priv->dev_data->port_id);
return -ENOMEM;
}
@@ -3536,6 +3537,7 @@ mlx5_enable_per_queue_hairpin_counter(struct rte_eth_dev *dev, uint64_t id)
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_rxq_priv *rxq;
struct mlx5_rxq_data *rxq_data;
+ int syndrome = 0;
int ret = mlx5_hairpin_queue_counter_supported(priv);
if (ret) {
@@ -3558,16 +3560,16 @@ mlx5_enable_per_queue_hairpin_counter(struct rte_eth_dev *dev, uint64_t id)
return 0;
/* Alloc hairpin queue counter. */
- rxq->q_counter = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx, NULL);
+ rxq->q_counter = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx, &syndrome);
if (rxq->q_counter == NULL) {
- if (ret == MLX5_Q_COUNTERS_LIMIT_REACHED) {
- DRV_LOG(WARNING, "Maximum number of queue counters reached. "
- "Unable to create counter object for Port %d, Queue %d "
- "using DevX. The counter from this queue will not increment.",
- priv->dev_data->port_id, rxq->idx);
+ if (syndrome == MLX5_Q_COUNTERS_LIMIT_REACHED) {
+ DRV_LOG(ERR, "Maximum number of queue counters reached. "
+ "Unable to create counter object for Port %d, Queue %d "
+ "using DevX. The counter from this queue will not increment.",
+ priv->dev_data->port_id, rxq->idx);
return -ENOSPC;
}
- DRV_LOG(WARNING, "Port %d queue %d counter object cannot be created "
+ DRV_LOG(ERR, "Port %d queue %d counter object cannot be created "
"by DevX. Counter from this queue will not increment.",
priv->dev_data->port_id, rxq->idx);
return -ENOMEM;
--
2.39.5