From: Shani Peretz <shperetz@nvidia.com>
To: <dev@dpdk.org>
Cc: <ferruh.yigit@amd.com>, <stephen@networkplumber.org>,
Shani Peretz <shperetz@nvidia.com>,
Dariusz Sosnowski <dsosnowski@nvidia.com>,
"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>,
Bing Zhao <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>,
Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>,
Anatoly Burakov <anatoly.burakov@intel.com>
Subject: [PATCH v2 3/3] net/mlx5: add port and queue level counters of hairpin drops
Date: Fri, 24 Jan 2025 13:43:35 +0200
Message-ID: <20250124114336.16610-4-shperetz@nvidia.com>
In-Reply-To: <20250124114336.16610-1-shperetz@nvidia.com>

Currently the mlx5 PMD exposes a single global hairpin counter
(dev_internal_queue_oob) that tracks packets dropped
in the device's hairpin Rx queues.
To offer more detailed insight, make the following
counters available:
- hairpin_out_of_buffer (port level):
  counts drops across all hairpin queues of the port
- hairpin_out_of_buffer_rxq<N> (queue level):
  counts drops of a specific hairpin queue
Both the port-level and per-queue counters can be enabled,
disabled, and queried at runtime, for example:
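(Illustrative output: the testpmd commands to toggle the counters are
added by patch 2/3 of this series, and the counter values below are
made up; only counters that were enabled increment.)

    testpmd> show port xstats 0
    ...
    hairpin_out_of_buffer_rxq0: 5
    hairpin_out_of_buffer_rxq1: 7

Note that the port-level and per-queue counters are mutually exclusive:
enabling one level while the other is active fails with -EINVAL.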
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 9 +-
drivers/common/mlx5/mlx5_devx_cmds.h | 2 +-
drivers/common/mlx5/mlx5_prm.h | 3 +
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 5 +
drivers/net/mlx5/linux/mlx5_os.c | 21 +-
drivers/net/mlx5/mlx5.c | 288 ++++++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 28 ++-
drivers/net/mlx5/mlx5_devx.c | 75 ++----
drivers/net/mlx5/mlx5_rx.h | 20 ++
drivers/net/mlx5/mlx5_rxq.c | 36 +++
drivers/net/mlx5/mlx5_stats.c | 292 +++++++++++++++++++++++-
drivers/net/mlx5/windows/mlx5_os.c | 3 +-
12 files changed, 691 insertions(+), 91 deletions(-)
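
Editor's note (placed after the cut line, so git-am drops it together
with the diffstat): below is a minimal sketch of how an application
could drive the new counters through the ethdev layer. Of the DPDK
calls used, rte_eth_xstats_get_names() is existing API; the per-ID
enable wrapper rte_eth_xstats_enable() is assumed to be the one
introduced by patch 1/3 of this series, and the helper name is
hypothetical.

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

/* Resolve an xstat ID by counter name, then enable it (sketch). */
static int
enable_xstat_by_name(uint16_t port_id, const char *name)
{
	struct rte_eth_xstat_name *names;
	int i, n, ret = -ENOENT;

	/* A NULL array asks for the number of available xstats. */
	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return n;
	names = malloc(sizeof(*names) * n);
	if (names == NULL)
		return -ENOMEM;
	if (rte_eth_xstats_get_names(port_id, names, n) != n) {
		free(names);
		return -EIO;
	}
	for (i = 0; i < n; i++) {
		if (strcmp(names[i].name, name) == 0) {
			/* Hypothetical wrapper added by patch 1/3. */
			ret = rte_eth_xstats_enable(port_id, i);
			break;
		}
	}
	free(names);
	return ret;
}

With this patch, enable_xstat_by_name(port, "hairpin_out_of_buffer")
would enable the port-level counter, and
enable_xstat_by_name(port, "hairpin_out_of_buffer_rxq0") the per-queue
one; the driver rejects enabling one level while the other is active.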
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index a75f011750..f7ba4a87ca 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1590,6 +1590,7 @@ mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
}
ret = mlx5_glue->devx_obj_modify(rq->obj, in, sizeof(in),
out, sizeof(out));
+
if (ret) {
DRV_LOG(ERR, "Failed to modify RQ using DevX");
rte_errno = errno;
@@ -3091,14 +3092,17 @@ mlx5_devx_cmd_wq_query(void *wq, uint32_t *counter_set_id)
*
* @param[in] ctx
* Context returned from mlx5 open_device() glue function.
+ * @param[out] syndrome
+ *   Syndrome of the DevX command response on failure, may be NULL.
*
* @return
* Pointer to counter object on success, a NULL value otherwise and
* rte_errno is set.
*/
struct mlx5_devx_obj *
-mlx5_devx_cmd_queue_counter_alloc(void *ctx)
+mlx5_devx_cmd_queue_counter_alloc(void *ctx, int *syndrome)
{
+ int status;
struct mlx5_devx_obj *dcs = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*dcs), 0,
SOCKET_ID_ANY);
uint32_t in[MLX5_ST_SZ_DW(alloc_q_counter_in)] = {0};
@@ -3113,6 +3117,9 @@ mlx5_devx_cmd_queue_counter_alloc(void *ctx)
sizeof(out));
if (!dcs->obj) {
DEVX_DRV_LOG(DEBUG, out, "create q counter set", NULL, 0);
+ status = MLX5_GET(alloc_q_counter_out, out, status);
+ if (status && syndrome)
+ *syndrome = MLX5_GET(alloc_q_counter_out, out, syndrome);
mlx5_free(dcs);
return NULL;
}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index f523bf8529..38548b4c9f 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -848,7 +848,7 @@ __rte_internal
int mlx5_devx_cmd_wq_query(void *wq, uint32_t *counter_set_id);
__rte_internal
-struct mlx5_devx_obj *mlx5_devx_cmd_queue_counter_alloc(void *ctx);
+struct mlx5_devx_obj *mlx5_devx_cmd_queue_counter_alloc(void *ctx, int *syndrome);
__rte_internal
int mlx5_devx_cmd_queue_counter_query(struct mlx5_devx_obj *dcs, int clear,
uint32_t *out_of_buffers);
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 0ecddaa36b..d0cb0131f6 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -275,6 +275,9 @@
#define MLX5_ERROR_CQE_SYNDROME_OFFSET 52
#endif
+/* Firmware syndrome returned when the maximum number of queue counters is reached. */
+#define MLX5_Q_COUNTERS_LIMIT_REACHED 0x587239
+
/* The completion mode offset in the WQE control segment line 2. */
#define MLX5_COMP_MODE_OFFSET 2
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 5d64984022..63277fc4ed 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -1425,6 +1425,11 @@ static const struct mlx5_counter_ctrl mlx5_counters_init[] = {
.dpdk_name = "hairpin_out_of_buffer",
.ctr_name = "hairpin_out_of_buffer",
.dev = 1,
+ .ctrl = {
+ .enable = mlx5_enable_port_level_hairpin_counter,
+ .disable = mlx5_disable_port_level_hairpin_counter,
+ .enabled = 0,
+ }
},
{
.dpdk_name = "dev_internal_queue_oob",
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 69a80b9ddc..0c63ab228e 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -973,7 +973,7 @@ mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
struct mlx5_priv *priv = dev->data->dev_private;
void *ctx = priv->sh->cdev->ctx;
- priv->q_counters = mlx5_devx_cmd_queue_counter_alloc(ctx);
+ priv->q_counters = mlx5_devx_cmd_queue_counter_alloc(ctx, NULL);
if (!priv->q_counters) {
struct ibv_cq *cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
struct ibv_wq *wq;
@@ -981,7 +981,6 @@ mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
DRV_LOG(DEBUG, "Port %d queue counter object cannot be created "
"by DevX - fall-back to use the kernel driver global "
"queue counter.", dev->data->port_id);
- priv->q_counters_allocation_failure = 1;
/* Create WQ by kernel and query its queue counter ID. */
if (cq) {
@@ -3052,23 +3051,11 @@ mlx5_os_read_dev_stat(struct mlx5_priv *priv, const char *ctr_name,
if (priv->sh) {
if (priv->q_counters != NULL &&
strcmp(ctr_name, "out_of_buffer") == 0) {
- if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
- DRV_LOG(WARNING, "DevX out_of_buffer counter is not supported in the secondary process");
- rte_errno = ENOTSUP;
- return 1;
- }
- return mlx5_devx_cmd_queue_counter_query
- (priv->q_counters, 0, (uint32_t *)stat);
+ return mlx5_read_queue_counter(priv->q_counters, ctr_name, stat);
}
- if (priv->q_counters_hairpin != NULL &&
+ if (priv->q_counter_hairpin != NULL &&
strcmp(ctr_name, "hairpin_out_of_buffer") == 0) {
- if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
- DRV_LOG(WARNING, "DevX out_of_buffer counter is not supported in the secondary process");
- rte_errno = ENOTSUP;
- return 1;
- }
- return mlx5_devx_cmd_queue_counter_query
- (priv->q_counters_hairpin, 0, (uint32_t *)stat);
+ return mlx5_read_queue_counter(priv->q_counter_hairpin, ctr_name, stat);
}
MKSTR(path, "%s/ports/%d/hw_counters/%s",
priv->sh->ibdev_path,
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6e4473e2f4..3a9e642fa6 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2373,6 +2373,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
priv->ptype_rss_groups = NULL;
}
#endif
+ mlx5_q_counters_destroy(dev);
if (priv->rxq_privs != NULL) {
/* XXX race condition if mlx5_rx_burst() is still running. */
rte_delay_us_sleep(1000);
@@ -2393,14 +2394,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_proc_priv_uninit(dev);
if (priv->drop_queue.hrxq)
mlx5_drop_action_destroy(dev);
- if (priv->q_counters) {
- mlx5_devx_cmd_destroy(priv->q_counters);
- priv->q_counters = NULL;
- }
- if (priv->q_counters_hairpin) {
- mlx5_devx_cmd_destroy(priv->q_counters_hairpin);
- priv->q_counters_hairpin = NULL;
- }
mlx5_mprq_free_mp(dev);
mlx5_os_free_shared_dr(priv);
#ifdef HAVE_MLX5_HWS_SUPPORT
@@ -2520,6 +2513,11 @@ const struct eth_dev_ops mlx5_dev_ops = {
.xstats_get = mlx5_xstats_get,
.xstats_reset = mlx5_xstats_reset,
.xstats_get_names = mlx5_xstats_get_names,
+
+ .xstats_enable = mlx5_xstats_enable,
+ .xstats_disable = mlx5_xstats_disable,
+ .xstats_query_state = mlx5_xstats_query_state,
+
.fw_version_get = mlx5_fw_version_get,
.dev_infos_get = mlx5_dev_infos_get,
.representor_info_get = mlx5_representor_info_get,
@@ -3393,6 +3391,280 @@ mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev)
return port_id;
}
+static int
+mlx5_hairpin_queue_counter_supported(struct mlx5_priv *priv)
+{
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+ DRV_LOG(WARNING,
+ "DevX counter is not supported in the secondary process");
+ return -ENOTSUP;
+ }
+
+ if (priv->obj_ops.rxq_obj_modify_counter_set_id == NULL) {
+ DRV_LOG(WARNING,
+ "DevX counter is not supported in this device");
+ return -ENOTSUP;
+ }
+ return 0;
+}
+
+/**
+ * Disables the port-level hairpin counter.
+ *
+ * This function iterates over each Rx queue, detaches it from the global
+ * counter if it is a hairpin queue, and then destroys the global counter
+ * object if it exists.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param id
+ * The counter ID to disable (not used in this implementation).
+ *
+ * @return
+ * 0 on success, error code otherwise.
+ */
+int
+mlx5_disable_port_level_hairpin_counter(struct rte_eth_dev *dev, uint64_t id __rte_unused)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ unsigned int num_rxqs = priv->rxqs_n;
+ unsigned int i;
+
+ if (priv->q_counter_hairpin == NULL)
+ return 0;
+
+ /* Detach each RXQ from the global hairpin counter */
+ for (i = 0; i < num_rxqs; ++i) {
+ struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+
+ if (rxq == NULL || rxq->ctrl->obj->rq == NULL || !rxq->ctrl->is_hairpin)
+ continue;
+
+ if (priv->obj_ops.rxq_obj_modify_counter_set_id(rxq, 0) != 0)
+ DRV_LOG(ERR, "Port %u failed to modify rq object %s",
+ priv->dev_data->port_id, strerror(rte_errno));
+ }
+
+ mlx5_devx_cmd_destroy(priv->q_counter_hairpin);
+ priv->q_counter_hairpin = NULL;
+
+ /* Reset oob stats. */
+ mlx5_reset_xstats_by_name(priv, "hairpin_out_of_buffer");
+ return 0;
+}
+
+/**
+ * Enables the port-level hairpin counter.
+ *
+ * This function allocates a global queue counter and attaches every
+ * hairpin Rx queue to it.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param id
+ *   The counter ID to enable (not used in this implementation).
+ *
+ * @return
+ * 0 on success, error code otherwise.
+ */
+int
+mlx5_enable_port_level_hairpin_counter(struct rte_eth_dev *dev, uint64_t id __rte_unused)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_priv *rxq;
+ unsigned int i;
+
+ int ret = mlx5_hairpin_queue_counter_supported(priv);
+ if (ret) {
+		DRV_LOG(DEBUG, "Hairpin out of buffer counter "
+			"is not supported in this configuration.");
+ return ret;
+ }
+
+	/* If the counter is enabled per queue, refuse to enable it per port. */
+ if (priv->num_of_hairpin_q_counter_enabled != 0) {
+ DRV_LOG(WARNING, "Hairpin out of buffer counter is enabled per queue.");
+ return -EINVAL;
+ }
+
+	/* Allocate the global hairpin queue counter; reuse ret for the FW syndrome. */
+	priv->q_counter_hairpin = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx, &ret);
+	if (!priv->q_counter_hairpin) {
+		if (ret == MLX5_Q_COUNTERS_LIMIT_REACHED) {
+ DRV_LOG(WARNING, "Maximum number of queue counters reached. "
+ "Unable to create counter object for Port %d using DevX.",
+ priv->dev_data->port_id);
+ return -ENOSPC;
+ }
+ DRV_LOG(WARNING, "Port %d global hairpin queue counter object cannot be created "
+ "by DevX.", priv->dev_data->port_id);
+ return -ENOMEM;
+ }
+
+	/* Attach each hairpin queue to the global counter. */
+ for (i = 0; (i != priv->rxqs_n); ++i) {
+ rxq = mlx5_rxq_get(dev, i);
+
+ if (rxq == NULL || rxq->ctrl->obj->rq == NULL || !rxq->ctrl->is_hairpin)
+ continue;
+
+ ret = priv->obj_ops.rxq_obj_modify_counter_set_id(rxq, priv->q_counter_hairpin->id);
+ if (ret) {
+			DRV_LOG(ERR, "Failed to modify RQ object for port %u: %s",
+				priv->dev_data->port_id, strerror(rte_errno));
+ return ret;
+ }
+ }
+
+ /* Reset oob stats. */
+ mlx5_reset_xstats_by_name(priv, "hairpin_out_of_buffer");
+ return 0;
+}
+
+/**
+ * Enables the hairpin queue counter for a specified Rx queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param id
+ *   Index of the Rx queue to enable the hairpin queue counter for.
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_enable_per_queue_hairpin_counter(struct rte_eth_dev *dev, uint64_t id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_priv *rxq;
+ struct mlx5_rxq_data *rxq_data;
+
+ int ret = mlx5_hairpin_queue_counter_supported(priv);
+ if (ret) {
+		DRV_LOG(DEBUG, "Hairpin out of buffer counter "
+			"is not supported in this configuration.");
+ return ret;
+ }
+
+	/* If the port-level counter is enabled, refuse to enable the queue-level one. */
+ if (priv->q_counter_hairpin) {
+ DRV_LOG(WARNING, "Hairpin out of buffer counter is enabled per port.");
+ return -EINVAL;
+ }
+
+ rxq = mlx5_rxq_get(dev, id);
+ if (rxq == NULL || rxq->ctrl->obj->rq == NULL || !rxq->ctrl->is_hairpin)
+ return -EINVAL;
+
+ if (rxq->q_counter != NULL)
+ return 0;
+
+	/* Allocate the hairpin queue counter; reuse ret for the FW syndrome. */
+	rxq->q_counter = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx, &ret);
+	if (rxq->q_counter == NULL) {
+		if (ret == MLX5_Q_COUNTERS_LIMIT_REACHED) {
+ DRV_LOG(WARNING, "Maximum number of queue counters reached. "
+ "Unable to create counter object for Port %d, Queue %d "
+ "using DevX. The counter from this queue will not increment.",
+ priv->dev_data->port_id, rxq->idx);
+ return -ENOSPC;
+ }
+ DRV_LOG(WARNING, "Port %d queue %d counter object cannot be created "
+ "by DevX. Counter from this queue will not increment.",
+ priv->dev_data->port_id, rxq->idx);
+ return -ENOMEM;
+ }
+
+ ret = priv->obj_ops.rxq_obj_modify_counter_set_id(rxq, rxq->q_counter->id);
+ if (ret) {
+		DRV_LOG(ERR, "Failed to modify RQ object for port %u: %s",
+			priv->dev_data->port_id, strerror(rte_errno));
+ return ret;
+ }
+
+ rxq_data = mlx5_rxq_data_get(dev, id);
+ if (rxq_data != NULL)
+ rxq_data->stats.oobs.ctrl.enabled = 1;
+
+ priv->num_of_hairpin_q_counter_enabled++;
+ return 0;
+}
+
+/**
+ * Disables the hairpin queue counter for a specified RX queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param id
+ * Index of the RX queue to disable the hairpin queue counter for.
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_disable_per_queue_hairpin_counter(struct rte_eth_dev *dev, uint64_t id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_priv *rxq;
+ struct mlx5_rxq_data *rxq_data;
+ int ret = 0;
+
+ rxq = mlx5_rxq_get(dev, id);
+ rxq_data = mlx5_rxq_data_get(dev, id);
+
+ if (rxq == NULL || rxq->ctrl->obj->rq == NULL || !rxq->ctrl->is_hairpin)
+ return 0;
+
+ if (rxq->q_counter != NULL) {
+ /* Modify rxq. */
+ ret = priv->obj_ops.rxq_obj_modify_counter_set_id(rxq, 0);
+ if (ret)
+			DRV_LOG(ERR, "Port %u failed to modify RQ object: %s",
+				priv->dev_data->port_id, strerror(rte_errno));
+
+ mlx5_devx_cmd_destroy(rxq->q_counter);
+ rxq->q_counter = NULL;
+ }
+
+ /* Reset queue oob stats. */
+ if (rxq_data != NULL) {
+ rxq_data->stats.oobs.count = 0;
+ rxq_data->stats_reset.oobs.count = 0;
+ rxq_data->stats.oobs.ctrl.enabled = 0;
+ }
+
+ priv->num_of_hairpin_q_counter_enabled--;
+ return 0;
+}
+
+/**
+ * Read a per-queue statistic through its DevX queue counter object.
+ *
+ * @param[in] q_counter
+ *   Pointer to the queue's counter object.
+ * @param[in] ctr_name
+ *   Name of the statistic counter to read, used for logging.
+ * @param[out] stat
+ *   Pointer to the location where the read value is stored.
+ * @return
+ *   0 on success and the value in stat is valid,
+ *   a negative value otherwise and rte_errno is set.
+ *
+ */
+int
+mlx5_read_queue_counter(struct mlx5_devx_obj *q_counter, const char *ctr_name,
+ uint64_t *stat)
+{
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+ DRV_LOG(WARNING,
+ "DevX %s counter is not supported in the secondary process", ctr_name);
+ return -ENOTSUP;
+ }
+
+ if (q_counter == NULL)
+ return -EINVAL;
+
+ return mlx5_devx_cmd_queue_counter_query(q_counter, 0, (uint32_t *)stat);
+}
+
/**
* Callback to remove a device.
*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index cc8a03ccd6..df9a387433 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -261,17 +261,31 @@ struct mlx5_local_data {
extern struct mlx5_shared_data *mlx5_shared_data;
+int mlx5_xstats_enable(struct rte_eth_dev *dev, uint64_t id);
+int mlx5_xstats_disable(struct rte_eth_dev *dev, uint64_t id);
+int mlx5_xstats_query_state(struct rte_eth_dev *dev, uint64_t id);
+
/* Dev ops structs */
extern const struct eth_dev_ops mlx5_dev_ops;
extern const struct eth_dev_ops mlx5_dev_sec_ops;
extern const struct eth_dev_ops mlx5_dev_ops_isolate;
+typedef int (*mlx5_enable_counter_t)(struct rte_eth_dev *dev, uint64_t id);
+typedef int (*mlx5_disable_counter_t)(struct rte_eth_dev *dev, uint64_t id);
+
+struct mlx5_stat_counter_ctrl {
+ mlx5_enable_counter_t enable;
+ mlx5_disable_counter_t disable;
+ uint32_t enabled;
+};
+
struct mlx5_counter_ctrl {
/* Name of the counter. */
char dpdk_name[RTE_ETH_XSTATS_NAME_SIZE];
/* Name of the counter on the device table. */
char ctr_name[RTE_ETH_XSTATS_NAME_SIZE];
uint32_t dev:1; /**< Nonzero for dev counters. */
+ struct mlx5_stat_counter_ctrl ctrl;
};
struct mlx5_xstats_ctrl {
@@ -1783,6 +1797,7 @@ struct mlx5_priv;
/* HW objects operations structure. */
struct mlx5_obj_ops {
int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_priv *rxq, int on);
+ int (*rxq_obj_modify_counter_set_id)(struct mlx5_rxq_priv *rxq, uint32_t counter_set_id);
int (*rxq_obj_new)(struct mlx5_rxq_priv *rxq);
int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type);
@@ -2044,12 +2059,12 @@ struct mlx5_priv {
LIST_HEAD(fdir, mlx5_fdir_flow) fdir_flows; /* fdir flows. */
rte_spinlock_t shared_act_sl; /* Shared actions spinlock. */
uint32_t rss_shared_actions; /* RSS shared actions. */
- /* If true, indicates that we failed to allocate a q counter in the past. */
- bool q_counters_allocation_failure;
+	/* Total number of hairpin Rx queues attached to queue counters. */
+ uint64_t num_of_hairpin_q_counter_enabled;
struct mlx5_devx_obj *q_counters; /* DevX queue counter object. */
uint32_t counter_set_id; /* Queue counter ID to set in DevX objects. */
/* DevX queue counter object for all hairpin queues of the port. */
- struct mlx5_devx_obj *q_counters_hairpin;
+ struct mlx5_devx_obj *q_counter_hairpin;
uint32_t lag_affinity_idx; /* LAG mode queue 0 affinity starting. */
rte_spinlock_t flex_item_sl; /* Flex item list spinlock. */
struct mlx5_flex_item flex_item[MLX5_PORT_FLEX_ITEM_NUM];
@@ -2224,6 +2239,10 @@ bool mlx5_is_sf_repr(struct rte_eth_dev *dev);
void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh);
int mlx5_lwm_setup(struct mlx5_priv *priv);
void mlx5_lwm_unset(struct mlx5_dev_ctx_shared *sh);
+int mlx5_enable_port_level_hairpin_counter(struct rte_eth_dev *dev, uint64_t id);
+int mlx5_disable_port_level_hairpin_counter(struct rte_eth_dev *dev, uint64_t id);
+int mlx5_enable_per_queue_hairpin_counter(struct rte_eth_dev *dev, uint64_t id);
+int mlx5_disable_per_queue_hairpin_counter(struct rte_eth_dev *dev, uint64_t id);
/* Macro to iterate over all valid ports for mlx5 driver. */
#define MLX5_ETH_FOREACH_DEV(port_id, dev) \
@@ -2257,6 +2276,7 @@ int mlx5_flow_aso_ct_mng_init(struct mlx5_dev_ctx_shared *sh);
struct mlx5_physical_device *
mlx5_get_locked_physical_device(struct mlx5_priv *priv);
void mlx5_unlock_physical_device(void);
+int mlx5_read_queue_counter(struct mlx5_devx_obj *q_counter, const char *ctr_name, uint64_t *stat);
/* mlx5_ethdev.c */
@@ -2364,6 +2384,8 @@ int mlx5_xstats_reset(struct rte_eth_dev *dev);
int mlx5_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_xstat_name *xstats_names,
unsigned int n);
+void mlx5_reset_xstats_by_name(struct mlx5_priv *priv, const char *ctr_name);
+void mlx5_reset_xstats_rq(struct rte_eth_dev *dev);
/* mlx5_vlan.c */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 8ebe784000..a12891a983 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -91,6 +91,30 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_priv *rxq, int on)
return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
}
+/**
+ * Modify the queue counter of a given RQ.
+ *
+ * @param rxq
+ *   Rx queue.
+ * @param counter_set_id
+ *   Queue counter ID to set.
+ *
+ * @return
+ *   0 on success, non-zero otherwise.
+ */
+static int
+mlx5_rxq_obj_modify_counter(struct mlx5_rxq_priv *rxq, uint32_t counter_set_id)
+{
+ struct mlx5_devx_modify_rq_attr rq_attr;
+
+ memset(&rq_attr, 0, sizeof(rq_attr));
+ rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+ rq_attr.state = MLX5_RQC_STATE_RDY;
+ rq_attr.counter_set_id = counter_set_id;
+ rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_RQ_COUNTER_SET_ID;
+ return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
+}
+
/**
* Modify RQ using DevX API.
*
@@ -496,55 +520,6 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
return 0;
}
-/**
- * Create a global queue counter for all the port hairpin queues.
- *
- * @param priv
- * Device private data.
- *
- * @return
- * The counter_set_id of the queue counter object, 0 otherwise.
- */
-static uint32_t
-mlx5_set_hairpin_queue_counter_obj(struct mlx5_priv *priv)
-{
- if (priv->q_counters_hairpin != NULL)
- return priv->q_counters_hairpin->id;
-
- /* Queue counter allocation failed in the past - don't try again. */
- if (priv->q_counters_allocation_failure != 0)
- return 0;
-
- if (priv->pci_dev == NULL) {
- DRV_LOG(DEBUG, "Hairpin out of buffer counter is "
- "only supported on PCI device.");
- priv->q_counters_allocation_failure = 1;
- return 0;
- }
-
- switch (priv->pci_dev->id.device_id) {
- /* Counting out of buffer drops on hairpin queues is supported only on CX7 and up. */
- case PCI_DEVICE_ID_MELLANOX_CONNECTX7:
- case PCI_DEVICE_ID_MELLANOX_CONNECTXVF:
- case PCI_DEVICE_ID_MELLANOX_BLUEFIELD3:
- case PCI_DEVICE_ID_MELLANOX_BLUEFIELDVF:
-
- priv->q_counters_hairpin = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx);
- if (priv->q_counters_hairpin == NULL) {
- /* Failed to allocate */
- DRV_LOG(DEBUG, "Some of the statistics of port %d "
- "will not be available.", priv->dev_data->port_id);
- priv->q_counters_allocation_failure = 1;
- return 0;
- }
- return priv->q_counters_hairpin->id;
- default:
- DRV_LOG(DEBUG, "Hairpin out of buffer counter "
- "is not available on this NIC.");
- priv->q_counters_allocation_failure = 1;
- return 0;
- }
-}
/**
* Create the Rx hairpin queue object.
@@ -592,7 +567,6 @@ mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq)
unlocked_attr.wq_attr.log_hairpin_data_sz -
MLX5_HAIRPIN_QUEUE_STRIDE;
- unlocked_attr.counter_set_id = mlx5_set_hairpin_queue_counter_obj(priv);
rxq_ctrl->rxq.delay_drop = priv->config.hp_delay_drop;
unlocked_attr.delay_drop_en = priv->config.hp_delay_drop;
@@ -1710,6 +1684,7 @@ mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj)
struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
+ .rxq_obj_modify_counter_set_id = mlx5_rxq_obj_modify_counter,
.rxq_obj_new = mlx5_rxq_devx_obj_new,
.rxq_event_get = mlx5_rx_devx_get_event,
.rxq_obj_modify = mlx5_devx_modify_rq,
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 1a6f174c40..f80a2e3227 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -30,6 +30,12 @@
/* First entry must be NULL for comparison. */
#define mlx5_mr_btree_len(bt) ((bt)->len - 1)
+struct mlx5_rxq_stat {
+ int id;
+ uint64_t count;
+ struct mlx5_stat_counter_ctrl ctrl;
+};
+
struct mlx5_rxq_stats {
#ifdef MLX5_PMD_SOFT_COUNTERS
uint64_t ipackets; /**< Total of successfully received packets. */
@@ -37,6 +43,18 @@ struct mlx5_rxq_stats {
#endif
uint64_t idropped; /**< Total of packets dropped when RX ring full. */
uint64_t rx_nombuf; /**< Total of RX mbuf allocation failures. */
+	struct mlx5_rxq_stat oobs; /**< Total of hairpin queue out-of-buffer drops. */
+};
+
+/* Statistic name and its offset in the stats structure. */
+struct mlx5_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+struct mlx5_rq_stats {
+	/** Total number of packets dropped (out of buffer) by the hairpin queues. */
+ uint64_t q_oobs[RTE_ETHDEV_QUEUE_STAT_CNTRS];
};
/* Compressed CQE context. */
@@ -183,6 +201,7 @@ struct mlx5_rxq_priv {
uint32_t lwm:16;
uint32_t lwm_event_pending:1;
uint32_t lwm_devx_subscribed:1;
+ struct mlx5_devx_obj *q_counter; /* DevX hairpin queue counter object. */
};
/* mlx5_rxq.c */
@@ -208,6 +227,7 @@ void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
+void mlx5_q_counters_destroy(struct rte_eth_dev *dev);
struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
uint16_t desc, unsigned int socket,
const struct rte_eth_rxconf *conf,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 126b1970e6..a5971b5cdd 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1314,6 +1314,42 @@ mlx5_rxq_obj_verify(struct rte_eth_dev *dev)
return ret;
}
+/**
+ * Destroy all queue counters.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+void
+mlx5_q_counters_destroy(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ unsigned int i;
+
+	/* Destroy the port queue counter. */
+ if (priv->q_counters) {
+ mlx5_devx_cmd_destroy(priv->q_counters);
+ priv->q_counters = NULL;
+ }
+
+	/* Destroy the port global hairpin queue counter. */
+ if (priv->q_counter_hairpin) {
+ mlx5_devx_cmd_destroy(priv->q_counter_hairpin);
+ priv->q_counter_hairpin = NULL;
+ }
+
+	/* Destroy the per-queue hairpin counters. */
+ for (i = 0; i != priv->rxqs_n; ++i) {
+ struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+
+ if (rxq == NULL || rxq->q_counter == NULL)
+ continue;
+
+ mlx5_devx_cmd_destroy(rxq->q_counter);
+ rxq->q_counter = NULL;
+ }
+}
+
/**
* Callback function to initialize mbufs for Multi-Packet RQ.
*/
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index f4ac58e2f9..ebc6f9fb53 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -20,6 +20,107 @@
#include "mlx5_tx.h"
#include "mlx5_malloc.h"
+
+static const struct mlx5_xstats_name_off mlx5_rxq_stats_strings[] = {
+ {"out_of_buffer", offsetof(struct mlx5_rq_stats, q_oobs)},
+};
+
+#define NB_RXQ_STATS RTE_DIM(mlx5_rxq_stats_strings)
+
+/**
+ * Retrieve extended device statistics for Rx queues.
+ * The Rx-queue statistics are placed in the output array before
+ * the entries filled by the other counter groups (device counters, etc.).
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[out] stats
+ * Pointer to an array to store the retrieved statistics.
+ * @return
+ *   Number of extended statistics filled,
+ * negative on error and rte_errno is set.
+ */
+static int
+mlx5_rq_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *stats)
+{
+ uint16_t n_stats_rq = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ int cnt_used_entries = 0;
+
+ for (unsigned int idx = 0; idx < n_stats_rq; idx++) {
+ struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, idx);
+ struct mlx5_rxq_priv *rxq_priv = mlx5_rxq_get(dev, idx);
+
+ if (rxq_data == NULL)
+ continue;
+
+ struct mlx5_rxq_stat *rxq_stat = &rxq_data->stats.oobs;
+ if (rxq_stat == NULL)
+ continue;
+
+		/* Flag the stat as uninitialized until a hairpin queue claims it. */
+ rxq_stat->id = -1;
+
+ /* Handle hairpin statistics */
+ if (rxq_priv && rxq_priv->ctrl->is_hairpin) {
+ if (stats) {
+ mlx5_read_queue_counter(rxq_priv->q_counter, "hairpin_out_of_buffer",
+ &rxq_stat->count);
+
+ stats[cnt_used_entries].id = cnt_used_entries;
+ stats[cnt_used_entries].value = rxq_stat->count -
+ rxq_data->stats_reset.oobs.count;
+ }
+ rxq_stat->ctrl.enable = mlx5_enable_per_queue_hairpin_counter;
+ rxq_stat->ctrl.disable = mlx5_disable_per_queue_hairpin_counter;
+ rxq_stat->id = cnt_used_entries;
+ cnt_used_entries++;
+ }
+ }
+ return cnt_used_entries;
+}
+
+/**
+ * Retrieve names of extended device statistics for Rx queues.
+ * The Rx-queue statistic names are placed in the output array before
+ * the entries filled by the other counter groups (device counters, etc.).
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[out] xstats_names
+ * Buffer to insert names into.
+ *
+ * @return
+ * Number of xstats names.
+ */
+static int
+mlx5_rq_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+ struct rte_eth_xstat_name *xstats_names)
+{
+ struct mlx5_rxq_priv *rxq;
+ unsigned int i;
+ int cnt_used_entries = 0;
+
+ uint16_t n_stats_rq = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
+ for (i = 0; (i != n_stats_rq); ++i) {
+ rxq = mlx5_rxq_get(dev, i);
+
+ if (rxq == NULL)
+ continue;
+
+ if (rxq->ctrl->is_hairpin) {
+ if (xstats_names)
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "hairpin_%s_rxq%u",
+ mlx5_rxq_stats_strings[0].name, i);
+ cnt_used_entries++;
+ }
+ }
+ return cnt_used_entries;
+}
+
/**
* DPDK callback to get extended device statistics.
*
@@ -46,6 +147,7 @@ mlx5_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *stats,
uint16_t stats_n_2nd = 0;
uint16_t mlx5_stats_n = xstats_ctrl->mlx5_stats_n;
bool bond_master = (priv->master && priv->pf_bond >= 0);
+ int n_used = mlx5_rq_xstats_get(dev, stats);
if (n >= mlx5_stats_n && stats) {
int ret;
@@ -69,27 +171,27 @@ mlx5_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *stats,
if (ret < 0)
return ret;
for (i = 0; i != mlx5_stats_n; i++) {
- stats[i].id = i;
+ stats[i + n_used].id = i + n_used;
if (xstats_ctrl->info[i].dev) {
uint64_t wrap_n;
uint64_t hw_stat = xstats_ctrl->hw_stats[i];
- stats[i].value = (counters[i] -
+ stats[i + n_used].value = (counters[i] -
xstats_ctrl->base[i]) &
(uint64_t)UINT32_MAX;
wrap_n = hw_stat >> 32;
- if (stats[i].value <
+ if (stats[i + n_used].value <
(hw_stat & (uint64_t)UINT32_MAX))
wrap_n++;
- stats[i].value |= (wrap_n) << 32;
- xstats_ctrl->hw_stats[i] = stats[i].value;
+ stats[i + n_used].value |= (wrap_n) << 32;
+ xstats_ctrl->hw_stats[i] = stats[i + n_used].value;
} else {
- stats[i].value =
+ stats[i + n_used].value =
(counters[i] - xstats_ctrl->base[i]);
}
}
}
- mlx5_stats_n = mlx5_txpp_xstats_get(dev, stats, n, mlx5_stats_n);
+ mlx5_stats_n = mlx5_txpp_xstats_get(dev, stats, n, mlx5_stats_n + n_used);
return mlx5_stats_n;
}
@@ -273,11 +375,58 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
xstats_ctrl->base[i] = counters[i];
xstats_ctrl->hw_stats[i] = 0;
}
+ mlx5_reset_xstats_rq(dev);
mlx5_txpp_xstats_reset(dev);
mlx5_free(counters);
return 0;
}
+void
+mlx5_reset_xstats_by_name(struct mlx5_priv *priv, const char *ctr_name)
+{
+ struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
+ unsigned int mlx5_xstats_n = xstats_ctrl->mlx5_stats_n;
+ unsigned int i;
+
+ for (i = 0; i != mlx5_xstats_n; ++i) {
+ if (strcmp(xstats_ctrl->info[i].ctr_name, ctr_name) == 0) {
+ xstats_ctrl->base[i] = 0;
+ xstats_ctrl->hw_stats[i] = 0;
+ xstats_ctrl->xstats[i] = 0;
+ return;
+ }
+ }
+}
+
+/**
+ * Clear device extended statistics for each Rx queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ */
+void
+mlx5_reset_xstats_rq(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_priv *rxq;
+ struct mlx5_rxq_data *rxq_data;
+ unsigned int i;
+
+ for (i = 0; (i != priv->rxqs_n); ++i) {
+ rxq = mlx5_rxq_get(dev, i);
+ rxq_data = mlx5_rxq_data_get(dev, i);
+
+ if (rxq == NULL || rxq_data == NULL || rxq->q_counter == NULL)
+ continue;
+ if (rxq->ctrl->is_hairpin)
+ mlx5_read_queue_counter(rxq->q_counter,
+ "hairpin_out_of_buffer", &rxq_data->stats_reset.oobs.count);
+ else
+ mlx5_read_queue_counter(rxq->q_counter,
+ "out_of_buffer", &rxq_data->stats_reset.oobs.count);
+ }
+}
+
/**
* DPDK callback to retrieve names of extended device statistics
*
@@ -299,15 +448,140 @@ mlx5_xstats_get_names(struct rte_eth_dev *dev,
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
unsigned int mlx5_xstats_n = xstats_ctrl->mlx5_stats_n;
+ unsigned int n_used = mlx5_rq_xstats_get_names(dev, xstats_names);
if (n >= mlx5_xstats_n && xstats_names) {
for (i = 0; i != mlx5_xstats_n; ++i) {
- strlcpy(xstats_names[i].name,
+ rte_strscpy(xstats_names[i + n_used].name,
xstats_ctrl->info[i].dpdk_name,
RTE_ETH_XSTATS_NAME_SIZE);
+ xstats_names[i + n_used].name[RTE_ETH_XSTATS_NAME_SIZE - 1] = 0;
}
}
mlx5_xstats_n = mlx5_txpp_xstats_get_names(dev, xstats_names,
- n, mlx5_xstats_n);
+ n, mlx5_xstats_n + n_used);
return mlx5_xstats_n;
}
+
+static struct mlx5_stat_counter_ctrl *
+mlx5_rxq_get_counter_by_id(struct rte_eth_dev *dev, uint64_t id, uint64_t *rq_id)
+{
+ uint16_t n_stats_rq = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
+ for (int i = 0; (i != n_stats_rq); i++) {
+ struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
+ if (rxq_data == NULL || rxq_data->stats.oobs.id == -1)
+ continue;
+
+ if ((uint64_t)rxq_data->stats.oobs.id == id) {
+ *rq_id = rxq_data->idx;
+ return &rxq_data->stats.oobs.ctrl;
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * Callback to enable an xstat counter of the given id.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param id
+ * The ID of the counter to enable
+ *
+ * @return
+ *   1 if the xstat is enabled, 0 if it is disabled,
+ *   -ENOTSUP if toggling is not implemented and -EINVAL if the xstat ID is invalid.
+ */
+int
+mlx5_xstats_enable(struct rte_eth_dev *dev, uint64_t id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
+ struct mlx5_stat_counter_ctrl *counter_ctrl = NULL;
+ uint16_t n_stats_rq = mlx5_rq_xstats_get(dev, NULL);
+
+ if (id < n_stats_rq)
+ counter_ctrl = mlx5_rxq_get_counter_by_id(dev, id, &id);
+ else
+ counter_ctrl = &xstats_ctrl->info[id - n_stats_rq].ctrl;
+
+ if (counter_ctrl == NULL)
+ return -EINVAL;
+
+ if (counter_ctrl->enable == NULL)
+ return -ENOTSUP;
+
+ counter_ctrl->enabled = counter_ctrl->enable(dev, id) == 0 ? 1 : 0;
+ return counter_ctrl->enabled;
+}
+
+/**
+ * Callback to disable an xstat counter of the given id.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param id
+ *   The ID of the counter to disable.
+ *
+ * @return
+ *   0 if the xstat is disabled, 1 if it is still enabled,
+ *   -ENOTSUP if toggling is not implemented and -EINVAL if the xstat ID is invalid.
+ */
+int
+mlx5_xstats_disable(struct rte_eth_dev *dev, uint64_t id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
+ struct mlx5_stat_counter_ctrl *counter_ctrl = NULL;
+
+ uint16_t n_stats_rq = mlx5_rq_xstats_get(dev, NULL);
+ if (id < n_stats_rq)
+ counter_ctrl = mlx5_rxq_get_counter_by_id(dev, id, &id);
+ else
+ counter_ctrl = &xstats_ctrl->info[id - n_stats_rq].ctrl;
+
+ if (counter_ctrl == NULL)
+ return -EINVAL;
+
+ if (counter_ctrl->disable == NULL)
+ return -ENOTSUP;
+
+ counter_ctrl->enabled = counter_ctrl->disable(dev, id) == 0 ? 0 : 1;
+ return counter_ctrl->enabled;
+}
+
+/**
+ * Query the state of the xstat counter.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param id
+ *   The ID of the counter to query.
+ *
+ * @return
+ *   1 if the xstat is enabled, 0 if it is disabled,
+ *   -ENOTSUP if querying is not implemented and -EINVAL if the xstat ID is invalid.
+ */
+int
+mlx5_xstats_query_state(struct rte_eth_dev *dev, uint64_t id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
+ struct mlx5_stat_counter_ctrl *counter_ctrl = NULL;
+
+ uint16_t n_stats_rq = mlx5_rq_xstats_get(dev, NULL);
+ if (id < n_stats_rq)
+ counter_ctrl = mlx5_rxq_get_counter_by_id(dev, id, &id);
+ else
+ counter_ctrl = &xstats_ctrl->info[id - n_stats_rq].ctrl;
+
+ if (counter_ctrl == NULL)
+ return -EINVAL;
+
+ if (counter_ctrl->disable == NULL)
+ return -ENOTSUP;
+
+ return counter_ctrl->enabled;
+}
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 268598f209..d583730066 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -78,12 +78,11 @@ mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
struct mlx5_priv *priv = dev->data->dev_private;
void *ctx = priv->sh->cdev->ctx;
- priv->q_counters = mlx5_devx_cmd_queue_counter_alloc(ctx);
+ priv->q_counters = mlx5_devx_cmd_queue_counter_alloc(ctx, NULL);
if (!priv->q_counters) {
DRV_LOG(ERR, "Port %d queue counter object cannot be created "
"by DevX - imissed counter will be unavailable",
dev->data->port_id);
- priv->q_counters_allocation_failure = 1;
return;
}
priv->counter_set_id = priv->q_counters->id;
--
2.34.1
Thread overview: 10+ messages
2024-12-22 15:38 [RFC PATCH] ethdev: add new API for enable/disable xstat counters by ID Shani Peretz
2024-12-22 16:46 ` Stephen Hemminger
2024-12-23 10:46 ` Shani Peretz
2024-12-24 17:19 ` Stephen Hemminger
2025-01-01 19:05 ` Shani Peretz
2025-01-10 4:36 ` Stephen Hemminger
2025-01-24 11:43 ` [PATCH v2 0/3] add new API to enable/disable xstat counters Shani Peretz
2025-01-24 11:43 ` [PATCH v2 1/3] ethdev: add new API to enable/disable xstat by counters ID Shani Peretz
2025-01-24 11:43 ` [PATCH v2 2/3] app/testpmd: added testpmd commands to enable/disable xstat Shani Peretz
2025-01-24 11:43 ` Shani Peretz [this message]