From: Shani Peretz <shperetz@nvidia.com>
To: <dev@dpdk.org>
Cc: <shperetz@nvidia.com>, <mkashani@nvidia.com>,
<rasland@nvidia.com>, <dsosnowski@nvidia.com>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
"Bing Zhao" <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>,
Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>,
Anatoly Burakov <anatoly.burakov@intel.com>
Subject: [PATCH 2/2] net/mlx5: add hairpin out of buffer counter
Date: Mon, 1 Jul 2024 21:12:45 +0300
Message-ID: <20240701181245.128810-3-shperetz@nvidia.com>
In-Reply-To: <20240701181245.128810-1-shperetz@nvidia.com>
Currently, the mlx5 PMD exposes an `rx_out_of_buffer` counter that tracks
packets dropped when an Rx queue was full.
To provide more granular statistics, this patch splits the
`rx_out_of_buffer` counter into two separate counters:
1. `hairpin_out_of_buffer` - tracks packets dropped by the device's
hairpin Rx queues.
2. `rx_out_of_buffer` - tracks packets dropped by the device's
Rx queues, excluding the hairpin Rx queues.
Two hardware counter objects are created per device,
and all Rx queues are assigned to these counters during the
configuration phase.
The `hairpin_out_of_buffer` counter is created only if there is
at least one hairpin Rx queue present on the device.
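For illustration only (not part of this patch), a minimal sketch of how an
application could read the new counter through the generic xstats API,
assuming the port is started and the NIC exposes the counter
(ConnectX-7 / BlueField-3 or newer):

    #include <rte_ethdev.h>

    /* Illustrative helper: look up the "hairpin_out_of_buffer" xstat
     * by name and read its current value. */
    static int
    read_hairpin_oob(uint16_t port_id, uint64_t *value)
    {
        uint64_t id;

        /* Lookup fails when the counter is not exposed, e.g. on
         * NICs older than ConnectX-7 / BlueField-3. */
        if (rte_eth_xstats_get_id_by_name(port_id,
                "hairpin_out_of_buffer", &id) < 0)
            return -1;
        return rte_eth_xstats_get_by_id(port_id, &id, value, 1) == 1
               ? 0 : -1;
    }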
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
doc/guides/nics/mlx5.rst | 3 ++
doc/guides/rel_notes/release_24_07.rst | 1 +
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 5 +++
drivers/net/mlx5/linux/mlx5_os.c | 14 ++++++-
drivers/net/mlx5/mlx5.c | 4 ++
drivers/net/mlx5/mlx5.h | 4 ++
drivers/net/mlx5/mlx5_devx.c | 54 ++++++++++++++++++++++++-
drivers/net/mlx5/windows/mlx5_os.c | 1 +
8 files changed, 84 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 304c6770af..caacc9f62d 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -750,6 +750,9 @@ Limitations
- Hairpin between two ports supports only manual binding and explicit Tx flow mode. For single port hairpin, all the combinations of auto/manual binding and explicit/implicit Tx flow mode are supported.
- Hairpin in switchdev SR-IOV mode is not yet supported.
+ - "out_of_buffer" statistics are not available on:
+ - NICs older than ConnectX-7.
+ - DPUs older than BlueField-3.
- Quota:
diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index c3e4fa5038..b9aec2d999 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -99,6 +99,7 @@ New Features
* Added match with E-Switch manager.
* Added flow item and actions validation to async flow API.
* Added global out of buffer counter for hairpin queues.
+ * Added port out of buffer counter for hairpin queues.
* **Updated TAP driver.**
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 7995ac6bbc..82f651f2f3 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -1420,6 +1420,11 @@ static const struct mlx5_counter_ctrl mlx5_counters_init[] = {
.ctr_name = "out_of_buffer",
.dev = 1,
},
+ {
+ .dpdk_name = "hairpin_out_of_buffer",
+ .ctr_name = "hairpin_out_of_buffer",
+ .dev = 1,
+ },
{
.dpdk_name = "dev_internal_queue_oob",
.ctr_name = "dev_internal_queue_oob",
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 50f4810bff..5e950e9be1 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -964,6 +964,8 @@ mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
DRV_LOG(DEBUG, "Port %d queue counter object cannot be created "
"by DevX - fall-back to use the kernel driver global "
"queue counter.", dev->data->port_id);
+ priv->q_counters_allocation_failure = 1;
+
/* Create WQ by kernel and query its queue counter ID. */
if (cq) {
wq = mlx5_glue->create_wq(ctx,
@@ -3037,13 +3039,23 @@ mlx5_os_read_dev_stat(struct mlx5_priv *priv, const char *ctr_name,
if (priv->q_counters != NULL &&
strcmp(ctr_name, "out_of_buffer") == 0) {
if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
- DRV_LOG(WARNING, "Devx out_of_buffer counter is not supported in the secondary process");
+ DRV_LOG(WARNING, "DevX out_of_buffer counter is not supported in the secondary process");
rte_errno = ENOTSUP;
return 1;
}
return mlx5_devx_cmd_queue_counter_query
(priv->q_counters, 0, (uint32_t *)stat);
}
+ if (priv->q_counters_hairpin != NULL &&
+ strcmp(ctr_name, "hairpin_out_of_buffer") == 0) {
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+ DRV_LOG(WARNING, "DevX out_of_buffer counter is not supported in the secondary process");
+ rte_errno = ENOTSUP;
+ return 1;
+ }
+ return mlx5_devx_cmd_queue_counter_query
+ (priv->q_counters_hairpin, 0, (uint32_t *)stat);
+ }
MKSTR(path, "%s/ports/%d/hw_counters/%s",
priv->sh->ibdev_path,
priv->dev_port,
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e482f7f0e5..8d266b0e64 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2394,6 +2394,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_devx_cmd_destroy(priv->q_counters);
priv->q_counters = NULL;
}
+ if (priv->q_counters_hairpin) {
+ mlx5_devx_cmd_destroy(priv->q_counters_hairpin);
+ priv->q_counters_hairpin = NULL;
+ }
mlx5_mprq_free_mp(dev);
mlx5_os_free_shared_dr(priv);
#ifdef HAVE_MLX5_HWS_SUPPORT
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index bd149b43e5..75a1e170af 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1986,8 +1986,12 @@ struct mlx5_priv {
LIST_HEAD(fdir, mlx5_fdir_flow) fdir_flows; /* fdir flows. */
rte_spinlock_t shared_act_sl; /* Shared actions spinlock. */
uint32_t rss_shared_actions; /* RSS shared actions. */
+ /* If true, indicates that we failed to allocate a q counter in the past. */
+ bool q_counters_allocation_failure;
struct mlx5_devx_obj *q_counters; /* DevX queue counter object. */
uint32_t counter_set_id; /* Queue counter ID to set in DevX objects. */
+ /* DevX queue counter object for all hairpin queues of the port. */
+ struct mlx5_devx_obj *q_counters_hairpin;
uint32_t lag_affinity_idx; /* LAG mode queue 0 affinity starting. */
rte_spinlock_t flex_item_sl; /* Flex item list spinlock. */
struct mlx5_flex_item flex_item[MLX5_PORT_FLEX_ITEM_NUM];
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index f23eb1def6..7db271acb4 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -496,6 +496,56 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
return 0;
}
+/**
+ * Create a global queue counter for all the port hairpin queues.
+ *
+ * @param priv
+ * Device private data.
+ *
+ * @return
+ * The counter_set_id of the queue counter object on success, 0 otherwise.
+ */
+static uint32_t
+mlx5_set_hairpin_queue_counter_obj(struct mlx5_priv *priv)
+{
+ if (priv->q_counters_hairpin != NULL)
+ return priv->q_counters_hairpin->id;
+
+ /* Queue counter allocation failed in the past - don't try again. */
+ if (priv->q_counters_allocation_failure != 0)
+ return 0;
+
+ if (priv->pci_dev == NULL) {
+ DRV_LOG(DEBUG, "Hairpin out of buffer counter is "
+ "only supported on PCI device.");
+ priv->q_counters_allocation_failure = 1;
+ return 0;
+ }
+
+ switch (priv->pci_dev->id.device_id) {
+ /* Counting out of buffer drops on hairpin queues is supported only on CX7 and up. */
+ case PCI_DEVICE_ID_MELLANOX_CONNECTX7:
+ case PCI_DEVICE_ID_MELLANOX_CONNECTXVF:
+ case PCI_DEVICE_ID_MELLANOX_BLUEFIELD3:
+ case PCI_DEVICE_ID_MELLANOX_BLUEFIELDVF:
+
+ priv->q_counters_hairpin = mlx5_devx_cmd_queue_counter_alloc(priv->sh->cdev->ctx);
+ if (priv->q_counters_hairpin == NULL) {
+ /* Failed to allocate */
+ DRV_LOG(DEBUG, "Some of the statistics of port %d "
+ "will not be available.", priv->dev_data->port_id);
+ priv->q_counters_allocation_failure = 1;
+ return 0;
+ }
+ return priv->q_counters_hairpin->id;
+ default:
+ DRV_LOG(DEBUG, "Hairpin out of buffer counter "
+ "is not available on this NIC.");
+ priv->q_counters_allocation_failure = 1;
+ return 0;
+ }
+}
+
/**
* Create the Rx hairpin queue object.
*
@@ -541,7 +591,9 @@ mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq)
unlocked_attr.wq_attr.log_hairpin_num_packets =
unlocked_attr.wq_attr.log_hairpin_data_sz -
MLX5_HAIRPIN_QUEUE_STRIDE;
- unlocked_attr.counter_set_id = priv->counter_set_id;
+
+ unlocked_attr.counter_set_id = mlx5_set_hairpin_queue_counter_obj(priv);
+
rxq_ctrl->rxq.delay_drop = priv->config.hp_delay_drop;
unlocked_attr.delay_drop_en = priv->config.hp_delay_drop;
unlocked_attr.hairpin_data_buffer_type =
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 98022ed3c7..0ebd233595 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -83,6 +83,7 @@ mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
DRV_LOG(ERR, "Port %d queue counter object cannot be created "
"by DevX - imissed counter will be unavailable",
dev->data->port_id);
+ priv->q_counters_allocation_failure = 1;
return;
}
priv->counter_set_id = priv->q_counters->id;
--
2.34.1