DPDK patches and discussions
From: Spike Du <spiked@nvidia.com>
To: <matan@nvidia.com>, <viacheslavo@nvidia.com>, <orika@nvidia.com>,
	<thomas@monjalon.net>
Cc: <dev@dpdk.org>, <rasland@nvidia.com>
Subject: [RFC v2 5/7] net/mlx5: support Rx queue based limit watermark
Date: Sun, 22 May 2022 08:58:58 +0300	[thread overview]
Message-ID: <20220522055900.417282-6-spiked@nvidia.com> (raw)
In-Reply-To: <20220522055900.417282-1-spiked@nvidia.com>

Add mlx5-specific LWM (limit watermark) configuration and query handlers.
When the Rx queue fullness reaches the LWM limit, the driver catches
a hardware event and invokes the user callback.
The query handler finds the next Rx queue with a pending LWM event,
if any, starting from the given Rx queue index.
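
For context, a minimal sketch of how an application is expected to consume
this (the ethdev-level helpers come from patch 3/7 of this series; their names
and signatures are assumed here, only RTE_ETH_EVENT_RX_LWM appears in this
patch):

#include <stdint.h>
#include <rte_ethdev.h>

/* Assumed prototypes of the ethdev API added in patch 3/7; the names and
 * signatures are an assumption for this sketch, not defined in this patch. */
int rte_eth_rx_queue_lwm_set(uint16_t port_id, uint16_t queue_id, uint8_t lwm);
int rte_eth_rx_queue_lwm_query(uint16_t port_id, uint16_t *queue_id,
			       uint8_t *lwm);

static int
lwm_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg, void *ret_param)
{
	uint16_t queue_id = 0;
	uint8_t lwm;

	(void)event;
	(void)cb_arg;
	(void)ret_param;
	/*
	 * Drain every queue with a pending LWM event. The driver clears the
	 * pending flag on query, so restarting the circular scan from the
	 * same index simply moves on to the next pending queue.
	 */
	while (rte_eth_rx_queue_lwm_query(port_id, &queue_id, &lwm) == 1) {
		/* React here, e.g. throttle the sender, then re-arm. */
		rte_eth_rx_queue_lwm_set(port_id, queue_id, lwm);
	}
	return 0;
}

static int
lwm_setup(uint16_t port_id, uint16_t nb_rxq)
{
	uint16_t q;
	int ret;

	ret = rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_LWM,
					    lwm_event_cb, NULL);
	if (ret != 0)
		return ret;
	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_lwm_set(port_id, q, 70); /* 70% full. */
		if (ret != 0)
			return ret;
	}
	return 0;
}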

Signed-off-by: Spike Du <spiked@nvidia.com>
---
 doc/guides/nics/mlx5.rst               |  12 ++
 doc/guides/rel_notes/release_22_07.rst |   1 +
 drivers/common/mlx5/mlx5_prm.h         |   1 +
 drivers/net/mlx5/mlx5.c                |   2 +
 drivers/net/mlx5/mlx5_rx.c             | 156 +++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.h             |   5 +
 6 files changed, 177 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index d83c56de11..79f56018ef 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -93,6 +93,7 @@ Features
 - Connection tracking.
 - Sub-Function representors.
 - Sub-Function.
+- Rx queue LWM (Limit WaterMark) configuration.
 
 
 Limitations
@@ -520,6 +521,9 @@ Limitations
 
 - The NIC egress flow rules on representor port are not supported.
 
+- LWM:
+
+  - Shared Rx queues and hairpin Rx queues are not supported.
 
 Statistics
 ----------
@@ -1680,3 +1684,11 @@ The procedure below is an example of using a ConnectX-5 adapter card (pf0) with
 #. For each VF PCIe, using the following command to bind the driver::
 
    $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/bind
+
+LWM introduction
+----------------
+
+LWM (Limit WaterMark) is a per-Rx-queue attribute. It is configured as
+a percentage of the Rx queue size.
+When the Rx queue fullness is above the LWM, an event is sent to the PMD.
+
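
As an illustration of the percentage semantics above, derived from the
conversion in mlx5_rx_queue_lwm_set() further below: with a 1024-descriptor
Rx queue and an LWM of 70, the hardware is armed so that the event is raised
once roughly 70% of the descriptors (about 716) hold received packets.
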
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a60a0d5f16..253bc7e381 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -80,6 +80,7 @@ New Features
   * Added support for promiscuous mode on Windows.
   * Added support for MTU on Windows.
   * Added matching and RSS on IPsec ESP.
+  * Added Rx queue LWM (Limit WaterMark) support.
 
 * **Updated Marvell cnxk crypto driver.**
 
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 630b2c5100..3b5e60532a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3293,6 +3293,7 @@ struct mlx5_aso_wqe {
 
 enum {
 	MLX5_EVENT_TYPE_OBJECT_CHANGE = 0x27,
+	MLX5_EVENT_TYPE_SRQ_LIMIT_REACHED = 0x14,
 };
 
 enum {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e04a66625e..35ae51b3af 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2071,6 +2071,8 @@ const struct eth_dev_ops mlx5_dev_ops = {
 	.dev_supported_ptypes_get = mlx5_dev_supported_ptypes_get,
 	.vlan_filter_set = mlx5_vlan_filter_set,
 	.rx_queue_setup = mlx5_rx_queue_setup,
+	.rx_queue_lwm_set = mlx5_rx_queue_lwm_set,
+	.rx_queue_lwm_query = mlx5_rx_queue_lwm_query,
 	.rx_hairpin_queue_setup = mlx5_rx_hairpin_queue_setup,
 	.tx_queue_setup = mlx5_tx_queue_setup,
 	.tx_hairpin_queue_setup = mlx5_tx_hairpin_queue_setup,
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 7d556c2b45..d30522e6df 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -19,12 +19,14 @@
 #include <mlx5_prm.h>
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
+#include <rte_pmd_mlx5.h>
 
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
 #include "mlx5.h"
 #include "mlx5_utils.h"
 #include "mlx5_rxtx.h"
+#include "mlx5_devx.h"
 #include "mlx5_rx.h"
 
 
@@ -128,6 +130,17 @@ mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset)
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
+/* Convert the rxq LWM descriptor count into a fullness percentage. */
+static uint8_t
+mlx5_rxq_lwm_to_percentage(struct mlx5_rxq_priv *rxq)
+{
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
+	uint32_t wqe_cnt = 1 << rxq_data->elts_n;
+
+	/* ethdev LWM describes fullness, mlx5 LWM describes emptiness. */
+	return rxq->lwm ? (100 - rxq->lwm * 100 / wqe_cnt) : 0;
+}
+
 /**
  * DPDK callback to get the RX queue information.
  *
@@ -150,6 +163,7 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 {
 	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id);
 	struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
+	struct mlx5_rxq_priv *rxq_priv = mlx5_rxq_get(dev, rx_queue_id);
 
 	if (!rxq)
 		return;
@@ -169,6 +183,8 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
 		RTE_BIT32(rxq->elts_n) * RTE_BIT32(rxq->log_strd_num) :
 		RTE_BIT32(rxq->elts_n);
+	qinfo->conf.lwm = rxq_priv ?
+		mlx5_rxq_lwm_to_percentage(rxq_priv) : 0;
 }
 
 /**
@@ -1188,6 +1204,34 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev __rte_unused)
 	return -ENOTSUP;
 }
 
+int
+mlx5_rx_queue_lwm_query(struct rte_eth_dev *dev,
+			uint16_t *queue_id, uint8_t *lwm)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	unsigned int rxq_id, found = 0, n;
+	struct mlx5_rxq_priv *rxq;
+
+	if (!queue_id)
+		return -EINVAL;
+	/* Query all the Rx queues of the port in a circular way. */
+	for (rxq_id = *queue_id, n = 0; n < priv->rxqs_n; n++) {
+		rxq = mlx5_rxq_get(dev, rxq_id);
+		if (rxq && rxq->lwm_event_pending) {
+			pthread_mutex_lock(&priv->sh->lwm_config_lock);
+			rxq->lwm_event_pending = 0;
+			pthread_mutex_unlock(&priv->sh->lwm_config_lock);
+			*queue_id = rxq_id;
+			found = 1;
+			if (lwm)
+				*lwm =  mlx5_rxq_lwm_to_percentage(rxq);
+			break;
+		}
+		rxq_id = (rxq_id + 1) % priv->rxqs_n;
+	}
+	return found;
+}
+
 /**
  * Rte interrupt handler for LWM event.
  * It first checks if the event arrives, if so process the callback for
@@ -1220,3 +1264,115 @@ mlx5_dev_interrupt_handler_lwm(void *args)
 	}
 	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_RX_LWM, NULL);
 }
+
+/**
+ * DPDK callback to arm an Rx queue LWM (limit watermark) event.
+ * When the Rx queue fullness reaches the LWM limit, the driver catches
+ * a hardware event and invokes the user event callback.
+ * After the last event handling, the user needs to call this API again
+ * to arm an additional event.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param[in] rx_queue_id
+ *   Rx queue identifier.
+ * @param[in] lwm
+ *   The LWM value, defined as a percentage of the Rx queue size.
+ *   [1-99] to set a new LWM (update the old value).
+ *   0 to unarm the event.
+ *
+ * @return
+ *   0 : operation success.
+ *   Otherwise:
+ *   - ENOMEM - not enough memory to create LWM event channel.
+ *   - EINVAL - the input Rxq is not created by devx.
+ *   - E2BIG  - lwm is bigger than 99.
+ */
+int
+mlx5_rx_queue_lwm_set(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		      uint8_t lwm)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint16_t port_id = PORT_ID(priv);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
+	uint16_t event_nums[1] = {MLX5_EVENT_TYPE_SRQ_LIMIT_REACHED};
+	struct mlx5_rxq_data *rxq_data;
+	uint32_t wqe_cnt;
+	uint64_t cookie;
+	int ret = 0;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	rxq_data = &rxq->ctrl->rxq;
+	/* Ensure the RQ is created by DevX. */
+	if (priv->obj_ops.rxq_obj_new != devx_obj_ops.rxq_obj_new) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (lwm > 99) {
+		DRV_LOG(WARNING, "Too big LWM configuration.");
+		rte_errno = E2BIG;
+		return -rte_errno;
+	}
+	/* Start LWM configuration. */
+	pthread_mutex_lock(&priv->sh->lwm_config_lock);
+	if (rxq->lwm == 0 && lwm == 0) {
+		/* Both old/new values are 0, do nothing. */
+		ret = 0;
+		goto end;
+	}
+	wqe_cnt = 1 << rxq_data->elts_n;
+	if (lwm) {
+		if (!priv->sh->devx_channel_lwm) {
+			ret = mlx5_lwm_setup(priv);
+			if (ret) {
+				DRV_LOG(WARNING,
+					"Failed to create shared_lwm.");
+				rte_errno = ENOMEM;
+				ret = -rte_errno;
+				goto end;
+			}
+		}
+		if (!rxq->lwm_devx_subscribed) {
+			cookie = ((uint32_t)
+				  (port_id << LWM_COOKIE_PORTID_OFFSET)) |
+				(rx_queue_id << LWM_COOKIE_RXQID_OFFSET);
+			ret = mlx5_os_devx_subscribe_devx_event
+				(priv->sh->devx_channel_lwm,
+				 rxq->devx_rq.rq->obj,
+				 sizeof(event_nums),
+				 event_nums,
+				 cookie);
+			if (ret) {
+				rte_errno = rte_errno ? rte_errno : EINVAL;
+				ret = -rte_errno;
+				goto end;
+			}
+			rxq->lwm_devx_subscribed = 1;
+		}
+	}
+	/* The ethdev LWM describes fullness, mlx5 LWM describes emptiness. */
+	if (lwm)
+		lwm = 100 - lwm;
+	/* Save LWM to rxq and send modify_rq devx command. */
+	rxq->lwm = lwm * wqe_cnt / 100;
+	/* Round up to avoid integer-division loss in the LWM conversion. */
+	if (lwm && (lwm * wqe_cnt % 100)) {
+		rxq->lwm = ((uint32_t)(rxq->lwm + 1) >= wqe_cnt) ?
+			rxq->lwm : (rxq->lwm + 1);
+	}
+	if (lwm && !rxq->lwm) {
+		/* With mprq, wqe_cnt may be < 100. */
+		DRV_LOG(WARNING, "Too small LWM configuration.");
+		rte_errno = EINVAL;
+		ret = -rte_errno;
+		goto end;
+	}
+	ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RDY);
+end:
+	pthread_mutex_unlock(&priv->sh->lwm_config_lock);
+	return ret;
+}
+
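
To make the fullness-to-emptiness conversion above concrete, a minimal
standalone sketch (illustrative only; it mirrors the arithmetic in
mlx5_rx_queue_lwm_set() and mlx5_rxq_lwm_to_percentage()):

/* Illustrative mirror of the LWM conversion: the ethdev LWM is a fullness
 * percentage, the mlx5 hardware limit is an emptiness descriptor count. */
#include <stdint.h>
#include <stdio.h>

static uint32_t
lwm_percentage_to_count(uint8_t lwm, uint32_t wqe_cnt)
{
	uint32_t count;

	if (lwm == 0)
		return 0;		/* 0 disarms the event. */
	lwm = 100 - lwm;		/* Fullness -> emptiness percentage. */
	count = lwm * wqe_cnt / 100;
	/* Round up to avoid integer-division loss, as the driver does. */
	if ((lwm * wqe_cnt) % 100 && count + 1 < wqe_cnt)
		count++;
	return count;
}

int
main(void)
{
	/*
	 * A 512-descriptor queue armed at 80% fullness:
	 * emptiness = 20%, 20 * 512 / 100 = 102 with a remainder, so 103.
	 * The event fires once at most 103 descriptors remain free,
	 * i.e. the queue is about 80% full.
	 */
	printf("%u\n", lwm_percentage_to_count(80, 512));
	return 0;
}
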
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 068dff5863..e078aaf3dc 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -177,6 +177,7 @@ struct mlx5_rxq_priv {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 	uint32_t lwm:16;
 	uint32_t lwm_event_pending:1;
+	uint32_t lwm_devx_subscribed:1;
 };
 
 /* External RX queue descriptor. */
@@ -297,6 +298,10 @@ int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			   struct rte_eth_burst_mode *mode);
 int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
 void mlx5_dev_interrupt_handler_lwm(void *args);
+int mlx5_rx_queue_lwm_set(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+			  uint8_t lwm);
+int mlx5_rx_queue_lwm_query(struct rte_eth_dev *dev, uint16_t *rx_queue_id,
+			    uint8_t *lwm);
 
 /* Vectorized version of mlx5_rx.c */
 int mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq_data);
-- 
2.27.0


Thread overview: 131+ messages
2022-04-01  3:22 [RFC 0/6] net/mlx5: introduce limit watermark and host shaper Spike Du
2022-04-01  3:22 ` [RFC 1/6] net/mlx5: add LWM support for Rxq Spike Du
2022-05-06  3:56   ` [RFC v1 0/7] net/mlx5: introduce limit watermark and host shaper Spike Du
2022-05-06  3:56     ` [RFC v1 1/7] net/mlx5: add LWM support for Rxq Spike Du
2022-05-06  3:56     ` [RFC v1 2/7] common/mlx5: share interrupt management Spike Du
2022-05-06  3:56     ` [RFC v1 3/7] ethdev: introduce Rx queue based limit watermark Spike Du
2022-05-19  9:37       ` Andrew Rybchenko
2022-05-06  3:56     ` [RFC v1 4/7] net/mlx5: add LWM event handling support Spike Du
2022-05-06  3:56     ` [RFC v1 5/7] net/mlx5: support Rx queue based limit watermark Spike Du
2022-05-06  3:56     ` [RFC v1 6/7] net/mlx5: add private API to config host port shaper Spike Du
2022-05-06  3:56     ` [RFC v1 7/7] app/testpmd: add LWM and Host Shaper command Spike Du
2022-05-22  5:58     ` [RFC v2 0/7] introduce per-queue limit watermark and host shaper Spike Du
2022-05-22  5:58       ` [RFC v2 1/7] net/mlx5: add LWM support for Rxq Spike Du
2022-05-22  5:58       ` [RFC v2 2/7] common/mlx5: share interrupt management Spike Du
2022-05-22  5:58       ` [RFC v2 3/7] ethdev: introduce Rx queue based limit watermark Spike Du
2022-05-22 15:23         ` Stephen Hemminger
2022-05-23  3:01           ` Spike Du
2022-05-23 21:45             ` Thomas Monjalon
2022-05-24  2:50               ` Spike Du
2022-05-24  8:18                 ` Thomas Monjalon
2022-05-25 12:59                   ` Andrew Rybchenko
2022-05-25 13:58                     ` Thomas Monjalon
2022-05-25 14:23                       ` Andrew Rybchenko
2022-05-23 22:54             ` Stephen Hemminger
2022-05-24  3:46               ` Spike Du
2022-05-22 15:24         ` Stephen Hemminger
2022-05-23  2:18           ` Spike Du
2022-05-23  6:07         ` Morten Brørup
2022-05-23 10:58           ` Thomas Monjalon
2022-05-23 14:10             ` Spike Du
2022-05-23 14:39               ` Thomas Monjalon
2022-05-24  6:35                 ` Andrew Rybchenko
2022-05-24  9:40                   ` Morten Brørup
2022-05-22  5:58       ` [RFC v2 4/7] net/mlx5: add LWM event handling support Spike Du
2022-05-22  5:58       ` Spike Du [this message]
2022-05-22  5:58       ` [RFC v2 6/7] net/mlx5: add private API to config host port shaper Spike Du
2022-05-22  5:59       ` [RFC v2 7/7] app/testpmd: add LWM and Host Shaper command Spike Du
2022-05-24 15:20       ` [PATCH v3 0/7] introduce per-queue limit watermark and host shaper Spike Du
2022-05-24 15:20         ` [PATCH v3 1/7] net/mlx5: add LWM support for Rxq Spike Du
2022-05-24 15:20         ` [PATCH v3 2/7] common/mlx5: share interrupt management Spike Du
2022-05-24 15:20         ` [PATCH v3 3/7] ethdev: introduce Rx queue based limit watermark Spike Du
2022-05-24 15:20         ` [PATCH v3 4/7] net/mlx5: add LWM event handling support Spike Du
2022-05-24 15:20         ` [PATCH v3 5/7] net/mlx5: support Rx queue based limit watermark Spike Du
2022-05-24 15:20         ` [PATCH v3 6/7] net/mlx5: add private API to config host port shaper Spike Du
2022-05-24 15:20         ` [PATCH v3 7/7] app/testpmd: add LWM and Host Shaper command Spike Du
2022-05-24 15:59         ` [PATCH v3 0/7] introduce per-queue limit watermark and host shaper Thomas Monjalon
2022-05-24 19:00           ` Morten Brørup
2022-05-24 19:22             ` Thomas Monjalon
2022-05-25 14:11               ` Andrew Rybchenko
2022-05-25 13:14             ` Spike Du
2022-05-25 13:40               ` Morten Brørup
2022-05-25 13:59                 ` Spike Du
2022-05-25 14:16                   ` Morten Brørup
2022-05-25 14:30                     ` Andrew Rybchenko
2022-06-03 12:48         ` [PATCH v4 0/7] introduce per-queue fill threshold " Spike Du
2022-06-03 12:48           ` [PATCH v4 1/7] net/mlx5: add LWM support for Rxq Spike Du
2022-06-03 12:48           ` [PATCH v4 2/7] common/mlx5: share interrupt management Spike Du
2022-06-03 14:30             ` Ray Kinsella
2022-06-03 12:48           ` [PATCH v4 3/7] ethdev: introduce Rx queue based fill threshold Spike Du
2022-06-03 14:30             ` Ray Kinsella
2022-06-04 12:46             ` Andrew Rybchenko
2022-06-06 13:16               ` Spike Du
2022-06-06 17:15                 ` Andrew Rybchenko
2022-06-06 21:30                   ` Thomas Monjalon
2022-06-07  8:02                     ` Andrew Rybchenko
2022-06-07  6:00                   ` Spike Du
2022-06-06 15:49             ` Stephen Hemminger
2022-06-03 12:48           ` [PATCH v4 4/7] net/mlx5: add LWM event handling support Spike Du
2022-06-03 12:48           ` [PATCH v4 5/7] net/mlx5: support Rx queue based fill threshold Spike Du
2022-06-03 12:48           ` [PATCH v4 6/7] net/mlx5: add private API to config host port shaper Spike Du
2022-06-03 14:55             ` Ray Kinsella
2022-06-03 12:48           ` [PATCH v4 7/7] app/testpmd: add Host Shaper command Spike Du
2022-06-07 12:59           ` [PATCH v5 0/7] introduce per-queue available descriptor threshold and host shaper Spike Du
2022-06-07 12:59             ` [PATCH v5 1/7] net/mlx5: add LWM support for Rxq Spike Du
2022-06-08 20:10               ` Matan Azrad
2022-06-07 12:59             ` [PATCH v5 2/7] common/mlx5: share interrupt management Spike Du
2022-06-07 12:59             ` [PATCH v5 3/7] ethdev: introduce Rx queue based available descriptor threshold Spike Du
2022-06-07 12:59             ` [PATCH v5 4/7] net/mlx5: add LWM event handling support Spike Du
2022-06-07 12:59             ` [PATCH v5 5/7] net/mlx5: support Rx queue based available descriptor threshold Spike Du
2022-06-07 12:59             ` [PATCH v5 6/7] net/mlx5: add private API to config host port shaper Spike Du
2022-06-07 12:59             ` [PATCH v5 7/7] app/testpmd: add Host Shaper command Spike Du
2022-06-09  7:55               ` Andrew Rybchenko
2022-06-10  2:22                 ` Spike Du
2022-06-13  2:50               ` [PATCH v6] " Spike Du
2022-06-13  2:50                 ` Spike Du
2022-06-14  9:43                   ` Singh, Aman Deep
2022-06-14  9:54                     ` Spike Du
2022-06-14 12:01                   ` [PATCH v7] " Spike Du
2022-06-14 12:01                     ` Spike Du
2022-06-15  7:51                       ` Matan Azrad
2022-06-15 11:08                       ` Thomas Monjalon
2022-06-15 12:58                       ` [PATCH v8 0/6] introduce per-queue available descriptor threshold and host shaper Spike Du
2022-06-15 12:58                         ` [PATCH v8 1/6] net/mlx5: add LWM support for Rxq Spike Du
2022-06-15 14:43                           ` [PATCH v9 0/6] introduce per-queue available descriptor threshold and host shaper Spike Du
2022-06-15 14:43                             ` [PATCH v9 1/6] net/mlx5: add LWM support for Rxq Spike Du
2022-06-16  8:41                               ` [PATCH v10 0/6] introduce per-queue available descriptor threshold and host shaper Spike Du
2022-06-16  8:41                                 ` [PATCH v10 1/6] net/mlx5: add LWM support for Rxq Spike Du
2022-06-16  8:41                                 ` [PATCH v10 2/6] common/mlx5: share interrupt management Spike Du
2022-06-23 16:05                                   ` Ray Kinsella
2022-06-16  8:41                                 ` [PATCH v10 3/6] net/mlx5: add LWM event handling support Spike Du
2022-06-16  8:41                                 ` [PATCH v10 4/6] net/mlx5: support Rx queue based available descriptor threshold Spike Du
2022-06-16  8:41                                 ` [PATCH v10 5/6] net/mlx5: add private API to config host port shaper Spike Du
2022-06-16  8:41                                 ` [PATCH v10 6/6] app/testpmd: add Host Shaper command Spike Du
2022-06-19  8:14                                 ` [PATCH v10 0/6] introduce per-queue available descriptor threshold and host shaper Raslan Darawsheh
2022-06-15 14:43                             ` [PATCH v9 2/6] common/mlx5: share interrupt management Spike Du
2022-06-15 14:43                             ` [PATCH v9 3/6] net/mlx5: add LWM event handling support Spike Du
2022-06-15 14:43                             ` [PATCH v9 4/6] net/mlx5: support Rx queue based available descriptor threshold Spike Du
2022-06-15 14:43                             ` [PATCH v9 5/6] net/mlx5: add private API to config host port shaper Spike Du
2022-06-15 14:43                             ` [PATCH v9 6/6] app/testpmd: add Host Shaper command Spike Du
2022-06-15 12:58                         ` [PATCH v8 2/6] common/mlx5: share interrupt management Spike Du
2022-06-15 12:58                         ` [PATCH v8 3/6] net/mlx5: add LWM event handling support Spike Du
2022-06-15 12:58                         ` [PATCH v8 4/6] net/mlx5: support Rx queue based available descriptor threshold Spike Du
2022-06-15 12:58                         ` [PATCH v8 5/6] net/mlx5: add private API to config host port shaper Spike Du
2022-06-15 12:58                         ` [PATCH v8 6/6] app/testpmd: add Host Shaper command Spike Du
2022-06-08  9:43             ` [PATCH v5 0/7] introduce per-queue available descriptor threshold and host shaper Andrew Rybchenko
2022-06-08 16:35             ` [PATCH v6] ethdev: introduce available Rx descriptors threshold Andrew Rybchenko
2022-06-08 17:22               ` Thomas Monjalon
2022-06-08 17:46                 ` Thomas Monjalon
2022-06-09  0:17                   ` fengchengwen
2022-06-09  7:05                     ` Thomas Monjalon
2022-06-10  0:01                       ` fengchengwen
2022-04-01  3:22 ` [RFC 2/6] common/mlx5: share interrupt management Spike Du
2022-04-01  3:22 ` [RFC 3/6] net/mlx5: add LWM event handling support Spike Du
2022-04-01  3:22 ` [RFC 4/6] net/mlx5: add private API to configure Rxq LWM Spike Du
2022-04-01  3:22 ` [RFC 5/6] net/mlx5: add private API to config host port shaper Spike Du
2022-04-01  3:22 ` [RFC 6/6] app/testpmd: add LWM and Host Shaper command Spike Du
2022-04-05  8:58 ` [RFC 0/6] net/mlx5: introduce limit watermark and host shaper Jerin Jacob
2022-04-26  2:42   ` Spike Du
2022-05-01 12:50     ` Jerin Jacob
2022-05-02  3:58       ` Spike Du
2022-04-29  5:48   ` Spike Du
