From: Xueming Li <xuemingl@nvidia.com>
To: <dev@dpdk.org>
Cc: <xuemingl@nvidia.com>, Lior Margalit <lmargalit@nvidia.com>,
	"Slava Ovsiienko" <viacheslavo@nvidia.com>,
	Matan Azrad <matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 13/14] net/mlx5: support shared Rx queue
Date: Thu, 4 Nov 2021 20:33:19 +0800
Message-ID: <20211104123320.1638915-14-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>

This patch introduces shared RxQ. All shared Rx queues with the same
group and queue ID share the same rxq_ctrl. Both rxq_ctrl and rxq_data
are shared, so queues from different member ports share the same WQ and
CQ, essentially one Rx WQ, and mbufs are filled into this single WQ.
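
For illustration only (not part of this patch), a minimal
application-side sketch of joining two member ports into one share
group; the port IDs, group ID, queue ID and descriptor count below are
placeholders, and it assumes both ports report
RTE_ETH_DEV_CAPA_RXQ_SHARE:

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
setup_shared_rxq(uint16_t port_a, uint16_t port_b, uint16_t qid,
		 uint16_t nb_desc, struct rte_mempool *mp)
{
	uint16_t ports[2] = { port_a, port_b };
	unsigned int i;

	for (i = 0; i < 2; i++) {
		struct rte_eth_dev_info info;
		struct rte_eth_rxconf rxconf;
		int ret;

		ret = rte_eth_dev_info_get(ports[i], &info);
		if (ret != 0)
			return ret;
		if (!(info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE))
			return -ENOTSUP;
		rxconf = info.default_rxconf;
		rxconf.share_group = 1; /* Non-zero group enables sharing. */
		rxconf.share_qid = qid; /* Same group and ID share one WQ/CQ. */
		ret = rte_eth_rx_queue_setup(ports[i], qid, nb_desc,
					     SOCKET_ID_ANY, &rxconf, mp);
		if (ret != 0)
			return ret;
	}
	return 0;
}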

The shared rxq_data is set as the RxQ object into the device Rx queues
of all member ports and is used for receiving packets. Polling the
queue of any member port may return packets of any member; mbuf->port
identifies the source port.
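
Again for illustration only, a hypothetical polling sketch: any member
port of the group can be polled, and mbuf->port tells which member
actually received each packet (BURST_SIZE and the counters array are
placeholders):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
poll_shared_rxq(uint16_t member_port, uint16_t qid,
		uint64_t pkts_per_port[RTE_MAX_ETHPORTS])
{
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx, i;

	/* The burst may contain packets of every port in the share group. */
	nb_rx = rte_eth_rx_burst(member_port, qid, bufs, BURST_SIZE);
	for (i = 0; i < nb_rx; i++) {
		/* mbuf->port identifies the actual source member port. */
		pkts_per_port[bufs[i]->port]++;
		rte_pktmbuf_free(bufs[i]);
	}
}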

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Slava Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini   |   1 +
 doc/guides/nics/mlx5.rst            |   6 +
 drivers/net/mlx5/linux/mlx5_os.c    |   2 +
 drivers/net/mlx5/linux/mlx5_verbs.c |   8 +-
 drivers/net/mlx5/mlx5.h             |   2 +
 drivers/net/mlx5/mlx5_devx.c        |  46 +++--
 drivers/net/mlx5/mlx5_ethdev.c      |   5 +
 drivers/net/mlx5/mlx5_rx.h          |   3 +
 drivers/net/mlx5/mlx5_rxq.c         | 273 ++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_trigger.c     |  61 ++++---
 10 files changed, 329 insertions(+), 78 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 403f58cd7e2..7cbd11bb160 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -11,6 +11,7 @@ Removal event        = Y
 Rx interrupt         = Y
 Fast mbuf free       = Y
 Queue start/stop     = Y
+Shared Rx queue      = Y
 Burst mode info      = Y
 Power mgmt address monitor = Y
 MTU update           = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bb92520dff4..824971d89ae 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -113,6 +113,7 @@ Features
 - Connection tracking.
 - Sub-Function representors.
 - Sub-Function.
+- Shared Rx queue.
 
 
 Limitations
@@ -465,6 +466,11 @@ Limitations
   - In order to achieve best insertion rate, application should manage the flows per lcore.
   - Better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache.
 
+- Shared Rx queue:
+
+  - The received packets and bytes counters of all devices in the same share group are the same.
+  - The received packets and bytes counters of all queues with the same group and queue ID are the same.
+
 - HW hashed bonding
 
   - TXQ affinity subjects to HW hash once enabled.
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index f51da8c3a38..e0304b685e5 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -420,6 +420,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 			mlx5_glue->dr_create_flow_action_default_miss();
 	if (!sh->default_miss_action)
 		DRV_LOG(WARNING, "Default miss action is not supported.");
+	LIST_INIT(&sh->shared_rxqs);
 	return 0;
 error:
 	/* Rollback the created objects. */
@@ -494,6 +495,7 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv)
 	MLX5_ASSERT(sh && sh->refcnt);
 	if (sh->refcnt > 1)
 		return;
+	MLX5_ASSERT(LIST_EMPTY(&sh->shared_rxqs));
 #ifdef HAVE_MLX5DV_DR
 	if (sh->rx_domain) {
 		mlx5_glue->dr_destroy_domain(sh->rx_domain);
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index f78916c868f..9d299542614 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -424,14 +424,16 @@ mlx5_rxq_ibv_obj_release(struct mlx5_rxq_priv *rxq)
 {
 	struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj;
 
-	MLX5_ASSERT(rxq_obj);
-	MLX5_ASSERT(rxq_obj->wq);
-	MLX5_ASSERT(rxq_obj->ibv_cq);
+	if (rxq_obj == NULL || rxq_obj->wq == NULL)
+		return;
 	claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+	rxq_obj->wq = NULL;
+	MLX5_ASSERT(rxq_obj->ibv_cq);
 	claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
 	if (rxq_obj->ibv_channel)
 		claim_zero(mlx5_glue->destroy_comp_channel
 							(rxq_obj->ibv_channel));
+	rxq->ctrl->started = false;
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a037a33debf..51f45788381 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1200,6 +1200,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_ecpri_parser_profile ecpri_parser;
 	/* Flex parser profiles information. */
 	void *devx_rx_uar; /* DevX UAR for Rx. */
+	LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */
 	struct mlx5_aso_age_mng *aso_age_mng;
 	/* Management data for aging mechanism using ASO Flow Hit. */
 	struct mlx5_geneve_tlv_option_resource *geneve_tlv_option_resource;
@@ -1267,6 +1268,7 @@ struct mlx5_rxq_obj {
 		};
 		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
+			struct mlx5_devx_rmp devx_rmp; /* RMP for shared RQ. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 668d47025e8..d3d189ab7f2 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -88,6 +88,8 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 	default:
 		break;
 	}
+	if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+		return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
 	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
 
@@ -156,18 +158,21 @@ mlx5_txq_devx_modify(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
 static void
 mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
-	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
+	struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj;
 
-	MLX5_ASSERT(rxq != NULL);
-	MLX5_ASSERT(rxq_ctrl != NULL);
+	if (rxq_obj == NULL)
+		return;
 	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
-		MLX5_ASSERT(rxq_obj->rq);
+		if (rxq_obj->rq == NULL)
+			return;
 		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
+		if (rxq->devx_rq.rq == NULL)
+			return;
 		mlx5_devx_rq_destroy(&rxq->devx_rq);
-		memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq));
+		if (rxq->devx_rq.rmp != NULL && rxq->devx_rq.rmp->ref_cnt > 0)
+			return;
 		mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
 		memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
 		if (rxq_obj->devx_channel) {
@@ -176,6 +181,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 			rxq_obj->devx_channel = NULL;
 		}
 	}
+	rxq->ctrl->started = false;
 }
 
 /**
@@ -271,6 +277,8 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq)
 						MLX5_WQ_END_PAD_MODE_NONE;
 	rq_attr.wq_attr.pd = cdev->pdn;
 	rq_attr.counter_set_id = priv->counter_set_id;
+	if (rxq_data->shared) /* Create RMP based RQ. */
+		rxq->devx_rq.rmp = &rxq_ctrl->obj->devx_rmp;
 	/* Create RQ using DevX API. */
 	return mlx5_devx_rq_create(cdev->ctx, &rxq->devx_rq, wqe_size,
 				   log_desc_n, &rq_attr, rxq_ctrl->socket);
@@ -300,6 +308,8 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
 	uint16_t event_nums[1] = { 0 };
 	int ret = 0;
 
+	if (rxq_ctrl->started)
+		return 0;
 	if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
 	    !rxq_data->lro) {
 		cq_attr.cqe_comp_en = 1u;
@@ -365,6 +375,7 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
 	rxq_data->cq_uar = mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar);
 	rxq_data->cqe_n = log_cqe_n;
 	rxq_data->cqn = cq_obj->cq->id;
+	rxq_data->cq_ci = 0;
 	if (rxq_ctrl->obj->devx_channel) {
 		ret = mlx5_os_devx_subscribe_devx_event
 					      (rxq_ctrl->obj->devx_channel,
@@ -463,7 +474,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
 		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
-	if (rxq_ctrl->irq) {
+	if (rxq_ctrl->irq && !rxq_ctrl->started) {
 		int devx_ev_flag =
 			  MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA;
 
@@ -496,11 +507,19 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 	ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
-	rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec;
-	mlx5_rxq_initialize(rxq_data);
+	if (!rxq_data->shared) {
+		rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
+		rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec;
+	} else if (!rxq_ctrl->started) {
+		rxq_data->wqes = (void *)(uintptr_t)tmpl->devx_rmp.wq.umem_buf;
+		rxq_data->rq_db =
+				(uint32_t *)(uintptr_t)tmpl->devx_rmp.wq.db_rec;
+	}
+	if (!rxq_ctrl->started) {
+		mlx5_rxq_initialize(rxq_data);
+		rxq_ctrl->wqn = rxq->devx_rq.rq->id;
+	}
 	priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq_ctrl->wqn = rxq->devx_rq.rq->id;
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
@@ -558,7 +577,10 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
 		MLX5_ASSERT(rxq != NULL);
-		rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
+		else
+			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index bb38d5d2ade..dc647d5580c 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -26,6 +26,7 @@
 #include "mlx5_rx.h"
 #include "mlx5_tx.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_devx.h"
 
 /**
  * Get the interface index from device name.
@@ -336,9 +337,13 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK;
 	mlx5_set_default_params(dev, info);
 	mlx5_set_txlimit_params(dev, info);
+	if (priv->config.hca_attr.mem_rq_rmp &&
+	    priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new)
+		info->dev_capa |= RTE_ETH_DEV_CAPA_RXQ_SHARE;
 	info->switch_info.name = dev->data->name;
 	info->switch_info.domain_id = priv->domain_id;
 	info->switch_info.port_id = priv->representor_id;
+	info->switch_info.rx_domain = 0; /* No sub Rx domains. */
 	if (priv->representor) {
 		uint16_t port_id;
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 413e36f6d8d..eda6eca8dea 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -96,6 +96,7 @@ struct mlx5_rxq_data {
 	unsigned int lro:1; /* Enable LRO. */
 	unsigned int dynf_meta:1; /* Dynamic metadata is configured. */
 	unsigned int mcqe_format:3; /* CQE compression format. */
+	unsigned int shared:1; /* Shared RXQ. */
 	volatile uint32_t *rq_db;
 	volatile uint32_t *cq_db;
 	uint16_t port_id;
@@ -158,8 +159,10 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
+	LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
 	uint32_t share_group; /* Group ID of shared RXQ. */
 	uint16_t share_qid; /* Shared RxQ ID in group. */
+	unsigned int started:1; /* Whether (shared) RXQ has been started. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
 	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f3fc618ed2c..8feb3e2c0fb 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -29,6 +29,7 @@
 #include "mlx5_rx.h"
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_devx.h"
 
 
 /* Default RSS hash key also used for ConnectX-3. */
@@ -633,14 +634,19 @@ mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t idx)
  *   RX queue index.
  * @param desc
  *   Number of descriptors to configure in queue.
+ * @param[out] rxq_ctrl
+ *   Address of pointer to shared Rx queue control.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
+mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc,
+			struct mlx5_rxq_ctrl **rxq_ctrl)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_priv *rxq;
+	bool empty;
 
 	if (!rte_is_power_of_2(*desc)) {
 		*desc = 1 << log2above(*desc);
@@ -657,16 +663,143 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
 		rte_errno = EOVERFLOW;
 		return -rte_errno;
 	}
-	if (!mlx5_rxq_releasable(dev, idx)) {
-		DRV_LOG(ERR, "port %u unable to release queue index %u",
-			dev->data->port_id, idx);
-		rte_errno = EBUSY;
-		return -rte_errno;
+	if (rxq_ctrl == NULL || *rxq_ctrl == NULL)
+		return 0;
+	if (!(*rxq_ctrl)->rxq.shared) {
+		if (!mlx5_rxq_releasable(dev, idx)) {
+			DRV_LOG(ERR, "port %u unable to release queue index %u",
+				dev->data->port_id, idx);
+			rte_errno = EBUSY;
+			return -rte_errno;
+		}
+		mlx5_rxq_release(dev, idx);
+	} else {
+		if ((*rxq_ctrl)->obj != NULL)
+			/* Some port using shared Rx queue has been started. */
+			return 0;
+		/* Release all owner RxQ to reconfigure Shared RxQ. */
+		do {
+			rxq = LIST_FIRST(&(*rxq_ctrl)->owners);
+			LIST_REMOVE(rxq, owner_entry);
+			empty = LIST_EMPTY(&(*rxq_ctrl)->owners);
+			mlx5_rxq_release(ETH_DEV(rxq->priv), rxq->idx);
+		} while (!empty);
+		*rxq_ctrl = NULL;
 	}
-	mlx5_rxq_release(dev, idx);
 	return 0;
 }
 
+/**
+ * Get the shared Rx queue object that matches group and queue index.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param group
+ *   Shared RXQ group.
+ * @param share_qid
+ *   Shared RX queue index.
+ *
+ * @return
+ *   Shared RXQ object that matches, or NULL if not found.
+ */
+static struct mlx5_rxq_ctrl *
+mlx5_shared_rxq_get(struct rte_eth_dev *dev, uint32_t group, uint16_t share_qid)
+{
+	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) {
+		if (rxq_ctrl->share_group == group &&
+		    rxq_ctrl->share_qid == share_qid)
+			return rxq_ctrl;
+	}
+	return NULL;
+}
+
+/**
+ * Check whether requested Rx queue configuration matches shared RXQ.
+ *
+ * @param rxq_ctrl
+ *   Pointer to shared RXQ.
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param idx
+ *   Queue index.
+ * @param desc
+ *   Number of descriptors to configure in queue.
+ * @param socket
+ *   NUMA socket on which memory must be allocated.
+ * @param[in] conf
+ *   Thresholds parameters.
+ * @param mp
+ *   Memory pool for buffer allocations.
+ *
+ * @return
+ *   true if the requested configuration matches the shared RXQ, false otherwise.
+ */
+static bool
+mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev,
+		      uint16_t idx, uint16_t desc, unsigned int socket,
+		      const struct rte_eth_rxconf *conf,
+		      struct rte_mempool *mp)
+{
+	struct mlx5_priv *spriv = LIST_FIRST(&rxq_ctrl->owners)->priv;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	unsigned int i;
+
+	RTE_SET_USED(conf);
+	if (rxq_ctrl->socket != socket) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: socket mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: descriptor number mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->mtu != spriv->mtu) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mtu mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->dev_data->dev_conf.intr_conf.rxq !=
+	    spriv->dev_data->dev_conf.intr_conf.rxq) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: interrupt mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (mp != NULL && rxq_ctrl->rxq.mp != mp) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mempool mismatch",
+			dev->data->port_id, idx);
+		return false;
+	} else if (mp == NULL) {
+		for (i = 0; i < conf->rx_nseg; i++) {
+			if (conf->rx_seg[i].split.mp !=
+			    rxq_ctrl->rxq.rxseg[i].mp ||
+			    conf->rx_seg[i].split.length !=
+			    rxq_ctrl->rxq.rxseg[i].length) {
+				DRV_LOG(ERR, "port %u queue index %u failed to join shared group: segment %u configuration mismatch",
+					dev->data->port_id, idx, i);
+				return false;
+			}
+		}
+	}
+	if (priv->config.hw_padding != spriv->config.hw_padding) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: padding mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->config.cqe_comp != spriv->config.cqe_comp ||
+	    (priv->config.cqe_comp &&
+	     priv->config.cqe_comp_fmt != spriv->config.cqe_comp_fmt)) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: CQE compression mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	return true;
+}
+
 /**
  *
  * @param dev
@@ -692,12 +825,14 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
 	struct rte_eth_rxseg_split *rx_seg =
 				(struct rte_eth_rxseg_split *)conf->rx_seg;
 	struct rte_eth_rxseg_split rx_single = {.mp = mp};
 	uint16_t n_seg = conf->rx_nseg;
 	int res;
+	uint64_t offloads = conf->offloads |
+			    dev->data->dev_conf.rxmode.offloads;
 
 	if (mp) {
 		/*
@@ -709,9 +844,6 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		n_seg = 1;
 	}
 	if (n_seg > 1) {
-		uint64_t offloads = conf->offloads |
-				    dev->data->dev_conf.rxmode.offloads;
-
 		/* The offloads should be checked on rte_eth_dev layer. */
 		MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
@@ -723,9 +855,46 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		}
 		MLX5_ASSERT(n_seg < MLX5_MAX_RXQ_NSEG);
 	}
-	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
+	if (conf->share_group > 0) {
+		if (!priv->config.hca_attr.mem_rq_rmp) {
+			DRV_LOG(ERR, "port %u queue index %u shared Rx queue not supported by fw",
+				     dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (priv->obj_ops.rxq_obj_new != devx_obj_ops.rxq_obj_new) {
+			DRV_LOG(ERR, "port %u queue index %u shared Rx queue needs DevX api",
+				     dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (conf->share_qid >= priv->rxqs_n) {
+			DRV_LOG(ERR, "port %u shared Rx queue index %u > number of Rx queues %u",
+				dev->data->port_id, conf->share_qid,
+				priv->rxqs_n);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (priv->config.mprq.enabled) {
+			DRV_LOG(ERR, "port %u shared Rx queue index %u: not supported when MPRQ enabled",
+				dev->data->port_id, conf->share_qid);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		/* Try to reuse shared RXQ. */
+		rxq_ctrl = mlx5_shared_rxq_get(dev, conf->share_group,
+					       conf->share_qid);
+		if (rxq_ctrl != NULL &&
+		    !mlx5_shared_rxq_match(rxq_ctrl, dev, idx, desc, socket,
+					   conf, mp)) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+	}
+	res = mlx5_rx_queue_pre_setup(dev, idx, &desc, &rxq_ctrl);
 	if (res)
 		return res;
+	/* Allocate RXQ. */
 	rxq = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*rxq), 0,
 			  SOCKET_ID_ANY);
 	if (!rxq) {
@@ -737,15 +906,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->idx = idx;
 	(*priv->rxq_privs)[idx] = rxq;
-	rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg);
-	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
-			dev->data->port_id, idx);
-		mlx5_free(rxq);
-		(*priv->rxq_privs)[idx] = NULL;
-		rte_errno = ENOMEM;
-		return -rte_errno;
+	if (rxq_ctrl != NULL) {
+		/* Join owner list. */
+		LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry);
+		rxq->ctrl = rxq_ctrl;
+	} else {
+		rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg,
+					n_seg);
+		if (rxq_ctrl == NULL) {
+			DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
+				dev->data->port_id, idx);
+			mlx5_free(rxq);
+			(*priv->rxq_privs)[idx] = NULL;
+			rte_errno = ENOMEM;
+			return -rte_errno;
+		}
 	}
+	mlx5_rxq_ref(dev, idx);
 	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
 		dev->data->port_id, idx);
 	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
@@ -776,7 +953,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
-	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
+	res = mlx5_rx_queue_pre_setup(dev, idx, &desc, NULL);
 	if (res)
 		return res;
 	if (hairpin_conf->peer_count != 1) {
@@ -1095,6 +1272,9 @@ mlx5_rxq_obj_verify(struct rte_eth_dev *dev)
 	struct mlx5_rxq_obj *rxq_obj;
 
 	LIST_FOREACH(rxq_obj, &priv->rxqsobj, next) {
+		if (rxq_obj->rxq_ctrl->rxq.shared &&
+		    !LIST_EMPTY(&rxq_obj->rxq_ctrl->owners))
+			continue;
 		DRV_LOG(DEBUG, "port %u Rx queue %u still referenced",
 			dev->data->port_id, rxq_obj->rxq_ctrl->rxq.idx);
 		++ret;
@@ -1413,6 +1593,12 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 		return NULL;
 	}
 	LIST_INIT(&tmpl->owners);
+	if (conf->share_group > 0) {
+		tmpl->rxq.shared = 1;
+		tmpl->share_group = conf->share_group;
+		tmpl->share_qid = conf->share_qid;
+		LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry);
+	}
 	rxq->ctrl = tmpl;
 	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG);
@@ -1661,7 +1847,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq;
 #endif
 	tmpl->rxq.idx = idx;
-	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1836,31 +2021,41 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	uint32_t refcnt;
 
 	if (priv->rxq_privs == NULL)
 		return 0;
 	rxq = mlx5_rxq_get(dev, idx);
-	if (rxq == NULL)
+	if (rxq == NULL || rxq->refcnt == 0)
 		return 0;
-	if (mlx5_rxq_deref(dev, idx) > 1)
-		return 1;
 	rxq_ctrl = rxq->ctrl;
-	if (rxq_ctrl->obj != NULL) {
+	refcnt = mlx5_rxq_deref(dev, idx);
+	if (refcnt > 1) {
+		return 1;
+	} else if (refcnt == 1) { /* RxQ stopped. */
 		priv->obj_ops.rxq_obj_release(rxq);
-		LIST_REMOVE(rxq_ctrl->obj, next);
-		mlx5_free(rxq_ctrl->obj);
-		rxq_ctrl->obj = NULL;
-	}
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-		rxq_free_elts(rxq_ctrl);
-		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
-	}
-	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
-			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+		if (!rxq_ctrl->started && rxq_ctrl->obj != NULL) {
+			LIST_REMOVE(rxq_ctrl->obj, next);
+			mlx5_free(rxq_ctrl->obj);
+			rxq_ctrl->obj = NULL;
+		}
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+			if (!rxq_ctrl->started)
+				rxq_free_elts(rxq_ctrl);
+			dev->data->rx_queue_state[idx] =
+					RTE_ETH_QUEUE_STATE_STOPPED;
+		}
+	} else { /* Refcnt zero, closing device. */
 		LIST_REMOVE(rxq, owner_entry);
-		LIST_REMOVE(rxq_ctrl, next);
-		mlx5_free(rxq_ctrl);
+		if (LIST_EMPTY(&rxq_ctrl->owners)) {
+			if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+				mlx5_mr_btree_free
+					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+			if (rxq_ctrl->rxq.shared)
+				LIST_REMOVE(rxq_ctrl, share_entry);
+			LIST_REMOVE(rxq_ctrl, next);
+			mlx5_free(rxq_ctrl);
+		}
 		dev->data->rx_queues[idx] = NULL;
 		mlx5_free(rxq);
 		(*priv->rxq_privs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 72475e4b5b5..a3e62e95335 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -176,6 +176,39 @@ mlx5_rxq_stop(struct rte_eth_dev *dev)
 		mlx5_rxq_release(dev, i);
 }
 
+static int
+mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
+		      unsigned int idx)
+{
+	int ret = 0;
+
+	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+		/*
+		 * Pre-register the mempools. Regardless of whether
+		 * the implicit registration is enabled or not,
+		 * Rx mempool destruction is tracked to free MRs.
+		 */
+		if (mlx5_rxq_mempool_register(dev, rxq_ctrl) < 0)
+			return -rte_errno;
+		ret = rxq_alloc_elts(rxq_ctrl);
+		if (ret)
+			return ret;
+	}
+	MLX5_ASSERT(!rxq_ctrl->obj);
+	rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				    sizeof(*rxq_ctrl->obj), 0,
+				    rxq_ctrl->socket);
+	if (!rxq_ctrl->obj) {
+		DRV_LOG(ERR, "Port %u Rx queue %u can't allocate resources.",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.", dev->data->port_id,
+		idx, (void *)&rxq_ctrl->obj);
+	return 0;
+}
+
 /**
  * Start traffic on Rx queues.
  *
@@ -208,28 +241,10 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (rxq == NULL)
 			continue;
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			/*
-			 * Pre-register the mempools. Regardless of whether
-			 * the implicit registration is enabled or not,
-			 * Rx mempool destruction is tracked to free MRs.
-			 */
-			if (mlx5_rxq_mempool_register(dev, rxq_ctrl) < 0)
-				goto error;
-			ret = rxq_alloc_elts(rxq_ctrl);
-			if (ret)
+		if (!rxq_ctrl->started) {
+			if (mlx5_rxq_ctrl_prepare(dev, rxq_ctrl, i) < 0)
 				goto error;
-		}
-		MLX5_ASSERT(!rxq_ctrl->obj);
-		rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-					    sizeof(*rxq_ctrl->obj), 0,
-					    rxq_ctrl->socket);
-		if (!rxq_ctrl->obj) {
-			DRV_LOG(ERR,
-				"Port %u Rx queue %u can't allocate resources.",
-				dev->data->port_id, i);
-			rte_errno = ENOMEM;
-			goto error;
+			LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
 		}
 		ret = priv->obj_ops.rxq_obj_new(rxq);
 		if (ret) {
@@ -237,9 +252,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 			rxq_ctrl->obj = NULL;
 			goto error;
 		}
-		DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.",
-			dev->data->port_id, i, (void *)&rxq_ctrl->obj);
-		LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
+		rxq_ctrl->started = true;
 	}
 	return 0;
 error:
-- 
2.33.0

