DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD
@ 2020-10-08 14:16 Bing Zhao
  2020-10-08 14:16 ` [dpdk-dev] [PATCH 1/4] net/mlx5: remove hairpin queue peer port checking Bing Zhao
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-08 14:16 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

This patch set adds support for hairpin between two ports in the
mlx5 driver.

Depends-on: series-12779 ("introduce support for hairpin between two ports")
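
For reference, below is a minimal application-level sketch of the intended
usage, assuming the ethdev API introduced by the dependent series above
(rte_eth_hairpin_bind() and the manual_bind/tx_explicit attributes of
struct rte_eth_hairpin_conf). The port, queue and descriptor numbers are
illustrative only:

#include <rte_ethdev.h>

/* Sketch only: one hairpin queue pair from tx_port to rx_port. */
static int
two_port_hairpin_setup(uint16_t tx_port, uint16_t rx_port,
		       uint16_t txq, uint16_t rxq, uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.manual_bind = 1,  /* bind explicitly, not at port start */
		.tx_explicit = 1,  /* application inserts the Tx flow rules */
	};
	int ret;

	/* The Tx queue of tx_port is peered with the Rx queue of rx_port. */
	conf.peers[0].port = rx_port;
	conf.peers[0].queue = rxq;
	ret = rte_eth_tx_hairpin_queue_setup(tx_port, txq, nb_desc, &conf);
	if (ret != 0)
		return ret;
	/* The reverse peering on the Rx side. */
	conf.peers[0].port = tx_port;
	conf.peers[0].queue = txq;
	ret = rte_eth_rx_hairpin_queue_setup(rx_port, rxq, nb_desc, &conf);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_start(tx_port);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_start(rx_port);
	if (ret != 0)
		return ret;
	/* Manual binding: connect the Tx queues of tx_port to rx_port. */
	return rte_eth_hairpin_bind(tx_port, rx_port);
}

With tx_explicit set, the egress flow rules on tx_port must then be created
by the application itself.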

Bing Zhao (4):
  net/mlx5: remove hairpin queue peer port checking
  net/mlx5: add support for two ports hairpin mode
  net/mlx5: conditional hairpin auto bind
  doc: update hairpin support for mlx5 driver

 doc/guides/rel_notes/release_20_11.rst |   5 +
 drivers/net/mlx5/linux/mlx5_os.c       |  10 +
 drivers/net/mlx5/mlx5.h                |  19 ++
 drivers/net/mlx5/mlx5_rxq.c            |   4 +-
 drivers/net/mlx5/mlx5_rxtx.h           |   2 +
 drivers/net/mlx5/mlx5_trigger.c        | 502 ++++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_txq.c            |   4 +-
 7 files changed, 536 insertions(+), 10 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH 1/4] net/mlx5: remove hairpin queue peer port checking
  2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
@ 2020-10-08 14:16 ` Bing Zhao
  2020-10-08 14:16 ` [dpdk-dev] [PATCH 2/4] net/mlx5: add support for two ports hairpin mode Bing Zhao
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-08 14:16 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In the current implementation of single-port hairpin mode, the peer
queue must belong to the same port as the current queue. When two-port
hairpin mode is introduced, this check must be removed so that the
hairpin queue setup can succeed when the peer queue is on another port.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 4 +---
 drivers/net/mlx5/mlx5_txq.c | 4 +---
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f1d8373..66abce7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -776,9 +776,7 @@
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	if (hairpin_conf->peer_count != 1 ||
-	    hairpin_conf->peers[0].port != dev->data->port_id ||
-	    hairpin_conf->peers[0].queue >= priv->txqs_n) {
+	if (hairpin_conf->peer_count != 1) {
 		DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u "
 			" invalid hairpind configuration", dev->data->port_id,
 			idx);
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index af84f5f..17a9f5a 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -421,9 +421,7 @@
 	res = mlx5_tx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	if (hairpin_conf->peer_count != 1 ||
-	    hairpin_conf->peers[0].port != dev->data->port_id ||
-	    hairpin_conf->peers[0].queue >= priv->rxqs_n) {
+	if (hairpin_conf->peer_count != 1) {
 		DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u "
 			" invalid hairpind configuration", dev->data->port_id,
 			idx);
-- 
1.8.3.1



* [dpdk-dev] [PATCH 2/4] net/mlx5: add support for two ports hairpin mode
  2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  2020-10-08 14:16 ` [dpdk-dev] [PATCH 1/4] net/mlx5: remove hairpin queue peer port checking Bing Zhao
@ 2020-10-08 14:16 ` Bing Zhao
  2020-10-08 14:16 ` [dpdk-dev] [PATCH 3/4] net/mlx5: conditional hairpin auto bind Bing Zhao
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-08 14:16 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In order to support hairpin between two ports, the mlx5 PMD needs to
implement the corresponding functions and provide them as function
pointers in the device operations structure.

The bind and unbind functions are executed per port pair. All the
hairpin queues between the two ports must have the same attributes
during queue setup. Different configurations among queue pairs of the
same port pair are not supported. It is allowed that two ports have
hairpin in only one direction.

In order to set up the connection between two queues, the peer RX
queue HW information must be fetched via the internal RTE API, and
this information is used to modify the SQ object. Then the RQ object
is modified with the TX queue HW information. The reverse operation
is not supported right now.

When disconnecting a queue pair, the SQ and RQ objects are reset
without any peer HW information. The unbind operation tries to
disconnect all TX queues of the port from the RX queues of the peer
port.

TX explicit mode attribute will be saved and used when creating a
hairpin flow.
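
From the application's point of view, the unbind path described above is
expected to be driven roughly as follows, a sketch assuming the
rte_eth_hairpin_unbind() wrapper from the dependent ethdev series (port
numbers are illustrative):

#include <rte_ethdev.h>

static int
two_port_hairpin_teardown(uint16_t tx_port, uint16_t rx_port)
{
	int ret;

	/* Disconnect all TX queues of tx_port from the RX queues of
	 * rx_port; the PMD resets the SQ and RQ objects of each pair. */
	ret = rte_eth_hairpin_unbind(tx_port, rx_port);
	if (ret != 0)
		return ret;
	rte_eth_dev_stop(tx_port);
	rte_eth_dev_stop(rx_port);
	return 0;
}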

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  10 +
 drivers/net/mlx5/mlx5.h          |  19 ++
 drivers/net/mlx5/mlx5_rxtx.h     |   2 +
 drivers/net/mlx5/mlx5_trigger.c  | 470 ++++++++++++++++++++++++++++++++++++++-
 4 files changed, 499 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 487714f..ee8e1bb 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2530,6 +2530,11 @@
 	.get_module_eeprom = mlx5_get_module_eeprom,
 	.hairpin_cap_get = mlx5_hairpin_cap_get,
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
+	.hairpin_bind = mlx5_hairpin_bind,
+	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
+	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
+	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
 };
 
 /* Available operations from secondary process. */
@@ -2608,4 +2613,9 @@
 	.get_module_eeprom = mlx5_get_module_eeprom,
 	.hairpin_cap_get = mlx5_hairpin_cap_get,
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
+	.hairpin_bind = mlx5_hairpin_bind,
+	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
+	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
+	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 87d3c15..80d0859 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -878,6 +878,14 @@ struct mlx5_priv {
 #define PORT_ID(priv) ((priv)->dev_data->port_id)
 #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)])
 
+struct rte_hairpin_peer_info {
+	uint32_t qp_id;
+	uint32_t vhca_id;
+	uint16_t peer_q;
+	uint16_t tx_explicit;
+	uint16_t manual_bind;
+};
+
 /* mlx5.c */
 
 int mlx5_getenv_int(const char *);
@@ -1028,6 +1036,17 @@ void mlx5_vlan_vmwa_acquire(struct rte_eth_dev *dev,
 int mlx5_traffic_enable(struct rte_eth_dev *dev);
 void mlx5_traffic_disable(struct rte_eth_dev *dev);
 int mlx5_traffic_restart(struct rte_eth_dev *dev);
+int mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
+				   struct rte_hairpin_peer_info *current_info,
+				   struct rte_hairpin_peer_info *peer_info,
+				   uint32_t direction);
+int mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
+				 struct rte_hairpin_peer_info *peer_info,
+				 uint32_t direction);
+int mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
+				   uint32_t direction);
+int mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port);
+int mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port);
 
 /* mlx5_flow.c */
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 674296e..ac612ca 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -184,6 +184,7 @@ struct mlx5_rxq_ctrl {
 	void *wq_umem; /* WQ buffer registration info. */
 	void *cq_umem; /* CQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status;
 };
 
 /* TX queue send local data. */
@@ -279,6 +280,7 @@ struct mlx5_txq_ctrl {
 	off_t uar_mmap_offset; /* UAR mmap offset for non-primary process. */
 	void *bf_reg; /* BlueFlame register from Verbs. */
 	uint16_t dump_file_n; /* Number of dump files. */
+	uint32_t hairpin_status;
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	struct mlx5_txq_data txq; /* Data path structure. */
 	/* Must be the last field in the structure, contains elts[]. */
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index e72e5fb..f326b57 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -203,7 +203,7 @@
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_hairpin_bind(struct rte_eth_dev *dev)
+mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
@@ -281,6 +281,472 @@
 	return -rte_errno;
 }
 
+int
+mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
+			       struct rte_hairpin_peer_info *current_info,
+			       struct rte_hairpin_peer_info *peer_info,
+			       uint32_t direction)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	(void)current_info;
+
+	/*
+	 * Peer port used as egress. In the current design, hairpin TX queue
+	 * will be bound to the peer RX queue. Indeed, only the information of
+	 * peer RX queue needs to be fetched.
+	 */
+	if (direction) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+
+		txq_ctrl = mlx5_txq_get(dev, peer_queue);
+		if (!txq_ctrl) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin txq",
+				dev->data->port_id, peer_queue);
+			mlx5_txq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		if (!txq_ctrl->obj || !txq_ctrl->obj->sq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no txq object found: %d",
+				dev->data->port_id, peer_queue);
+			mlx5_txq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		peer_info->qp_id = txq_ctrl->obj->sq->id;
+		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
+		/* 1-to-1 mapping, only the first is used. */
+		peer_info->peer_q = txq_ctrl->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
+		mlx5_txq_release(dev, peer_queue);
+	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+
+		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
+		if (!rxq_ctrl) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin rxq",
+				dev->data->port_id, peer_queue);
+			mlx5_rxq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no rxq object found: %d",
+				dev->data->port_id, peer_queue);
+			mlx5_rxq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		peer_info->qp_id = rxq_ctrl->obj->rq->id;
+		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
+		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
+		mlx5_rxq_release(dev, peer_queue);
+	}
+	return 0;
+}
+
+int
+mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
+			     struct rte_hairpin_peer_info *peer_info,
+			     uint32_t direction)
+{
+	int ret = 0;
+
+	/*
+	 * Consistency checking of the peer queue: opposite direction is used
+	 * to get the peer queue info with ethdev index, no need to check.
+	 */
+	if (peer_info->peer_q != cur_queue) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u queue %d and peer queue %d mismatch",
+			dev->data->port_id, cur_queue, peer_info->peer_q);
+		return -rte_errno;
+	}
+	if (!direction) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+		struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
+
+		txq_ctrl = mlx5_txq_get(dev, cur_queue);
+		if (!txq_ctrl) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin txq",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!txq_ctrl->obj || !txq_ctrl->obj->sq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no txq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->hairpin_status) {
+			rte_errno = EBUSY;
+			DRV_LOG(ERR, "port %u TX queue %d is already bound",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		/*
+		 * All queues' of one port consistency checking is done in the
+		 * bind() function, and that is optional.
+		 */
+		if (peer_info->tx_explicit !=
+		    txq_ctrl->hairpin_conf.tx_explicit) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u TX queue %d and peer TX rule "
+				"mode mismatch", dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (peer_info->manual_bind !=
+		    txq_ctrl->hairpin_conf.manual_bind) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u TX queue %d and peer binding "
+				"mode mismatch", dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		sq_attr.state = MLX5_SQC_STATE_RDY;
+		sq_attr.sq_state = MLX5_SQC_STATE_RST;
+		sq_attr.hairpin_peer_rq = peer_info->qp_id;
+		sq_attr.hairpin_peer_vhca = peer_info->vhca_id;
+		ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr);
+		if (!ret)
+			txq_ctrl->hairpin_status = 1;
+		mlx5_txq_release(dev, cur_queue);
+	} else {
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+
+		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
+		if (!rxq_ctrl) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin rxq",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no rxq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->hairpin_status) {
+			rte_errno = EBUSY;
+			DRV_LOG(ERR, "port %u RX queue %d is already bound",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (peer_info->tx_explicit !=
+		    rxq_ctrl->hairpin_conf.tx_explicit) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u RX queue %d and peer TX rule "
+				"mode mismatch", dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (peer_info->manual_bind !=
+		    rxq_ctrl->hairpin_conf.manual_bind) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d and peer binding "
+				"mode mismatch", dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		rq_attr.state = MLX5_SQC_STATE_RDY;
+		rq_attr.rq_state = MLX5_SQC_STATE_RST;
+		rq_attr.hairpin_peer_sq = peer_info->qp_id;
+		rq_attr.hairpin_peer_vhca = peer_info->vhca_id;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+		if (!ret)
+			rxq_ctrl->hairpin_status = 1;
+		mlx5_rxq_release(dev, cur_queue);
+	}
+	return ret;
+}
+
+int
+mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
+			       uint32_t direction)
+{
+	int ret = 0;
+
+	if (!direction) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+		struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
+
+		txq_ctrl = mlx5_txq_get(dev, cur_queue);
+		if (!txq_ctrl) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin txq",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!txq_ctrl->obj || !txq_ctrl->obj->sq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no txq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		/* Already unbound, 0 returns. */
+		if (!txq_ctrl->hairpin_status) {
+			mlx5_txq_release(dev, cur_queue);
+			DRV_LOG(DEBUG, "port %u TX queue %d is already unbound",
+				dev->data->port_id, cur_queue);
+			return 0;
+		}
+		sq_attr.state = MLX5_SQC_STATE_RST;
+		sq_attr.sq_state = MLX5_SQC_STATE_RST;
+		ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr);
+		if (!ret)
+			txq_ctrl->hairpin_status = 0;
+		mlx5_txq_release(dev, cur_queue);
+	} else {
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+
+		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
+		if (!rxq_ctrl) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin rxq",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no rxq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->hairpin_status) {
+			mlx5_rxq_release(dev, cur_queue);
+			DRV_LOG(DEBUG, "port %u RX queue %d is already unbound",
+				dev->data->port_id, cur_queue);
+			return 0;
+		}
+		rq_attr.state = MLX5_SQC_STATE_RST;
+		rq_attr.rq_state = MLX5_SQC_STATE_RST;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+		if (!ret)
+			rxq_ctrl->hairpin_status = 0;
+		mlx5_rxq_release(dev, cur_queue);
+	}
+	return ret;
+}
+
+int
+mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret = 0;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	uint32_t i, j;
+	struct rte_hairpin_peer_info peer;
+	struct rte_hairpin_peer_info cur;
+	const struct rte_eth_hairpin_conf *conf;
+	uint16_t num_q = 0;
+	uint16_t local_port = priv->dev_data->port_id;
+	uint32_t manual;
+	uint32_t explicit;
+	uint16_t rx_queue;
+
+	/*
+	 * Before binding TXQ to peer RXQ, first round loop will be used for
+	 * checking the queues' configuration consistency. This would be a
+	 * little time consuming but better to do the rollback.
+	 */
+	for (i = 0; i != priv->txqs_n; i++) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		/*
+		 * All hairpin TX queues of a single port that connects to the
+		 * same peer RX port should have the same "auto-bind" and
+		 * "implicit TX rule part" modes.
+		 * Peer consistency checking will be done in per queue binding.
+		 * Only the single port hairpin supports the two modes above.
+		 */
+		conf = &txq_ctrl->hairpin_conf;
+		if (conf->peers[0].port == rx_port) {
+			if (!num_q) {
+				manual = conf->manual_bind;
+				explicit = conf->tx_explicit;
+				if ((!manual || !explicit) &&
+				    rx_port != local_port) {
+					mlx5_txq_release(dev, i);
+					rte_errno = EINVAL;
+					DRV_LOG(ERR, "port %u queue %d does "
+						"not support %s%s with "
+						"peer port %u", local_port, i,
+						manual ? "" : "auto-bind/",
+						explicit ? "" : "TX-implicit",
+						rx_port);
+					return -rte_errno;
+				}
+			} else {
+				if (manual != conf->manual_bind ||
+				    explicit != conf->tx_explicit) {
+					mlx5_txq_release(dev, i);
+					rte_errno = EINVAL;
+					DRV_LOG(ERR, "port %u queue %d mode "
+						"mismatch: %u %u, %u %u",
+						local_port, i, manual,
+						conf->manual_bind, explicit,
+						conf->tx_explicit);
+					return -rte_errno;
+				}
+			}
+			num_q++;
+		}
+		mlx5_txq_release(dev, i);
+		/* Once no queue is configured, success is returned directly. */
+		if (!num_q)
+			return ret;
+	}
+	/* All the hairpin TX queues need to be traversed again */
+	for (i = 0; i != priv->txqs_n; i++) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		/* Fetch peer RXQ's information. */
+		ret = rte_eth_hairpin_queue_peer_update(rx_port, rx_queue,
+							NULL, &peer, 0);
+		if (ret) {
+			mlx5_txq_release(dev, i);
+			goto error;
+		}
+		/* Accessing own device, mlx5 PMD API is enough. */
+		ret = mlx5_hairpin_queue_peer_bind(dev, i, &peer, 0);
+		if (ret)
+			goto error;
+		/* Pass TXQ's information to peer RXQ. */
+		cur.peer_q = rx_queue;
+		cur.qp_id = txq_ctrl->obj->sq->id;
+		cur.vhca_id = priv->config.hca_attr.vhca_id;
+		cur.tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
+		cur.manual_bind = txq_ctrl->hairpin_conf.manual_bind;
+		/* Accessing another device, RTE level API is needed. */
+		ret = rte_eth_hairpin_queue_peer_bind(rx_port, rx_queue,
+						      &cur, 1);
+		if (ret)
+			goto error;
+		mlx5_txq_release(dev, i);
+	}
+	return 0;
+error:
+	/*
+	 * Do roll-back process for the bound queues.
+	 * No need to check the return value of the queue unbind function.
+	 */
+	for (j = 0; j <= i; j++) {
+		/* No validation is needed here. */
+		txq_ctrl = mlx5_txq_get(dev, i);
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		mlx5_txq_release(dev, i);
+		rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 1);
+		mlx5_hairpin_queue_peer_unbind(dev, i, 0);
+	}
+	return ret;
+}
+
+int
+mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	uint32_t i;
+	int ret;
+	uint16_t cur_port = priv->dev_data->port_id;
+
+	for (i = 0; i != priv->txqs_n; i++) {
+		uint16_t rx_queue;
+
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		/* Indeed, only the first used queue needs to be checked. */
+		if (!txq_ctrl->hairpin_conf.manual_bind) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u and port %u is in auto-bind mode",
+				cur_port, rx_port);
+			mlx5_txq_release(dev, i);
+			return -rte_errno;
+		}
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		mlx5_txq_release(dev, i);
+		ret = rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 1);
+		if (ret) {
+			DRV_LOG(ERR, "port %u RX queue %d unbind - failure",
+				rx_port, rx_queue);
+			return ret;
+		}
+		ret = mlx5_hairpin_queue_peer_unbind(dev, i, 0);
+		if (ret) {
+			DRV_LOG(ERR, "port %u TX queue %d unbind - failure",
+				cur_port, i);
+			return ret;
+		}
+	}
+	return 0;
+}
+
 /**
  * DPDK callback to start the device.
  *
@@ -332,7 +798,7 @@
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
-	ret = mlx5_hairpin_bind(dev);
+	ret = mlx5_hairpin_auto_bind(dev);
 	if (ret) {
 		DRV_LOG(ERR, "port %u hairpin binding failed: %s",
 			dev->data->port_id, strerror(rte_errno));
-- 
1.8.3.1



* [dpdk-dev] [PATCH 3/4] net/mlx5: conditional hairpin auto bind
  2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  2020-10-08 14:16 ` [dpdk-dev] [PATCH 1/4] net/mlx5: remove hairpin queue peer port checking Bing Zhao
  2020-10-08 14:16 ` [dpdk-dev] [PATCH 2/4] net/mlx5: add support for two ports hairpin mode Bing Zhao
@ 2020-10-08 14:16 ` Bing Zhao
  2020-10-08 14:17 ` [dpdk-dev] [PATCH 4/4] doc: update hairpin support for mlx5 driver Bing Zhao
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-08 14:16 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In single-port hairpin mode, the queues are configured during
startup. The binding process is then enabled automatically in the
port start phase, and the default control flow for egress is
created.

When switching to two-port hairpin mode, the auto binding process
should be skipped if there is no TX queue whose peer RX queue is on
the same device, and it should also be skipped if the queues are
configured with the manual bind attribute.

If the explicit TX flow rule mode is configured or hairpin is
between two ports, the default control flows for TX queues should
not be created.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_trigger.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f326b57..77d84dd 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -214,6 +214,8 @@
 	struct mlx5_devx_obj *rq;
 	unsigned int i;
 	int ret = 0;
+	bool need_auto = false;
+	uint16_t self_port = dev->data->port_id;
 
 	for (i = 0; i != priv->txqs_n; ++i) {
 		txq_ctrl = mlx5_txq_get(dev, i);
@@ -223,6 +225,25 @@
 			mlx5_txq_release(dev, i);
 			continue;
 		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != self_port)
+			continue;
+		if (txq_ctrl->hairpin_conf.manual_bind) {
+			mlx5_txq_release(dev, i);
+			return 0;
+		}
+		need_auto = true;
+		mlx5_txq_release(dev, i);
+	}
+	if (!need_auto)
+		return 0;
+	for (i = 0; i != priv->txqs_n; ++i) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
 		if (!txq_ctrl->obj) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no txq object found: %d",
@@ -798,9 +819,13 @@
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
+	/*
+	 * Such step will be skipped if there is no hairpin TX queue configured
+	 * with RX peer queue from the same device.
+	 */
 	ret = mlx5_hairpin_auto_bind(dev);
 	if (ret) {
-		DRV_LOG(ERR, "port %u hairpin binding failed: %s",
+		DRV_LOG(ERR, "port %u hairpin auto binding failed: %s",
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
@@ -949,7 +974,10 @@
 		struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		    !txq_ctrl->hairpin_conf.manual_bind &&
+		    txq_ctrl->hairpin_conf.peers[0].port ==
+		    priv->dev_data->port_id) {
 			ret = mlx5_ctrl_flow_source_queue(dev, i);
 			if (ret) {
 				mlx5_txq_release(dev, i);
-- 
1.8.3.1



* [dpdk-dev] [PATCH 4/4] doc: update hairpin support for mlx5 driver
  2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                   ` (2 preceding siblings ...)
  2020-10-08 14:16 ` [dpdk-dev] [PATCH 3/4] net/mlx5: conditional hairpin auto bind Bing Zhao
@ 2020-10-08 14:17 ` Bing Zhao
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  5 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-08 14:17 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

Hairpin between two ports will be supported by the mlx5 PMD.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 05ceea0..454472b 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -70,6 +70,11 @@ New Features
   * Added support for non-zero priorities for group 0 flows
   * Added support for VXLAN decap combined with VLAN pop
 
+* **Updated Nvidia mlx5 driver.**
+
+  * Added support for hairpin between two ports and hairpin explicit
+    TX flow rules insertion.
+
 * **Updated Solarflare network PMD.**
 
   Updated the Solarflare ``sfc_efx`` driver with changes including:
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD
  2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                   ` (3 preceding siblings ...)
  2020-10-08 14:17 ` [dpdk-dev] [PATCH 4/4] doc: update hairpin support for mlx5 driver Bing Zhao
@ 2020-10-22 14:06 ` Bing Zhao
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking Bing Zhao
                     ` (5 more replies)
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  5 siblings, 6 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-22 14:06 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

This patch set adds support for hairpin between two ports in the
mlx5 PMD.

v2:
  * Update the code and reorganize the patch set

Bing Zhao (6):
  net/mlx5: change hairpin queue peer checking
  net/mlx5: add support for two ports hairpin mode
  net/mlx5: add support to get hairpin peer ports
  net/mlx5: conditional hairpin auto bind
  net/mlx5: change hairpin ingress flow validation
  net/mlx5: not split hairpin flow in explicit mode

 drivers/net/mlx5/linux/mlx5_os.c |  12 +
 drivers/net/mlx5/mlx5.h          |  21 ++
 drivers/net/mlx5/mlx5_flow.c     |   7 +
 drivers/net/mlx5/mlx5_flow_dv.c  |  15 +-
 drivers/net/mlx5/mlx5_rxq.c      |  50 ++-
 drivers/net/mlx5/mlx5_rxtx.h     |   4 +
 drivers/net/mlx5/mlx5_trigger.c  | 733 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_txq.c      |  23 +-
 8 files changed, 846 insertions(+), 19 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
@ 2020-10-22 14:06   ` Bing Zhao
  2020-10-26  9:28     ` Slava Ovsiienko
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode Bing Zhao
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Bing Zhao @ 2020-10-22 14:06 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In the current implementation of single-port hairpin mode, the peer
queue must belong to the same port as the current queue. When
two-port hairpin mode is introduced, such checking should be removed
so that the hairpin queue setup can succeed, since this condition is
no longer valid.

Meanwhile, different devices could have different queue
configurations. The number of queues of the peer port is unknown to
the current device, so this check should be removed as well.

If the Tx and Rx port IDs of a hairpin peer are different, only
manual binding and explicit Tx flow rules are supported. Otherwise,
all four combinations of the two modes are supported. The consistency
of the mode attributes is checked when connecting a queue with its
peer queue.
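
As an illustration of the relaxed check, a Tx hairpin queue with a peer on
a different port is only expected to be accepted when both mode attributes
are set (a sketch using the existing queue setup API; all identifiers are
illustrative placeholders):

#include <rte_ethdev.h>

static int
cross_port_txq_setup(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
		     uint16_t peer_port, uint16_t peer_queue)
{
	struct rte_eth_hairpin_conf conf = { .peer_count = 1 };

	conf.peers[0].port = peer_port;   /* a port other than port_id */
	conf.peers[0].queue = peer_queue;
	/* For a cross-port peer, both bits must be set; otherwise the
	 * setup below fails with EINVAL under the new check. */
	conf.manual_bind = 1;
	conf.tx_explicit = 1;
	return rte_eth_tx_hairpin_queue_setup(port_id, queue_id,
					      nb_desc, &conf);
}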

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 23 +++++++++++++++++------
 drivers/net/mlx5/mlx5_txq.c | 23 +++++++++++++++++------
 2 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e1783ba..78e15e7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -777,15 +777,26 @@
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	if (hairpin_conf->peer_count != 1 ||
-	    hairpin_conf->peers[0].port != dev->data->port_id ||
-	    hairpin_conf->peers[0].queue >= priv->txqs_n) {
-		DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u "
-			" invalid hairpind configuration", dev->data->port_id,
-			idx);
+	if (hairpin_conf->peer_count != 1) {
 		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u unable to setup Rx hairpin queue index %u"
+			" peer count is %u", dev->data->port_id,
+			idx, hairpin_conf->peer_count);
 		return -rte_errno;
 	}
+	if (hairpin_conf->peers[0].port != dev->data->port_id) {
+		if (hairpin_conf->manual_bind == 0 ||
+		    hairpin_conf->tx_explicit == 0) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u unable to setup Rx hairpin queue"
+				" index %u peer port %u with attributes %u %u",
+				dev->data->port_id, idx,
+				hairpin_conf->peers[0].port,
+				hairpin_conf->manual_bind,
+				hairpin_conf->tx_explicit);
+			return -rte_errno;
+		}
+	}
 	rxq_ctrl = mlx5_rxq_hairpin_new(dev, idx, desc, hairpin_conf);
 	if (!rxq_ctrl) {
 		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 9c2dd2a..850a85c 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -421,15 +421,26 @@
 	res = mlx5_tx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	if (hairpin_conf->peer_count != 1 ||
-	    hairpin_conf->peers[0].port != dev->data->port_id ||
-	    hairpin_conf->peers[0].queue >= priv->rxqs_n) {
-		DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u "
-			" invalid hairpind configuration", dev->data->port_id,
-			idx);
+	if (hairpin_conf->peer_count != 1) {
 		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u unable to setup Tx hairpin queue index %u"
+			" peer count is %u", dev->data->port_id,
+			idx, hairpin_conf->peer_count);
 		return -rte_errno;
 	}
+	if (hairpin_conf->peers[0].port != dev->data->port_id) {
+		if (hairpin_conf->manual_bind == 0 ||
+		    hairpin_conf->tx_explicit == 0) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u unable to setup Tx hairpin queue"
+				" index %u peer port %u with attributes %u %u",
+				dev->data->port_id, idx,
+				hairpin_conf->peers[0].port,
+				hairpin_conf->manual_bind,
+				hairpin_conf->tx_explicit);
+			return -rte_errno;
+		}
+	}
 	txq_ctrl = mlx5_txq_hairpin_new(dev, idx, desc,	hairpin_conf);
 	if (!txq_ctrl) {
 		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking Bing Zhao
@ 2020-10-22 14:06   ` Bing Zhao
  2020-10-26  9:29     ` Slava Ovsiienko
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports Bing Zhao
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Bing Zhao @ 2020-10-22 14:06 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In order to support hairpin between two ports, the mlx5 PMD needs to
implement the corresponding functions and provide them as function
pointers in the device operations structure.

The bind and unbind functions are executed per port pair. All the
hairpin queues between the two ports must have the same attributes
during queue setup. Different configurations among queue pairs of the
same port pair are not supported. It is allowed that two ports have
hairpin in only one direction.

In order to set up the connection between two queues, the peer Rx
queue HW information must be fetched via the internal RTE API, and
this information is used to modify the SQ object. Then the RQ object
is modified with the Tx queue HW information. The reverse operation
is not supported right now.

When disconnecting a queue pair, the SQ and RQ objects are reset
without any peer HW information. The unbind operation tries to
disconnect all Tx queues of the port from the Rx queues of the peer
port.

Tx explicit mode attribute will be saved and used when creating a
hairpin flow.
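
Besides the per-pair operation, the bind/unbind entry points below also
accept RTE_MAX_ETHPORTS as the Rx port to traverse all devices. At the
application level, assuming the rte_eth_hairpin_bind()/rte_eth_hairpin_unbind()
wrappers from the dependent ethdev series, this is expected to look like
the following sketch (port numbers are illustrative):

#include <rte_ethdev.h>

static int
bind_all_then_unbind_one(void)
{
	int ret;

	/* Bind the Tx side of port 0 to every peer Rx port that has a
	 * hairpin configuration with it. */
	ret = rte_eth_hairpin_bind(0, RTE_MAX_ETHPORTS);
	if (ret != 0)
		return ret;
	/* Later, unbind from one specific peer only, e.g. port 1. */
	return rte_eth_hairpin_unbind(0, 1);
}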

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  10 +
 drivers/net/mlx5/mlx5.h          |  19 ++
 drivers/net/mlx5/mlx5_rxtx.h     |   2 +
 drivers/net/mlx5/mlx5_trigger.c  | 611 ++++++++++++++++++++++++++++++++++++++-
 4 files changed, 640 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 40f9446..83a8b56 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2552,6 +2552,11 @@
 	.get_module_eeprom = mlx5_get_module_eeprom,
 	.hairpin_cap_get = mlx5_hairpin_cap_get,
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
+	.hairpin_bind = mlx5_hairpin_bind,
+	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
+	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
+	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
 };
 
 /* Available operations from secondary process. */
@@ -2630,4 +2635,9 @@
 	.get_module_eeprom = mlx5_get_module_eeprom,
 	.hairpin_cap_get = mlx5_hairpin_cap_get,
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
+	.hairpin_bind = mlx5_hairpin_bind,
+	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
+	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
+	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c9d5d71..38d0977 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -891,6 +891,14 @@ struct mlx5_priv {
 #define PORT_ID(priv) ((priv)->dev_data->port_id)
 #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)])
 
+struct rte_hairpin_peer_info {
+	uint32_t qp_id;
+	uint32_t vhca_id;
+	uint16_t peer_q;
+	uint16_t tx_explicit;
+	uint16_t manual_bind;
+};
+
 /* mlx5.c */
 
 int mlx5_getenv_int(const char *);
@@ -1041,6 +1049,17 @@ void mlx5_vlan_vmwa_acquire(struct rte_eth_dev *dev,
 int mlx5_traffic_enable(struct rte_eth_dev *dev);
 void mlx5_traffic_disable(struct rte_eth_dev *dev);
 int mlx5_traffic_restart(struct rte_eth_dev *dev);
+int mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
+				   struct rte_hairpin_peer_info *current_info,
+				   struct rte_hairpin_peer_info *peer_info,
+				   uint32_t direction);
+int mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
+				 struct rte_hairpin_peer_info *peer_info,
+				 uint32_t direction);
+int mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
+				   uint32_t direction);
+int mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port);
+int mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port);
 
 /* mlx5_flow.c */
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index b243b6f..b50b643 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -184,6 +184,7 @@ struct mlx5_rxq_ctrl {
 	void *wq_umem; /* WQ buffer registration info. */
 	void *cq_umem; /* CQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* TX queue send local data. */
@@ -280,6 +281,7 @@ struct mlx5_txq_ctrl {
 	void *bf_reg; /* BlueFlame register from Verbs. */
 	uint16_t dump_file_n; /* Number of dump files. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 	struct mlx5_txq_data txq; /* Data path structure. */
 	/* Must be the last field in the structure, contains elts[]. */
 };
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 7735f02..800645e 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -203,7 +203,7 @@
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_hairpin_bind(struct rte_eth_dev *dev)
+mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
@@ -281,6 +281,613 @@
 	return -rte_errno;
 }
 
+/*
+ * Fetch the peer queue's SW & HW information.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param peer_queue
+ *   Index of the queue to fetch the information.
+ * @param current_info
+ *   Pointer to the input peer information, not used currently.
+ * @param peer_info
+ *   Pointer to the structure to store the information, output.
+ * @param direction
+ *   Positive to get the RxQ information, zero to get the TxQ information.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
+			       struct rte_hairpin_peer_info *current_info,
+			       struct rte_hairpin_peer_info *peer_info,
+			       uint32_t direction)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	(void)current_info;
+
+	if (dev->data->dev_started == 0) {
+		rte_errno = EBUSY;
+		DRV_LOG(ERR, "peer port %u is not started",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+	/*
+	 * Peer port used as egress. In the current design, hairpin Tx queue
+	 * will be bound to the peer Rx queue. Indeed, only the information of
+	 * peer Rx queue needs to be fetched.
+	 */
+	if (direction == 0) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+
+		txq_ctrl = mlx5_txq_get(dev, peer_queue);
+		if (!txq_ctrl) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Tx queue %d",
+				dev->data->port_id, peer_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d is not a hairpin Txq",
+				dev->data->port_id, peer_queue);
+			mlx5_txq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		if (!txq_ctrl->obj || !txq_ctrl->obj->sq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Txq object found: %d",
+				dev->data->port_id, peer_queue);
+			mlx5_txq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		peer_info->qp_id = txq_ctrl->obj->sq->id;
+		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
+		/* 1-to-1 mapping, only the first one is used. */
+		peer_info->peer_q = txq_ctrl->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
+		mlx5_txq_release(dev, peer_queue);
+	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+
+		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
+		if (!rxq_ctrl) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
+				dev->data->port_id, peer_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
+				dev->data->port_id, peer_queue);
+			mlx5_rxq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Rxq object found: %d",
+				dev->data->port_id, peer_queue);
+			mlx5_rxq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		peer_info->qp_id = rxq_ctrl->obj->rq->id;
+		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
+		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
+		mlx5_rxq_release(dev, peer_queue);
+	}
+	return 0;
+}
+
+/*
+ * Bind the hairpin queue with the peer HW information.
+ * This needs to be called twice both for Tx and Rx queues of a pair.
+ * If the queue is already bound, it is considered successful.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param cur_queue
+ *   Index of the queue to change the HW configuration to bind.
+ * @param peer_info
+ *   Pointer to information of the peer queue.
+ * @param direction
+ *   Positive to configure the TxQ, zero to configure the RxQ.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
+			     struct rte_hairpin_peer_info *peer_info,
+			     uint32_t direction)
+{
+	int ret = 0;
+
+	/*
+	 * Consistency checking of the peer queue: opposite direction is used
+	 * to get the peer queue info with ethdev port ID, no need to check.
+	 */
+	if (peer_info->peer_q != cur_queue) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u queue %d and peer queue %d mismatch",
+			dev->data->port_id, cur_queue, peer_info->peer_q);
+		return -rte_errno;
+	}
+	if (direction != 0) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+		struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
+
+		txq_ctrl = mlx5_txq_get(dev, cur_queue);
+		if (!txq_ctrl) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Tx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!txq_ctrl->obj || !txq_ctrl->obj->sq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Txq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->hairpin_status) {
+			DRV_LOG(DEBUG, "port %u Tx queue %d is already bound",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return 0;
+		}
+		/*
+		 * All queues' of one port consistency checking is done in the
+		 * bind() function, and that is optional.
+		 */
+		if (peer_info->tx_explicit !=
+		    txq_ctrl->hairpin_conf.tx_explicit) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Tx queue %d and peer Tx rule mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (peer_info->manual_bind !=
+		    txq_ctrl->hairpin_conf.manual_bind) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Tx queue %d and peer binding mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		sq_attr.state = MLX5_SQC_STATE_RDY;
+		sq_attr.sq_state = MLX5_SQC_STATE_RST;
+		sq_attr.hairpin_peer_rq = peer_info->qp_id;
+		sq_attr.hairpin_peer_vhca = peer_info->vhca_id;
+		ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr);
+		if (ret == 0)
+			txq_ctrl->hairpin_status = 1;
+		mlx5_txq_release(dev, cur_queue);
+	} else {
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+
+		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
+		if (!rxq_ctrl) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Rxq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->hairpin_status) {
+			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return 0;
+		}
+		if (peer_info->tx_explicit !=
+		    rxq_ctrl->hairpin_conf.tx_explicit) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (peer_info->manual_bind !=
+		    rxq_ctrl->hairpin_conf.manual_bind) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		rq_attr.state = MLX5_SQC_STATE_RDY;
+		rq_attr.rq_state = MLX5_SQC_STATE_RST;
+		rq_attr.hairpin_peer_sq = peer_info->qp_id;
+		rq_attr.hairpin_peer_vhca = peer_info->vhca_id;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+		if (ret == 0)
+			rxq_ctrl->hairpin_status = 1;
+		mlx5_rxq_release(dev, cur_queue);
+	}
+	return ret;
+}
+
+/*
+ * Unbind the hairpin queue and reset its HW configuration.
+ * This needs to be called twice both for Tx and Rx queues of a pair.
+ * If the queue is already unbound, it is considered successful.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param cur_queue
+ *   Index of the queue to change the HW configuration to unbind.
+ * @param direction
+ *   Positive to reset the TxQ, zero to reset the RxQ.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
+			       uint32_t direction)
+{
+	int ret = 0;
+
+	if (direction != 0) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+		struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
+
+		txq_ctrl = mlx5_txq_get(dev, cur_queue);
+		if (!txq_ctrl) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Tx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!txq_ctrl->obj || !txq_ctrl->obj->sq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Txq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		/* Already unbound, 0 returns. */
+		if (txq_ctrl->hairpin_status == 0) {
+			mlx5_txq_release(dev, cur_queue);
+			DRV_LOG(DEBUG, "port %u Tx queue %d is already unbound",
+				dev->data->port_id, cur_queue);
+			return 0;
+		}
+		sq_attr.state = MLX5_SQC_STATE_RST;
+		sq_attr.sq_state = MLX5_SQC_STATE_RST;
+		ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr);
+		if (ret == 0)
+			txq_ctrl->hairpin_status = 0;
+		mlx5_txq_release(dev, cur_queue);
+	} else {
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+
+		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
+		if (!rxq_ctrl) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Rxq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (!rxq_ctrl->hairpin_status) {
+			mlx5_rxq_release(dev, cur_queue);
+			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
+				dev->data->port_id, cur_queue);
+			return 0;
+		}
+		rq_attr.state = MLX5_SQC_STATE_RST;
+		rq_attr.rq_state = MLX5_SQC_STATE_RST;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+		if (ret == 0)
+			rxq_ctrl->hairpin_status = 0;
+		mlx5_rxq_release(dev, cur_queue);
+	}
+	return ret;
+}
+
+/*
+ * Bind the hairpin port pairs, from the Tx to the peer Rx.
+ * This function only supports to bind the Tx to one Rx.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param rx_port
+ *   Port identifier of the Rx port.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret = 0;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	uint32_t i, j;
+	struct rte_hairpin_peer_info peer;
+	struct rte_hairpin_peer_info cur;
+	const struct rte_eth_hairpin_conf *conf;
+	uint16_t num_q = 0;
+	uint16_t local_port = priv->dev_data->port_id;
+	uint32_t manual;
+	uint32_t explicit;
+	uint16_t rx_queue;
+
+	/*
+	 * Before binding TxQ to peer RxQ, first round loop will be used for
+	 * checking the queues' configuration consistency. This would be a
+	 * little time consuming but better than doing the rollback.
+	 */
+	for (i = 0; i != priv->txqs_n; i++) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		/*
+		 * All hairpin Tx queues of a single port that connected to the
+		 * same peer Rx port should have the same "auto binding" and
+		 * "implicit Tx flow" modes.
+		 * Peer consistency checking will be done in per queue binding.
+		 */
+		conf = &txq_ctrl->hairpin_conf;
+		if (conf->peers[0].port == rx_port) {
+			if (num_q == 0) {
+				manual = conf->manual_bind;
+				explicit = conf->tx_explicit;
+			} else {
+				if (manual != conf->manual_bind ||
+				    explicit != conf->tx_explicit) {
+					mlx5_txq_release(dev, i);
+					rte_errno = EINVAL;
+					DRV_LOG(ERR, "port %u queue %d mode"
+						" mismatch: %u %u, %u %u",
+						local_port, i, manual,
+						conf->manual_bind, explicit,
+						conf->tx_explicit);
+					return -rte_errno;
+				}
+			}
+			num_q++;
+		}
+		mlx5_txq_release(dev, i);
+	}
+	/* Once no queue is configured, success is returned directly. */
+	if (num_q == 0)
+		return ret;
+	/* All the hairpin TX queues need to be traversed again. */
+	for (i = 0; i != priv->txqs_n; i++) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		/*
+		 * Fetch peer RxQ's information.
+		 * No need to pass the information of the current queue.
+		 */
+		ret = rte_eth_hairpin_queue_peer_update(rx_port, rx_queue,
+							NULL, &peer, 1);
+		if (ret != 0) {
+			mlx5_txq_release(dev, i);
+			goto error;
+		}
+		/* Accessing its own device, inside mlx5 PMD. */
+		ret = mlx5_hairpin_queue_peer_bind(dev, i, &peer, 1);
+		if (ret != 0) {
+			mlx5_txq_release(dev, i);
+			goto error;
+		}
+		/* Pass TxQ's information to peer RxQ and try binding. */
+		cur.peer_q = rx_queue;
+		cur.qp_id = txq_ctrl->obj->sq->id;
+		cur.vhca_id = priv->config.hca_attr.vhca_id;
+		cur.tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
+		cur.manual_bind = txq_ctrl->hairpin_conf.manual_bind;
+		/*
+		 * In order to access another device in a proper way, RTE level
+		 * private function is needed.
+		 */
+		ret = rte_eth_hairpin_queue_peer_bind(rx_port, rx_queue,
+						      &cur, 0);
+		if (ret != 0) {
+			mlx5_txq_release(dev, i);
+			goto error;
+		}
+		mlx5_txq_release(dev, i);
+	}
+	return 0;
+error:
+	/*
+	 * Do roll-back process for the queues already bound.
+	 * No need to check the return value of the queue unbind function.
+	 */
+	for (j = i; j != 0; j--) {
+		/* No validation is needed here. */
+		txq_ctrl = mlx5_txq_get(dev, j);
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 0);
+		mlx5_hairpin_queue_peer_unbind(dev, j, 1);
+		mlx5_txq_release(dev, j);
+	}
+	return ret;
+}
+
+/*
+ * Unbind the hairpin port pair, HW configuration of both devices will be clear
+ * and status will be reset for all the queues used between the them.
+ * This function only supports to unbind the Tx from one Rx.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param rx_port
+ *   Port identifier of the Rx port.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_hairpin_unbind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	uint32_t i;
+	int ret;
+	uint16_t cur_port = priv->dev_data->port_id;
+
+	for (i = 0; i != priv->txqs_n; i++) {
+		uint16_t rx_queue;
+
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		/* Indeed, only the first used queue needs to be checked. */
+		if (txq_ctrl->hairpin_conf.manual_bind == 0) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u and port %u are in auto-bind mode",
+				cur_port, rx_port);
+			mlx5_txq_release(dev, i);
+			return -rte_errno;
+		}
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		mlx5_txq_release(dev, i);
+		ret = rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 0);
+		if (ret) {
+			DRV_LOG(ERR, "port %u Rx queue %d unbind - failure",
+				rx_port, rx_queue);
+			return ret;
+		}
+		ret = mlx5_hairpin_queue_peer_unbind(dev, i, 1);
+		if (ret) {
+			DRV_LOG(ERR, "port %u Tx queue %d unbind - failure",
+				cur_port, i);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+/*
+ * Bind hairpin ports; the Rx port could be all the ports when
+ * RTE_MAX_ETHPORTS is used.
+ * @see mlx5_hairpin_bind_single_port()
+ */
+int
+mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	int ret = 0;
+	uint16_t p, pp;
+
+	/*
+	 * If the Rx port has no hairpin configuration with the current port,
+	 * the binding will be skipped inside the single port bind function.
+	 * The device started status will be checked only before updating the
+	 * queue information.
+	 */
+	if (rx_port == RTE_MAX_ETHPORTS) {
+		RTE_ETH_FOREACH_DEV(p) {
+			ret = mlx5_hairpin_bind_single_port(dev, p);
+			if (ret != 0)
+				goto unbind;
+		}
+		return ret;
+	} else {
+		return mlx5_hairpin_bind_single_port(dev, rx_port);
+	}
+unbind:
+	RTE_ETH_FOREACH_DEV(pp)
+		if (pp < p)
+			mlx5_hairpin_unbind_single_port(dev, pp);
+	return ret;
+}
+
+/*
+ * Unbind hairpin ports; the Rx port could be all the ports when
+ * RTE_MAX_ETHPORTS is used.
+ * @see mlx5_hairpin_unbind_single_port()
+ */
+int
+mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	int ret = 0;
+	uint16_t p;
+
+	if (rx_port == RTE_MAX_ETHPORTS)
+		RTE_ETH_FOREACH_DEV(p) {
+			ret = mlx5_hairpin_unbind_single_port(dev, p);
+			if (ret != 0)
+				return ret;
+		}
+	else
+		ret = mlx5_hairpin_unbind_single_port(dev, rx_port);
+	return ret;
+}
+
 /**
  * DPDK callback to start the device.
  *
@@ -332,7 +939,7 @@
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
-	ret = mlx5_hairpin_bind(dev);
+	ret = mlx5_hairpin_auto_bind(dev);
 	if (ret) {
 		DRV_LOG(ERR, "port %u hairpin binding failed: %s",
 			dev->data->port_id, strerror(rte_errno));
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking Bing Zhao
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode Bing Zhao
@ 2020-10-22 14:06   ` Bing Zhao
  2020-10-26  9:29     ` Slava Ovsiienko
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind Bing Zhao
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Bing Zhao @ 2020-10-22 14:06 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In practice, a device could be attached and detached dynamically. The
hairpin configuration of this port to/from all the other ports should
be enabled and disabled accordingly.

The RTE ethdev lib and the PMD should provide the ability to get the
list of peer ports in case the application does not save it. It is
recommended that the array used to save the port IDs be as large as
"RTE_MAX_ETHPORTS" to have enough capacity for all possible peers.

The order of the peer port IDs may differ from the order in which the
hairpin queues were set up in the initialization stage. The peer port
ID could be the same as the port ID of the current device when the
hairpin peer ports contain the device itself - the single port
hairpin case.

The application should check the ports' status and decide whether the
peer port should be bound / unbound when starting / stopping the
current device.
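
For illustration only (not part of this patch), a minimal
application-side sketch of the generic ethdev call that this PMD
callback serves. It assumes the usual rte_ethdev.h and stdio.h
includes; "port_id" and the function name are placeholders:

static int
example_get_peer_ports(uint16_t port_id)
{
	uint16_t peer_ports[RTE_MAX_ETHPORTS];
	int i, n;

	/* Direction 1: the port acts as Tx, list all of its peer Rx ports. */
	n = rte_eth_hairpin_get_peer_ports(port_id, peer_ports,
					   RTE_MAX_ETHPORTS, 1);
	if (n < 0)
		return n;
	for (i = 0; i < n; i++)
		printf("Tx port %u peers with Rx port %u\n",
		       port_id, peer_ports[i]);
	return 0;
}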

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  2 +
 drivers/net/mlx5/mlx5.h          |  2 +
 drivers/net/mlx5/mlx5_trigger.c  | 89 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 93 insertions(+)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 83a8b56..17d3767 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2554,6 +2554,7 @@
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
 	.hairpin_bind = mlx5_hairpin_bind,
 	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_get_peer_ports = mlx5_hairpin_get_peer_ports,
 	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
 	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
 	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
@@ -2637,6 +2638,7 @@
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
 	.hairpin_bind = mlx5_hairpin_bind,
 	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_get_peer_ports = mlx5_hairpin_get_peer_ports,
 	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
 	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
 	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 38d0977..70a0d3d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1060,6 +1060,8 @@ int mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				   uint32_t direction);
 int mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port);
 int mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port);
+int mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
+				size_t len, uint32_t direction);
 
 /* mlx5_flow.c */
 
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 800645e..497f731 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -888,6 +888,95 @@
 	return ret;
 }
 
+/*
+ * DPDK callback to get the hairpin peer ports list.
+ * This will return the actual number of peer ports and save the identifiers
+ * into the array (sorted, may be different from that when setting up the
+ * hairpin peer queues).
+ * The peer port ID could be the same as the port ID of the current device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param peer_ports
+ *   Pointer to array to save the port identifiers.
+ * @param len
+ *   The length of the array.
+ * @param direction
+ *   Current port to peer port direction.
+ *   positive - current used as Tx to get all peer Rx ports.
+ *   zero - current used as Rx to get all peer Tx ports.
+ *
+ * @return
+ *   0 or positive value on success, actual number of peer ports.
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
+			    size_t len, uint32_t direction)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	struct mlx5_rxq_ctrl *rxq_ctrl;
+	uint32_t i;
+	uint16_t pp;
+	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
+	int ret = 0;
+
+	if (direction) {
+		for (i = 0; i < priv->txqs_n; i++) {
+			txq_ctrl = mlx5_txq_get(dev, i);
+			if (!txq_ctrl)
+				continue;
+			if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+				mlx5_txq_release(dev, i);
+				continue;
+			}
+			pp = txq_ctrl->hairpin_conf.peers[0].port;
+			if (pp >= RTE_MAX_ETHPORTS) {
+				rte_errno = ERANGE;
+				mlx5_txq_release(dev, i);
+				DRV_LOG(ERR, "port %hu queue %u peer port "
+					"out of range %hu",
+					priv->dev_data->port_id, i, pp);
+				return -rte_errno;
+			}
+			bits[pp / 32] |= 1u << (pp % 32);
+			mlx5_txq_release(dev, i);
+		}
+	} else {
+		for (i = 0; i < priv->rxqs_n; i++) {
+			rxq_ctrl = mlx5_rxq_get(dev, i);
+			if (!rxq_ctrl)
+				continue;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+				mlx5_rxq_release(dev, i);
+				continue;
+			}
+			pp = rxq_ctrl->hairpin_conf.peers[0].port;
+			if (pp >= RTE_MAX_ETHPORTS) {
+				rte_errno = ERANGE;
+				mlx5_rxq_release(dev, i);
+				DRV_LOG(ERR, "port %hu queue %u peer port "
+					"out of range %hu",
+					priv->dev_data->port_id, i, pp);
+				return -rte_errno;
+			}
+			bits[pp / 32] |= 1u << (pp % 32);
+			mlx5_rxq_release(dev, i);
+		}
+	}
+	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+		if (bits[i / 32] & (1u << (i % 32))) {
+			if ((size_t)ret >= len) {
+				rte_errno = E2BIG;
+				return -rte_errno;
+			}
+			peer_ports[ret++] = i;
+		}
+	}
+	return ret;
+}
+
 /**
  * DPDK callback to start the device.
  *
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (2 preceding siblings ...)
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports Bing Zhao
@ 2020-10-22 14:06   ` Bing Zhao
  2020-10-26  9:29     ` Slava Ovsiienko
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation Bing Zhao
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
  5 siblings, 1 reply; 28+ messages in thread
From: Bing Zhao @ 2020-10-22 14:06 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In single port hairpin mode, the queues are configured during the
device setup stage. The binding process is then enabled automatically
in the port start phase and the default control flow for egress is
created.

When switching to two ports hairpin mode, the auto binding process
should be skipped if there is no TX queue whose peer RX queue is on
the same device, and it should also be skipped if the queues are
configured with the manual bind attribute.

If the explicit TX flow rule mode is configured or the hairpin is
between two ports, the default control flows for TX queues should
not be created.
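
As a hedged illustration (not part of this patch) of the manual flow
that replaces the skipped auto binding: once both ports are started
and their hairpin queues were set up with the manual bind attribute,
the application triggers the binding itself. The function and port
names below are placeholders:

static int
example_manual_bind(uint16_t tx_port, uint16_t rx_port)
{
	int ret;

	/* Both ports are already started with manual-bind hairpin queues. */
	ret = rte_eth_hairpin_bind(tx_port, rx_port);
	if (ret != 0)
		return ret;
	/* Bind the other direction only if it was configured as well. */
	return rte_eth_hairpin_bind(rx_port, tx_port);
}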

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_trigger.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 497f731..27bd3e9 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -214,6 +214,8 @@
 	struct mlx5_devx_obj *rq;
 	unsigned int i;
 	int ret = 0;
+	bool need_auto = false;
+	uint16_t self_port = dev->data->port_id;
 
 	for (i = 0; i != priv->txqs_n; ++i) {
 		txq_ctrl = mlx5_txq_get(dev, i);
@@ -223,6 +225,25 @@
 			mlx5_txq_release(dev, i);
 			continue;
 		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != self_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.manual_bind) {
+			mlx5_txq_release(dev, i);
+			return 0;
+		}
+		need_auto = true;
+		mlx5_txq_release(dev, i);
+	}
+	if (!need_auto)
+		return 0;
+	for (i = 0; i != priv->txqs_n; ++i) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
 		if (!txq_ctrl->obj) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no txq object found: %d",
@@ -1028,9 +1049,13 @@
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
+	/*
+	 * This step will be skipped if there is no hairpin TX queue configured
+	 * with an RX peer queue from the same device.
+	 */
 	ret = mlx5_hairpin_auto_bind(dev);
 	if (ret) {
-		DRV_LOG(ERR, "port %u hairpin binding failed: %s",
+		DRV_LOG(ERR, "port %u hairpin auto binding failed: %s",
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
@@ -1181,7 +1206,11 @@
 		struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+		/* Only Tx implicit mode requires the default Tx flow. */
+		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		    txq_ctrl->hairpin_conf.tx_explicit == 0 &&
+		    txq_ctrl->hairpin_conf.peers[0].port ==
+		    priv->dev_data->port_id) {
 			ret = mlx5_ctrl_flow_source_queue(dev, i);
 			if (ret) {
 				mlx5_txq_release(dev, i);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (3 preceding siblings ...)
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind Bing Zhao
@ 2020-10-22 14:06   ` Bing Zhao
  2020-10-26  9:30     ` Slava Ovsiienko
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
  5 siblings, 1 reply; 28+ messages in thread
From: Bing Zhao @ 2020-10-22 14:06 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In the current implementation of the single port hairpin, there is an
implicit splitting process for actions. When inserting a hairpin
flow, all the actions are included with the ingress attribute. The
flow engine checks and decides which actions should be moved into the
TX flow part, e.g., encapsulation, VLAN push.

On some NICs, some actions can only be done in one direction. Since
the hairpin flow is split into two parts, such validation will be
skipped.

With the hairpin explicit TX flow mode, no splitting is needed
anymore. The hairpin flow has no significant difference from a
standard flow (except the queue). The application should take full
charge of the actions and the flow engine should validate the hairpin
flow in the same way as other flows.

In the meantime, a new internal API is added to get the hairpin
configuration. This bypasses the unneeded atomic operations and saves
CPU cycles.
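
For context, a minimal sketch (not part of this patch) of how an
application would request the explicit TX rule mode that this
validation change keys on. The function name, the identifiers and the
descriptor number are placeholders:

static int
example_rx_hairpin_setup(uint16_t port_id, uint16_t queue_id,
			 uint16_t peer_port, uint16_t peer_queue)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.tx_explicit = 1, /* the application inserts the Tx flows */
		.manual_bind = 1, /* required when the peer is another port */
	};

	conf.peers[0].port = peer_port;
	conf.peers[0].queue = peer_queue;
	/* 512 descriptors is only a placeholder value. */
	return rte_eth_rx_hairpin_queue_setup(port_id, queue_id, 512, &conf);
}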

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 15 ++++++++++++---
 drivers/net/mlx5/mlx5_rxq.c     | 27 +++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxtx.h    |  2 ++
 3 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 15cd34e..d5be6f0 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6058,11 +6058,17 @@ struct field_modify_info modify_tcp[] = {
 						  actions,
 						  "no fate action is found");
 	}
-	/* Continue validation for Xcap and VLAN actions.*/
+	/*
+	 * Continue validation for Xcap and VLAN actions.
+	 * If hairpin is working in explicit TX rule mode, there is no action
+	 * splitting and the validation of the hairpin ingress flow should be
+	 * the same as for standard flows.
+	 */
 	if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS |
 			     MLX5_FLOW_VLAN_ACTIONS)) &&
 	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) {
+	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN ||
+	     !!mlx5_rxq_get_hairpin_conf(dev, queue_index)->tx_explicit)) {
 		if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
 		    MLX5_FLOW_XCAP_ACTIONS)
 			return rte_flow_error_set(error, ENOTSUP,
@@ -6091,7 +6097,10 @@ struct field_modify_info modify_tcp[] = {
 						 "multiple VLAN actions");
 		}
 	}
-	/* Hairpin flow will add one more TAG action. */
+	/*
+	 * Hairpin flow will add one more TAG action in TX implicit mode.
+	 * In TX explicit mode, there will be no hairpin flow ID.
+	 */
 	if (hairpin > 0)
 		rw_act_num += MLX5_ACT_NUM_SET_TAG;
 	/* extra metadata enabled: one more TAG action will be add. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 78e15e7..d328d4a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1720,6 +1720,33 @@ enum mlx5_rxq_type
 	return MLX5_RXQ_TYPE_UNDEFINED;
 }
 
+/*
+ * Get a Rx hairpin queue configuration.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Rx queue index.
+ *
+ * @return
+ *   Pointer to the configuration if a hairpin RX queue, otherwise NULL.
+ */
+const struct rte_eth_hairpin_conf *
+mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+
+	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
+		rxq_ctrl = container_of((*priv->rxqs)[idx],
+					struct mlx5_rxq_ctrl,
+					rxq);
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			return &rxq_ctrl->hairpin_conf;
+	}
+	return NULL;
+}
+
 /**
  * Get an indirection table.
  *
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index b50b643..d91ed0f 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -344,6 +344,8 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
 int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
 int mlx5_hrxq_verify(struct rte_eth_dev *dev);
 enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
+const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
+	(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
 void mlx5_drop_action_destroy(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_port_offloads(void);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (4 preceding siblings ...)
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation Bing Zhao
@ 2020-10-22 14:06   ` Bing Zhao
  2020-10-26  9:30     ` Slava Ovsiienko
  5 siblings, 1 reply; 28+ messages in thread
From: Bing Zhao @ 2020-10-22 14:06 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In the current implementation, the hairpin flow will be split into
two flows implicitly if there is some action that only belongs to the
TX part. A TX device flow will be inserted by the mlx5 PMD itself.

In hairpin between two ports, the explicit TX flow mode will be the
only one supported. It is not appropriate to insert a TX flow into
another device implicitly. The application can create whatever flows
it likes and has full control of the user flows. Hairpin flows will
have no difference from standard flows and the application can decide
how to chain RX and TX flows together.

Even in the single port hairpin, this explicit TX flow mode could
also be supported.

When checking whether the hairpin flow needs to be split, just return
if the hairpin queue has the "tx_explicit" attribute set. Then in the
following steps of validation and translation, the code path will be
the same as that for standard flows.
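
To illustrate the intended division of work (a rough sketch, not from
this patch set; rte_flow.h and rte_errno.h are assumed, and the
function name, ports, queue index and encapsulation buffer are all
placeholders), the application now creates the RX steering rule and
the TX-side rule separately:

static int
example_explicit_hairpin_flows(uint16_t rx_port, uint16_t tx_port,
			       uint16_t hairpin_rxq, uint8_t *encap_hdr,
			       size_t encap_len)
{
	struct rte_flow_error err;
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Ingress rule: steer the traffic into the hairpin Rx queue. */
	struct rte_flow_attr rx_attr = { .ingress = 1 };
	struct rte_flow_action_queue queue = { .index = hairpin_rxq };
	struct rte_flow_action rx_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/*
	 * Egress rule: the Tx-only action (here a raw encapsulation) is an
	 * ordinary rule created by the application on the Tx port.
	 */
	struct rte_flow_attr tx_attr = { .egress = 1 };
	struct rte_flow_action_raw_encap encap = {
		.data = encap_hdr,
		.size = encap_len,
	};
	struct rte_flow_action tx_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	if (rte_flow_create(rx_port, &rx_attr, pattern, rx_actions, &err) == NULL)
		return -rte_errno;
	if (rte_flow_create(tx_port, &tx_attr, pattern, tx_actions, &err) == NULL)
		return -rte_errno;
	return 0;
}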

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d7243a8..8a114a6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -3261,6 +3261,7 @@ struct mlx5_flow_tunnel_info {
 	const struct rte_flow_action_queue *queue;
 	const struct rte_flow_action_rss *rss;
 	const struct rte_flow_action_raw_encap *raw_encap;
+	const struct rte_eth_hairpin_conf *conf;
 
 	if (!attr->ingress)
 		return 0;
@@ -3273,6 +3274,9 @@ struct mlx5_flow_tunnel_info {
 			if (mlx5_rxq_get_type(dev, queue->index) !=
 			    MLX5_RXQ_TYPE_HAIRPIN)
 				return 0;
+			conf = mlx5_rxq_get_hairpin_conf(dev, queue->index);
+			if (!!conf->tx_explicit)
+				return 0;
 			queue_action = 1;
 			action_n++;
 			break;
@@ -3283,6 +3287,9 @@ struct mlx5_flow_tunnel_info {
 			if (mlx5_rxq_get_type(dev, rss->queue[0]) !=
 			    MLX5_RXQ_TYPE_HAIRPIN)
 				return 0;
+			conf = mlx5_rxq_get_hairpin_conf(dev, rss->queue[0]);
+			if (conf != NULL && !!conf->tx_explicit)
+				return 0;
 			queue_action = 1;
 			action_n++;
 			break;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking Bing Zhao
@ 2020-10-26  9:28     ` Slava Ovsiienko
  0 siblings, 0 replies; 28+ messages in thread
From: Slava Ovsiienko @ 2020-10-26  9:28 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam, Raslan Darawsheh

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Bing Zhao
> Sent: Thursday, October 22, 2020 17:07
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer
> checking
> 
> In the current implementation of single port mode hairpin, the peer queue
> should belong to the same port of the current queue. When the two ports
> hairpin mode is introduced, such checking should be removed to make the
> hairpin queue setup execute successfully since it is not a valid condition
> anymore.
> 
> In the meanwhile, different devices could have different queue configurations.
> The queues number of peer port is unknown to the current device. The
> checking should be removed also.
> 
> If the Tx and Rx port IDs of a hairpin peer are different, only the manual
> binding and explicit Tx flows are supported. Or else, the four combinations of
> modes could be supported. The mode attributes consistency checking will be
> done when connecting the queue with its peer queue.
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

> ---
>  drivers/net/mlx5/mlx5_rxq.c | 23 +++++++++++++++++------
> drivers/net/mlx5/mlx5_txq.c | 23 +++++++++++++++++------
>  2 files changed, 34 insertions(+), 12 deletions(-)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode Bing Zhao
@ 2020-10-26  9:29     ` Slava Ovsiienko
  0 siblings, 0 replies; 28+ messages in thread
From: Slava Ovsiienko @ 2020-10-26  9:29 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam, Raslan Darawsheh

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, October 22, 2020 17:07
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode
> 
> In order to support hairpin between two ports, mlx5 PMD needs to implement
> the functions and provide them as the function pointers.
> 
> The bind and unbind functions are executed per port pairs. All the hairpin
> queues between the two ports should have the same attributes during queues
> setup. Different configurations among queue pairs from the same ports are not
> supported. It is allowed that two ports only have one direction hairpin.
> 
> In order to set up the connection between two queues, peer Rx queue HW
> information must be fetched via the internal RTE API and the queue
> information could be used to modify the SQ object. Then the RQ object will be
> modified with the Tx queue HW information. The reverse operation is not
> supported right now.
> 
> When disconnecting the queues pair, SQ and RQ object should be reset
> without any peer HW information. The unbinding operation will try to
> disconnect all Tx queues from the port from the Rx queues of the peer port.
> 
> Tx explicit mode attribute will be saved and used when creating a hairpin flow.
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

> ---
>  drivers/net/mlx5/linux/mlx5_os.c |  10 +
>  drivers/net/mlx5/mlx5.h          |  19 ++
>  drivers/net/mlx5/mlx5_rxtx.h     |   2 +
>  drivers/net/mlx5/mlx5_trigger.c  | 611
> ++++++++++++++++++++++++++++++++++++++-
>  4 files changed, 640 insertions(+), 2 deletions(-)
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports Bing Zhao
@ 2020-10-26  9:29     ` Slava Ovsiienko
  0 siblings, 0 replies; 28+ messages in thread
From: Slava Ovsiienko @ 2020-10-26  9:29 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam, Raslan Darawsheh

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, October 22, 2020 17:07
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports
> 
> In real-life business, one device could be attached and detached dynamically.
> The hairpin configuration of this port to/from all the other ports should be
> enabled and disabled accordingly.
> 
> The RTE ethdev lib and PMD should provide this ability to get the peer ports
> list in case that the application doesn't save it. It is recommended that the size
> of the array to save the port IDs is as large as the "RTE_MAX_ETHPORTS" to
> have the maximal capacity.
> 
> The order of the peer port IDs may be different from that during hairpin
> queues set in the initialization stage. The peer port ID could be the same as the
> current device port ID when the hairpin peer ports contain itself - the single
> port hairpin.
> 
> The application should check the ports' status and decide if the peer port
> should be bound / unbound when starting / stopping the current device.
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

> ---
>  drivers/net/mlx5/linux/mlx5_os.c |  2 +
>  drivers/net/mlx5/mlx5.h          |  2 +
>  drivers/net/mlx5/mlx5_trigger.c  | 89
> ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 93 insertions(+)
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind Bing Zhao
@ 2020-10-26  9:29     ` Slava Ovsiienko
  0 siblings, 0 replies; 28+ messages in thread
From: Slava Ovsiienko @ 2020-10-26  9:29 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam, Raslan Darawsheh

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, October 22, 2020 17:07
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind
> 
> In single port hairpin mode, after the queues are configured during start up.
> The binding process will be enabled automatically in the port start phase and
> the default control flow for egress will be created.
> 
> When switching to two ports hairpin mode, the auto binding process should be
> skipped if there is no TX queue with the peer RX queue on the same device,
> and it should be skipped also if the queues are configured with manual bind
> attribute.
> 
> If the explicit TX flow rule mode is configured or hairpin is between two ports,
> the default control flows for TX queues should not be created.
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_trigger.c | 33 +++++++++++++++++++++++++++++++--
>  1 file changed, 31 insertions(+), 2 deletions(-)
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation Bing Zhao
@ 2020-10-26  9:30     ` Slava Ovsiienko
  0 siblings, 0 replies; 28+ messages in thread
From: Slava Ovsiienko @ 2020-10-26  9:30 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam, Raslan Darawsheh

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, October 22, 2020 17:07
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation
> 
> In the current implementation of the single port hairpin, there is a implicit
> splitting process for actions. When inserting a hairpin flow, all the actions will
> be included with the ingress attribute.
> The flow engine will check and decide which actions should be moved into the
> TX flow part, e.g., encapsulation, VLAN push.
> 
> In some NICs, some actions can only be done in one direction. Since the
> hairpin flow will be split into two parts, such validation will be skipped.
> 
> With the hairpin explicit TX flow mode, no splitting is needed any more. The
> hairpin flow may have no big difference from a standard flow (except the
> queue). The application should take full charge of the actions and the flow
> engine should validate the hairpin flow in the same way as other flows.
> 
> In the meanwhile, a new internal API is added to get the hairpin configuration.
> This will bypass the useless atomic operation to save the CPU cycles.
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

> ---
>  drivers/net/mlx5/mlx5_flow_dv.c | 15 ++++++++++++---
>  drivers/net/mlx5/mlx5_rxq.c     | 27 +++++++++++++++++++++++++++
>  drivers/net/mlx5/mlx5_rxtx.h    |  2 ++
>  3 files changed, 41 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c
> b/drivers/net/mlx5/mlx5_flow_dv.c index 15cd34e..d5be6f0 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -6058,11 +6058,17 @@ struct field_modify_info modify_tcp[] = {
>  						  actions,
>  						  "no fate action is found");
>  	}
> -	/* Continue validation for Xcap and VLAN actions.*/
> +	/*
> +	 * Continue validation for Xcap and VLAN actions.
> +	 * If hairpin is working in explicit TX rule mode, there is no actions
> +	 * splitting and the validation of hairpin ingress flow should be the
> +	 * same as other standard flows.
> +	 */
>  	if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS |
>  			     MLX5_FLOW_VLAN_ACTIONS)) &&
>  	    (queue_index == 0xFFFF ||
> -	     mlx5_rxq_get_type(dev, queue_index) !=
> MLX5_RXQ_TYPE_HAIRPIN)) {
> +	     mlx5_rxq_get_type(dev, queue_index) !=
> MLX5_RXQ_TYPE_HAIRPIN ||
> +	     !!mlx5_rxq_get_hairpin_conf(dev, queue_index)->tx_explicit)) {
>  		if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
>  		    MLX5_FLOW_XCAP_ACTIONS)
>  			return rte_flow_error_set(error, ENOTSUP, @@ -
> 6091,7 +6097,10 @@ struct field_modify_info modify_tcp[] = {
>  						 "multiple VLAN actions");
>  		}
>  	}
> -	/* Hairpin flow will add one more TAG action. */
> +	/*
> +	 * Hairpin flow will add one more TAG action in TX implicit mode.
> +	 * In TX explicit mode, there will be no hairpin flow ID.
> +	 */
>  	if (hairpin > 0)
>  		rw_act_num += MLX5_ACT_NUM_SET_TAG;
>  	/* extra metadata enabled: one more TAG action will be add. */ diff --
> git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index
> 78e15e7..d328d4a 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1720,6 +1720,33 @@ enum mlx5_rxq_type
>  	return MLX5_RXQ_TYPE_UNDEFINED;
>  }
> 
> +/*
> + * Get a Rx hairpin queue configuration.
> + *
> + * @param dev
> + *   Pointer to Ethernet device.
> + * @param idx
> + *   Rx queue index.
> + *
> + * @return
> + *   Pointer to the configuration if a hairpin RX queue, otherwise NULL.
> + */
> +const struct rte_eth_hairpin_conf *
> +mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx) {
> +	struct mlx5_priv *priv = dev->data->dev_private;
> +	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
> +
> +	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
> +		rxq_ctrl = container_of((*priv->rxqs)[idx],
> +					struct mlx5_rxq_ctrl,
> +					rxq);
> +		if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
> +			return &rxq_ctrl->hairpin_conf;
> +	}
> +	return NULL;
> +}
> +
>  /**
>   * Get an indirection table.
>   *
> diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h index
> b50b643..d91ed0f 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.h
> +++ b/drivers/net/mlx5/mlx5_rxtx.h
> @@ -344,6 +344,8 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,  int
> mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);  int
> mlx5_hrxq_verify(struct rte_eth_dev *dev);  enum mlx5_rxq_type
> mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
> +const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
> +	(struct rte_eth_dev *dev, uint16_t idx);
>  struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);  void
> mlx5_drop_action_destroy(struct rte_eth_dev *dev);  uint64_t
> mlx5_get_rx_port_offloads(void);
> --
> 1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode
  2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
@ 2020-10-26  9:30     ` Slava Ovsiienko
  0 siblings, 0 replies; 28+ messages in thread
From: Slava Ovsiienko @ 2020-10-26  9:30 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam, Raslan Darawsheh

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, October 22, 2020 17:07
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode
> 
> In the current implementation, the hairpin flow will be split into two flows
> implicitly if there is some action that only belongs to the TX part. A TX device
> flow will be inserted by the mlx5 PMD itself.
> 
> In hairpin between two ports, the explicit TX flow mode will be the only one to
> be supported. It is not the appropriate behavior to insert a TX flow into
> another device implicitly. The application could create any flow as it likes and
> has full control of the user flows. Hairpin flows will have no difference from
> standard flows and the application can decide how to chain RX and TX flows
> together.
> 
> Even in the single port hairpin, this explicit TX flow mode could also be
> supported.
> 
> When checking if the hairpin needs to be split, just return if the hairpin queue
> is with "tx_explicit" attribute. Then in the following steps for validation and
> translation, the code path will be the same as that for standard flows.
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

> ---
>  drivers/net/mlx5/mlx5_flow.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD
  2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                   ` (4 preceding siblings ...)
  2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
@ 2020-10-26 16:37 ` Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: change hairpin queue peer checking Bing Zhao
                     ` (7 more replies)
  5 siblings, 8 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

This patch set will add the support for hairpin between two ports in
mlx5 PMD.

v2:
  * Update the code and reorganize the patch set
v3:
  * Doc update
  * fix code bugs and code style update

Bing Zhao (7):
  net/mlx5: change hairpin queue peer checking
  net/mlx5: add support for two ports hairpin mode
  net/mlx5: add support to get hairpin peer ports
  net/mlx5: conditional hairpin auto bind
  net/mlx5: change hairpin ingress flow validation
  net/mlx5: not split hairpin flow in explicit mode
  doc: update mlx5 hairpin support and limitations

 doc/guides/nics/mlx5.rst               |   5 +
 doc/guides/rel_notes/release_20_11.rst |   1 +
 drivers/net/mlx5/linux/mlx5_os.c       |  12 +
 drivers/net/mlx5/mlx5.h                |  21 +
 drivers/net/mlx5/mlx5_flow.c           |   9 +-
 drivers/net/mlx5/mlx5_flow_dv.c        |  17 +-
 drivers/net/mlx5/mlx5_rxq.c            |  59 ++-
 drivers/net/mlx5/mlx5_rxtx.h           |   4 +
 drivers/net/mlx5/mlx5_trigger.c        | 757 ++++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_txq.c            |  32 +-
 10 files changed, 894 insertions(+), 23 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 1/7] net/mlx5: change hairpin queue peer checking
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
@ 2020-10-26 16:37   ` Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add support for two ports hairpin mode Bing Zhao
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In the current implementation of single port mode hairpin, the peer
queue should belong to the same port as the current queue. When the
two ports hairpin mode is introduced, such checking should be removed
so that the hairpin queue setup can succeed, since having different
Tx and Rx ports is no longer an invalid condition.

In the meantime, different devices could have different queue
configurations. The number of queues of the peer port is unknown to
the current device, so that check should be removed as well.

If the Tx and Rx port IDs of a hairpin peer are different, only
manual binding and explicit Tx flows are supported. Otherwise, all
four combinations of modes could be supported. The mode attributes
consistency checking will be done when connecting the queue with its
peer queue.
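
As an application-side illustration of the rule above (a hedged
sketch, not part of this patch; the function name, port/queue numbers
and descriptor count are placeholders), pairing queue 0 of port 0
with queue 0 of port 1 is only accepted when both attributes are set:

static int
example_two_port_queue_pair(uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.manual_bind = 1, /* mandatory when Tx and Rx ports differ */
		.tx_explicit = 1, /* mandatory when Tx and Rx ports differ */
	};
	int ret;

	/* Tx queue 0 of port 0 is peered with Rx queue 0 of port 1. */
	conf.peers[0].port = 1;
	conf.peers[0].queue = 0;
	ret = rte_eth_tx_hairpin_queue_setup(0, 0, nb_desc, &conf);
	if (ret != 0)
		return ret;
	/* The Rx side of port 1 points back at Tx queue 0 of port 0. */
	conf.peers[0].port = 0;
	conf.peers[0].queue = 0;
	return rte_eth_rx_hairpin_queue_setup(1, 0, nb_desc, &conf);
}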

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v3: fix attributes checking
---
 drivers/net/mlx5/mlx5_rxq.c | 32 ++++++++++++++++++++++++++------
 drivers/net/mlx5/mlx5_txq.c | 32 ++++++++++++++++++++++++++------
 2 files changed, 52 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 1cc477a..034f43e 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -818,15 +818,35 @@
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	if (hairpin_conf->peer_count != 1 ||
-	    hairpin_conf->peers[0].port != dev->data->port_id ||
-	    hairpin_conf->peers[0].queue >= priv->txqs_n) {
-		DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u "
-			" invalid hairpind configuration", dev->data->port_id,
-			idx);
+	if (hairpin_conf->peer_count != 1) {
 		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u unable to setup Rx hairpin queue index %u"
+			" peer count is %u", dev->data->port_id,
+			idx, hairpin_conf->peer_count);
 		return -rte_errno;
 	}
+	if (hairpin_conf->peers[0].port == dev->data->port_id) {
+		if (hairpin_conf->peers[0].queue >= priv->txqs_n) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u unable to setup Rx hairpin queue"
+				" index %u, Tx %u is larger than %u",
+				dev->data->port_id, idx,
+				hairpin_conf->peers[0].queue, priv->txqs_n);
+			return -rte_errno;
+		}
+	} else {
+		if (hairpin_conf->manual_bind == 0 ||
+		    hairpin_conf->tx_explicit == 0) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u unable to setup Rx hairpin queue"
+				" index %u peer port %u with attributes %u %u",
+				dev->data->port_id, idx,
+				hairpin_conf->peers[0].port,
+				hairpin_conf->manual_bind,
+				hairpin_conf->tx_explicit);
+			return -rte_errno;
+		}
+	}
 	rxq_ctrl = mlx5_rxq_hairpin_new(dev, idx, desc, hairpin_conf);
 	if (!rxq_ctrl) {
 		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 9c2dd2a..dca9c05 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -421,15 +421,35 @@
 	res = mlx5_tx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	if (hairpin_conf->peer_count != 1 ||
-	    hairpin_conf->peers[0].port != dev->data->port_id ||
-	    hairpin_conf->peers[0].queue >= priv->rxqs_n) {
-		DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u "
-			" invalid hairpind configuration", dev->data->port_id,
-			idx);
+	if (hairpin_conf->peer_count != 1) {
 		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u unable to setup Tx hairpin queue index %u"
+			" peer count is %u", dev->data->port_id,
+			idx, hairpin_conf->peer_count);
 		return -rte_errno;
 	}
+	if (hairpin_conf->peers[0].port == dev->data->port_id) {
+		if (hairpin_conf->peers[0].queue >= priv->rxqs_n) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u unable to setup Tx hairpin queue"
+				" index %u, Rx %u is larger than %u",
+				dev->data->port_id, idx,
+				hairpin_conf->peers[0].queue, priv->rxqs_n);
+			return -rte_errno;
+		}
+	} else {
+		if (hairpin_conf->manual_bind == 0 ||
+		    hairpin_conf->tx_explicit == 0) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u unable to setup Tx hairpin queue"
+				" index %u peer port %u with attributes %u %u",
+				dev->data->port_id, idx,
+				hairpin_conf->peers[0].port,
+				hairpin_conf->manual_bind,
+				hairpin_conf->tx_explicit);
+			return -rte_errno;
+		}
+	}
 	txq_ctrl = mlx5_txq_hairpin_new(dev, idx, desc,	hairpin_conf);
 	if (!txq_ctrl) {
 		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 2/7] net/mlx5: add support for two ports hairpin mode
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: change hairpin queue peer checking Bing Zhao
@ 2020-10-26 16:37   ` Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: add support to get hairpin peer ports Bing Zhao
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In order to support hairpin between two ports, the mlx5 PMD needs to
implement the functions and provide them as the function pointers.

The bind and unbind functions are executed per port pair. All the
hairpin queues between the two ports should have the same attributes
during queue setup. Different configurations among queue pairs of the
same port pair are not supported. Two ports are allowed to have
hairpin in only one direction.

In order to set up the connection between two queues, the peer Rx
queue HW information must be fetched via the internal RTE API and
used to modify the SQ object. Then the RQ object will be modified
with the Tx queue HW information. The reverse operation is not
supported right now.

When disconnecting a queue pair, the SQ and RQ objects should be
reset without any peer HW information. The unbinding operation will
try to disconnect all Tx queues of the port from the Rx queues of the
peer port.

The Tx explicit mode attribute will be saved and used when creating a
hairpin flow.
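
For completeness, a rough application-level sequence built on top of
these callbacks (a sketch under the assumption that both ports were
configured with manual-bind hairpin queues; the function and port
names are placeholders):

static int
example_port_pair_lifecycle(uint16_t tx_port, uint16_t rx_port)
{
	int ret;

	/* After rte_eth_dev_start() succeeded on both ports. */
	ret = rte_eth_hairpin_bind(tx_port, rx_port);
	if (ret != 0)
		return ret;
	/* Traffic received on rx_port now egresses from tx_port. */
	/* Tear down before stopping either of the two ports. */
	return rte_eth_hairpin_unbind(tx_port, rx_port);
}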

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v3:
  * fix code review comments and bugs for unbinding
  * code style update
  * Checking for mlx5 driver type
---
 drivers/net/mlx5/linux/mlx5_os.c |  10 +
 drivers/net/mlx5/mlx5.h          |  19 ++
 drivers/net/mlx5/mlx5_rxtx.h     |   2 +
 drivers/net/mlx5/mlx5_trigger.c  | 629 ++++++++++++++++++++++++++++++++++++++-
 4 files changed, 658 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index ed3f020..b791859 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2570,6 +2570,11 @@
 	.get_module_eeprom = mlx5_get_module_eeprom,
 	.hairpin_cap_get = mlx5_hairpin_cap_get,
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
+	.hairpin_bind = mlx5_hairpin_bind,
+	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
+	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
+	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
 };
 
 /* Available operations from secondary process. */
@@ -2648,4 +2653,9 @@
 	.get_module_eeprom = mlx5_get_module_eeprom,
 	.hairpin_cap_get = mlx5_hairpin_cap_get,
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
+	.hairpin_bind = mlx5_hairpin_bind,
+	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
+	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
+	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 258be03..010152c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -904,6 +904,14 @@ struct mlx5_priv {
 #define PORT_ID(priv) ((priv)->dev_data->port_id)
 #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)])
 
+struct rte_hairpin_peer_info {
+	uint32_t qp_id;
+	uint32_t vhca_id;
+	uint16_t peer_q;
+	uint16_t tx_explicit;
+	uint16_t manual_bind;
+};
+
 /* mlx5.c */
 
 int mlx5_getenv_int(const char *);
@@ -1054,6 +1062,17 @@ void mlx5_vlan_vmwa_acquire(struct rte_eth_dev *dev,
 int mlx5_traffic_enable(struct rte_eth_dev *dev);
 void mlx5_traffic_disable(struct rte_eth_dev *dev);
 int mlx5_traffic_restart(struct rte_eth_dev *dev);
+int mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
+				   struct rte_hairpin_peer_info *current_info,
+				   struct rte_hairpin_peer_info *peer_info,
+				   uint32_t direction);
+int mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
+				 struct rte_hairpin_peer_info *peer_info,
+				 uint32_t direction);
+int mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
+				   uint32_t direction);
+int mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port);
+int mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port);
 
 /* mlx5_flow.c */
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f204f7e..cdc18e3 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -199,6 +199,7 @@ struct mlx5_rxq_ctrl {
 	void *wq_umem; /* WQ buffer registration info. */
 	void *cq_umem; /* CQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* TX queue send local data. */
@@ -295,6 +296,7 @@ struct mlx5_txq_ctrl {
 	void *bf_reg; /* BlueFlame register from Verbs. */
 	uint16_t dump_file_n; /* Number of dump files. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 	struct mlx5_txq_data txq; /* Data path structure. */
 	/* Must be the last field in the structure, contains elts[]. */
 };
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 19f2d66..f76122b 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -207,7 +207,7 @@
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_hairpin_bind(struct rte_eth_dev *dev)
+mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
@@ -285,6 +285,631 @@
 	return -rte_errno;
 }
 
+/*
+ * Fetch the peer queue's SW & HW information.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param peer_queue
+ *   Index of the queue to fetch the information.
+ * @param current_info
+ *   Pointer to the input peer information, not used currently.
+ * @param peer_info
+ *   Pointer to the structure to store the information, output.
+ * @param direction
+ *   Positive to get the RxQ information, zero to get the TxQ information.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
+			       struct rte_hairpin_peer_info *current_info,
+			       struct rte_hairpin_peer_info *peer_info,
+			       uint32_t direction)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	RTE_SET_USED(current_info);
+
+	if (dev->data->dev_started == 0) {
+		rte_errno = EBUSY;
+		DRV_LOG(ERR, "peer port %u is not started",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+	/*
+	 * Peer port used as egress. In the current design, hairpin Tx queue
+	 * will be bound to the peer Rx queue. Indeed, only the information of
+	 * peer Rx queue needs to be fetched.
+	 */
+	if (direction == 0) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+
+		txq_ctrl = mlx5_txq_get(dev, peer_queue);
+		if (txq_ctrl == NULL) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Tx queue %d",
+				dev->data->port_id, peer_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d is not a hairpin Txq",
+				dev->data->port_id, peer_queue);
+			mlx5_txq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->obj == NULL || txq_ctrl->obj->sq == NULL) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Txq object found: %d",
+				dev->data->port_id, peer_queue);
+			mlx5_txq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		peer_info->qp_id = txq_ctrl->obj->sq->id;
+		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
+		/* 1-to-1 mapping, only the first one is used. */
+		peer_info->peer_q = txq_ctrl->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
+		mlx5_txq_release(dev, peer_queue);
+	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+
+		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
+		if (rxq_ctrl == NULL) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
+				dev->data->port_id, peer_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
+				dev->data->port_id, peer_queue);
+			mlx5_rxq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Rxq object found: %d",
+				dev->data->port_id, peer_queue);
+			mlx5_rxq_release(dev, peer_queue);
+			return -rte_errno;
+		}
+		peer_info->qp_id = rxq_ctrl->obj->rq->id;
+		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
+		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
+		mlx5_rxq_release(dev, peer_queue);
+	}
+	return 0;
+}
+
+/*
+ * Bind the hairpin queue with the peer HW information.
+ * This needs to be called twice, once for the Tx queue and once for the Rx
+ * queue of a pair.
+ * If the queue is already bound, it is considered successful.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param cur_queue
+ *   Index of the queue to change the HW configuration to bind.
+ * @param peer_info
+ *   Pointer to information of the peer queue.
+ * @param direction
+ *   Positive to configure the TxQ, zero to configure the RxQ.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
+			     struct rte_hairpin_peer_info *peer_info,
+			     uint32_t direction)
+{
+	int ret = 0;
+
+	/*
+	 * Consistency checking of the peer queue: opposite direction is used
+	 * to get the peer queue info with ethdev port ID, no need to check.
+	 */
+	if (peer_info->peer_q != cur_queue) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u queue %d and peer queue %d mismatch",
+			dev->data->port_id, cur_queue, peer_info->peer_q);
+		return -rte_errno;
+	}
+	if (direction != 0) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+		struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
+
+		txq_ctrl = mlx5_txq_get(dev, cur_queue);
+		if (txq_ctrl == NULL) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Tx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->obj == NULL || txq_ctrl->obj->sq == NULL) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Txq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->hairpin_status != 0) {
+			DRV_LOG(DEBUG, "port %u Tx queue %d is already bound",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return 0;
+		}
+		/*
+		 * The consistency checking of all the queues of one port is
+		 * done in the bind() function, and that check is optional.
+		 */
+		if (peer_info->tx_explicit !=
+		    txq_ctrl->hairpin_conf.tx_explicit) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Tx queue %d and peer Tx rule mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (peer_info->manual_bind !=
+		    txq_ctrl->hairpin_conf.manual_bind) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Tx queue %d and peer binding mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		sq_attr.state = MLX5_SQC_STATE_RDY;
+		sq_attr.sq_state = MLX5_SQC_STATE_RST;
+		sq_attr.hairpin_peer_rq = peer_info->qp_id;
+		sq_attr.hairpin_peer_vhca = peer_info->vhca_id;
+		ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr);
+		if (ret == 0)
+			txq_ctrl->hairpin_status = 1;
+		mlx5_txq_release(dev, cur_queue);
+	} else {
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+
+		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
+		if (rxq_ctrl == NULL) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Rxq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->hairpin_status != 0) {
+			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return 0;
+		}
+		if (peer_info->tx_explicit !=
+		    rxq_ctrl->hairpin_conf.tx_explicit) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (peer_info->manual_bind !=
+		    rxq_ctrl->hairpin_conf.manual_bind) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
+				" mismatch", dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		rq_attr.state = MLX5_SQC_STATE_RDY;
+		rq_attr.rq_state = MLX5_SQC_STATE_RST;
+		rq_attr.hairpin_peer_sq = peer_info->qp_id;
+		rq_attr.hairpin_peer_vhca = peer_info->vhca_id;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+		if (ret == 0)
+			rxq_ctrl->hairpin_status = 1;
+		mlx5_rxq_release(dev, cur_queue);
+	}
+	return ret;
+}
+
+/*
+ * Unbind the hairpin queue and reset its HW configuration.
+ * This needs to be called twice, for both the Tx and Rx queues of a pair.
+ * If the queue is already unbound, it is considered a success.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param cur_queue
+ *   Index of the queue to change the HW configuration to unbind.
+ * @param direction
+ *   Positive to reset the TxQ, zero to reset the RxQ.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
+			       uint32_t direction)
+{
+	int ret = 0;
+
+	if (direction != 0) {
+		struct mlx5_txq_ctrl *txq_ctrl;
+		struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
+
+		txq_ctrl = mlx5_txq_get(dev, cur_queue);
+		if (txq_ctrl == NULL) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Tx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		/* Already unbound, return success before obj checking. */
+		if (txq_ctrl->hairpin_status == 0) {
+			DRV_LOG(DEBUG, "port %u Tx queue %d is already unbound",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return 0;
+		}
+		if (!txq_ctrl->obj || !txq_ctrl->obj->sq) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Txq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_txq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		sq_attr.state = MLX5_SQC_STATE_RST;
+		sq_attr.sq_state = MLX5_SQC_STATE_RST;
+		ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr);
+		if (ret == 0)
+			txq_ctrl->hairpin_status = 0;
+		mlx5_txq_release(dev, cur_queue);
+	} else {
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+
+		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
+		if (rxq_ctrl == NULL) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
+				dev->data->port_id, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+			rte_errno = EINVAL;
+			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		if (rxq_ctrl->hairpin_status == 0) {
+			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return 0;
+		}
+		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
+			rte_errno = ENOMEM;
+			DRV_LOG(ERR, "port %u no Rxq object found: %d",
+				dev->data->port_id, cur_queue);
+			mlx5_rxq_release(dev, cur_queue);
+			return -rte_errno;
+		}
+		rq_attr.state = MLX5_SQC_STATE_RST;
+		rq_attr.rq_state = MLX5_SQC_STATE_RST;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+		if (ret == 0)
+			rxq_ctrl->hairpin_status = 0;
+		mlx5_rxq_release(dev, cur_queue);
+	}
+	return ret;
+}
+
+/*
+ * Bind the hairpin port pair, from the Tx port to the peer Rx port.
+ * This function only supports binding the Tx port to a single Rx port.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param rx_port
+ *   Port identifier of the Rx port.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret = 0;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	uint32_t i;
+	struct rte_hairpin_peer_info peer = {0xffffff};
+	struct rte_hairpin_peer_info cur;
+	const struct rte_eth_hairpin_conf *conf;
+	uint16_t num_q = 0;
+	uint16_t local_port = priv->dev_data->port_id;
+	uint32_t manual;
+	uint32_t explicit;
+	uint16_t rx_queue;
+
+	if (mlx5_eth_find_next(rx_port, priv->pci_dev) != rx_port) {
+		rte_errno = ENODEV;
+		DRV_LOG(ERR, "Rx port %u does not belong to mlx5", rx_port);
+		return -rte_errno;
+	}
+	/*
+	 * Before binding a TxQ to its peer RxQ, a first pass over the queues
+	 * is used to check the configuration consistency. This is a little
+	 * time consuming but better than having to roll back afterwards.
+	 */
+	for (i = 0; i != priv->txqs_n; i++) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (txq_ctrl == NULL)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		/*
+		 * All hairpin Tx queues of a single port that are connected to
+		 * the same peer Rx port should have the same "auto binding"
+		 * and "implicit Tx flow" modes.
+		 * Peer consistency checking is done in the per-queue binding.
+		 */
+		conf = &txq_ctrl->hairpin_conf;
+		if (conf->peers[0].port == rx_port) {
+			if (num_q == 0) {
+				manual = conf->manual_bind;
+				explicit = conf->tx_explicit;
+			} else {
+				if (manual != conf->manual_bind ||
+				    explicit != conf->tx_explicit) {
+					rte_errno = EINVAL;
+					DRV_LOG(ERR, "port %u queue %d mode"
+						" mismatch: %u %u, %u %u",
+						local_port, i, manual,
+						conf->manual_bind, explicit,
+						conf->tx_explicit);
+					mlx5_txq_release(dev, i);
+					return -rte_errno;
+				}
+			}
+			num_q++;
+		}
+		mlx5_txq_release(dev, i);
+	}
+	/* If no queue is configured, success is returned directly. */
+	if (num_q == 0)
+		return ret;
+	/* All the hairpin TX queues need to be traversed again. */
+	for (i = 0; i != priv->txqs_n; i++) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (txq_ctrl == NULL)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		/*
+		 * Fetch peer RxQ's information.
+		 * No need to pass the information of the current queue.
+		 */
+		ret = rte_eth_hairpin_queue_peer_update(rx_port, rx_queue,
+							NULL, &peer, 1);
+		if (ret != 0) {
+			mlx5_txq_release(dev, i);
+			goto error;
+		}
+		/* Accessing its own device, inside mlx5 PMD. */
+		ret = mlx5_hairpin_queue_peer_bind(dev, i, &peer, 1);
+		if (ret != 0) {
+			mlx5_txq_release(dev, i);
+			goto error;
+		}
+		/* Pass TxQ's information to peer RxQ and try binding. */
+		cur.peer_q = rx_queue;
+		cur.qp_id = txq_ctrl->obj->sq->id;
+		cur.vhca_id = priv->config.hca_attr.vhca_id;
+		cur.tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
+		cur.manual_bind = txq_ctrl->hairpin_conf.manual_bind;
+		/*
+		 * In order to access another device in a proper way, an RTE
+		 * level private function is needed.
+		 */
+		ret = rte_eth_hairpin_queue_peer_bind(rx_port, rx_queue,
+						      &cur, 0);
+		if (ret != 0) {
+			mlx5_txq_release(dev, i);
+			goto error;
+		}
+		mlx5_txq_release(dev, i);
+	}
+	return 0;
+error:
+	/*
+	 * Roll back the queues that were already bound.
+	 * No need to check the return value of the queue unbind function.
+	 */
+	do {
+		/* No validation is needed here. */
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (txq_ctrl == NULL)
+			continue;
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 0);
+		mlx5_hairpin_queue_peer_unbind(dev, i, 1);
+		mlx5_txq_release(dev, i);
+	} while (i--);
+	return ret;
+}
+
+/*
+ * Unbind the hairpin port pair: the HW configuration of both devices will be
+ * cleared and the status will be reset for all the queues used between them.
+ * This function only supports unbinding the Tx port from a single Rx port.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param rx_port
+ *   Port identifier of the Rx port.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_hairpin_unbind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	uint32_t i;
+	int ret;
+	uint16_t cur_port = priv->dev_data->port_id;
+
+	if (mlx5_eth_find_next(rx_port, priv->pci_dev) != rx_port) {
+		rte_errno = ENODEV;
+		DRV_LOG(ERR, "Rx port %u does not belong to mlx5", rx_port);
+		return -rte_errno;
+	}
+	for (i = 0; i != priv->txqs_n; i++) {
+		uint16_t rx_queue;
+
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (txq_ctrl == NULL)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		/* Indeed, only the first used queue needs to be checked. */
+		if (txq_ctrl->hairpin_conf.manual_bind == 0) {
+			if (cur_port != rx_port) {
+				rte_errno = EINVAL;
+				DRV_LOG(ERR, "port %u and port %u are in"
+					" auto-bind mode", cur_port, rx_port);
+				mlx5_txq_release(dev, i);
+				return -rte_errno;
+			} else {
+				return 0;
+			}
+		}
+		rx_queue = txq_ctrl->hairpin_conf.peers[0].queue;
+		mlx5_txq_release(dev, i);
+		ret = rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 0);
+		if (ret) {
+			DRV_LOG(ERR, "port %u Rx queue %d unbind - failure",
+				rx_port, rx_queue);
+			return ret;
+		}
+		ret = mlx5_hairpin_queue_peer_unbind(dev, i, 1);
+		if (ret) {
+			DRV_LOG(ERR, "port %u Tx queue %d unbind - failure",
+				cur_port, i);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+/*
+ * Bind hairpin ports; Rx can be all ports when RTE_MAX_ETHPORTS is used.
+ * @see mlx5_hairpin_bind_single_port()
+ */
+int
+mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	int ret = 0;
+	uint16_t p, pp;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	/*
+	 * If the Rx port has no hairpin configuration with the current port,
+	 * the binding will be skipped in the single-port function called below.
+	 * The device started status will be checked only before the queue
+	 * information is updated.
+	 */
+	if (rx_port == RTE_MAX_ETHPORTS) {
+		MLX5_ETH_FOREACH_DEV(p, priv->pci_dev) {
+			ret = mlx5_hairpin_bind_single_port(dev, p);
+			if (ret != 0)
+				goto unbind;
+		}
+		return ret;
+	} else {
+		return mlx5_hairpin_bind_single_port(dev, rx_port);
+	}
+unbind:
+	MLX5_ETH_FOREACH_DEV(pp, priv->pci_dev)
+		if (pp < p)
+			mlx5_hairpin_unbind_single_port(dev, pp);
+	return ret;
+}
+
+/*
+ * Unbind hairpin ports; Rx can be all ports when RTE_MAX_ETHPORTS is used.
+ * @see mlx5_hairpin_unbind_single_port()
+ */
+int
+mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+	int ret = 0;
+	uint16_t p;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (rx_port == RTE_MAX_ETHPORTS)
+		MLX5_ETH_FOREACH_DEV(p, priv->pci_dev) {
+			ret = mlx5_hairpin_unbind_single_port(dev, p);
+			if (ret != 0)
+				return ret;
+		}
+	else
+		ret = mlx5_hairpin_unbind_single_port(dev, rx_port);
+	return ret;
+}
+
 /**
  * DPDK callback to start the device.
  *
@@ -336,7 +961,7 @@
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
-	ret = mlx5_hairpin_bind(dev);
+	ret = mlx5_hairpin_auto_bind(dev);
 	if (ret) {
 		DRV_LOG(ERR, "port %u hairpin binding failed: %s",
 			dev->data->port_id, strerror(rte_errno));
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 3/7] net/mlx5: add support to get hairpin peer ports
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: change hairpin queue peer checking Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add support for two ports hairpin mode Bing Zhao
@ 2020-10-26 16:37   ` Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 4/7] net/mlx5: conditional hairpin auto bind Bing Zhao
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In a real-life application, a device could be attached and detached
dynamically. The hairpin configuration of this port to/from all the
other ports should be enabled and disabled accordingly.

The RTE ethdev library and the PMD should provide the ability to get
the peer ports list in case the application does not save it. It is
recommended that the array used to save the port IDs be as large as
"RTE_MAX_ETHPORTS" to have the maximal capacity.

The order of the peer port IDs may differ from the order in which the
hairpin queues were set up in the initialization stage. The peer port
ID could be the same as the current device port ID when the hairpin
peer ports contain the port itself - the single port hairpin case.

The application should check the ports' status and decide if the
peer port should be bound / unbound when starting / stopping the
current device.
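
For illustration only (not part of this patch), a minimal sketch of how
an application could use this capability before unbinding, assuming the
rte_eth_hairpin_get_peer_ports() and rte_eth_hairpin_unbind() ethdev
calls introduced by the companion ethdev series; the direction value
follows the convention of the driver callback below (positive - the
current port is used as Tx):

#include <rte_ethdev.h>

/* Unbind all Tx-side hairpin peers of "port_id" before stopping it. */
static int
unbind_hairpin_tx_peers(uint16_t port_id)
{
	uint16_t peer_ports[RTE_MAX_ETHPORTS];
	int ret;
	int i;

	/* Direction 1: the current port acts as Tx, get all peer Rx ports. */
	ret = rte_eth_hairpin_get_peer_ports(port_id, peer_ports,
					     RTE_MAX_ETHPORTS, 1);
	if (ret < 0)
		return ret;
	for (i = 0; i < ret; i++) {
		/* Unbind the pair from the Tx port to each peer Rx port. */
		ret = rte_eth_hairpin_unbind(port_id, peer_ports[i]);
		if (ret != 0)
			return ret;
	}
	return 0;
}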

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  2 +
 drivers/net/mlx5/mlx5.h          |  2 +
 drivers/net/mlx5/mlx5_trigger.c  | 89 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 93 insertions(+)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index b791859..c890998 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2572,6 +2572,7 @@
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
 	.hairpin_bind = mlx5_hairpin_bind,
 	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_get_peer_ports = mlx5_hairpin_get_peer_ports,
 	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
 	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
 	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
@@ -2655,6 +2656,7 @@
 	.mtr_ops_get = mlx5_flow_meter_ops_get,
 	.hairpin_bind = mlx5_hairpin_bind,
 	.hairpin_unbind = mlx5_hairpin_unbind,
+	.hairpin_get_peer_ports = mlx5_hairpin_get_peer_ports,
 	.hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update,
 	.hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind,
 	.hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind,
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 010152c..c537af9 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1073,6 +1073,8 @@ int mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				   uint32_t direction);
 int mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port);
 int mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port);
+int mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
+				size_t len, uint32_t direction);
 
 /* mlx5_flow.c */
 
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f76122b..3f56592 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -910,6 +910,95 @@
 	return ret;
 }
 
+/*
+ * DPDK callback to get the hairpin peer ports list.
+ * This will return the actual number of peer ports and save the identifiers
+ * into the array (sorted; the order may differ from the one used when
+ * setting up the hairpin peer queues).
+ * The peer port ID could be the same as the port ID of the current device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param peer_ports
+ *   Pointer to array to save the port identifiers.
+ * @param len
+ *   The length of the array.
+ * @param direction
+ *   Current port to peer port direction.
+ *   positive - current used as Tx to get all peer Rx ports.
+ *   zero - current used as Rx to get all peer Tx ports.
+ *
+ * @return
+ *   0 or a positive value on success, the actual number of peer ports.
+ *   A negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
+			    size_t len, uint32_t direction)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_ctrl *txq_ctrl;
+	struct mlx5_rxq_ctrl *rxq_ctrl;
+	uint32_t i;
+	uint16_t pp;
+	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
+	int ret = 0;
+
+	if (direction) {
+		for (i = 0; i < priv->txqs_n; i++) {
+			txq_ctrl = mlx5_txq_get(dev, i);
+			if (!txq_ctrl)
+				continue;
+			if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+				mlx5_txq_release(dev, i);
+				continue;
+			}
+			pp = txq_ctrl->hairpin_conf.peers[0].port;
+			if (pp >= RTE_MAX_ETHPORTS) {
+				rte_errno = ERANGE;
+				mlx5_txq_release(dev, i);
+				DRV_LOG(ERR, "port %hu queue %u peer port "
+					"out of range %hu",
+					priv->dev_data->port_id, i, pp);
+				return -rte_errno;
+			}
+			bits[pp / 32] |= 1 << (pp % 32);
+			mlx5_txq_release(dev, i);
+		}
+	} else {
+		for (i = 0; i < priv->rxqs_n; i++) {
+			rxq_ctrl = mlx5_rxq_get(dev, i);
+			if (!rxq_ctrl)
+				continue;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+				mlx5_rxq_release(dev, i);
+				continue;
+			}
+			pp = rxq_ctrl->hairpin_conf.peers[0].port;
+			if (pp >= RTE_MAX_ETHPORTS) {
+				rte_errno = ERANGE;
+				mlx5_rxq_release(dev, i);
+				DRV_LOG(ERR, "port %hu queue %u peer port "
+					"out of range %hu",
+					priv->dev_data->port_id, i, pp);
+				return -rte_errno;
+			}
+			bits[pp / 32] |= 1 << (pp % 32);
+			mlx5_rxq_release(dev, i);
+		}
+	}
+	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+		if (bits[i / 32] & (1 << (i % 32))) {
+			if ((size_t)ret >= len) {
+				rte_errno = E2BIG;
+				return -rte_errno;
+			}
+			peer_ports[ret++] = i;
+		}
+	}
+	return ret;
+}
+
 /**
  * DPDK callback to start the device.
  *
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 4/7] net/mlx5: conditional hairpin auto bind
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (2 preceding siblings ...)
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: add support to get hairpin peer ports Bing Zhao
@ 2020-10-26 16:37   ` Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: change hairpin ingress flow validation Bing Zhao
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In single port hairpin mode, after the queues are configured during
start up, the binding process is enabled automatically in the port
start phase and the default control flow for egress is created.

When switching to two ports hairpin mode, the auto binding process
should be skipped if there is no Tx queue whose peer Rx queue is on
the same device, and it should also be skipped if the queues are
configured with the manual bind attribute.

If the explicit TX flow rule mode is configured or hairpin is
between two ports, the default control flows for TX queues should
not be created.
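
As an illustration only (not part of this patch), a sketch of how an
application might request manual binding and explicit Tx flows when
setting up a two-port hairpin queue pair; the port/queue indexes and
the descriptor count are placeholders and error handling is minimal:

#include <rte_ethdev.h>

/* Set up one hairpin queue pair: Rx on "rx_port", Tx on "tx_port". */
static int
setup_hairpin_pair(uint16_t rx_port, uint16_t rx_queue,
		   uint16_t tx_port, uint16_t tx_queue, uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.manual_bind = 1,	/* Skip the auto bind in dev_start(). */
		.tx_explicit = 1,	/* Application creates the Tx flows. */
	};
	int ret;

	/* The Rx queue's peer is the Tx queue on the other port. */
	conf.peers[0].port = tx_port;
	conf.peers[0].queue = tx_queue;
	ret = rte_eth_rx_hairpin_queue_setup(rx_port, rx_queue,
					     nb_desc, &conf);
	if (ret != 0)
		return ret;
	/* The Tx queue's peer is the Rx queue on the first port. */
	conf.peers[0].port = rx_port;
	conf.peers[0].queue = rx_queue;
	return rte_eth_tx_hairpin_queue_setup(tx_port, tx_queue,
					      nb_desc, &conf);
}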

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_trigger.c | 39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 3f56592..52691b6 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -218,6 +218,8 @@
 	struct mlx5_devx_obj *rq;
 	unsigned int i;
 	int ret = 0;
+	bool need_auto = false;
+	uint16_t self_port = dev->data->port_id;
 
 	for (i = 0; i != priv->txqs_n; ++i) {
 		txq_ctrl = mlx5_txq_get(dev, i);
@@ -227,6 +229,28 @@
 			mlx5_txq_release(dev, i);
 			continue;
 		}
+		/* Skip hairpin queues whose peer is another port. */
+		if (txq_ctrl->hairpin_conf.peers[0].port != self_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		if (txq_ctrl->hairpin_conf.manual_bind) {
+			mlx5_txq_release(dev, i);
+			return 0;
+		}
+		need_auto = true;
+		mlx5_txq_release(dev, i);
+	}
+	if (!need_auto)
+		return 0;
+	for (i = 0; i != priv->txqs_n; ++i) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
+		/* Skip hairpin queues with other peer ports. */
+		if (txq_ctrl->hairpin_conf.peers[0].port != self_port) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
 		if (!txq_ctrl->obj) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no txq object found: %d",
@@ -275,6 +299,9 @@
 		ret = mlx5_devx_cmd_modify_rq(rq, &rq_attr);
 		if (ret)
 			goto error;
+		/* Qs with auto-bind will be destroyed directly. */
+		rxq_ctrl->hairpin_status = 1;
+		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
 		mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	}
@@ -1050,9 +1077,13 @@
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
+	/*
+	 * This step will be skipped if there is no hairpin Tx queue configured
+	 * whose Rx peer queue is on the same device.
+	 */
 	ret = mlx5_hairpin_auto_bind(dev);
 	if (ret) {
-		DRV_LOG(ERR, "port %u hairpin binding failed: %s",
+		DRV_LOG(ERR, "port %u hairpin auto binding failed: %s",
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
@@ -1203,7 +1234,11 @@
 		struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+		/* Only Tx implicit mode requires the default Tx flow. */
+		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		    txq_ctrl->hairpin_conf.tx_explicit == 0 &&
+		    txq_ctrl->hairpin_conf.peers[0].port ==
+		    priv->dev_data->port_id) {
 			ret = mlx5_ctrl_flow_source_queue(dev, i);
 			if (ret) {
 				mlx5_txq_release(dev, i);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 5/7] net/mlx5: change hairpin ingress flow validation
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (3 preceding siblings ...)
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 4/7] net/mlx5: conditional hairpin auto bind Bing Zhao
@ 2020-10-26 16:37   ` Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In the current implementation of the single port hairpin, there is
an implicit splitting process for actions. When inserting a hairpin
flow, all the actions will be included with the ingress attribute.
The flow engine will check and decide which actions should be moved
into the TX flow part, e.g., encapsulation, VLAN push.

In some NICs, some actions can only be done in one direction. Since
the hairpin flow will be split into two parts, such validation will
be skipped.

With the hairpin explicit TX flow mode, no splitting is needed any
more. The hairpin flow may have no big difference from a standard
flow (except the queue). The application should take full charge of
the actions and the flow engine should validate the hairpin flow in
the same way as other flows.

In the meantime, a new internal API is added to get the hairpin
configuration. This bypasses an unnecessary atomic operation to save
CPU cycles.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 17 ++++++++++++++---
 drivers/net/mlx5/mlx5_rxq.c     | 27 +++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxtx.h    |  2 ++
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 504d842..62e1d19 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5310,6 +5310,7 @@ struct field_modify_info modify_tcp[] = {
 		.transfer = !!attr->transfer,
 		.fdb_def_rule = !!priv->fdb_def_rule,
 	};
+	const struct rte_eth_hairpin_conf *conf;
 
 	if (items == NULL)
 		return -1;
@@ -6155,11 +6156,18 @@ struct field_modify_info modify_tcp[] = {
 						  actions,
 						  "no fate action is found");
 	}
-	/* Continue validation for Xcap and VLAN actions.*/
+	/*
+	 * Continue validation for Xcap and VLAN actions.
+	 * If hairpin is working in explicit Tx rule mode, there is no action
+	 * splitting and the validation of a hairpin ingress flow should be the
+	 * same as for other standard flows.
+	 */
 	if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS |
 			     MLX5_FLOW_VLAN_ACTIONS)) &&
 	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) {
+	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN ||
+	     ((conf = mlx5_rxq_get_hairpin_conf(dev, queue_index)) != NULL &&
+	     conf->tx_explicit != 0))) {
 		if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
 		    MLX5_FLOW_XCAP_ACTIONS)
 			return rte_flow_error_set(error, ENOTSUP,
@@ -6188,7 +6196,10 @@ struct field_modify_info modify_tcp[] = {
 						 "multiple VLAN actions");
 		}
 	}
-	/* Hairpin flow will add one more TAG action. */
+	/*
+	 * Hairpin flow will add one more TAG action in TX implicit mode.
+	 * In TX explicit mode, there will be no hairpin flow ID.
+	 */
 	if (hairpin > 0)
 		rw_act_num += MLX5_ACT_NUM_SET_TAG;
 	/* extra metadata enabled: one more TAG action will be add. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 034f43e..493c5f2 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1845,6 +1845,33 @@ enum mlx5_rxq_type
 	return MLX5_RXQ_TYPE_UNDEFINED;
 }
 
+/*
+ * Get an Rx hairpin queue configuration.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Rx queue index.
+ *
+ * @return
+ *   Pointer to the configuration if it is a hairpin Rx queue, otherwise NULL.
+ */
+const struct rte_eth_hairpin_conf *
+mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+
+	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
+		rxq_ctrl = container_of((*priv->rxqs)[idx],
+					struct mlx5_rxq_ctrl,
+					rxq);
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			return &rxq_ctrl->hairpin_conf;
+	}
+	return NULL;
+}
+
 /**
  * Match queues listed in arguments to queues contained in indirection table
  * object.
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index cdc18e3..1b5fba4 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -360,6 +360,8 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
 int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
 int mlx5_hrxq_verify(struct rte_eth_dev *dev);
 enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
+const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
+	(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
 void mlx5_drop_action_destroy(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_port_offloads(void);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 6/7] net/mlx5: not split hairpin flow in explicit mode
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (4 preceding siblings ...)
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: change hairpin ingress flow validation Bing Zhao
@ 2020-10-26 16:37   ` Bing Zhao
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations Bing Zhao
  2020-10-26 22:42   ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Raslan Darawsheh
  7 siblings, 0 replies; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

In the current implementation, the hairpin flow will be split into
two flows implicitly if there is some action that only belongs to the
Tx part. A Tx device flow will be inserted by the mlx5 PMD itself.

For hairpin between two ports, the explicit Tx flow mode will be the
only one supported. It is not appropriate to insert a Tx flow into
another device implicitly. The application can create any flow it
likes and has full control of the user flows. Hairpin flows will be
no different from standard flows, and the application can decide how
to chain Rx and Tx flows together.

Even in the single port hairpin, this explicit Tx flow mode could
also be supported.

When checking if the hairpin flow needs to be split, the check will
just return if the hairpin queue has the "tx_explicit" attribute.
Then in the following steps for validation and translation, the code
path will be the same as that for standard flows.
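
As an illustration only (not from this patch), a minimal rte_flow sketch
of the Rx half of such a chain: an ingress rule on the Rx port steering
matched traffic to a hairpin queue. The match items, queue index and
port number are placeholders; in the explicit Tx flow mode the
application would also create the corresponding egress rule on the Tx
port (attr.egress = 1) itself, carrying any Tx-only actions it needs:

#include <rte_flow.h>

/* Steer ingress IPv4 traffic on "rx_port" to hairpin queue "hp_queue". */
static struct rte_flow *
create_hairpin_rx_flow(uint16_t rx_port, uint16_t hp_queue,
		       struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = hp_queue };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(rx_port, &attr, pattern, actions, error);
}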

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v3: remove unnecessary checking of hairpin queue type
---
 drivers/net/mlx5/mlx5_flow.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 949b9ce..4756cf9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -3618,6 +3618,7 @@ struct rte_flow_shared_action *
 	const struct rte_flow_action_queue *queue;
 	const struct rte_flow_action_rss *rss;
 	const struct rte_flow_action_raw_encap *raw_encap;
+	const struct rte_eth_hairpin_conf *conf;
 
 	if (!attr->ingress)
 		return 0;
@@ -3627,8 +3628,8 @@ struct rte_flow_shared_action *
 			queue = actions->conf;
 			if (queue == NULL)
 				return 0;
-			if (mlx5_rxq_get_type(dev, queue->index) !=
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			conf = mlx5_rxq_get_hairpin_conf(dev, queue->index);
+			if (conf != NULL && !!conf->tx_explicit)
 				return 0;
 			queue_action = 1;
 			action_n++;
@@ -3637,8 +3638,8 @@ struct rte_flow_shared_action *
 			rss = actions->conf;
 			if (rss == NULL || rss->queue_num == 0)
 				return 0;
-			if (mlx5_rxq_get_type(dev, rss->queue[0]) !=
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			conf = mlx5_rxq_get_hairpin_conf(dev, rss->queue[0]);
+			if (conf != NULL && !!conf->tx_explicit)
 				return 0;
 			queue_action = 1;
 			action_n++;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [dpdk-dev] [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (5 preceding siblings ...)
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
@ 2020-10-26 16:37   ` Bing Zhao
  2020-10-26 16:44     ` Slava Ovsiienko
  2020-10-26 22:42   ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Raslan Darawsheh
  7 siblings, 1 reply; 28+ messages in thread
From: Bing Zhao @ 2020-10-26 16:37 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland

Hairpin between two ports will be supported by mlx5 PMD.

The supported scenarios and limitations are listed in "mlx5.rst".
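
As an illustrative sketch only (not part of this documentation patch),
the manual binding sequence for two ports might look roughly as follows,
assuming the hairpin queues were already set up with the manual_bind and
tx_explicit attributes; the port numbers are placeholders:

#include <rte_ethdev.h>

/* Bind a two-port hairpin in both directions after starting the ports. */
static int
bind_two_port_hairpin(uint16_t port0, uint16_t port1)
{
	int ret;

	ret = rte_eth_dev_start(port0);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_start(port1);
	if (ret != 0)
		return ret;
	/* Bind port0 Tx hairpin queues to their peer Rx queues on port1. */
	ret = rte_eth_hairpin_bind(port0, port1);
	if (ret != 0)
		return ret;
	/* Bind the opposite direction as well, if it is configured. */
	return rte_eth_hairpin_bind(port1, port0);
}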

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/nics/mlx5.rst               | 5 +++++
 doc/guides/rel_notes/release_20_11.rst | 1 +
 2 files changed, 6 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 8dc7c62..ab5cc62 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -326,6 +326,11 @@ Limitations
   The last extension header item 'next header' field can specify the following
   header protocol type.
 
+- Hairpin:
+
+  - Hairpin between two ports only supports manual binding and the explicit Tx flow mode. For single port hairpin, all the combinations of auto/manual binding and explicit/implicit Tx flow mode are supported.
+  - Hairpin in switchdev SR-IOV mode is not supported so far.
+
 Statistics
 ----------
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f9ef4fe..1c5c2e0 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -354,6 +354,7 @@ New Features
   * Updated the supported timeout for Age action to the maximal value supported
     by rte_flow API.
   * Added support of Age action query.
+  * Added support of multi-port hairpin.
 
 * **Updated vhost sample application.**
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations Bing Zhao
@ 2020-10-26 16:44     ` Slava Ovsiienko
  0 siblings, 0 replies; 28+ messages in thread
From: Slava Ovsiienko @ 2020-10-26 16:44 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam, Raslan Darawsheh

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Monday, October 26, 2020 18:38
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations
> 
> Hairpin between two ports will be supported by mlx5 PMD.
> 
> The supported scenarios and limitations are listed in "mlx5.rst".
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD
  2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
                     ` (6 preceding siblings ...)
  2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations Bing Zhao
@ 2020-10-26 22:42   ` Raslan Darawsheh
  7 siblings, 0 replies; 28+ messages in thread
From: Raslan Darawsheh @ 2020-10-26 22:42 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, Ori Kam

Hi,

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Monday, October 26, 2020 6:38 PM
> To: viacheslavo@mellanox.com; matan@mellanox.com
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD
> 
> This patch set will add the support for hairpin between two ports in
> mlx5 PMD.
> 
> v2:
>   * Update the code and reorganize the patch set
> v3:
>   * Doc update
>   * fix code bugs and code style update
> 
> Bing Zhao (7):
>   net/mlx5: change hairpin queue peer checking
>   net/mlx5: add support for two ports hairpin mode
>   net/mlx5: add support to get hairpin peer ports
>   net/mlx5: conditional hairpin auto bind
>   net/mlx5: change hairpin ingress flow validation
>   net/mlx5: not split hairpin flow in explicit mode
>   doc: update mlx5 hairpin support and limitations
> 
>  doc/guides/nics/mlx5.rst               |   5 +
>  doc/guides/rel_notes/release_20_11.rst |   1 +
>  drivers/net/mlx5/linux/mlx5_os.c       |  12 +
>  drivers/net/mlx5/mlx5.h                |  21 +
>  drivers/net/mlx5/mlx5_flow.c           |   9 +-
>  drivers/net/mlx5/mlx5_flow_dv.c        |  17 +-
>  drivers/net/mlx5/mlx5_rxq.c            |  59 ++-
>  drivers/net/mlx5/mlx5_rxtx.h           |   4 +
>  drivers/net/mlx5/mlx5_trigger.c        | 757
> ++++++++++++++++++++++++++++++++-
>  drivers/net/mlx5/mlx5_txq.c            |  32 +-
>  10 files changed, 894 insertions(+), 23 deletions(-)
> 
> --
> 1.8.3.1


Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2020-10-26 22:42 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
2020-10-08 14:16 ` [dpdk-dev] [PATCH 1/4] net/mlx5: remove hairpin queue peer port checking Bing Zhao
2020-10-08 14:16 ` [dpdk-dev] [PATCH 2/4] net/mlx5: add support for two ports hairpin mode Bing Zhao
2020-10-08 14:16 ` [dpdk-dev] [PATCH 3/4] net/mlx5: conditional hairpin auto bind Bing Zhao
2020-10-08 14:17 ` [dpdk-dev] [PATCH 4/4] doc: update hairpin support for mlx5 driver Bing Zhao
2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking Bing Zhao
2020-10-26  9:28     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode Bing Zhao
2020-10-26  9:29     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports Bing Zhao
2020-10-26  9:29     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind Bing Zhao
2020-10-26  9:29     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation Bing Zhao
2020-10-26  9:30     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
2020-10-26  9:30     ` Slava Ovsiienko
2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: change hairpin queue peer checking Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add support for two ports hairpin mode Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: add support to get hairpin peer ports Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 4/7] net/mlx5: conditional hairpin auto bind Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: change hairpin ingress flow validation Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations Bing Zhao
2020-10-26 16:44     ` Slava Ovsiienko
2020-10-26 22:42   ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Raslan Darawsheh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).