DPDK patches and discussions
* [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver
@ 2020-01-09  3:15 Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 01/11] net/hns3: support different numbered Rx and Tx queues Wei Hu (Xavier)
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

This series includes updates and bug fixes for the hns3 ethernet PMD driver.

Chengwen Feng (1):
  net/hns3: fix triggering reset procedure in slave process

Hongbo Zheng (1):
  net/hns3: fix segment error when closing the port

Wei Hu (Xavier) (8):
  net/hns3: support different numbered Rx and Tx queues
  net/hns3: support setting VF MAC address by PF driver
  net/hns3: remove io rmb call in Rx operation
  net/hns3: add free thresh in Rx operation
  net/hns3: fix Rx queue search miss RAS err when recv BC pkt
  net/hns3: fix ring vector related mailbox command format
  net/hns3: fix dumping VF register information
  net/hns3: fix link status when failure in issuing command

Yisen Zhuang (1):
  net/hns3: reduce the judgements of free Tx ring space

 drivers/net/hns3/hns3_cmd.c       |   8 +-
 drivers/net/hns3/hns3_dcb.c       |  88 ++--
 drivers/net/hns3/hns3_dcb.h       |   4 +-
 drivers/net/hns3/hns3_ethdev.c    | 101 +++--
 drivers/net/hns3/hns3_ethdev.h    |  17 +-
 drivers/net/hns3/hns3_ethdev_vf.c | 183 ++++++--
 drivers/net/hns3/hns3_flow.c      |   9 +-
 drivers/net/hns3/hns3_mbx.c       |  14 +-
 drivers/net/hns3/hns3_mbx.h       |   1 +
 drivers/net/hns3/hns3_regs.c      |  29 +-
 drivers/net/hns3/hns3_rxtx.c      | 727 ++++++++++++++++++++++++------
 drivers/net/hns3/hns3_rxtx.h      |  14 +-
 drivers/net/hns3/hns3_stats.c     |   3 -
 13 files changed, 934 insertions(+), 264 deletions(-)

-- 
2.23.0



* [dpdk-dev] [PATCH 01/11] net/hns3: support different numbered Rx and Tx queues
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 02/11] net/hns3: support setting VF MAC address by PF driver Wei Hu (Xavier)
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Hardware does not support individually enabling/disabling/resetting the Tx
or Rx queue in the hns3 network engine; the driver must enable/disable/reset
Tx and Rx queues at the same time.

Currently, the hns3 PMD driver does not support the following scenarios:
1) When calling the following function, the input parameters nb_rx_q and
   nb_tx_q are not equal.
     rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q,
                           uint16_t nb_tx_q,
                           const struct rte_eth_conf *dev_conf);
2) When calling the following functions to set up queues, the cumulative
   number of Rx queues set up is not equal to that of Tx queues.
     rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, ...);
     rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, ...);
However, these are common usage scenarios in applications such as l3fwd,
ip_reassembly and OVS-DPDK.

This patch adds support for these usages by setting up fake Tx or Rx queues
to adjust the numbers of Tx/Rx queues. These fake queues are invisible to
upper applications and cannot be used by them.
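
For example, a minimal application-level sketch (a fragment, not a complete
program; the port and queue numbers are illustrative, "mbuf_pool" is assumed
to exist, and most error handling is omitted):

    /* Configure 4 Rx queues but only 2 Tx queues on port 0. With this
     * patch the driver internally creates 2 fake Tx queues so that the
     * hardware still sees equal Rx/Tx queue counts. */
    struct rte_eth_conf port_conf = {0};
    uint16_t q;

    if (rte_eth_dev_configure(0, 4, 2, &port_conf) < 0)
        rte_exit(EXIT_FAILURE, "cannot configure port 0\n");
    for (q = 0; q < 4; q++)
        rte_eth_rx_queue_setup(0, q, 1024, rte_eth_dev_socket_id(0),
                               NULL, mbuf_pool);
    for (q = 0; q < 2; q++)
        rte_eth_tx_queue_setup(0, q, 1024, rte_eth_dev_socket_id(0), NULL);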

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_dcb.c       |  88 ++--
 drivers/net/hns3/hns3_dcb.h       |   4 +-
 drivers/net/hns3/hns3_ethdev.c    |  56 +--
 drivers/net/hns3/hns3_ethdev.h    |  16 +-
 drivers/net/hns3/hns3_ethdev_vf.c |  68 +--
 drivers/net/hns3/hns3_flow.c      |   9 +-
 drivers/net/hns3/hns3_rxtx.c      | 675 +++++++++++++++++++++++++-----
 drivers/net/hns3/hns3_rxtx.h      |  11 +
 8 files changed, 734 insertions(+), 193 deletions(-)

diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 19235dfb9..369a40e6a 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -578,17 +578,33 @@ hns3_dcb_pri_shaper_cfg(struct hns3_hw *hw)
 }
 
 void
-hns3_tc_queue_mapping_cfg(struct hns3_hw *hw)
+hns3_set_rss_size(struct hns3_hw *hw, uint16_t nb_rx_q)
+{
+	uint16_t rx_qnum_per_tc;
+
+	rx_qnum_per_tc = nb_rx_q / hw->num_tc;
+	rx_qnum_per_tc = RTE_MIN(hw->rss_size_max, rx_qnum_per_tc);
+	if (hw->alloc_rss_size != rx_qnum_per_tc) {
+		hns3_info(hw, "rss size changes from %u to %u",
+			  hw->alloc_rss_size, rx_qnum_per_tc);
+		hw->alloc_rss_size = rx_qnum_per_tc;
+	}
+	hw->used_rx_queues = hw->num_tc * hw->alloc_rss_size;
+}
+
+void
+hns3_tc_queue_mapping_cfg(struct hns3_hw *hw, uint16_t nb_queue)
 {
 	struct hns3_tc_queue_info *tc_queue;
 	uint8_t i;
 
+	hw->tx_qnum_per_tc = nb_queue / hw->num_tc;
 	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
 		tc_queue = &hw->tc_queue[i];
 		if (hw->hw_tc_map & BIT(i) && i < hw->num_tc) {
 			tc_queue->enable = true;
-			tc_queue->tqp_offset = i * hw->alloc_rss_size;
-			tc_queue->tqp_count = hw->alloc_rss_size;
+			tc_queue->tqp_offset = i * hw->tx_qnum_per_tc;
+			tc_queue->tqp_count = hw->tx_qnum_per_tc;
 			tc_queue->tc = i;
 		} else {
 			/* Set to default queue if TC is disable */
@@ -598,30 +614,22 @@ hns3_tc_queue_mapping_cfg(struct hns3_hw *hw)
 			tc_queue->tc = 0;
 		}
 	}
+	hw->used_tx_queues = hw->num_tc * hw->tx_qnum_per_tc;
 }
 
 static void
-hns3_dcb_update_tc_queue_mapping(struct hns3_hw *hw, uint16_t queue_num)
+hns3_dcb_update_tc_queue_mapping(struct hns3_hw *hw, uint16_t nb_rx_q,
+				 uint16_t nb_tx_q)
 {
 	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
 	struct hns3_pf *pf = &hns->pf;
-	uint16_t tqpnum_per_tc;
-	uint16_t alloc_tqps;
-
-	alloc_tqps = RTE_MIN(hw->tqps_num, queue_num);
-	hw->num_tc = RTE_MIN(alloc_tqps, hw->dcb_info.num_tc);
-	tqpnum_per_tc = RTE_MIN(hw->rss_size_max, alloc_tqps / hw->num_tc);
 
-	if (hw->alloc_rss_size != tqpnum_per_tc) {
-		PMD_INIT_LOG(INFO, "rss size changes from %d to %d",
-			     hw->alloc_rss_size, tqpnum_per_tc);
-		hw->alloc_rss_size = tqpnum_per_tc;
-	}
-	hw->alloc_tqps = hw->num_tc * hw->alloc_rss_size;
+	hw->num_tc = hw->dcb_info.num_tc;
+	hns3_set_rss_size(hw, nb_rx_q);
+	hns3_tc_queue_mapping_cfg(hw, nb_tx_q);
 
-	hns3_tc_queue_mapping_cfg(hw);
-
-	memcpy(pf->prio_tc, hw->dcb_info.prio_tc, HNS3_MAX_USER_PRIO);
+	if (!hns->is_vf)
+		memcpy(pf->prio_tc, hw->dcb_info.prio_tc, HNS3_MAX_USER_PRIO);
 }
 
 int
@@ -1309,20 +1317,35 @@ hns3_dcb_info_cfg(struct hns3_adapter *hns)
 	for (i = 0; i < HNS3_MAX_USER_PRIO; i++)
 		hw->dcb_info.prio_tc[i] = dcb_rx_conf->dcb_tc[i];
 
-	hns3_dcb_update_tc_queue_mapping(hw, hw->data->nb_rx_queues);
+	hns3_dcb_update_tc_queue_mapping(hw, hw->data->nb_rx_queues,
+					 hw->data->nb_tx_queues);
 }
 
-static void
+static int
 hns3_dcb_info_update(struct hns3_adapter *hns, uint8_t num_tc)
 {
 	struct hns3_pf *pf = &hns->pf;
 	struct hns3_hw *hw = &hns->hw;
+	uint16_t nb_rx_q = hw->data->nb_rx_queues;
+	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	uint8_t bit_map = 0;
 	uint8_t i;
 
 	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE &&
 	    hw->dcb_info.num_pg != 1)
-		return;
+		return -EINVAL;
+
+	if (nb_rx_q < num_tc) {
+		hns3_err(hw, "number of Rx queues(%d) is less than tcs(%d).",
+			 nb_rx_q, num_tc);
+		return -EINVAL;
+	}
+
+	if (nb_tx_q < num_tc) {
+		hns3_err(hw, "number of Tx queues(%d) is less than tcs(%d).",
+			 nb_tx_q, num_tc);
+		return -EINVAL;
+	}
 
 	/* Currently not support uncontinuous tc */
 	hw->dcb_info.num_tc = num_tc;
@@ -1333,10 +1356,10 @@ hns3_dcb_info_update(struct hns3_adapter *hns, uint8_t num_tc)
 		bit_map = 1;
 		hw->dcb_info.num_tc = 1;
 	}
-
 	hw->hw_tc_map = bit_map;
-
 	hns3_dcb_info_cfg(hns);
+
+	return 0;
 }
 
 static int
@@ -1422,10 +1445,15 @@ hns3_dcb_configure(struct hns3_adapter *hns)
 
 	hns3_dcb_cfg_validate(hns, &num_tc, &map_changed);
 	if (map_changed || rte_atomic16_read(&hw->reset.resetting)) {
-		hns3_dcb_info_update(hns, num_tc);
+		ret = hns3_dcb_info_update(hns, num_tc);
+		if (ret) {
+			hns3_err(hw, "dcb info update failed: %d", ret);
+			return ret;
+		}
+
 		ret = hns3_dcb_hw_configure(hns);
 		if (ret) {
-			hns3_err(hw, "dcb sw configure fails: %d", ret);
+			hns3_err(hw, "dcb sw configure failed: %d", ret);
 			return ret;
 		}
 	}
@@ -1479,7 +1507,8 @@ hns3_dcb_init(struct hns3_hw *hw)
 			hns3_err(hw, "dcb info init failed: %d", ret);
 			return ret;
 		}
-		hns3_dcb_update_tc_queue_mapping(hw, hw->tqps_num);
+		hns3_dcb_update_tc_queue_mapping(hw, hw->tqps_num,
+						 hw->tqps_num);
 	}
 
 	/*
@@ -1502,10 +1531,11 @@ static int
 hns3_update_queue_map_configure(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
-	uint16_t queue_num = hw->data->nb_rx_queues;
+	uint16_t nb_rx_q = hw->data->nb_rx_queues;
+	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	int ret;
 
-	hns3_dcb_update_tc_queue_mapping(hw, queue_num);
+	hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
 	ret = hns3_q_to_qs_map(hw);
 	if (ret) {
 		hns3_err(hw, "failed to map nq to qs! ret = %d", ret);
diff --git a/drivers/net/hns3/hns3_dcb.h b/drivers/net/hns3/hns3_dcb.h
index 9ec4e704b..9c2c5f21c 100644
--- a/drivers/net/hns3/hns3_dcb.h
+++ b/drivers/net/hns3/hns3_dcb.h
@@ -159,7 +159,9 @@ hns3_fc_enable(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf);
 int
 hns3_dcb_pfc_enable(struct rte_eth_dev *dev, struct rte_eth_pfc_conf *pfc_conf);
 
-void hns3_tc_queue_mapping_cfg(struct hns3_hw *hw);
+void hns3_set_rss_size(struct hns3_hw *hw, uint16_t nb_rx_q);
+
+void hns3_tc_queue_mapping_cfg(struct hns3_hw *hw, uint16_t nb_queue);
 
 int hns3_dcb_cfg_update(struct hns3_adapter *hns);
 
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 49aef7dbc..800fa47cc 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2061,10 +2061,11 @@ hns3_bind_ring_with_vector(struct rte_eth_dev *dev, uint8_t vector_id,
 static int
 hns3_dev_configure(struct rte_eth_dev *dev)
 {
-	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	struct hns3_adapter *hns = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 	enum rte_eth_rx_mq_mode mq_mode = conf->rxmode.mq_mode;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
 	uint16_t nb_rx_q = dev->data->nb_rx_queues;
 	uint16_t nb_tx_q = dev->data->nb_tx_queues;
 	struct rte_eth_rss_conf rss_conf;
@@ -2072,23 +2073,28 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	int ret;
 
 	/*
-	 * Hardware does not support where the number of rx and tx queues is
-	 * not equal in hip08.
+	 * Hardware does not support individually enabling/disabling/resetting
+	 * the Tx or Rx queue in the hns3 network engine. The driver must
+	 * enable/disable/reset Tx and Rx queues at the same time. When the
+	 * number of Tx queues allocated by upper applications is not equal to
+	 * the number of Rx queues, the driver needs to set up fake Tx or Rx
+	 * queues to adjust the numbers of Tx/Rx queues. Otherwise, the network
+	 * engine cannot work as usual. These fake queues are invisible to
+	 * upper applications and cannot be used by them.
 	 */
-	if (nb_rx_q != nb_tx_q) {
-		hns3_err(hw,
-			 "nb_rx_queues(%u) not equal with nb_tx_queues(%u)! "
-			 "Hardware does not support this configuration!",
-			 nb_rx_q, nb_tx_q);
-		return -EINVAL;
+	ret = hns3_set_fake_rx_or_tx_queues(dev, nb_rx_q, nb_tx_q);
+	if (ret) {
+		hns3_err(hw, "Failed to set rx/tx fake queues: %d", ret);
+		return ret;
 	}
 
+	hw->adapter_state = HNS3_NIC_CONFIGURING;
 	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto cfg_err;
 	}
 
-	hw->adapter_state = HNS3_NIC_CONFIGURING;
 	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
 		ret = hns3_check_dcb_cfg(dev);
 		if (ret)
@@ -2134,7 +2140,9 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 
 cfg_err:
+	(void)hns3_set_fake_rx_or_tx_queues(dev, 0, 0);
 	hw->adapter_state = HNS3_NIC_INITIALIZED;
+
 	return ret;
 }
 
@@ -4084,7 +4092,7 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
 	/* check and configure queue intr-vector mapping */
 	if (rte_intr_cap_multiple(intr_handle) ||
 	    !RTE_ETH_DEV_SRIOV(dev).active) {
-		intr_vector = dev->data->nb_rx_queues;
+		intr_vector = hw->used_rx_queues;
 		/* creates event fd for each intr vector when MSIX is used */
 		if (rte_intr_efd_enable(intr_handle, intr_vector))
 			return -EINVAL;
@@ -4092,10 +4100,10 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
 	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
 		intr_handle->intr_vec =
 			rte_zmalloc("intr_vec",
-				    dev->data->nb_rx_queues * sizeof(int), 0);
+				    hw->used_rx_queues * sizeof(int), 0);
 		if (intr_handle->intr_vec == NULL) {
 			hns3_err(hw, "Failed to allocate %d rx_queues"
-				     " intr_vec", dev->data->nb_rx_queues);
+				     " intr_vec", hw->used_rx_queues);
 			ret = -ENOMEM;
 			goto alloc_intr_vec_error;
 		}
@@ -4106,7 +4114,7 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
 		base = RTE_INTR_VEC_RXTX_OFFSET;
 	}
 	if (rte_intr_dp_is_en(intr_handle)) {
-		for (q_id = 0; q_id < dev->data->nb_rx_queues; q_id++) {
+		for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
 			ret = hns3_bind_ring_with_vector(dev, vec, true, q_id);
 			if (ret)
 				goto bind_vector_error;
@@ -4190,6 +4198,8 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
 	uint8_t base = 0;
 	uint8_t vec = 0;
 	uint16_t q_id;
@@ -4203,7 +4213,7 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
 		base = RTE_INTR_VEC_RXTX_OFFSET;
 	}
 	if (rte_intr_dp_is_en(intr_handle)) {
-		for (q_id = 0; q_id < dev->data->nb_rx_queues; q_id++) {
+		for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
 			(void)hns3_bind_ring_with_vector(dev, vec, false, q_id);
 			if (vec < base + intr_handle->nb_efd - 1)
 				vec++;
@@ -4446,15 +4456,13 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	for (i = 0; i < dcb_info->nb_tcs; i++)
 		dcb_info->tc_bws[i] = hw->dcb_info.pg_info[0].tc_dwrr[i];
 
-	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
-		dcb_info->tc_queue.tc_rxq[0][i].base =
-					hw->tc_queue[i].tqp_offset;
+	for (i = 0; i < hw->num_tc; i++) {
+		dcb_info->tc_queue.tc_rxq[0][i].base = hw->alloc_rss_size * i;
 		dcb_info->tc_queue.tc_txq[0][i].base =
-					hw->tc_queue[i].tqp_offset;
-		dcb_info->tc_queue.tc_rxq[0][i].nb_queue =
-					hw->tc_queue[i].tqp_count;
+						hw->tc_queue[i].tqp_offset;
+		dcb_info->tc_queue.tc_rxq[0][i].nb_queue = hw->alloc_rss_size;
 		dcb_info->tc_queue.tc_txq[0][i].nb_queue =
-					hw->tc_queue[i].tqp_count;
+						hw->tc_queue[i].tqp_count;
 	}
 	rte_spinlock_unlock(&hw->lock);
 
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 7422706a8..2aa4c3cd7 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -153,6 +153,12 @@ struct hns3_mac {
 	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
 };
 
+struct hns3_fake_queue_data {
+	void **rx_queues; /* Array of pointers to fake RX queues. */
+	void **tx_queues; /* Array of pointers to fake TX queues. */
+	uint16_t nb_fake_rx_queues; /* Number of fake RX queues. */
+	uint16_t nb_fake_tx_queues; /* Number of fake TX queues. */
+};
 
 /* Primary process maintains driver state in main thread.
  *
@@ -365,8 +371,14 @@ struct hns3_hw {
 	struct hns3_dcb_info dcb_info;
 	enum hns3_fc_status current_fc_status; /* current flow control status */
 	struct hns3_tc_queue_info tc_queue[HNS3_MAX_TC_NUM];
-	uint16_t alloc_tqps;
-	uint16_t alloc_rss_size;    /* Queue number per TC */
+	uint16_t used_rx_queues;
+	uint16_t used_tx_queues;
+
+	/* Config max queue numbers between rx and tx queues from user */
+	uint16_t cfg_max_queues;
+	struct hns3_fake_queue_data fkq_data;     /* fake queue data */
+	uint16_t alloc_rss_size;    /* RX queue number per TC */
+	uint16_t tx_qnum_per_tc;    /* TX queue number per TC */
 
 	uint32_t flag;
 	/*
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 10969011b..71e358e81 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -428,24 +428,28 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	int ret;
 
 	/*
-	 * Hardware does not support where the number of rx and tx queues is
-	 * not equal in hip08.
+	 * Hardware does not support individually enabling/disabling/resetting
+	 * the Tx or Rx queue in the hns3 network engine. The driver must
+	 * enable/disable/reset Tx and Rx queues at the same time. When the
+	 * number of Tx queues allocated by upper applications is not equal to
+	 * the number of Rx queues, the driver needs to set up fake Tx or Rx
+	 * queues to adjust the numbers of Tx/Rx queues. Otherwise, the network
+	 * engine cannot work as usual. These fake queues are invisible to
+	 * upper applications and cannot be used by them.
 	 */
-	if (nb_rx_q != nb_tx_q) {
-		hns3_err(hw,
-			 "nb_rx_queues(%u) not equal with nb_tx_queues(%u)! "
-			 "Hardware does not support this configuration!",
-			 nb_rx_q, nb_tx_q);
-		return -EINVAL;
+	ret = hns3_set_fake_rx_or_tx_queues(dev, nb_rx_q, nb_tx_q);
+	if (ret) {
+		hns3_err(hw, "Failed to set rx/tx fake queues: %d", ret);
+		return ret;
 	}
 
+	hw->adapter_state = HNS3_NIC_CONFIGURING;
 	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto cfg_err;
 	}
 
-	hw->adapter_state = HNS3_NIC_CONFIGURING;
-
 	/* When RSS is not configured, redirect the packet queue 0 */
 	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		rss_conf = conf->rx_adv_conf.rss_conf;
@@ -484,7 +488,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 
 cfg_err:
+	(void)hns3_set_fake_rx_or_tx_queues(dev, 0, 0);
 	hw->adapter_state = HNS3_NIC_INITIALIZED;
+
 	return ret;
 }
 
@@ -799,12 +805,12 @@ hns3vf_get_configuration(struct hns3_hw *hw)
 	return hns3vf_get_tc_info(hw);
 }
 
-static void
+static int
 hns3vf_set_tc_info(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
 	uint16_t nb_rx_q = hw->data->nb_rx_queues;
-	uint16_t new_tqps;
+	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	uint8_t i;
 
 	hw->num_tc = 0;
@@ -812,11 +818,22 @@ hns3vf_set_tc_info(struct hns3_adapter *hns)
 		if (hw->hw_tc_map & BIT(i))
 			hw->num_tc++;
 
-	new_tqps = RTE_MIN(hw->tqps_num, nb_rx_q);
-	hw->alloc_rss_size = RTE_MIN(hw->rss_size_max, new_tqps / hw->num_tc);
-	hw->alloc_tqps = hw->alloc_rss_size * hw->num_tc;
+	if (nb_rx_q < hw->num_tc) {
+		hns3_err(hw, "number of Rx queues(%d) is less than tcs(%d).",
+			 nb_rx_q, hw->num_tc);
+		return -EINVAL;
+	}
+
+	if (nb_tx_q < hw->num_tc) {
+		hns3_err(hw, "number of Tx queues(%d) is less than tcs(%d).",
+			 nb_tx_q, hw->num_tc);
+		return -EINVAL;
+	}
 
-	hns3_tc_queue_mapping_cfg(hw);
+	hns3_set_rss_size(hw, nb_rx_q);
+	hns3_tc_queue_mapping_cfg(hw, nb_tx_q);
+
+	return 0;
 }
 
 static void
@@ -1256,6 +1273,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 static void
 hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
 {
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint8_t base = 0;
@@ -1271,7 +1289,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
 		base = RTE_INTR_VEC_RXTX_OFFSET;
 	}
 	if (rte_intr_dp_is_en(intr_handle)) {
-		for (q_id = 0; q_id < dev->data->nb_rx_queues; q_id++) {
+		for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
 			(void)hns3vf_bind_ring_with_vector(dev, vec, false,
 							   q_id);
 			if (vec < base + intr_handle->nb_efd - 1)
@@ -1381,7 +1399,9 @@ hns3vf_do_start(struct hns3_adapter *hns, bool reset_queue)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	hns3vf_set_tc_info(hns);
+	ret = hns3vf_set_tc_info(hns);
+	if (ret)
+		return ret;
 
 	ret = hns3_start_queues(hns, reset_queue);
 	if (ret) {
@@ -1412,8 +1432,8 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
 
 	/* check and configure queue intr-vector mapping */
 	if (rte_intr_cap_multiple(intr_handle) ||
-		!RTE_ETH_DEV_SRIOV(dev).active) {
-		intr_vector = dev->data->nb_rx_queues;
+	    !RTE_ETH_DEV_SRIOV(dev).active) {
+		intr_vector = hw->used_rx_queues;
 		/* It creates event fd for each intr vector when MSIX is used */
 		if (rte_intr_efd_enable(intr_handle, intr_vector))
 			return -EINVAL;
@@ -1421,10 +1441,10 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
 	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
 		intr_handle->intr_vec =
 			rte_zmalloc("intr_vec",
-				    dev->data->nb_rx_queues * sizeof(int), 0);
+				    hw->used_rx_queues * sizeof(int), 0);
 		if (intr_handle->intr_vec == NULL) {
 			hns3_err(hw, "Failed to allocate %d rx_queues"
-				     " intr_vec", dev->data->nb_rx_queues);
+				     " intr_vec", hw->used_rx_queues);
 			ret = -ENOMEM;
 			goto vf_alloc_intr_vec_error;
 		}
@@ -1435,7 +1455,7 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
 		base = RTE_INTR_VEC_RXTX_OFFSET;
 	}
 	if (rte_intr_dp_is_en(intr_handle)) {
-		for (q_id = 0; q_id < dev->data->nb_rx_queues; q_id++) {
+		for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
 			ret = hns3vf_bind_ring_with_vector(dev, vec, true,
 							   q_id);
 			if (ret)
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index bcd121f48..aa614175d 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -224,14 +224,19 @@ hns3_handle_action_queue(struct rte_eth_dev *dev,
 			 struct rte_flow_error *error)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
-	struct hns3_hw *hw = &hns->hw;
 	const struct rte_flow_action_queue *queue;
+	struct hns3_hw *hw = &hns->hw;
 
 	queue = (const struct rte_flow_action_queue *)action->conf;
-	if (queue->index >= hw->data->nb_rx_queues)
+	if (queue->index >= hw->used_rx_queues) {
+		hns3_err(hw, "queue ID(%d) is greater than number of "
+			  "available queues (%d) in driver.",
+			  queue->index, hw->used_rx_queues);
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, action,
 					  "Invalid queue ID in PF");
+	}
+
 	rule->queue_id = queue->index;
 	rule->action = HNS3_FD_ACTION_ACCEPT_PACKET;
 	return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 003a5bde4..3d13ed526 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -37,6 +37,7 @@ hns3_rx_queue_release_mbufs(struct hns3_rx_queue *rxq)
 {
 	uint16_t i;
 
+	/* Note: Fake rx queue will not enter here */
 	if (rxq->sw_ring) {
 		for (i = 0; i < rxq->nb_rx_desc; i++) {
 			if (rxq->sw_ring[i].mbuf) {
@@ -52,6 +53,7 @@ hns3_tx_queue_release_mbufs(struct hns3_tx_queue *txq)
 {
 	uint16_t i;
 
+	/* Note: Fake tx queue will not enter here */
 	if (txq->sw_ring) {
 		for (i = 0; i < txq->nb_tx_desc; i++) {
 			if (txq->sw_ring[i].mbuf) {
@@ -120,22 +122,115 @@ hns3_dev_tx_queue_release(void *queue)
 	rte_spinlock_unlock(&hns->hw.lock);
 }
 
-void
-hns3_free_all_queues(struct rte_eth_dev *dev)
+static void
+hns3_fake_rx_queue_release(struct hns3_rx_queue *queue)
+{
+	struct hns3_rx_queue *rxq = queue;
+	struct hns3_adapter *hns;
+	struct hns3_hw *hw;
+	uint16_t idx;
+
+	if (rxq == NULL)
+		return;
+
+	hns = rxq->hns;
+	hw = &hns->hw;
+	idx = rxq->queue_id;
+	if (hw->fkq_data.rx_queues[idx]) {
+		hns3_rx_queue_release(hw->fkq_data.rx_queues[idx]);
+		hw->fkq_data.rx_queues[idx] = NULL;
+	}
+
+	/* free fake rx queue arrays */
+	if (idx == (hw->fkq_data.nb_fake_rx_queues - 1)) {
+		hw->fkq_data.nb_fake_rx_queues = 0;
+		rte_free(hw->fkq_data.rx_queues);
+		hw->fkq_data.rx_queues = NULL;
+	}
+}
+
+static void
+hns3_fake_tx_queue_release(struct hns3_tx_queue *queue)
 {
+	struct hns3_tx_queue *txq = queue;
+	struct hns3_adapter *hns;
+	struct hns3_hw *hw;
+	uint16_t idx;
+
+	if (txq == NULL)
+		return;
+
+	hns = txq->hns;
+	hw = &hns->hw;
+	idx = txq->queue_id;
+	if (hw->fkq_data.tx_queues[idx]) {
+		hns3_tx_queue_release(hw->fkq_data.tx_queues[idx]);
+		hw->fkq_data.tx_queues[idx] = NULL;
+	}
+
+	/* free fake tx queue arrays */
+	if (idx == (hw->fkq_data.nb_fake_tx_queues - 1)) {
+		hw->fkq_data.nb_fake_tx_queues = 0;
+		rte_free(hw->fkq_data.tx_queues);
+		hw->fkq_data.tx_queues = NULL;
+	}
+}
+
+static void
+hns3_free_rx_queues(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_fake_queue_data *fkq_data;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t nb_rx_q;
 	uint16_t i;
 
-	if (dev->data->rx_queues)
-		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+	nb_rx_q = hw->data->nb_rx_queues;
+	for (i = 0; i < nb_rx_q; i++) {
+		if (dev->data->rx_queues[i]) {
 			hns3_rx_queue_release(dev->data->rx_queues[i]);
 			dev->data->rx_queues[i] = NULL;
 		}
+	}
+
+	/* Free fake Rx queues */
+	fkq_data = &hw->fkq_data;
+	for (i = 0; i < fkq_data->nb_fake_rx_queues; i++) {
+		if (fkq_data->rx_queues[i])
+			hns3_fake_rx_queue_release(fkq_data->rx_queues[i]);
+	}
+}
 
-	if (dev->data->tx_queues)
-		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+static void
+hns3_free_tx_queues(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_fake_queue_data *fkq_data;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t nb_tx_q;
+	uint16_t i;
+
+	nb_tx_q = hw->data->nb_tx_queues;
+	for (i = 0; i < nb_tx_q; i++) {
+		if (dev->data->tx_queues[i]) {
 			hns3_tx_queue_release(dev->data->tx_queues[i]);
 			dev->data->tx_queues[i] = NULL;
 		}
+	}
+
+	/* Free fake Tx queues */
+	fkq_data = &hw->fkq_data;
+	for (i = 0; i < fkq_data->nb_fake_tx_queues; i++) {
+		if (fkq_data->tx_queues[i])
+			hns3_fake_tx_queue_release(fkq_data->tx_queues[i]);
+	}
+}
+
+void
+hns3_free_all_queues(struct rte_eth_dev *dev)
+{
+	hns3_free_rx_queues(dev);
+	hns3_free_tx_queues(dev);
 }
 
 static int
@@ -223,17 +318,26 @@ hns3_init_tx_queue_hw(struct hns3_tx_queue *txq)
 static void
 hns3_enable_all_queues(struct hns3_hw *hw, bool en)
 {
+	uint16_t nb_rx_q = hw->data->nb_rx_queues;
+	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	struct hns3_rx_queue *rxq;
 	struct hns3_tx_queue *txq;
 	uint32_t rcb_reg;
 	int i;
 
-	for (i = 0; i < hw->data->nb_rx_queues; i++) {
-		rxq = hw->data->rx_queues[i];
-		txq = hw->data->tx_queues[i];
+	for (i = 0; i < hw->cfg_max_queues; i++) {
+		if (i < nb_rx_q)
+			rxq = hw->data->rx_queues[i];
+		else
+			rxq = hw->fkq_data.rx_queues[i - nb_rx_q];
+		if (i < nb_tx_q)
+			txq = hw->data->tx_queues[i];
+		else
+			txq = hw->fkq_data.tx_queues[i - nb_tx_q];
 		if (rxq == NULL || txq == NULL ||
 		    (en && (rxq->rx_deferred_start || txq->tx_deferred_start)))
 			continue;
+
 		rcb_reg = hns3_read_dev(rxq, HNS3_RING_EN_REG);
 		if (en)
 			rcb_reg |= BIT(HNS3_RING_EN_B);
@@ -382,10 +486,9 @@ int
 hns3_reset_all_queues(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
-	int ret;
-	uint16_t i;
+	int ret, i;
 
-	for (i = 0; i < hw->data->nb_rx_queues; i++) {
+	for (i = 0; i < hw->cfg_max_queues; i++) {
 		ret = hns3_reset_queue(hns, i);
 		if (ret) {
 			hns3_err(hw, "Failed to reset No.%d queue: %d", i, ret);
@@ -445,12 +548,11 @@ hns3_dev_rx_queue_start(struct hns3_adapter *hns, uint16_t idx)
 
 	PMD_INIT_FUNC_TRACE();
 
-	rxq = hw->data->rx_queues[idx];
-
+	rxq = (struct hns3_rx_queue *)hw->data->rx_queues[idx];
 	ret = hns3_alloc_rx_queue_mbufs(hw, rxq);
 	if (ret) {
 		hns3_err(hw, "Failed to alloc mbuf for No.%d rx queue: %d",
-			    idx, ret);
+			 idx, ret);
 		return ret;
 	}
 
@@ -462,15 +564,24 @@ hns3_dev_rx_queue_start(struct hns3_adapter *hns, uint16_t idx)
 }
 
 static void
-hns3_dev_tx_queue_start(struct hns3_adapter *hns, uint16_t idx)
+hns3_fake_rx_queue_start(struct hns3_adapter *hns, uint16_t idx)
 {
 	struct hns3_hw *hw = &hns->hw;
-	struct hns3_tx_queue *txq;
+	struct hns3_rx_queue *rxq;
+
+	rxq = (struct hns3_rx_queue *)hw->fkq_data.rx_queues[idx];
+	rxq->next_to_use = 0;
+	rxq->next_to_clean = 0;
+	hns3_init_rx_queue_hw(rxq);
+}
+
+static void
+hns3_init_tx_queue(struct hns3_tx_queue *queue)
+{
+	struct hns3_tx_queue *txq = queue;
 	struct hns3_desc *desc;
 	int i;
 
-	txq = hw->data->tx_queues[idx];
-
 	/* Clear tx bd */
 	desc = txq->tx_ring;
 	for (i = 0; i < txq->nb_tx_desc; i++) {
@@ -480,10 +591,30 @@ hns3_dev_tx_queue_start(struct hns3_adapter *hns, uint16_t idx)
 
 	txq->next_to_use = 0;
 	txq->next_to_clean = 0;
-	txq->tx_bd_ready   = txq->nb_tx_desc;
+	txq->tx_bd_ready = txq->nb_tx_desc;
 	hns3_init_tx_queue_hw(txq);
 }
 
+static void
+hns3_dev_tx_queue_start(struct hns3_adapter *hns, uint16_t idx)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tx_queue *txq;
+
+	txq = (struct hns3_tx_queue *)hw->data->tx_queues[idx];
+	hns3_init_tx_queue(txq);
+}
+
+static void
+hns3_fake_tx_queue_start(struct hns3_adapter *hns, uint16_t idx)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tx_queue *txq;
+
+	txq = (struct hns3_tx_queue *)hw->fkq_data.tx_queues[idx];
+	hns3_init_tx_queue(txq);
+}
+
 static void
 hns3_init_tx_ring_tc(struct hns3_adapter *hns)
 {
@@ -500,7 +631,7 @@ hns3_init_tx_ring_tc(struct hns3_adapter *hns)
 
 		for (j = 0; j < tc_queue->tqp_count; j++) {
 			num = tc_queue->tqp_offset + j;
-			txq = hw->data->tx_queues[num];
+			txq = (struct hns3_tx_queue *)hw->data->tx_queues[num];
 			if (txq == NULL)
 				continue;
 
@@ -509,16 +640,13 @@ hns3_init_tx_ring_tc(struct hns3_adapter *hns)
 	}
 }
 
-int
-hns3_start_queues(struct hns3_adapter *hns, bool reset_queue)
+static int
+hns3_start_rx_queues(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
-	struct rte_eth_dev_data *dev_data = hw->data;
 	struct hns3_rx_queue *rxq;
-	struct hns3_tx_queue *txq;
+	int i, j;
 	int ret;
-	int i;
-	int j;
 
 	/* Initialize RSS for queues */
 	ret = hns3_config_rss(hns);
@@ -527,49 +655,85 @@ hns3_start_queues(struct hns3_adapter *hns, bool reset_queue)
 		return ret;
 	}
 
-	if (reset_queue) {
-		ret = hns3_reset_all_queues(hns);
-		if (ret) {
-			hns3_err(hw, "Failed to reset all queues %d", ret);
-			return ret;
-		}
-	}
-
-	/*
-	 * Hardware does not support where the number of rx and tx queues is
-	 * not equal in hip08. In .dev_configure callback function we will
-	 * check the two values, here we think that the number of rx and tx
-	 * queues is equal.
-	 */
 	for (i = 0; i < hw->data->nb_rx_queues; i++) {
-		rxq = dev_data->rx_queues[i];
-		txq = dev_data->tx_queues[i];
-		if (rxq == NULL || txq == NULL || rxq->rx_deferred_start ||
-		    txq->tx_deferred_start)
+		rxq = (struct hns3_rx_queue *)hw->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
 			continue;
-
 		ret = hns3_dev_rx_queue_start(hns, i);
 		if (ret) {
 			hns3_err(hw, "Failed to start No.%d rx queue: %d", i,
 				 ret);
 			goto out;
 		}
-		hns3_dev_tx_queue_start(hns, i);
 	}
-	hns3_init_tx_ring_tc(hns);
 
-	hns3_enable_all_queues(hw, true);
+	for (i = 0; i < hw->fkq_data.nb_fake_rx_queues; i++) {
+		rxq = (struct hns3_rx_queue *)hw->fkq_data.rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		hns3_fake_rx_queue_start(hns, i);
+	}
 	return 0;
 
 out:
 	for (j = 0; j < i; j++) {
-		rxq = dev_data->rx_queues[j];
+		rxq = (struct hns3_rx_queue *)hw->data->rx_queues[j];
 		hns3_rx_queue_release_mbufs(rxq);
 	}
 
 	return ret;
 }
 
+static void
+hns3_start_tx_queues(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < hw->data->nb_tx_queues; i++) {
+		txq = (struct hns3_tx_queue *)hw->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		hns3_dev_tx_queue_start(hns, i);
+	}
+
+	for (i = 0; i < hw->fkq_data.nb_fake_tx_queues; i++) {
+		txq = (struct hns3_tx_queue *)hw->fkq_data.tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		hns3_fake_tx_queue_start(hns, i);
+	}
+
+	hns3_init_tx_ring_tc(hns);
+}
+
+int
+hns3_start_queues(struct hns3_adapter *hns, bool reset_queue)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	if (reset_queue) {
+		ret = hns3_reset_all_queues(hns);
+		if (ret) {
+			hns3_err(hw, "Failed to reset all queues %d", ret);
+			return ret;
+		}
+	}
+
+	ret = hns3_start_rx_queues(hns);
+	if (ret) {
+		hns3_err(hw, "Failed to start rx queues: %d", ret);
+		return ret;
+	}
+
+	hns3_start_tx_queues(hns);
+	hns3_enable_all_queues(hw, true);
+
+	return 0;
+}
+
 int
 hns3_stop_queues(struct hns3_adapter *hns, bool reset_queue)
 {
@@ -587,6 +751,337 @@ hns3_stop_queues(struct hns3_adapter *hns, bool reset_queue)
 	return 0;
 }
 
+static void*
+hns3_alloc_rxq_and_dma_zone(struct rte_eth_dev *dev,
+			    struct hns3_queue_info *q_info)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	const struct rte_memzone *rx_mz;
+	struct hns3_rx_queue *rxq;
+	unsigned int rx_desc;
+
+	rxq = rte_zmalloc_socket(q_info->type, sizeof(struct hns3_rx_queue),
+				 RTE_CACHE_LINE_SIZE, q_info->socket_id);
+	if (rxq == NULL) {
+		hns3_err(hw, "Failed to allocate memory for No.%d rx ring!",
+			 q_info->idx);
+		return NULL;
+	}
+
+	/* Allocate rx ring hardware descriptors. */
+	rxq->queue_id = q_info->idx;
+	rxq->nb_rx_desc = q_info->nb_desc;
+	rx_desc = rxq->nb_rx_desc * sizeof(struct hns3_desc);
+	rx_mz = rte_eth_dma_zone_reserve(dev, q_info->ring_name, q_info->idx,
+					 rx_desc, HNS3_RING_BASE_ALIGN,
+					 q_info->socket_id);
+	if (rx_mz == NULL) {
+		hns3_err(hw, "Failed to reserve DMA memory for No.%d rx ring!",
+			 q_info->idx);
+		hns3_rx_queue_release(rxq);
+		return NULL;
+	}
+	rxq->mz = rx_mz;
+	rxq->rx_ring = (struct hns3_desc *)rx_mz->addr;
+	rxq->rx_ring_phys_addr = rx_mz->iova;
+
+	hns3_dbg(hw, "No.%d rx descriptors iova 0x%" PRIx64, q_info->idx,
+		 rxq->rx_ring_phys_addr);
+
+	return rxq;
+}
+
+static int
+hns3_fake_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
+			 uint16_t nb_desc, unsigned int socket_id)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_queue_info q_info;
+	struct hns3_rx_queue *rxq;
+	uint16_t nb_rx_q;
+
+	if (hw->fkq_data.rx_queues[idx]) {
+		hns3_rx_queue_release(hw->fkq_data.rx_queues[idx]);
+		hw->fkq_data.rx_queues[idx] = NULL;
+	}
+
+	q_info.idx = idx;
+	q_info.socket_id = socket_id;
+	q_info.nb_desc = nb_desc;
+	q_info.type = "hns3 fake RX queue";
+	q_info.ring_name = "rx_fake_ring";
+	rxq = hns3_alloc_rxq_and_dma_zone(dev, &q_info);
+	if (rxq == NULL) {
+		hns3_err(hw, "Failed to setup No.%d fake rx ring.", idx);
+		return -ENOMEM;
+	}
+
+	/* Don't need alloc sw_ring, because upper applications don't use it */
+	rxq->sw_ring = NULL;
+
+	rxq->hns = hns;
+	rxq->rx_deferred_start = false;
+	rxq->port_id = dev->data->port_id;
+	rxq->configured = true;
+	nb_rx_q = dev->data->nb_rx_queues;
+	rxq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
+				(nb_rx_q + idx) * HNS3_TQP_REG_SIZE);
+	rxq->rx_buf_len = hw->rx_buf_len;
+
+	rte_spinlock_lock(&hw->lock);
+	hw->fkq_data.rx_queues[idx] = rxq;
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static void*
+hns3_alloc_txq_and_dma_zone(struct rte_eth_dev *dev,
+			    struct hns3_queue_info *q_info)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	const struct rte_memzone *tx_mz;
+	struct hns3_tx_queue *txq;
+	struct hns3_desc *desc;
+	unsigned int tx_desc;
+	int i;
+
+	txq = rte_zmalloc_socket(q_info->type, sizeof(struct hns3_tx_queue),
+				 RTE_CACHE_LINE_SIZE, q_info->socket_id);
+	if (txq == NULL) {
+		hns3_err(hw, "Failed to allocate memory for No.%d tx ring!",
+			 q_info->idx);
+		return NULL;
+	}
+
+	/* Allocate tx ring hardware descriptors. */
+	txq->queue_id = q_info->idx;
+	txq->nb_tx_desc = q_info->nb_desc;
+	tx_desc = txq->nb_tx_desc * sizeof(struct hns3_desc);
+	tx_mz = rte_eth_dma_zone_reserve(dev, q_info->ring_name, q_info->idx,
+					 tx_desc, HNS3_RING_BASE_ALIGN,
+					 q_info->socket_id);
+	if (tx_mz == NULL) {
+		hns3_err(hw, "Failed to reserve DMA memory for No.%d tx ring!",
+			 q_info->idx);
+		hns3_tx_queue_release(txq);
+		return NULL;
+	}
+	txq->mz = tx_mz;
+	txq->tx_ring = (struct hns3_desc *)tx_mz->addr;
+	txq->tx_ring_phys_addr = tx_mz->iova;
+
+	hns3_dbg(hw, "No.%d tx descriptors iova 0x%" PRIx64, q_info->idx,
+		 txq->tx_ring_phys_addr);
+
+	/* Clear tx bd */
+	desc = txq->tx_ring;
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		desc->tx.tp_fe_sc_vld_ra_ri = 0;
+		desc++;
+	}
+
+	return txq;
+}
+
+static int
+hns3_fake_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
+			 uint16_t nb_desc, unsigned int socket_id)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_queue_info q_info;
+	struct hns3_tx_queue *txq;
+	uint16_t nb_tx_q;
+
+	if (hw->fkq_data.tx_queues[idx] != NULL) {
+		hns3_tx_queue_release(hw->fkq_data.tx_queues[idx]);
+		hw->fkq_data.tx_queues[idx] = NULL;
+	}
+
+	q_info.idx = idx;
+	q_info.socket_id = socket_id;
+	q_info.nb_desc = nb_desc;
+	q_info.type = "hns3 fake TX queue";
+	q_info.ring_name = "tx_fake_ring";
+	txq = hns3_alloc_txq_and_dma_zone(dev, &q_info);
+	if (txq == NULL) {
+		hns3_err(hw, "Failed to setup No.%d fake tx ring.", idx);
+		return -ENOMEM;
+	}
+
+	/* Don't need alloc sw_ring, because upper applications don't use it */
+	txq->sw_ring = NULL;
+
+	txq->hns = hns;
+	txq->tx_deferred_start = false;
+	txq->port_id = dev->data->port_id;
+	txq->configured = true;
+	nb_tx_q = dev->data->nb_tx_queues;
+	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
+				(nb_tx_q + idx) * HNS3_TQP_REG_SIZE);
+
+	rte_spinlock_lock(&hw->lock);
+	hw->fkq_data.tx_queues[idx] = txq;
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static int
+hns3_fake_rx_queue_config(struct hns3_hw *hw, uint16_t nb_queues)
+{
+	uint16_t old_nb_queues = hw->fkq_data.nb_fake_rx_queues;
+	void **rxq;
+	uint8_t i;
+
+	if (hw->fkq_data.rx_queues == NULL && nb_queues != 0) {
+		/* first time configuration */
+
+		uint32_t size;
+		size = sizeof(hw->fkq_data.rx_queues[0]) * nb_queues;
+		hw->fkq_data.rx_queues = rte_zmalloc("fake_rx_queues", size,
+						     RTE_CACHE_LINE_SIZE);
+		if (hw->fkq_data.rx_queues == NULL) {
+			hw->fkq_data.nb_fake_rx_queues = 0;
+			return -ENOMEM;
+		}
+	} else if (hw->fkq_data.rx_queues != NULL && nb_queues != 0) {
+		/* re-configure */
+
+		rxq = hw->fkq_data.rx_queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			hns3_dev_rx_queue_release(rxq[i]);
+
+		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
+				  RTE_CACHE_LINE_SIZE);
+		if (rxq == NULL)
+			return -ENOMEM;
+		if (nb_queues > old_nb_queues) {
+			uint16_t new_qs = nb_queues - old_nb_queues;
+			memset(rxq + old_nb_queues, 0, sizeof(rxq[0]) * new_qs);
+		}
+
+		hw->fkq_data.rx_queues = rxq;
+	} else if (hw->fkq_data.rx_queues != NULL && nb_queues == 0) {
+		rxq = hw->fkq_data.rx_queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			hns3_dev_rx_queue_release(rxq[i]);
+
+		rte_free(hw->fkq_data.rx_queues);
+		hw->fkq_data.rx_queues = NULL;
+	}
+
+	hw->fkq_data.nb_fake_rx_queues = nb_queues;
+
+	return 0;
+}
+
+static int
+hns3_fake_tx_queue_config(struct hns3_hw *hw, uint16_t nb_queues)
+{
+	uint16_t old_nb_queues = hw->fkq_data.nb_fake_tx_queues;
+	void **txq;
+	uint8_t i;
+
+	if (hw->fkq_data.tx_queues == NULL && nb_queues != 0) {
+		/* first time configuration */
+
+		uint32_t size;
+		size = sizeof(hw->fkq_data.tx_queues[0]) * nb_queues;
+		hw->fkq_data.tx_queues = rte_zmalloc("fake_tx_queues", size,
+						     RTE_CACHE_LINE_SIZE);
+		if (hw->fkq_data.tx_queues == NULL) {
+			hw->fkq_data.nb_fake_tx_queues = 0;
+			return -ENOMEM;
+		}
+	} else if (hw->fkq_data.tx_queues != NULL && nb_queues != 0) {
+		/* re-configure */
+
+		txq = hw->fkq_data.tx_queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			hns3_dev_tx_queue_release(txq[i]);
+		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
+				  RTE_CACHE_LINE_SIZE);
+		if (txq == NULL)
+			return -ENOMEM;
+		if (nb_queues > old_nb_queues) {
+			uint16_t new_qs = nb_queues - old_nb_queues;
+			memset(txq + old_nb_queues, 0, sizeof(txq[0]) * new_qs);
+		}
+
+		hw->fkq_data.tx_queues = txq;
+	} else if (hw->fkq_data.tx_queues != NULL && nb_queues == 0) {
+		txq = hw->fkq_data.tx_queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			hns3_dev_tx_queue_release(txq[i]);
+
+		rte_free(hw->fkq_data.tx_queues);
+		hw->fkq_data.tx_queues = NULL;
+	}
+	hw->fkq_data.nb_fake_tx_queues = nb_queues;
+
+	return 0;
+}
+
+int
+hns3_set_fake_rx_or_tx_queues(struct rte_eth_dev *dev, uint16_t nb_rx_q,
+			      uint16_t nb_tx_q)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t rx_need_add_nb_q;
+	uint16_t tx_need_add_nb_q;
+	uint16_t port_id;
+	uint16_t q;
+	int ret;
+
+	/* Setup new number of fake RX/TX queues and reconfigure device. */
+	hw->cfg_max_queues = RTE_MAX(nb_rx_q, nb_tx_q);
+	rx_need_add_nb_q = hw->cfg_max_queues - nb_rx_q;
+	tx_need_add_nb_q = hw->cfg_max_queues - nb_tx_q;
+	ret = hns3_fake_rx_queue_config(hw, rx_need_add_nb_q);
+	if (ret) {
+		hns3_err(hw, "Fail to configure fake rx queues: %d", ret);
+		goto cfg_fake_rx_q_fail;
+	}
+
+	ret = hns3_fake_tx_queue_config(hw, tx_need_add_nb_q);
+	if (ret) {
+		hns3_err(hw, "Fail to configure fake tx queues: %d", ret);
+		goto cfg_fake_tx_q_fail;
+	}
+
+	/* Allocate and set up fake RX queue per Ethernet port. */
+	port_id = hw->data->port_id;
+	for (q = 0; q < rx_need_add_nb_q; q++) {
+		ret = hns3_fake_rx_queue_setup(dev, q, HNS3_MIN_RING_DESC,
+					       rte_eth_dev_socket_id(port_id));
+		if (ret)
+			goto setup_fake_rx_q_fail;
+	}
+
+	/* Allocate and set up fake TX queue per Ethernet port. */
+	for (q = 0; q < tx_need_add_nb_q; q++) {
+		ret = hns3_fake_tx_queue_setup(dev, q, HNS3_MIN_RING_DESC,
+					       rte_eth_dev_socket_id(port_id));
+		if (ret)
+			goto setup_fake_tx_q_fail;
+	}
+
+	return 0;
+
+setup_fake_tx_q_fail:
+setup_fake_rx_q_fail:
+	(void)hns3_fake_tx_queue_config(hw, 0);
+cfg_fake_tx_q_fail:
+	(void)hns3_fake_rx_queue_config(hw, 0);
+cfg_fake_rx_q_fail:
+	hw->cfg_max_queues = 0;
+
+	return ret;
+}
+
 void
 hns3_dev_release_mbufs(struct hns3_adapter *hns)
 {
@@ -618,11 +1113,9 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 		    struct rte_mempool *mp)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
-	const struct rte_memzone *rx_mz;
 	struct hns3_hw *hw = &hns->hw;
+	struct hns3_queue_info q_info;
 	struct hns3_rx_queue *rxq;
-	unsigned int desc_size = sizeof(struct hns3_desc);
-	unsigned int rx_desc;
 	int rx_entry_len;
 
 	if (dev->data->dev_started) {
@@ -642,17 +1135,20 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 		dev->data->rx_queues[idx] = NULL;
 	}
 
-	rxq = rte_zmalloc_socket("hns3 RX queue", sizeof(struct hns3_rx_queue),
-				 RTE_CACHE_LINE_SIZE, socket_id);
+	q_info.idx = idx;
+	q_info.socket_id = socket_id;
+	q_info.nb_desc = nb_desc;
+	q_info.type = "hns3 RX queue";
+	q_info.ring_name = "rx_ring";
+	rxq = hns3_alloc_rxq_and_dma_zone(dev, &q_info);
 	if (rxq == NULL) {
-		hns3_err(hw, "Failed to allocate memory for rx queue!");
+		hns3_err(hw,
+			 "Failed to alloc mem and reserve DMA mem for rx ring!");
 		return -ENOMEM;
 	}
 
 	rxq->hns = hns;
 	rxq->mb_pool = mp;
-	rxq->nb_rx_desc = nb_desc;
-	rxq->queue_id = idx;
 	if (conf->rx_free_thresh <= 0)
 		rxq->rx_free_thresh = DEFAULT_RX_FREE_THRESH;
 	else
@@ -668,23 +1164,6 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 		return -ENOMEM;
 	}
 
-	/* Allocate rx ring hardware descriptors. */
-	rx_desc = rxq->nb_rx_desc * desc_size;
-	rx_mz = rte_eth_dma_zone_reserve(dev, "rx_ring", idx, rx_desc,
-					 HNS3_RING_BASE_ALIGN, socket_id);
-	if (rx_mz == NULL) {
-		hns3_err(hw, "Failed to reserve DMA memory for No.%d rx ring!",
-			 idx);
-		hns3_rx_queue_release(rxq);
-		return -ENOMEM;
-	}
-	rxq->mz = rx_mz;
-	rxq->rx_ring = (struct hns3_desc *)rx_mz->addr;
-	rxq->rx_ring_phys_addr = rx_mz->iova;
-
-	hns3_dbg(hw, "No.%d rx descriptors iova 0x%" PRIx64, idx,
-		 rxq->rx_ring_phys_addr);
-
 	rxq->next_to_use = 0;
 	rxq->next_to_clean = 0;
 	rxq->nb_rx_hold = 0;
@@ -1062,14 +1541,10 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 		    unsigned int socket_id, const struct rte_eth_txconf *conf)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
-	const struct rte_memzone *tx_mz;
 	struct hns3_hw *hw = &hns->hw;
+	struct hns3_queue_info q_info;
 	struct hns3_tx_queue *txq;
-	struct hns3_desc *desc;
-	unsigned int desc_size = sizeof(struct hns3_desc);
-	unsigned int tx_desc;
 	int tx_entry_len;
-	int i;
 
 	if (dev->data->dev_started) {
 		hns3_err(hw, "tx_queue_setup after dev_start no supported");
@@ -1088,17 +1563,19 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 		dev->data->tx_queues[idx] = NULL;
 	}
 
-	txq = rte_zmalloc_socket("hns3 TX queue", sizeof(struct hns3_tx_queue),
-				 RTE_CACHE_LINE_SIZE, socket_id);
+	q_info.idx = idx;
+	q_info.socket_id = socket_id;
+	q_info.nb_desc = nb_desc;
+	q_info.type = "hns3 TX queue";
+	q_info.ring_name = "tx_ring";
+	txq = hns3_alloc_txq_and_dma_zone(dev, &q_info);
 	if (txq == NULL) {
-		hns3_err(hw, "Failed to allocate memory for tx queue!");
+		hns3_err(hw,
+			 "Failed to alloc mem and reserve DMA mem for tx ring!");
 		return -ENOMEM;
 	}
 
-	txq->nb_tx_desc = nb_desc;
-	txq->queue_id = idx;
 	txq->tx_deferred_start = conf->tx_deferred_start;
-
 	tx_entry_len = sizeof(struct hns3_entry) * txq->nb_tx_desc;
 	txq->sw_ring = rte_zmalloc_socket("hns3 TX sw ring", tx_entry_len,
 					  RTE_CACHE_LINE_SIZE, socket_id);
@@ -1108,34 +1585,10 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 		return -ENOMEM;
 	}
 
-	/* Allocate tx ring hardware descriptors. */
-	tx_desc = txq->nb_tx_desc * desc_size;
-	tx_mz = rte_eth_dma_zone_reserve(dev, "tx_ring", idx, tx_desc,
-					 HNS3_RING_BASE_ALIGN, socket_id);
-	if (tx_mz == NULL) {
-		hns3_err(hw, "Failed to reserve DMA memory for No.%d tx ring!",
-			 idx);
-		hns3_tx_queue_release(txq);
-		return -ENOMEM;
-	}
-	txq->mz = tx_mz;
-	txq->tx_ring = (struct hns3_desc *)tx_mz->addr;
-	txq->tx_ring_phys_addr = tx_mz->iova;
-
-	hns3_dbg(hw, "No.%d tx descriptors iova 0x%" PRIx64, idx,
-		 txq->tx_ring_phys_addr);
-
-	/* Clear tx bd */
-	desc = txq->tx_ring;
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		desc->tx.tp_fe_sc_vld_ra_ri = 0;
-		desc++;
-	}
-
 	txq->hns = hns;
 	txq->next_to_use = 0;
 	txq->next_to_clean = 0;
-	txq->tx_bd_ready   = txq->nb_tx_desc;
+	txq->tx_bd_ready = txq->nb_tx_desc;
 	txq->port_id = dev->data->port_id;
 	txq->configured = true;
 	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cc210268a..a042c9902 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -273,6 +273,14 @@ struct hns3_tx_queue {
 	bool configured;        /* indicate if tx queue has been configured */
 };
 
+struct hns3_queue_info {
+	const char *type;   /* point to queue memory name */
+	const char *ring_name;  /* point to hardware ring name */
+	uint16_t idx;
+	uint16_t nb_desc;
+	unsigned int socket_id;
+};
+
 #define HNS3_TX_CKSUM_OFFLOAD_MASK ( \
 	PKT_TX_OUTER_IPV6 | \
 	PKT_TX_OUTER_IPV4 | \
@@ -314,4 +322,7 @@ uint16_t hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 const uint32_t *hns3_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
 void hns3_tqp_intr_enable(struct hns3_hw *hw, uint16_t tpq_int_num, bool en);
+int hns3_set_fake_rx_or_tx_queues(struct rte_eth_dev *dev, uint16_t nb_rx_q,
+				  uint16_t nb_tx_q);
+
 #endif /* _HNS3_RXTX_H_ */
-- 
2.23.0



* [dpdk-dev] [PATCH 02/11] net/hns3: support setting VF MAC address by PF driver
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 01/11] net/hns3: support different numbered Rx and Tx queues Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 03/11] net/hns3: reduce the judgements of free Tx ring space Wei Hu (Xavier)
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently a VF device is supported only when it is bound to vfio_pci or
igb_uio and driven by the DPDK driver while the PF is driven by the kernel
mode hns3 ethdev driver; a VF is not supported when the PF is driven by the
hns3 DPDK driver.

This patch adds support for setting the VF MAC address through the hns3 PF
kernel ethdev driver on the host with the "ip link set ..." command.
1) If the hns3 PF kernel ethdev driver sets the MAC address for a VF device
   before the initialization of that VF device, the hns3 VF PMD driver gets
   the MAC address from the PF driver through the mailbox during
   initialization. If a non-zero MAC address is obtained, the VF driver
   configures the hardware with it; otherwise it uses a random MAC address.
2) If the hns3 PF kernel ethdev driver sets the MAC address for a VF device
   after the initialization of that VF device, the PF driver notifies the VF
   driver to reset the VF device so that the new MAC address takes effect
   immediately. The hns3 VF PMD driver then checks whether the MAC address
   has been changed by the PF kernel netdevice driver and, if so, configures
   the hardware with the new MAC address in the hardware configuration
   recovery stage of the reset process.
3) When the user has configured a MAC address for a VF device with the
   "ip link set ..." command on the PF device, the hns3 PF kernel ethdev
   driver does not allow the VF driver to reconfigure a different default
   MAC address and returns -EPERM to the VF driver through the mailbox (a
   host-side example is shown below).
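
As an illustration, the host-side command might look like the following (the
PF interface name "eth0" and the MAC address are hypothetical):

    # Assign a fixed MAC address to VF 0 of the PF netdev; the PF kernel
    # driver then provisions it to the hns3 VF PMD driver via mailbox.
    ip link set eth0 vf 0 mac 02:00:00:00:00:01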

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
---
 drivers/net/hns3/hns3_ethdev_vf.c | 108 ++++++++++++++++++++++++++++--
 1 file changed, 102 insertions(+), 6 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 71e358e81..92cf7ee99 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -208,12 +208,27 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
 
 	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST,
 				HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes,
-				HNS3_TWO_ETHER_ADDR_LEN, false, NULL, 0);
+				HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0);
 	if (ret) {
-		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
-				      mac_addr);
-		hns3_err(hw, "Failed to set mac addr(%s) for vf: %d", mac_str,
-			 ret);
+		/*
+		 * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev
+		 * driver. When the user has configured a MAC address for a VF
+		 * device with the "ip link set ..." command on the PF device,
+		 * the hns3 PF kernel ethdev driver does not allow the VF driver
+		 * to reconfigure a different default MAC address, and returns
+		 * -EPERM to the VF driver through the mailbox.
+		 */
+		if (ret == -EPERM) {
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      old_addr);
+			hns3_warn(hw, "Has permanent mac addr(%s) for vf",
+				  mac_str);
+		} else {
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      mac_addr);
+			hns3_err(hw, "Failed to set mac addr(%s) for vf: %d",
+				 mac_str, ret);
+		}
 	}
 
 	rte_ether_addr_copy(mac_addr,
@@ -784,6 +799,24 @@ hns3vf_get_tc_info(struct hns3_hw *hw)
 	return 0;
 }
 
+static int
+hns3vf_get_host_mac_addr(struct hns3_hw *hw)
+{
+	uint8_t host_mac[RTE_ETHER_ADDR_LEN];
+	int ret;
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_MAC_ADDR, 0, NULL, 0,
+				true, host_mac, RTE_ETHER_ADDR_LEN);
+	if (ret) {
+		hns3_err(hw, "Failed to get mac addr from PF: %d", ret);
+		return ret;
+	}
+
+	memcpy(hw->mac.mac_addr, host_mac, RTE_ETHER_ADDR_LEN);
+
+	return 0;
+}
+
 static int
 hns3vf_get_configuration(struct hns3_hw *hw)
 {
@@ -801,6 +834,11 @@ hns3vf_get_configuration(struct hns3_hw *hw)
 	if (ret)
 		return ret;
 
+	/* Get user defined VF MAC addr from PF */
+	ret = hns3vf_get_host_mac_addr(hw);
+	if (ret)
+		return ret;
+
 	/* Get tc configuration from PF */
 	return hns3vf_get_tc_info(hw);
 }
@@ -1170,7 +1208,20 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
 		goto err_get_config;
 	}
 
-	rte_eth_random_addr(hw->mac.mac_addr); /* Generate a random mac addr */
+	/*
+	 * The hns3 PF kernel ethdev driver supports setting a VF MAC address
+	 * on the host with the "ip link set ..." command. To avoid incorrect
+	 * scenarios, for example, the hns3 VF PMD driver failing to receive
+	 * and send packets after the user configures the MAC address with
+	 * the "ip link set ..." command, the hns3 VF PMD driver keeps the
+	 * same MAC address strategy as the hns3 kernel ethdev driver during
+	 * initialization: if the user has configured a MAC address for the
+	 * VF device with the ip command, start with it; otherwise start with
+	 * a random MAC address.
+	 */
+	ret = rte_is_zero_ether_addr((struct rte_ether_addr *)hw->mac.mac_addr);
+	if (ret)
+		rte_eth_random_addr(hw->mac.mac_addr);
 
 	ret = hns3vf_clear_vport_list(hw);
 	if (ret) {
@@ -1663,12 +1714,57 @@ hns3vf_start_service(struct hns3_adapter *hns)
 	return 0;
 }
 
+static int
+hns3vf_check_default_mac_change(struct hns3_hw *hw)
+{
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	struct rte_ether_addr *hw_mac;
+	int ret;
+
+	/*
+	 * The hns3 PF kernel ethdev driver supports setting a VF MAC address
+	 * on the host with the "ip link set ..." command. If the hns3 PF
+	 * kernel ethdev driver sets the MAC address for a VF device after the
+	 * initialization of that VF device, the PF driver notifies the VF
+	 * driver to reset the VF device to make the new MAC address effective
+	 * immediately. The hns3 VF PMD driver should check whether the MAC
+	 * address has been changed by the PF kernel ethdev driver and, if so,
+	 * configure the hardware with the new MAC address in the hardware
+	 * configuration recovery stage of the reset process.
+	 */
+	ret = hns3vf_get_host_mac_addr(hw);
+	if (ret)
+		return ret;
+
+	hw_mac = (struct rte_ether_addr *)hw->mac.mac_addr;
+	ret = rte_is_zero_ether_addr(hw_mac);
+	if (ret)
+		rte_ether_addr_copy(&hw->data->mac_addrs[0], hw_mac);
+	else {
+		ret = rte_is_same_ether_addr(&hw->data->mac_addrs[0], hw_mac);
+		if (!ret) {
+			rte_ether_addr_copy(hw_mac, &hw->data->mac_addrs[0]);
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      &hw->data->mac_addrs[0]);
+			hns3_warn(hw, "Default MAC address has been changed to:"
+				  " %s by the host PF kernel ethdev driver",
+				  mac_str);
+		}
+	}
+
+	return 0;
+}
+
 static int
 hns3vf_restore_conf(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
+	ret = hns3vf_check_default_mac_change(hw);
+	if (ret)
+		return ret;
+
 	ret = hns3vf_configure_mac_addr(hns, false);
 	if (ret)
 		return ret;
-- 
2.23.0



* [dpdk-dev] [PATCH 03/11] net/hns3: reduce the judgements of free Tx ring space
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 01/11] net/hns3: support different numbered Rx and Tx queues Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 02/11] net/hns3: support setting VF MAC address by PF driver Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 04/11] net/hns3: remove io rmb call in Rx operation Wei Hu (Xavier)
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: Yisen Zhuang <yisen.zhuang@huawei.com>

This patch reduces the number of free Tx ring space checks in the
'tx_pkt_burst' ops implementation function to avoid performance loss.
According to hardware constraints, one Tx Buffer Descriptor must be kept
reserved in the Tx ring of the hns3 network engine. A sketch of the
resulting invariant follows below.
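
For illustration only (not part of the patch), a minimal standalone C
sketch, with simplified stand-in types and no DPDK dependencies, showing
why a cached free-BD counter initialized to nb_tx_desc - 1 always equals
the ring-space computation the patch removes:

#include <assert.h>
#include <stdint.h>

/* Simplified model of a Tx ring with one hardware-reserved BD. */
struct txq {
	uint16_t nb_tx_desc;    /* ring size */
	uint16_t next_to_use;   /* producer index */
	uint16_t next_to_clean; /* consumer index */
	uint16_t tx_bd_ready;   /* cached count of free BDs */
};

/* Old approach: recompute the free space from the two indices. */
static int ring_space(const struct txq *q)
{
	int dist = (q->next_to_use - q->next_to_clean + q->nb_tx_desc) %
		   q->nb_tx_desc;

	return q->nb_tx_desc - dist - 1; /* one BD stays reserved */
}

int main(void)
{
	struct txq q = { .nb_tx_desc = 8, .tx_bd_ready = 8 - 1 };

	/* Transmit path consumes 3 BDs: producer moves, counter drops. */
	q.next_to_use = (q.next_to_use + 3) % q.nb_tx_desc;
	q.tx_bd_ready -= 3;
	assert(q.tx_bd_ready == ring_space(&q));

	/* Cleanup path frees 2 BDs: consumer moves, counter grows. */
	q.next_to_clean = (q.next_to_clean + 2) % q.nb_tx_desc;
	q.tx_bd_ready += 2;
	assert(q.tx_bd_ready == ring_space(&q));
	return 0;
}

With the invariant held, the hot path only compares nb_buf against
tx_bd_ready instead of recomputing the ring distance for every packet.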

Signed-off-by: Yisen Zhuang <yisen.zhuang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 32 +++++++-------------------------
 1 file changed, 7 insertions(+), 25 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3d13ed526..34919cd2c 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -591,7 +591,7 @@ hns3_init_tx_queue(struct hns3_tx_queue *queue)
 
 	txq->next_to_use = 0;
 	txq->next_to_clean = 0;
-	txq->tx_bd_ready = txq->nb_tx_desc;
+	txq->tx_bd_ready = txq->nb_tx_desc - 1;
 	hns3_init_tx_queue_hw(txq);
 }
 
@@ -1588,7 +1588,7 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	txq->hns = hns;
 	txq->next_to_use = 0;
 	txq->next_to_clean = 0;
-	txq->tx_bd_ready = txq->nb_tx_desc;
+	txq->tx_bd_ready = txq->nb_tx_desc - 1;
 	txq->port_id = dev->data->port_id;
 	txq->configured = true;
 	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
@@ -1600,19 +1600,6 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	return 0;
 }
 
-static inline int
-tx_ring_dist(struct hns3_tx_queue *txq, int begin, int end)
-{
-	return (end - begin + txq->nb_tx_desc) % txq->nb_tx_desc;
-}
-
-static inline int
-tx_ring_space(struct hns3_tx_queue *txq)
-{
-	return txq->nb_tx_desc -
-		tx_ring_dist(txq, txq->next_to_clean, txq->next_to_use) - 1;
-}
-
 static inline void
 hns3_queue_xmit(struct hns3_tx_queue *txq, uint32_t buf_num)
 {
@@ -1631,7 +1618,7 @@ hns3_tx_free_useless_buffer(struct hns3_tx_queue *txq)
 	struct rte_mbuf *mbuf;
 
 	while ((!hns3_get_bit(desc->tx.tp_fe_sc_vld_ra_ri, HNS3_TXD_VLD_B)) &&
-		(tx_next_use != tx_next_clean || tx_bd_ready < tx_bd_max)) {
+		tx_next_use != tx_next_clean) {
 		mbuf = tx_bak_pkt->mbuf;
 		if (mbuf) {
 			rte_pktmbuf_free_seg(mbuf);
@@ -2054,7 +2041,6 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct rte_mbuf *m_seg;
 	uint32_t nb_hold = 0;
 	uint16_t tx_next_use;
-	uint16_t tx_bd_ready;
 	uint16_t tx_pkt_num;
 	uint16_t tx_bd_max;
 	uint16_t nb_buf;
@@ -2063,13 +2049,10 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 	/* free useless buffer */
 	hns3_tx_free_useless_buffer(txq);
-	tx_bd_ready = txq->tx_bd_ready;
-	if (tx_bd_ready == 0)
-		return 0;
 
 	tx_next_use   = txq->next_to_use;
 	tx_bd_max     = txq->nb_tx_desc;
-	tx_pkt_num = (tx_bd_ready < nb_pkts) ? tx_bd_ready : nb_pkts;
+	tx_pkt_num = nb_pkts;
 
 	/* send packets */
 	tx_bak_pkt = &txq->sw_ring[tx_next_use];
@@ -2078,7 +2061,7 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		nb_buf = tx_pkt->nb_segs;
 
-		if (nb_buf > tx_ring_space(txq)) {
+		if (nb_buf > txq->tx_bd_ready) {
 			if (nb_tx == 0)
 				return 0;
 
@@ -2137,14 +2120,13 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		nb_hold += i;
 		txq->next_to_use = tx_next_use;
+		txq->tx_bd_ready -= i;
 	}
 
 end_of_tx:
 
-	if (likely(nb_tx)) {
+	if (likely(nb_tx))
 		hns3_queue_xmit(txq, nb_hold);
-		txq->tx_bd_ready   = tx_bd_ready - nb_hold;
-	}
 
 	return nb_tx;
 }
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 04/11] net/hns3: remove io rmb call in Rx operation
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (2 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 03/11] net/hns3: reduce the judgements of free Tx ring space Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 05/11] net/hns3: add free thresh " Wei Hu (Xavier)
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

When receiving a packet, the hns3 hardware network engine first writes the
packet content to the memory pointed to by the 'addr' field of the Rx
Buffer Descriptor, then fills the result of parsing the packet, including
the valid flag, into the Rx Buffer Descriptor in one write operation, and
finally writes the number of Buffer Descriptors not yet processed by the
driver to the HNS3_RING_RX_FBDNUM_REG register.

This patch optimizes the Rx performance by removing one rte_io_rmb call
in the '.rx_pkt_burst' ops implementation function named hns3_recv_pkts.
The changes are as follows:
1. The driver no longer reads the HNS3_RING_RX_FBDNUM_REG register, which
   removes one rte_io_rmb call; instead, it directly reads the valid flag
   of the Rx Buffer Descriptor to check whether the BD is ready (see the
   sketch after this list).
2. Delete the non_vld_descs field from the statistics of the hns3 driver,
   because it is now a common case that the valid flag of the Rx Buffer
   Descriptor read by the driver is not yet set.
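
For illustration only, a minimal standalone sketch in plain C of the
resulting polling loop that relies solely on the valid flag; the bit
position and descriptor layout are simplified assumptions, not the
driver's real definitions:

#include <stdint.h>
#include <stdio.h>

#define RXD_VLD_BIT 0x1u /* assumed bit position, for illustration only */

/* Simplified Rx BD: hardware fills the parsed result and the valid
 * flag in one write, so the flag alone tells whether the BD is ready. */
struct rx_desc {
	volatile uint32_t bd_base_info;
};

static int recv_burst(struct rx_desc *ring, uint16_t nb_desc,
		      uint16_t *rx_id, uint16_t nb_pkts)
{
	uint16_t nb_rx = 0;

	/* No FBDNUM register read and no read barrier: stop as soon as
	 * a descriptor without the valid flag is seen. */
	while (nb_rx < nb_pkts) {
		struct rx_desc *rxdp = &ring[*rx_id];

		if (!(rxdp->bd_base_info & RXD_VLD_BIT))
			break; /* not yet written back by hardware */
		/* ... deliver the packet to the caller here ... */
		rxdp->bd_base_info = 0; /* return the BD */
		if (++(*rx_id) == nb_desc)
			*rx_id = 0;
		nb_rx++;
	}
	return nb_rx;
}

int main(void)
{
	struct rx_desc ring[4] = { { RXD_VLD_BIT }, { RXD_VLD_BIT } };
	uint16_t rx_id = 0;

	printf("received %d packets\n", recv_burst(ring, 4, &rx_id, 4));
	return 0;
}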

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c  | 12 +++---------
 drivers/net/hns3/hns3_rxtx.h  |  1 -
 drivers/net/hns3/hns3_stats.c |  3 ---
 3 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 34919cd2c..a1655e246 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1174,7 +1174,6 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	rxq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
 	rxq->rx_buf_len = hw->rx_buf_len;
-	rxq->non_vld_descs = 0;
 	rxq->l2_errors = 0;
 	rxq->pkt_len_errors = 0;
 	rxq->l3_csum_erros = 0;
@@ -1421,7 +1420,6 @@ hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t pkt_len;
 	uint16_t nb_rx;
 	uint16_t rx_id;
-	int num;                        /* num of desc in ring */
 	int ret;
 
 	nb_rx = 0;
@@ -1435,15 +1433,11 @@ hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	last_seg = rxq->pkt_last_seg;
 	sw_ring = rxq->sw_ring;
 
-	/* Get num of packets in descriptor ring */
-	num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
-	while (nb_rx_bd < num && nb_rx < nb_pkts) {
+	while (nb_rx < nb_pkts) {
 		rxdp = &rx_ring[rx_id];
 		bd_base_info = rte_le_to_cpu_32(rxdp->rx.bd_base_info);
-		if (unlikely(!hns3_get_bit(bd_base_info, HNS3_RXD_VLD_B))) {
-			rxq->non_vld_descs++;
+		if (unlikely(!hns3_get_bit(bd_base_info, HNS3_RXD_VLD_B)))
 			break;
-		}
 
 		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
 		if (unlikely(nmb == NULL)) {
@@ -1454,7 +1448,7 @@ hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		nb_rx_bd++;
 		rxe = &sw_ring[rx_id];
 		rx_id++;
-		if (rx_id == rxq->nb_rx_desc)
+		if (unlikely(rx_id == rxq->nb_rx_desc))
 			rx_id = 0;
 
 		rte_prefetch0(sw_ring[rx_id].mbuf);
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index a042c9902..943c6d8bb 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -245,7 +245,6 @@ struct hns3_rx_queue {
 	bool rx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if rx queue has been configured */
 
-	uint64_t non_vld_descs; /* num of non valid rx descriptors */
 	uint64_t l2_errors;
 	uint64_t pkt_len_errors;
 	uint64_t l3_csum_erros;
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 9948beb17..b3797ec53 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -219,8 +219,6 @@ static const struct hns3_xstats_name_offset hns3_reset_stats_strings[] = {
 
 /* The statistic of errors in Rx BD */
 static const struct hns3_xstats_name_offset hns3_rx_bd_error_strings[] = {
-	{"NONE_VALIDATED_DESCRIPTORS",
-		HNS3_RX_BD_ERROR_STATS_FIELD_OFFSET(non_vld_descs)},
 	{"RX_PKT_LEN_ERRORS",
 		HNS3_RX_BD_ERROR_STATS_FIELD_OFFSET(pkt_len_errors)},
 	{"L2_RX_ERRORS",
@@ -510,7 +508,6 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev)
 		rxq = eth_dev->data->rx_queues[i];
 		if (rxq) {
 			rxq->pkt_len_errors = 0;
-			rxq->non_vld_descs = 0;
 			rxq->l2_errors = 0;
 			rxq->l3_csum_erros = 0;
 			rxq->l4_csum_erros = 0;
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 05/11] net/hns3: add free thresh in Rx operation
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (3 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 04/11] net/hns3: remove io rmb call in Rx operation Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 06/11] net/hns3: fix Rx queue search miss RAS err when recv BC pkt Wei Hu (Xavier)
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

This patch optimizes the Rx performance by adding rx_free_thresh related
processing in the '.rx_pkt_burst' ops implementation function named
hns3_recv_pkts. The related changes are as follows:
1. Add the rx_free_thresh related processing to reduce the number of
   writes to the HNS3_RING_RX_HEAD_REG register (see the sketch after
   this list).
2. Adjust the internal macro named DEFAULT_RX_FREE_THRESH to 32 and
   HNS3_MIN_RING_DESC to 64 to make the effect of the threshold more
   obvious.
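
For illustration only, a minimal standalone C sketch of the
threshold-batched buffer return; the queue model is simplified and the
MMIO head-register write is replaced by a plain counter:

#include <stdint.h>
#include <stdio.h>

struct rxq {
	uint16_t rx_free_thresh;
	uint16_t nb_rx_hold;     /* BDs consumed but not yet returned */
	unsigned int reg_writes; /* stand-in for MMIO head writes */
};

static void clean_rx_buffers(struct rxq *q, uint16_t count)
{
	(void)count;
	q->reg_writes++; /* models the HNS3_RING_RX_HEAD_REG write */
}

/* Called at the end of each receive burst with the BDs it consumed. */
static void rx_burst_done(struct rxq *q, uint16_t nb_rx_bd)
{
	nb_rx_bd += q->nb_rx_hold;
	if (nb_rx_bd > q->rx_free_thresh) {
		clean_rx_buffers(q, nb_rx_bd);
		nb_rx_bd = 0;
	}
	q->nb_rx_hold = nb_rx_bd;
}

int main(void)
{
	struct rxq q = { .rx_free_thresh = 32 };

	for (int i = 0; i < 10; i++)
		rx_burst_done(&q, 8); /* ten bursts of 8 BDs each */
	printf("register writes: %u, held BDs: %u\n",
	       q.reg_writes, q.nb_rx_hold);
	return 0;
}

With rx_free_thresh = 32 and bursts of 8 BDs, the sketch performs 2
register writes for 80 returned BDs instead of 10.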

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 12 ++++++++++--
 drivers/net/hns3/hns3_rxtx.h |  2 +-
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index a1655e246..6f74a7917 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -30,7 +30,7 @@
 #include "hns3_logs.h"
 
 #define HNS3_CFG_DESC_NUM(num)	((num) / 8 - 1)
-#define DEFAULT_RX_FREE_THRESH	16
+#define DEFAULT_RX_FREE_THRESH	32
 
 static void
 hns3_rx_queue_release_mbufs(struct hns3_rx_queue *rxq)
@@ -558,6 +558,7 @@ hns3_dev_rx_queue_start(struct hns3_adapter *hns, uint16_t idx)
 
 	rxq->next_to_use = 0;
 	rxq->next_to_clean = 0;
+	rxq->nb_rx_hold = 0;
 	hns3_init_rx_queue_hw(rxq);
 
 	return 0;
@@ -572,6 +573,7 @@ hns3_fake_rx_queue_start(struct hns3_adapter *hns, uint16_t idx)
 	rxq = (struct hns3_rx_queue *)hw->fkq_data.rx_queues[idx];
 	rxq->next_to_use = 0;
 	rxq->next_to_clean = 0;
+	rxq->nb_rx_hold = 0;
 	hns3_init_rx_queue_hw(rxq);
 }
 
@@ -1525,7 +1527,13 @@ hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	rxq->next_to_clean = rx_id;
 	rxq->pkt_first_seg = first_seg;
 	rxq->pkt_last_seg = last_seg;
-	hns3_clean_rx_buffers(rxq, nb_rx_bd);
+
+	nb_rx_bd = nb_rx_bd + rxq->nb_rx_hold;
+	if (nb_rx_bd > rxq->rx_free_thresh) {
+		hns3_clean_rx_buffers(rxq, nb_rx_bd);
+		nb_rx_bd = 0;
+	}
+	rxq->nb_rx_hold = nb_rx_bd;
 
 	return nb_rx;
 }
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 943c6d8bb..1c2723ffb 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -5,7 +5,7 @@
 #ifndef _HNS3_RXTX_H_
 #define _HNS3_RXTX_H_
 
-#define	HNS3_MIN_RING_DESC	32
+#define	HNS3_MIN_RING_DESC	64
 #define	HNS3_MAX_RING_DESC	32768
 #define HNS3_DEFAULT_RING_DESC  1024
 #define	HNS3_ALIGN_RING_DESC	32
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 06/11] net/hns3: fix Rx queue search miss RAS err when recv BC pkt
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (4 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 05/11] net/hns3: add free thresh " Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 07/11] net/hns3: fix segment error when closing the port Wei Hu (Xavier)
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently, there is a certain probability of a type of RAS error
occurring when receiving broadcast packets. This type of RAS error is
parsed as a rx_q_search_miss error by the hns3 PF PMD driver; the related
log is as below:
0000:bd:00.0 hns3_find_highest_level(): PPP_MFP_ABNORMAL_INT_ST2
rx_q_search_miss found [error status=0x20000000]

When receiving a broadcast packet, the network engine selects which
functions need to accept it according to each function's promisc_bcast_en
bit configuration, and then searches the TQP_MAP configuration to select
which hardware queues the packet should enter. If the target hardware
queue cannot be found, the network engine triggers a rx_q_search_miss RAS
error.

The root causes are as below:
1. The VF's promisc_bcast_en bit configuration is not cleared by an FLR
   reset, and the configuration may have been set by a previous
   application.
2. There is a bug in setting the TQP_MAP configuration in the
   initialization of the PF device: when a tqp is allocated to a VF, it
   is still marked as belonging to the PF, which affects the correctness
   of the TQP_MAP configuration.

This patch fixes it with the following modifications (a sketch of the
corrected mapping follows the list):
1. Clear the promisc_bcast_en bit of all VFs in the initialization of
   the PF device.
2. Fix the marking issue to ensure a correct TQP_MAP configuration.
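
For illustration only (not the driver code), a minimal standalone C
sketch of the corrected marking loop; HNS3_MAX_TQP_NUM_PER_FUNC is
modeled by an assumed MAX_TQP_PER_FUNC value:

#include <stdint.h>
#include <stdio.h>

#define MAX_TQP_PER_FUNC 8 /* assumed per-function queue count */

/* Corrected rule: only function 0 is the PF; queues handed to any
 * other function must be marked as VF-owned in the TQP_MAP. */
static void map_tqps(uint16_t total_tqps)
{
	uint16_t num_funcs =
		(total_tqps + MAX_TQP_PER_FUNC - 1) / MAX_TQP_PER_FUNC;
	uint16_t tqp_id = 0;

	for (uint16_t func_id = 0; func_id < num_funcs; func_id++) {
		int is_pf = (func_id == 0); /* the fixed judgement */

		for (int i = 0;
		     i < MAX_TQP_PER_FUNC && tqp_id < total_tqps;
		     i++, tqp_id++)
			printf("tqp %2d -> func %d (%s)\n", tqp_id,
			       func_id, is_pf ? "PF" : "VF");
	}
}

int main(void)
{
	map_tqps(16); /* 8 queues to the PF, 8 to VF function 1 */
	return 0;
}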

Fixes: d51867db65c1 ("net/hns3: add initialization")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 32 +++++++++++++++++++++++++++++++-
 drivers/net/hns3/hns3_ethdev.h |  1 +
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 800fa47cc..ca8718000 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2408,6 +2408,7 @@ hns3_query_pf_resource(struct hns3_hw *hw)
 	hw->total_tqps_num = rte_le_to_cpu_16(req->tqp_num);
 	pf->pkt_buf_size = rte_le_to_cpu_16(req->buf_size) << HNS3_BUF_UNIT_S;
 	hw->tqps_num = RTE_MIN(hw->total_tqps_num, HNS3_MAX_TQP_NUM_PER_FUNC);
+	pf->func_num = rte_le_to_cpu_16(req->pf_own_fun_number);
 
 	if (req->tx_buf_size)
 		pf->tx_buf_size =
@@ -2684,6 +2685,7 @@ hns3_map_tqp(struct hns3_hw *hw)
 	uint16_t tqps_num = hw->total_tqps_num;
 	uint16_t func_id;
 	uint16_t tqp_id;
+	bool is_pf;
 	int num;
 	int ret;
 	int i;
@@ -2695,10 +2697,11 @@ hns3_map_tqp(struct hns3_hw *hw)
 	tqp_id = 0;
 	num = DIV_ROUND_UP(hw->total_tqps_num, HNS3_MAX_TQP_NUM_PER_FUNC);
 	for (func_id = 0; func_id < num; func_id++) {
+		is_pf = func_id == 0 ? true : false;
 		for (i = 0;
 		     i < HNS3_MAX_TQP_NUM_PER_FUNC && tqp_id < tqps_num; i++) {
 			ret = hns3_map_tqps_to_func(hw, func_id, tqp_id++, i,
-						    true);
+						    is_pf);
 			if (ret)
 				return ret;
 		}
@@ -3599,6 +3602,26 @@ hns3_set_promisc_mode(struct hns3_hw *hw, bool en_uc_pmc, bool en_mc_pmc)
 	return 0;
 }
 
+static int
+hns3_clear_all_vfs_promisc_mode(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_promisc_param param;
+	uint16_t func_id;
+	int ret;
+
+	/* func_id 0 denotes the PF; the VFs start from 1 */
+	for (func_id = 1; func_id < pf->func_num; func_id++) {
+		hns3_promisc_param_init(&param, false, false, false, func_id);
+		ret = hns3_cmd_set_promisc_mode(hw, &param);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 static int
 hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 {
@@ -3886,6 +3909,13 @@ hns3_init_hardware(struct hns3_adapter *hns)
 		goto err_mac_init;
 	}
 
+	ret = hns3_clear_all_vfs_promisc_mode(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to clear all vfs promisc mode: %d",
+			     ret);
+		goto err_mac_init;
+	}
+
 	ret = hns3_init_vlan_config(hns);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Failed to init vlan: %d", ret);
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 2aa4c3cd7..d4a03065f 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -464,6 +464,7 @@ struct hns3_mp_param {
 struct hns3_pf {
 	struct hns3_adapter *adapter;
 	bool is_main_pf;
+	uint16_t func_num; /* num functions of this pf, including pf and vfs */
 
 	uint32_t pkt_buf_size; /* Total pf buf size for tx/rx */
 	uint32_t tx_buf_size; /* Tx buffer size for each TC */
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 07/11] net/hns3: fix segment error when closing the port
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (5 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 06/11] net/hns3: fix Rx queue search miss RAS err when recv BC pkt Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 08/11] net/hns3: fix ring vector related mailbox command format Wei Hu (Xavier)
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: Hongbo Zheng <zhenghongbo3@huawei.com>

Currently there is a certain probability of a segmentation fault during
concurrent reset while the port is closing.
The calltrace info:

This GDB was configured as "aarch64-redhat-linux-gnu".
Reading symbols from /usr/app/testpmd...(no debugging symbols found)...
 done.
[New LWP 98204]
[New LWP 98203]
[New LWP 98206]
[New LWP 98205]
[New LWP 98207]
[New LWP 98208]
Missing separate debuginfo for /root/lib/libnuma.so.1
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/app/testpmd --log-level=6 --socket-mem 16'.
Program terminated with signal 11, Segmentation fault.
Missing separate debuginfos, use:
 debuginfo-install glibc-2.17-260.el7.aarch64
(gdb) bt
in hns3vf_service_handler ()
1  0x00000000006988b8 in eal_alarm_callback ()
2  0x00000000006969b4 in eal_intr_thread_main ()
3  0x0000ffffb08d6c48 in start_thread () from /lib64/libpthread.so.0
4  0x0000ffffb0828600 in thread_start () from /lib64/libc.so.6
(gdb)

The reset process may arm the already cancelled link status timer
regardless of whether the current port is started or stopped. To solve
this problem, this patch checks the current network port state before
starting the timer: only a port in the running state starts the link
status timer. This prevents the timer callback from accessing a null
pointer and causing the segmentation fault (a sketch of the rule follows
below).
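
For illustration only, a minimal standalone C sketch of the ownership
rule the patch establishes; the state values are simplified and prints
stand in for the real alarm calls:

#include <stdio.h>

/* The alarm is armed only in dev_start and cancelled in dev_stop, so
 * reset paths may touch it only when the port is actually running. */
enum adapter_state { NIC_CONFIGURED, NIC_STARTED, NIC_CLOSED };

static void stop_service(enum adapter_state state)
{
	if (state == NIC_STARTED)
		printf("cancel the periodic service alarm\n");
	else
		printf("port not started: alarm not armed, skip\n");
}

int main(void)
{
	/* A reset racing with port close no longer cancels (or re-arms)
	 * a timer that was already torn down with the port. */
	stop_service(NIC_CLOSED);
	stop_service(NIC_STARTED);
	return 0;
}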

Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    | 11 +++++++----
 drivers/net/hns3/hns3_ethdev_vf.c |  7 +++++--
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index ca8718000..b05a55781 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -4197,6 +4197,7 @@ hns3_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	hns3_set_rxtx_function(dev);
 	hns3_mp_req_start_rxtx(dev);
+	rte_eal_alarm_set(HNS3_SERVICE_INTERVAL, hns3_service_handler, dev);
 
 	hns3_info(hw, "hns3 dev start successful!");
 	return 0;
@@ -4279,6 +4280,7 @@ hns3_dev_stop(struct rte_eth_dev *dev)
 		hns3_dev_release_mbufs(hns);
 		hw->adapter_state = HNS3_NIC_CONFIGURED;
 	}
+	rte_eal_alarm_cancel(hns3_service_handler, dev);
 	rte_spinlock_unlock(&hw->lock);
 	hns3_unmap_rx_interrupt(dev);
 }
@@ -4301,7 +4303,6 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
 	hw->adapter_state = HNS3_NIC_CLOSING;
 	hns3_reset_abort(hns);
 	hw->adapter_state = HNS3_NIC_CLOSED;
-	rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 
 	hns3_configure_all_mc_mac_addr(hns, true);
 	hns3_remove_all_vlan_table(hns);
@@ -4760,7 +4761,8 @@ hns3_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
+	if (hw->adapter_state == HNS3_NIC_STARTED)
+		rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 	hw->mac.link_status = ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
@@ -4801,7 +4803,9 @@ hns3_start_service(struct hns3_adapter *hns)
 	eth_dev = &rte_eth_devices[hw->data->port_id];
 	hns3_set_rxtx_function(eth_dev);
 	hns3_mp_req_start_rxtx(eth_dev);
-	hns3_service_handler(eth_dev);
+	if (hw->adapter_state == HNS3_NIC_STARTED)
+		hns3_service_handler(eth_dev);
+
 	return 0;
 }
 
@@ -5059,7 +5063,6 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 		hns3_notify_reset_ready(hw, false);
 	}
 
-	rte_eal_alarm_set(HNS3_SERVICE_INTERVAL, hns3_service_handler, eth_dev);
 	hns3_info(hw, "hns3 dev initialization successful!");
 	return 0;
 
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 92cf7ee99..35243b2be 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1559,6 +1559,7 @@ hns3vf_dev_start(struct rte_eth_dev *dev)
 	hns3_set_rxtx_function(dev);
 	hns3_mp_req_start_rxtx(dev);
 	rte_eal_alarm_set(HNS3VF_SERVICE_INTERVAL, hns3vf_service_handler, dev);
+
 	return ret;
 }
 
@@ -1671,7 +1672,8 @@ hns3vf_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	rte_eal_alarm_cancel(hns3vf_service_handler, eth_dev);
+	if (hw->adapter_state == HNS3_NIC_STARTED)
+		rte_eal_alarm_cancel(hns3vf_service_handler, eth_dev);
 	hw->mac.link_status = ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
@@ -1709,8 +1711,9 @@ hns3vf_start_service(struct hns3_adapter *hns)
 	eth_dev = &rte_eth_devices[hw->data->port_id];
 	hns3_set_rxtx_function(eth_dev);
 	hns3_mp_req_start_rxtx(eth_dev);
+	if (hw->adapter_state == HNS3_NIC_STARTED)
+		hns3vf_service_handler(eth_dev);
 
-	hns3vf_service_handler(eth_dev);
 	return 0;
 }
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 08/11] net/hns3: fix ring vector related mailbox command format
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (6 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 07/11] net/hns3: fix segment error when closing the port Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 09/11] net/hns3: fix dumping VF register information Wei Hu (Xavier)
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

The format of the ring vector related mailbox commands between the driver
and the firmware differs from that of the other mailbox commands in the
hns3 network engine.

This patch fixes the erroneous mailbox command format for the ring vector
messages; the related command opcodes are as below (a layout sketch
follows the list):
HNS3_MBX_MAP_RING_TO_VECTOR
HNS3_MBX_UNMAP_RING_TO_VECTOR
HNS3_MBX_GET_RING_VECTOR_MAP
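
For illustration only, a minimal standalone C sketch of the two message
layouts; the byte values and the ordinary payload offset are simplified
assumptions, not the firmware specification:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MSG_LEN 16
#define CMD_CODE_OFFSET 2 /* assumed payload offset of ordinary messages */

/* Ordinary messages: msg[0] = code, msg[1] = subcode, payload from
 * msg[2]. Ring-vector messages carry no subcode, so their payload
 * must start at msg[1] instead. */
static void build_msg(uint8_t *msg, uint8_t code, uint8_t subcode,
		      const uint8_t *data, size_t len, int is_ring_vector)
{
	memset(msg, 0, MSG_LEN);
	msg[0] = code;
	if (!is_ring_vector)
		msg[1] = subcode;
	memcpy(&msg[is_ring_vector ? 1 : CMD_CODE_OFFSET], data, len);
}

int main(void)
{
	const uint8_t payload[] = { 0xaa, 0xbb };
	uint8_t msg[MSG_LEN];

	build_msg(msg, 5, 1, payload, sizeof(payload), 0);
	printf("ordinary:    %02x %02x %02x %02x\n",
	       msg[0], msg[1], msg[2], msg[3]);
	build_msg(msg, 9, 0, payload, sizeof(payload), 1);
	printf("ring-vector: %02x %02x %02x %02x\n",
	       msg[0], msg[1], msg[2], msg[3]);
	return 0;
}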

Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_mbx.c | 14 +++++++++++---
 drivers/net/hns3/hns3_mbx.h |  1 +
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 26807bc4b..0d03f5064 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -150,6 +150,8 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
 {
 	struct hns3_mbx_vf_to_pf_cmd *req;
 	struct hns3_cmd_desc desc;
+	bool is_ring_vector_msg;
+	int offset;
 	int ret;
 
 	req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
@@ -164,9 +166,15 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
 
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
 	req->msg[0] = code;
-	req->msg[1] = subcode;
-	if (msg_data)
-		memcpy(&req->msg[HNS3_CMD_CODE_OFFSET], msg_data, msg_len);
+	is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) ||
+			     (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) ||
+			     (code == HNS3_MBX_GET_RING_VECTOR_MAP);
+	if (!is_ring_vector_msg)
+		req->msg[1] = subcode;
+	if (msg_data) {
+		offset = is_ring_vector_msg ? 1 : HNS3_CMD_CODE_OFFSET;
+		memcpy(&req->msg[offset], msg_data, msg_len);
+	}
 
 	/* synchronous send */
 	if (need_resp) {
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index 3722c8760..b01eaacc3 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -41,6 +41,7 @@ enum HNS3_MBX_OPCODE {
 	HNS3_MBX_GET_QID_IN_PF,         /* (VF -> PF) get queue id in pf */
 
 	HNS3_MBX_HANDLE_VF_TBL = 38,    /* (VF -> PF) store/clear hw cfg tbl */
+	HNS3_MBX_GET_RING_VECTOR_MAP,   /* (VF -> PF) get ring-to-vector map */
 	HNS3_MBX_PUSH_LINK_STATUS = 201, /* (IMP -> PF) get port link status */
 };
 
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 09/11] net/hns3: fix dumping VF register information
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (7 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 08/11] net/hns3: fix ring vector related mailbox command format Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 10/11] net/hns3: fix link status when failure in issuing command Wei Hu (Xavier)
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently, when the API interface named rte_eth_dev_get_reg_info is called
by upper applications on a VF device, it returns an error.

We can read registers directly to get ring and interrupt related
information in the hns3 PF/VF PMD driver. But some other internal table
entries and common configuration information can only be obtained through
the command interface between the driver and the firmware in the PF
driver, and the VF driver does not have the related access permission.

This patch fixes it by not fetching this information through the command
interface when running on a VF device in the 'get_reg' ops implementation
function (a sketch of the length computation follows below).
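
For illustration only, a minimal standalone C sketch of the fixed length
computation; the byte counts are made-up example values:

#include <stdint.h>
#include <stdio.h>

/* Every function can dump the directly readable register blocks, but
 * only the PF may add the regions that must be queried through the
 * firmware command interface. */
static uint32_t regs_length(int is_vf, uint32_t direct_len,
			    uint32_t num_32bit, uint32_t num_64bit)
{
	uint32_t len = direct_len;

	if (!is_vf)
		len += num_32bit * sizeof(uint32_t) +
		       num_64bit * sizeof(uint64_t);
	return len;
}

int main(void)
{
	printf("PF: %u bytes, VF: %u bytes\n",
	       regs_length(0, 1024, 100, 50),
	       regs_length(1, 1024, 100, 50));
	return 0;
}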

Fixes: 936eda25e8da ("net/hns3: support dump register")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_regs.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index 23405030e..a3f2a51f9 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -118,15 +118,9 @@ hns3_get_regs_length(struct hns3_hw *hw, uint32_t *length)
 	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
 	int cmdq_lines, common_lines, ring_lines, tqp_intr_lines;
 	uint32_t regs_num_32_bit, regs_num_64_bit;
+	uint32_t len;
 	int ret;
 
-	ret = hns3_get_regs_num(hw, &regs_num_32_bit, &regs_num_64_bit);
-	if (ret) {
-		hns3_err(hw, "Get register number failed, ret = %d.",
-			 ret);
-		return -ENOTSUP;
-	}
-
 	cmdq_lines = sizeof(cmdq_reg_addrs) / REG_LEN_PER_LINE + 1;
 	if (hns->is_vf)
 		common_lines =
@@ -136,11 +130,21 @@ hns3_get_regs_length(struct hns3_hw *hw, uint32_t *length)
 	ring_lines = sizeof(ring_reg_addrs) / REG_LEN_PER_LINE + 1;
 	tqp_intr_lines = sizeof(tqp_intr_reg_addrs) / REG_LEN_PER_LINE + 1;
 
-	*length = (cmdq_lines + common_lines + ring_lines * hw->tqps_num +
-		   tqp_intr_lines * hw->num_msi) * REG_LEN_PER_LINE +
-		  regs_num_32_bit * sizeof(uint32_t) +
-		  regs_num_64_bit * sizeof(uint64_t);
+	len = (cmdq_lines + common_lines + ring_lines * hw->tqps_num +
+	      tqp_intr_lines * hw->num_msi) * REG_LEN_PER_LINE;
 
+	if (!hns->is_vf) {
+		ret = hns3_get_regs_num(hw, &regs_num_32_bit, &regs_num_64_bit);
+		if (ret) {
+			hns3_err(hw, "Get register number failed, ret = %d.",
+				 ret);
+			return -ENOTSUP;
+		}
+		len += regs_num_32_bit * sizeof(uint32_t) +
+		       regs_num_64_bit * sizeof(uint64_t);
+	}
+
+	*length = len;
 	return 0;
 }
 
@@ -346,6 +350,9 @@ hns3_get_regs(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
 	/* fetching per-PF registers values from PF PCIe register space */
 	hns3_direct_access_regs(hw, data);
 
+	if (hns->is_vf)
+		return 0;
+
 	ret = hns3_get_regs_num(hw, &regs_num_32_bit, &regs_num_64_bit);
 	if (ret) {
 		hns3_err(hw, "Get register number failed, ret = %d", ret);
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 10/11] net/hns3: fix link status when failure in issuing command
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (8 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 09/11] net/hns3: fix dumping VF register information Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 11/11] net/hns3: fix triggering reset proceduce in slave process Wei Hu (Xavier)
  2020-01-14 14:33 ` [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Ferruh Yigit
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently, the hns3 PMD driver issues a command to the firmware to get
the link status information.

When the driver fails in calling the internal interface function named
hns3_cmd_send to query the status from the firmware for some reason, the
link status reported by the driver should be down.

Fixes: 59fad0f32135 ("net/hns3: support link update operation")
Cc: stable@dpdk.org

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index b05a55781..9866d147b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -3836,7 +3836,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret) {
 		hns3_err(hw, "get link status cmd failed %d", ret);
-		return ret;
+		return ETH_LINK_DOWN;
 	}
 
 	req = (struct hns3_link_status_cmd *)desc.data;
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 11/11] net/hns3: fix triggering reset procedure in slave process
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (9 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 10/11] net/hns3: fix link status when failure in issuing command Wei Hu (Xavier)
@ 2020-01-09  3:15 ` Wei Hu (Xavier)
  2020-01-14 14:33 ` [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Ferruh Yigit
  11 siblings, 0 replies; 13+ messages in thread
From: Wei Hu (Xavier) @ 2020-01-09  3:15 UTC (permalink / raw)
  To: dev

From: Chengwen Feng <fengchengwen@huawei.com>

Currently, reset related operations can only be performed in the primary
process and are not allowed in the slave process in the hns3 PMD driver.

In the internal function interface named hns3_cmd_send, which is used for
communication between the driver and the firmware, if a wrong head value
is detected in the static subfunction hns3_cmd_csq_clean, the driver
triggers a function level reset to make the hardware work normally again.

This patch adds a check to prevent triggering the reset procedure in the
slave process to avoid failure (a sketch of the guard follows below).
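
For illustration only, a minimal standalone C sketch of the added guard;
the process types are simplified and prints stand in for the real reset
scheduling:

#include <errno.h>
#include <stdio.h>

enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

/* On a corrupt command-queue head, only the primary process may
 * disable the command queue and schedule the delayed reset; a
 * secondary process just reports the I/O error. */
static int handle_bad_csq_head(enum proc_type proc)
{
	fprintf(stderr, "wrong cmd head detected\n");
	if (proc == PROC_PRIMARY)
		fprintf(stderr, "primary: scheduling delayed reset\n");
	return -EIO;
}

int main(void)
{
	handle_bad_csq_head(PROC_SECONDARY); /* no reset from secondary */
	handle_bad_csq_head(PROC_PRIMARY);
	return 0;
}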

Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 65a5af8e4..5ec3dfe01 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -215,12 +215,12 @@ hns3_cmd_csq_clean(struct hns3_hw *hw)
 	head = hns3_read_dev(hw, HNS3_CMDQ_TX_HEAD_REG);
 
 	if (!is_valid_csq_clean_head(csq, head)) {
-		struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
 		hns3_err(hw, "wrong cmd head (%u, %u-%u)", head,
 			    csq->next_to_use, csq->next_to_clean);
-		rte_atomic16_set(&hw->reset.disable_cmd, 1);
-
-		hns3_schedule_delayed_reset(hns);
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			rte_atomic16_set(&hw->reset.disable_cmd, 1);
+			hns3_schedule_delayed_reset(HNS3_DEV_HW_TO_ADAPTER(hw));
+		}
 
 		return -EIO;
 	}
-- 
2.23.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver
  2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
                   ` (10 preceding siblings ...)
  2020-01-09  3:15 ` [dpdk-dev] [PATCH 11/11] net/hns3: fix triggering reset procedure in slave process Wei Hu (Xavier)
@ 2020-01-14 14:33 ` Ferruh Yigit
  11 siblings, 0 replies; 13+ messages in thread
From: Ferruh Yigit @ 2020-01-14 14:33 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev

On 1/9/2020 3:15 AM, Wei Hu (Xavier) wrote:
> This series are updates and bugfixes for hns3 ethernet PMD driver.
> 
> Chengwen Feng (1):
>   net/hns3: fix triggering reset procedure in slave process
> 
> Hongbo Zheng (1):
>   net/hns3: fix segment error when closing the port
> 
> Wei Hu (Xavier) (8):
>   net/hns3: support different numbered Rx and Tx queues
>   net/hns3: support setting VF MAC address by PF driver
>   net/hns3: remove io rmb call in Rx operation
>   net/hns3: add free thresh in Rx operation
>   net/hns3: fix Rx queue search miss RAS err when recv BC pkt
>   net/hns3: fix ring vector related mailbox command format
>   net/hns3: fix dumping VF register information
>   net/hns3: fix link status when failure in issuing command
> 
> Yisen Zhuang (1):
>   net/hns3: reduce the judgements of free Tx ring space

Series applied to dpdk-next-net/master, thanks.

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2020-01-14 14:33 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-09  3:15 [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 01/11] net/hns3: support different numbered Rx and Tx queues Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 02/11] net/hns3: support setting VF MAC address by PF driver Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 03/11] net/hns3: reduce the judgements of free Tx ring space Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 04/11] net/hns3: remove io rmb call in Rx operation Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 05/11] net/hns3: add free thresh " Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 06/11] net/hns3: fix Rx queue search miss RAS err when recv BC pkt Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 07/11] net/hns3: fix segment error when closing the port Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 08/11] net/hns3: fix ring vector related mailbox command format Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 09/11] net/hns3: fix dumping VF register information Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 10/11] net/hns3: fix link status when failure in issuing command Wei Hu (Xavier)
2020-01-09  3:15 ` [dpdk-dev] [PATCH 11/11] net/hns3: fix triggering reset procedure in slave process Wei Hu (Xavier)
2020-01-14 14:33 ` [dpdk-dev] [PATCH 00/11] misc updates and fixes for hns3 PMD driver Ferruh Yigit
