DPDK patches and discussions
* [PATCH 0/8] some bugfixes for hns3
@ 2022-07-27 10:36 Dongdong Liu
  2022-07-27 10:36 ` [PATCH 1/8] net/hns3: fix segment fault when using SVE xmit Dongdong Liu
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas; +Cc: stable, Dongdong Liu

This patchset includes some bugfixes for hns3.

Chengwen Feng (6):
  net/hns3: fix segment fault when using SVE xmit
  net/hns3: fix next-to-use overflow when using SVE xmit
  net/hns3: fix next-to-use overflow when using simple xmit
  net/hns3: optimize SVE xmit performance
  net/hns3: fix segment fault when secondary process access FW
  net/hns3: revert optimize Tx performance

Huisong Li (2):
  net/hns3: delete rte unused tag
  net/hns3: fix uncleared hardware MAC statistics

 drivers/net/hns3/hns3_ethdev.c       |  10 ++-
 drivers/net/hns3/hns3_ethdev_vf.c    |  11 ++-
 drivers/net/hns3/hns3_rxtx.c         | 123 ++++++++++++++-------------
 drivers/net/hns3/hns3_rxtx_vec_sve.c |  32 +++----
 drivers/net/hns3/hns3_stats.c        |  26 ++----
 5 files changed, 109 insertions(+), 93 deletions(-)

--
2.22.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/8] net/hns3: fix segment fault when using SVE xmit
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  2022-07-27 10:36 ` [PATCH 2/8] net/hns3: fix next-to-use overflow " Dongdong Liu
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Chengwen Feng, Dongdong Liu, Yisen Zhuang, Lijun Ou,
	Min Hu (Connor)

From: Chengwen Feng <fengchengwen@huawei.com>

Currently, the number of Tx bytes sent is obtained by accumulating the
lengths of the batch of 'mbuf' packets handled in the current loop
iteration. Unfortunately, it uses svcntd() (which counts all lanes,
regardless of whether the corresponding lane is valid), which may run
past the valid entries and thus reference an invalid mbuf.

Because the SVE xmit algorithm applies only to single-mbuf packets, the
mbuf's data_len is equal to its pkt_len, so this patch fixes the issue
by using svaddv_u64(svbool_t pg, svuint64_t data_len), which adds only
the valid lanes.
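
As an illustration, the two summation strategies can be modeled in
plain scalar C (the names and the fixed lane count below are
hypothetical stand-ins for the SVE intrinsics, not the driver code):

```c
#include <stdint.h>
#include <stddef.h>

#define LANES 4 /* stand-in for svcntd() on a hypothetical SVE machine */

/* Buggy variant: walks all LANES entries, even those past nb_pkts.
 * In the real driver this dereferences mbuf pointers beyond the valid
 * batch; here it just folds garbage into the byte counter. */
static uint64_t sum_all_lanes(const uint64_t *pkt_len, size_t nb_pkts)
{
	(void)nb_pkts;              /* nb_pkts is ignored -- the bug */
	uint64_t bytes = 0;
	for (size_t i = 0; i < LANES; i++)
		bytes += pkt_len[i]; /* pkt_len[i] may be invalid */
	return bytes;
}

/* Fixed variant: mimics svaddv_u64(pg, data_len) -- only lanes where
 * the predicate (i < nb_pkts) is true contribute to the sum. */
static uint64_t sum_active_lanes(const uint64_t *pkt_len, size_t nb_pkts)
{
	uint64_t bytes = 0;
	for (size_t i = 0; i < LANES && i < nb_pkts; i++)
		bytes += pkt_len[i];
	return bytes;
}
```

With nb_pkts = 3 and four lanes, the predicated sum ignores the fourth,
invalid entry, while the unpredicated walk does not.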

Fixes: fdcd6a3e0246 ("net/hns3: add bytes stats")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 drivers/net/hns3/hns3_rxtx_vec_sve.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index be1fdbcdf0..b0dfb052bb 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -435,9 +435,8 @@ hns3_tx_fill_hw_ring_sve(struct hns3_tx_queue *txq,
 				offsets, svdup_n_u64(valid_bit));
 
 		/* Increment bytes counter */
-		uint32_t idx;
-		for (idx = 0; idx < svcntd(); idx++)
-			txq->basic_stats.bytes += pkts[idx]->pkt_len;
+		txq->basic_stats.bytes +=
+			(svaddv_u64(pg, data_len) >> HNS3_UINT16_BIT);
 
 		/* update index for next loop */
 		i += svcntd();
-- 
2.22.0



* [PATCH 2/8] net/hns3: fix next-to-use overflow when using SVE xmit
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
  2022-07-27 10:36 ` [PATCH 1/8] net/hns3: fix segment fault when using SVE xmit Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  2022-07-27 10:36 ` [PATCH 3/8] net/hns3: fix next-to-use overflow when using simple xmit Dongdong Liu
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Chengwen Feng, Dongdong Liu, Yisen Zhuang,
	Wei Hu (Xavier),
	Huisong Li

From: Chengwen Feng <fengchengwen@huawei.com>

If txq's next-to-use plus nb_pkts equals txq's nb_tx_desc when using
the SVE xmit algorithm, then txq's next-to-use will equal nb_tx_desc
after the xmit. This does not cause Tx exceptions, but it may affect
other ops that depend on this field, such as tx_descriptor_status.

This patch fixes it.
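
The boundary condition can be sketched with a small, self-contained C
model of the ring-index update (hypothetical helper names; the real
logic lives in hns3_xmit_fixed_burst_vec_sve):

```c
#include <stdint.h>

/* Minimal model of the ring-index update: returns the new next_to_use
 * after sending nb_pkts descriptors into a ring of nb_tx_desc BDs. */
static uint16_t advance_old(uint16_t next_to_use, uint16_t nb_tx_desc,
			    uint16_t nb_pkts)
{
	uint16_t nb_tx = 0;
	if (next_to_use + nb_pkts > nb_tx_desc) {  /* old test: '>' */
		nb_tx = nb_tx_desc - next_to_use;
		next_to_use = 0;
	}
	return next_to_use + (nb_pkts - nb_tx);
}

static uint16_t advance_fixed(uint16_t next_to_use, uint16_t nb_tx_desc,
			      uint16_t nb_pkts)
{
	uint16_t nb_tx = 0;
	if (next_to_use + nb_pkts >= nb_tx_desc) { /* fixed test: '>=' */
		nb_tx = nb_tx_desc - next_to_use;
		next_to_use = 0;
	}
	return next_to_use + (nb_pkts - nb_tx);
}
```

With the old '>' test, a burst that exactly fills the ring to its end
leaves next_to_use equal to nb_tx_desc instead of wrapping to 0; a
partial or overlapping burst behaves identically in both variants.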

Fixes: f0c243a6cb6f ("net/hns3: support SVE Tx")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 drivers/net/hns3/hns3_rxtx_vec_sve.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index b0dfb052bb..f09a81dbd5 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -464,14 +464,16 @@ hns3_xmit_fixed_burst_vec_sve(void *__restrict tx_queue,
 		return 0;
 	}
 
-	if (txq->next_to_use + nb_pkts > txq->nb_tx_desc) {
+	if (txq->next_to_use + nb_pkts >= txq->nb_tx_desc) {
 		nb_tx = txq->nb_tx_desc - txq->next_to_use;
 		hns3_tx_fill_hw_ring_sve(txq, tx_pkts, nb_tx);
 		txq->next_to_use = 0;
 	}
 
-	hns3_tx_fill_hw_ring_sve(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
-	txq->next_to_use += nb_pkts - nb_tx;
+	if (nb_pkts > nb_tx) {
+		hns3_tx_fill_hw_ring_sve(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
+		txq->next_to_use += nb_pkts - nb_tx;
+	}
 
 	txq->tx_bd_ready -= nb_pkts;
 	hns3_write_txq_tail_reg(txq, nb_pkts);
-- 
2.22.0



* [PATCH 3/8] net/hns3: fix next-to-use overflow when using simple xmit
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
  2022-07-27 10:36 ` [PATCH 1/8] net/hns3: fix segment fault when using SVE xmit Dongdong Liu
  2022-07-27 10:36 ` [PATCH 2/8] net/hns3: fix next-to-use overflow " Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  2022-07-27 10:36 ` [PATCH 4/8] net/hns3: optimize SVE xmit performance Dongdong Liu
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Chengwen Feng, Dongdong Liu, Yisen Zhuang,
	Wei Hu (Xavier),
	Huisong Li

From: Chengwen Feng <fengchengwen@huawei.com>

If txq's next-to-use plus nb_pkts equals txq's nb_tx_desc when using
the simple xmit algorithm, then txq's next-to-use will equal nb_tx_desc
after the xmit. This does not cause Tx exceptions, but it may affect
other ops that depend on this field, such as tx_descriptor_status.

This patch fixes it.

Fixes: 7ef933908f04 ("net/hns3: add simple Tx path")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 95f711e7eb..bb06038848 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4126,14 +4126,16 @@ hns3_xmit_pkts_simple(void *tx_queue,
 	}
 
 	txq->tx_bd_ready -= nb_pkts;
-	if (txq->next_to_use + nb_pkts > txq->nb_tx_desc) {
+	if (txq->next_to_use + nb_pkts >= txq->nb_tx_desc) {
 		nb_tx = txq->nb_tx_desc - txq->next_to_use;
 		hns3_tx_fill_hw_ring(txq, tx_pkts, nb_tx);
 		txq->next_to_use = 0;
 	}
 
-	hns3_tx_fill_hw_ring(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
-	txq->next_to_use += nb_pkts - nb_tx;
+	if (nb_pkts > nb_tx) {
+		hns3_tx_fill_hw_ring(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
+		txq->next_to_use += nb_pkts - nb_tx;
+	}
 
 	hns3_write_txq_tail_reg(txq, nb_pkts);
 
-- 
2.22.0



* [PATCH 4/8] net/hns3: optimize SVE xmit performance
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
                   ` (2 preceding siblings ...)
  2022-07-27 10:36 ` [PATCH 3/8] net/hns3: fix next-to-use overflow when using simple xmit Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  2022-07-27 10:36 ` [PATCH 5/8] net/hns3: fix segment fault when secondary process access FW Dongdong Liu
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Chengwen Feng, Dongdong Liu, Yisen Zhuang

From: Chengwen Feng <fengchengwen@huawei.com>

This patch optimizes the performance of the SVE xmit algorithm, giving
about a 1% performance gain under 64B macfwd.
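
The restructuring hoists the invariant lane count out of the loop and
re-derives the predicate at the top of each iteration instead of
testing it at the bottom. A scalar sketch of the loop shape (names are
illustrative, not the driver's):

```c
#include <stdint.h>

/* Scalar sketch of the restructured loop: cnt stands in for svcntd()
 * and is computed once before the loop; each iteration handles
 * min(cnt, nb_pkts - i) elements, so the count of iterations is
 * ceil(nb_pkts / cnt). */
static uint32_t count_iterations(uint32_t nb_pkts, uint32_t cnt)
{
	uint32_t iterations = 0;
	uint32_t i;

	for (i = 0; i < nb_pkts; i += cnt) {
		/* pg = svwhilelt_b64_u32(i, nb_pkts) would be computed
		 * here, covering the (possibly partial) tail chunk */
		iterations++;
	}
	return iterations;
}
```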

Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 drivers/net/hns3/hns3_rxtx_vec_sve.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index f09a81dbd5..6f23ba674d 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -389,10 +389,12 @@ hns3_tx_fill_hw_ring_sve(struct hns3_tx_queue *txq,
 				   HNS3_UINT32_BIT;
 	svuint64_t base_addr, buf_iova, data_off, data_len, addr;
 	svuint64_t offsets = svindex_u64(0, BD_SIZE);
-	uint32_t i = 0;
-	svbool_t pg = svwhilelt_b64_u32(i, nb_pkts);
+	uint32_t cnt = svcntd();
+	svbool_t pg;
+	uint32_t i;
 
-	do {
+	for (i = 0; i < nb_pkts; /* i is updated in the inner loop */) {
+		pg = svwhilelt_b64_u32(i, nb_pkts);
 		base_addr = svld1_u64(pg, (uint64_t *)pkts);
 		/* calc mbuf's field buf_iova address */
 		buf_iova = svadd_n_u64_z(pg, base_addr,
@@ -439,12 +441,11 @@ hns3_tx_fill_hw_ring_sve(struct hns3_tx_queue *txq,
 			(svaddv_u64(pg, data_len) >> HNS3_UINT16_BIT);
 
 		/* update index for next loop */
-		i += svcntd();
-		pkts += svcntd();
-		txdp += svcntd();
-		tx_entry += svcntd();
-		pg = svwhilelt_b64_u32(i, nb_pkts);
-	} while (svptest_any(svptrue_b64(), pg));
+		i += cnt;
+		pkts += cnt;
+		txdp += cnt;
+		tx_entry += cnt;
+	}
 }
 
 static uint16_t
-- 
2.22.0



* [PATCH 5/8] net/hns3: fix segment fault when secondary process access FW
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
                   ` (3 preceding siblings ...)
  2022-07-27 10:36 ` [PATCH 4/8] net/hns3: optimize SVE xmit performance Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  2022-07-27 10:36 ` [PATCH 6/8] net/hns3: delete rte unused tag Dongdong Liu
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Chengwen Feng, Dongdong Liu, Yisen Zhuang,
	Anatoly Burakov, Ferruh Yigit, Huisong Li, Wei Hu (Xavier),
	Hao Chen, Min Hu (Connor)

From: Chengwen Feng <fengchengwen@huawei.com>

Currently, to avoid missing reset interrupts and to identify them
quickly, the following logic is built into the FW (firmware) command
interface hns3_cmd_send: if an unprocessed interrupt exists (detected
by checking the reset registers), the related reset task is scheduled.

A secondary process may invoke the hns3_cmd_send interface (e.g. when
using proc-info to query some stats). Unfortunately, secondary
processes do not support reset processing, and a segment fault may
occur if one schedules the reset task.

This patch fixes it by limiting the checking and scheduling of resets
to the primary process.
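
The shape of the guard can be modeled in isolation (the enum and flag
below are stand-ins for rte_eal_process_type() and the reset machinery,
purely for illustration):

```c
#include <stdbool.h>

/* Simplified model of the fix: the register check, which may schedule
 * a reset task, runs only in the primary process. */
enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

static bool reset_task_scheduled;

static void check_event_cause(void)
{
	/* In the real driver this reads reset registers and may call
	 * hns3_schedule_reset(); a secondary process cannot handle
	 * that, hence the guard below. */
	reset_task_scheduled = true;
}

static void is_reset_pending(enum proc_type proc)
{
	if (proc == PROC_PRIMARY)
		check_event_cause();
	/* ...then inspect the pending-reset state as before... */
}
```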

Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    | 10 +++++++++-
 drivers/net/hns3/hns3_ethdev_vf.c | 11 +++++++++--
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 6b1d1a5fb1..aedd17ef26 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5601,7 +5601,15 @@ hns3_is_reset_pending(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	enum hns3_reset_level reset;
 
-	hns3_check_event_cause(hns, NULL);
+	/*
+	 * Check the registers to confirm whether there is reset pending.
+	 * Note: This check may lead to schedule reset task, but only primary
+	 *       process can process the reset event. Therefore, limit the
+	 *       checking under only primary process.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		hns3_check_event_cause(hns, NULL);
+
 	reset = hns3_get_reset_level(hns, &hw->reset.pending);
 	if (reset != HNS3_NONE_RESET && hw->reset.level != HNS3_NONE_RESET &&
 	    hw->reset.level < reset) {
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 18504e6926..f3167523cf 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1863,8 +1863,15 @@ hns3vf_is_reset_pending(struct hns3_adapter *hns)
 	if (hw->reset.level == HNS3_VF_FULL_RESET)
 		return false;
 
-	/* Check the registers to confirm whether there is reset pending */
-	hns3vf_check_event_cause(hns, NULL);
+	/*
+	 * Check the registers to confirm whether there is reset pending.
+	 * Note: This check may lead to schedule reset task, but only primary
+	 *       process can process the reset event. Therefore, limit the
+	 *       checking under only primary process.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		hns3vf_check_event_cause(hns, NULL);
+
 	reset = hns3vf_get_reset_level(hw, &hw->reset.pending);
 	if (hw->reset.level != HNS3_NONE_RESET && reset != HNS3_NONE_RESET &&
 	    hw->reset.level < reset) {
-- 
2.22.0



* [PATCH 6/8] net/hns3: delete rte unused tag
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
                   ` (4 preceding siblings ...)
  2022-07-27 10:36 ` [PATCH 5/8] net/hns3: fix segment fault when secondary process access FW Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  2022-07-27 10:36 ` [PATCH 7/8] net/hns3: fix uncleared hardware MAC statistics Dongdong Liu
  2022-07-27 10:36 ` [PATCH 8/8] net/hns3: revert optimize Tx performance Dongdong Liu
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Huisong Li, Dongdong Liu, Yisen Zhuang, Chunsong Feng,
	Ferruh Yigit, Hao Chen, Min Hu (Connor)

From: Huisong Li <lihuisong@huawei.com>

The '__rte_unused' tag on the input parameter of 'hns3_mac_stats_reset'
is redundant. This patch removes the tag. In addition, this function is
meant to clear the MAC statistics, so taking 'struct hns3_hw' as the
input parameter is better than 'struct rte_eth_dev', and it also makes
the function easier to call.

Fixes: 8839c5e202f3 ("net/hns3: support device stats")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 drivers/net/hns3/hns3_stats.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 4ec0911522..2ec7a9635e 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -406,15 +406,6 @@ hns3_query_mac_stats_reg_num(struct hns3_hw *hw)
 	return 0;
 }
 
-static int
-hns3_query_update_mac_stats(struct rte_eth_dev *dev)
-{
-	struct hns3_adapter *hns = dev->data->dev_private;
-	struct hns3_hw *hw = &hns->hw;
-
-	return hns3_update_mac_stats(hw);
-}
-
 static int
 hns3_update_port_rpu_drop_stats(struct hns3_hw *hw)
 {
@@ -763,14 +754,13 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev)
 }
 
 static int
-hns3_mac_stats_reset(__rte_unused struct rte_eth_dev *dev)
+hns3_mac_stats_reset(struct hns3_hw *hw)
 {
-	struct hns3_adapter *hns = dev->data->dev_private;
-	struct hns3_hw *hw = &hns->hw;
 	struct hns3_mac_stats *mac_stats = &hw->mac_stats;
 	int ret;
 
-	ret = hns3_query_update_mac_stats(dev);
+	/* Clear hardware MAC statistics by reading it. */
+	ret = hns3_update_mac_stats(hw);
 	if (ret) {
 		hns3_err(hw, "Clear Mac stats fail : %d", ret);
 		return ret;
@@ -1063,8 +1053,7 @@ hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	hns3_tqp_basic_stats_get(dev, xstats, &count);
 
 	if (!hns->is_vf) {
-		/* Update Mac stats */
-		ret = hns3_query_update_mac_stats(dev);
+		ret = hns3_update_mac_stats(hw);
 		if (ret < 0) {
 			hns3_err(hw, "Update Mac stats fail : %d", ret);
 			rte_spinlock_unlock(&hw->stats_lock);
@@ -1482,8 +1471,7 @@ hns3_dev_xstats_reset(struct rte_eth_dev *dev)
 	if (hns->is_vf)
 		goto out;
 
-	/* HW registers are cleared on read */
-	ret = hns3_mac_stats_reset(dev);
+	ret = hns3_mac_stats_reset(hw);
 
 out:
 	rte_spinlock_unlock(&hw->stats_lock);
-- 
2.22.0



* [PATCH 7/8] net/hns3: fix uncleared hardware MAC statistics
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
                   ` (5 preceding siblings ...)
  2022-07-27 10:36 ` [PATCH 6/8] net/hns3: delete rte unused tag Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  2022-07-27 10:36 ` [PATCH 8/8] net/hns3: revert optimize Tx performance Dongdong Liu
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Huisong Li, Dongdong Liu, Yisen Zhuang, Min Hu (Connor),
	Ferruh Yigit, Hao Chen, Chunsong Feng

From: Huisong Li <lihuisong@huawei.com>

If the hns3 driver exits abnormally while packets are being sent and
received, the hardware MAC statistics are not cleared when the driver
is reloaded. The hardware MAC statistics therefore need to be cleared
during driver initialization.
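
The fix relies on the counters being clear-on-read: reading the MAC
statistics once at init discards any stale values. A minimal model of
that semantic (the register variable is a stand-in, not driver state):

```c
#include <stdint.h>

/* Model of a clear-on-read hardware counter: reading the register
 * returns its value and resets it to zero, which is why the fix simply
 * reads the MAC statistics once during hns3_stats_init(). */
static uint64_t hw_mac_reg = 1234; /* stale count from a previous run */

static uint64_t read_clear_on_read(uint64_t *reg)
{
	uint64_t val = *reg;

	*reg = 0;
	return val;
}
```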

Fixes: 8839c5e202f3 ("net/hns3: support device stats")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 drivers/net/hns3/hns3_stats.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 2ec7a9635e..bad65fcbed 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -1528,6 +1528,7 @@ hns3_tqp_stats_clear(struct hns3_hw *hw)
 int
 hns3_stats_init(struct hns3_hw *hw)
 {
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
 	int ret;
 
 	rte_spinlock_init(&hw->stats_lock);
@@ -1538,6 +1539,9 @@ hns3_stats_init(struct hns3_hw *hw)
 		return ret;
 	}
 
+	if (!hns->is_vf)
+		hns3_mac_stats_reset(hw);
+
 	return hns3_tqp_stats_init(hw);
 }
 
-- 
2.22.0



* [PATCH 8/8] net/hns3: revert optimize Tx performance
  2022-07-27 10:36 [PATCH 0/8] some bugfixes for hns3 Dongdong Liu
                   ` (6 preceding siblings ...)
  2022-07-27 10:36 ` [PATCH 7/8] net/hns3: fix uncleared hardware MAC statistics Dongdong Liu
@ 2022-07-27 10:36 ` Dongdong Liu
  7 siblings, 0 replies; 9+ messages in thread
From: Dongdong Liu @ 2022-07-27 10:36 UTC (permalink / raw)
  To: dev, andrew.rybchenko, ferruh.yigit, thomas
  Cc: stable, Chengwen Feng, Dongdong Liu, Yisen Zhuang, Min Hu (Connor)

From: Chengwen Feng <fengchengwen@huawei.com>

Tx performance deteriorates with larger packet sizes and larger
bursts. Optimizing these scenarios may take a long time, so this commit
reverts
commit 0b77e8f3d364 ("net/hns3: optimize Tx performance").

Fixes: 0b77e8f3d364 ("net/hns3: optimize Tx performance")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 115 ++++++++++++++++++-----------------
 1 file changed, 60 insertions(+), 55 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index bb06038848..169c058c95 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -3072,51 +3072,40 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	return 0;
 }
 
-static int
+static void
 hns3_tx_free_useless_buffer(struct hns3_tx_queue *txq)
 {
 	uint16_t tx_next_clean = txq->next_to_clean;
-	uint16_t tx_next_use = txq->next_to_use;
-	struct hns3_entry *tx_entry = &txq->sw_ring[tx_next_clean];
+	uint16_t tx_next_use   = txq->next_to_use;
+	uint16_t tx_bd_ready   = txq->tx_bd_ready;
+	uint16_t tx_bd_max     = txq->nb_tx_desc;
+	struct hns3_entry *tx_bak_pkt = &txq->sw_ring[tx_next_clean];
 	struct hns3_desc *desc = &txq->tx_ring[tx_next_clean];
-	uint16_t i;
-
-	if (tx_next_use >= tx_next_clean &&
-	    tx_next_use < tx_next_clean + txq->tx_rs_thresh)
-		return -1;
+	struct rte_mbuf *mbuf;
 
-	/*
-	 * All mbufs can be released only when the VLD bits of all
-	 * descriptors in a batch are cleared.
-	 */
-	for (i = 0; i < txq->tx_rs_thresh; i++) {
-		if (desc[i].tx.tp_fe_sc_vld_ra_ri &
-			rte_le_to_cpu_16(BIT(HNS3_TXD_VLD_B)))
-			return -1;
-	}
+	while ((!(desc->tx.tp_fe_sc_vld_ra_ri &
+		rte_cpu_to_le_16(BIT(HNS3_TXD_VLD_B)))) &&
+		tx_next_use != tx_next_clean) {
+		mbuf = tx_bak_pkt->mbuf;
+		if (mbuf) {
+			rte_pktmbuf_free_seg(mbuf);
+			tx_bak_pkt->mbuf = NULL;
+		}
 
-	for (i = 0; i < txq->tx_rs_thresh; i++) {
-		rte_pktmbuf_free_seg(tx_entry[i].mbuf);
-		tx_entry[i].mbuf = NULL;
+		desc++;
+		tx_bak_pkt++;
+		tx_next_clean++;
+		tx_bd_ready++;
+
+		if (tx_next_clean >= tx_bd_max) {
+			tx_next_clean = 0;
+			desc = txq->tx_ring;
+			tx_bak_pkt = txq->sw_ring;
+		}
 	}
 
-	/* Update numbers of available descriptor due to buffer freed */
-	txq->tx_bd_ready += txq->tx_rs_thresh;
-	txq->next_to_clean += txq->tx_rs_thresh;
-	if (txq->next_to_clean >= txq->nb_tx_desc)
-		txq->next_to_clean = 0;
-
-	return 0;
-}
-
-static inline int
-hns3_tx_free_required_buffer(struct hns3_tx_queue *txq, uint16_t required_bds)
-{
-	while (required_bds > txq->tx_bd_ready) {
-		if (hns3_tx_free_useless_buffer(txq) != 0)
-			return -1;
-	}
-	return 0;
+	txq->next_to_clean = tx_next_clean;
+	txq->tx_bd_ready   = tx_bd_ready;
 }
 
 int
@@ -4159,8 +4148,7 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t nb_tx;
 	uint16_t i;
 
-	if (txq->tx_bd_ready < txq->tx_free_thresh)
-		(void)hns3_tx_free_useless_buffer(txq);
+	hns3_tx_free_useless_buffer(txq);
 
 	tx_next_use   = txq->next_to_use;
 	tx_bd_max     = txq->nb_tx_desc;
@@ -4175,14 +4163,10 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nb_buf = tx_pkt->nb_segs;
 
 		if (nb_buf > txq->tx_bd_ready) {
-			/* Try to release the required MBUF, but avoid releasing
-			 * all MBUFs, otherwise, the MBUFs will be released for
-			 * a long time and may cause jitter.
-			 */
-			if (hns3_tx_free_required_buffer(txq, nb_buf) != 0) {
-				txq->dfx_stats.queue_full_cnt++;
-				goto end_of_tx;
-			}
+			txq->dfx_stats.queue_full_cnt++;
+			if (nb_tx == 0)
+				return 0;
+			goto end_of_tx;
 		}
 
 		/*
@@ -4598,22 +4582,43 @@ hns3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 static int
 hns3_tx_done_cleanup_full(struct hns3_tx_queue *txq, uint32_t free_cnt)
 {
-	uint16_t round_cnt;
+	uint16_t next_to_clean = txq->next_to_clean;
+	uint16_t next_to_use   = txq->next_to_use;
+	uint16_t tx_bd_ready   = txq->tx_bd_ready;
+	struct hns3_entry *tx_pkt = &txq->sw_ring[next_to_clean];
+	struct hns3_desc *desc = &txq->tx_ring[next_to_clean];
 	uint32_t idx;
 
 	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
 		free_cnt = txq->nb_tx_desc;
 
-	if (txq->tx_rs_thresh == 0)
-		return 0;
-
-	round_cnt = rounddown(free_cnt, txq->tx_rs_thresh);
-	for (idx = 0; idx < round_cnt; idx += txq->tx_rs_thresh) {
-		if (hns3_tx_free_useless_buffer(txq) != 0)
+	for (idx = 0; idx < free_cnt; idx++) {
+		if (next_to_clean == next_to_use)
+			break;
+		if (desc->tx.tp_fe_sc_vld_ra_ri &
+		    rte_cpu_to_le_16(BIT(HNS3_TXD_VLD_B)))
 			break;
+		if (tx_pkt->mbuf != NULL) {
+			rte_pktmbuf_free_seg(tx_pkt->mbuf);
+			tx_pkt->mbuf = NULL;
+		}
+		next_to_clean++;
+		tx_bd_ready++;
+		tx_pkt++;
+		desc++;
+		if (next_to_clean == txq->nb_tx_desc) {
+			tx_pkt = txq->sw_ring;
+			desc = txq->tx_ring;
+			next_to_clean = 0;
+		}
+	}
+
+	if (idx > 0) {
+		txq->next_to_clean = next_to_clean;
+		txq->tx_bd_ready = tx_bd_ready;
 	}
 
-	return idx;
+	return (int)idx;
 }
 
 int
-- 
2.22.0


