patches for DPDK stable branches
From: "Wei Hu (Xavier)" <huwei013@chinasoftinc.com>
To: <stable@dpdk.org>
Cc: <xavier.huwei@huawei.com>
Subject: [dpdk-stable] [PATCH 10/24] net/hns3: get Tx abnormal errors in xstats
Date: Mon, 17 Aug 2020 17:25:18 +0800	[thread overview]
Message-ID: <20200817092532.59530-11-huwei013@chinasoftinc.com> (raw)
In-Reply-To: <20200817092532.59530-1-huwei013@chinasoftinc.com>

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

[ upstream commit c4b7d6761d01d80cd975f47bf5c850721b891b61 ]

When an upper-layer application calls the rte_eth_tx_burst API function
to send multiple packets at a time in burst mode on the hns3 network
engine, some abnormal conditions can cause the driver to fail to program
the hardware to send the packets correctly.
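
As an illustration only (not part of the patch), a minimal sketch of the
calling pattern this refers to, assuming a port and Tx queue that have
already been configured and started; send_burst() is a hypothetical
helper name:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
send_burst(uint16_t port_id, uint16_t queue_id,
	   struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t nb_sent, i;

	/* Hand the whole burst to the PMD; it may stop early when one of
	 * the abnormal conditions counted by this patch is detected. */
	nb_sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

	/* Mbufs the driver did not accept are still owned by the
	 * application and must be freed (or retried) by the caller. */
	for (i = nb_sent; i < nb_pkts; i++)
		rte_pktmbuf_free(pkts[i]);
}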

This patch adds statistics counters for the abnormal errors of the Tx
data path to the extended device statistics. An upper-layer application
can read them by calling the rte_eth_xstats_get API function (see the
sketch after the item list below).

Note: when the rte_eth_tx_burst API function is called to send multiple
packets at a time in burst mode, the driver increments the relevant
error counter by one when the first abnormal error is detected and then
exits the transmit loop. In other words, even if several packets in the
burst would trigger abnormal errors, the relevant error counter is
increased by only one per burst; for example, a burst containing three
oversized packets increases TX_OVER_LENGTH_PKT_CNT by one, and
transmission stops at the first of them.

The Tx abnormal error statistic items are described in detail below:
 - TX_OVER_LENGTH_PKT_CNT
     Total number of packets whose length exceeds HNS3_MAX_FRAME_LEN,
     the maximum frame length supported by the driver.

 - TX_EXCEED_LIMITED_BD_PKT_CNT
     Total number of packets that need more buffer descriptors (BDs)
     than the hardware allows for a single packet.

 - TX_EXCEED_LIMITED_BD_PKT_REASSEMBLE_FAIL_CNT
     Total number of packets that exceed the hardware BD limit and for
     which reassembling the mbuf chain into fewer segments also fails.

 - TX_UNSUPPORTED_TUNNEL_PKT_CNT
     Total number of packets with an unsupported tunnel type. The
     unsupported tunnel types are vxlan_gpe, gtp, ipip and MPLSINUDP
     (a packet with an MPLS-in-UDP header, RFC 7510).

 - TX_QUEUE_FULL_CNT
     Total number of times the number of BDs available in the Tx ring
     was less than the number of BDs needed to process a packet.

 - TX_SHORT_PKT_PAD_FAIL_CNT
     Total number of packets shorter than the minimum packet size
     HNS3_MIN_PKT_SIZE that could not be padded with zeros.
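
For illustration only (not part of the patch), a minimal sketch of how an
application could read these counters through the generic xstats API. The
strstr() filter below only assumes that the per-queue xstats names contain
the counter strings listed above (the driver builds them with a "tx_q<N>"
queue prefix, as in hns3_dev_xstats_get_names):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static void
dump_tx_error_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *values = NULL;
	int nb, i;

	/* First call with a NULL array returns the number of xstats. */
	nb = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (nb <= 0)
		return;

	names = calloc(nb, sizeof(*names));
	values = calloc(nb, sizeof(*values));
	if (names == NULL || values == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, nb) != nb ||
	    rte_eth_xstats_get(port_id, values, nb) != nb)
		goto out;

	for (i = 0; i < nb; i++) {
		/* Match any of the Tx abnormal error counters by name. */
		if (strstr(names[i].name, "TX_OVER_LENGTH_PKT_CNT") ||
		    strstr(names[i].name, "TX_EXCEED_LIMITED_BD") ||
		    strstr(names[i].name, "TX_UNSUPPORTED_TUNNEL_PKT_CNT") ||
		    strstr(names[i].name, "TX_QUEUE_FULL_CNT") ||
		    strstr(names[i].name, "TX_SHORT_PKT_PAD_FAIL_CNT"))
			printf("%s: %" PRIu64 "\n",
			       names[i].name, values[i].value);
	}
out:
	free(names);
	free(values);
}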

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c  |  24 +++++--
 drivers/net/hns3/hns3_rxtx.h  |  48 ++++++++++++++
 drivers/net/hns3/hns3_stats.c | 115 +++++++++++++++++++++++++++-------
 3 files changed, 159 insertions(+), 28 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 734a11fdc..972dc7101 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1810,6 +1810,12 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	txq->configured = true;
 	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
+	txq->over_length_pkt_cnt = 0;
+	txq->exceed_limit_bd_pkt_cnt = 0;
+	txq->exceed_limit_bd_reassem_fail = 0;
+	txq->unsupported_tunnel_pkt_cnt = 0;
+	txq->queue_full_cnt = 0;
+	txq->pkt_padding_fail_cnt = 0;
 	rte_spinlock_lock(&hw->lock);
 	dev->data->tx_queues[idx] = txq;
 	rte_spinlock_unlock(&hw->lock);
@@ -2443,8 +2449,10 @@ hns3_parse_cksum(struct hns3_tx_queue *txq, uint16_t tx_desc_id,
 	if (m->ol_flags & PKT_TX_TUNNEL_MASK) {
 		(void)rte_net_get_ptype(m, hdr_lens, RTE_PTYPE_ALL_MASK);
 		if (hns3_parse_tunneling_params(txq, tx_desc_id, m->ol_flags,
-						hdr_lens))
+						hdr_lens)) {
+			txq->unsupported_tunnel_pkt_cnt++;
 			return -EINVAL;
+		}
 	}
 	/* Enable checksum offloading */
 	if (m->ol_flags & HNS3_TX_CKSUM_OFFLOAD_MASK)
@@ -2467,13 +2475,18 @@ hns3_check_non_tso_pkt(uint16_t nb_buf, struct rte_mbuf **m_seg,
 	 * If packet length is greater than HNS3_MAX_FRAME_LEN
 	 * driver support, the packet will be ignored.
 	 */
-	if (unlikely(rte_pktmbuf_pkt_len(tx_pkt) > HNS3_MAX_FRAME_LEN))
+	if (unlikely(rte_pktmbuf_pkt_len(tx_pkt) > HNS3_MAX_FRAME_LEN)) {
+		txq->over_length_pkt_cnt++;
 		return -EINVAL;
+	}
 
 	if (unlikely(nb_buf > HNS3_MAX_NON_TSO_BD_PER_PKT)) {
+		txq->exceed_limit_bd_pkt_cnt++;
 		ret = hns3_reassemble_tx_pkts(txq, tx_pkt, &new_pkt);
-		if (ret)
+		if (ret) {
+			txq->exceed_limit_bd_reassem_fail++;
 			return ret;
+		}
 		*m_seg = new_pkt;
 	}
 
@@ -2511,6 +2524,7 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nb_buf = tx_pkt->nb_segs;
 
 		if (nb_buf > txq->tx_bd_ready) {
+			txq->queue_full_cnt++;
 			if (nb_tx == 0)
 				return 0;
 
@@ -2528,8 +2542,10 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			add_len = HNS3_MIN_PKT_SIZE -
 					 rte_pktmbuf_pkt_len(tx_pkt);
 			appended = rte_pktmbuf_append(tx_pkt, add_len);
-			if (appended == NULL)
+			if (appended == NULL) {
+				txq->pkt_padding_fail_cnt++;
 				break;
+			}
 
 			memset(appended, 0, add_len);
 		}
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 1fd1afd1d..ee4514290 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -278,6 +278,54 @@ struct hns3_tx_queue {
 
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if tx queue has been configured */
+
+	/*
+	 * The following items are used for the abnormal errors statistics in
+	 * the Tx datapath. When upper level application calls the
+	 * rte_eth_tx_burst API function to send multiple packets at a time with
+	 * burst mode based on hns3 network engine, there are some abnormal
+	 * conditions that cause the driver to fail to operate the hardware to
+	 * send packets correctly.
+	 * Note: When using burst mode to call the rte_eth_tx_burst API function
+	 * to send multiple packets at a time. When the first abnormal error is
+	 * detected, add one to the relevant error statistics item, and then
+	 * exit the loop of sending multiple packets of the function. That is to
+	 * say, even if there are multiple packets in which abnormal errors may
+	 * be detected in the burst, the relevant error statistics in the driver
+	 * will only be increased by one.
+	 * The detail description of the Tx abnormal errors statistic items as
+	 * below:
+	 *  - over_length_pkt_cnt
+	 *     Total number of greater than HNS3_MAX_FRAME_LEN the driver
+	 *     supported.
+	 *
+	 * - exceed_limit_bd_pkt_cnt
+	 *     Total number of exceeding the hardware limited bd which process
+	 *     a packet needed bd numbers.
+	 *
+	 * - exceed_limit_bd_reassem_fail
+	 *     Total number of exceeding the hardware limited bd fail which
+	 *     process a packet needed bd numbers and reassemble fail.
+	 *
+	 * - unsupported_tunnel_pkt_cnt
+	 *     Total number of unsupported tunnel packet. The unsupported tunnel
+	 *     type: vxlan_gpe, gtp, ipip and MPLSINUDP, MPLSINUDP is a packet
+	 *     with MPLS-in-UDP RFC 7510 header.
+	 *
+	 * - queue_full_cnt
+	 *     Total count which the available bd numbers in current bd queue is
+	 *     less than the bd numbers with the pkt process needed.
+	 *
+	 * - pkt_padding_fail_cnt
+	 *     Total count which the packet length is less than minimum packet
+	 *     size HNS3_MIN_PKT_SIZE and fail to be appended with 0.
+	 */
+	uint64_t over_length_pkt_cnt;
+	uint64_t exceed_limit_bd_pkt_cnt;
+	uint64_t exceed_limit_bd_reassem_fail;
+	uint64_t unsupported_tunnel_pkt_cnt;
+	uint64_t queue_full_cnt;
+	uint64_t pkt_padding_fail_cnt;
 };
 
 struct hns3_queue_info {
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index e03ffbbd6..d2467a484 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -234,6 +234,22 @@ static const struct hns3_xstats_name_offset hns3_rx_bd_error_strings[] = {
 		HNS3_RX_BD_ERROR_STATS_FIELD_OFFSET(ol4_csum_erros)}
 };
 
+/* The statistic of the Tx errors */
+static const struct hns3_xstats_name_offset hns3_tx_errors_strings[] = {
+	{"TX_OVER_LENGTH_PKT_CNT",
+		HNS3_TX_ERROR_STATS_FIELD_OFFSET(over_length_pkt_cnt)},
+	{"TX_EXCEED_LIMITED_BD_PKT_CNT",
+		HNS3_TX_ERROR_STATS_FIELD_OFFSET(exceed_limit_bd_pkt_cnt)},
+	{"TX_EXCEED_LIMITED_BD_PKT_REASSEMBLE_FAIL_CNT",
+		HNS3_TX_ERROR_STATS_FIELD_OFFSET(exceed_limit_bd_reassem_fail)},
+	{"TX_UNSUPPORTED_TUNNEL_PKT_CNT",
+		HNS3_TX_ERROR_STATS_FIELD_OFFSET(unsupported_tunnel_pkt_cnt)},
+	{"TX_QUEUE_FULL_CNT",
+		HNS3_TX_ERROR_STATS_FIELD_OFFSET(queue_full_cnt)},
+	{"TX_SHORT_PKT_PAD_FAIL_CNT",
+		HNS3_TX_ERROR_STATS_FIELD_OFFSET(pkt_padding_fail_cnt)}
+};
+
 /* The statistic of rx queue */
 static const struct hns3_xstats_name_offset hns3_rx_queue_strings[] = {
 	{"RX_QUEUE_FBD", HNS3_RING_RX_FBDNUM_REG}
@@ -256,6 +272,9 @@ static const struct hns3_xstats_name_offset hns3_tx_queue_strings[] = {
 #define HNS3_NUM_RX_BD_ERROR_XSTATS (sizeof(hns3_rx_bd_error_strings) / \
 	sizeof(hns3_rx_bd_error_strings[0]))
 
+#define HNS3_NUM_TX_ERRORS_XSTATS (sizeof(hns3_tx_errors_strings) / \
+	sizeof(hns3_tx_errors_strings[0]))
+
 #define HNS3_NUM_RX_QUEUE_STATS (sizeof(hns3_rx_queue_strings) / \
 	sizeof(hns3_rx_queue_strings[0]))
 
@@ -491,6 +510,7 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev)
 	struct hns3_tqp_stats *stats = &hw->tqp_stats;
 	struct hns3_cmd_desc desc_reset;
 	struct hns3_rx_queue *rxq;
+	struct hns3_tx_queue *txq;
 	uint16_t i;
 	int ret;
 
@@ -522,7 +542,7 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev)
 		}
 	}
 
-	/* Clear Rx BD and Tx error stats */
+	/* Clear the Rx BD errors stats */
 	for (i = 0; i != eth_dev->data->nb_rx_queues; ++i) {
 		rxq = eth_dev->data->rx_queues[i];
 		if (rxq) {
@@ -535,6 +555,19 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev)
 		}
 	}
 
+	/* Clear the Tx errors stats */
+	for (i = 0; i != eth_dev->data->nb_tx_queues; ++i) {
+		txq = eth_dev->data->tx_queues[i];
+		if (txq) {
+			txq->over_length_pkt_cnt = 0;
+			txq->exceed_limit_bd_pkt_cnt = 0;
+			txq->exceed_limit_bd_reassem_fail = 0;
+			txq->unsupported_tunnel_pkt_cnt = 0;
+			txq->queue_full_cnt = 0;
+			txq->pkt_padding_fail_cnt = 0;
+		}
+	}
+
 	memset(stats, 0, sizeof(struct hns3_tqp_stats));
 
 	return 0;
@@ -565,15 +598,51 @@ hns3_xstats_calc_num(struct rte_eth_dev *dev)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
 	int bderr_stats = dev->data->nb_rx_queues * HNS3_NUM_RX_BD_ERROR_XSTATS;
+	int tx_err_stats = dev->data->nb_tx_queues * HNS3_NUM_TX_ERRORS_XSTATS;
 	int rx_queue_stats = dev->data->nb_rx_queues * HNS3_NUM_RX_QUEUE_STATS;
 	int tx_queue_stats = dev->data->nb_tx_queues * HNS3_NUM_TX_QUEUE_STATS;
 
 	if (hns->is_vf)
-		return bderr_stats + rx_queue_stats + tx_queue_stats +
-			HNS3_NUM_RESET_XSTATS;
+		return bderr_stats + tx_err_stats + rx_queue_stats +
+		       tx_queue_stats + HNS3_NUM_RESET_XSTATS;
 	else
-		return bderr_stats + rx_queue_stats + tx_queue_stats +
-			HNS3_FIX_NUM_STATS;
+		return bderr_stats + tx_err_stats + rx_queue_stats +
+		       tx_queue_stats + HNS3_FIX_NUM_STATS;
+}
+
+static void
+hns3_get_queue_stats(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		     int *count)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	uint32_t reg_offset;
+	uint16_t i, j;
+
+	/* Get rx queue stats */
+	for (j = 0; j < dev->data->nb_rx_queues; j++) {
+		for (i = 0; i < HNS3_NUM_RX_QUEUE_STATS; i++) {
+			reg_offset = HNS3_TQP_REG_OFFSET +
+					HNS3_TQP_REG_SIZE * j;
+			xstats[*count].value = hns3_read_dev(hw,
+				reg_offset + hns3_rx_queue_strings[i].offset);
+			xstats[*count].id = *count;
+			(*count)++;
+		}
+	}
+
+	/* Get tx queue stats */
+	for (j = 0; j < dev->data->nb_tx_queues; j++) {
+		for (i = 0; i < HNS3_NUM_TX_QUEUE_STATS; i++) {
+			reg_offset = HNS3_TQP_REG_OFFSET +
+					HNS3_TQP_REG_SIZE * j;
+			xstats[*count].value = hns3_read_dev(hw,
+				reg_offset + hns3_tx_queue_strings[i].offset);
+			xstats[*count].id = *count;
+			(*count)++;
+		}
+	}
+
 }
 
 /*
@@ -599,7 +668,7 @@ hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	struct hns3_mac_stats *mac_stats = &hw->mac_stats;
 	struct hns3_reset_stats *reset_stats = &hw->reset.stats;
 	struct hns3_rx_queue *rxq;
-	uint32_t reg_offset;
+	struct hns3_tx_queue *txq;
 	uint16_t i, j;
 	char *addr;
 	int count;
@@ -658,30 +727,18 @@ hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		}
 	}
 
-	/* Get rx queue stats */
-	for (j = 0; j < dev->data->nb_rx_queues; j++) {
-		for (i = 0; i < HNS3_NUM_RX_QUEUE_STATS; i++) {
-			reg_offset = HNS3_TQP_REG_OFFSET +
-					HNS3_TQP_REG_SIZE * j;
-			xstats[count].value = hns3_read_dev(hw,
-				reg_offset + hns3_rx_queue_strings[i].offset);
-			xstats[count].id = count;
-			count++;
-		}
-	}
-
-	/* Get tx queue stats */
+	/* Get the Tx errors stats */
 	for (j = 0; j < dev->data->nb_tx_queues; j++) {
-		for (i = 0; i < HNS3_NUM_TX_QUEUE_STATS; i++) {
-			reg_offset = HNS3_TQP_REG_OFFSET +
-					HNS3_TQP_REG_SIZE * j;
-			xstats[count].value = hns3_read_dev(hw,
-				reg_offset + hns3_tx_queue_strings[i].offset);
+		for (i = 0; i < HNS3_NUM_TX_ERRORS_XSTATS; i++) {
+			txq = dev->data->tx_queues[j];
+			addr = (char *)txq + hns3_tx_errors_strings[i].offset;
+			xstats[count].value = *(uint64_t *)addr;
 			xstats[count].id = count;
 			count++;
 		}
 	}
 
+	hns3_get_queue_stats(dev, xstats, &count);
 	return count;
 }
 
@@ -756,6 +813,16 @@ hns3_dev_xstats_get_names(struct rte_eth_dev *dev,
 		}
 	}
 
+	for (j = 0; j < dev->data->nb_tx_queues; j++) {
+		for (i = 0; i < HNS3_NUM_TX_ERRORS_XSTATS; i++) {
+			snprintf(xstats_names[count].name,
+				 sizeof(xstats_names[count].name),
+				 "tx_q%u%s", j,
+				 hns3_tx_errors_strings[i].name);
+			count++;
+		}
+	}
+
 	for (j = 0; j < dev->data->nb_rx_queues; j++) {
 		for (i = 0; i < HNS3_NUM_RX_QUEUE_STATS; i++) {
 			snprintf(xstats_names[count].name,
-- 
2.27.0


Thread overview: 49+ messages
2020-08-17  9:25 [dpdk-stable] [PATCH 00/24] backport for 19.11.4 Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 01/24] net/hns3: get link status change through mailbox Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 02/24] net/hns3: optimize default RSS algorithm Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 03/24] net/hns3: support setting VF MAC address by PF driver Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 04/24] net/hns3: remove unnecessary branch Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 05/24] net/hns3: support TSO Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 06/24] net/hns3: remove restriction on setting VF MTU Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 07/24] net/hns3: support promiscuous and allmulticast mode for VF Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 08/24] net/hns3: fix adding multicast MAC address Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 09/24] net/hns3: get Rx/Tx queue fbd in xstats Wei Hu (Xavier)
2020-08-17  9:25 ` Wei Hu (Xavier) [this message]
2020-08-17  9:25 ` [dpdk-stable] [PATCH 11/24] net/hns3: get PCI revision ID Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 12/24] net/hns3: check TSO segment size during Tx Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 13/24] net/hns3: support symmetric RSS Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 14/24] net/hns3: support LRO Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 15/24] net/hns3: decrease non-nearby memory access in Rx Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 16/24] net/hns3: support setting VF PVID by PF driver Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 17/24] net/hns3: get device capability in primary process Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 18/24] net/hns3: report Tx descriptor segment limitations Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 19/24] net/hns3: cleanup duplicated code on processing TSO in Tx Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 20/24] net/hns3: support copper media type Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 21/24] net/hns3: fix reassembling multiple segment packets in Tx Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 22/24] net/hns3: fix inserted VLAN tag position " Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 23/24] app/testpmd: remove hardcoded descriptors limit Wei Hu (Xavier)
2020-08-17  9:25 ` [dpdk-stable] [PATCH 24/24] net/bonding: change state machine to defaulted Wei Hu (Xavier)
2020-08-17  9:51 ` [dpdk-stable] [PATCH 00/24] backport for 19.11.4 Luca Boccassi
2020-08-17 11:54   ` Wei Hu (Xavier)
2020-08-17 13:42     ` Luca Boccassi
2020-08-18  3:25       ` Wei Hu (Xavier)
2020-08-18  6:49 ` [dpdk-stable] [PATCH v2 00/10] " Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 01/10] net/hns3: get link status change through mailbox Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 02/10] net/hns3: optimize default RSS algorithm Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 03/10] net/hns3: remove unnecessary branch Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 04/10] net/hns3: remove restriction on setting VF MTU Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 05/10] net/hns3: fix adding multicast MAC address Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 06/10] net/hns3: get device capability in primary process Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 07/10] net/hns3: report Tx descriptor segment limitations Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 08/10] net/hns3: fix reassembling multiple segment packets in Tx Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 09/10] app/testpmd: remove hardcoded descriptors limit Wei Hu (Xavier)
2020-08-18  6:49   ` [dpdk-stable] [PATCH v2 10/10] net/bonding: change state machine to defaulted Wei Hu (Xavier)
2020-08-18  7:15 ` [dpdk-stable] [PATCH v3 0/7] backport for 19.11.4 Wei Hu (Xavier)
2020-08-18  7:15   ` [dpdk-stable] [PATCH v3 1/7] net/hns3: get link status change through mailbox Wei Hu (Xavier)
2020-08-18  7:15   ` [dpdk-stable] [PATCH v3 2/7] net/hns3: optimize default RSS algorithm Wei Hu (Xavier)
2020-08-18  7:15   ` [dpdk-stable] [PATCH v3 3/7] net/hns3: remove unnecessary branch Wei Hu (Xavier)
2020-08-18  7:15   ` [dpdk-stable] [PATCH v3 4/7] net/hns3: remove restriction on setting VF MTU Wei Hu (Xavier)
2020-08-18  7:15   ` [dpdk-stable] [PATCH v3 5/7] net/hns3: fix adding multicast MAC address Wei Hu (Xavier)
2020-08-18  7:15   ` [dpdk-stable] [PATCH v3 6/7] app/testpmd: remove hardcoded descriptors limit Wei Hu (Xavier)
2020-08-18  7:15   ` [dpdk-stable] [PATCH v3 7/7] net/bonding: change state machine to defaulted Wei Hu (Xavier)
2020-08-18 18:00   ` [dpdk-stable] [PATCH v3 0/7] backport for 19.11.4 Luca Boccassi
