DPDK patches and discussions
* [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver
@ 2020-09-22  8:53 Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
                   ` (17 more replies)
  0 siblings, 18 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

This series includes updates for the hns3 PMD driver.

Chengchang Tang (2):
  net/hns3: fix default VLAN won't be deleted when set PF PVID
  net/hns3: add default branch to switch in Rx VLAN processing

Chengwen Feng (1):
  net/hns3: support flow action of queue region

Hongbo Zheng (3):
  net/hns3: add max number of segs compatibility
  net/hns3: avoid accessing nonexistent VF reg when PF in FLR
  net/hns3: add break to exit loop when err stat item found

Lijun Ou (5):
  net/hns3: support querying RSS flow rule
  net/hns3: check input RSS type when creating flow with RSS
  net/hns3: set RSS hash type input configuration
  net/hns3: fix config when creating RSS rule after flush
  net/hns3: fix flushing RSS rule

Wei Hu (Xavier) (6):
  net/hns3: add VLAN configuration compatibility
  net/hns3: add TSO pseudo header calculation compatibility
  net/hns3: add default branch to switch when parsing fd tuple
  net/hns3: fix flow RSS queue num with 0
  net/hns3: fix configuring device with RSS is enabled
  net/hns3: fix storing RSS info when creating flow action

 drivers/net/hns3/hns3_cmd.c       |   5 +-
 drivers/net/hns3/hns3_cmd.h       |  14 +-
 drivers/net/hns3/hns3_ethdev.c    | 133 ++++++++++--------
 drivers/net/hns3/hns3_ethdev.h    |  96 +++++++++++--
 drivers/net/hns3/hns3_ethdev_vf.c |  20 ++-
 drivers/net/hns3/hns3_fdir.c      |  43 ++++--
 drivers/net/hns3/hns3_fdir.h      |  23 +++-
 drivers/net/hns3/hns3_flow.c      | 282 +++++++++++++++++++++++++++++---------
 drivers/net/hns3/hns3_mbx.c       |   2 +-
 drivers/net/hns3/hns3_rss.c       | 244 +++++++++++++++++++++++++--------
 drivers/net/hns3/hns3_rss.h       |  21 +--
 drivers/net/hns3/hns3_rxtx.c      | 151 +++++++++++++-------
 drivers/net/hns3/hns3_rxtx.h      |  53 +++++--
 drivers/net/hns3/hns3_stats.c     |   1 +
 14 files changed, 797 insertions(+), 291 deletions(-)

-- 
2.9.5



* [dpdk-dev] [PATCH 01/17] net/hns3: add VLAN configuration compatibility
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID Wei Hu (Xavier)
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Because of hardware limitations in the old versions of the hns3 network
engine, there are some restrictions:
a) The hns3 PMD driver has to select different VLAN processing modes
   depending on whether PVID is set, which means the driver must track
   the PVID state.
b) In the Tx direction, only two layers of VLAN tags are supported. If
   the total number of VLAN tags in the mbuf plus the VLAN inserted by
   hardware (VLAN insertion by descriptor) exceeds two, the VLAN in the
   mbuf is overwritten by the VLAN in the descriptor.
c) If a port based VLAN is set, only one VLAN header is allowed in the
   mbuf, otherwise the packet is discarded by hardware.

To remove these restrictions, two changes are implemented in the new
version of the network engine:
1) a new VLAN tag insertion mode, named tag shift mode;
2) a new VLAN strip control bit, named strip hide enable.

In tag shift mode, the VLAN tag is shifted automatically when the
insertion position already holds a tag. The PMD driver therefore no
longer needs to handle the VLAN tag1 and tag2 configuration on the Tx
side, because the hardware completes it. The related configuration is
still retained for compatibility with the old version of the network
engine.

VLAN strip hide means that the hardware strips the VLAN tag and hides
the VLAN in the descriptor (the VLAN ID is exposed as zero and the
related STRIP_TAGP field is off).

With these changes, the hns3 PMD driver no longer needs to be aware of
the PVID status and is able to send multi-layer (more than two) VLAN
packets. Therefore, the hns3 PMD driver introduces the concept of a VLAN
mode and adds a new mode, HNS3_HW_SHIFT_AND_DISCARD_MODE, to indicate
that the PVID-related I/O processing is implemented by the hardware. The
VF driver does not need to be modified because, on the new network
engine, the related mailbox messages are not sent by the PF kernel mode
netdev driver and all the related hardware configuration is on the PF
side.
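
For reference, a minimal usage sketch (not part of this patch; the port
id and PVID value are placeholders) of the application-level call whose
handling differs between the two modes:

    #include <stdint.h>
    #include <rte_ethdev.h>

    /*
     * Hypothetical example: enable a port based VLAN (PVID) on a port.
     * In HNS3_SW_SHIFT_AND_DISCARD_MODE (Kunpeng 920) the PMD driver has
     * to mirror the resulting PVID state into every Rx/Tx queue; in
     * HNS3_HW_SHIFT_AND_DISCARD_MODE the hardware handles it on its own.
     */
    static int
    enable_port_pvid(uint16_t port_id, uint16_t pvid)
    {
            /* Insert/strip the given PVID on all traffic of this port. */
            return rte_eth_dev_set_vlan_pvid(port_id, pvid, 1);
    }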

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_cmd.h    |  3 +++
 drivers/net/hns3/hns3_ethdev.c | 33 ++++++++++++++++++++----
 drivers/net/hns3/hns3_ethdev.h | 58 +++++++++++++++++++++++++++++++++++-------
 drivers/net/hns3/hns3_mbx.c    |  2 +-
 drivers/net/hns3/hns3_rxtx.c   | 48 ++++++++++++++++++++++++++--------
 drivers/net/hns3/hns3_rxtx.h   | 39 ++++++++++++++++++----------
 6 files changed, 144 insertions(+), 39 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 87d6053..629b114 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -449,11 +449,14 @@ struct hns3_umv_spc_alc_cmd {
 #define HNS3_CFG_NIC_ROCE_SEL_B		4
 #define HNS3_ACCEPT_TAG2_B		5
 #define HNS3_ACCEPT_UNTAG2_B		6
+#define HNS3_TAG_SHIFT_MODE_EN_B	7
 
 #define HNS3_REM_TAG1_EN_B		0
 #define HNS3_REM_TAG2_EN_B		1
 #define HNS3_SHOW_TAG1_EN_B		2
 #define HNS3_SHOW_TAG2_EN_B		3
+#define HNS3_DISCARD_TAG1_EN_B		5
+#define HNS3_DISCARD_TAG2_EN_B		6
 
 /* Factor used to calculate offset and bitmap of VF num */
 #define HNS3_VF_NUM_PER_CMD             64
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 73d5042..3e98df3 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -477,6 +477,11 @@ hns3_set_vlan_rx_offload_cfg(struct hns3_adapter *hns,
 	hns3_set_bit(req->vport_vlan_cfg, HNS3_SHOW_TAG2_EN_B,
 		     vcfg->vlan2_vlan_prionly ? 1 : 0);
 
+	/* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG1_EN_B,
+		     vcfg->strip_tag1_discard_en ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG2_EN_B,
+		     vcfg->strip_tag2_discard_en ? 1 : 0);
 	/*
 	 * In current version VF is not supported when PF is driven by DPDK
 	 * driver, just need to configure parameters for PF vport.
@@ -518,11 +523,14 @@ hns3_en_hw_strip_rxvtag(struct hns3_adapter *hns, bool enable)
 	if (hw->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_DISABLE) {
 		rxvlan_cfg.strip_tag1_en = false;
 		rxvlan_cfg.strip_tag2_en = enable;
+		rxvlan_cfg.strip_tag2_discard_en = false;
 	} else {
 		rxvlan_cfg.strip_tag1_en = enable;
 		rxvlan_cfg.strip_tag2_en = true;
+		rxvlan_cfg.strip_tag2_discard_en = true;
 	}
 
+	rxvlan_cfg.strip_tag1_discard_en = false;
 	rxvlan_cfg.vlan1_vlan_prionly = false;
 	rxvlan_cfg.vlan2_vlan_prionly = false;
 	rxvlan_cfg.rx_vlan_offload_en = enable;
@@ -678,6 +686,10 @@ hns3_set_vlan_tx_offload_cfg(struct hns3_adapter *hns,
 		     vcfg->insert_tag2_en ? 1 : 0);
 	hns3_set_bit(req->vport_vlan_cfg, HNS3_CFG_NIC_ROCE_SEL_B, 0);
 
+	/* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_TAG_SHIFT_MODE_EN_B,
+		     vcfg->tag_shift_mode_en ? 1 : 0);
+
 	/*
 	 * In current version VF is not supported when PF is driven by DPDK
 	 * driver, just need to configure parameters for PF vport.
@@ -707,7 +719,8 @@ hns3_vlan_txvlan_cfg(struct hns3_adapter *hns, uint16_t port_base_vlan_state,
 		txvlan_cfg.insert_tag1_en = false;
 		txvlan_cfg.default_tag1 = 0;
 	} else {
-		txvlan_cfg.accept_tag1 = false;
+		txvlan_cfg.accept_tag1 =
+			hw->vlan_mode == HNS3_HW_SHIFT_AND_DISCARD_MODE;
 		txvlan_cfg.insert_tag1_en = true;
 		txvlan_cfg.default_tag1 = pvid;
 	}
@@ -717,6 +730,7 @@ hns3_vlan_txvlan_cfg(struct hns3_adapter *hns, uint16_t port_base_vlan_state,
 	txvlan_cfg.accept_untag2 = true;
 	txvlan_cfg.insert_tag2_en = false;
 	txvlan_cfg.default_tag2 = 0;
+	txvlan_cfg.tag_shift_mode_en = true;
 
 	ret = hns3_set_vlan_tx_offload_cfg(hns, &txvlan_cfg);
 	if (ret) {
@@ -841,14 +855,17 @@ hns3_en_pvid_strip(struct hns3_adapter *hns, int on)
 	bool rx_strip_en;
 	int ret;
 
-	rx_strip_en = old_cfg->rx_vlan_offload_en ? true : false;
+	rx_strip_en = old_cfg->rx_vlan_offload_en;
 	if (on) {
 		rx_vlan_cfg.strip_tag1_en = rx_strip_en;
 		rx_vlan_cfg.strip_tag2_en = true;
+		rx_vlan_cfg.strip_tag2_discard_en = true;
 	} else {
 		rx_vlan_cfg.strip_tag1_en = false;
 		rx_vlan_cfg.strip_tag2_en = rx_strip_en;
+		rx_vlan_cfg.strip_tag2_discard_en = false;
 	}
+	rx_vlan_cfg.strip_tag1_discard_en = false;
 	rx_vlan_cfg.vlan1_vlan_prionly = false;
 	rx_vlan_cfg.vlan2_vlan_prionly = false;
 	rx_vlan_cfg.rx_vlan_offload_en = old_cfg->rx_vlan_offload_en;
@@ -940,9 +957,13 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
 	rte_spinlock_unlock(&hw->lock);
 	if (ret)
 		return ret;
-
-	if (pvid_en_state_change)
-		hns3_update_all_queues_pvid_state(hw);
+	/*
+	 * Only in HNS3_SW_SHIFT_AND_DISCARD_MODE does the PVID-related
+	 * operation in Tx/Rx need to be processed by the PMD driver.
+	 */
+	if (pvid_en_state_change &&
+	    hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
+		hns3_update_all_queues_pvid_proc_en(hw);
 
 	return 0;
 }
@@ -2904,6 +2925,7 @@ hns3_get_capability(struct hns3_hw *hw)
 		hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_RSV_ONE;
 		hw->intr.coalesce_mode = HNS3_INTR_COALESCE_NON_QL;
 		hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_2US;
+		hw->vlan_mode = HNS3_SW_SHIFT_AND_DISCARD_MODE;
 		hw->min_tx_pkt_len = HNS3_HIP08_MIN_TX_PKT_LEN;
 		return 0;
 	}
@@ -2919,6 +2941,7 @@ hns3_get_capability(struct hns3_hw *hw)
 	hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_ALL;
 	hw->intr.coalesce_mode = HNS3_INTR_COALESCE_QL;
 	hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_1US;
+	hw->vlan_mode = HNS3_HW_SHIFT_AND_DISCARD_MODE;
 	hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN;
 
 	return 0;
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index fd6a9f9..a70223f 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -37,6 +37,9 @@
 #define HNS3_PF_FUNC_ID			0
 #define HNS3_1ST_VF_FUNC_ID		1
 
+#define HNS3_SW_SHIFT_AND_DISCARD_MODE		0
+#define HNS3_HW_SHIFT_AND_DISCARD_MODE		1
+
 #define HNS3_UC_MACADDR_NUM		128
 #define HNS3_VF_UC_MACADDR_NUM		48
 #define HNS3_MC_MACADDR_NUM		128
@@ -474,6 +477,28 @@ struct hns3_hw {
 
 	struct hns3_queue_intr intr;
 
+	/*
+	 * vlan mode.
+	 * value range:
+	 *      HNS3_SW_SHIFT_AND_DISCARD_MODE/HNS3_HW_SHIFT_AND_DISCARD_MODE
+	 *
+	 *  - HNS3_SW_SHIFT_AND_DISCARD_MODE
+	 *     For some versions of the hardware network engine, because of a
+	 *     hardware limitation, the PMD driver needs to detect the PVID
+	 *     status and work with the hardware to implement PVID-related
+	 *     functions. For example, the driver must discard the stripped
+	 *     PVID tag so that the PVID is not reported to the mbuf, and shift
+	 *     the inserted VLAN tag to avoid the port based VLAN covering it.
+	 *
+	 *  - HNS3_HW_SHIFT_AND_DISCARD_MODE
+	 *     The PMD driver does not need to process PVID-related functions
+	 *     in the I/O path. The hardware adjusts the order of the port
+	 *     based VLAN tag and the BD VLAN tag automatically, and the VLAN
+	 *     tag stripped by PVID is invisible to the driver. In this mode,
+	 *     hns3 is able to send multi-layer VLAN packets when the hw VLAN
+	 *     insert offload is enabled.
+	 */
+	uint8_t vlan_mode;
 	uint8_t max_non_tso_bd_num; /* max BD number of one non-TSO packet */
 
 	struct hns3_port_base_vlan_config port_base_vlan_cfg;
@@ -532,11 +557,21 @@ struct hns3_user_vlan_table {
 
 /* Vlan tag configuration for RX direction */
 struct hns3_rx_vtag_cfg {
-	uint8_t rx_vlan_offload_en; /* Whether enable rx vlan offload */
-	uint8_t strip_tag1_en;      /* Whether strip inner vlan tag */
-	uint8_t strip_tag2_en;      /* Whether strip outer vlan tag */
-	uint8_t vlan1_vlan_prionly; /* Inner VLAN Tag up to descriptor Enable */
-	uint8_t vlan2_vlan_prionly; /* Outer VLAN Tag up to descriptor Enable */
+	bool rx_vlan_offload_en;    /* Whether enable rx vlan offload */
+	bool strip_tag1_en;         /* Whether strip inner vlan tag */
+	bool strip_tag2_en;         /* Whether strip outer vlan tag */
+	/*
+	 * If strip_tag_en is enabled, this bit decides whether to map the vlan
+	 * tag to descriptor.
+	 */
+	bool strip_tag1_discard_en;
+	bool strip_tag2_discard_en;
+	/*
+	 * If this bit is enabled, only map inner/outer priority to descriptor
+	 * and the vlan tag is always 0.
+	 */
+	bool vlan1_vlan_prionly;
+	bool vlan2_vlan_prionly;
 };
 
 /* Vlan tag configuration for TX direction */
@@ -545,10 +580,15 @@ struct hns3_tx_vtag_cfg {
 	bool accept_untag1;         /* Whether accept untag1 packet from host */
 	bool accept_tag2;
 	bool accept_untag2;
-	bool insert_tag1_en;        /* Whether insert inner vlan tag */
-	bool insert_tag2_en;        /* Whether insert outer vlan tag */
-	uint16_t default_tag1;      /* The default inner vlan tag to insert */
-	uint16_t default_tag2;      /* The default outer vlan tag to insert */
+	bool insert_tag1_en;        /* Whether insert outer vlan tag */
+	bool insert_tag2_en;        /* Whether insert inner vlan tag */
+	/*
+	 * In shift mode, hw will shift the sequence of port based VLAN and
+	 * BD VLAN.
+	 */
+	bool tag_shift_mode_en;     /* hw shift vlan tag automatically */
+	uint16_t default_tag1;      /* The default outer vlan tag to insert */
+	uint16_t default_tag2;      /* The default inner vlan tag to insert */
 };
 
 struct hns3_vtag_cfg {
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 2510582..305007a 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -347,7 +347,7 @@ hns3_update_port_base_vlan_info(struct hns3_hw *hw,
 	 */
 	if (hw->port_base_vlan_cfg.state != new_pvid_state) {
 		hw->port_base_vlan_cfg.state = new_pvid_state;
-		hns3_update_all_queues_pvid_state(hw);
+		hns3_update_all_queues_pvid_proc_en(hw);
 	}
 }
 
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 816c39c..cf55d94 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -339,26 +339,26 @@ hns3_init_tx_queue_hw(struct hns3_tx_queue *txq)
 }
 
 void
-hns3_update_all_queues_pvid_state(struct hns3_hw *hw)
+hns3_update_all_queues_pvid_proc_en(struct hns3_hw *hw)
 {
 	uint16_t nb_rx_q = hw->data->nb_rx_queues;
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	struct hns3_rx_queue *rxq;
 	struct hns3_tx_queue *txq;
-	int pvid_state;
+	bool pvid_en;
 	int i;
 
-	pvid_state = hw->port_base_vlan_cfg.state;
+	pvid_en = hw->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_ENABLE;
 	for (i = 0; i < hw->cfg_max_queues; i++) {
 		if (i < nb_rx_q) {
 			rxq = hw->data->rx_queues[i];
 			if (rxq != NULL)
-				rxq->pvid_state = pvid_state;
+				rxq->pvid_sw_discard_en = pvid_en;
 		}
 		if (i < nb_tx_q) {
 			txq = hw->data->tx_queues[i];
 			if (txq != NULL)
-				txq->pvid_state = pvid_state;
+				txq->pvid_sw_shift_en = pvid_en;
 		}
 	}
 }
@@ -1405,7 +1405,20 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	rxq->pkt_first_seg = NULL;
 	rxq->pkt_last_seg = NULL;
 	rxq->port_id = dev->data->port_id;
-	rxq->pvid_state = hw->port_base_vlan_cfg.state;
+	/*
+	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
+	 * the pvid_sw_discard_en in the queue struct should not be changed,
+	 * because PVID-related operations do not need to be processed by PMD
+	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * on the configuration of PF kernel mode netdevice driver. And the
+	 * related PF configuration is delivered through the mailbox and finally
+	 * reflected in port_base_vlan_cfg.
+	 */
+	if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
+		rxq->pvid_sw_discard_en = hw->port_base_vlan_cfg.state ==
+				       HNS3_PORT_BASE_VLAN_ENABLE;
+	else
+		rxq->pvid_sw_discard_en = false;
 	rxq->configured = true;
 	rxq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
@@ -1592,7 +1605,7 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 	};
 	strip_status = hns3_get_field(l234_info, HNS3_RXD_STRP_TAGP_M,
 				      HNS3_RXD_STRP_TAGP_S);
-	report_mode = report_type[rxq->pvid_state][strip_status];
+	report_mode = report_type[rxq->pvid_sw_discard_en][strip_status];
 	switch (report_mode) {
 	case HNS3_NO_STRP_VLAN_VLD:
 		mb->vlan_tci = 0;
@@ -2151,7 +2164,20 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	}
 
 	txq->port_id = dev->data->port_id;
-	txq->pvid_state = hw->port_base_vlan_cfg.state;
+	/*
+	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
+	 * the pvid_sw_shift_en in the queue struct should not be changed,
+	 * because PVID-related operations do not need to be processed by PMD
+	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * on the configuration of PF kernel mode netdev driver. And the
+	 * related PF configuration is delivered through the mailbox and finally
+	 * reflected in port_base_vlan_cfg.
+	 */
+	if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
+		txq->pvid_sw_shift_en = hw->port_base_vlan_cfg.state ==
+					HNS3_PORT_BASE_VLAN_ENABLE;
+	else
+		txq->pvid_sw_shift_en = false;
 	txq->configured = true;
 	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
@@ -2352,7 +2378,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_state && ol_flags & (PKT_TX_VLAN_PKT |
+	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN_PKT |
 				PKT_TX_QINQ_PKT)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
@@ -2365,7 +2391,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	}
 
 	if (ol_flags & PKT_TX_QINQ_PKT ||
-	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_state)) {
+	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
@@ -2791,7 +2817,7 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	struct rte_ether_hdr *eh;
 	struct rte_vlan_hdr *vh;
 
-	if (!txq->pvid_state)
+	if (!txq->pvid_sw_shift_en)
 		return 0;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 27041ab..476cfc2 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -273,7 +273,6 @@ struct hns3_rx_queue {
 	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
 	const struct rte_memzone *mz;
 	struct hns3_entry *sw_ring;
-
 	struct rte_mbuf *pkt_first_seg;
 	struct rte_mbuf *pkt_last_seg;
 
@@ -290,17 +289,24 @@ struct hns3_rx_queue {
 	uint16_t rx_free_hold;   /* num of BDs waited to passed to hardware */
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
-	/*
-	 * port based vlan configuration state.
-	 * value range: HNS3_PORT_BASE_VLAN_DISABLE / HNS3_PORT_BASE_VLAN_ENABLE
-	 */
-	uint16_t pvid_state;
 
 	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	bool rx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if rx queue has been configured */
+	/*
+	 * Indicate whether to ignore the outer VLAN field in the Rx BD reported
+	 * by the Hardware. Because the outer VLAN is the PVID if the PVID is
+	 * set for some version of hardware network engine whose vlan mode is
+	 * HNS3_SW_SHIFT_AND_DISCARD_MODE, such as kunpeng 920. And this VLAN
+	 * should not be transmitted to the upper-layer application. For hardware
+	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
+	 * such as kunpeng 930, PVID will not be reported to the BDs. So, PMD
+	 * driver does not need to perform PVID-related operation in Rx. At this
+	 * point, the pvid_sw_discard_en will be false.
+	 */
+	bool pvid_sw_discard_en;
 
 	uint64_t l2_errors;
 	uint64_t pkt_len_errors;
@@ -361,12 +367,6 @@ struct hns3_tx_queue {
 	struct rte_mbuf **free;
 
 	/*
-	 * port based vlan configuration state.
-	 * value range: HNS3_PORT_BASE_VLAN_DISABLE / HNS3_PORT_BASE_VLAN_ENABLE
-	 */
-	uint16_t pvid_state;
-
-	/*
 	 * The minimum length of the packet supported by hardware in the Tx
 	 * direction.
 	 */
@@ -374,6 +374,19 @@ struct hns3_tx_queue {
 
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if tx queue has been configured */
+	/*
+	 * Indicate whether to add the vlan_tci of the mbuf to the inner VLAN field
+	 * of Tx BD. Because the outer VLAN will always be the PVID when the
+	 * PVID is set and for some version of hardware network engine whose
+	 * vlan mode is HNS3_SW_SHIFT_AND_DISCARD_MODE, such as kunpeng 920, the
+	 * PVID will overwrite the outer VLAN field of Tx BD. For the hardware
+	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
+	 * such as kunpeng 930, if the PVID is set, the hardware will shift the
+	 * VLAN field automatically. So, PMD driver does not need to do
+	 * PVID-related operations in Tx. And pvid_sw_shift_en will be false at
+	 * this point.
+	 */
+	bool pvid_sw_shift_en;
 
 	/*
 	 * The following items are used for the abnormal errors statistics in
@@ -620,7 +633,7 @@ int hns3_set_fake_rx_or_tx_queues(struct rte_eth_dev *dev, uint16_t nb_rx_q,
 				  uint16_t nb_tx_q);
 int hns3_config_gro(struct hns3_hw *hw, bool en);
 int hns3_restore_gro_conf(struct hns3_hw *hw);
-void hns3_update_all_queues_pvid_state(struct hns3_hw *hw);
+void hns3_update_all_queues_pvid_proc_en(struct hns3_hw *hw);
 void hns3_rx_scattered_reset(struct rte_eth_dev *dev);
 void hns3_rx_scattered_calc(struct rte_eth_dev *dev);
 int hns3_rx_check_vec_support(struct rte_eth_dev *dev);
-- 
2.9.5



* [dpdk-dev] [PATCH 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 03/17] net/hns3: add default branch to switch in Rx VLAN processing Wei Hu (Xavier)
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Chengchang Tang <tangchengchang@huawei.com>

Currently, the default VLAN (VLAN id 0) is never deleted from the
hardware VLAN table of the hns3 PF device. As a result, even when a
non-zero PVID is set by calling rte_eth_dev_set_vlan_pvid on the hns3 PF
device, packets with VLAN 0 and packets without a VLAN are still
received by the PF driver in the Rx direction.

This patch removes the restriction that VLAN 0 cannot be deleted in the
PVID configuration path, so that packets without the PVID are filtered
once a PVID is set. It also adds VLAN 0 to the software list when
initializing the VLAN configuration, so that VLAN 0 is deleted from the
hardware VLAN table when the device is closed.
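
A minimal sketch (not part of this patch; the helper name is made up) of
the rte_eth_dev_vlan_filter() path that this change touches; deleting
VLAN 0 through this API is still silently skipped, because that would
also block untagged packets, and only the internal PVID path removes it:

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Hypothetical helper: add (on = 1) or remove (on = 0) a VLAN id in
     * the hardware VLAN filter table of an hns3 PF port.
     */
    static int
    set_vlan_filter_entry(uint16_t port_id, uint16_t vlan_id, int on)
    {
            return rte_eth_dev_vlan_filter(port_id, vlan_id, on);
    }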

Fixes: 411d23b9eafb ("net/hns3: support VLAN")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 95 +++++++++++++++++++-----------------------
 1 file changed, 43 insertions(+), 52 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 3e98df3..9ce382a 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -346,8 +346,9 @@ hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
 	int ret = 0;
 
 	/*
-	 * When vlan filter is enabled, hardware regards vlan id 0 as the entry
-	 * for normal packet, deleting vlan id 0 is not allowed.
+	 * When vlan filter is enabled, hardware regards packets without vlan
+	 * as packets with vlan 0. So, to receive packets without vlan, vlan id
+	 * 0 is not allowed to be removed by rte_eth_dev_vlan_filter.
 	 */
 	if (on == 0 && vlan_id == 0)
 		return 0;
@@ -364,7 +365,7 @@ hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
 		writen_to_tbl = true;
 	}
 
-	if (ret == 0 && vlan_id) {
+	if (ret == 0) {
 		if (on)
 			hns3_add_dev_vlan_table(hns, vlan_id, writen_to_tbl);
 		else
@@ -743,16 +744,6 @@ hns3_vlan_txvlan_cfg(struct hns3_adapter *hns, uint16_t port_base_vlan_state,
 	return ret;
 }
 
-static void
-hns3_store_port_base_vlan_info(struct hns3_adapter *hns, uint16_t pvid, int on)
-{
-	struct hns3_hw *hw = &hns->hw;
-
-	hw->port_base_vlan_cfg.state = on ?
-	    HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE;
-
-	hw->port_base_vlan_cfg.pvid = pvid;
-}
 
 static void
 hns3_rm_all_vlan_table(struct hns3_adapter *hns, bool is_del_list)
@@ -761,10 +752,10 @@ hns3_rm_all_vlan_table(struct hns3_adapter *hns, bool is_del_list)
 	struct hns3_pf *pf = &hns->pf;
 
 	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
-		if (vlan_entry->hd_tbl_status)
+		if (vlan_entry->hd_tbl_status) {
 			hns3_set_port_vlan_filter(hns, vlan_entry->vlan_id, 0);
-
-		vlan_entry->hd_tbl_status = false;
+			vlan_entry->hd_tbl_status = false;
+		}
 	}
 
 	if (is_del_list) {
@@ -784,10 +775,10 @@ hns3_add_all_vlan_table(struct hns3_adapter *hns)
 	struct hns3_pf *pf = &hns->pf;
 
 	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
-		if (!vlan_entry->hd_tbl_status)
+		if (!vlan_entry->hd_tbl_status) {
 			hns3_set_port_vlan_filter(hns, vlan_entry->vlan_id, 1);
-
-		vlan_entry->hd_tbl_status = true;
+			vlan_entry->hd_tbl_status = true;
+		}
 	}
 }
 
@@ -811,40 +802,41 @@ hns3_remove_all_vlan_table(struct hns3_adapter *hns)
 
 static int
 hns3_update_vlan_filter_entries(struct hns3_adapter *hns,
-				uint16_t port_base_vlan_state,
-				uint16_t new_pvid, uint16_t old_pvid)
+			uint16_t port_base_vlan_state, uint16_t new_pvid)
 {
 	struct hns3_hw *hw = &hns->hw;
-	int ret = 0;
+	uint16_t old_pvid;
+	int ret;
 
 	if (port_base_vlan_state == HNS3_PORT_BASE_VLAN_ENABLE) {
-		if (old_pvid != HNS3_INVLID_PVID && old_pvid != 0) {
+		old_pvid = hw->port_base_vlan_cfg.pvid;
+		if (old_pvid != HNS3_INVLID_PVID) {
 			ret = hns3_set_port_vlan_filter(hns, old_pvid, 0);
 			if (ret) {
-				hns3_err(hw,
-					 "Failed to clear clear old pvid filter, ret =%d",
-					 ret);
+				hns3_err(hw, "failed to remove old pvid %u, "
+						"ret = %d", old_pvid, ret);
 				return ret;
 			}
 		}
 
 		hns3_rm_all_vlan_table(hns, false);
-		return hns3_set_port_vlan_filter(hns, new_pvid, 1);
-	}
-
-	if (new_pvid != 0) {
+		ret = hns3_set_port_vlan_filter(hns, new_pvid, 1);
+		if (ret) {
+			hns3_err(hw, "failed to add new pvid %u, ret = %d",
+					new_pvid, ret);
+			return ret;
+		}
+	} else {
 		ret = hns3_set_port_vlan_filter(hns, new_pvid, 0);
 		if (ret) {
-			hns3_err(hw, "Failed to set port vlan filter, ret =%d",
-				 ret);
+			hns3_err(hw, "failed to remove pvid %u, ret = %d",
+					new_pvid, ret);
 			return ret;
 		}
-	}
 
-	if (new_pvid == hw->port_base_vlan_cfg.pvid)
 		hns3_add_all_vlan_table(hns);
-
-	return ret;
+	}
+	return 0;
 }
 
 static int
@@ -883,7 +875,6 @@ hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid, int on)
 {
 	struct hns3_hw *hw = &hns->hw;
 	uint16_t port_base_vlan_state;
-	uint16_t old_pvid;
 	int ret;
 
 	if (on == 0 && pvid != hw->port_base_vlan_cfg.pvid) {
@@ -912,17 +903,16 @@ hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid, int on)
 
 	if (pvid == HNS3_INVLID_PVID)
 		goto out;
-	old_pvid = hw->port_base_vlan_cfg.pvid;
-	ret = hns3_update_vlan_filter_entries(hns, port_base_vlan_state, pvid,
-					      old_pvid);
+	ret = hns3_update_vlan_filter_entries(hns, port_base_vlan_state, pvid);
 	if (ret) {
-		hns3_err(hw, "Failed to update vlan filter entries, ret =%d",
+		hns3_err(hw, "failed to update vlan filter entries, ret = %d",
 			 ret);
 		return ret;
 	}
 
 out:
-	hns3_store_port_base_vlan_info(hns, pvid, on);
+	hw->port_base_vlan_cfg.state = port_base_vlan_state;
+	hw->port_base_vlan_cfg.pvid = on ? pvid : HNS3_INVLID_PVID;
 	return ret;
 }
 
@@ -968,20 +958,19 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
 	return 0;
 }
 
-static void
-init_port_base_vlan_info(struct hns3_hw *hw)
-{
-	hw->port_base_vlan_cfg.state = HNS3_PORT_BASE_VLAN_DISABLE;
-	hw->port_base_vlan_cfg.pvid = HNS3_INVLID_PVID;
-}
-
 static int
 hns3_default_vlan_config(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	ret = hns3_set_port_vlan_filter(hns, 0, 1);
+	/*
+	 * When vlan filter is enabled, hardware regards packets without vlan
+	 * as packets with vlan 0. Therefore, if vlan 0 is not in the vlan
+	 * table, packets without vlan won't be received. So, add vlan 0 as
+	 * the default vlan.
+	 */
+	ret = hns3_vlan_filter_configure(hns, 0, 1);
 	if (ret)
 		hns3_err(hw, "default vlan 0 config failed, ret =%d", ret);
 	return ret;
@@ -1000,8 +989,10 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 	 * ensure that the hardware configuration remains unchanged before and
 	 * after reset.
 	 */
-	if (rte_atomic16_read(&hw->reset.resetting) == 0)
-		init_port_base_vlan_info(hw);
+	if (rte_atomic16_read(&hw->reset.resetting) == 0) {
+		hw->port_base_vlan_cfg.state = HNS3_PORT_BASE_VLAN_DISABLE;
+		hw->port_base_vlan_cfg.pvid = HNS3_INVLID_PVID;
+	}
 
 	ret = hns3_vlan_filter_init(hns);
 	if (ret) {
-- 
2.9.5



* [dpdk-dev] [PATCH 03/17] net/hns3: add default branch to switch in Rx VLAN processing
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 04/17] net/hns3: add max number of segs compatibility Wei Hu (Xavier)
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Chengchang Tang <tangchengchang@huawei.com>

This patch solves the following static check warning:
"The switch statement must have a 'default' branch".

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index cf55d94..6d02bad 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1618,6 +1618,9 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.ot_vlan_tag);
 		return;
+	default:
+		mb->vlan_tci = 0;
+		return;
 	}
 }
 
-- 
2.9.5



* [dpdk-dev] [PATCH 04/17] net/hns3: add max number of segs compatibility
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (2 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 03/17] net/hns3: add default branch to switch in Rx VLAN processing Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 05/17] net/hns3: add TSO pseudo header calculation compatibility Wei Hu (Xavier)
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Hongbo Zheng <zhenghongbo3@huawei.com>

On Kunpeng 920, the maximum nb_segs of a non-TSO packet in the Tx
direction is 8; Kunpeng 930 raises this limit to 18. This patch sets the
corresponding value by querying the maximum number of non-TSO segments
supported by the device during initialization.
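
A minimal sketch (not part of this patch; the helper name is made up) of
how an application can read this limit instead of hard-coding it:

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Hypothetical helper: query the per-packet segment limit that a
     * non-TSO packet must respect (8 on Kunpeng 920 and 18 on Kunpeng
     * 930 for hns3 after this series).
     */
    static uint16_t
    non_tso_seg_limit(uint16_t port_id)
    {
            struct rte_eth_dev_info dev_info;

            if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                    return 0;

            return dev_info.tx_desc_lim.nb_mtu_seg_max;
    }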

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    |  2 +-
 drivers/net/hns3/hns3_ethdev_vf.c |  2 +-
 drivers/net/hns3/hns3_rxtx.c      | 30 ++++++++++++++++++++----------
 drivers/net/hns3/hns3_rxtx.h      |  1 +
 4 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 9ce382a..ed4273b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2514,7 +2514,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 		.nb_min = HNS3_MIN_RING_DESC,
 		.nb_align = HNS3_ALIGN_RING_DESC,
 		.nb_seg_max = HNS3_MAX_TSO_BD_PER_PKT,
-		.nb_mtu_seg_max = HNS3_MAX_NON_TSO_BD_PER_PKT,
+		.nb_mtu_seg_max = hw->max_non_tso_bd_num,
 	};
 
 	info->default_rxconf = (struct rte_eth_rxconf) {
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 037a5be..c39edf5 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -966,7 +966,7 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 		.nb_min = HNS3_MIN_RING_DESC,
 		.nb_align = HNS3_ALIGN_RING_DESC,
 		.nb_seg_max = HNS3_MAX_TSO_BD_PER_PKT,
-		.nb_mtu_seg_max = HNS3_MAX_NON_TSO_BD_PER_PKT,
+		.nb_mtu_seg_max = hw->max_non_tso_bd_num,
 	};
 
 	info->default_rxconf = (struct rte_eth_rxconf) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 6d02bad..7c7b9de 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2181,6 +2181,7 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 					HNS3_PORT_BASE_VLAN_ENABLE;
 	else
 		txq->pvid_sw_shift_en = false;
+	txq->max_non_tso_bd_num = hw->max_non_tso_bd_num;
 	txq->configured = true;
 	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
@@ -2438,7 +2439,8 @@ hns3_pktmbuf_copy_hdr(struct rte_mbuf *new_pkt, struct rte_mbuf *old_pkt)
 }
 
 static int
-hns3_reassemble_tx_pkts(struct rte_mbuf *tx_pkt, struct rte_mbuf **new_pkt)
+hns3_reassemble_tx_pkts(struct rte_mbuf *tx_pkt, struct rte_mbuf **new_pkt,
+				  uint8_t max_non_tso_bd_num)
 {
 	struct rte_mempool *mb_pool;
 	struct rte_mbuf *new_mbuf;
@@ -2458,7 +2460,7 @@ hns3_reassemble_tx_pkts(struct rte_mbuf *tx_pkt, struct rte_mbuf **new_pkt)
 	mb_pool = tx_pkt->pool;
 	buf_size = tx_pkt->buf_len - RTE_PKTMBUF_HEADROOM;
 	nb_new_buf = (rte_pktmbuf_pkt_len(tx_pkt) - 1) / buf_size + 1;
-	if (nb_new_buf > HNS3_MAX_NON_TSO_BD_PER_PKT)
+	if (nb_new_buf > max_non_tso_bd_num)
 		return -EINVAL;
 
 	last_buf_len = rte_pktmbuf_pkt_len(tx_pkt) % buf_size;
@@ -2690,7 +2692,8 @@ hns3_txd_enable_checksum(struct hns3_tx_queue *txq, uint16_t tx_desc_id,
 }
 
 static bool
-hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num)
+hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num,
+				 uint32_t max_non_tso_bd_num)
 {
 	struct rte_mbuf *m_first = tx_pkts;
 	struct rte_mbuf *m_last = tx_pkts;
@@ -2705,10 +2708,10 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num)
 	 * frags greater than gso header len + mss, and the remaining 7
 	 * consecutive frags greater than MSS except the last 7 frags.
 	 */
-	if (bd_num <= HNS3_MAX_NON_TSO_BD_PER_PKT)
+	if (bd_num <= max_non_tso_bd_num)
 		return false;
 
-	for (i = 0; m_last && i < HNS3_MAX_NON_TSO_BD_PER_PKT - 1;
+	for (i = 0; m_last && i < max_non_tso_bd_num - 1;
 	     i++, m_last = m_last->next)
 		tot_len += m_last->data_len;
 
@@ -2726,7 +2729,7 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num)
 	 * ensure the sum of the data length of every 7 consecutive buffer
 	 * is greater than mss except the last one.
 	 */
-	for (i = 0; m_last && i < bd_num - HNS3_MAX_NON_TSO_BD_PER_PKT; i++) {
+	for (i = 0; m_last && i < bd_num - max_non_tso_bd_num; i++) {
 		tot_len -= m_first->data_len;
 		tot_len += m_last->data_len;
 
@@ -2859,15 +2862,19 @@ uint16_t
 hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	       uint16_t nb_pkts)
 {
+	struct hns3_tx_queue *txq;
 	struct rte_mbuf *m;
 	uint16_t i;
 	int ret;
 
+	txq = (struct hns3_tx_queue *)tx_queue;
+
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 
 		if (hns3_pkt_is_tso(m) &&
-		    (hns3_pkt_need_linearized(m, m->nb_segs) ||
+		    (hns3_pkt_need_linearized(m, m->nb_segs,
+					      txq->max_non_tso_bd_num) ||
 		     hns3_check_tso_pkt_valid(m))) {
 			rte_errno = EINVAL;
 			return i;
@@ -2880,7 +2887,7 @@ hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 			return i;
 		}
 
-		if (hns3_vld_vlan_chk(tx_queue, m)) {
+		if (hns3_vld_vlan_chk(txq, m)) {
 			rte_errno = EINVAL;
 			return i;
 		}
@@ -2921,6 +2928,7 @@ static int
 hns3_check_non_tso_pkt(uint16_t nb_buf, struct rte_mbuf **m_seg,
 		      struct rte_mbuf *tx_pkt, struct hns3_tx_queue *txq)
 {
+	uint8_t max_non_tso_bd_num;
 	struct rte_mbuf *new_pkt;
 	int ret;
 
@@ -2936,9 +2944,11 @@ hns3_check_non_tso_pkt(uint16_t nb_buf, struct rte_mbuf **m_seg,
 		return -EINVAL;
 	}
 
-	if (unlikely(nb_buf > HNS3_MAX_NON_TSO_BD_PER_PKT)) {
+	max_non_tso_bd_num = txq->max_non_tso_bd_num;
+	if (unlikely(nb_buf > max_non_tso_bd_num)) {
 		txq->exceed_limit_bd_pkt_cnt++;
-		ret = hns3_reassemble_tx_pkts(tx_pkt, &new_pkt);
+		ret = hns3_reassemble_tx_pkts(tx_pkt, &new_pkt,
+					      max_non_tso_bd_num);
 		if (ret) {
 			txq->exceed_limit_bd_reassem_fail++;
 			return ret;
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 476cfc2..5ffe30e 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -372,6 +372,7 @@ struct hns3_tx_queue {
 	 */
 	uint32_t min_tx_pkt_len;
 
+	uint8_t max_non_tso_bd_num; /* max BD number of one non-TSO packet */
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if tx queue has been configured */
 	/*
-- 
2.9.5



* [dpdk-dev] [PATCH 05/17] net/hns3: add TSO pseudo header calculation compatibility
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (3 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 04/17] net/hns3: add max number of segs compatibility Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR Wei Hu (Xavier)
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

On Kunpeng 920, when processing packets that need TSO, the network
driver has to erase the L4 len value of the TCP TSO pseudo header and
recalculate the pseudo header checksum. On Kunpeng 930, erasing the L4
len value of the TCP TSO pseudo header is no longer necessary.
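
For context, a minimal sketch (not part of this patch; the helper name
is made up) of the Tx path where this handling runs: TSO packets are
passed through rte_eth_tx_prepare(), whose hns3 callback performs the
pseudo header fixup only in the software mode:

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical helper: prepare and send a burst that may contain TSO
     * packets (PKT_TX_TCP_SEG set, tso_segsz/l2_len/l3_len/l4_len filled
     * in by the application).
     */
    static uint16_t
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
            uint16_t nb_prep;

            nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);

            return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
    }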

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    |  2 +
 drivers/net/hns3/hns3_ethdev.h    | 21 +++++++++-
 drivers/net/hns3/hns3_ethdev_vf.c |  2 +
 drivers/net/hns3/hns3_rxtx.c      | 82 ++++++++++++++++++++++++---------------
 drivers/net/hns3/hns3_rxtx.h      | 17 ++++++++
 5 files changed, 92 insertions(+), 32 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index ed4273b..9c9f264 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2916,6 +2916,7 @@ hns3_get_capability(struct hns3_hw *hw)
 		hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_RSV_ONE;
 		hw->intr.coalesce_mode = HNS3_INTR_COALESCE_NON_QL;
 		hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_2US;
+		hw->tso_mode = HNS3_TSO_SW_CAL_PSEUDO_H_CSUM;
 		hw->vlan_mode = HNS3_SW_SHIFT_AND_DISCARD_MODE;
 		hw->min_tx_pkt_len = HNS3_HIP08_MIN_TX_PKT_LEN;
 		return 0;
@@ -2932,6 +2933,7 @@ hns3_get_capability(struct hns3_hw *hw)
 	hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_ALL;
 	hw->intr.coalesce_mode = HNS3_INTR_COALESCE_QL;
 	hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_1US;
+	hw->tso_mode = HNS3_TSO_HW_CAL_PSEUDO_H_CSUM;
 	hw->vlan_mode = HNS3_HW_SHIFT_AND_DISCARD_MODE;
 	hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN;
 
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index a70223f..f170df9 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -416,6 +416,9 @@ struct hns3_queue_intr {
 	uint8_t gl_unit;
 };
 
+#define HNS3_TSO_SW_CAL_PSEUDO_H_CSUM		0
+#define HNS3_TSO_HW_CAL_PSEUDO_H_CSUM		1
+
 struct hns3_hw {
 	struct rte_eth_dev_data *data;
 	void *io_base;
@@ -476,7 +479,23 @@ struct hns3_hw {
 	uint32_t min_tx_pkt_len;
 
 	struct hns3_queue_intr intr;
-
+	/*
+	 * tso mode.
+	 * value range:
+	 *      HNS3_TSO_SW_CAL_PSEUDO_H_CSUM/HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *
+	 *  - HNS3_TSO_SW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, because of the hardware constraint, network driver
+	 *     software needs to erase the L4 len value of the TCP pseudo header
+	 *     and recalculate the TCP pseudo header checksum of packets that
+	 *     need TSO.
+	 *
+	 *  - HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, hardware supports recalculating the TCP pseudo
+	 *     header checksum of packets that need TSO, so the network driver
+	 *     software does not need to recalculate it.
+	 */
+	uint8_t tso_mode;
 	/*
 	 * vlan mode.
 	 * value range:
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index c39edf5..565cf60 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1165,6 +1165,7 @@ hns3vf_get_capability(struct hns3_hw *hw)
 		hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_RSV_ONE;
 		hw->intr.coalesce_mode = HNS3_INTR_COALESCE_NON_QL;
 		hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_2US;
+		hw->tso_mode = HNS3_TSO_SW_CAL_PSEUDO_H_CSUM;
 		hw->min_tx_pkt_len = HNS3_HIP08_MIN_TX_PKT_LEN;
 		return 0;
 	}
@@ -1180,6 +1181,7 @@ hns3vf_get_capability(struct hns3_hw *hw)
 	hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_ALL;
 	hw->intr.coalesce_mode = HNS3_INTR_COALESCE_QL;
 	hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_1US;
+	hw->tso_mode = HNS3_TSO_HW_CAL_PSEUDO_H_CSUM;
 	hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN;
 
 	return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 7c7b9de..930aa28 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2188,6 +2188,7 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	txq->io_tail_reg = (volatile void *)((char *)txq->io_base +
 					     HNS3_RING_TX_TAIL_REG);
 	txq->min_tx_pkt_len = hw->min_tx_pkt_len;
+	txq->tso_mode = hw->tso_mode;
 	txq->over_length_pkt_cnt = 0;
 	txq->exceed_limit_bd_pkt_cnt = 0;
 	txq->exceed_limit_bd_reassem_fail = 0;
@@ -2858,47 +2859,66 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 }
 #endif
 
-uint16_t
-hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
+static int
+hns3_prep_pkt_proc(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 {
-	struct hns3_tx_queue *txq;
-	struct rte_mbuf *m;
-	uint16_t i;
 	int ret;
 
-	txq = (struct hns3_tx_queue *)tx_queue;
-
-	for (i = 0; i < nb_pkts; i++) {
-		m = tx_pkts[i];
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	ret = rte_validate_tx_offload(m);
+	if (ret != 0) {
+		rte_errno = -ret;
+		return ret;
+	}
 
-		if (hns3_pkt_is_tso(m) &&
-		    (hns3_pkt_need_linearized(m, m->nb_segs,
-					      txq->max_non_tso_bd_num) ||
-		     hns3_check_tso_pkt_valid(m))) {
+	ret = hns3_vld_vlan_chk(tx_queue, m);
+	if (ret != 0) {
+		rte_errno = EINVAL;
+		return ret;
+	}
+#endif
+	if (hns3_pkt_is_tso(m)) {
+		if (hns3_pkt_need_linearized(m, m->nb_segs,
+					     tx_queue->max_non_tso_bd_num) ||
+		    hns3_check_tso_pkt_valid(m)) {
 			rte_errno = EINVAL;
-			return i;
+			return -EINVAL;
 		}
 
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-		ret = rte_validate_tx_offload(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
+		if (tx_queue->tso_mode != HNS3_TSO_SW_CAL_PSEUDO_H_CSUM) {
+			/*
+			 * (tso mode != HNS3_TSO_SW_CAL_PSEUDO_H_CSUM) means
+			 * hardware supports recalculating the TCP pseudo
+			 * header checksum of packets that need TSO, so the
+			 * network driver software does not need to
+			 * recalculate it.
+			 */
+			hns3_outer_header_cksum_prepare(m);
+			return 0;
 		}
+	}
 
-		if (hns3_vld_vlan_chk(txq, m)) {
-			rte_errno = EINVAL;
-			return i;
-		}
-#endif
-		ret = rte_net_intel_cksum_prepare(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
-		}
+	ret = rte_net_intel_cksum_prepare(m);
+	if (ret != 0) {
+		rte_errno = -ret;
+		return ret;
+	}
+
+	hns3_outer_header_cksum_prepare(m);
+
+	return 0;
+}
 
-		hns3_outer_header_cksum_prepare(m);
+uint16_t
+hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	       uint16_t nb_pkts)
+{
+	struct rte_mbuf *m;
+	uint16_t i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (hns3_prep_pkt_proc(tx_queue, m))
+			return i;
 	}
 
 	return i;
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 5ffe30e..d7d70f6 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -367,6 +367,23 @@ struct hns3_tx_queue {
 	struct rte_mbuf **free;
 
 	/*
+	 * tso mode.
+	 * value range:
+	 *      HNS3_TSO_SW_CAL_PSEUDO_H_CSUM/HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *
+	 *  - HNS3_TSO_SW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, because of the hardware constraint, network driver
+	 *     software needs to erase the L4 len value of the TCP pseudo header
+	 *     and recalculate the TCP pseudo header checksum of packets that
+	 *     need TSO.
+	 *
+	 *  - HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, hardware supports recalculating the TCP pseudo
+	 *     header checksum of packets that need TSO, so the network driver
+	 *     software does not need to recalculate it.
+	 */
+	uint8_t tso_mode;
+	/*
 	 * The minimum length of the packet supported by hardware in the Tx
 	 * direction.
 	 */
-- 
2.9.5



* [dpdk-dev] [PATCH 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (4 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 05/17] net/hns3: add TSO pseudo header calculation compatibility Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 07/17] net/hns3: add default branch to switch when parsing fd tuple Wei Hu (Xavier)
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Hongbo Zheng <zhenghongbo3@huawei.com>

According to the protocol of PCIe, FLR to a PF device resets the PF state
as well as the SR-IOV extended capability including VF Enable which means
that VFs no longer exist.

When the PF device is in the FLR reset stage, the register state of the
VF device is not reliable, so the VF device's register state detection
is not carried out during PF FLR.

In this case, we just ignore the register states to avoid accessing
nonexistent registers, and return false in the internal function
hns3vf_is_reset_pending to indicate that there are no other reset states
that need to be processed by the PMD driver.

Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.h    |  7 +++++++
 drivers/net/hns3/hns3_ethdev_vf.c | 15 +++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index f170df9..3f3f973 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -275,6 +275,13 @@ enum hns3_reset_level {
 	 * Kernel PF driver use mailbox to inform DPDK VF to do reset, the value
 	 * of the reset level and the one defined in kernel driver should be
 	 * same.
+	 *
+	 * According to the protocol of PCIe, FLR to a PF resets the PF state as
+	 * well as the SR-IOV extended capability including VF Enable which
+	 * means that VFs no longer exist.
+	 *
+	 * In PF FLR, the register state of VF is not reliable, VF's driver
+	 * should not access the registers of the VF device.
 	 */
 	HNS3_VF_FULL_RESET = 3,
 	HNS3_FLR_RESET,     /* A VF perform FLR reset */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 565cf60..cb2747b 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -2191,6 +2191,21 @@ hns3vf_is_reset_pending(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	enum hns3_reset_level reset;
 
+	/*
+	 * According to the protocol of PCIe, FLR to a PF device resets the PF
+	 * state as well as the SR-IOV extended capability including VF Enable
+	 * which means that VFs no longer exist.
+	 *
+	 * HNS3_VF_FULL_RESET means PF device is in FLR reset. When PF device
+	 * is in FLR stage, the register state of VF device is not reliable,
+	 * so register state detection cannot be carried out. In this case,
+	 * we just ignore the register states and return false to indicate that
+	 * there are no other reset states that need to be processed by driver.
+	 */
+	if (hw->reset.level == HNS3_VF_FULL_RESET)
+		return false;
+
+	/* Check the registers to confirm whether there is reset pending */
 	hns3vf_check_event_cause(hns, NULL);
 	reset = hns3vf_get_reset_level(hw, &hw->reset.pending);
 	if (hw->reset.level != HNS3_NONE_RESET && hw->reset.level < reset) {
-- 
2.9.5



* [dpdk-dev] [PATCH 07/17] net/hns3: add default branch to switch when parsing fd tuple
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (5 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 08/17] net/hns3: add break to exit loop when err stat item found Wei Hu (Xavier)
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

This patch solves the following static check warning in the internal
function hns3_fd_convert_tuple:
    "The switch statement must have a 'default' branch".

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_fdir.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index 5c3dd05..5729923 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -526,7 +526,8 @@ static inline void hns3_fd_convert_int32(uint32_t key, uint32_t mask,
 	memcpy(val_y, &tmp_y_l, sizeof(tmp_y_l));
 }
 
-static bool hns3_fd_convert_tuple(uint32_t tuple, uint8_t *key_x,
+static bool hns3_fd_convert_tuple(struct hns3_hw *hw,
+				  uint32_t tuple, uint8_t *key_x,
 				  uint8_t *key_y, struct hns3_fdir_rule *rule)
 {
 	struct hns3_fdir_key_conf *key_conf;
@@ -603,6 +604,9 @@ static bool hns3_fd_convert_tuple(uint32_t tuple, uint8_t *key_x,
 		calc_y(*key_y, key_conf->spec.ip_proto,
 		       key_conf->mask.ip_proto);
 		break;
+	default:
+		hns3_warn(hw, "not support tuple of (%d)", tuple);
+		break;
 	}
 	return true;
 }
@@ -710,7 +714,7 @@ static int hns3_config_key(struct hns3_adapter *hns,
 
 		tuple_size = tuple_key_info[i].key_length / HNS3_BITS_PER_BYTE;
 		if (key_cfg->tuple_active & BIT(i)) {
-			tuple_valid = hns3_fd_convert_tuple(i, cur_key_x,
+			tuple_valid = hns3_fd_convert_tuple(hw, i, cur_key_x,
 							    cur_key_y, rule);
 			if (tuple_valid) {
 				cur_key_x += tuple_size;
-- 
2.9.5



* [dpdk-dev] [PATCH 08/17] net/hns3: add break to exit loop when err stat item found
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (6 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 07/17] net/hns3: add default branch to switch when parsing fd tuple Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 09/17] net/hns3: support flow action of queue region Wei Hu (Xavier)
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Hongbo Zheng <zhenghongbo3@huawei.com>

This patch removes a redundant operation during traversal. In the
internal function hns3_error_int_stats_add, which adds error statistics,
only one statistical item can be found in the for loop, so a break can
be executed once the matching item is found, without traversing the
remaining table entries.

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_stats.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 067673c..e8846b9 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -703,6 +703,7 @@ hns3_error_int_stats_add(struct hns3_adapter *hns, const char *err)
 			addr = (char *)&pf->abn_int_stats +
 				hns3_error_int_stats_strings[i].offset;
 			*(uint64_t *)addr += 1;
+			break;
 		}
 	}
 }
-- 
2.9.5



* [dpdk-dev] [PATCH 09/17] net/hns3: support flow action of queue region
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (7 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 08/17] net/hns3: add break to exit loop when err stat item found Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 10/17] net/hns3: support querying RSS flow rule Wei Hu (Xavier)
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Chengwen Feng <fengchengwen@huawei.com>

Kunpeng 930 hardware supports spreading packets to a region of queues
that can be configured by an FDIR rule, which means a user can create
one FDIR rule whose action is a region of queues, and RSS then uses the
region info to spread packets.

RTE_FLOW_ACTION_TYPE_RSS is used to spread packets among several queues,
and the user can configure parameters such as func/level/types/key/queue
to control the RSS function, so we provide this feature under the
RTE_FLOW_ACTION_TYPE_RSS framework.

Considering that the RSS input tuple does not contain an eth header, we
use the following rule to distinguish whether it is a queue region
configuration or a general RSS configuration:
Case 1: the pattern has ETH and the action's queue_num > 0, which
indicates a queue region configuration.
Other cases: a general RSS configuration.

So if the user wants to configure one flow in which ipv4=192.168.1.192 is
spread to the queue region of queues 0/1/2/3, the pattern should be:
  RTE_FLOW_ITEM_TYPE_ETH with spec=last=mask=NULL
  RTE_FLOW_ITEM_TYPE_IPV4 with spec=192.168.1.192 last=mask=NULL
  RTE_FLOW_ITEM_TYPE_END
the actions should be:
  RTE_FLOW_ACTION_TYPE_RSS with queue_num=4 queue=0/1/2/3
  RTE_FLOW_ACTION_TYPE_END
After calling rte_flow_create, one FDIR rule will be created.
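
Below is a hedged C sketch of how an application might build exactly that
rule with the rte_flow API. The port ID, the choice of the IPv4 destination
address field and the explicit mask are assumptions for illustration (the
description above leaves last/mask as NULL, i.e. the default mask):

  #include <stdint.h>
  #include <rte_common.h>
  #include <rte_byteorder.h>
  #include <rte_ip.h>
  #include <rte_flow.h>

  static struct rte_flow *
  create_queue_region_rule(uint16_t port_id, struct rte_flow_error *err)
  {
  	/* Four contiguous queues starting at 0: a power-of-two sized region. */
  	static const uint16_t region_queues[] = { 0, 1, 2, 3 };
  	struct rte_flow_item_ipv4 ip_spec = {
  		.hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 1, 192)),
  	};
  	struct rte_flow_item_ipv4 ip_mask = {
  		.hdr.dst_addr = RTE_BE32(UINT32_MAX),
  	};
  	const struct rte_flow_item pattern[] = {
  		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* spec = last = mask = NULL */
  		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
  		  .spec = &ip_spec, .mask = &ip_mask },
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	const struct rte_flow_action_rss rss = {
  		.queue_num = RTE_DIM(region_queues),	/* ETH + queue_num > 0 => queue region */
  		.queue = region_queues,
  	};
  	const struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};
  	const struct rte_flow_attr attr = { .ingress = 1 };

  	return rte_flow_create(port_id, &attr, pattern, actions, err);
  }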

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c    |   5 +-
 drivers/net/hns3/hns3_cmd.h    |   2 +-
 drivers/net/hns3/hns3_ethdev.h |   8 +--
 drivers/net/hns3/hns3_fdir.c   |  35 +++++++-----
 drivers/net/hns3/hns3_fdir.h   |  23 +++++++-
 drivers/net/hns3/hns3_flow.c   | 117 +++++++++++++++++++++++++++++++++++++----
 6 files changed, 158 insertions(+), 32 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 0299072..f7cfa00 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -433,8 +433,9 @@ static void hns3_parse_capability(struct hns3_hw *hw,
 
 	if (hns3_get_bit(caps, HNS3_CAPS_UDP_GSO_B))
 		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_UDP_GSO_B, 1);
-	if (hns3_get_bit(caps, HNS3_CAPS_ADQ_B))
-		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_ADQ_B, 1);
+	if (hns3_get_bit(caps, HNS3_CAPS_FD_QUEUE_REGION_B))
+		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_FD_QUEUE_REGION_B,
+			     1);
 	if (hns3_get_bit(caps, HNS3_CAPS_PTP_B))
 		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_PTP_B, 1);
 	if (hns3_get_bit(caps, HNS3_CAPS_TX_PUSH_B))
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 629b114..510d68c 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -278,7 +278,7 @@ struct hns3_rx_priv_buff_cmd {
 enum HNS3_CAPS_BITS {
 	HNS3_CAPS_UDP_GSO_B,
 	HNS3_CAPS_ATR_B,
-	HNS3_CAPS_ADQ_B,
+	HNS3_CAPS_FD_QUEUE_REGION_B,
 	HNS3_CAPS_PTP_B,
 	HNS3_CAPS_INT_QL_B,
 	HNS3_CAPS_SIMPLE_BD_B,
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 3f3f973..e1ed4d6 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -719,7 +719,7 @@ struct hns3_adapter {
 #define HNS3_DEV_SUPPORT_DCB_B			0x0
 #define HNS3_DEV_SUPPORT_COPPER_B		0x1
 #define HNS3_DEV_SUPPORT_UDP_GSO_B		0x2
-#define HNS3_DEV_SUPPORT_ADQ_B			0x3
+#define HNS3_DEV_SUPPORT_FD_QUEUE_REGION_B	0x3
 #define HNS3_DEV_SUPPORT_PTP_B			0x4
 #define HNS3_DEV_SUPPORT_TX_PUSH_B		0x5
 #define HNS3_DEV_SUPPORT_INDEP_TXRX_B		0x6
@@ -736,9 +736,9 @@ struct hns3_adapter {
 #define hns3_dev_udp_gso_supported(hw) \
 	hns3_get_bit((hw)->capability, HNS3_DEV_SUPPORT_UDP_GSO_B)
 
-/* Support Application Device Queue */
-#define hns3_dev_adq_supported(hw) \
-	hns3_get_bit((hw)->capability, HNS3_DEV_SUPPORT_ADQ_B)
+/* Support the queue region action rule of flow directory */
+#define hns3_dev_fd_queue_region_supported(hw) \
+	hns3_get_bit((hw)->capability, HNS3_DEV_SUPPORT_FD_QUEUE_REGION_B)
 
 /* Support PTP timestamp offload */
 #define hns3_dev_ptp_supported(hw) \
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index 5729923..bf698f3 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -29,20 +29,23 @@
 
 #define HNS3_FD_AD_DATA_S		32
 #define HNS3_FD_AD_DROP_B		0
-#define HNS3_FD_AD_DIRECT_QID_B	1
+#define HNS3_FD_AD_DIRECT_QID_B		1
 #define HNS3_FD_AD_QID_S		2
-#define HNS3_FD_AD_QID_M		GENMASK(12, 2)
+#define HNS3_FD_AD_QID_M		GENMASK(11, 2)
 #define HNS3_FD_AD_USE_COUNTER_B	12
 #define HNS3_FD_AD_COUNTER_NUM_S	13
-#define HNS3_FD_AD_COUNTER_NUM_M	GENMASK(20, 13)
+#define HNS3_FD_AD_COUNTER_NUM_M	GENMASK(19, 13)
 #define HNS3_FD_AD_NXT_STEP_B		20
 #define HNS3_FD_AD_NXT_KEY_S		21
-#define HNS3_FD_AD_NXT_KEY_M		GENMASK(26, 21)
-#define HNS3_FD_AD_WR_RULE_ID_B	0
+#define HNS3_FD_AD_NXT_KEY_M		GENMASK(25, 21)
+#define HNS3_FD_AD_WR_RULE_ID_B		0
 #define HNS3_FD_AD_RULE_ID_S		1
-#define HNS3_FD_AD_RULE_ID_M		GENMASK(13, 1)
-#define HNS3_FD_AD_COUNTER_HIGH_BIT     7
-#define HNS3_FD_AD_COUNTER_HIGH_BIT_B   26
+#define HNS3_FD_AD_RULE_ID_M		GENMASK(12, 1)
+#define HNS3_FD_AD_QUE_REGION_EN_B	16
+#define HNS3_FD_AD_QUE_REGION_SIZE_S	17
+#define HNS3_FD_AD_QUE_REGION_SIZE_M	GENMASK(20, 17)
+#define HNS3_FD_AD_COUNTER_HIGH_BIT	7
+#define HNS3_FD_AD_COUNTER_HIGH_BIT_B	26
 
 enum HNS3_PORT_TYPE {
 	HOST_PORT,
@@ -426,13 +429,19 @@ static int hns3_fd_ad_config(struct hns3_hw *hw, int loc,
 		     action->write_rule_id_to_bd);
 	hns3_set_field(ad_data, HNS3_FD_AD_RULE_ID_M, HNS3_FD_AD_RULE_ID_S,
 		       action->rule_id);
+	if (action->nb_queues > 1) {
+		hns3_set_bit(ad_data, HNS3_FD_AD_QUE_REGION_EN_B, 1);
+		hns3_set_field(ad_data, HNS3_FD_AD_QUE_REGION_SIZE_M,
+			       HNS3_FD_AD_QUE_REGION_SIZE_S,
+			       rte_log2_u32(action->nb_queues));
+	}
 	/* set extend bit if counter_id is in [128 ~ 255] */
 	if (action->counter_id & BIT(HNS3_FD_AD_COUNTER_HIGH_BIT))
 		hns3_set_bit(ad_data, HNS3_FD_AD_COUNTER_HIGH_BIT_B, 1);
 	ad_data <<= HNS3_FD_AD_DATA_S;
 	hns3_set_bit(ad_data, HNS3_FD_AD_DROP_B, action->drop_packet);
-	hns3_set_bit(ad_data, HNS3_FD_AD_DIRECT_QID_B,
-		     action->forward_to_direct_queue);
+	if (action->nb_queues == 1)
+		hns3_set_bit(ad_data, HNS3_FD_AD_DIRECT_QID_B, 1);
 	hns3_set_field(ad_data, HNS3_FD_AD_QID_M, HNS3_FD_AD_QID_S,
 		       action->queue_id);
 	hns3_set_bit(ad_data, HNS3_FD_AD_USE_COUNTER_B, action->use_counter);
@@ -440,7 +449,7 @@ static int hns3_fd_ad_config(struct hns3_hw *hw, int loc,
 		       HNS3_FD_AD_COUNTER_NUM_S, action->counter_id);
 	hns3_set_bit(ad_data, HNS3_FD_AD_NXT_STEP_B, action->use_next_stage);
 	hns3_set_field(ad_data, HNS3_FD_AD_NXT_KEY_M, HNS3_FD_AD_NXT_KEY_S,
-		       action->counter_id);
+		       action->next_input_key);
 
 	req->ad_data = rte_cpu_to_le_64(ad_data);
 	ret = hns3_cmd_send(hw, &desc, 1);
@@ -752,12 +761,12 @@ static int hns3_config_action(struct hns3_hw *hw, struct hns3_fdir_rule *rule)
 
 	if (rule->action == HNS3_FD_ACTION_DROP_PACKET) {
 		ad_data.drop_packet = true;
-		ad_data.forward_to_direct_queue = false;
 		ad_data.queue_id = 0;
+		ad_data.nb_queues = 0;
 	} else {
 		ad_data.drop_packet = false;
-		ad_data.forward_to_direct_queue = true;
 		ad_data.queue_id = rule->queue_id;
+		ad_data.nb_queues = rule->nb_queues;
 	}
 
 	if (unlikely(rule->flags & HNS3_RULE_FLAG_COUNTER)) {
diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h
index f7b4216..a5760a3 100644
--- a/drivers/net/hns3/hns3_fdir.h
+++ b/drivers/net/hns3/hns3_fdir.h
@@ -104,8 +104,18 @@ struct hns3_fd_rule_tuples {
 struct hns3_fd_ad_data {
 	uint16_t ad_id;
 	uint8_t drop_packet;
-	uint8_t forward_to_direct_queue;
+	/*
+	 * equal 0 when action is drop.
+	 * index of queue when action is queue.
+	 * index of first queue of queue region when action is queue region.
+	 */
 	uint16_t queue_id;
+	/*
+	 * equal 0 when action is drop.
+	 * equal 1 when action is queue.
+	 * numbers of queues of queue region when action is queue region.
+	 */
+	uint16_t nb_queues;
 	uint8_t use_counter;
 	uint8_t counter_id;
 	uint8_t use_next_stage;
@@ -141,7 +151,18 @@ struct hns3_fdir_rule {
 	uint8_t action;
 	/* VF id, avaiblable when flags with HNS3_RULE_FLAG_VF_ID. */
 	uint8_t vf_id;
+	/*
+	 * equal 0 when action is drop.
+	 * index of queue when action is queue.
+	 * index of first queue of queue region when action is queue region.
+	 */
 	uint16_t queue_id;
+	/*
+	 * equal 0 when action is drop.
+	 * equal 1 when action is queue.
+	 * numbers of queues of queue region when action is queue region.
+	 */
+	uint16_t nb_queues;
 	uint16_t location;
 	struct rte_flow_action_count act_cnt;
 };
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 7ec46ae..1bc5911 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -90,16 +90,56 @@ net_addr_to_host(uint32_t *dst, const rte_be32_t *src, size_t len)
 		dst[i] = rte_be_to_cpu_32(src[i]);
 }
 
-static inline const struct rte_flow_action *
-find_rss_action(const struct rte_flow_action actions[])
+/*
+ * This function is used to find the general RSS action.
+ * 1. As we know, RSS is used to spread packets among several queues, and the
+ *    flow API provides the struct rte_flow_action_rss whose fields such as
+ *    func/level/types/key/queue can be configured to control the RSS function.
+ * 2. The flow API also supports queue region configuration for hns3. It is
+ *    implemented by FDIR + RSS in hns3 hardware: the user can create one FDIR
+ *    rule whose action is an RSS queue region.
+ * 3. When action is RSS, we use the following rule to distinguish:
+ *    Case 1: pattern has ETH and action's queue_num > 0, indicating it is a
+ *            queue region configuration.
+ *    Other cases: a general RSS action.
+ */
+static const struct rte_flow_action *
+hns3_find_rss_general_action(const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[])
 {
-	const struct rte_flow_action *next = &actions[0];
+	const struct rte_flow_action *act = NULL;
+	const struct hns3_rss_conf *rss;
+	bool have_eth = false;
 
-	for (; next->type != RTE_FLOW_ACTION_TYPE_END; next++) {
-		if (next->type == RTE_FLOW_ACTION_TYPE_RSS)
-			return next;
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
+			act = actions;
+			break;
+		}
 	}
-	return NULL;
+	if (!act)
+		return NULL;
+
+	for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) {
+		if (pattern->type == RTE_FLOW_ITEM_TYPE_ETH) {
+			have_eth = true;
+			break;
+		}
+	}
+
+	rss = act->conf;
+	if (have_eth && rss->conf.queue_num) {
+		/*
+		 * Pattern has ETH and action's queue_num > 0, indicating this is
+		 * a queue region configuration.
+		 * Because queue region is implemented by FDIR + RSS in hns3
+		 * hardware, it needs to enter the FDIR process, so return NULL
+		 * here to avoid entering the RSS process.
+		 */
+		return NULL;
+	}
+
+	return act;
 }
 
 static inline struct hns3_flow_counter *
@@ -233,11 +273,53 @@ hns3_handle_action_queue(struct rte_eth_dev *dev,
 			  "available queue (%d) in driver.",
 			  queue->index, hw->used_rx_queues);
 		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION, action,
-					  "Invalid queue ID in PF");
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  action, "Invalid queue ID in PF");
 	}
 
 	rule->queue_id = queue->index;
+	rule->nb_queues = 1;
+	rule->action = HNS3_FD_ACTION_ACCEPT_PACKET;
+	return 0;
+}
+
+static int
+hns3_handle_action_queue_region(struct rte_eth_dev *dev,
+				const struct rte_flow_action *action,
+				struct hns3_fdir_rule *rule,
+				struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	const struct rte_flow_action_rss *conf = action->conf;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t idx;
+
+	if (!hns3_dev_fd_queue_region_supported(hw))
+		return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ACTION, action,
+			"Not support config queue region!");
+
+	if ((!rte_is_power_of_2(conf->queue_num)) ||
+		conf->queue_num > hw->rss_size_max ||
+		conf->queue[0] >= hw->used_rx_queues ||
+		conf->queue[0] + conf->queue_num > hw->used_rx_queues) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION_CONF, action,
+			"Invalid start queue ID and queue num! the start queue "
+			"ID must valid, the queue num must be power of 2 and "
+			"<= rss_size_max.");
+	}
+
+	for (idx = 1; idx < conf->queue_num; idx++) {
+		if (conf->queue[idx] != conf->queue[idx - 1] + 1)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_CONF, action,
+				"Invalid queue ID sequence! the queue ID "
+				"must be continuous increment.");
+	}
+
+	rule->queue_id = conf->queue[0];
+	rule->nb_queues = conf->queue_num;
 	rule->action = HNS3_FD_ACTION_ACCEPT_PACKET;
 	return 0;
 }
@@ -274,6 +356,19 @@ hns3_handle_actions(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_DROP:
 			rule->action = HNS3_FD_ACTION_DROP_PACKET;
 			break;
+		/*
+		 * Here RSS's real action is queue region.
+		 * Queue region is implemented by FDIR + RSS in hns3 hardware,
+		 * the FDIR's action is one queue region (start_queue_id and
+		 * queue_num), then RSS spreads packets to the queue region by
+		 * the RSS algorithm.
+		 */
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			ret = hns3_handle_action_queue_region(dev, actions,
+							      rule, error);
+			if (ret)
+				return ret;
+			break;
 		case RTE_FLOW_ACTION_TYPE_MARK:
 			mark =
 			    (const struct rte_flow_action_mark *)actions->conf;
@@ -1629,7 +1724,7 @@ hns3_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	if (ret)
 		return ret;
 
-	if (find_rss_action(actions))
+	if (hns3_find_rss_general_action(pattern, actions))
 		return hns3_parse_rss_filter(dev, actions, error);
 
 	memset(&fdir_rule, 0, sizeof(struct hns3_fdir_rule));
@@ -1684,7 +1779,7 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	flow_node->flow = flow;
 	TAILQ_INSERT_TAIL(&process_list->flow_list, flow_node, entries);
 
-	act = find_rss_action(actions);
+	act = hns3_find_rss_general_action(pattern, actions);
 	if (act) {
 		rss_conf = act->conf;
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 10/17] net/hns3: support querying RSS flow rule
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (8 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 09/17] net/hns3: support flow action of queue region Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 11/17] net/hns3: check input RSS type when creating flow with RSS Wei Hu (Xavier)
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

This patch enables querying some RSS configurations of the specified
rule, for example, the RSS hash function and RSS types.
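
A hedged sketch of the application side (port_id and a previously created
'flow' handle are assumed; the RSS action entry passed to rte_flow_query is
the one this patch starts to handle):

  #include <stdio.h>
  #include <string.h>
  #include <inttypes.h>
  #include <rte_flow.h>

  static void
  show_rss_rule(uint16_t port_id, struct rte_flow *flow)
  {
  	struct rte_flow_action_rss rss_out;
  	struct rte_flow_error err;
  	const struct rte_flow_action query[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};

  	memset(&rss_out, 0, sizeof(rss_out));
  	if (rte_flow_query(port_id, flow, query, &rss_out, &err) == 0)
  		printf("rss func=%u types=0x%" PRIx64 " queue_num=%" PRIu32 "\n",
  		       (unsigned int)rss_out.func, rss_out.types,
  		       rss_out.queue_num);
  	else
  		printf("query failed: %s\n", err.message ? err.message : "");
  }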

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 1bc5911..14d5968 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1962,9 +1962,15 @@ hns3_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 		const struct rte_flow_action *actions, void *data,
 		struct rte_flow_error *error)
 {
+	struct rte_flow_action_rss *rss_conf;
+	struct hns3_rss_conf_ele *rss_rule;
 	struct rte_flow_query_count *qc;
 	int ret;
 
+	if (!flow->rule)
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_HANDLE, NULL, "invalid rule");
+
 	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
 		switch (actions->type) {
 		case RTE_FLOW_ACTION_TYPE_VOID:
@@ -1975,13 +1981,24 @@ hns3_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 			if (ret)
 				return ret;
 			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			if (flow->filter_type != RTE_ETH_FILTER_HASH) {
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ACTION,
+					actions, "action is not supported");
+			}
+			rss_conf = (struct rte_flow_action_rss *)data;
+			rss_rule = (struct hns3_rss_conf_ele *)flow->rule;
+			rte_memcpy(rss_conf, &rss_rule->filter_info.conf,
+				   sizeof(struct rte_flow_action_rss));
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  actions,
-						  "Query action only support count");
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				actions, "action is not supported");
 		}
 	}
+
 	return 0;
 }
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 11/17] net/hns3: check input RSS type when creating flow with RSS
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (9 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 10/17] net/hns3: support querying RSS flow rule Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 12/17] net/hns3: set RSS hash type input configuration Wei Hu (Xavier)
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

This patch adds a check on the input RSS types when creating a flow with
RSS. If the input RSS types are not supported by the hns3 network engine,
an error is returned.
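
For illustration only, a combination that this check now rejects with
-EINVAL (the queue fields and remaining members are immaterial here):

  /* L4-only modifier combined with pure IP RSS types: rejected. */
  const struct rte_flow_action_rss bad_rss = {
  	.types = ETH_RSS_IP | ETH_RSS_L4_SRC_ONLY,
  	.queue_num = 0,
  	.queue = NULL,
  };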

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 26 ++++++++------------------
 1 file changed, 8 insertions(+), 18 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 14d5968..f4ea47a 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1403,6 +1403,13 @@ hns3_parse_rss_filter(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION, act,
 					  "too many queues for RSS context");
 
+	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & ETH_RSS_IP))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &rss->types,
+					  "input RSS types are not supported");
+
 	act_index++;
 
 	/* Check if the next not void action is END */
@@ -1563,23 +1570,6 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 		.queue = conf->conf.queue,
 	};
 
-	/* The types is Unsupported by hns3' RSS */
-	if (!(rss_flow_conf.types & HNS3_ETH_RSS_SUPPORT) &&
-	    rss_flow_conf.types) {
-		hns3_err(hw,
-			 "Flow types(%" PRIx64 ") is unsupported by hns3's RSS",
-			 rss_flow_conf.types);
-		return -EINVAL;
-	}
-
-	if (rss_flow_conf.key_len &&
-	    rss_flow_conf.key_len > RTE_DIM(rss_info->key)) {
-		hns3_err(hw,
-			"input hash key(%u) greater than supported len(%zu)",
-			rss_flow_conf.key_len, RTE_DIM(rss_info->key));
-		return -EINVAL;
-	}
-
 	/* Filter the unsupported flow types */
 	flow_types = conf->conf.types ?
 		     rss_flow_conf.types & HNS3_ETH_RSS_SUPPORT :
@@ -1755,7 +1745,7 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct hns3_fdir_rule fdir_rule;
 	int ret;
 
-	ret = hns3_flow_args_check(attr, pattern, actions, error);
+	ret = hns3_flow_validate(dev, attr, pattern, actions, error);
 	if (ret)
 		return NULL;
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 12/17] net/hns3: set RSS hash type input configuration
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (10 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 11/17] net/hns3: check input RSS type when creating flow with RSS Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 13/17] net/hns3: fix config when creating RSS rule after flush Wei Hu (Xavier)
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

This patch adds support for hash type input set configuration. That is, the
upper level application can call the rte_flow_create or
rte_eth_dev_rss_hash_update API to select the specified tuples by RSS types.
As a result, the hardware can selectively use the enabled tuple fields to
perform the hash calculation.
The following uses the RSS flow as an example:

When the application calls the rte_flow_create API to select the tuple input
for ipv4-tcp l3-src-only, the driver enables the ipv4-tcp src field in the
hardware register by calling the internal function hns3_set_rss_tuple_by_rss_hf
according to the RSS types in the action list. In the testpmd application, the
command is as follows:
testpmd> flow create 0 ingress pattern end actions rss types \
ipv4-tcp l3-src-only end queues 1 3 end / end

The following uses rte_eth_dev_rss_hash_update as an example: if the user
selects the tuple input for ipv4-tcp l3-src-only via the rte_eth_rss_conf
structure passed to rte_eth_dev_rss_hash_update, the subsequent flow is the
same as that of the RSS flow. In the testpmd application, the command is as
follows:
testpmd> port config all rss ipv4-tcp l3-src-only
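
A hedged C equivalent of that testpmd command for a single port (port_id is
assumed; passing a NULL key leaves the configured hash key unchanged):

  #include <stdio.h>
  #include <rte_ethdev.h>

  static int
  enable_ipv4_tcp_l3_src_only(uint16_t port_id)
  {
  	struct rte_eth_rss_conf rss_conf = {
  		.rss_key = NULL,	/* keep the currently configured key */
  		.rss_hf = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
  	};
  	int ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);

  	if (ret != 0)
  		printf("port %u: RSS hash update failed: %d\n",
  		       (unsigned int)port_id, ret);
  	return ret;
  }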

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_cmd.h    |   9 +-
 drivers/net/hns3/hns3_ethdev.h |   2 +
 drivers/net/hns3/hns3_rss.c    | 244 +++++++++++++++++++++++++++++++----------
 drivers/net/hns3/hns3_rss.h    |  20 +---
 4 files changed, 197 insertions(+), 78 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 510d68c..0b531d9 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -563,14 +563,7 @@ struct hns3_rss_generic_config_cmd {
 
 /* Configure the tuple selection for RSS hash input, opcode:0x0D02 */
 struct hns3_rss_input_tuple_cmd {
-	uint8_t ipv4_tcp_en;
-	uint8_t ipv4_udp_en;
-	uint8_t ipv4_sctp_en;
-	uint8_t ipv4_fragment_en;
-	uint8_t ipv6_tcp_en;
-	uint8_t ipv6_udp_en;
-	uint8_t ipv6_sctp_en;
-	uint8_t ipv6_fragment_en;
+	uint64_t tuple_field;
 	uint8_t rsv[16];
 };
 
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index e1ed4d6..c2d0a75 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -790,6 +790,8 @@ struct hns3_adapter {
 
 #define BIT(nr) (1UL << (nr))
 
+#define BIT_ULL(x) (1ULL << (x))
+
 #define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
 #define GENMASK(h, l) \
 	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 247bd7d..5b51512 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -23,6 +23,166 @@ static const uint8_t hns3_hash_key[] = {
 	0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA
 };
 
+enum hns3_tuple_field {
+	/* IPV4_TCP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D = 0,
+	HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S,
+	HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S,
+
+	/* IPV4_UDP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D = 8,
+	HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S,
+	HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S,
+
+	/* IPV4_SCTP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D = 16,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER,
+
+	/* IPV4 ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D = 24,
+	HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S,
+	HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D,
+	HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S,
+
+	/* IPV6_TCP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D = 32,
+	HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S,
+	HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S,
+
+	/* IPV6_UDP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D = 40,
+	HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S,
+	HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S,
+
+	/* IPV6_SCTP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D = 50,
+	HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S,
+	HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER,
+
+	/* IPV6 ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D = 56,
+	HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S,
+	HNS3_RSS_FIELD_IPV6_FRAG_IP_D,
+	HNS3_RSS_FIELD_IPV6_FRAG_IP_S
+};
+
+static const struct {
+	uint64_t rss_types;
+	uint64_t rss_field;
+} hns3_set_tuple_table[] = {
+	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
+	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
+	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
+	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
+};
+
+static const struct {
+	uint64_t rss_types;
+	uint64_t rss_field;
+} hns3_set_rss_types[] = {
+	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
+	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
+	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
+	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
+};
+
 /*
  * rss_generic_config command function, opcode:0x0D01.
  * Used to set algorithm, key_offset and hash key of rss.
@@ -91,14 +251,8 @@ hns3_set_rss_input_tuple(struct hns3_hw *hw)
 
 	req = (struct hns3_rss_input_tuple_cmd *)desc_tuple.data;
 
-	req->ipv4_tcp_en = rss_config->rss_tuple_sets.ipv4_tcp_en;
-	req->ipv4_udp_en = rss_config->rss_tuple_sets.ipv4_udp_en;
-	req->ipv4_sctp_en = rss_config->rss_tuple_sets.ipv4_sctp_en;
-	req->ipv4_fragment_en = rss_config->rss_tuple_sets.ipv4_fragment_en;
-	req->ipv6_tcp_en = rss_config->rss_tuple_sets.ipv6_tcp_en;
-	req->ipv6_udp_en = rss_config->rss_tuple_sets.ipv6_udp_en;
-	req->ipv6_sctp_en = rss_config->rss_tuple_sets.ipv6_sctp_en;
-	req->ipv6_fragment_en = rss_config->rss_tuple_sets.ipv6_fragment_en;
+	req->tuple_field =
+		rte_cpu_to_le_64(rss_config->rss_tuple_sets.rss_tuple_fields);
 
 	ret = hns3_cmd_send(hw, &desc_tuple, 1);
 	if (ret)
@@ -171,6 +325,7 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 {
 	struct hns3_rss_input_tuple_cmd *req;
 	struct hns3_cmd_desc desc;
+	uint32_t fields_count = 0; /* count times for setting tuple fields */
 	uint32_t i;
 	int ret;
 
@@ -178,46 +333,30 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 
 	req = (struct hns3_rss_input_tuple_cmd *)desc.data;
 
-	/* Enable ipv4 or ipv6 tuple by flow type */
-	for (i = 0; i < RTE_ETH_FLOW_MAX; i++) {
-		switch (rss_hf & (1ULL << i)) {
-		case ETH_RSS_NONFRAG_IPV4_TCP:
-			req->ipv4_tcp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV4_UDP:
-			req->ipv4_udp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV4_SCTP:
-			req->ipv4_sctp_en = HNS3_RSS_INPUT_TUPLE_SCTP;
-			break;
-		case ETH_RSS_FRAG_IPV4:
-			req->ipv4_fragment_en |= HNS3_IP_FRAG_BIT_MASK;
-			break;
-		case ETH_RSS_NONFRAG_IPV4_OTHER:
-			req->ipv4_fragment_en |= HNS3_IP_OTHER_BIT_MASK;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_TCP:
-			req->ipv6_tcp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_UDP:
-			req->ipv6_udp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_SCTP:
-			req->ipv6_sctp_en = HNS3_RSS_INPUT_TUPLE_SCTP;
-			break;
-		case ETH_RSS_FRAG_IPV6:
-			req->ipv6_fragment_en |= HNS3_IP_FRAG_BIT_MASK;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_OTHER:
-			req->ipv6_fragment_en |= HNS3_IP_OTHER_BIT_MASK;
-			break;
-		default:
-			/*
-			 * rss_hf doesn't include unsupported flow types
-			 * because the API framework has checked it, and
-			 * this branch will never go unless rss_hf is zero.
-			 */
-			break;
+	for (i = 0; i < RTE_DIM(hns3_set_tuple_table); i++) {
+		if ((rss_hf & hns3_set_tuple_table[i].rss_types) ==
+		     hns3_set_tuple_table[i].rss_types) {
+			req->tuple_field |=
+			    rte_cpu_to_le_64(hns3_set_tuple_table[i].rss_field);
+			fields_count++;
+		}
+	}
+
+	/*
+	 * When the user does not specify the following types or a combination
+	 * of them, all fields are enabled for the supported RSS types. The
+	 * types in question are:
+	 * - ETH_RSS_L3_SRC_ONLY
+	 * - ETH_RSS_L3_DST_ONLY
+	 * - ETH_RSS_L4_SRC_ONLY
+	 * - ETH_RSS_L4_DST_ONLY
+	 */
+	if (fields_count == 0) {
+		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
+			if ((rss_hf & hns3_set_rss_types[i].rss_types) ==
+			     hns3_set_rss_types[i].rss_types)
+				req->tuple_field |= rte_cpu_to_le_64(
+					hns3_set_rss_types[i].rss_field);
 		}
 	}
 
@@ -227,14 +366,7 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 		return ret;
 	}
 
-	tuple->ipv4_tcp_en = req->ipv4_tcp_en;
-	tuple->ipv4_udp_en = req->ipv4_udp_en;
-	tuple->ipv4_sctp_en = req->ipv4_sctp_en;
-	tuple->ipv4_fragment_en = req->ipv4_fragment_en;
-	tuple->ipv6_tcp_en = req->ipv6_tcp_en;
-	tuple->ipv6_udp_en = req->ipv6_udp_en;
-	tuple->ipv6_sctp_en = req->ipv6_sctp_en;
-	tuple->ipv6_fragment_en = req->ipv6_fragment_en;
+	tuple->rss_tuple_fields = rte_le_to_cpu_64(req->tuple_field);
 
 	return 0;
 }
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index df5bba6..47d3586 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -17,7 +17,11 @@
 	ETH_RSS_NONFRAG_IPV6_TCP | \
 	ETH_RSS_NONFRAG_IPV6_UDP | \
 	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L3_SRC_ONLY | \
+	ETH_RSS_L3_DST_ONLY | \
+	ETH_RSS_L4_SRC_ONLY | \
+	ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_KEY_SIZE	40
@@ -30,20 +34,8 @@
 #define HNS3_RSS_HASH_ALGO_SYMMETRIC_TOEP 2
 #define HNS3_RSS_HASH_ALGO_MASK		0xf
 
-#define HNS3_RSS_INPUT_TUPLE_OTHER	GENMASK(3, 0)
-#define HNS3_RSS_INPUT_TUPLE_SCTP	GENMASK(4, 0)
-#define HNS3_IP_FRAG_BIT_MASK		GENMASK(3, 2)
-#define HNS3_IP_OTHER_BIT_MASK		GENMASK(1, 0)
-
 struct hns3_rss_tuple_cfg {
-	uint8_t ipv4_tcp_en;      /* Bit8.0~8.3 */
-	uint8_t ipv4_udp_en;      /* Bit9.0~9.3 */
-	uint8_t ipv4_sctp_en;     /* Bit10.0~10.4 */
-	uint8_t ipv4_fragment_en; /* Bit11.0~11.3 */
-	uint8_t ipv6_tcp_en;      /* Bit12.0~12.3 */
-	uint8_t ipv6_udp_en;      /* Bit13.0~13.3 */
-	uint8_t ipv6_sctp_en;     /* Bit14.0~14.4 */
-	uint8_t ipv6_fragment_en; /* Bit15.0~15.3 */
+	uint64_t rss_tuple_fields;
 };
 
 #define HNS3_RSS_QUEUES_BUFFER_NUM	64 /* Same as the Max rx/tx queue num */
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 13/17] net/hns3: fix config when creating RSS rule after flush
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (11 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 12/17] net/hns3: set RSS hash type input configuration Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 14/17] net/hns3: fix flow RSS queue num with 0 Wei Hu (Xavier)
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

Currently, when the user creates a flow RSS rule and then flushes the rule,
the driver zeroes the internal structure hw->rss_info maintained in the
driver. If the user then creates a flow RSS rule without specifying an RSS
hash key, the driver configures an incorrect (zeroed) RSS hash key into the
hardware network engine, which causes an RSS error. The related steps when
using testpmd are as follows:
  flow 0 <pattern> action rss xx end / end
  flow flush 0
  flow 0 <pattern> action rss queues 0 1 end / end

To solve the preceding problem, the flow flush processing is modified: it no
longer clears all RSS configurations in hw->rss_info. The RSS key information
in the hardware is actually not cleared, so hw->rss_info.key is kept
consistent with the value in the hardware. In addition, because the
redirection table is reset, we need to set queue to NULL and queue_num to
zero to indicate that the corresponding parameters in the hardware are
unavailable. We also set hw->rss_info.func to RTE_ETH_HASH_FUNCTION_MAX to
indicate that it is invalid.
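
The problematic second rule from the steps above, sketched with the rte_flow
API (the queue list matches the testpmd command; no key and no types are
given, so the driver must keep what is already configured in hardware):

  static const uint16_t rss_queues[] = { 0, 1 };
  const struct rte_flow_action_rss rss_no_key = {
  	.types = 0,		/* no types given in the rule */
  	.key = NULL,
  	.key_len = 0,		/* no key: the hardware key must be reused */
  	.queue = rss_queues,
  	.queue_num = 2,
  };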

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 40 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 5 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index f4ea47a..6f2ff87 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1293,10 +1293,24 @@ static bool
 hns3_action_rss_same(const struct rte_flow_action_rss *comp,
 		     const struct rte_flow_action_rss *with)
 {
-	return (comp->func == with->func &&
-		comp->level == with->level &&
-		comp->types == with->types &&
-		comp->key_len == with->key_len &&
+	bool func_is_same;
+
+	/*
+	 * When the user flushes all RSS rules, the RSS func is set to the
+	 * invalid value RTE_ETH_HASH_FUNCTION_MAX. When the user then creates
+	 * a flow after the flush, any valid RSS func is treated as different
+	 * from the flushed one. Otherwise, when the user creates an RSS action
+	 * with the func specified as RTE_ETH_HASH_FUNCTION_DEFAULT, the func is
+	 * treated as the same between consecutive RSS flows.
+	 */
+	if (comp->func == RTE_ETH_HASH_FUNCTION_MAX)
+		func_is_same = false;
+	else
+		func_is_same = (with->func ? (comp->func == with->func) : true);
+
+	return (func_is_same &&
+		comp->types == (with->types & HNS3_ETH_RSS_SUPPORT) &&
+		comp->level == with->level && comp->key_len == with->key_len &&
 		comp->queue_num == with->queue_num &&
 		!memcmp(comp->key, with->key, with->key_len) &&
 		!memcmp(comp->queue, with->queue,
@@ -1589,7 +1603,19 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 				hns3_err(hw, "RSS disable failed(%d)", ret);
 				return ret;
 			}
-			memset(rss_info, 0, sizeof(struct hns3_rss_conf));
+
+			if (rss_flow_conf.queue_num) {
+				/*
+				 * Due the content of queue pointer have been
+				 * reset to 0, the rss_info->conf.queue should
+				 * be set NULL.
+				 */
+				rss_info->conf.queue = NULL;
+				rss_info->conf.queue_num = 0;
+			}
+
+			/* set RSS func invalid after flushed */
+			rss_info->conf.func = RTE_ETH_HASH_FUNCTION_MAX;
 			return 0;
 		}
 		return -EINVAL;
@@ -1651,6 +1677,10 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev)
 	if (hw->rss_info.conf.queue_num == 0)
 		return 0;
 
+	/* When user flush all rules, it doesn't need to restore RSS rule */
+	if (hw->rss_info.conf.func == RTE_ETH_HASH_FUNCTION_MAX)
+		return 0;
+
 	return hns3_config_rss_filter(dev, &hw->rss_info, true);
 }
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 14/17] net/hns3: fix flow RSS queue num with 0
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (12 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 13/17] net/hns3: fix config when creating RSS rule after flush Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 15/17] net/hns3: fix flushing RSS rule Wei Hu (Xavier)
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

When the user specifies the RSS queue num as 0 in the action list of the
flow create API, it should still create a valid flow rule. The following
flow rule should succeed in the command line of the testpmd application:
flow create 0 <pattern> actions rss queues  / end
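
The same rule sketched with the rte_flow API: an RSS action whose queue list
is empty (queue_num == 0), which should now be accepted and leave the
existing RSS queue configuration untouched (the types value is illustrative):

  const struct rte_flow_action_rss rss_no_queues = {
  	.types = ETH_RSS_IP,	/* illustrative types */
  	.queue_num = 0,
  	.queue = NULL,
  };
  const struct rte_flow_action actions[] = {
  	{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss_no_queues },
  	{ .type = RTE_FLOW_ACTION_TYPE_END },
  };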

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 6f2ff87..5b5124c 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1360,10 +1360,9 @@ hns3_parse_rss_filter(struct rte_eth_dev *dev,
 	uint16_t n;
 
 	NEXT_ITEM_OF_ACTION(act, actions, act_index);
-	/* Get configuration args from APP cmdline input */
 	rss = act->conf;
 
-	if (rss == NULL || rss->queue_num == 0) {
+	if (rss == NULL) {
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "no valid queues");
@@ -1537,11 +1536,6 @@ hns3_update_indir_table(struct rte_eth_dev *dev,
 	uint8_t queue_id;
 	uint32_t i;
 
-	if (num == 0) {
-		hns3_err(hw, "No PF queues are configured to enable RSS");
-		return -ENOTSUP;
-	}
-
 	allow_rss_queues = RTE_MIN(dev->data->nb_rx_queues, hw->rss_size_max);
 	/* Fill in redirection table */
 	memcpy(indir_tbl, hw->rss_info.rss_indirection_tbl,
@@ -1632,10 +1626,11 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 	hns3_info(hw, "Max of contiguous %u PF queues are configured", num);
 
 	rte_spinlock_lock(&hw->lock);
-	/* Update redirection talbe of rss */
-	ret = hns3_update_indir_table(dev, &rss_flow_conf, num);
-	if (ret)
-		goto rss_config_err;
+	if (num) {
+		ret = hns3_update_indir_table(dev, &rss_flow_conf, num);
+		if (ret)
+			goto rss_config_err;
+	}
 
 	/* Set hash algorithm and flow types by the user's config */
 	ret = hns3_hw_rss_hash_set(hw, &rss_flow_conf);
@@ -1661,9 +1656,6 @@ hns3_clear_rss_filter(struct rte_eth_dev *dev)
 	struct hns3_adapter *hns = dev->data->dev_private;
 	struct hns3_hw *hw = &hns->hw;
 
-	if (hw->rss_info.conf.queue_num == 0)
-		return 0;
-
 	return hns3_config_rss_filter(dev, &hw->rss_info, false);
 }
 
@@ -1674,9 +1666,6 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev)
 	struct hns3_adapter *hns = dev->data->dev_private;
 	struct hns3_hw *hw = &hns->hw;
 
-	if (hw->rss_info.conf.queue_num == 0)
-		return 0;
-
 	/* When user flush all rules, it doesn't need to restore RSS rule */
 	if (hw->rss_info.conf.func == RTE_ETH_HASH_FUNCTION_MAX)
 		return 0;
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 15/17] net/hns3: fix flushing RSS rule
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (13 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 14/17] net/hns3: fix flow RSS queue num with 0 Wei Hu (Xavier)
@ 2020-09-22  8:53 ` Wei Hu (Xavier)
  2020-09-22  8:54 ` [dpdk-dev] [PATCH 16/17] net/hns3: fix configuring device with RSS is enabled Wei Hu (Xavier)
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:53 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

When the user creates a flow without RSS by calling the rte_flow_create API
and then destroys it by calling the rte_flow_flush API, the driver should not
clear the RSS rule.

A reasonable handling method is that, when the user creates RSS rules, the
driver clears them only when flushing or destroying all flow rules. Also,
hw->rss_info should save the RSS config of the last successful RSS rule.
When n RSS rules are created, RSS should not be disabled before the last RSS
rule is destroyed.
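
A hedged sketch of that sequence (this helper and its arguments are
hypothetical; r1 is assumed to be one of several RSS rules created earlier):

  #include <rte_flow.h>

  static void
  destroy_then_flush(uint16_t port_id, struct rte_flow *r1)
  {
  	struct rte_flow_error err;

  	/* Destroying one RSS rule must not disable RSS while others exist. */
  	rte_flow_destroy(port_id, r1, &err);
  	/* Flushing removes every remaining rule; only then is RSS disabled. */
  	rte_flow_flush(port_id, &err);
  }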

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 79 ++++++++++++++++++++++++++++++++------------
 drivers/net/hns3/hns3_rss.h  |  1 +
 2 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 5b5124c..4f078e8 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1560,7 +1560,9 @@ static int
 hns3_config_rss_filter(struct rte_eth_dev *dev,
 		       const struct hns3_rss_conf *conf, bool add)
 {
+	struct hns3_process_private *process_list = dev->process_private;
 	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_hw *hw = &hns->hw;
 	struct hns3_rss_conf *rss_info;
 	uint64_t flow_types;
@@ -1591,28 +1593,27 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 
 	rss_info = &hw->rss_info;
 	if (!add) {
-		if (hns3_action_rss_same(&rss_info->conf, &rss_flow_conf)) {
-			ret = hns3_disable_rss(hw);
-			if (ret) {
-				hns3_err(hw, "RSS disable failed(%d)", ret);
-				return ret;
-			}
+		if (!conf->valid)
+			return 0;
 
-			if (rss_flow_conf.queue_num) {
-				/*
-				 * Due the content of queue pointer have been
-				 * reset to 0, the rss_info->conf.queue should
-				 * be set NULL.
-				 */
-				rss_info->conf.queue = NULL;
-				rss_info->conf.queue_num = 0;
-			}
+		ret = hns3_disable_rss(hw);
+		if (ret) {
+			hns3_err(hw, "RSS disable failed(%d)", ret);
+			return ret;
+		}
 
-			/* set RSS func invalid after flushed */
-			rss_info->conf.func = RTE_ETH_HASH_FUNCTION_MAX;
-			return 0;
+		if (rss_flow_conf.queue_num) {
+			/*
+			 * Due the content of queue pointer have been reset to
+			 * Because the content of the queue pointer has been
+			 * reset to 0, rss_info->conf.queue should be set to NULL.
+			rss_info->conf.queue = NULL;
+			rss_info->conf.queue_num = 0;
 		}
-		return -EINVAL;
+
+		/* set RSS func invalid after flushed */
+		rss_info->conf.func = RTE_ETH_HASH_FUNCTION_MAX;
+		return 0;
 	}
 
 	/* Get rx queues num */
@@ -1643,6 +1644,13 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 		goto rss_config_err;
 	}
 
+	/*
+	 * When a new RSS rule is created, the old rule will be overridden and
+	 * set invalid.
+	 */
+	TAILQ_FOREACH(rss_filter_ptr, &process_list->filter_rss_list, entries)
+		rss_filter_ptr->filter_info.valid = false;
+
 rss_config_err:
 	rte_spinlock_unlock(&hw->lock);
 
@@ -1653,10 +1661,36 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 static int
 hns3_clear_rss_filter(struct rte_eth_dev *dev)
 {
+	struct hns3_process_private *process_list = dev->process_private;
 	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_hw *hw = &hns->hw;
+	int rss_rule_succ_cnt = 0; /* count for success of clearing RSS rules */
+	int rss_rule_fail_cnt = 0; /* count for failure of clearing RSS rules */
+	int ret = 0;
+
+	rss_filter_ptr = TAILQ_FIRST(&process_list->filter_rss_list);
+	while (rss_filter_ptr) {
+		TAILQ_REMOVE(&process_list->filter_rss_list, rss_filter_ptr,
+			     entries);
+		ret = hns3_config_rss_filter(dev, &rss_filter_ptr->filter_info,
+					     false);
+		if (ret)
+			rss_rule_fail_cnt++;
+		else
+			rss_rule_succ_cnt++;
+		rte_free(rss_filter_ptr);
+		rss_filter_ptr = TAILQ_FIRST(&process_list->filter_rss_list);
+	}
 
-	return hns3_config_rss_filter(dev, &hw->rss_info, false);
+	if (rss_rule_fail_cnt) {
+		hns3_err(hw, "fail to delete all RSS filters, success num = %d "
+			     "fail num = %d", rss_rule_succ_cnt,
+			     rss_rule_fail_cnt);
+		ret = -EIO;
+	}
+
+	return ret;
 }
 
 /* Restore the rss filter */
@@ -1807,6 +1841,7 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		}
 		memcpy(&rss_filter_ptr->filter_info, rss_conf,
 			sizeof(struct hns3_rss_conf));
+		rss_filter_ptr->filter_info.valid = true;
 		TAILQ_INSERT_TAIL(&process_list->filter_rss_list,
 				  rss_filter_ptr, entries);
 
@@ -1872,7 +1907,6 @@ hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 	struct hns3_fdir_rule_ele *fdir_rule_ptr;
 	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_flow_mem *flow_node;
-	struct hns3_hw *hw = &hns->hw;
 	enum rte_filter_type filter_type;
 	struct hns3_fdir_rule fdir_rule;
 	int ret;
@@ -1902,7 +1936,8 @@ hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		break;
 	case RTE_ETH_FILTER_HASH:
 		rss_filter_ptr = (struct hns3_rss_conf_ele *)flow->rule;
-		ret = hns3_config_rss_filter(dev, &hw->rss_info, false);
+		ret = hns3_config_rss_filter(dev, &rss_filter_ptr->filter_info,
+					     false);
 		if (ret)
 			return rte_flow_error_set(error, EIO,
 						  RTE_FLOW_ERROR_TYPE_HANDLE,
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 47d3586..8fa1b30 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -47,6 +47,7 @@ struct hns3_rss_conf {
 	struct hns3_rss_tuple_cfg rss_tuple_sets;
 	uint8_t rss_indirection_tbl[HNS3_RSS_IND_TBL_SIZE]; /* Shadow table */
 	uint16_t queue[HNS3_RSS_QUEUES_BUFFER_NUM]; /* Queues indices to use */
+	bool valid; /* check if RSS rule is valid */
 };
 
 #ifndef ilog2
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 16/17] net/hns3: fix configuring device with RSS is enabled
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (14 preceding siblings ...)
  2020-09-22  8:53 ` [dpdk-dev] [PATCH 15/17] net/hns3: fix flushing RSS rule Wei Hu (Xavier)
@ 2020-09-22  8:54 ` Wei Hu (Xavier)
  2020-09-22  8:54 ` [dpdk-dev] [PATCH 17/17] net/hns3: fix storing RSS info when creating flow action Wei Hu (Xavier)
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:54 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently, when running the following commands in the CLI of the testpmd
application, the driver reports an -EINVAL error when performing step 3.
1) flow create 0 ingress pattern end actions rss key <key> func simple_xor
     types all end / end
2) flow flush 0
3) port config dcb vt off pfc off

The root cause is as below:
In step 2, when the RSS rules are flushed, the flag hw->rss_dis_flag is set
to true to indicate that RSS is disabled. Then, in step 3, when the
rte_eth_dev_configure API function is called, the internal function
hns3_dev_rss_hash_update checks that hw->rss_dis_flag is true and returns
-EINVAL.

When the user calls the rte_eth_dev_configure API function with the input
parameter dev_conf->rxmode.mq_mode having ETH_MQ_RX_RSS_FLAG to enable RSS,
the driver should set the internal flag hw->rss_dis_flag to false to indicate
that RSS is enabled, in the '.dev_configure' ops implementation functions
hns3_dev_configure and hns3vf_dev_configure.
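
A hedged sketch of the application-side configuration described above
(port_id and the queue counts are assumed; the RSS types are illustrative):

  #include <rte_ethdev.h>

  static int
  configure_port_with_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
  {
  	struct rte_eth_conf dev_conf = {
  		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },	/* carries ETH_MQ_RX_RSS_FLAG */
  		.rx_adv_conf.rss_conf = {
  			.rss_key = NULL,	/* let the PMD keep its default key */
  			.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
  		},
  	};

  	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &dev_conf);
  }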

Fixes: 5e782bc2570c ("net/hns3: fix configuring RSS hash when rules are flushed")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    | 1 +
 drivers/net/hns3/hns3_ethdev_vf.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 9c9f264..90ddcaf 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2326,6 +2326,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
+		hw->rss_dis_flag = false;
 		if (rss_conf.rss_key == NULL) {
 			rss_conf.rss_key = rss_cfg->key;
 			rss_conf.rss_key_len = HNS3_RSS_KEY_SIZE;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index cb2747b..4c73441 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -783,6 +783,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	/* When RSS is not configured, redirect the packet queue 0 */
 	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		if (rss_conf.rss_key == NULL) {
 			rss_conf.rss_key = rss_cfg->key;
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH 17/17] net/hns3: fix storing RSS info when creating flow action
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (15 preceding siblings ...)
  2020-09-22  8:54 ` [dpdk-dev] [PATCH 16/17] net/hns3: fix configuring device with RSS is enabled Wei Hu (Xavier)
@ 2020-09-22  8:54 ` Wei Hu (Xavier)
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22  8:54 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently, when calling the rte_flow_query API function to query the RSS
information, the queue-related information is not as expected.

The root cause is that when the application calls the rte_flow_create API
function to create an RSS action, the operation of storing the data whose
type is struct rte_flow_action_rss is incorrect in the '.create' ops
implementation function hns3_flow_create.

This patch fixes it by replacing memcpy with the hns3_rss_conf_copy function
to store the RSS information in hns3_flow_create.
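
A rough illustration (not the actual hns3_rss_conf_copy) of why a plain
memcpy is not enough: struct rte_flow_action_rss carries pointers, so the
key and queue arrays must be copied into driver-owned storage and the
pointers redirected there. The array sizes mirror HNS3_RSS_KEY_SIZE (40) and
HNS3_RSS_QUEUES_BUFFER_NUM (64); length checks are omitted for brevity:

  #include <string.h>
  #include <stdint.h>
  #include <rte_flow.h>

  struct rss_copy {
  	struct rte_flow_action_rss conf;
  	uint8_t key[40];
  	uint16_t queue[64];
  };

  static void
  rss_action_deep_copy(struct rss_copy *dst,
  		     const struct rte_flow_action_rss *src)
  {
  	dst->conf = *src;			/* scalars and (stale) pointers */
  	if (src->key != NULL && src->key_len > 0)
  		memcpy(dst->key, src->key, src->key_len);
  	if (src->queue != NULL && src->queue_num > 0)
  		memcpy(dst->queue, src->queue,
  		       src->queue_num * sizeof(src->queue[0]));
  	dst->conf.key = dst->key;		/* repoint into local storage */
  	dst->conf.queue = dst->queue;
  }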

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 4f078e8..3e48963 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1839,8 +1839,8 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			ret = -ENOMEM;
 			goto err;
 		}
-		memcpy(&rss_filter_ptr->filter_info, rss_conf,
-			sizeof(struct hns3_rss_conf));
+		hns3_rss_conf_copy(&rss_filter_ptr->filter_info,
+				   &rss_conf->conf);
 		rss_filter_ptr->filter_info.valid = true;
 		TAILQ_INSERT_TAIL(&process_list->filter_rss_list,
 				  rss_filter_ptr, entries);
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver
  2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                   ` (16 preceding siblings ...)
  2020-09-22  8:54 ` [dpdk-dev] [PATCH 17/17] net/hns3: fix storing RSS info when creating flow action Wei Hu (Xavier)
@ 2020-09-22 12:03 ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
                     ` (17 more replies)
  17 siblings, 18 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

This series are updates for hns3 PMD driver.

Chengchang Tang (2):
  net/hns3: fix default VLAN won't be deleted when set PF PVID
  net/hns3: add default branch to switch in Rx VLAN processing

Chengwen Feng (1):
  net/hns3: support flow action of queue region

Hongbo Zheng (3):
  net/hns3: add max number of segs compatibility
  net/hns3: avoid accessing nonexistent VF reg when PF in FLR
  net/hns3: add break to exit loop when err stat item found

Lijun Ou (5):
  net/hns3: support querying RSS flow rule
  net/hns3: check input RSS type when creating flow with RSS
  net/hns3: set RSS hash type input configuration
  net/hns3: fix config when creating RSS rule after flush
  net/hns3: fix flushing RSS rule

Wei Hu (Xavier) (6):
  net/hns3: add VLAN configuration compatibility
  net/hns3: add TSO pseudo header calculation compatibility
  net/hns3: add default branch to switch when parsing fd tuple
  net/hns3: fix flow RSS queue num with 0
  net/hns3: fix configuring device with RSS is enabled
  net/hns3: fix storing RSS info when creating flow action

 drivers/net/hns3/hns3_cmd.c       |   5 +-
 drivers/net/hns3/hns3_cmd.h       |  14 +-
 drivers/net/hns3/hns3_ethdev.c    | 143 ++++++++++---------
 drivers/net/hns3/hns3_ethdev.h    |  96 +++++++++++--
 drivers/net/hns3/hns3_ethdev_vf.c |  20 ++-
 drivers/net/hns3/hns3_fdir.c      |  43 ++++--
 drivers/net/hns3/hns3_fdir.h      |  23 +++-
 drivers/net/hns3/hns3_flow.c      | 282 +++++++++++++++++++++++++++++---------
 drivers/net/hns3/hns3_mbx.c       |   2 +-
 drivers/net/hns3/hns3_rss.c       | 244 +++++++++++++++++++++++++--------
 drivers/net/hns3/hns3_rss.h       |  21 +--
 drivers/net/hns3/hns3_rxtx.c      | 151 +++++++++++++-------
 drivers/net/hns3/hns3_rxtx.h      |  53 +++++--
 drivers/net/hns3/hns3_stats.c     |   1 +
 14 files changed, 802 insertions(+), 296 deletions(-)

-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 01/17] net/hns3: add VLAN configuration compatibility
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID Wei Hu (Xavier)
                     ` (16 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Because of hardware limitations in the old version of the hns3 network
engine, there are some restrictions:
a) The HNS3 PMD driver needs to select a different VLAN processing mode
   depending on whether PVID is set, which means the driver must sense the
   PVID state.
b) In the packet transmitting process, only two layers of VLAN tags are
   supported. If the total number of VLAN tags in the mbuf plus the VLAN
   offload by hardware (VLAN insert by descriptor) exceeds two, the VLAN in
   the mbuf will be overwritten by the VLAN in the descriptor.
c) If a port based VLAN is set, only one VLAN header is allowed in the mbuf
   or the packet will be discarded by hardware.

In order to remove these restrictions, two changes are implemented on the
new versions of the network engine:
1) a new VLAN tag insertion mode, named tag shift mode;
2) a new VLAN strip control bit, named strip hide enable.

Tag shift mode means that the VLAN tag is shifted automatically when the
insertion place already holds a tag. For the PMD driver, the VLAN tag1 and
tag2 configurations on the Tx side no longer need to be considered because
the hardware completes them. However, the related configuration is still
retained to be compatible with the old version of the network engine.

VLAN strip hide means that the hardware strips the VLAN tag and hides the
VLAN in the descriptor (the VLAN ID is exposed as zero and the related
STRIP_TAGP field is off).

These changes mean that the hns3 PMD driver no longer needs to be aware of
the PVID status and gains the ability to send multi-layer (more than two)
VLAN packets. Therefore, the hns3 PMD driver introduces the concept of a
VLAN mode and adds a new VLAN mode named HNS3_PVID_MODE to indicate that
the PVID-related I/O processing can be implemented by the hardware. The VF
driver does not need to be modified, because the related mailbox messages
will not be sent by the PF kernel mode netdev driver on the new network
engine and all the related hardware configuration is on the PF side.
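
For illustration only, a minimal sketch of the mode selection described
above; it reuses struct hns3_hw and the mode macros added by this patch,
and the PCI_REVISION_ID_HIP08 symbol/value is an assumption made purely
for this sketch, not a statement about the real driver code:

/* Sketch: pick the VLAN mode from the hardware revision. */
#define PCI_REVISION_ID_HIP08	0x21	/* assumed value for illustration */

static void
hns3_select_vlan_mode_sketch(struct hns3_hw *hw, uint8_t pci_revision)
{
	if (pci_revision == PCI_REVISION_ID_HIP08)
		/* old engine: PMD senses PVID and shifts/discards in SW */
		hw->vlan_mode = HNS3_SW_SHIFT_AND_DISCARD_MODE;
	else
		/* new engine: tag shift and strip hide are done by HW */
		hw->vlan_mode = HNS3_HW_SHIFT_AND_DISCARD_MODE;
}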

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_cmd.h    |  3 +++
 drivers/net/hns3/hns3_ethdev.c | 33 ++++++++++++++++++++----
 drivers/net/hns3/hns3_ethdev.h | 58 +++++++++++++++++++++++++++++++++++-------
 drivers/net/hns3/hns3_mbx.c    |  2 +-
 drivers/net/hns3/hns3_rxtx.c   | 48 ++++++++++++++++++++++++++--------
 drivers/net/hns3/hns3_rxtx.h   | 39 ++++++++++++++++++----------
 6 files changed, 144 insertions(+), 39 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 87d6053..629b114 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -449,11 +449,14 @@ struct hns3_umv_spc_alc_cmd {
 #define HNS3_CFG_NIC_ROCE_SEL_B		4
 #define HNS3_ACCEPT_TAG2_B		5
 #define HNS3_ACCEPT_UNTAG2_B		6
+#define HNS3_TAG_SHIFT_MODE_EN_B	7
 
 #define HNS3_REM_TAG1_EN_B		0
 #define HNS3_REM_TAG2_EN_B		1
 #define HNS3_SHOW_TAG1_EN_B		2
 #define HNS3_SHOW_TAG2_EN_B		3
+#define HNS3_DISCARD_TAG1_EN_B		5
+#define HNS3_DISCARD_TAG2_EN_B		6
 
 /* Factor used to calculate offset and bitmap of VF num */
 #define HNS3_VF_NUM_PER_CMD             64
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 73d5042..3e98df3 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -477,6 +477,11 @@ hns3_set_vlan_rx_offload_cfg(struct hns3_adapter *hns,
 	hns3_set_bit(req->vport_vlan_cfg, HNS3_SHOW_TAG2_EN_B,
 		     vcfg->vlan2_vlan_prionly ? 1 : 0);
 
+	/* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG1_EN_B,
+		     vcfg->strip_tag1_discard_en ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG2_EN_B,
+		     vcfg->strip_tag2_discard_en ? 1 : 0);
 	/*
 	 * In current version VF is not supported when PF is driven by DPDK
 	 * driver, just need to configure parameters for PF vport.
@@ -518,11 +523,14 @@ hns3_en_hw_strip_rxvtag(struct hns3_adapter *hns, bool enable)
 	if (hw->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_DISABLE) {
 		rxvlan_cfg.strip_tag1_en = false;
 		rxvlan_cfg.strip_tag2_en = enable;
+		rxvlan_cfg.strip_tag2_discard_en = false;
 	} else {
 		rxvlan_cfg.strip_tag1_en = enable;
 		rxvlan_cfg.strip_tag2_en = true;
+		rxvlan_cfg.strip_tag2_discard_en = true;
 	}
 
+	rxvlan_cfg.strip_tag1_discard_en = false;
 	rxvlan_cfg.vlan1_vlan_prionly = false;
 	rxvlan_cfg.vlan2_vlan_prionly = false;
 	rxvlan_cfg.rx_vlan_offload_en = enable;
@@ -678,6 +686,10 @@ hns3_set_vlan_tx_offload_cfg(struct hns3_adapter *hns,
 		     vcfg->insert_tag2_en ? 1 : 0);
 	hns3_set_bit(req->vport_vlan_cfg, HNS3_CFG_NIC_ROCE_SEL_B, 0);
 
+	/* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_TAG_SHIFT_MODE_EN_B,
+		     vcfg->tag_shift_mode_en ? 1 : 0);
+
 	/*
 	 * In current version VF is not supported when PF is driven by DPDK
 	 * driver, just need to configure parameters for PF vport.
@@ -707,7 +719,8 @@ hns3_vlan_txvlan_cfg(struct hns3_adapter *hns, uint16_t port_base_vlan_state,
 		txvlan_cfg.insert_tag1_en = false;
 		txvlan_cfg.default_tag1 = 0;
 	} else {
-		txvlan_cfg.accept_tag1 = false;
+		txvlan_cfg.accept_tag1 =
+			hw->vlan_mode == HNS3_HW_SHIFT_AND_DISCARD_MODE;
 		txvlan_cfg.insert_tag1_en = true;
 		txvlan_cfg.default_tag1 = pvid;
 	}
@@ -717,6 +730,7 @@ hns3_vlan_txvlan_cfg(struct hns3_adapter *hns, uint16_t port_base_vlan_state,
 	txvlan_cfg.accept_untag2 = true;
 	txvlan_cfg.insert_tag2_en = false;
 	txvlan_cfg.default_tag2 = 0;
+	txvlan_cfg.tag_shift_mode_en = true;
 
 	ret = hns3_set_vlan_tx_offload_cfg(hns, &txvlan_cfg);
 	if (ret) {
@@ -841,14 +855,17 @@ hns3_en_pvid_strip(struct hns3_adapter *hns, int on)
 	bool rx_strip_en;
 	int ret;
 
-	rx_strip_en = old_cfg->rx_vlan_offload_en ? true : false;
+	rx_strip_en = old_cfg->rx_vlan_offload_en;
 	if (on) {
 		rx_vlan_cfg.strip_tag1_en = rx_strip_en;
 		rx_vlan_cfg.strip_tag2_en = true;
+		rx_vlan_cfg.strip_tag2_discard_en = true;
 	} else {
 		rx_vlan_cfg.strip_tag1_en = false;
 		rx_vlan_cfg.strip_tag2_en = rx_strip_en;
+		rx_vlan_cfg.strip_tag2_discard_en = false;
 	}
+	rx_vlan_cfg.strip_tag1_discard_en = false;
 	rx_vlan_cfg.vlan1_vlan_prionly = false;
 	rx_vlan_cfg.vlan2_vlan_prionly = false;
 	rx_vlan_cfg.rx_vlan_offload_en = old_cfg->rx_vlan_offload_en;
@@ -940,9 +957,13 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
 	rte_spinlock_unlock(&hw->lock);
 	if (ret)
 		return ret;
-
-	if (pvid_en_state_change)
-		hns3_update_all_queues_pvid_state(hw);
+	/*
+	 * Only in HNS3_SW_SHIFT_AND_MODE the PVID related operation in Tx/Rx
+	 * need be processed by PMD driver.
+	 */
+	if (pvid_en_state_change &&
+	    hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
+		hns3_update_all_queues_pvid_proc_en(hw);
 
 	return 0;
 }
@@ -2904,6 +2925,7 @@ hns3_get_capability(struct hns3_hw *hw)
 		hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_RSV_ONE;
 		hw->intr.coalesce_mode = HNS3_INTR_COALESCE_NON_QL;
 		hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_2US;
+		hw->vlan_mode = HNS3_SW_SHIFT_AND_DISCARD_MODE;
 		hw->min_tx_pkt_len = HNS3_HIP08_MIN_TX_PKT_LEN;
 		return 0;
 	}
@@ -2919,6 +2941,7 @@ hns3_get_capability(struct hns3_hw *hw)
 	hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_ALL;
 	hw->intr.coalesce_mode = HNS3_INTR_COALESCE_QL;
 	hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_1US;
+	hw->vlan_mode = HNS3_HW_SHIFT_AND_DISCARD_MODE;
 	hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN;
 
 	return 0;
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index fd6a9f9..a70223f 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -37,6 +37,9 @@
 #define HNS3_PF_FUNC_ID			0
 #define HNS3_1ST_VF_FUNC_ID		1
 
+#define HNS3_SW_SHIFT_AND_DISCARD_MODE		0
+#define HNS3_HW_SHIFT_AND_DISCARD_MODE		1
+
 #define HNS3_UC_MACADDR_NUM		128
 #define HNS3_VF_UC_MACADDR_NUM		48
 #define HNS3_MC_MACADDR_NUM		128
@@ -474,6 +477,28 @@ struct hns3_hw {
 
 	struct hns3_queue_intr intr;
 
+	/*
+	 * vlan mode.
+	 * value range:
+	 *      HNS3_SW_SHIFT_AND_DISCARD_MODE/HNS3_HW_SHIFT_AND_DISCARD_MODE
+	 *
+	 *  - HNS3_SW_SHIFT_AND_DISCARD_MODE
+	 *     For some versions of hardware network engine, because of the
+	 *     hardware limitation, PMD driver needs to detect the PVID status
+	 *     to work with hardware to implement PVID-related functions.
+	 *     For example, driver need discard the stripped PVID tag to ensure
+	 *     the PVID will not report to mbuf and shift the inserted VLAN tag
+	 *     to avoid port based VLAN covering it.
+	 *
+	 *  - HNS3_HW_SHIFT_AND_DISCARD_MODE
+	 *     PMD driver does not need to process PVID-related functions in
+	 *     I/O process, Hardware will adjust the sequence between port based
+	 *     VLAN tag and BD VLAN tag automatically and VLAN tag stripped by
+	 *     PVID will be invisible to driver. And in this mode, hns3 is able
+	 *     to send a multi-layer VLAN packets when hw VLAN insert offload
+	 *     is enabled.
+	 */
+	uint8_t vlan_mode;
 	uint8_t max_non_tso_bd_num; /* max BD number of one non-TSO packet */
 
 	struct hns3_port_base_vlan_config port_base_vlan_cfg;
@@ -532,11 +557,21 @@ struct hns3_user_vlan_table {
 
 /* Vlan tag configuration for RX direction */
 struct hns3_rx_vtag_cfg {
-	uint8_t rx_vlan_offload_en; /* Whether enable rx vlan offload */
-	uint8_t strip_tag1_en;      /* Whether strip inner vlan tag */
-	uint8_t strip_tag2_en;      /* Whether strip outer vlan tag */
-	uint8_t vlan1_vlan_prionly; /* Inner VLAN Tag up to descriptor Enable */
-	uint8_t vlan2_vlan_prionly; /* Outer VLAN Tag up to descriptor Enable */
+	bool rx_vlan_offload_en;    /* Whether enable rx vlan offload */
+	bool strip_tag1_en;         /* Whether strip inner vlan tag */
+	bool strip_tag2_en;         /* Whether strip outer vlan tag */
+	/*
+	 * If strip_tag_en is enabled, this bit decide whether to map the vlan
+	 * tag to descriptor.
+	 */
+	bool strip_tag1_discard_en;
+	bool strip_tag2_discard_en;
+	/*
+	 * If this bit is enabled, only map inner/outer priority to descriptor
+	 * and the vlan tag is always 0.
+	 */
+	bool vlan1_vlan_prionly;
+	bool vlan2_vlan_prionly;
 };
 
 /* Vlan tag configuration for TX direction */
@@ -545,10 +580,15 @@ struct hns3_tx_vtag_cfg {
 	bool accept_untag1;         /* Whether accept untag1 packet from host */
 	bool accept_tag2;
 	bool accept_untag2;
-	bool insert_tag1_en;        /* Whether insert inner vlan tag */
-	bool insert_tag2_en;        /* Whether insert outer vlan tag */
-	uint16_t default_tag1;      /* The default inner vlan tag to insert */
-	uint16_t default_tag2;      /* The default outer vlan tag to insert */
+	bool insert_tag1_en;        /* Whether insert outer vlan tag */
+	bool insert_tag2_en;        /* Whether insert inner vlan tag */
+	/*
+	 * In shift mode, hw will shift the sequence of port based VLAN and
+	 * BD VLAN.
+	 */
+	bool tag_shift_mode_en;     /* hw shift vlan tag automatically */
+	uint16_t default_tag1;      /* The default outer vlan tag to insert */
+	uint16_t default_tag2;      /* The default inner vlan tag to insert */
 };
 
 struct hns3_vtag_cfg {
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 2510582..305007a 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -347,7 +347,7 @@ hns3_update_port_base_vlan_info(struct hns3_hw *hw,
 	 */
 	if (hw->port_base_vlan_cfg.state != new_pvid_state) {
 		hw->port_base_vlan_cfg.state = new_pvid_state;
-		hns3_update_all_queues_pvid_state(hw);
+		hns3_update_all_queues_pvid_proc_en(hw);
 	}
 }
 
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 816c39c..cf55d94 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -339,26 +339,26 @@ hns3_init_tx_queue_hw(struct hns3_tx_queue *txq)
 }
 
 void
-hns3_update_all_queues_pvid_state(struct hns3_hw *hw)
+hns3_update_all_queues_pvid_proc_en(struct hns3_hw *hw)
 {
 	uint16_t nb_rx_q = hw->data->nb_rx_queues;
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	struct hns3_rx_queue *rxq;
 	struct hns3_tx_queue *txq;
-	int pvid_state;
+	bool pvid_en;
 	int i;
 
-	pvid_state = hw->port_base_vlan_cfg.state;
+	pvid_en = hw->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_ENABLE;
 	for (i = 0; i < hw->cfg_max_queues; i++) {
 		if (i < nb_rx_q) {
 			rxq = hw->data->rx_queues[i];
 			if (rxq != NULL)
-				rxq->pvid_state = pvid_state;
+				rxq->pvid_sw_discard_en = pvid_en;
 		}
 		if (i < nb_tx_q) {
 			txq = hw->data->tx_queues[i];
 			if (txq != NULL)
-				txq->pvid_state = pvid_state;
+				txq->pvid_sw_shift_en = pvid_en;
 		}
 	}
 }
@@ -1405,7 +1405,20 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	rxq->pkt_first_seg = NULL;
 	rxq->pkt_last_seg = NULL;
 	rxq->port_id = dev->data->port_id;
-	rxq->pvid_state = hw->port_base_vlan_cfg.state;
+	/*
+	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
+	 * the pvid_sw_discard_en in the queue struct should not be changed,
+	 * because PVID-related operations do not need to be processed by PMD
+	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * on the configuration of PF kernel mode netdevice driver. And the
+	 * related PF configuration is delivered through the mailbox and finally
+	 * reflected in port_base_vlan_cfg.
+	 */
+	if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
+		rxq->pvid_sw_discard_en = hw->port_base_vlan_cfg.state ==
+				       HNS3_PORT_BASE_VLAN_ENABLE;
+	else
+		rxq->pvid_sw_discard_en = false;
 	rxq->configured = true;
 	rxq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
@@ -1592,7 +1605,7 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 	};
 	strip_status = hns3_get_field(l234_info, HNS3_RXD_STRP_TAGP_M,
 				      HNS3_RXD_STRP_TAGP_S);
-	report_mode = report_type[rxq->pvid_state][strip_status];
+	report_mode = report_type[rxq->pvid_sw_discard_en][strip_status];
 	switch (report_mode) {
 	case HNS3_NO_STRP_VLAN_VLD:
 		mb->vlan_tci = 0;
@@ -2151,7 +2164,20 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	}
 
 	txq->port_id = dev->data->port_id;
-	txq->pvid_state = hw->port_base_vlan_cfg.state;
+	/*
+	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
+	 * the pvid_sw_shift_en in the queue struct should not be changed,
+	 * because PVID-related operations do not need to be processed by PMD
+	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * on the configuration of PF kernel mode netdev driver. And the
+	 * related PF configuration is delivered through the mailbox and finally
+	 * reflected in port_base_vlan_cfg.
+	 */
+	if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
+		txq->pvid_sw_shift_en = hw->port_base_vlan_cfg.state ==
+					HNS3_PORT_BASE_VLAN_ENABLE;
+	else
+		txq->pvid_sw_shift_en = false;
 	txq->configured = true;
 	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
@@ -2352,7 +2378,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_state && ol_flags & (PKT_TX_VLAN_PKT |
+	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN_PKT |
 				PKT_TX_QINQ_PKT)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
@@ -2365,7 +2391,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	}
 
 	if (ol_flags & PKT_TX_QINQ_PKT ||
-	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_state)) {
+	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
@@ -2791,7 +2817,7 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	struct rte_ether_hdr *eh;
 	struct rte_vlan_hdr *vh;
 
-	if (!txq->pvid_state)
+	if (!txq->pvid_sw_shift_en)
 		return 0;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 27041ab..476cfc2 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -273,7 +273,6 @@ struct hns3_rx_queue {
 	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
 	const struct rte_memzone *mz;
 	struct hns3_entry *sw_ring;
-
 	struct rte_mbuf *pkt_first_seg;
 	struct rte_mbuf *pkt_last_seg;
 
@@ -290,17 +289,24 @@ struct hns3_rx_queue {
 	uint16_t rx_free_hold;   /* num of BDs waited to passed to hardware */
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
-	/*
-	 * port based vlan configuration state.
-	 * value range: HNS3_PORT_BASE_VLAN_DISABLE / HNS3_PORT_BASE_VLAN_ENABLE
-	 */
-	uint16_t pvid_state;
 
 	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	bool rx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if rx queue has been configured */
+	/*
+	 * Indicate whether ignore the outer VLAN field in the Rx BD reported
+	 * by the Hardware. Because the outer VLAN is the PVID if the PVID is
+	 * set for some version of hardware network engine whose vlan mode is
+	 * HNS3_SW_SHIFT_AND_DISCARD_MODE, such as kunpeng 920. And this VLAN
+	 * should not be transmitted to the upper-layer application. For hardware
+	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
+	 * such as kunpeng 930, PVID will not be reported to the BDs. So, PMD
+	 * driver does not need to perform PVID-related operation in Rx. At this
+	 * point, the pvid_sw_discard_en will be false.
+	 */
+	bool pvid_sw_discard_en;
 
 	uint64_t l2_errors;
 	uint64_t pkt_len_errors;
@@ -361,12 +367,6 @@ struct hns3_tx_queue {
 	struct rte_mbuf **free;
 
 	/*
-	 * port based vlan configuration state.
-	 * value range: HNS3_PORT_BASE_VLAN_DISABLE / HNS3_PORT_BASE_VLAN_ENABLE
-	 */
-	uint16_t pvid_state;
-
-	/*
 	 * The minimum length of the packet supported by hardware in the Tx
 	 * direction.
 	 */
@@ -374,6 +374,19 @@ struct hns3_tx_queue {
 
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if tx queue has been configured */
+	/*
+	 * Indicate whether add the vlan_tci of the mbuf to the inner VLAN field
+	 * of Tx BD. Because the outer VLAN will always be the PVID when the
+	 * PVID is set and for some version of hardware network engine whose
+	 * vlan mode is HNS3_SW_SHIFT_AND_DISCARD_MODE, such as kunpeng 920, the
+	 * PVID will overwrite the outer VLAN field of Tx BD. For the hardware
+	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
+	 * such as kunpeng 930, if the PVID is set, the hardware will shift the
+	 * VLAN field automatically. So, PMD driver does not need to do
+	 * PVID-related operations in Tx. And pvid_sw_shift_en will be false at
+	 * this point.
+	 */
+	bool pvid_sw_shift_en;
 
 	/*
 	 * The following items are used for the abnormal errors statistics in
@@ -620,7 +633,7 @@ int hns3_set_fake_rx_or_tx_queues(struct rte_eth_dev *dev, uint16_t nb_rx_q,
 				  uint16_t nb_tx_q);
 int hns3_config_gro(struct hns3_hw *hw, bool en);
 int hns3_restore_gro_conf(struct hns3_hw *hw);
-void hns3_update_all_queues_pvid_state(struct hns3_hw *hw);
+void hns3_update_all_queues_pvid_proc_en(struct hns3_hw *hw);
 void hns3_rx_scattered_reset(struct rte_eth_dev *dev);
 void hns3_rx_scattered_calc(struct rte_eth_dev *dev);
 int hns3_rx_check_vec_support(struct rte_eth_dev *dev);
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 03/17] net/hns3: add default branch to switch in Rx VLAN processing Wei Hu (Xavier)
                     ` (15 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Chengchang Tang <tangchengchang@huawei.com>

Currently, the default VLAN (VLAN id 0) is never deleted from the hardware
VLAN table of a hns3 PF device. As a result, even if a non-zero PVID is set
by calling rte_eth_dev_set_vlan_pvid on a hns3 PF device, packets with
VLAN 0 and packets without a VLAN are still received by the PF driver in
the Rx direction.

This patch removes the restriction that VLAN 0 cannot be deleted in the
PVID configuration, to ensure that packets without the PVID are filtered
when a PVID is set. The patch also adds VLAN 0 to the software list when
initializing the VLAN configuration, to ensure that VLAN 0 is deleted from
the hardware VLAN table when the device is closed.
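
As a usage sketch only (the port id and PVID value are assumptions for
illustration), an application enables the PF PVID through the generic
ethdev API; with this fix, untagged packets and packets tagged with VLAN 0
are then filtered out in the Rx direction:

#include <stdio.h>
#include <rte_ethdev.h>

/* Usage sketch: enable PVID 100 on port 0. */
static int
set_pf_pvid_example(void)
{
	uint16_t port_id = 0;
	int ret;

	ret = rte_eth_dev_set_vlan_pvid(port_id, 100, 1 /* on */);
	if (ret != 0)
		printf("set PVID failed: %d\n", ret);
	return ret;
}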

Fixes: 411d23b9eafb ("net/hns3: support VLAN")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
v1 -> v2:
	 fix TYPO_SPELLING with INVALID.
---
 drivers/net/hns3/hns3_ethdev.c | 105 +++++++++++++++++++----------------------
 1 file changed, 48 insertions(+), 57 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 3e98df3..abc1742 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -35,7 +35,7 @@
 #define HNS3_DEFAULT_PORT_CONF_QUEUES_NUM	1
 
 #define HNS3_SERVICE_INTERVAL		1000000 /* us */
-#define HNS3_INVLID_PVID		0xFFFF
+#define HNS3_INVALID_PVID		0xFFFF
 
 #define HNS3_FILTER_TYPE_VF		0
 #define HNS3_FILTER_TYPE_PORT		1
@@ -346,8 +346,9 @@ hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
 	int ret = 0;
 
 	/*
-	 * When vlan filter is enabled, hardware regards vlan id 0 as the entry
-	 * for normal packet, deleting vlan id 0 is not allowed.
+	 * When vlan filter is enabled, hardware regards packets without vlan
+	 * as packets with vlan 0. So, to receive packets without vlan, vlan id
+	 * 0 is not allowed to be removed by rte_eth_dev_vlan_filter.
 	 */
 	if (on == 0 && vlan_id == 0)
 		return 0;
@@ -364,7 +365,7 @@ hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
 		writen_to_tbl = true;
 	}
 
-	if (ret == 0 && vlan_id) {
+	if (ret == 0) {
 		if (on)
 			hns3_add_dev_vlan_table(hns, vlan_id, writen_to_tbl);
 		else
@@ -743,16 +744,6 @@ hns3_vlan_txvlan_cfg(struct hns3_adapter *hns, uint16_t port_base_vlan_state,
 	return ret;
 }
 
-static void
-hns3_store_port_base_vlan_info(struct hns3_adapter *hns, uint16_t pvid, int on)
-{
-	struct hns3_hw *hw = &hns->hw;
-
-	hw->port_base_vlan_cfg.state = on ?
-	    HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE;
-
-	hw->port_base_vlan_cfg.pvid = pvid;
-}
 
 static void
 hns3_rm_all_vlan_table(struct hns3_adapter *hns, bool is_del_list)
@@ -761,10 +752,10 @@ hns3_rm_all_vlan_table(struct hns3_adapter *hns, bool is_del_list)
 	struct hns3_pf *pf = &hns->pf;
 
 	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
-		if (vlan_entry->hd_tbl_status)
+		if (vlan_entry->hd_tbl_status) {
 			hns3_set_port_vlan_filter(hns, vlan_entry->vlan_id, 0);
-
-		vlan_entry->hd_tbl_status = false;
+			vlan_entry->hd_tbl_status = false;
+		}
 	}
 
 	if (is_del_list) {
@@ -784,10 +775,10 @@ hns3_add_all_vlan_table(struct hns3_adapter *hns)
 	struct hns3_pf *pf = &hns->pf;
 
 	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
-		if (!vlan_entry->hd_tbl_status)
+		if (!vlan_entry->hd_tbl_status) {
 			hns3_set_port_vlan_filter(hns, vlan_entry->vlan_id, 1);
-
-		vlan_entry->hd_tbl_status = true;
+			vlan_entry->hd_tbl_status = true;
+		}
 	}
 }
 
@@ -798,7 +789,7 @@ hns3_remove_all_vlan_table(struct hns3_adapter *hns)
 	int ret;
 
 	hns3_rm_all_vlan_table(hns, true);
-	if (hw->port_base_vlan_cfg.pvid != HNS3_INVLID_PVID) {
+	if (hw->port_base_vlan_cfg.pvid != HNS3_INVALID_PVID) {
 		ret = hns3_set_port_vlan_filter(hns,
 						hw->port_base_vlan_cfg.pvid, 0);
 		if (ret) {
@@ -811,40 +802,41 @@ hns3_remove_all_vlan_table(struct hns3_adapter *hns)
 
 static int
 hns3_update_vlan_filter_entries(struct hns3_adapter *hns,
-				uint16_t port_base_vlan_state,
-				uint16_t new_pvid, uint16_t old_pvid)
+			uint16_t port_base_vlan_state, uint16_t new_pvid)
 {
 	struct hns3_hw *hw = &hns->hw;
-	int ret = 0;
+	uint16_t old_pvid;
+	int ret;
 
 	if (port_base_vlan_state == HNS3_PORT_BASE_VLAN_ENABLE) {
-		if (old_pvid != HNS3_INVLID_PVID && old_pvid != 0) {
+		old_pvid = hw->port_base_vlan_cfg.pvid;
+		if (old_pvid != HNS3_INVALID_PVID) {
 			ret = hns3_set_port_vlan_filter(hns, old_pvid, 0);
 			if (ret) {
-				hns3_err(hw,
-					 "Failed to clear clear old pvid filter, ret =%d",
-					 ret);
+				hns3_err(hw, "failed to remove old pvid %u, "
+						"ret = %d", old_pvid, ret);
 				return ret;
 			}
 		}
 
 		hns3_rm_all_vlan_table(hns, false);
-		return hns3_set_port_vlan_filter(hns, new_pvid, 1);
-	}
-
-	if (new_pvid != 0) {
+		ret = hns3_set_port_vlan_filter(hns, new_pvid, 1);
+		if (ret) {
+			hns3_err(hw, "failed to add new pvid %u, ret = %d",
+					new_pvid, ret);
+			return ret;
+		}
+	} else {
 		ret = hns3_set_port_vlan_filter(hns, new_pvid, 0);
 		if (ret) {
-			hns3_err(hw, "Failed to set port vlan filter, ret =%d",
-				 ret);
+			hns3_err(hw, "failed to remove pvid %u, ret = %d",
+					new_pvid, ret);
 			return ret;
 		}
-	}
 
-	if (new_pvid == hw->port_base_vlan_cfg.pvid)
 		hns3_add_all_vlan_table(hns);
-
-	return ret;
+	}
+	return 0;
 }
 
 static int
@@ -883,11 +875,10 @@ hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid, int on)
 {
 	struct hns3_hw *hw = &hns->hw;
 	uint16_t port_base_vlan_state;
-	uint16_t old_pvid;
 	int ret;
 
 	if (on == 0 && pvid != hw->port_base_vlan_cfg.pvid) {
-		if (hw->port_base_vlan_cfg.pvid != HNS3_INVLID_PVID)
+		if (hw->port_base_vlan_cfg.pvid != HNS3_INVALID_PVID)
 			hns3_warn(hw, "Invalid operation! As current pvid set "
 				  "is %u, disable pvid %u is invalid",
 				  hw->port_base_vlan_cfg.pvid, pvid);
@@ -910,19 +901,18 @@ hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid, int on)
 		return ret;
 	}
 
-	if (pvid == HNS3_INVLID_PVID)
+	if (pvid == HNS3_INVALID_PVID)
 		goto out;
-	old_pvid = hw->port_base_vlan_cfg.pvid;
-	ret = hns3_update_vlan_filter_entries(hns, port_base_vlan_state, pvid,
-					      old_pvid);
+	ret = hns3_update_vlan_filter_entries(hns, port_base_vlan_state, pvid);
 	if (ret) {
-		hns3_err(hw, "Failed to update vlan filter entries, ret =%d",
+		hns3_err(hw, "failed to update vlan filter entries, ret = %d",
 			 ret);
 		return ret;
 	}
 
 out:
-	hns3_store_port_base_vlan_info(hns, pvid, on);
+	hw->port_base_vlan_cfg.state = port_base_vlan_state;
+	hw->port_base_vlan_cfg.pvid = on ? pvid : HNS3_INVALID_PVID;
 	return ret;
 }
 
@@ -968,20 +958,19 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
 	return 0;
 }
 
-static void
-init_port_base_vlan_info(struct hns3_hw *hw)
-{
-	hw->port_base_vlan_cfg.state = HNS3_PORT_BASE_VLAN_DISABLE;
-	hw->port_base_vlan_cfg.pvid = HNS3_INVLID_PVID;
-}
-
 static int
 hns3_default_vlan_config(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	ret = hns3_set_port_vlan_filter(hns, 0, 1);
+	/*
+	 * When vlan filter is enabled, hardware regards packets without vlan
+	 * as packets with vlan 0. Therefore, if vlan 0 is not in the vlan
+	 * table, packets without vlan won't be received. So, add vlan 0 as
+	 * the default vlan.
+	 */
+	ret = hns3_vlan_filter_configure(hns, 0, 1);
 	if (ret)
 		hns3_err(hw, "default vlan 0 config failed, ret =%d", ret);
 	return ret;
@@ -1000,8 +989,10 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 	 * ensure that the hardware configuration remains unchanged before and
 	 * after reset.
 	 */
-	if (rte_atomic16_read(&hw->reset.resetting) == 0)
-		init_port_base_vlan_info(hw);
+	if (rte_atomic16_read(&hw->reset.resetting) == 0) {
+		hw->port_base_vlan_cfg.state = HNS3_PORT_BASE_VLAN_DISABLE;
+		hw->port_base_vlan_cfg.pvid = HNS3_INVALID_PVID;
+	}
 
 	ret = hns3_vlan_filter_init(hns);
 	if (ret) {
@@ -1023,7 +1014,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 	 * and hns3_restore_vlan_conf later.
 	 */
 	if (rte_atomic16_read(&hw->reset.resetting) == 0) {
-		ret = hns3_vlan_pvid_configure(hns, HNS3_INVLID_PVID, 0);
+		ret = hns3_vlan_pvid_configure(hns, HNS3_INVALID_PVID, 0);
 		if (ret) {
 			hns3_err(hw, "pvid set fail in pf, ret =%d", ret);
 			return ret;
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 03/17] net/hns3: add default branch to switch in Rx VLAN processing
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 04/17] net/hns3: add max number of segs compatibility Wei Hu (Xavier)
                     ` (14 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Chengchang Tang <tangchengchang@huawei.com>

This patch solves the static check warning as follows:
"The switch statement must have a 'default' branch".

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index cf55d94..6d02bad 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1618,6 +1618,9 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.ot_vlan_tag);
 		return;
+	default:
+		mb->vlan_tci = 0;
+		return;
 	}
 }
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 04/17] net/hns3: add max number of segs compatibility
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (2 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 03/17] net/hns3: add default branch to switch in Rx VLAN processing Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 05/17] net/hns3: add TSO pseudo header calculation compatibility Wei Hu (Xavier)
                     ` (13 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Hongbo Zheng <zhenghongbo3@huawei.com>

Kunpeng 920 supports a maximum of 8 nb_segs for a non-TSO packet in the Tx
direction, while Kunpeng 930 expands this limit to 18. This patch sets the
corresponding value by querying the maximum number of non-TSO nb_segs
supported by the device during initialization.
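
A usage sketch (the port id is an assumption for illustration) of how an
application can pick up this per-device limit from the ethdev info instead
of hard-coding 8 or 18:

#include <rte_ethdev.h>

/* Usage sketch: query the advertised non-TSO segment limit of a port. */
static uint16_t
query_non_tso_seg_limit(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;
	/* For hns3 this reflects hw->max_non_tso_bd_num (8 or 18). */
	return dev_info.tx_desc_lim.nb_mtu_seg_max;
}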

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    |  2 +-
 drivers/net/hns3/hns3_ethdev_vf.c |  2 +-
 drivers/net/hns3/hns3_rxtx.c      | 30 ++++++++++++++++++++----------
 drivers/net/hns3/hns3_rxtx.h      |  1 +
 4 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index abc1742..3922df5 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2514,7 +2514,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 		.nb_min = HNS3_MIN_RING_DESC,
 		.nb_align = HNS3_ALIGN_RING_DESC,
 		.nb_seg_max = HNS3_MAX_TSO_BD_PER_PKT,
-		.nb_mtu_seg_max = HNS3_MAX_NON_TSO_BD_PER_PKT,
+		.nb_mtu_seg_max = hw->max_non_tso_bd_num,
 	};
 
 	info->default_rxconf = (struct rte_eth_rxconf) {
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 037a5be..c39edf5 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -966,7 +966,7 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 		.nb_min = HNS3_MIN_RING_DESC,
 		.nb_align = HNS3_ALIGN_RING_DESC,
 		.nb_seg_max = HNS3_MAX_TSO_BD_PER_PKT,
-		.nb_mtu_seg_max = HNS3_MAX_NON_TSO_BD_PER_PKT,
+		.nb_mtu_seg_max = hw->max_non_tso_bd_num,
 	};
 
 	info->default_rxconf = (struct rte_eth_rxconf) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 6d02bad..7c7b9de 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2181,6 +2181,7 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 					HNS3_PORT_BASE_VLAN_ENABLE;
 	else
 		txq->pvid_sw_shift_en = false;
+	txq->max_non_tso_bd_num = hw->max_non_tso_bd_num;
 	txq->configured = true;
 	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
 				idx * HNS3_TQP_REG_SIZE);
@@ -2438,7 +2439,8 @@ hns3_pktmbuf_copy_hdr(struct rte_mbuf *new_pkt, struct rte_mbuf *old_pkt)
 }
 
 static int
-hns3_reassemble_tx_pkts(struct rte_mbuf *tx_pkt, struct rte_mbuf **new_pkt)
+hns3_reassemble_tx_pkts(struct rte_mbuf *tx_pkt, struct rte_mbuf **new_pkt,
+				  uint8_t max_non_tso_bd_num)
 {
 	struct rte_mempool *mb_pool;
 	struct rte_mbuf *new_mbuf;
@@ -2458,7 +2460,7 @@ hns3_reassemble_tx_pkts(struct rte_mbuf *tx_pkt, struct rte_mbuf **new_pkt)
 	mb_pool = tx_pkt->pool;
 	buf_size = tx_pkt->buf_len - RTE_PKTMBUF_HEADROOM;
 	nb_new_buf = (rte_pktmbuf_pkt_len(tx_pkt) - 1) / buf_size + 1;
-	if (nb_new_buf > HNS3_MAX_NON_TSO_BD_PER_PKT)
+	if (nb_new_buf > max_non_tso_bd_num)
 		return -EINVAL;
 
 	last_buf_len = rte_pktmbuf_pkt_len(tx_pkt) % buf_size;
@@ -2690,7 +2692,8 @@ hns3_txd_enable_checksum(struct hns3_tx_queue *txq, uint16_t tx_desc_id,
 }
 
 static bool
-hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num)
+hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num,
+				 uint32_t max_non_tso_bd_num)
 {
 	struct rte_mbuf *m_first = tx_pkts;
 	struct rte_mbuf *m_last = tx_pkts;
@@ -2705,10 +2708,10 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num)
 	 * frags greater than gso header len + mss, and the remaining 7
 	 * consecutive frags greater than MSS except the last 7 frags.
 	 */
-	if (bd_num <= HNS3_MAX_NON_TSO_BD_PER_PKT)
+	if (bd_num <= max_non_tso_bd_num)
 		return false;
 
-	for (i = 0; m_last && i < HNS3_MAX_NON_TSO_BD_PER_PKT - 1;
+	for (i = 0; m_last && i < max_non_tso_bd_num - 1;
 	     i++, m_last = m_last->next)
 		tot_len += m_last->data_len;
 
@@ -2726,7 +2729,7 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num)
 	 * ensure the sum of the data length of every 7 consecutive buffer
 	 * is greater than mss except the last one.
 	 */
-	for (i = 0; m_last && i < bd_num - HNS3_MAX_NON_TSO_BD_PER_PKT; i++) {
+	for (i = 0; m_last && i < bd_num - max_non_tso_bd_num; i++) {
 		tot_len -= m_first->data_len;
 		tot_len += m_last->data_len;
 
@@ -2859,15 +2862,19 @@ uint16_t
 hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	       uint16_t nb_pkts)
 {
+	struct hns3_tx_queue *txq;
 	struct rte_mbuf *m;
 	uint16_t i;
 	int ret;
 
+	txq = (struct hns3_tx_queue *)tx_queue;
+
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 
 		if (hns3_pkt_is_tso(m) &&
-		    (hns3_pkt_need_linearized(m, m->nb_segs) ||
+		    (hns3_pkt_need_linearized(m, m->nb_segs,
+					      txq->max_non_tso_bd_num) ||
 		     hns3_check_tso_pkt_valid(m))) {
 			rte_errno = EINVAL;
 			return i;
@@ -2880,7 +2887,7 @@ hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 			return i;
 		}
 
-		if (hns3_vld_vlan_chk(tx_queue, m)) {
+		if (hns3_vld_vlan_chk(txq, m)) {
 			rte_errno = EINVAL;
 			return i;
 		}
@@ -2921,6 +2928,7 @@ static int
 hns3_check_non_tso_pkt(uint16_t nb_buf, struct rte_mbuf **m_seg,
 		      struct rte_mbuf *tx_pkt, struct hns3_tx_queue *txq)
 {
+	uint8_t max_non_tso_bd_num;
 	struct rte_mbuf *new_pkt;
 	int ret;
 
@@ -2936,9 +2944,11 @@ hns3_check_non_tso_pkt(uint16_t nb_buf, struct rte_mbuf **m_seg,
 		return -EINVAL;
 	}
 
-	if (unlikely(nb_buf > HNS3_MAX_NON_TSO_BD_PER_PKT)) {
+	max_non_tso_bd_num = txq->max_non_tso_bd_num;
+	if (unlikely(nb_buf > max_non_tso_bd_num)) {
 		txq->exceed_limit_bd_pkt_cnt++;
-		ret = hns3_reassemble_tx_pkts(tx_pkt, &new_pkt);
+		ret = hns3_reassemble_tx_pkts(tx_pkt, &new_pkt,
+					      max_non_tso_bd_num);
 		if (ret) {
 			txq->exceed_limit_bd_reassem_fail++;
 			return ret;
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 476cfc2..5ffe30e 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -372,6 +372,7 @@ struct hns3_tx_queue {
 	 */
 	uint32_t min_tx_pkt_len;
 
+	uint8_t max_non_tso_bd_num; /* max BD number of one non-TSO packet */
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool configured;        /* indicate if tx queue has been configured */
 	/*
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 05/17] net/hns3: add TSO pseudo header calculation compatibility
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (3 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 04/17] net/hns3: add max number of segs compatibility Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR Wei Hu (Xavier)
                     ` (12 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

On Kunpeng 920, when processing packets that need TSO, the network driver
must erase the L4 length value of the TCP TSO pseudo header and
recalculate the pseudo header checksum. On Kunpeng 930 the driver does not
need to erase the L4 length value of the TCP TSO pseudo header, because
the hardware recalculates the pseudo header checksum itself.
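
A minimal sketch of what the software fix-up amounts to in the
HNS3_TSO_SW_CAL_PSEUDO_H_CSUM case, assuming a single-mbuf IPv4/TCP packet
with l2_len/l3_len already set; it is illustrative only and not the
driver's exact code:

#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_mbuf.h>

/* Sketch: recompute the TCP pseudo header checksum without the L4 length,
 * which is what rte_ipv4_phdr_cksum() does when PKT_TX_TCP_SEG is set in
 * ol_flags. */
static void
sw_tso_pseudo_hdr_fixup(struct rte_mbuf *m)
{
	struct rte_ipv4_hdr *ipv4_hdr = rte_pktmbuf_mtod_offset(m,
			struct rte_ipv4_hdr *, m->l2_len);
	struct rte_tcp_hdr *tcp_hdr = rte_pktmbuf_mtod_offset(m,
			struct rte_tcp_hdr *, m->l2_len + m->l3_len);

	tcp_hdr->cksum = rte_ipv4_phdr_cksum(ipv4_hdr, m->ol_flags);
}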

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    |  2 +
 drivers/net/hns3/hns3_ethdev.h    | 21 +++++++++-
 drivers/net/hns3/hns3_ethdev_vf.c |  2 +
 drivers/net/hns3/hns3_rxtx.c      | 82 ++++++++++++++++++++++++---------------
 drivers/net/hns3/hns3_rxtx.h      | 17 ++++++++
 5 files changed, 92 insertions(+), 32 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 3922df5..10cfc5d 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2916,6 +2916,7 @@ hns3_get_capability(struct hns3_hw *hw)
 		hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_RSV_ONE;
 		hw->intr.coalesce_mode = HNS3_INTR_COALESCE_NON_QL;
 		hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_2US;
+		hw->tso_mode = HNS3_TSO_SW_CAL_PSEUDO_H_CSUM;
 		hw->vlan_mode = HNS3_SW_SHIFT_AND_DISCARD_MODE;
 		hw->min_tx_pkt_len = HNS3_HIP08_MIN_TX_PKT_LEN;
 		return 0;
@@ -2932,6 +2933,7 @@ hns3_get_capability(struct hns3_hw *hw)
 	hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_ALL;
 	hw->intr.coalesce_mode = HNS3_INTR_COALESCE_QL;
 	hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_1US;
+	hw->tso_mode = HNS3_TSO_HW_CAL_PSEUDO_H_CSUM;
 	hw->vlan_mode = HNS3_HW_SHIFT_AND_DISCARD_MODE;
 	hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN;
 
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index a70223f..f170df9 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -416,6 +416,9 @@ struct hns3_queue_intr {
 	uint8_t gl_unit;
 };
 
+#define HNS3_TSO_SW_CAL_PSEUDO_H_CSUM		0
+#define HNS3_TSO_HW_CAL_PSEUDO_H_CSUM		1
+
 struct hns3_hw {
 	struct rte_eth_dev_data *data;
 	void *io_base;
@@ -476,7 +479,23 @@ struct hns3_hw {
 	uint32_t min_tx_pkt_len;
 
 	struct hns3_queue_intr intr;
-
+	/*
+	 * tso mode.
+	 * value range:
+	 *      HNS3_TSO_SW_CAL_PSEUDO_H_CSUM/HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *
+	 *  - HNS3_TSO_SW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, because of the hardware constraint, network driver
+	 *     software need erase the L4 len value of the TCP pseudo header
+	 *     and recalculate the TCP pseudo header checksum of packets that
+	 *     need TSO.
+	 *
+	 *  - HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, hardware support recalculate the TCP pseudo header
+	 *     checksum of packets that need TSO, so network driver software
+	 *     not need to recalculate it.
+	 */
+	uint8_t tso_mode;
 	/*
 	 * vlan mode.
 	 * value range:
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index c39edf5..565cf60 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1165,6 +1165,7 @@ hns3vf_get_capability(struct hns3_hw *hw)
 		hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_RSV_ONE;
 		hw->intr.coalesce_mode = HNS3_INTR_COALESCE_NON_QL;
 		hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_2US;
+		hw->tso_mode = HNS3_TSO_SW_CAL_PSEUDO_H_CSUM;
 		hw->min_tx_pkt_len = HNS3_HIP08_MIN_TX_PKT_LEN;
 		return 0;
 	}
@@ -1180,6 +1181,7 @@ hns3vf_get_capability(struct hns3_hw *hw)
 	hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_ALL;
 	hw->intr.coalesce_mode = HNS3_INTR_COALESCE_QL;
 	hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_1US;
+	hw->tso_mode = HNS3_TSO_HW_CAL_PSEUDO_H_CSUM;
 	hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN;
 
 	return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 7c7b9de..930aa28 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2188,6 +2188,7 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	txq->io_tail_reg = (volatile void *)((char *)txq->io_base +
 					     HNS3_RING_TX_TAIL_REG);
 	txq->min_tx_pkt_len = hw->min_tx_pkt_len;
+	txq->tso_mode = hw->tso_mode;
 	txq->over_length_pkt_cnt = 0;
 	txq->exceed_limit_bd_pkt_cnt = 0;
 	txq->exceed_limit_bd_reassem_fail = 0;
@@ -2858,47 +2859,66 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 }
 #endif
 
-uint16_t
-hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
+static int
+hns3_prep_pkt_proc(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 {
-	struct hns3_tx_queue *txq;
-	struct rte_mbuf *m;
-	uint16_t i;
 	int ret;
 
-	txq = (struct hns3_tx_queue *)tx_queue;
-
-	for (i = 0; i < nb_pkts; i++) {
-		m = tx_pkts[i];
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	ret = rte_validate_tx_offload(m);
+	if (ret != 0) {
+		rte_errno = -ret;
+		return ret;
+	}
 
-		if (hns3_pkt_is_tso(m) &&
-		    (hns3_pkt_need_linearized(m, m->nb_segs,
-					      txq->max_non_tso_bd_num) ||
-		     hns3_check_tso_pkt_valid(m))) {
+	ret = hns3_vld_vlan_chk(tx_queue, m);
+	if (ret != 0) {
+		rte_errno = EINVAL;
+		return ret;
+	}
+#endif
+	if (hns3_pkt_is_tso(m)) {
+		if (hns3_pkt_need_linearized(m, m->nb_segs,
+					     tx_queue->max_non_tso_bd_num) ||
+		    hns3_check_tso_pkt_valid(m)) {
 			rte_errno = EINVAL;
-			return i;
+			return -EINVAL;
 		}
 
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-		ret = rte_validate_tx_offload(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
+		if (tx_queue->tso_mode != HNS3_TSO_SW_CAL_PSEUDO_H_CSUM) {
+			/*
+			 * (tso mode != HNS3_TSO_SW_CAL_PSEUDO_H_CSUM) means
+			 * hardware support recalculate the TCP pseudo header
+			 * checksum of packets that need TSO, so network driver
+			 * software not need to recalculate it.
+			 */
+			hns3_outer_header_cksum_prepare(m);
+			return 0;
 		}
+	}
 
-		if (hns3_vld_vlan_chk(txq, m)) {
-			rte_errno = EINVAL;
-			return i;
-		}
-#endif
-		ret = rte_net_intel_cksum_prepare(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
-		}
+	ret = rte_net_intel_cksum_prepare(m);
+	if (ret != 0) {
+		rte_errno = -ret;
+		return ret;
+	}
+
+	hns3_outer_header_cksum_prepare(m);
+
+	return 0;
+}
 
-		hns3_outer_header_cksum_prepare(m);
+uint16_t
+hns3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	       uint16_t nb_pkts)
+{
+	struct rte_mbuf *m;
+	uint16_t i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (hns3_prep_pkt_proc(tx_queue, m))
+			return i;
 	}
 
 	return i;
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 5ffe30e..d7d70f6 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -367,6 +367,23 @@ struct hns3_tx_queue {
 	struct rte_mbuf **free;
 
 	/*
+	 * tso mode.
+	 * value range:
+	 *      HNS3_TSO_SW_CAL_PSEUDO_H_CSUM/HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *
+	 *  - HNS3_TSO_SW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, because of the hardware constraint, network driver
+	 *     software need erase the L4 len value of the TCP pseudo header
+	 *     and recalculate the TCP pseudo header checksum of packets that
+	 *     need TSO.
+	 *
+	 *  - HNS3_TSO_HW_CAL_PSEUDO_H_CSUM
+	 *     In this mode, hardware support recalculate the TCP pseudo header
+	 *     checksum of packets that need TSO, so network driver software
+	 *     not need to recalculate it.
+	 */
+	uint8_t tso_mode;
+	/*
 	 * The minimum length of the packet supported by hardware in the Tx
 	 * direction.
 	 */
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (4 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 05/17] net/hns3: add TSO pseudo header calculation compatibility Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 07/17] net/hns3: add default branch to switch when parsing fd tuple Wei Hu (Xavier)
                     ` (11 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Hongbo Zheng <zhenghongbo3@huawei.com>

According to the PCIe protocol, an FLR to a PF device resets the PF state
as well as the SR-IOV extended capability, including VF Enable, which means
that the VFs no longer exist.

While the PF device is in the FLR reset stage, the register state of the
VF device is not reliable, so VF register state detection is not carried
out during a PF FLR.

In this case, we simply ignore the register states to avoid accessing a
nonexistent register, and return false in the internal function named
hns3vf_is_reset_pending to indicate that there are no other reset states
that need to be processed by the PMD driver.

Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.h    |  7 +++++++
 drivers/net/hns3/hns3_ethdev_vf.c | 15 +++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index f170df9..3f3f973 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -275,6 +275,13 @@ enum hns3_reset_level {
 	 * Kernel PF driver use mailbox to inform DPDK VF to do reset, the value
 	 * of the reset level and the one defined in kernel driver should be
 	 * same.
+	 *
+	 * According to the protocol of PCIe, FLR to a PF resets the PF state as
+	 * well as the SR-IOV extended capability including VF Enable which
+	 * means that VFs no longer exist.
+	 *
+	 * In PF FLR, the register state of VF is not reliable, VF's driver
+	 * should not access the registers of the VF device.
 	 */
 	HNS3_VF_FULL_RESET = 3,
 	HNS3_FLR_RESET,     /* A VF perform FLR reset */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 565cf60..cb2747b 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -2191,6 +2191,21 @@ hns3vf_is_reset_pending(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	enum hns3_reset_level reset;
 
+	/*
+	 * According to the protocol of PCIe, FLR to a PF device resets the PF
+	 * state as well as the SR-IOV extended capability including VF Enable
+	 * which means that VFs no longer exist.
+	 *
+	 * HNS3_VF_FULL_RESET means PF device is in FLR reset. when PF device
+	 * is in FLR stage, the register state of VF device is not reliable,
+	 * so register states detection can not be carried out. In this case,
+	 * we just ignore the register states and return false to indicate that
+	 * there are no other reset states that need to be processed by driver.
+	 */
+	if (hw->reset.level == HNS3_VF_FULL_RESET)
+		return false;
+
+	/* Check the registers to confirm whether there is reset pending */
 	hns3vf_check_event_cause(hns, NULL);
 	reset = hns3vf_get_reset_level(hw, &hw->reset.pending);
 	if (hw->reset.level != HNS3_NONE_RESET && hw->reset.level < reset) {
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 07/17] net/hns3: add default branch to switch when parsing fd tuple
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (5 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 08/17] net/hns3: add break to exit loop when err stat item found Wei Hu (Xavier)
                     ` (10 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

This patch solves the static check warning in the internal function named
hns3_fd_convert_tuple as follows:
    "The switch statement must have a 'default' branch".

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_fdir.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index 5c3dd05..5729923 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -526,7 +526,8 @@ static inline void hns3_fd_convert_int32(uint32_t key, uint32_t mask,
 	memcpy(val_y, &tmp_y_l, sizeof(tmp_y_l));
 }
 
-static bool hns3_fd_convert_tuple(uint32_t tuple, uint8_t *key_x,
+static bool hns3_fd_convert_tuple(struct hns3_hw *hw,
+				  uint32_t tuple, uint8_t *key_x,
 				  uint8_t *key_y, struct hns3_fdir_rule *rule)
 {
 	struct hns3_fdir_key_conf *key_conf;
@@ -603,6 +604,9 @@ static bool hns3_fd_convert_tuple(uint32_t tuple, uint8_t *key_x,
 		calc_y(*key_y, key_conf->spec.ip_proto,
 		       key_conf->mask.ip_proto);
 		break;
+	default:
+		hns3_warn(hw, "not support tuple of (%d)", tuple);
+		break;
 	}
 	return true;
 }
@@ -710,7 +714,7 @@ static int hns3_config_key(struct hns3_adapter *hns,
 
 		tuple_size = tuple_key_info[i].key_length / HNS3_BITS_PER_BYTE;
 		if (key_cfg->tuple_active & BIT(i)) {
-			tuple_valid = hns3_fd_convert_tuple(i, cur_key_x,
+			tuple_valid = hns3_fd_convert_tuple(hw, i, cur_key_x,
 							    cur_key_y, rule);
 			if (tuple_valid) {
 				cur_key_x += tuple_size;
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 08/17] net/hns3: add break to exit loop when err stat item found
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (6 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 07/17] net/hns3: add default branch to switch when parsing fd tuple Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 09/17] net/hns3: support flow action of queue region Wei Hu (Xavier)
                     ` (9 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Hongbo Zheng <zhenghongbo3@huawei.com>

This patch removes a redundant operation during traversal. In the internal
function hns3_error_int_stats_add, which adds error statistics, only one
statistics item can be matched by the for loop, so a break can be executed
once the matching item is found, without traversing the remaining table
entries.

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_stats.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 067673c..e8846b9 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -703,6 +703,7 @@ hns3_error_int_stats_add(struct hns3_adapter *hns, const char *err)
 			addr = (char *)&pf->abn_int_stats +
 				hns3_error_int_stats_strings[i].offset;
 			*(uint64_t *)addr += 1;
+			break;
 		}
 	}
 }
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 09/17] net/hns3: support flow action of queue region
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (7 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 08/17] net/hns3: add break to exit loop when err stat item found Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 10/17] net/hns3: support querying RSS flow rule Wei Hu (Xavier)
                     ` (8 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Chengwen Feng <fengchengwen@huawei.com>

Kunpeng 930 hardware supports spreading packets to a region of queues that
can be configured by an FDIR rule. This means a user can create one FDIR
rule whose action is a region of queues, and RSS then uses the region info
to spread packets.

RTE_FLOW_ACTION_TYPE_RSS is used to spread packets among several queues,
and the user can configure parameters such as func/level/types/key/queue
to control the RSS function, so we provide this feature under the
RTE_FLOW_ACTION_TYPE_RSS framework.

Considering that the RSS input tuple does not contain the eth header, we
use the following rule to distinguish whether a request is a queue region
configuration or a general RSS configuration:
Case 1: the pattern has ETH and the action's queue_num > 0, which indicates
a queue region configuration.
Other cases: general RSS configuration.

So if the user wants to configure one flow which spreads ipv4=192.168.1.192
to the queue region of queues 0/1/2/3, the pattern should be (see the usage
sketch after this list):
  RTE_FLOW_ITEM_TYPE_ETH with spec=last=mask=NULL
  RTE_FLOW_ITEM_TYPE_IPV4 with spec=192.168.1.192 last=mask=NULL
  RTE_FLOW_ITEM_TYPE_END
and the action should be:
  RTE_FLOW_ACTION_TYPE_RSS with queue_num=4 queue=0/1/2/3
  RTE_FLOW_ACTION_TYPE_END
After calling rte_flow_create, one FDIR rule will be created.
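
A hedged usage sketch of the rule above; the port id, the ingress attribute
and the error handling are assumptions made for illustration, while the
pattern and action contents follow the description:

#include <rte_common.h>
#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_flow.h>

/* Usage sketch: spread ipv4 dst 192.168.1.192 over queue region 0-3. */
static struct rte_flow *
create_queue_region_rule(uint16_t port_id, struct rte_flow_error *error)
{
	static const uint16_t queues[] = { 0, 1, 2, 3 };
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ipv4_spec = {
		.hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 192)),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* spec=last=mask=NULL */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss rss_conf = {
		.queue_num = RTE_DIM(queues),
		.queue = queues,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}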

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
v1 -> v2: fix TYPO_SPELLING with 'QUEUE'.
---
 drivers/net/hns3/hns3_cmd.c    |   5 +-
 drivers/net/hns3/hns3_cmd.h    |   2 +-
 drivers/net/hns3/hns3_ethdev.h |   8 +--
 drivers/net/hns3/hns3_fdir.c   |  35 +++++++-----
 drivers/net/hns3/hns3_fdir.h   |  23 +++++++-
 drivers/net/hns3/hns3_flow.c   | 117 +++++++++++++++++++++++++++++++++++++----
 6 files changed, 158 insertions(+), 32 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 0299072..f7cfa00 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -433,8 +433,9 @@ static void hns3_parse_capability(struct hns3_hw *hw,
 
 	if (hns3_get_bit(caps, HNS3_CAPS_UDP_GSO_B))
 		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_UDP_GSO_B, 1);
-	if (hns3_get_bit(caps, HNS3_CAPS_ADQ_B))
-		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_ADQ_B, 1);
+	if (hns3_get_bit(caps, HNS3_CAPS_FD_QUEUE_REGION_B))
+		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_FD_QUEUE_REGION_B,
+			     1);
 	if (hns3_get_bit(caps, HNS3_CAPS_PTP_B))
 		hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_PTP_B, 1);
 	if (hns3_get_bit(caps, HNS3_CAPS_TX_PUSH_B))
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 629b114..510d68c 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -278,7 +278,7 @@ struct hns3_rx_priv_buff_cmd {
 enum HNS3_CAPS_BITS {
 	HNS3_CAPS_UDP_GSO_B,
 	HNS3_CAPS_ATR_B,
-	HNS3_CAPS_ADQ_B,
+	HNS3_CAPS_FD_QUEUE_REGION_B,
 	HNS3_CAPS_PTP_B,
 	HNS3_CAPS_INT_QL_B,
 	HNS3_CAPS_SIMPLE_BD_B,
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 3f3f973..e1ed4d6 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -719,7 +719,7 @@ struct hns3_adapter {
 #define HNS3_DEV_SUPPORT_DCB_B			0x0
 #define HNS3_DEV_SUPPORT_COPPER_B		0x1
 #define HNS3_DEV_SUPPORT_UDP_GSO_B		0x2
-#define HNS3_DEV_SUPPORT_ADQ_B			0x3
+#define HNS3_DEV_SUPPORT_FD_QUEUE_REGION_B	0x3
 #define HNS3_DEV_SUPPORT_PTP_B			0x4
 #define HNS3_DEV_SUPPORT_TX_PUSH_B		0x5
 #define HNS3_DEV_SUPPORT_INDEP_TXRX_B		0x6
@@ -736,9 +736,9 @@ struct hns3_adapter {
 #define hns3_dev_udp_gso_supported(hw) \
 	hns3_get_bit((hw)->capability, HNS3_DEV_SUPPORT_UDP_GSO_B)
 
-/* Support Application Device Queue */
-#define hns3_dev_adq_supported(hw) \
-	hns3_get_bit((hw)->capability, HNS3_DEV_SUPPORT_ADQ_B)
+/* Support the queue region action rule of flow directory */
+#define hns3_dev_fd_queue_region_supported(hw) \
+	hns3_get_bit((hw)->capability, HNS3_DEV_SUPPORT_FD_QUEUE_REGION_B)
 
 /* Support PTP timestamp offload */
 #define hns3_dev_ptp_supported(hw) \
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index 5729923..65ab19d 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -29,20 +29,23 @@
 
 #define HNS3_FD_AD_DATA_S		32
 #define HNS3_FD_AD_DROP_B		0
-#define HNS3_FD_AD_DIRECT_QID_B	1
+#define HNS3_FD_AD_DIRECT_QID_B		1
 #define HNS3_FD_AD_QID_S		2
-#define HNS3_FD_AD_QID_M		GENMASK(12, 2)
+#define HNS3_FD_AD_QID_M		GENMASK(11, 2)
 #define HNS3_FD_AD_USE_COUNTER_B	12
 #define HNS3_FD_AD_COUNTER_NUM_S	13
-#define HNS3_FD_AD_COUNTER_NUM_M	GENMASK(20, 13)
+#define HNS3_FD_AD_COUNTER_NUM_M	GENMASK(19, 13)
 #define HNS3_FD_AD_NXT_STEP_B		20
 #define HNS3_FD_AD_NXT_KEY_S		21
-#define HNS3_FD_AD_NXT_KEY_M		GENMASK(26, 21)
-#define HNS3_FD_AD_WR_RULE_ID_B	0
+#define HNS3_FD_AD_NXT_KEY_M		GENMASK(25, 21)
+#define HNS3_FD_AD_WR_RULE_ID_B		0
 #define HNS3_FD_AD_RULE_ID_S		1
-#define HNS3_FD_AD_RULE_ID_M		GENMASK(13, 1)
-#define HNS3_FD_AD_COUNTER_HIGH_BIT     7
-#define HNS3_FD_AD_COUNTER_HIGH_BIT_B   26
+#define HNS3_FD_AD_RULE_ID_M		GENMASK(12, 1)
+#define HNS3_FD_AD_QUEUE_REGION_EN_B	16
+#define HNS3_FD_AD_QUEUE_REGION_SIZE_S	17
+#define HNS3_FD_AD_QUEUE_REGION_SIZE_M	GENMASK(20, 17)
+#define HNS3_FD_AD_COUNTER_HIGH_BIT	7
+#define HNS3_FD_AD_COUNTER_HIGH_BIT_B	26
 
 enum HNS3_PORT_TYPE {
 	HOST_PORT,
@@ -426,13 +429,19 @@ static int hns3_fd_ad_config(struct hns3_hw *hw, int loc,
 		     action->write_rule_id_to_bd);
 	hns3_set_field(ad_data, HNS3_FD_AD_RULE_ID_M, HNS3_FD_AD_RULE_ID_S,
 		       action->rule_id);
+	if (action->nb_queues > 1) {
+		hns3_set_bit(ad_data, HNS3_FD_AD_QUEUE_REGION_EN_B, 1);
+		hns3_set_field(ad_data, HNS3_FD_AD_QUEUE_REGION_SIZE_M,
+			       HNS3_FD_AD_QUEUE_REGION_SIZE_S,
+			       rte_log2_u32(action->nb_queues));
+	}
 	/* set extend bit if counter_id is in [128 ~ 255] */
 	if (action->counter_id & BIT(HNS3_FD_AD_COUNTER_HIGH_BIT))
 		hns3_set_bit(ad_data, HNS3_FD_AD_COUNTER_HIGH_BIT_B, 1);
 	ad_data <<= HNS3_FD_AD_DATA_S;
 	hns3_set_bit(ad_data, HNS3_FD_AD_DROP_B, action->drop_packet);
-	hns3_set_bit(ad_data, HNS3_FD_AD_DIRECT_QID_B,
-		     action->forward_to_direct_queue);
+	if (action->nb_queues == 1)
+		hns3_set_bit(ad_data, HNS3_FD_AD_DIRECT_QID_B, 1);
 	hns3_set_field(ad_data, HNS3_FD_AD_QID_M, HNS3_FD_AD_QID_S,
 		       action->queue_id);
 	hns3_set_bit(ad_data, HNS3_FD_AD_USE_COUNTER_B, action->use_counter);
@@ -440,7 +449,7 @@ static int hns3_fd_ad_config(struct hns3_hw *hw, int loc,
 		       HNS3_FD_AD_COUNTER_NUM_S, action->counter_id);
 	hns3_set_bit(ad_data, HNS3_FD_AD_NXT_STEP_B, action->use_next_stage);
 	hns3_set_field(ad_data, HNS3_FD_AD_NXT_KEY_M, HNS3_FD_AD_NXT_KEY_S,
-		       action->counter_id);
+		       action->next_input_key);
 
 	req->ad_data = rte_cpu_to_le_64(ad_data);
 	ret = hns3_cmd_send(hw, &desc, 1);
@@ -752,12 +761,12 @@ static int hns3_config_action(struct hns3_hw *hw, struct hns3_fdir_rule *rule)
 
 	if (rule->action == HNS3_FD_ACTION_DROP_PACKET) {
 		ad_data.drop_packet = true;
-		ad_data.forward_to_direct_queue = false;
 		ad_data.queue_id = 0;
+		ad_data.nb_queues = 0;
 	} else {
 		ad_data.drop_packet = false;
-		ad_data.forward_to_direct_queue = true;
 		ad_data.queue_id = rule->queue_id;
+		ad_data.nb_queues = rule->nb_queues;
 	}
 
 	if (unlikely(rule->flags & HNS3_RULE_FLAG_COUNTER)) {
diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h
index f7b4216..a5760a3 100644
--- a/drivers/net/hns3/hns3_fdir.h
+++ b/drivers/net/hns3/hns3_fdir.h
@@ -104,8 +104,18 @@ struct hns3_fd_rule_tuples {
 struct hns3_fd_ad_data {
 	uint16_t ad_id;
 	uint8_t drop_packet;
-	uint8_t forward_to_direct_queue;
+	/*
+	 * equal 0 when action is drop.
+	 * index of queue when action is queue.
+	 * index of first queue of queue region when action is queue region.
+	 */
 	uint16_t queue_id;
+	/*
+	 * equal 0 when action is drop.
+	 * equal 1 when action is queue.
+	 * number of queues in the queue region when action is queue region.
+	 */
+	uint16_t nb_queues;
 	uint8_t use_counter;
 	uint8_t counter_id;
 	uint8_t use_next_stage;
@@ -141,7 +151,18 @@ struct hns3_fdir_rule {
 	uint8_t action;
 	/* VF id, avaiblable when flags with HNS3_RULE_FLAG_VF_ID. */
 	uint8_t vf_id;
+	/*
+	 * equal 0 when action is drop.
+	 * index of queue when action is queue.
+	 * index of first queue of queue region when action is queue region.
+	 */
 	uint16_t queue_id;
+	/*
+	 * equal 0 when action is drop.
+	 * equal 1 when action is queue.
+	 * number of queues in the queue region when action is queue region.
+	 */
+	uint16_t nb_queues;
 	uint16_t location;
 	struct rte_flow_action_count act_cnt;
 };
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 7ec46ae..1bc5911 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -90,16 +90,56 @@ net_addr_to_host(uint32_t *dst, const rte_be32_t *src, size_t len)
 		dst[i] = rte_be_to_cpu_32(src[i]);
 }
 
-static inline const struct rte_flow_action *
-find_rss_action(const struct rte_flow_action actions[])
+/*
+ * This function is used to find the general RSS action.
+ * 1. As we know, RSS is used to spread packets among several queues, and the
+ *    flow API provides struct rte_flow_action_rss; the user can configure its
+ *    fields such as func/level/types/key/queue to control the RSS function.
+ * 2. The flow API also supports queue region configuration for hns3. It is
+ *    implemented by FDIR + RSS in hns3 hardware; the user can create one FDIR
+ *    rule whose action is an RSS queue region.
+ * 3. When the action is RSS, we use the following rule to distinguish:
+ *    Case 1: the pattern has ETH and the action's queue_num > 0, indicating a
+ *            queue region configuration.
+ *    Other cases: a general RSS action.
+ */
+static const struct rte_flow_action *
+hns3_find_rss_general_action(const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[])
 {
-	const struct rte_flow_action *next = &actions[0];
+	const struct rte_flow_action *act = NULL;
+	const struct hns3_rss_conf *rss;
+	bool have_eth = false;
 
-	for (; next->type != RTE_FLOW_ACTION_TYPE_END; next++) {
-		if (next->type == RTE_FLOW_ACTION_TYPE_RSS)
-			return next;
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
+			act = actions;
+			break;
+		}
 	}
-	return NULL;
+	if (!act)
+		return NULL;
+
+	for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) {
+		if (pattern->type == RTE_FLOW_ITEM_TYPE_ETH) {
+			have_eth = true;
+			break;
+		}
+	}
+
+	rss = act->conf;
+	if (have_eth && rss->conf.queue_num) {
+		/*
+		 * Pattern has ETH and action's queue_num > 0, indicating this
+		 * is a queue region configuration.
+		 * Because queue region is implemented by FDIR + RSS in hns3
+		 * hardware, it needs to enter the FDIR process, so return
+		 * NULL here to avoid entering the RSS process.
+		 */
+		return NULL;
+	}
+
+	return act;
 }
 
 static inline struct hns3_flow_counter *
@@ -233,11 +273,53 @@ hns3_handle_action_queue(struct rte_eth_dev *dev,
 			  "available queue (%d) in driver.",
 			  queue->index, hw->used_rx_queues);
 		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION, action,
-					  "Invalid queue ID in PF");
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  action, "Invalid queue ID in PF");
 	}
 
 	rule->queue_id = queue->index;
+	rule->nb_queues = 1;
+	rule->action = HNS3_FD_ACTION_ACCEPT_PACKET;
+	return 0;
+}
+
+static int
+hns3_handle_action_queue_region(struct rte_eth_dev *dev,
+				const struct rte_flow_action *action,
+				struct hns3_fdir_rule *rule,
+				struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	const struct rte_flow_action_rss *conf = action->conf;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t idx;
+
+	if (!hns3_dev_fd_queue_region_supported(hw))
+		return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ACTION, action,
+			"Not support config queue region!");
+
+	if ((!rte_is_power_of_2(conf->queue_num)) ||
+		conf->queue_num > hw->rss_size_max ||
+		conf->queue[0] >= hw->used_rx_queues ||
+		conf->queue[0] + conf->queue_num > hw->used_rx_queues) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION_CONF, action,
+			"Invalid start queue ID and queue num! the start queue "
+			"ID must be valid, the queue num must be power of 2 and "
+			"<= rss_size_max.");
+	}
+
+	for (idx = 1; idx < conf->queue_num; idx++) {
+		if (conf->queue[idx] != conf->queue[idx - 1] + 1)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_CONF, action,
+				"Invalid queue ID sequence! the queue ID "
+				"must be continuous increment.");
+	}
+
+	rule->queue_id = conf->queue[0];
+	rule->nb_queues = conf->queue_num;
 	rule->action = HNS3_FD_ACTION_ACCEPT_PACKET;
 	return 0;
 }
@@ -274,6 +356,19 @@ hns3_handle_actions(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_DROP:
 			rule->action = HNS3_FD_ACTION_DROP_PACKET;
 			break;
+		/*
+		 * Here RSS's real action is queue region.
+		 * Queue region is implemented by FDIR + RSS in hns3 hardware,
+		 * the FDIR's action is one queue region (start_queue_id and
+		 * queue_num), then RSS spreads packets to the queue region by
+		 * the RSS algorithm.
+		 */
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			ret = hns3_handle_action_queue_region(dev, actions,
+							      rule, error);
+			if (ret)
+				return ret;
+			break;
 		case RTE_FLOW_ACTION_TYPE_MARK:
 			mark =
 			    (const struct rte_flow_action_mark *)actions->conf;
@@ -1629,7 +1724,7 @@ hns3_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	if (ret)
 		return ret;
 
-	if (find_rss_action(actions))
+	if (hns3_find_rss_general_action(pattern, actions))
 		return hns3_parse_rss_filter(dev, actions, error);
 
 	memset(&fdir_rule, 0, sizeof(struct hns3_fdir_rule));
@@ -1684,7 +1779,7 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	flow_node->flow = flow;
 	TAILQ_INSERT_TAIL(&process_list->flow_list, flow_node, entries);
 
-	act = find_rss_action(actions);
+	act = hns3_find_rss_general_action(pattern, actions);
 	if (act) {
 		rss_conf = act->conf;
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 10/17] net/hns3: support querying RSS flow rule
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (8 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 09/17] net/hns3: support flow action of queue region Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 11/17] net/hns3: check input RSS type when creating flow with RSS Wei Hu (Xavier)
                     ` (7 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

This patch enables querying some RSS configurations of a specified
rule, for example, the RSS hash function and RSS types.
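
A rough usage sketch, not part of this patch: it assumes 'flow' is the
handle returned by rte_flow_create for an RSS rule on port 0, and that
the usual rte_flow.h, stdio.h and inttypes.h headers are included.

	struct rte_flow_action_rss rss_out;
	struct rte_flow_error err;
	const struct rte_flow_action query_act[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* on success, rss_out holds the rule's hash function and RSS types */
	if (rte_flow_query(0, flow, query_act, &rss_out, &err) == 0)
		printf("rss func %d types 0x%" PRIx64 "\n",
		       (int)rss_out.func, rss_out.types);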

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 1bc5911..14d5968 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1962,9 +1962,15 @@ hns3_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 		const struct rte_flow_action *actions, void *data,
 		struct rte_flow_error *error)
 {
+	struct rte_flow_action_rss *rss_conf;
+	struct hns3_rss_conf_ele *rss_rule;
 	struct rte_flow_query_count *qc;
 	int ret;
 
+	if (!flow->rule)
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_HANDLE, NULL, "invalid rule");
+
 	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
 		switch (actions->type) {
 		case RTE_FLOW_ACTION_TYPE_VOID:
@@ -1975,13 +1981,24 @@ hns3_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 			if (ret)
 				return ret;
 			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			if (flow->filter_type != RTE_ETH_FILTER_HASH) {
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ACTION,
+					actions, "action is not supported");
+			}
+			rss_conf = (struct rte_flow_action_rss *)data;
+			rss_rule = (struct hns3_rss_conf_ele *)flow->rule;
+			rte_memcpy(rss_conf, &rss_rule->filter_info.conf,
+				   sizeof(struct rte_flow_action_rss));
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  actions,
-						  "Query action only support count");
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				actions, "action is not supported");
 		}
 	}
+
 	return 0;
 }
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 11/17] net/hns3: check input RSS type when creating flow with RSS
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (9 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 10/17] net/hns3: support querying RSS flow rule Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 12/17] net/hns3: set RSS hash type input configuration Wei Hu (Xavier)
                     ` (6 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

This patch adds a check on the input RSS types when creating a flow with
RSS. If the input RSS types are not supported by the hns3 network engine,
an error is returned.
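
For illustration only, assuming the standard testpmd flow syntax, a rule
that combines an L4-only field modifier with an IP RSS type, for example:
  testpmd> flow create 0 ingress pattern end actions rss types \
  ipv4 l4-src-only end queues 0 1 end / end
is now rejected with the "input RSS types are not supported" error.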

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 26 ++++++++------------------
 1 file changed, 8 insertions(+), 18 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 14d5968..f4ea47a 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1403,6 +1403,13 @@ hns3_parse_rss_filter(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION, act,
 					  "too many queues for RSS context");
 
+	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & ETH_RSS_IP))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &rss->types,
+					  "input RSS types are not supported");
+
 	act_index++;
 
 	/* Check if the next not void action is END */
@@ -1563,23 +1570,6 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 		.queue = conf->conf.queue,
 	};
 
-	/* The types is Unsupported by hns3' RSS */
-	if (!(rss_flow_conf.types & HNS3_ETH_RSS_SUPPORT) &&
-	    rss_flow_conf.types) {
-		hns3_err(hw,
-			 "Flow types(%" PRIx64 ") is unsupported by hns3's RSS",
-			 rss_flow_conf.types);
-		return -EINVAL;
-	}
-
-	if (rss_flow_conf.key_len &&
-	    rss_flow_conf.key_len > RTE_DIM(rss_info->key)) {
-		hns3_err(hw,
-			"input hash key(%u) greater than supported len(%zu)",
-			rss_flow_conf.key_len, RTE_DIM(rss_info->key));
-		return -EINVAL;
-	}
-
 	/* Filter the unsupported flow types */
 	flow_types = conf->conf.types ?
 		     rss_flow_conf.types & HNS3_ETH_RSS_SUPPORT :
@@ -1755,7 +1745,7 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct hns3_fdir_rule fdir_rule;
 	int ret;
 
-	ret = hns3_flow_args_check(attr, pattern, actions, error);
+	ret = hns3_flow_validate(dev, attr, pattern, actions, error);
 	if (ret)
 		return NULL;
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 12/17] net/hns3: set RSS hash type input configuration
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (10 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 11/17] net/hns3: check input RSS type when creating flow with RSS Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 13/17] net/hns3: fix config when creating RSS rule after flush Wei Hu (Xavier)
                     ` (5 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

This patch adds support for RSS hash type input set configuration. That
is, the upper level application can call the rte_flow_create or
rte_eth_dev_rss_hash_update API to select the specified tuples by RSS
types. As a result, the hardware can selectively use only the enabled
tuple fields to perform the hash calculation.
The following uses the RSS flow as an example:

When the application calls the rte_flow_create API to select the tuple
input for ipv4-tcp l3-src-only, the driver enables the ipv4-tcp source IP
field of the hardware register by calling the internal function named
hns3_set_rss_tuple_by_rss_hf according to the RSS types in the actions
list. In the testpmd application, the command is as follows:
testpmd> flow create 0 ingress pattern end actions rss types \
ipv4-tcp l3-src-only end queues 1 3 end / end

The following uses rte_eth_dev_rss_hash_update as an example:
if the user selects the tuple input for ipv4-tcp l3-src-only via the
rte_eth_rss_conf structure passed to rte_eth_dev_rss_hash_update, the
subsequent processing is the same as that of the RSS flow. In the testpmd
application, the command is as follows:
testpmd> port config all rss ipv4-tcp l3-src-only
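
A minimal sketch of the second path (standard ethdev API; port id 0 and
the chosen RSS types are placeholders for illustration, error handling
omitted):

	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* keep the current hash key */
		.rss_key_len = 0,
		.rss_hf = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
	};

	/* only the IPv4-TCP source IP field contributes to the hash */
	int ret = rte_eth_dev_rss_hash_update(0, &rss_conf);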

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_cmd.h    |   9 +-
 drivers/net/hns3/hns3_ethdev.h |   2 +
 drivers/net/hns3/hns3_rss.c    | 244 +++++++++++++++++++++++++++++++----------
 drivers/net/hns3/hns3_rss.h    |  20 +---
 4 files changed, 197 insertions(+), 78 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 510d68c..0b531d9 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -563,14 +563,7 @@ struct hns3_rss_generic_config_cmd {
 
 /* Configure the tuple selection for RSS hash input, opcode:0x0D02 */
 struct hns3_rss_input_tuple_cmd {
-	uint8_t ipv4_tcp_en;
-	uint8_t ipv4_udp_en;
-	uint8_t ipv4_sctp_en;
-	uint8_t ipv4_fragment_en;
-	uint8_t ipv6_tcp_en;
-	uint8_t ipv6_udp_en;
-	uint8_t ipv6_sctp_en;
-	uint8_t ipv6_fragment_en;
+	uint64_t tuple_field;
 	uint8_t rsv[16];
 };
 
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index e1ed4d6..c2d0a75 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -790,6 +790,8 @@ struct hns3_adapter {
 
 #define BIT(nr) (1UL << (nr))
 
+#define BIT_ULL(x) (1ULL << (x))
+
 #define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
 #define GENMASK(h, l) \
 	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 247bd7d..5b51512 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -23,6 +23,166 @@ static const uint8_t hns3_hash_key[] = {
 	0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA
 };
 
+enum hns3_tuple_field {
+	/* IPV4_TCP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D = 0,
+	HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S,
+	HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S,
+
+	/* IPV4_UDP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D = 8,
+	HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S,
+	HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S,
+
+	/* IPV4_SCTP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D = 16,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S,
+	HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER,
+
+	/* IPV4 ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D = 24,
+	HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S,
+	HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D,
+	HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S,
+
+	/* IPV6_TCP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D = 32,
+	HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S,
+	HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S,
+
+	/* IPV6_UDP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D = 40,
+	HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S,
+	HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D,
+	HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S,
+
+	/* IPV6_SCTP ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D = 50,
+	HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S,
+	HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER,
+
+	/* IPV6 ENABLE FIELD */
+	HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D = 56,
+	HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S,
+	HNS3_RSS_FIELD_IPV6_FRAG_IP_D,
+	HNS3_RSS_FIELD_IPV6_FRAG_IP_S
+};
+
+static const struct {
+	uint64_t rss_types;
+	uint64_t rss_field;
+} hns3_set_tuple_table[] = {
+	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
+	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
+	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
+	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
+};
+
+static const struct {
+	uint64_t rss_types;
+	uint64_t rss_field;
+} hns3_set_rss_types[] = {
+	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
+	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
+	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
+	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
+	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
+	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
+};
+
 /*
  * rss_generic_config command function, opcode:0x0D01.
  * Used to set algorithm, key_offset and hash key of rss.
@@ -91,14 +251,8 @@ hns3_set_rss_input_tuple(struct hns3_hw *hw)
 
 	req = (struct hns3_rss_input_tuple_cmd *)desc_tuple.data;
 
-	req->ipv4_tcp_en = rss_config->rss_tuple_sets.ipv4_tcp_en;
-	req->ipv4_udp_en = rss_config->rss_tuple_sets.ipv4_udp_en;
-	req->ipv4_sctp_en = rss_config->rss_tuple_sets.ipv4_sctp_en;
-	req->ipv4_fragment_en = rss_config->rss_tuple_sets.ipv4_fragment_en;
-	req->ipv6_tcp_en = rss_config->rss_tuple_sets.ipv6_tcp_en;
-	req->ipv6_udp_en = rss_config->rss_tuple_sets.ipv6_udp_en;
-	req->ipv6_sctp_en = rss_config->rss_tuple_sets.ipv6_sctp_en;
-	req->ipv6_fragment_en = rss_config->rss_tuple_sets.ipv6_fragment_en;
+	req->tuple_field =
+		rte_cpu_to_le_64(rss_config->rss_tuple_sets.rss_tuple_fields);
 
 	ret = hns3_cmd_send(hw, &desc_tuple, 1);
 	if (ret)
@@ -171,6 +325,7 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 {
 	struct hns3_rss_input_tuple_cmd *req;
 	struct hns3_cmd_desc desc;
+	uint32_t fields_count = 0; /* count times for setting tuple fields */
 	uint32_t i;
 	int ret;
 
@@ -178,46 +333,30 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 
 	req = (struct hns3_rss_input_tuple_cmd *)desc.data;
 
-	/* Enable ipv4 or ipv6 tuple by flow type */
-	for (i = 0; i < RTE_ETH_FLOW_MAX; i++) {
-		switch (rss_hf & (1ULL << i)) {
-		case ETH_RSS_NONFRAG_IPV4_TCP:
-			req->ipv4_tcp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV4_UDP:
-			req->ipv4_udp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV4_SCTP:
-			req->ipv4_sctp_en = HNS3_RSS_INPUT_TUPLE_SCTP;
-			break;
-		case ETH_RSS_FRAG_IPV4:
-			req->ipv4_fragment_en |= HNS3_IP_FRAG_BIT_MASK;
-			break;
-		case ETH_RSS_NONFRAG_IPV4_OTHER:
-			req->ipv4_fragment_en |= HNS3_IP_OTHER_BIT_MASK;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_TCP:
-			req->ipv6_tcp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_UDP:
-			req->ipv6_udp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_SCTP:
-			req->ipv6_sctp_en = HNS3_RSS_INPUT_TUPLE_SCTP;
-			break;
-		case ETH_RSS_FRAG_IPV6:
-			req->ipv6_fragment_en |= HNS3_IP_FRAG_BIT_MASK;
-			break;
-		case ETH_RSS_NONFRAG_IPV6_OTHER:
-			req->ipv6_fragment_en |= HNS3_IP_OTHER_BIT_MASK;
-			break;
-		default:
-			/*
-			 * rss_hf doesn't include unsupported flow types
-			 * because the API framework has checked it, and
-			 * this branch will never go unless rss_hf is zero.
-			 */
-			break;
+	for (i = 0; i < RTE_DIM(hns3_set_tuple_table); i++) {
+		if ((rss_hf & hns3_set_tuple_table[i].rss_types) ==
+		     hns3_set_tuple_table[i].rss_types) {
+			req->tuple_field |=
+			    rte_cpu_to_le_64(hns3_set_tuple_table[i].rss_field);
+			fields_count++;
+		}
+	}
+
+	/*
+	 * When user does not specify the following types or a combination of
+	 * the following types, it enables all fields for the supported RSS
+	 * types. The following types are:
+	 * - ETH_RSS_L3_SRC_ONLY
+	 * - ETH_RSS_L3_DST_ONLY
+	 * - ETH_RSS_L4_SRC_ONLY
+	 * - ETH_RSS_L4_DST_ONLY
+	 */
+	if (fields_count == 0) {
+		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
+			if ((rss_hf & hns3_set_rss_types[i].rss_types) ==
+			     hns3_set_rss_types[i].rss_types)
+				req->tuple_field |= rte_cpu_to_le_64(
+					hns3_set_rss_types[i].rss_field);
 		}
 	}
 
@@ -227,14 +366,7 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 		return ret;
 	}
 
-	tuple->ipv4_tcp_en = req->ipv4_tcp_en;
-	tuple->ipv4_udp_en = req->ipv4_udp_en;
-	tuple->ipv4_sctp_en = req->ipv4_sctp_en;
-	tuple->ipv4_fragment_en = req->ipv4_fragment_en;
-	tuple->ipv6_tcp_en = req->ipv6_tcp_en;
-	tuple->ipv6_udp_en = req->ipv6_udp_en;
-	tuple->ipv6_sctp_en = req->ipv6_sctp_en;
-	tuple->ipv6_fragment_en = req->ipv6_fragment_en;
+	tuple->rss_tuple_fields = rte_le_to_cpu_64(req->tuple_field);
 
 	return 0;
 }
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index df5bba6..47d3586 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -17,7 +17,11 @@
 	ETH_RSS_NONFRAG_IPV6_TCP | \
 	ETH_RSS_NONFRAG_IPV6_UDP | \
 	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L3_SRC_ONLY | \
+	ETH_RSS_L3_DST_ONLY | \
+	ETH_RSS_L4_SRC_ONLY | \
+	ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_KEY_SIZE	40
@@ -30,20 +34,8 @@
 #define HNS3_RSS_HASH_ALGO_SYMMETRIC_TOEP 2
 #define HNS3_RSS_HASH_ALGO_MASK		0xf
 
-#define HNS3_RSS_INPUT_TUPLE_OTHER	GENMASK(3, 0)
-#define HNS3_RSS_INPUT_TUPLE_SCTP	GENMASK(4, 0)
-#define HNS3_IP_FRAG_BIT_MASK		GENMASK(3, 2)
-#define HNS3_IP_OTHER_BIT_MASK		GENMASK(1, 0)
-
 struct hns3_rss_tuple_cfg {
-	uint8_t ipv4_tcp_en;      /* Bit8.0~8.3 */
-	uint8_t ipv4_udp_en;      /* Bit9.0~9.3 */
-	uint8_t ipv4_sctp_en;     /* Bit10.0~10.4 */
-	uint8_t ipv4_fragment_en; /* Bit11.0~11.3 */
-	uint8_t ipv6_tcp_en;      /* Bit12.0~12.3 */
-	uint8_t ipv6_udp_en;      /* Bit13.0~13.3 */
-	uint8_t ipv6_sctp_en;     /* Bit14.0~14.4 */
-	uint8_t ipv6_fragment_en; /* Bit15.0~15.3 */
+	uint64_t rss_tuple_fields;
 };
 
 #define HNS3_RSS_QUEUES_BUFFER_NUM	64 /* Same as the Max rx/tx queue num */
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 13/17] net/hns3: fix config when creating RSS rule after flush
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (11 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 12/17] net/hns3: set RSS hash type input configuration Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 14/17] net/hns3: fix flow RSS queue num with 0 Wei Hu (Xavier)
                     ` (4 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

Currently, when the user creates a flow RSS rule and then flushes the
rule, the driver zeroes the internal structure hw->rss_info maintained in
the driver. If the user then creates a flow RSS rule without specifying an
RSS hash key, the driver configures an invalid RSS hash key into the
hardware network engine, which causes an RSS error. The related steps when
using testpmd are as follows:
  flow 0 <pattern> action rss xx end / end
  flow flush 0
  flow 0 <pattern> action rss queues 0 1 end / end

To solve the preceding problem, the flow flush processing is modified so
that it does not clear all RSS configurations in hw->rss_info. The RSS key
information in the hardware is actually not cleared, so hw->rss_info.key
is kept consistent with the value in the hardware. In addition, because
the redirection table is reset, we need to set queue to NULL and queue_num
to zero to indicate that the corresponding parameters in the hardware are
unavailable. We also set hw->rss_info.func to RTE_ETH_HASH_FUNCTION_MAX to
indicate that it is invalid.

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 40 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 5 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index f4ea47a..6f2ff87 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1293,10 +1293,24 @@ static bool
 hns3_action_rss_same(const struct rte_flow_action_rss *comp,
 		     const struct rte_flow_action_rss *with)
 {
-	return (comp->func == with->func &&
-		comp->level == with->level &&
-		comp->types == with->types &&
-		comp->key_len == with->key_len &&
+	bool func_is_same;
+
+	/*
+	 * When the user flushes all RSS rules, the RSS func is set to the
+	 * invalid value RTE_ETH_HASH_FUNCTION_MAX. Then, when the user
+	 * creates a flow after the flush, any valid RSS func differs from
+	 * the flushed one. Otherwise, when the user creates an RSS action
+	 * with the func specified as RTE_ETH_HASH_FUNCTION_DEFAULT, the func
+	 * is considered the same between consecutive RSS flows.
+	 */
+	if (comp->func == RTE_ETH_HASH_FUNCTION_MAX)
+		func_is_same = false;
+	else
+		func_is_same = (with->func ? (comp->func == with->func) : true);
+
+	return (func_is_same &&
+		comp->types == (with->types & HNS3_ETH_RSS_SUPPORT) &&
+		comp->level == with->level && comp->key_len == with->key_len &&
 		comp->queue_num == with->queue_num &&
 		!memcmp(comp->key, with->key, with->key_len) &&
 		!memcmp(comp->queue, with->queue,
@@ -1589,7 +1603,19 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 				hns3_err(hw, "RSS disable failed(%d)", ret);
 				return ret;
 			}
-			memset(rss_info, 0, sizeof(struct hns3_rss_conf));
+
+			if (rss_flow_conf.queue_num) {
+				/*
+				 * Due the content of queue pointer have been
+				 * reset to 0, the rss_info->conf.queue should
+				 * be set NULL.
+				 */
+				rss_info->conf.queue = NULL;
+				rss_info->conf.queue_num = 0;
+			}
+
+			/* set RSS func invalid after flushed */
+			rss_info->conf.func = RTE_ETH_HASH_FUNCTION_MAX;
 			return 0;
 		}
 		return -EINVAL;
@@ -1651,6 +1677,10 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev)
 	if (hw->rss_info.conf.queue_num == 0)
 		return 0;
 
+	/* When user flush all rules, it doesn't need to restore RSS rule */
+	if (hw->rss_info.conf.func == RTE_ETH_HASH_FUNCTION_MAX)
+		return 0;
+
 	return hns3_config_rss_filter(dev, &hw->rss_info, true);
 }
 
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 14/17] net/hns3: fix flow RSS queue num with 0
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (12 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 13/17] net/hns3: fix config when creating RSS rule after flush Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 15/17] net/hns3: fix flushing RSS rule Wei Hu (Xavier)
                     ` (3 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

When the user specifies an RSS queue num of 0 in the action list via the
flow create API, it should still create a valid flow rule. The following
flow rule should succeed in the command line of the testpmd application:
flow create 0 <pattern> actions rss queues  / end

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 6f2ff87..5b5124c 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1360,10 +1360,9 @@ hns3_parse_rss_filter(struct rte_eth_dev *dev,
 	uint16_t n;
 
 	NEXT_ITEM_OF_ACTION(act, actions, act_index);
-	/* Get configuration args from APP cmdline input */
 	rss = act->conf;
 
-	if (rss == NULL || rss->queue_num == 0) {
+	if (rss == NULL) {
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "no valid queues");
@@ -1537,11 +1536,6 @@ hns3_update_indir_table(struct rte_eth_dev *dev,
 	uint8_t queue_id;
 	uint32_t i;
 
-	if (num == 0) {
-		hns3_err(hw, "No PF queues are configured to enable RSS");
-		return -ENOTSUP;
-	}
-
 	allow_rss_queues = RTE_MIN(dev->data->nb_rx_queues, hw->rss_size_max);
 	/* Fill in redirection table */
 	memcpy(indir_tbl, hw->rss_info.rss_indirection_tbl,
@@ -1632,10 +1626,11 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 	hns3_info(hw, "Max of contiguous %u PF queues are configured", num);
 
 	rte_spinlock_lock(&hw->lock);
-	/* Update redirection talbe of rss */
-	ret = hns3_update_indir_table(dev, &rss_flow_conf, num);
-	if (ret)
-		goto rss_config_err;
+	if (num) {
+		ret = hns3_update_indir_table(dev, &rss_flow_conf, num);
+		if (ret)
+			goto rss_config_err;
+	}
 
 	/* Set hash algorithm and flow types by the user's config */
 	ret = hns3_hw_rss_hash_set(hw, &rss_flow_conf);
@@ -1661,9 +1656,6 @@ hns3_clear_rss_filter(struct rte_eth_dev *dev)
 	struct hns3_adapter *hns = dev->data->dev_private;
 	struct hns3_hw *hw = &hns->hw;
 
-	if (hw->rss_info.conf.queue_num == 0)
-		return 0;
-
 	return hns3_config_rss_filter(dev, &hw->rss_info, false);
 }
 
@@ -1674,9 +1666,6 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev)
 	struct hns3_adapter *hns = dev->data->dev_private;
 	struct hns3_hw *hw = &hns->hw;
 
-	if (hw->rss_info.conf.queue_num == 0)
-		return 0;
-
 	/* When user flush all rules, it doesn't need to restore RSS rule */
 	if (hw->rss_info.conf.func == RTE_ETH_HASH_FUNCTION_MAX)
 		return 0;
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 15/17] net/hns3: fix flushing RSS rule
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (13 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 14/17] net/hns3: fix flow RSS queue num with 0 Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 16/17] net/hns3: fix configuring device with RSS is enabled Wei Hu (Xavier)
                     ` (2 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: Lijun Ou <oulijun@huawei.com>

When the user creates a flow without RSS by calling the rte_flow_create
API and then destroys it by calling the rte_flow_flush API, the driver
should not clear the RSS rule.

A reasonable handling method is that when the user creates RSS rules, the
driver clears the created RSS rules only when flushing destroys all flow
rules. Also, hw->rss_info should save the RSS config of the last
successfully created RSS rule. When n RSS rules are created, RSS should
not be disabled before the last RSS rule is destroyed.
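
For example (testpmd commands shown for illustration only; the patterns,
types and queues are arbitrary), the following sequence should now only
disable RSS in hardware when the flush removes the last RSS rule:
  testpmd> flow create 0 ingress pattern end actions rss types \
  ipv4-tcp end queues 0 1 end / end
  testpmd> flow create 0 ingress pattern end actions rss types \
  ipv6-tcp end queues 2 3 end / end
  testpmd> flow flush 0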

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 79 ++++++++++++++++++++++++++++++++------------
 drivers/net/hns3/hns3_rss.h  |  1 +
 2 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 5b5124c..05cc95e 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1560,7 +1560,9 @@ static int
 hns3_config_rss_filter(struct rte_eth_dev *dev,
 		       const struct hns3_rss_conf *conf, bool add)
 {
+	struct hns3_process_private *process_list = dev->process_private;
 	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_hw *hw = &hns->hw;
 	struct hns3_rss_conf *rss_info;
 	uint64_t flow_types;
@@ -1591,28 +1593,27 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 
 	rss_info = &hw->rss_info;
 	if (!add) {
-		if (hns3_action_rss_same(&rss_info->conf, &rss_flow_conf)) {
-			ret = hns3_disable_rss(hw);
-			if (ret) {
-				hns3_err(hw, "RSS disable failed(%d)", ret);
-				return ret;
-			}
+		if (!conf->valid)
+			return 0;
 
-			if (rss_flow_conf.queue_num) {
-				/*
-				 * Due the content of queue pointer have been
-				 * reset to 0, the rss_info->conf.queue should
-				 * be set NULL.
-				 */
-				rss_info->conf.queue = NULL;
-				rss_info->conf.queue_num = 0;
-			}
+		ret = hns3_disable_rss(hw);
+		if (ret) {
+			hns3_err(hw, "RSS disable failed(%d)", ret);
+			return ret;
+		}
 
-			/* set RSS func invalid after flushed */
-			rss_info->conf.func = RTE_ETH_HASH_FUNCTION_MAX;
-			return 0;
+		if (rss_flow_conf.queue_num) {
+			/*
+			 * Because the content of the queue pointer has been
+			 * reset to 0, rss_info->conf.queue should be set to NULL
+			 */
+			rss_info->conf.queue = NULL;
+			rss_info->conf.queue_num = 0;
 		}
-		return -EINVAL;
+
+		/* set RSS func invalid after flushed */
+		rss_info->conf.func = RTE_ETH_HASH_FUNCTION_MAX;
+		return 0;
 	}
 
 	/* Get rx queues num */
@@ -1643,6 +1644,13 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 		goto rss_config_err;
 	}
 
+	/*
+	 * When create a new RSS rule, the old rule will be overlaid and set
+	 * invalid.
+	 */
+	TAILQ_FOREACH(rss_filter_ptr, &process_list->filter_rss_list, entries)
+		rss_filter_ptr->filter_info.valid = false;
+
 rss_config_err:
 	rte_spinlock_unlock(&hw->lock);
 
@@ -1653,10 +1661,36 @@ hns3_config_rss_filter(struct rte_eth_dev *dev,
 static int
 hns3_clear_rss_filter(struct rte_eth_dev *dev)
 {
+	struct hns3_process_private *process_list = dev->process_private;
 	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_hw *hw = &hns->hw;
+	int rss_rule_succ_cnt = 0; /* count for success of clearing RSS rules */
+	int rss_rule_fail_cnt = 0; /* count for failure of clearing RSS rules */
+	int ret = 0;
+
+	rss_filter_ptr = TAILQ_FIRST(&process_list->filter_rss_list);
+	while (rss_filter_ptr) {
+		TAILQ_REMOVE(&process_list->filter_rss_list, rss_filter_ptr,
+			     entries);
+		ret = hns3_config_rss_filter(dev, &rss_filter_ptr->filter_info,
+					     false);
+		if (ret)
+			rss_rule_fail_cnt++;
+		else
+			rss_rule_succ_cnt++;
+		rte_free(rss_filter_ptr);
+		rss_filter_ptr = TAILQ_FIRST(&process_list->filter_rss_list);
+	}
 
-	return hns3_config_rss_filter(dev, &hw->rss_info, false);
+	if (rss_rule_fail_cnt) {
+		hns3_err(hw, "fail to delete all RSS filters, success num = %d "
+			     "fail num = %d", rss_rule_succ_cnt,
+			     rss_rule_fail_cnt);
+		ret = -EIO;
+	}
+
+	return ret;
 }
 
 /* Restore the rss filter */
@@ -1807,6 +1841,7 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		}
 		memcpy(&rss_filter_ptr->filter_info, rss_conf,
 			sizeof(struct hns3_rss_conf));
+		rss_filter_ptr->filter_info.valid = true;
 		TAILQ_INSERT_TAIL(&process_list->filter_rss_list,
 				  rss_filter_ptr, entries);
 
@@ -1872,7 +1907,6 @@ hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 	struct hns3_fdir_rule_ele *fdir_rule_ptr;
 	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_flow_mem *flow_node;
-	struct hns3_hw *hw = &hns->hw;
 	enum rte_filter_type filter_type;
 	struct hns3_fdir_rule fdir_rule;
 	int ret;
@@ -1902,7 +1936,8 @@ hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		break;
 	case RTE_ETH_FILTER_HASH:
 		rss_filter_ptr = (struct hns3_rss_conf_ele *)flow->rule;
-		ret = hns3_config_rss_filter(dev, &hw->rss_info, false);
+		ret = hns3_config_rss_filter(dev, &rss_filter_ptr->filter_info,
+					     false);
 		if (ret)
 			return rte_flow_error_set(error, EIO,
 						  RTE_FLOW_ERROR_TYPE_HANDLE,
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 47d3586..8fa1b30 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -47,6 +47,7 @@ struct hns3_rss_conf {
 	struct hns3_rss_tuple_cfg rss_tuple_sets;
 	uint8_t rss_indirection_tbl[HNS3_RSS_IND_TBL_SIZE]; /* Shadow table */
 	uint16_t queue[HNS3_RSS_QUEUES_BUFFER_NUM]; /* Queues indices to use */
+	bool valid; /* check if RSS rule is valid */
 };
 
 #ifndef ilog2
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 16/17] net/hns3: fix configuring device with RSS is enabled
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (14 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 15/17] net/hns3: fix flushing RSS rule Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 17/17] net/hns3: fix storing RSS info when creating flow action Wei Hu (Xavier)
  2020-09-28 13:29   ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Ferruh Yigit
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently, when running the following commands in the CLI of the testpmd
application, the driver reports an -EINVAL error when performing step 3.
1) flow create 0 ingress pattern end actions rss key <key> func simple_xor
     types all end / end
2) flow flush 0
3) port config dcb vt off pfc off

The root cause is as follows:
In step 2, when the RSS rules are flushed, we set the flag
hw->rss_dis_flag to true to indicate that RSS is disabled. Then in step 3,
when calling the rte_eth_dev_configure API function, the internal function
named hns3_dev_rss_hash_update checks that hw->rss_dis_flag is true and
returns -EINVAL.

When the user calls the rte_eth_dev_configure API function with the input
parameter dev_conf->rxmode.mq_mode containing ETH_MQ_RX_RSS_FLAG to enable
RSS, the driver should set the internal flag hw->rss_dis_flag to false to
indicate that RSS is enabled, in the '.dev_configure' ops implementation
functions named hns3_dev_configure and hns3vf_dev_configure.
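
A minimal sketch of this configuration path (port id, queue counts and
RSS types are placeholders for illustration; other rte_eth_conf fields
are left at their defaults):

	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL,	/* use the driver default key */
			.rss_hf = ETH_RSS_IP | ETH_RSS_TCP,
		},
	};

	/* mq_mode contains ETH_MQ_RX_RSS_FLAG, so RSS is (re-)enabled */
	int ret = rte_eth_dev_configure(0, 4, 4, &port_conf);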

Fixes: 5e782bc2570c ("net/hns3: fix configuring RSS hash when rules are flushed")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    | 1 +
 drivers/net/hns3/hns3_ethdev_vf.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 10cfc5d..99bcc7a 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2326,6 +2326,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
+		hw->rss_dis_flag = false;
 		if (rss_conf.rss_key == NULL) {
 			rss_conf.rss_key = rss_cfg->key;
 			rss_conf.rss_key_len = HNS3_RSS_KEY_SIZE;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index cb2747b..4c73441 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -783,6 +783,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	/* When RSS is not configured, redirect the packet queue 0 */
 	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		if (rss_conf.rss_key == NULL) {
 			rss_conf.rss_key = rss_cfg->key;
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [dpdk-dev] [PATCH v2 17/17] net/hns3: fix storing RSS info when creating flow action
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (15 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 16/17] net/hns3: fix configuring device with RSS is enabled Wei Hu (Xavier)
@ 2020-09-22 12:03   ` Wei Hu (Xavier)
  2020-09-28 13:29   ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Ferruh Yigit
  17 siblings, 0 replies; 37+ messages in thread
From: Wei Hu (Xavier) @ 2020-09-22 12:03 UTC (permalink / raw)
  To: dev; +Cc: xavier.huwei

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

Currently, when calling the rte_flow_query API function to query the RSS
information, the queue related information is not as expected.

The root cause is that when the application calls the rte_flow_create API
function to create an RSS action, the operation of storing the data whose
type is struct rte_flow_action_rss is incorrect in the '.create' ops
implementation function named hns3_flow_create.

This patch fixes it by replacing memcpy with the hns3_rss_conf_copy
function to store the RSS information in hns3_flow_create.

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
 drivers/net/hns3/hns3_flow.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 05cc95e..2cdfb68 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1839,8 +1839,8 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			ret = -ENOMEM;
 			goto err;
 		}
-		memcpy(&rss_filter_ptr->filter_info, rss_conf,
-			sizeof(struct hns3_rss_conf));
+		hns3_rss_conf_copy(&rss_filter_ptr->filter_info,
+				   &rss_conf->conf);
 		rss_filter_ptr->filter_info.valid = true;
 		TAILQ_INSERT_TAIL(&process_list->filter_rss_list,
 				  rss_filter_ptr, entries);
-- 
2.9.5


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver
  2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
                     ` (16 preceding siblings ...)
  2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 17/17] net/hns3: fix storing RSS info when creating flow action Wei Hu (Xavier)
@ 2020-09-28 13:29   ` Ferruh Yigit
  17 siblings, 0 replies; 37+ messages in thread
From: Ferruh Yigit @ 2020-09-28 13:29 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev; +Cc: xavier.huwei

On 9/22/2020 1:03 PM, Wei Hu (Xavier) wrote:
> This series are updates for hns3 PMD driver.
> 
> Chengchang Tang (2):
>    net/hns3: fix default VLAN won't be deleted when set PF PVID
>    net/hns3: add default branch to switch in Rx VLAN processing
> 
> Chengwen Feng (1):
>    net/hns3: support flow action of queue region
> 
> Hongbo Zheng (3):
>    net/hns3: add max number of segs compatibility
>    net/hns3: avoid accessing nonexistent VF reg when PF in FLR
>    net/hns3: add break to exit loop when err stat item found
> 
> Lijun Ou (5):
>    net/hns3: support querying RSS flow rule
>    net/hns3: check input RSS type when creating flow with RSS
>    net/hns3: set RSS hash type input configuration
>    net/hns3: fix config when creating RSS rule after flush
>    net/hns3: fix flushing RSS rule
> 
> Wei Hu (Xavier) (6):
>    net/hns3: add VLAN configuration compatibility
>    net/hns3: add TSO pseudo header calculation compatibility
>    net/hns3: add default branch to switch when parsing fd tuple
>    net/hns3: fix flow RSS queue num with 0
>    net/hns3: fix configuring device with RSS is enabled
>    net/hns3: fix storing RSS info when creating flow action
> 

Series applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2020-09-28 13:29 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-22  8:53 [dpdk-dev] [PATCH 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 03/17] net/hns3: add default branch to switch in Rx VLAN processing Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 04/17] net/hns3: add max number of segs compatibility Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 05/17] net/hns3: add TSO pseudo header calculation compatibility Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 07/17] net/hns3: add default branch to switch when parsing fd tuple Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 08/17] net/hns3: add break to exit loop when err stat item found Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 09/17] net/hns3: support flow action of queue region Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 10/17] net/hns3: support querying RSS flow rule Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 11/17] net/hns3: check input RSS type when creating flow with RSS Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 12/17] net/hns3: set RSS hash type input configuration Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 13/17] net/hns3: fix config when creating RSS rule after flush Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 14/17] net/hns3: fix flow RSS queue num with 0 Wei Hu (Xavier)
2020-09-22  8:53 ` [dpdk-dev] [PATCH 15/17] net/hns3: fix flushing RSS rule Wei Hu (Xavier)
2020-09-22  8:54 ` [dpdk-dev] [PATCH 16/17] net/hns3: fix configuring device with RSS is enabled Wei Hu (Xavier)
2020-09-22  8:54 ` [dpdk-dev] [PATCH 17/17] net/hns3: fix storing RSS info when creating flow action Wei Hu (Xavier)
2020-09-22 12:03 ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 01/17] net/hns3: add VLAN configuration compatibility Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 02/17] net/hns3: fix default VLAN won't be deleted when set PF PVID Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 03/17] net/hns3: add default branch to switch in Rx VLAN processing Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 04/17] net/hns3: add max number of segs compatibility Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 05/17] net/hns3: add TSO pseudo header calculation compatibility Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 06/17] net/hns3: avoid accessing nonexistent VF reg when PF in FLR Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 07/17] net/hns3: add default branch to switch when parsing fd tuple Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 08/17] net/hns3: add break to exit loop when err stat item found Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 09/17] net/hns3: support flow action of queue region Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 10/17] net/hns3: support querying RSS flow rule Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 11/17] net/hns3: check input RSS type when creating flow with RSS Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 12/17] net/hns3: set RSS hash type input configuration Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 13/17] net/hns3: fix config when creating RSS rule after flush Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 14/17] net/hns3: fix flow RSS queue num with 0 Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 15/17] net/hns3: fix flushing RSS rule Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 16/17] net/hns3: fix configuring device with RSS is enabled Wei Hu (Xavier)
2020-09-22 12:03   ` [dpdk-dev] [PATCH v2 17/17] net/hns3: fix storing RSS info when creating flow action Wei Hu (Xavier)
2020-09-28 13:29   ` [dpdk-dev] [PATCH v2 00/17] updates for hns3 PMD driver Ferruh Yigit
