* [dpdk-dev] [PATCH 1/7] net/hns3: check max SIMD bitwidth
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
@ 2021-04-17 9:54 ` Min Hu (Connor)
2021-04-20 0:16 ` Ferruh Yigit
2021-04-17 9:54 ` [dpdk-dev] [PATCH 2/7] net/hns3: Rx vector add compile-time verify Min Hu (Connor)
` (6 subsequent siblings)
7 siblings, 1 reply; 10+ messages in thread
From: Min Hu (Connor) @ 2021-04-17 9:54 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
From: Chengwen Feng <fengchengwen@huawei.com>
This patch checks the max SIMD bitwidth when choosing the NEON and SVE
vector paths.
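For context, here is a minimal standalone sketch of the resulting selection
logic, assuming an arm64 DPDK build; the helper names (vec_neon_allowed,
vec_sve_allowed) are illustrative, while the APIs mirror those used in the
diff below. The max SIMD bitwidth is whatever the application or the EAL
option --force-max-simd-bitwidth permits, so both that limit and the CPU
capability must allow a path before it is taken:

#include <stdbool.h>
#include <rte_cpuflags.h>
#include <rte_vect.h>

/* Illustrative helper: the NEON path is allowed only if at least
 * 128-bit SIMD is permitted and the CPU implements NEON.
 */
static bool
vec_neon_allowed(void)
{
#if defined(RTE_ARCH_ARM64)
	if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
		return false;
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON))
		return true;
#endif
	return false;
}

/* Illustrative helper: the SVE path additionally requires a 256-bit
 * SIMD allowance plus SVE support at compile time and run time.
 */
static bool
vec_sve_allowed(void)
{
#if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_SVE)
	if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_256)
		return false;
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SVE))
		return true;
#endif
	return false;
}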
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_rxtx.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 9bb30fc..f2022a5 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -13,6 +13,7 @@
#include <rte_malloc.h>
#if defined(RTE_ARCH_ARM64)
#include <rte_cpuflags.h>
+#include <rte_vect.h>
#endif
#include "hns3_ethdev.h"
@@ -2800,6 +2801,8 @@ static bool
hns3_get_default_vec_support(void)
{
#if defined(RTE_ARCH_ARM64)
+ if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
+ return false;
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON))
return true;
#endif
@@ -2810,6 +2813,8 @@ static bool
hns3_get_sve_support(void)
{
#if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_SVE)
+ if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_256)
+ return false;
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SVE))
return true;
#endif
--
2.7.4
* [dpdk-dev] [PATCH 2/7] net/hns3: Rx vector add compile-time verify
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 1/7] net/hns3: check max SIMD bitwidth Min Hu (Connor)
@ 2021-04-17 9:54 ` Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 3/7] net/hns3: remove redundant code about mbx Min Hu (Connor)
` (5 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Min Hu (Connor) @ 2021-04-17 9:54 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
From: Chengwen Feng <fengchengwen@huawei.com>
The Rx vector implementation depends on the layout of the mbuf fields
(such as rearm_data and rx_descriptor_fields1), so this patch adds
compile-time verification of that layout.
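As a hedged, self-contained sketch of the compile-time idiom (the function
name mbuf_layout_sanity_check is illustrative; the assertions themselves are
taken from the diff below), RTE_BUILD_BUG_ON() breaks the build as soon as
the rte_mbuf layout stops matching what the vector code assumes:

#include <stddef.h>
#include <rte_common.h>
#include <rte_mbuf.h>

static inline void
mbuf_layout_sanity_check(void)	/* illustrative name */
{
	/* The vector Rx path writes rearm_data as one 8-byte unit, so
	 * data_off..port must all sit within its first 8 bytes.
	 */
	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) <
			 offsetof(struct rte_mbuf, rearm_data));
	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
			 offsetof(struct rte_mbuf, rearm_data) > 6);
	/* The descriptor-to-mbuf shuffle relies on pkt_len/data_len
	 * keeping their offsets within rx_descriptor_fields1.
	 */
	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
}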
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_rxtx_vec.c | 22 ++++++++++++++++++++++
drivers/net/hns3/hns3_rxtx_vec_neon.h | 8 ++++++++
drivers/net/hns3/hns3_rxtx_vec_sve.c | 6 ++++++
3 files changed, 36 insertions(+)
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index 08a86e0..dc1e1ae 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -147,6 +147,28 @@ hns3_rxq_vec_setup_rearm_data(struct hns3_rx_queue *rxq)
mb_def.port = rxq->port_id;
rte_mbuf_refcnt_set(&mb_def, 1);
+ /* compile-time verifies the rearm_data first 8bytes */
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) <
+ offsetof(struct rte_mbuf, rearm_data));
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) <
+ offsetof(struct rte_mbuf, rearm_data));
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) <
+ offsetof(struct rte_mbuf, rearm_data));
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) <
+ offsetof(struct rte_mbuf, rearm_data));
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) <
+ offsetof(struct rte_mbuf, rearm_data));
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) -
+ offsetof(struct rte_mbuf, rearm_data) > 6);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
+ offsetof(struct rte_mbuf, rearm_data) > 6);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
+ offsetof(struct rte_mbuf, rearm_data) > 6);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
+ offsetof(struct rte_mbuf, rearm_data) > 6);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
+ offsetof(struct rte_mbuf, rearm_data) > 6);
+
/* prevent compiler reordering: rearm_data covers previous fields */
rte_compiler_barrier();
p = (uintptr_t)&mb_def.rearm_data;
diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h
index 14d6fb0..69af7b3 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_neon.h
+++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h
@@ -161,6 +161,14 @@ hns3_recv_burst_vec(struct hns3_rx_queue *__restrict rxq,
0, 0, 0, /* ignore non-length fields */
};
+ /* compile-time verifies the shuffle mask */
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash.rss) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+
for (pos = 0; pos < nb_pkts; pos += HNS3_DEFAULT_DESCS_PER_LOOP,
rxdp += HNS3_DEFAULT_DESCS_PER_LOOP) {
uint64x2x2_t descs[HNS3_DEFAULT_DESCS_PER_LOOP];
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index 2eaf692..1fd87ca 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -122,6 +122,12 @@ hns3_recv_burst_vec_sve(struct hns3_rx_queue *__restrict rxq,
svuint32_t rss_tbl1 = svld1_u32(PG32_256BIT, rss_adjust);
svuint32_t rss_tbl2 = svld1_u32(PG32_256BIT, &rss_adjust[8]);
+ /* compile-time verifies the xlen_adjust mask */
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+ offsetof(struct rte_mbuf, pkt_len) + 4);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+ offsetof(struct rte_mbuf, data_len) + 2);
+
for (pos = 0; pos < nb_pkts; pos += HNS3_SVE_DEFAULT_DESCS_PER_LOOP,
rxdp += HNS3_SVE_DEFAULT_DESCS_PER_LOOP) {
svuint64_t vld_clz, mbp1st, mbp2st, mbuf_init;
--
2.7.4
* [dpdk-dev] [PATCH 3/7] net/hns3: remove redundant code about mbx
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 1/7] net/hns3: check max SIMD bitwidth Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 2/7] net/hns3: Rx vector add compile-time verify Min Hu (Connor)
@ 2021-04-17 9:54 ` Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 4/7] net/hns3: fix the check for DCB mode Min Hu (Connor)
` (4 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Min Hu (Connor) @ 2021-04-17 9:54 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
From: Huisong Li <lihuisong@huawei.com>
Some mailbox messages do not need a reply with data. In this case,
there is no need to set the response data address and the response
length.
This patch removes this redundant code from the mailbox messages that
do not require a reply.
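For illustration only (this simply restates the calling convention visible
in the diff below): when a mailbox message expects no reply payload, the
driver-internal hns3_send_mbx_msg() helper is now called with a NULL
response buffer and zero length instead of a dummy stack variable:

	/* Before: a dummy response buffer, although the reply carries no data. */
	uint8_t resp_msg;
	ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0,
				false, &resp_msg, sizeof(resp_msg));

	/* After: no reply data is expected, so pass NULL and length 0. */
	ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0,
				false, NULL, 0);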
Fixes: a5475d61fa34 ("net/hns3: support VF")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_ethdev_vf.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 5770c47..58809fb 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1511,7 +1511,6 @@ static void
hns3vf_request_link_info(struct hns3_hw *hw)
{
struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw);
- uint8_t resp_msg;
bool send_req;
int ret;
@@ -1524,7 +1523,7 @@ hns3vf_request_link_info(struct hns3_hw *hw)
return;
ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false,
- &resp_msg, sizeof(resp_msg));
+ NULL, 0);
if (ret) {
hns3_err(hw, "failed to fetch link status, ret = %d", ret);
return;
@@ -1756,11 +1755,10 @@ hns3vf_keep_alive_handler(void *param)
struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
struct hns3_adapter *hns = eth_dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
- uint8_t respmsg;
int ret;
ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0,
- false, &respmsg, sizeof(uint8_t));
+ false, NULL, 0);
if (ret)
hns3_err(hw, "VF sends keeping alive cmd failed(=%d)",
ret);
--
2.7.4
* [dpdk-dev] [PATCH 4/7] net/hns3: fix the check for DCB mode
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
` (2 preceding siblings ...)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 3/7] net/hns3: remove redundant code about mbx Min Hu (Connor)
@ 2021-04-17 9:54 ` Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 5/7] net/hns3: fix the check for VMDq mode Min Hu (Connor)
` (3 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Min Hu (Connor) @ 2021-04-17 9:54 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
From: Huisong Li <lihuisong@huawei.com>
Currently, "ONLY DCB" and "DCB+RSS" mode are both supported by HNS3
PF driver. But the driver verifies only the "DCB+RSS" multiple queues
mode.
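To see why a flag test is the right check, note that in the ethdev
multi-queue enum of this DPDK release both ETH_MQ_RX_DCB and
ETH_MQ_RX_DCB_RSS carry ETH_MQ_RX_DCB_FLAG, so masking with the flag covers
both modes where the old equality test matched only DCB+RSS. A minimal
sketch (the helper name rx_mode_uses_dcb is illustrative):

#include <stdbool.h>
#include <rte_ethdev.h>

/* True for both the DCB-only and the DCB+RSS Rx multi-queue modes. */
static bool
rx_mode_uses_dcb(enum rte_eth_rx_mq_mode rx_mq_mode)
{
	return (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) != 0;
}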
Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 1832d26..b705a72 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2241,7 +2241,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rx_mq_mode == ETH_MQ_RX_DCB_RSS) {
+ if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
if (dcb_rx_conf->nb_tcs > pf->tc_max) {
hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
dcb_rx_conf->nb_tcs, pf->tc_max);
--
2.7.4
* [dpdk-dev] [PATCH 5/7] net/hns3: fix the check for VMDq mode
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
` (3 preceding siblings ...)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 4/7] net/hns3: fix the check for DCB mode Min Hu (Connor)
@ 2021-04-17 9:54 ` Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 6/7] net/hns3: fix FDIR lock bug Min Hu (Connor)
` (2 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: Min Hu (Connor) @ 2021-04-17 9:54 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
From: Huisong Li <lihuisong@huawei.com>
The HNS3 PF driver supports only the RSS, DCB and NONE multiple-queue
modes. Currently, the driver does not fully verify the VMDq multi-queue
mode. This patch fixes the verification of VMDq mode.
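A hedged sketch of the combined rejection, matching the diff below (the
helper name mq_mode_uses_vmdq is illustrative); the Rx VMDq flag covers
VMDQ_ONLY, VMDQ_DCB and VMDQ_DCB_RSS in a single test, and the two Tx VMDq
modes are listed explicitly:

#include <stdbool.h>
#include <rte_ethdev.h>

static bool
mq_mode_uses_vmdq(enum rte_eth_rx_mq_mode rx_mq_mode,
		  enum rte_eth_tx_mq_mode tx_mq_mode)
{
	return (rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
	       tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
	       tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY;
}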
Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 28 ++++++++++++----------------
1 file changed, 12 insertions(+), 16 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index b705a72..c276c6b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2224,23 +2224,16 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
int max_tc = 0;
int i;
- dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
-
- if (rx_mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
- hns3_err(hw, "ETH_MQ_RX_VMDQ_DCB_RSS is not supported. "
- "rx_mq_mode = %d", rx_mq_mode);
- return -EINVAL;
- }
-
- if (rx_mq_mode == ETH_MQ_RX_VMDQ_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
- hns3_err(hw, "ETH_MQ_RX_VMDQ_DCB and ETH_MQ_TX_VMDQ_DCB "
- "is not supported. rx_mq_mode = %d, tx_mq_mode = %d",
+ if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
+ (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
+ tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+ hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
rx_mq_mode, tx_mq_mode);
- return -EINVAL;
+ return -EOPNOTSUPP;
}
+ dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+ dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
if (dcb_rx_conf->nb_tcs > pf->tc_max) {
hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
@@ -2299,8 +2292,7 @@ hns3_check_dcb_cfg(struct rte_eth_dev *dev)
return -EOPNOTSUPP;
}
- /* Check multiple queue mode */
- return hns3_check_mq_mode(dev);
+ return 0;
}
static int
@@ -2473,6 +2465,10 @@ hns3_dev_configure(struct rte_eth_dev *dev)
}
hw->adapter_state = HNS3_NIC_CONFIGURING;
+ ret = hns3_check_mq_mode(dev);
+ if (ret)
+ goto cfg_err;
+
if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
ret = hns3_check_dcb_cfg(dev);
if (ret)
--
2.7.4
* [dpdk-dev] [PATCH 6/7] net/hns3: fix FDIR lock bug
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
` (4 preceding siblings ...)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 5/7] net/hns3: fix the check for VMDq mode Min Hu (Connor)
@ 2021-04-17 9:54 ` Min Hu (Connor)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 7/7] net/hns3: move the link speeds check to dev configure Min Hu (Connor)
2021-04-20 0:42 ` [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Ferruh Yigit
7 siblings, 0 replies; 10+ messages in thread
From: Min Hu (Connor) @ 2021-04-17 9:54 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
From: Chengwen Feng <fengchengwen@huawei.com>
Currently, the fdir lock is used to protect concurrent access from
multiple processes, but it has the following problems:
1) There is no protection for the fdir reset recovery path.
2) Only part of the data is protected; e.g. the filter list is not
protected.
This patch uses the following scheme (a sketch follows this list):
1) Delete the fdir lock.
2) Add a flow lock that provides protection at the rte_flow driver ops
API level.
3) Declare support for RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE.
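A minimal sketch of the locking scheme, assuming the driver's internal
headers (struct hns3_hw, HNS3_DEV_PRIVATE_TO_HW and the rte_flow op
implementations shown in the diff below); the names flow_lock_init and
flow_flush_wrap are illustrative. The mutex lives in hw, which sits in
shared device private data, and is created process-shared so rte_flow calls
from secondary processes serialize against the primary:

#include <pthread.h>
/* Plus the driver's own headers, e.g. hns3_ethdev.h, for the types below. */

/* Called once from the primary process during device init. */
static void
flow_lock_init(struct rte_eth_dev *dev, struct hns3_hw *hw)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
	pthread_mutex_init(&hw->flows_lock, &attr);
	/* Advertise that the driver's flow ops are safe to call concurrently. */
	dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE;
}

/* Every rte_flow driver op is wrapped in the same way. */
static int
flow_flush_wrap(struct rte_eth_dev *dev, struct rte_flow_error *error)
{
	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
	int ret;

	pthread_mutex_lock(&hw->flows_lock);
	ret = hns3_flow_flush(dev, error);	/* the unwrapped op */
	pthread_mutex_unlock(&hw->flows_lock);

	return ret;
}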
Fixes: fcba820d9b9e ("net/hns3: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 4 +-
drivers/net/hns3/hns3_ethdev.h | 4 ++
drivers/net/hns3/hns3_ethdev_vf.c | 3 +-
drivers/net/hns3/hns3_fdir.c | 28 ++++++------
drivers/net/hns3/hns3_fdir.h | 3 +-
drivers/net/hns3/hns3_flow.c | 96 ++++++++++++++++++++++++++++++++++++---
6 files changed, 111 insertions(+), 27 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index c276c6b..bcbebd0 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -7334,8 +7334,8 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
PMD_INIT_LOG(ERR, "Failed to alloc memory for process private");
return -ENOMEM;
}
- /* initialize flow filter lists */
- hns3_filterlist_init(eth_dev);
+
+ hns3_flow_init(eth_dev);
hns3_set_rxtx_function(eth_dev);
eth_dev->dev_ops = &hns3_eth_dev_ops;
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 7b7d359..b1360cb 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -5,6 +5,7 @@
#ifndef _HNS3_ETHDEV_H_
#define _HNS3_ETHDEV_H_
+#include <pthread.h>
#include <sys/time.h>
#include <ethdev_driver.h>
#include <rte_byteorder.h>
@@ -613,6 +614,9 @@ struct hns3_hw {
uint8_t udp_cksum_mode;
struct hns3_port_base_vlan_config port_base_vlan_cfg;
+
+ pthread_mutex_t flows_lock; /* rte_flow ops lock */
+
/*
* PMD setup and configuration is not thread safe. Since it is not
* performance sensitive, it is better to guarantee thread-safety
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 58809fb..cf2b74e 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -2910,8 +2910,7 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* initialize flow filter lists */
- hns3_filterlist_init(eth_dev);
+ hns3_flow_init(eth_dev);
hns3_set_rxtx_function(eth_dev);
eth_dev->dev_ops = &hns3vf_eth_dev_ops;
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index 603cc82..87c1aef 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -830,7 +830,6 @@ int hns3_fdir_filter_init(struct hns3_adapter *hns)
fdir_hash_params.socket_id = rte_socket_id();
TAILQ_INIT(&fdir_info->fdir_list);
- rte_spinlock_init(&fdir_info->flows_lock);
snprintf(fdir_hash_name, RTE_HASH_NAMESIZE, "%s", hns->hw.data->name);
fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
if (fdir_info->hash_handle == NULL) {
@@ -856,7 +855,6 @@ void hns3_fdir_filter_uninit(struct hns3_adapter *hns)
struct hns3_fdir_info *fdir_info = &pf->fdir;
struct hns3_fdir_rule_ele *fdir_filter;
- rte_spinlock_lock(&fdir_info->flows_lock);
if (fdir_info->hash_map) {
rte_free(fdir_info->hash_map);
fdir_info->hash_map = NULL;
@@ -865,7 +863,6 @@ void hns3_fdir_filter_uninit(struct hns3_adapter *hns)
rte_hash_free(fdir_info->hash_handle);
fdir_info->hash_handle = NULL;
}
- rte_spinlock_unlock(&fdir_info->flows_lock);
fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list);
while (fdir_filter) {
@@ -891,10 +888,8 @@ static int hns3_fdir_filter_lookup(struct hns3_fdir_info *fdir_info,
hash_sig_t sig;
int ret;
- rte_spinlock_lock(&fdir_info->flows_lock);
sig = rte_hash_crc(key, sizeof(*key), 0);
ret = rte_hash_lookup_with_hash(fdir_info->hash_handle, key, sig);
- rte_spinlock_unlock(&fdir_info->flows_lock);
return ret;
}
@@ -908,11 +903,9 @@ static int hns3_insert_fdir_filter(struct hns3_hw *hw,
int ret;
key = &fdir_filter->fdir_conf.key_conf;
- rte_spinlock_lock(&fdir_info->flows_lock);
sig = rte_hash_crc(key, sizeof(*key), 0);
ret = rte_hash_add_key_with_hash(fdir_info->hash_handle, key, sig);
if (ret < 0) {
- rte_spinlock_unlock(&fdir_info->flows_lock);
hns3_err(hw, "Hash table full? err:%d(%s)!", ret,
strerror(-ret));
return ret;
@@ -920,7 +913,6 @@ static int hns3_insert_fdir_filter(struct hns3_hw *hw,
fdir_info->hash_map[ret] = fdir_filter;
TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
- rte_spinlock_unlock(&fdir_info->flows_lock);
return ret;
}
@@ -933,11 +925,9 @@ static int hns3_remove_fdir_filter(struct hns3_hw *hw,
hash_sig_t sig;
int ret;
- rte_spinlock_lock(&fdir_info->flows_lock);
sig = rte_hash_crc(key, sizeof(*key), 0);
ret = rte_hash_del_key_with_hash(fdir_info->hash_handle, key, sig);
if (ret < 0) {
- rte_spinlock_unlock(&fdir_info->flows_lock);
hns3_err(hw, "Delete hash key fail ret=%d", ret);
return ret;
}
@@ -945,7 +935,6 @@ static int hns3_remove_fdir_filter(struct hns3_hw *hw,
fdir_filter = fdir_info->hash_map[ret];
fdir_info->hash_map[ret] = NULL;
TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
- rte_spinlock_unlock(&fdir_info->flows_lock);
rte_free(fdir_filter);
@@ -1000,11 +989,9 @@ int hns3_fdir_filter_program(struct hns3_adapter *hns,
rule->location = ret;
node->fdir_conf.location = ret;
- rte_spinlock_lock(&fdir_info->flows_lock);
ret = hns3_config_action(hw, rule);
if (!ret)
ret = hns3_config_key(hns, rule);
- rte_spinlock_unlock(&fdir_info->flows_lock);
if (ret) {
hns3_err(hw, "Failed to config fdir: %u src_ip:%x dst_ip:%x "
"src_port:%u dst_port:%u ret = %d",
@@ -1029,9 +1016,7 @@ int hns3_clear_all_fdir_filter(struct hns3_adapter *hns)
int ret = 0;
/* flush flow director */
- rte_spinlock_lock(&fdir_info->flows_lock);
rte_hash_reset(fdir_info->hash_handle);
- rte_spinlock_unlock(&fdir_info->flows_lock);
fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list);
while (fdir_filter) {
@@ -1059,6 +1044,17 @@ int hns3_restore_all_fdir_filter(struct hns3_adapter *hns)
bool err = false;
int ret;
+ /*
+ * This API is called in the reset recovery process, the parent function
+ * must hold hw->lock.
+ * There maybe deadlock if acquire hw->flows_lock directly because rte
+ * flow driver ops first acquire hw->flows_lock and then may acquire
+ * hw->lock.
+ * So here first release the hw->lock and then acquire the
+ * hw->flows_lock to avoid deadlock.
+ */
+ rte_spinlock_unlock(&hw->lock);
+ pthread_mutex_lock(&hw->flows_lock);
TAILQ_FOREACH(fdir_filter, &fdir_info->fdir_list, entries) {
ret = hns3_config_action(hw, &fdir_filter->fdir_conf);
if (!ret)
@@ -1069,6 +1065,8 @@ int hns3_restore_all_fdir_filter(struct hns3_adapter *hns)
break;
}
}
+ pthread_mutex_unlock(&hw->flows_lock);
+ rte_spinlock_lock(&hw->lock);
if (err) {
hns3_err(hw, "Fail to restore FDIR filter, ret = %d", ret);
diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h
index 266d37c..fc62daa 100644
--- a/drivers/net/hns3/hns3_fdir.h
+++ b/drivers/net/hns3/hns3_fdir.h
@@ -199,7 +199,6 @@ struct hns3_process_private {
* A structure used to define fields of a FDIR related info.
*/
struct hns3_fdir_info {
- rte_spinlock_t flows_lock;
struct hns3_fdir_rule_list fdir_list;
struct hns3_fdir_rule_ele **hash_map;
struct rte_hash *hash_handle;
@@ -220,7 +219,7 @@ int hns3_fdir_filter_program(struct hns3_adapter *hns,
struct hns3_fdir_rule *rule, bool del);
int hns3_clear_all_fdir_filter(struct hns3_adapter *hns);
int hns3_get_count(struct hns3_hw *hw, uint32_t id, uint64_t *value);
-void hns3_filterlist_init(struct rte_eth_dev *dev);
+void hns3_flow_init(struct rte_eth_dev *dev);
int hns3_restore_all_fdir_filter(struct hns3_adapter *hns);
#endif /* _HNS3_FDIR_H_ */
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 098b191..4511a49 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1214,9 +1214,18 @@ hns3_parse_fdir_filter(struct rte_eth_dev *dev,
}
void
-hns3_filterlist_init(struct rte_eth_dev *dev)
+hns3_flow_init(struct rte_eth_dev *dev)
{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct hns3_process_private *process_list = dev->process_private;
+ pthread_mutexattr_t attr;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ pthread_mutexattr_init(&attr);
+ pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
+ pthread_mutex_init(&hw->flows_lock, &attr);
+ dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE;
+ }
TAILQ_INIT(&process_list->fdir_list);
TAILQ_INIT(&process_list->filter_rss_list);
@@ -2002,12 +2011,87 @@ hns3_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
return 0;
}
+static int
+hns3_flow_validate_wrap(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ pthread_mutex_lock(&hw->flows_lock);
+ ret = hns3_flow_validate(dev, attr, pattern, actions, error);
+ pthread_mutex_unlock(&hw->flows_lock);
+
+ return ret;
+}
+
+static struct rte_flow *
+hns3_flow_create_wrap(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+
+ pthread_mutex_lock(&hw->flows_lock);
+ flow = hns3_flow_create(dev, attr, pattern, actions, error);
+ pthread_mutex_unlock(&hw->flows_lock);
+
+ return flow;
+}
+
+static int
+hns3_flow_destroy_wrap(struct rte_eth_dev *dev, struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ pthread_mutex_lock(&hw->flows_lock);
+ ret = hns3_flow_destroy(dev, flow, error);
+ pthread_mutex_unlock(&hw->flows_lock);
+
+ return ret;
+}
+
+static int
+hns3_flow_flush_wrap(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ pthread_mutex_lock(&hw->flows_lock);
+ ret = hns3_flow_flush(dev, error);
+ pthread_mutex_unlock(&hw->flows_lock);
+
+ return ret;
+}
+
+static int
+hns3_flow_query_wrap(struct rte_eth_dev *dev, struct rte_flow *flow,
+ const struct rte_flow_action *actions, void *data,
+ struct rte_flow_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ pthread_mutex_lock(&hw->flows_lock);
+ ret = hns3_flow_query(dev, flow, actions, data, error);
+ pthread_mutex_unlock(&hw->flows_lock);
+
+ return ret;
+}
+
static const struct rte_flow_ops hns3_flow_ops = {
- .validate = hns3_flow_validate,
- .create = hns3_flow_create,
- .destroy = hns3_flow_destroy,
- .flush = hns3_flow_flush,
- .query = hns3_flow_query,
+ .validate = hns3_flow_validate_wrap,
+ .create = hns3_flow_create_wrap,
+ .destroy = hns3_flow_destroy_wrap,
+ .flush = hns3_flow_flush_wrap,
+ .query = hns3_flow_query_wrap,
.isolate = NULL,
};
--
2.7.4
* [dpdk-dev] [PATCH 7/7] net/hns3: move the link speeds check to dev configure
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
` (5 preceding siblings ...)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 6/7] net/hns3: fix FDIR lock bug Min Hu (Connor)
@ 2021-04-17 9:54 ` Min Hu (Connor)
2021-04-20 0:42 ` [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Ferruh Yigit
7 siblings, 0 replies; 10+ messages in thread
From: Min Hu (Connor) @ 2021-04-17 9:54 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
From: Huisong Li <lihuisong@huawei.com>
This patch moves the check of "link_speeds" in dev_conf into the
dev_configure stage, so that users learn at configuration time, rather
than at port start, whether "link_speeds" is valid.
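From the application side, the practical effect (a hedged sketch using the
pre-21.11 ethdev names; configure_fixed_10g and its parameters are
illustrative) is that an unsupported or malformed link_speeds value is now
reported by rte_eth_dev_configure() instead of only surfacing when the port
is started:

#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>

static int
configure_fixed_10g(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		/* Fixed 10G rather than the default auto-negotiation (0). */
		.link_speeds = ETH_LINK_SPEED_FIXED | ETH_LINK_SPEED_10G,
	};
	int ret;

	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	if (ret < 0)
		/* With this patch, an unsupported speed is rejected here,
		 * before rte_eth_dev_start() is ever called.
		 */
		printf("port %u: configure failed: %d\n",
		       (unsigned int)port_id, ret);
	return ret;
}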
Fixes: 0d90a6b8e59f ("net/hns3: support link speed autoneg for PF")
Fixes: 30b08275cb87 ("net/hns3: support fixed link speed")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 57 +++++++++++++++++++++++++++++++++---------
1 file changed, 45 insertions(+), 12 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index bcbebd0..a3ea513 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -105,6 +105,7 @@ static int hns3_remove_mc_addr(struct hns3_hw *hw,
static int hns3_restore_fec(struct hns3_hw *hw);
static int hns3_query_dev_fec_info(struct hns3_hw *hw);
static int hns3_do_stop(struct hns3_adapter *hns);
+static int hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds);
void hns3_ether_format_addr(char *buf, uint16_t size,
const struct rte_ether_addr *ether_addr)
@@ -2431,6 +2432,46 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
}
static int
+hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
+{
+ int ret;
+
+ /*
+ * Some hardware doesn't support auto-negotiation, but users may not
+ * configure link_speeds (default 0), which means auto-negotiation.
+ * In this case, a warning message need to be printed, instead of
+ * an error.
+ */
+ if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+ hw->mac.support_autoneg == 0) {
+ hns3_warn(hw, "auto-negotiation is not supported, use default fixed speed!");
+ return 0;
+ }
+
+ if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+ ret = hns3_check_port_speed(hw, link_speeds);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+hns3_check_dev_conf(struct rte_eth_dev *dev)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_eth_conf *conf = &dev->data->dev_conf;
+ int ret;
+
+ ret = hns3_check_mq_mode(dev);
+ if (ret)
+ return ret;
+
+ return hns3_check_link_speed(hw, conf->link_speeds);
+}
+
+static int
hns3_dev_configure(struct rte_eth_dev *dev)
{
struct hns3_adapter *hns = dev->data->dev_private;
@@ -2465,7 +2506,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
}
hw->adapter_state = HNS3_NIC_CONFIGURING;
- ret = hns3_check_mq_mode(dev);
+ ret = hns3_check_dev_conf(dev);
if (ret)
goto cfg_err;
@@ -5466,14 +5507,11 @@ hns3_set_fiber_port_link_speed(struct hns3_hw *hw,
/*
* Some hardware doesn't support auto-negotiation, but users may not
- * configure link_speeds (default 0), which means auto-negotiation
- * In this case, a warning message need to be printed, instead of
- * an error.
+ * configure link_speeds (default 0), which means auto-negotiation.
+ * In this case, it should return success.
*/
- if (cfg->autoneg) {
- hns3_warn(hw, "auto-negotiation is not supported.");
+ if (cfg->autoneg)
return 0;
- }
return hns3_cfg_mac_speed_dup(hw, cfg->speed, cfg->duplex);
}
@@ -5514,16 +5552,11 @@ hns3_apply_link_speed(struct hns3_hw *hw)
{
struct rte_eth_conf *conf = &hw->data->dev_conf;
struct hns3_set_link_speed_cfg cfg;
- int ret;
memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
ETH_LINK_AUTONEG : ETH_LINK_FIXED;
if (cfg.autoneg != ETH_LINK_AUTONEG) {
- ret = hns3_check_port_speed(hw, conf->link_speeds);
- if (ret)
- return ret;
-
cfg.speed = hns3_get_link_speed(conf->link_speeds);
cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
}
--
2.7.4
* Re: [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD
2021-04-17 9:54 [dpdk-dev] [PATCH 0/7] features and bugfix for hns3 PMD Min Hu (Connor)
` (6 preceding siblings ...)
2021-04-17 9:54 ` [dpdk-dev] [PATCH 7/7] net/hns3: move the link speeds check to dev configure Min Hu (Connor)
@ 2021-04-20 0:42 ` Ferruh Yigit
7 siblings, 0 replies; 10+ messages in thread
From: Ferruh Yigit @ 2021-04-20 0:42 UTC (permalink / raw)
To: Min Hu (Connor), dev
On 4/17/2021 10:54 AM, Min Hu (Connor) wrote:
> This patchset contains 2 features and 5 bugfixes.
>
> Chengwen Feng (3):
> net/hns3: check max SIMD bitwidth
> net/hns3: Rx vector add compile-time verify
> net/hns3: fix FDIR lock bug
>
> Huisong Li (4):
> net/hns3: remove redundant code about mbx
> net/hns3: fix the check for DCB mode
> net/hns3: fix the check for VMDq mode
> net/hns3: move the link speeds check to dev configure
>
Except 1/7,
Series applied to dpdk-next-net/main, thanks.