* [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow
@ 2020-03-18 1:47 Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 1/4] net/e1000: remove the legacy filter functions Chenxu Di
` (12 more replies)
0 siblings, 13 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-18 1:47 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, Chenxu Di
Remove the legacy filter functions that are already implemented in
rte_flow for the igb, ixgbe, and i40e drivers.
Implement hash configuration, including setting the hash function and
the hash input set, in rte_flow for the i40e driver.
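For reference, a minimal application-side sketch (not part of this series)
of how a rule previously added through the legacy filter_ctrl path can be
expressed with rte_flow; the EtherType value, the queue index, and the
helper name ethertype_to_queue() are illustrative only:

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    /* Legacy path removed by this series:
     *   rte_eth_dev_filter_ctrl(port, RTE_ETH_FILTER_ETHERTYPE,
     *                           RTE_ETH_FILTER_ADD, &etype_filter);
     * rte_flow equivalent: match the EtherType and steer to a queue.
     */
    static struct rte_flow *
    ethertype_to_queue(uint16_t port_id, struct rte_flow_error *err)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item_eth eth_spec = {
                    .type = RTE_BE16(0x88F7),  /* example EtherType */
            };
            struct rte_flow_item_eth eth_mask = { .type = RTE_BE16(0xFFFF) };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH,
                      .spec = &eth_spec, .mask = &eth_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_queue queue = { .index = 1 };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions, err);
    }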
Chenxu Di (4):
net/e1000: remove the legacy filter functions
net/ixgbe: remove the legacy filter functions
net/i40e: remove the legacy filter functions
net/i40e: implement hash function in rte flow API
doc/guides/nics/i40e.rst | 14 +
doc/guides/rel_notes/release_20_05.rst | 9 +
drivers/net/e1000/igb_ethdev.c | 36 -
drivers/net/i40e/i40e_ethdev.c | 913 +++++++++++--------------
drivers/net/i40e/i40e_ethdev.h | 26 +-
drivers/net/i40e/i40e_fdir.c | 393 -----------
drivers/net/i40e/i40e_flow.c | 186 ++++-
drivers/net/ixgbe/ixgbe_ethdev.c | 78 ---
drivers/net/ixgbe/ixgbe_fdir.c | 11 -
9 files changed, 610 insertions(+), 1056 deletions(-)
--
2.17.1
* [dpdk-dev] [PATCH 1/4] net/e1000: remove the legacy filter functions
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
@ 2020-03-18 1:47 ` Chenxu Di
2020-03-18 3:15 ` Yang, Qiming
2020-03-18 1:47 ` [dpdk-dev] [PATCH 2/4] net/ixgbe: " Chenxu Di
` (11 subsequent siblings)
12 siblings, 1 reply; 26+ messages in thread
From: Chenxu Di @ 2020-03-18 1:47 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, Chenxu Di
Remove the legacy filter functions in the Intel igb driver.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
doc/guides/rel_notes/release_20_05.rst | 9 +++++++
drivers/net/e1000/igb_ethdev.c | 36 --------------------------
2 files changed, 9 insertions(+), 36 deletions(-)
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 2190eaf85..e79f8d841 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -56,6 +56,15 @@ New Features
Also, make sure to start the actual text at the margin.
=========================================================
+* **Removed the legacy filter API and switched to rte_flow.**
+
+ Removed the legacy filter API functions and switched to rte_flow in drivers, including:
+
+ * Removed the legacy filter API functions in the Intel igb driver.
+ * Removed the legacy filter API functions in the Intel ixgbe driver.
+ * Removed the legacy filter API functions in the Intel i40e driver.
+ * Added support for setting the hash function and the hash input set in the rte_flow API.
+
Removed Items
-------------
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 520fba8fa..2d660eb7e 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -3716,16 +3716,6 @@ eth_igb_syn_filter_handle(struct rte_eth_dev *dev,
}
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = eth_igb_syn_filter_set(dev,
- (struct rte_eth_syn_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = eth_igb_syn_filter_set(dev,
- (struct rte_eth_syn_filter *)arg,
- FALSE);
- break;
case RTE_ETH_FILTER_GET:
ret = eth_igb_syn_filter_get(dev,
(struct rte_eth_syn_filter *)arg);
@@ -4207,12 +4197,6 @@ eth_igb_flex_filter_handle(struct rte_eth_dev *dev,
}
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = eth_igb_add_del_flex_filter(dev, filter, TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = eth_igb_add_del_flex_filter(dev, filter, FALSE);
- break;
case RTE_ETH_FILTER_GET:
ret = eth_igb_get_flex_filter(dev, filter);
break;
@@ -4713,16 +4697,6 @@ igb_ntuple_filter_handle(struct rte_eth_dev *dev,
}
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = igb_add_del_ntuple_filter(dev,
- (struct rte_eth_ntuple_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = igb_add_del_ntuple_filter(dev,
- (struct rte_eth_ntuple_filter *)arg,
- FALSE);
- break;
case RTE_ETH_FILTER_GET:
ret = igb_get_ntuple_filter(dev,
(struct rte_eth_ntuple_filter *)arg);
@@ -4894,16 +4868,6 @@ igb_ethertype_filter_handle(struct rte_eth_dev *dev,
}
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = igb_add_del_ethertype_filter(dev,
- (struct rte_eth_ethertype_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = igb_add_del_ethertype_filter(dev,
- (struct rte_eth_ethertype_filter *)arg,
- FALSE);
- break;
case RTE_ETH_FILTER_GET:
ret = igb_get_ethertype_filter(dev,
(struct rte_eth_ethertype_filter *)arg);
--
2.17.1
* [dpdk-dev] [PATCH 2/4] net/ixgbe: remove the legacy filter functions
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 1/4] net/e1000: remove the legacy filter functions Chenxu Di
@ 2020-03-18 1:47 ` Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 3/4] net/i40e: " Chenxu Di
` (10 subsequent siblings)
12 siblings, 0 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-18 1:47 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, Chenxu Di
Remove the legacy filter functions in the Intel ixgbe driver.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 78 --------------------------------
drivers/net/ixgbe/ixgbe_fdir.c | 11 -----
2 files changed, 89 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 23b3f5b0c..89f8deade 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -369,9 +369,6 @@ static int ixgbe_dev_l2_tunnel_offload_set
struct rte_eth_l2_tunnel_conf *l2_tunnel,
uint32_t mask,
uint8_t en);
-static int ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
- enum rte_filter_op filter_op,
- void *arg);
static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -6426,16 +6423,6 @@ ixgbe_syn_filter_handle(struct rte_eth_dev *dev,
}
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = ixgbe_syn_filter_set(dev,
- (struct rte_eth_syn_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = ixgbe_syn_filter_set(dev,
- (struct rte_eth_syn_filter *)arg,
- FALSE);
- break;
case RTE_ETH_FILTER_GET:
ret = ixgbe_syn_filter_get(dev,
(struct rte_eth_syn_filter *)arg);
@@ -6853,16 +6840,6 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
}
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = ixgbe_add_del_ntuple_filter(dev,
- (struct rte_eth_ntuple_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = ixgbe_add_del_ntuple_filter(dev,
- (struct rte_eth_ntuple_filter *)arg,
- FALSE);
- break;
case RTE_ETH_FILTER_GET:
ret = ixgbe_get_ntuple_filter(dev,
(struct rte_eth_ntuple_filter *)arg);
@@ -7004,16 +6981,6 @@ ixgbe_ethertype_filter_handle(struct rte_eth_dev *dev,
}
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = ixgbe_add_del_ethertype_filter(dev,
- (struct rte_eth_ethertype_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = ixgbe_add_del_ethertype_filter(dev,
- (struct rte_eth_ethertype_filter *)arg,
- FALSE);
- break;
case RTE_ETH_FILTER_GET:
ret = ixgbe_get_ethertype_filter(dev,
(struct rte_eth_ethertype_filter *)arg);
@@ -7047,9 +7014,6 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_FDIR:
ret = ixgbe_fdir_ctrl_func(dev, filter_op, arg);
break;
- case RTE_ETH_FILTER_L2_TUNNEL:
- ret = ixgbe_dev_l2_tunnel_filter_handle(dev, filter_op, arg);
- break;
case RTE_ETH_FILTER_GENERIC:
if (filter_op != RTE_ETH_FILTER_GET)
return -EINVAL;
@@ -8121,48 +8085,6 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
}
-/**
- * ixgbe_dev_l2_tunnel_filter_handle - Handle operations for l2 tunnel filter.
- * @dev: pointer to rte_eth_dev structure
- * @filter_op:operation will be taken.
- * @arg: a pointer to specific structure corresponding to the filter_op
- */
-static int
-ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
- enum rte_filter_op filter_op,
- void *arg)
-{
- int ret;
-
- if (filter_op == RTE_ETH_FILTER_NOP)
- return 0;
-
- if (arg == NULL) {
- PMD_DRV_LOG(ERR, "arg shouldn't be NULL for operation %u.",
- filter_op);
- return -EINVAL;
- }
-
- switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = ixgbe_dev_l2_tunnel_filter_add
- (dev,
- (struct rte_eth_l2_tunnel_conf *)arg,
- FALSE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = ixgbe_dev_l2_tunnel_filter_del
- (dev,
- (struct rte_eth_l2_tunnel_conf *)arg);
- break;
- default:
- PMD_DRV_LOG(ERR, "unsupported operation %u.", filter_op);
- ret = -EINVAL;
- break;
- }
- return ret;
-}
-
static int
ixgbe_e_tag_forwarding_en_dis(struct rte_eth_dev *dev, bool en)
{
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 166dae1e0..9ba26cd52 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1555,21 +1555,10 @@ ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
return -EINVAL;
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = ixgbe_add_del_fdir_filter(dev,
- (struct rte_eth_fdir_filter *)arg, FALSE, FALSE);
- break;
case RTE_ETH_FILTER_UPDATE:
ret = ixgbe_add_del_fdir_filter(dev,
(struct rte_eth_fdir_filter *)arg, FALSE, TRUE);
break;
- case RTE_ETH_FILTER_DELETE:
- ret = ixgbe_add_del_fdir_filter(dev,
- (struct rte_eth_fdir_filter *)arg, TRUE, FALSE);
- break;
- case RTE_ETH_FILTER_FLUSH:
- ret = ixgbe_fdir_flush(dev);
- break;
case RTE_ETH_FILTER_INFO:
ixgbe_fdir_info_get(dev, (struct rte_eth_fdir_info *)arg);
break;
--
2.17.1
* [dpdk-dev] [PATCH 3/4] net/i40e: remove the legacy filter functions
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 1/4] net/e1000: remove the legacy filter functions Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 2/4] net/ixgbe: " Chenxu Di
@ 2020-03-18 1:47 ` Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 4/4] net/i40e: implement hash function in rte flow API Chenxu Di
` (9 subsequent siblings)
12 siblings, 0 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-18 1:47 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, Chenxu Di
Remove the legacy filter functions in the Intel i40e driver.
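For reference only (not part of this patch): a flow director rule that used
to go through RTE_ETH_FILTER_FDIR with RTE_ETH_FILTER_ADD can be expressed
as an rte_flow rule, which the driver keeps servicing through
i40e_flow_add_del_fdir_filter(). The address, port and queue values below
are illustrative:

    #include <rte_flow.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>

    static struct rte_flow *
    fdir_udp4_to_queue(uint16_t port_id, struct rte_flow_error *err)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            /* match IPv4 dst 192.168.0.2 and UDP dst port 4789,
             * other fields are wildcarded
             */
            struct rte_flow_item_ipv4 ip_spec = {
                    .hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 2)),
            };
            struct rte_flow_item_ipv4 ip_mask = {
                    .hdr.dst_addr = RTE_BE32(0xffffffff),
            };
            struct rte_flow_item_udp udp_spec = {
                    .hdr.dst_port = RTE_BE16(4789),
            };
            struct rte_flow_item_udp udp_mask = {
                    .hdr.dst_port = RTE_BE16(0xffff),
            };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                      .spec = &ip_spec, .mask = &ip_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_UDP,
                      .spec = &udp_spec, .mask = &udp_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_queue queue = { .index = 3 };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions, err);
    }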
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 454 ---------------------------------
drivers/net/i40e/i40e_ethdev.h | 8 -
drivers/net/i40e/i40e_fdir.c | 393 ----------------------------
3 files changed, 855 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9fbda1c34..1ee60f18e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -317,9 +317,6 @@ static int i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
static int i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
static void i40e_filter_input_set_init(struct i40e_pf *pf);
-static int i40e_ethertype_filter_handle(struct rte_eth_dev *dev,
- enum rte_filter_op filter_op,
- void *arg);
static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
enum rte_filter_type filter_type,
enum rte_filter_op filter_op,
@@ -4204,119 +4201,6 @@ i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
}
}
-/* Set perfect match or hash match of MAC and VLAN for a VF */
-static int
-i40e_vf_mac_filter_set(struct i40e_pf *pf,
- struct rte_eth_mac_filter *filter,
- bool add)
-{
- struct i40e_hw *hw;
- struct i40e_mac_filter_info mac_filter;
- struct rte_ether_addr old_mac;
- struct rte_ether_addr *new_mac;
- struct i40e_pf_vf *vf = NULL;
- uint16_t vf_id;
- int ret;
-
- if (pf == NULL) {
- PMD_DRV_LOG(ERR, "Invalid PF argument.");
- return -EINVAL;
- }
- hw = I40E_PF_TO_HW(pf);
-
- if (filter == NULL) {
- PMD_DRV_LOG(ERR, "Invalid mac filter argument.");
- return -EINVAL;
- }
-
- new_mac = &filter->mac_addr;
-
- if (rte_is_zero_ether_addr(new_mac)) {
- PMD_DRV_LOG(ERR, "Invalid ethernet address.");
- return -EINVAL;
- }
-
- vf_id = filter->dst_id;
-
- if (vf_id > pf->vf_num - 1 || !pf->vfs) {
- PMD_DRV_LOG(ERR, "Invalid argument.");
- return -EINVAL;
- }
- vf = &pf->vfs[vf_id];
-
- if (add && rte_is_same_ether_addr(new_mac, &pf->dev_addr)) {
- PMD_DRV_LOG(INFO, "Ignore adding permanent MAC address.");
- return -EINVAL;
- }
-
- if (add) {
- rte_memcpy(&old_mac, hw->mac.addr, RTE_ETHER_ADDR_LEN);
- rte_memcpy(hw->mac.addr, new_mac->addr_bytes,
- RTE_ETHER_ADDR_LEN);
- rte_memcpy(&mac_filter.mac_addr, &filter->mac_addr,
- RTE_ETHER_ADDR_LEN);
-
- mac_filter.filter_type = filter->filter_type;
- ret = i40e_vsi_add_mac(vf->vsi, &mac_filter);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to add MAC filter.");
- return -1;
- }
- rte_ether_addr_copy(new_mac, &pf->dev_addr);
- } else {
- rte_memcpy(hw->mac.addr, hw->mac.perm_addr,
- RTE_ETHER_ADDR_LEN);
- ret = i40e_vsi_delete_mac(vf->vsi, &filter->mac_addr);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to delete MAC filter.");
- return -1;
- }
-
- /* Clear device address as it has been removed */
- if (rte_is_same_ether_addr(&pf->dev_addr, new_mac))
- memset(&pf->dev_addr, 0, sizeof(struct rte_ether_addr));
- }
-
- return 0;
-}
-
-/* MAC filter handle */
-static int
-i40e_mac_filter_handle(struct rte_eth_dev *dev, enum rte_filter_op filter_op,
- void *arg)
-{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_mac_filter *filter;
- struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- int ret = I40E_NOT_SUPPORTED;
-
- filter = (struct rte_eth_mac_filter *)(arg);
-
- switch (filter_op) {
- case RTE_ETH_FILTER_NOP:
- ret = I40E_SUCCESS;
- break;
- case RTE_ETH_FILTER_ADD:
- i40e_pf_disable_irq0(hw);
- if (filter->is_vf)
- ret = i40e_vf_mac_filter_set(pf, filter, 1);
- i40e_pf_enable_irq0(hw);
- break;
- case RTE_ETH_FILTER_DELETE:
- i40e_pf_disable_irq0(hw);
- if (filter->is_vf)
- ret = i40e_vf_mac_filter_set(pf, filter, 0);
- i40e_pf_enable_irq0(hw);
- break;
- default:
- PMD_DRV_LOG(ERR, "unknown operation %u", filter_op);
- ret = I40E_ERR_PARAM;
- break;
- }
-
- return ret;
-}
-
static int
i40e_get_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
{
@@ -7868,145 +7752,6 @@ i40e_sw_tunnel_filter_del(struct i40e_pf *pf,
return 0;
}
-int
-i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
- struct rte_eth_tunnel_filter_conf *tunnel_filter,
- uint8_t add)
-{
- uint16_t ip_type;
- uint32_t ipv4_addr, ipv4_addr_le;
- uint8_t i, tun_type = 0;
- /* internal varialbe to convert ipv6 byte order */
- uint32_t convert_ipv6[4];
- int val, ret = 0;
- struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- struct i40e_vsi *vsi = pf->main_vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
- struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
- struct i40e_tunnel_filter check_filter; /* Check if filter exists */
-
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (NULL == cld_filter) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
- rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
- rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
-
- pfilter->element.inner_vlan =
- rte_cpu_to_le_16(tunnel_filter->inner_vlan);
- if (tunnel_filter->ip_type == RTE_TUNNEL_IPTYPE_IPV4) {
- ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
- ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
- ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
- &ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
- } else {
- ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
- for (i = 0; i < 4; i++) {
- convert_ipv6[i] =
- rte_cpu_to_le_32(rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv6_addr[i]));
- }
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
- &convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
- }
-
- /* check tunneled type */
- switch (tunnel_filter->tunnel_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_VXLAN;
- break;
- case RTE_TUNNEL_TYPE_NVGRE:
- tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC;
- break;
- case RTE_TUNNEL_TYPE_IP_IN_GRE:
- tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_IP;
- break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
- tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE;
- break;
- default:
- /* Other tunnel types is not supported. */
- PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
- return -EINVAL;
- }
-
- val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
- if (val < 0) {
- rte_free(cld_filter);
- return -EINVAL;
- }
-
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
- rte_cpu_to_le_16(tunnel_filter->queue_id);
-
- /* Check if there is the filter in SW list */
- memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
- node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
- if (add && node) {
- PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
- return -EINVAL;
- }
-
- if (!add && !node) {
- PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
- return -EINVAL;
- }
-
- if (add) {
- ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
- return -ENOTSUP;
- }
- tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
- if (tunnel == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
- return -ENOMEM;
- }
-
- rte_memcpy(tunnel, &check_filter, sizeof(check_filter));
- ret = i40e_sw_tunnel_filter_insert(pf, tunnel);
- if (ret < 0)
- rte_free(tunnel);
- } else {
- ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
- return -ENOTSUP;
- }
- ret = i40e_sw_tunnel_filter_del(pf, &node->input);
- }
-
- rte_free(cld_filter);
- return ret;
-}
-
#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_TR_WORD0 0x48
#define I40E_TR_VXLAN_GRE_KEY_MASK 0x4
#define I40E_TR_GENEVE_KEY_MASK 0x8
@@ -8809,40 +8554,6 @@ i40e_pf_config_rss(struct i40e_pf *pf)
return i40e_hw_rss_hash_set(pf, &rss_conf);
}
-static int
-i40e_tunnel_filter_param_check(struct i40e_pf *pf,
- struct rte_eth_tunnel_filter_conf *filter)
-{
- if (pf == NULL || filter == NULL) {
- PMD_DRV_LOG(ERR, "Invalid parameter");
- return -EINVAL;
- }
-
- if (filter->queue_id >= pf->dev_data->nb_rx_queues) {
- PMD_DRV_LOG(ERR, "Invalid queue ID");
- return -EINVAL;
- }
-
- if (filter->inner_vlan > RTE_ETHER_MAX_VLAN_ID) {
- PMD_DRV_LOG(ERR, "Invalid inner VLAN ID");
- return -EINVAL;
- }
-
- if ((filter->filter_type & ETH_TUNNEL_FILTER_OMAC) &&
- (rte_is_zero_ether_addr(&filter->outer_mac))) {
- PMD_DRV_LOG(ERR, "Cannot add NULL outer MAC address");
- return -EINVAL;
- }
-
- if ((filter->filter_type & ETH_TUNNEL_FILTER_IMAC) &&
- (rte_is_zero_ether_addr(&filter->inner_mac))) {
- PMD_DRV_LOG(ERR, "Cannot add NULL inner MAC address");
- return -EINVAL;
- }
-
- return 0;
-}
-
#define I40E_GL_PRS_FVBM_MSK_ENA 0x80000000
#define I40E_GL_PRS_FVBM(_i) (0x00269760 + ((_i) * 4))
static int
@@ -8928,40 +8639,6 @@ i40e_filter_ctrl_global_config(struct rte_eth_dev *dev,
return ret;
}
-static int
-i40e_tunnel_filter_handle(struct rte_eth_dev *dev,
- enum rte_filter_op filter_op,
- void *arg)
-{
- struct rte_eth_tunnel_filter_conf *filter;
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- int ret = I40E_SUCCESS;
-
- filter = (struct rte_eth_tunnel_filter_conf *)(arg);
-
- if (i40e_tunnel_filter_param_check(pf, filter) < 0)
- return I40E_ERR_PARAM;
-
- switch (filter_op) {
- case RTE_ETH_FILTER_NOP:
- if (!(pf->flags & I40E_FLAG_VXLAN))
- ret = I40E_NOT_SUPPORTED;
- break;
- case RTE_ETH_FILTER_ADD:
- ret = i40e_dev_tunnel_filter_set(pf, filter, 1);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = i40e_dev_tunnel_filter_set(pf, filter, 0);
- break;
- default:
- PMD_DRV_LOG(ERR, "unknown operation %u", filter_op);
- ret = I40E_ERR_PARAM;
- break;
- }
-
- return ret;
-}
-
static int
i40e_pf_config_mq_rx(struct i40e_pf *pf)
{
@@ -9923,89 +9600,6 @@ i40e_hash_filter_inset_select(struct i40e_hw *hw,
return 0;
}
-int
-i40e_fdir_filter_inset_select(struct i40e_pf *pf,
- struct rte_eth_input_set_conf *conf)
-{
- struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- enum i40e_filter_pctype pctype;
- uint64_t input_set, inset_reg = 0;
- uint32_t mask_reg[I40E_INSET_MASK_NUM_REG] = {0};
- int ret, i, num;
-
- if (!hw || !conf) {
- PMD_DRV_LOG(ERR, "Invalid pointer");
- return -EFAULT;
- }
- if (conf->op != RTE_ETH_INPUT_SET_SELECT &&
- conf->op != RTE_ETH_INPUT_SET_ADD) {
- PMD_DRV_LOG(ERR, "Unsupported input set operation");
- return -EINVAL;
- }
-
- pctype = i40e_flowtype_to_pctype(pf->adapter, conf->flow_type);
-
- if (pctype == I40E_FILTER_PCTYPE_INVALID) {
- PMD_DRV_LOG(ERR, "invalid flow_type input.");
- return -EINVAL;
- }
-
- ret = i40e_parse_input_set(&input_set, pctype, conf->field,
- conf->inset_size);
- if (ret) {
- PMD_DRV_LOG(ERR, "Failed to parse input set");
- return -EINVAL;
- }
-
- /* get inset value in register */
- inset_reg = i40e_read_rx_ctl(hw, I40E_PRTQF_FD_INSET(pctype, 1));
- inset_reg <<= I40E_32_BIT_WIDTH;
- inset_reg |= i40e_read_rx_ctl(hw, I40E_PRTQF_FD_INSET(pctype, 0));
-
- /* Can not change the inset reg for flex payload for fdir,
- * it is done by writing I40E_PRTQF_FD_FLXINSET
- * in i40e_set_flex_mask_on_pctype.
- */
- if (conf->op == RTE_ETH_INPUT_SET_SELECT)
- inset_reg &= I40E_REG_INSET_FLEX_PAYLOAD_WORDS;
- else
- input_set |= pf->fdir.input_set[pctype];
- num = i40e_generate_inset_mask_reg(input_set, mask_reg,
- I40E_INSET_MASK_NUM_REG);
- if (num < 0)
- return -EINVAL;
- if (pf->support_multi_driver && num > 0) {
- PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
- return -ENOTSUP;
- }
-
- inset_reg |= i40e_translate_input_set_reg(hw->mac.type, input_set);
-
- i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 0),
- (uint32_t)(inset_reg & UINT32_MAX));
- i40e_check_write_reg(hw, I40E_PRTQF_FD_INSET(pctype, 1),
- (uint32_t)((inset_reg >>
- I40E_32_BIT_WIDTH) & UINT32_MAX));
-
- if (!pf->support_multi_driver) {
- for (i = 0; i < num; i++)
- i40e_check_write_global_reg(hw,
- I40E_GLQF_FD_MSK(i, pctype),
- mask_reg[i]);
- /*clear unused mask registers of the pctype */
- for (i = num; i < I40E_INSET_MASK_NUM_REG; i++)
- i40e_check_write_global_reg(hw,
- I40E_GLQF_FD_MSK(i, pctype),
- 0);
- } else {
- PMD_DRV_LOG(ERR, "FDIR bit mask is not supported.");
- }
- I40E_WRITE_FLUSH(hw);
-
- pf->fdir.input_set[pctype] = input_set;
- return 0;
-}
-
static int
i40e_hash_filter_get(struct i40e_hw *hw, struct rte_eth_hash_filter_info *info)
{
@@ -10263,45 +9857,6 @@ i40e_ethertype_filter_set(struct i40e_pf *pf,
return ret;
}
-/*
- * Handle operations for ethertype filter.
- */
-static int
-i40e_ethertype_filter_handle(struct rte_eth_dev *dev,
- enum rte_filter_op filter_op,
- void *arg)
-{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- int ret = 0;
-
- if (filter_op == RTE_ETH_FILTER_NOP)
- return ret;
-
- if (arg == NULL) {
- PMD_DRV_LOG(ERR, "arg shouldn't be NULL for operation %u",
- filter_op);
- return -EINVAL;
- }
-
- switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = i40e_ethertype_filter_set(pf,
- (struct rte_eth_ethertype_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = i40e_ethertype_filter_set(pf,
- (struct rte_eth_ethertype_filter *)arg,
- FALSE);
- break;
- default:
- PMD_DRV_LOG(ERR, "unsupported operation %u", filter_op);
- ret = -ENOSYS;
- break;
- }
- return ret;
-}
-
static int
i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
enum rte_filter_type filter_type,
@@ -10321,15 +9876,6 @@ i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_HASH:
ret = i40e_hash_filter_ctrl(dev, filter_op, arg);
break;
- case RTE_ETH_FILTER_MACVLAN:
- ret = i40e_mac_filter_handle(dev, filter_op, arg);
- break;
- case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_handle(dev, filter_op, arg);
- break;
- case RTE_ETH_FILTER_TUNNEL:
- ret = i40e_tunnel_filter_handle(dev, filter_op, arg);
- break;
case RTE_ETH_FILTER_FDIR:
ret = i40e_fdir_ctrl_func(dev, filter_op, arg);
break;
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..22170dec6 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1256,8 +1256,6 @@ int i40e_select_filter_input_set(struct i40e_hw *hw,
void i40e_fdir_filter_restore(struct i40e_pf *pf);
int i40e_hash_filter_inset_select(struct i40e_hw *hw,
struct rte_eth_input_set_conf *conf);
-int i40e_fdir_filter_inset_select(struct i40e_pf *pf,
- struct rte_eth_input_set_conf *conf);
int i40e_pf_host_send_msg_to_vf(struct i40e_pf_vf *vf, uint32_t opcode,
uint32_t retval, uint8_t *msg,
uint16_t msglen);
@@ -1285,15 +1283,9 @@ uint64_t i40e_get_default_input_set(uint16_t pctype);
int i40e_ethertype_filter_set(struct i40e_pf *pf,
struct rte_eth_ethertype_filter *filter,
bool add);
-int i40e_add_del_fdir_filter(struct rte_eth_dev *dev,
- const struct rte_eth_fdir_filter *filter,
- bool add);
int i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
const struct i40e_fdir_filter_conf *filter,
bool add);
-int i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
- struct rte_eth_tunnel_filter_conf *tunnel_filter,
- uint8_t add);
int i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_tunnel_filter_conf *tunnel_filter,
uint8_t add);
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 931f25976..6e052fbf9 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -84,10 +84,6 @@
(1ULL << RTE_ETH_FLOW_NONFRAG_IPV6_OTHER) | \
(1ULL << RTE_ETH_FLOW_L2_PAYLOAD))
-static int i40e_fdir_filter_programming(struct i40e_pf *pf,
- enum i40e_filter_pctype pctype,
- const struct rte_eth_fdir_filter *filter,
- bool add);
static int i40e_fdir_filter_convert(const struct i40e_fdir_filter_conf *input,
struct i40e_fdir_filter *filter);
static struct i40e_fdir_filter *
@@ -813,155 +809,6 @@ i40e_fdir_fill_eth_ip_head(const struct rte_eth_fdir_input *fdir_input,
}
-/*
- * i40e_fdir_construct_pkt - construct packet based on fields in input
- * @pf: board private structure
- * @fdir_input: input set of the flow director entry
- * @raw_pkt: a packet to be constructed
- */
-static int
-i40e_fdir_construct_pkt(struct i40e_pf *pf,
- const struct rte_eth_fdir_input *fdir_input,
- unsigned char *raw_pkt)
-{
- unsigned char *payload, *ptr;
- struct rte_udp_hdr *udp;
- struct rte_tcp_hdr *tcp;
- struct rte_sctp_hdr *sctp;
- uint8_t size, dst = 0;
- uint8_t i, pit_idx, set_idx = I40E_FLXPLD_L4_IDX; /* use l4 by default*/
- int len;
-
- /* fill the ethernet and IP head */
- len = i40e_fdir_fill_eth_ip_head(fdir_input, raw_pkt,
- !!fdir_input->flow_ext.vlan_tci);
- if (len < 0)
- return -EINVAL;
-
- /* fill the L4 head */
- switch (fdir_input->flow_type) {
- case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
- udp = (struct rte_udp_hdr *)(raw_pkt + len);
- payload = (unsigned char *)udp + sizeof(struct rte_udp_hdr);
- /*
- * The source and destination fields in the transmitted packet
- * need to be presented in a reversed order with respect
- * to the expected received packets.
- */
- udp->src_port = fdir_input->flow.udp4_flow.dst_port;
- udp->dst_port = fdir_input->flow.udp4_flow.src_port;
- udp->dgram_len = rte_cpu_to_be_16(I40E_FDIR_UDP_DEFAULT_LEN);
- break;
-
- case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
- tcp = (struct rte_tcp_hdr *)(raw_pkt + len);
- payload = (unsigned char *)tcp + sizeof(struct rte_tcp_hdr);
- /*
- * The source and destination fields in the transmitted packet
- * need to be presented in a reversed order with respect
- * to the expected received packets.
- */
- tcp->src_port = fdir_input->flow.tcp4_flow.dst_port;
- tcp->dst_port = fdir_input->flow.tcp4_flow.src_port;
- tcp->data_off = I40E_FDIR_TCP_DEFAULT_DATAOFF;
- break;
-
- case RTE_ETH_FLOW_NONFRAG_IPV4_SCTP:
- sctp = (struct rte_sctp_hdr *)(raw_pkt + len);
- payload = (unsigned char *)sctp + sizeof(struct rte_sctp_hdr);
- /*
- * The source and destination fields in the transmitted packet
- * need to be presented in a reversed order with respect
- * to the expected received packets.
- */
- sctp->src_port = fdir_input->flow.sctp4_flow.dst_port;
- sctp->dst_port = fdir_input->flow.sctp4_flow.src_port;
- sctp->tag = fdir_input->flow.sctp4_flow.verify_tag;
- break;
-
- case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
- case RTE_ETH_FLOW_FRAG_IPV4:
- payload = raw_pkt + len;
- set_idx = I40E_FLXPLD_L3_IDX;
- break;
-
- case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
- udp = (struct rte_udp_hdr *)(raw_pkt + len);
- payload = (unsigned char *)udp + sizeof(struct rte_udp_hdr);
- /*
- * The source and destination fields in the transmitted packet
- * need to be presented in a reversed order with respect
- * to the expected received packets.
- */
- udp->src_port = fdir_input->flow.udp6_flow.dst_port;
- udp->dst_port = fdir_input->flow.udp6_flow.src_port;
- udp->dgram_len = rte_cpu_to_be_16(I40E_FDIR_IPv6_PAYLOAD_LEN);
- break;
-
- case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
- tcp = (struct rte_tcp_hdr *)(raw_pkt + len);
- payload = (unsigned char *)tcp + sizeof(struct rte_tcp_hdr);
- /*
- * The source and destination fields in the transmitted packet
- * need to be presented in a reversed order with respect
- * to the expected received packets.
- */
- tcp->data_off = I40E_FDIR_TCP_DEFAULT_DATAOFF;
- tcp->src_port = fdir_input->flow.udp6_flow.dst_port;
- tcp->dst_port = fdir_input->flow.udp6_flow.src_port;
- break;
-
- case RTE_ETH_FLOW_NONFRAG_IPV6_SCTP:
- sctp = (struct rte_sctp_hdr *)(raw_pkt + len);
- payload = (unsigned char *)sctp + sizeof(struct rte_sctp_hdr);
- /*
- * The source and destination fields in the transmitted packet
- * need to be presented in a reversed order with respect
- * to the expected received packets.
- */
- sctp->src_port = fdir_input->flow.sctp6_flow.dst_port;
- sctp->dst_port = fdir_input->flow.sctp6_flow.src_port;
- sctp->tag = fdir_input->flow.sctp6_flow.verify_tag;
- break;
-
- case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
- case RTE_ETH_FLOW_FRAG_IPV6:
- payload = raw_pkt + len;
- set_idx = I40E_FLXPLD_L3_IDX;
- break;
- case RTE_ETH_FLOW_L2_PAYLOAD:
- payload = raw_pkt + len;
- /*
- * ARP packet is a special case on which the payload
- * starts after the whole ARP header
- */
- if (fdir_input->flow.l2_flow.ether_type ==
- rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP))
- payload += sizeof(struct rte_arp_hdr);
- set_idx = I40E_FLXPLD_L2_IDX;
- break;
- default:
- PMD_DRV_LOG(ERR, "unknown flow type %u.", fdir_input->flow_type);
- return -EINVAL;
- }
-
- /* fill the flexbytes to payload */
- for (i = 0; i < I40E_MAX_FLXPLD_FIED; i++) {
- pit_idx = set_idx * I40E_MAX_FLXPLD_FIED + i;
- size = pf->fdir.flex_set[pit_idx].size;
- if (size == 0)
- continue;
- dst = pf->fdir.flex_set[pit_idx].dst_offset * sizeof(uint16_t);
- ptr = payload +
- pf->fdir.flex_set[pit_idx].src_offset * sizeof(uint16_t);
- rte_memcpy(ptr,
- &fdir_input->flow_ext.flexbytes[dst],
- size * sizeof(uint16_t));
- }
-
- return 0;
-}
-
static struct i40e_customized_pctype *
i40e_flow_fdir_find_customized_pctype(struct i40e_pf *pf, uint8_t pctype)
{
@@ -1607,68 +1454,6 @@ i40e_sw_fdir_filter_del(struct i40e_pf *pf, struct i40e_fdir_input *input)
return 0;
}
-/*
- * i40e_add_del_fdir_filter - add or remove a flow director filter.
- * @pf: board private structure
- * @filter: fdir filter entry
- * @add: 0 - delete, 1 - add
- */
-int
-i40e_add_del_fdir_filter(struct rte_eth_dev *dev,
- const struct rte_eth_fdir_filter *filter,
- bool add)
-{
- struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- unsigned char *pkt = (unsigned char *)pf->fdir.prg_pkt;
- enum i40e_filter_pctype pctype;
- int ret = 0;
-
- if (dev->data->dev_conf.fdir_conf.mode != RTE_FDIR_MODE_PERFECT) {
- PMD_DRV_LOG(ERR, "FDIR is not enabled, please"
- " check the mode in fdir_conf.");
- return -ENOTSUP;
- }
-
- pctype = i40e_flowtype_to_pctype(pf->adapter, filter->input.flow_type);
- if (pctype == I40E_FILTER_PCTYPE_INVALID) {
- PMD_DRV_LOG(ERR, "invalid flow_type input.");
- return -EINVAL;
- }
- if (filter->action.rx_queue >= pf->dev_data->nb_rx_queues) {
- PMD_DRV_LOG(ERR, "Invalid queue ID");
- return -EINVAL;
- }
- if (filter->input.flow_ext.is_vf &&
- filter->input.flow_ext.dst_id >= pf->vf_num) {
- PMD_DRV_LOG(ERR, "Invalid VF ID");
- return -EINVAL;
- }
-
- memset(pkt, 0, I40E_FDIR_PKT_LEN);
-
- ret = i40e_fdir_construct_pkt(pf, &filter->input, pkt);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "construct packet for fdir fails.");
- return ret;
- }
-
- if (hw->mac.type == I40E_MAC_X722) {
- /* get translated pctype value in fd pctype register */
- pctype = (enum i40e_filter_pctype)i40e_read_rx_ctl(
- hw, I40E_GLQF_FD_PCTYPES((int)pctype));
- }
-
- ret = i40e_fdir_filter_programming(pf, pctype, filter, add);
- if (ret < 0) {
- PMD_DRV_LOG(ERR, "fdir programming fails for PCTYPE(%u).",
- pctype);
- return ret;
- }
-
- return ret;
-}
-
/**
* i40e_flow_add_del_fdir_filter - add or remove a flow director filter.
* @pf: board private structure
@@ -1771,141 +1556,6 @@ i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
return ret;
}
-/*
- * i40e_fdir_filter_programming - Program a flow director filter rule.
- * Is done by Flow Director Programming Descriptor followed by packet
- * structure that contains the filter fields need to match.
- * @pf: board private structure
- * @pctype: pctype
- * @filter: fdir filter entry
- * @add: 0 - delete, 1 - add
- */
-static int
-i40e_fdir_filter_programming(struct i40e_pf *pf,
- enum i40e_filter_pctype pctype,
- const struct rte_eth_fdir_filter *filter,
- bool add)
-{
- struct i40e_tx_queue *txq = pf->fdir.txq;
- struct i40e_rx_queue *rxq = pf->fdir.rxq;
- const struct rte_eth_fdir_action *fdir_action = &filter->action;
- volatile struct i40e_tx_desc *txdp;
- volatile struct i40e_filter_program_desc *fdirdp;
- uint32_t td_cmd;
- uint16_t vsi_id, i;
- uint8_t dest;
-
- PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
- fdirdp = (volatile struct i40e_filter_program_desc *)
- (&(txq->tx_ring[txq->tx_tail]));
-
- fdirdp->qindex_flex_ptype_vsi =
- rte_cpu_to_le_32((fdir_action->rx_queue <<
- I40E_TXD_FLTR_QW0_QINDEX_SHIFT) &
- I40E_TXD_FLTR_QW0_QINDEX_MASK);
-
- fdirdp->qindex_flex_ptype_vsi |=
- rte_cpu_to_le_32((fdir_action->flex_off <<
- I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT) &
- I40E_TXD_FLTR_QW0_FLEXOFF_MASK);
-
- fdirdp->qindex_flex_ptype_vsi |=
- rte_cpu_to_le_32((pctype <<
- I40E_TXD_FLTR_QW0_PCTYPE_SHIFT) &
- I40E_TXD_FLTR_QW0_PCTYPE_MASK);
-
- if (filter->input.flow_ext.is_vf)
- vsi_id = pf->vfs[filter->input.flow_ext.dst_id].vsi->vsi_id;
- else
- /* Use LAN VSI Id by default */
- vsi_id = pf->main_vsi->vsi_id;
- fdirdp->qindex_flex_ptype_vsi |=
- rte_cpu_to_le_32(((uint32_t)vsi_id <<
- I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT) &
- I40E_TXD_FLTR_QW0_DEST_VSI_MASK);
-
- fdirdp->dtype_cmd_cntindex =
- rte_cpu_to_le_32(I40E_TX_DESC_DTYPE_FILTER_PROG);
-
- if (add)
- fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32(
- I40E_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE <<
- I40E_TXD_FLTR_QW1_PCMD_SHIFT);
- else
- fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32(
- I40E_FILTER_PROGRAM_DESC_PCMD_REMOVE <<
- I40E_TXD_FLTR_QW1_PCMD_SHIFT);
-
- if (fdir_action->behavior == RTE_ETH_FDIR_REJECT)
- dest = I40E_FILTER_PROGRAM_DESC_DEST_DROP_PACKET;
- else if (fdir_action->behavior == RTE_ETH_FDIR_ACCEPT)
- dest = I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX;
- else if (fdir_action->behavior == RTE_ETH_FDIR_PASSTHRU)
- dest = I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER;
- else {
- PMD_DRV_LOG(ERR, "Failed to program FDIR filter:"
- " unsupported fdir behavior.");
- return -EINVAL;
- }
-
- fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32((dest <<
- I40E_TXD_FLTR_QW1_DEST_SHIFT) &
- I40E_TXD_FLTR_QW1_DEST_MASK);
-
- fdirdp->dtype_cmd_cntindex |=
- rte_cpu_to_le_32((fdir_action->report_status<<
- I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT) &
- I40E_TXD_FLTR_QW1_FD_STATUS_MASK);
-
- fdirdp->dtype_cmd_cntindex |=
- rte_cpu_to_le_32(I40E_TXD_FLTR_QW1_CNT_ENA_MASK);
- fdirdp->dtype_cmd_cntindex |=
- rte_cpu_to_le_32(
- ((uint32_t)pf->fdir.match_counter_index <<
- I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) &
- I40E_TXD_FLTR_QW1_CNTINDEX_MASK);
-
- fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id);
-
- PMD_DRV_LOG(INFO, "filling transmit descriptor.");
- txdp = &(txq->tx_ring[txq->tx_tail + 1]);
- txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
- td_cmd = I40E_TX_DESC_CMD_EOP |
- I40E_TX_DESC_CMD_RS |
- I40E_TX_DESC_CMD_DUMMY;
-
- txdp->cmd_type_offset_bsz =
- i40e_build_ctob(td_cmd, 0, I40E_FDIR_PKT_LEN, 0);
-
- txq->tx_tail += 2; /* set 2 descriptors above, fdirdp and txdp */
- if (txq->tx_tail >= txq->nb_tx_desc)
- txq->tx_tail = 0;
- /* Update the tx tail register */
- rte_wmb();
- I40E_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
- for (i = 0; i < I40E_FDIR_MAX_WAIT_US; i++) {
- if ((txdp->cmd_type_offset_bsz &
- rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
- rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
- break;
- rte_delay_us(1);
- }
- if (i >= I40E_FDIR_MAX_WAIT_US) {
- PMD_DRV_LOG(ERR, "Failed to program FDIR filter:"
- " time out to get DD on tx queue.");
- return -ETIMEDOUT;
- }
- /* totally delay 10 ms to check programming status*/
- for (; i < I40E_FDIR_MAX_WAIT_US; i++) {
- if (i40e_check_fdir_programming_status(rxq) >= 0)
- return 0;
- rte_delay_us(1);
- }
- PMD_DRV_LOG(ERR,
- "Failed to program FDIR filter: programming status reported.");
- return -ETIMEDOUT;
-}
-
/*
* i40e_flow_fdir_filter_programming - Program a flow director filter rule.
* Is done by Flow Director Programming Descriptor followed by packet
@@ -2224,32 +1874,6 @@ i40e_fdir_stats_get(struct rte_eth_dev *dev, struct rte_eth_fdir_stats *stat)
I40E_PFQF_FDSTAT_BEST_CNT_SHIFT);
}
-static int
-i40e_fdir_filter_set(struct rte_eth_dev *dev,
- struct rte_eth_fdir_filter_info *info)
-{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- int ret = 0;
-
- if (!info) {
- PMD_DRV_LOG(ERR, "Invalid pointer");
- return -EFAULT;
- }
-
- switch (info->info_type) {
- case RTE_ETH_FDIR_FILTER_INPUT_SET_SELECT:
- ret = i40e_fdir_filter_inset_select(pf,
- &(info->info.input_set_conf));
- break;
- default:
- PMD_DRV_LOG(ERR, "FD filter info type (%d) not supported",
- info->info_type);
- return -EINVAL;
- }
-
- return ret;
-}
-
/*
* i40e_fdir_ctrl_func - deal with all operations on flow director.
* @pf: board private structure
@@ -2274,26 +1898,9 @@ i40e_fdir_ctrl_func(struct rte_eth_dev *dev,
return -EINVAL;
switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- ret = i40e_add_del_fdir_filter(dev,
- (struct rte_eth_fdir_filter *)arg,
- TRUE);
- break;
- case RTE_ETH_FILTER_DELETE:
- ret = i40e_add_del_fdir_filter(dev,
- (struct rte_eth_fdir_filter *)arg,
- FALSE);
- break;
- case RTE_ETH_FILTER_FLUSH:
- ret = i40e_fdir_flush(dev);
- break;
case RTE_ETH_FILTER_INFO:
i40e_fdir_info_get(dev, (struct rte_eth_fdir_info *)arg);
break;
- case RTE_ETH_FILTER_SET:
- ret = i40e_fdir_filter_set(dev,
- (struct rte_eth_fdir_filter_info *)arg);
- break;
case RTE_ETH_FILTER_STATS:
i40e_fdir_stats_get(dev, (struct rte_eth_fdir_stats *)arg);
break;
--
2.17.1
* [dpdk-dev] [PATCH 4/4] net/i40e: implement hash function in rte flow API
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (2 preceding siblings ...)
2020-03-18 1:47 ` [dpdk-dev] [PATCH 3/4] net/i40e: " Chenxu Di
@ 2020-03-18 1:47 ` Chenxu Di
2020-03-18 3:00 ` [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Stephen Hemminger
` (8 subsequent siblings)
12 siblings, 0 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-18 1:47 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, Chenxu Di
Implement setting the hash global configuration, enabling symmetric hash,
and setting the hash input set in the rte_flow API.
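For reference only (not part of this patch), a minimal application-side
sketch that mirrors the second testpmd command added to the i40e guide
below: enable the ipv4-tcp hash with the l3-src-only input set through an
rte_flow RSS action. The helper name and the port argument are
illustrative:

    #include <rte_flow.h>
    #include <rte_ethdev.h>

    static struct rte_flow *
    ipv4_tcp_l3_src_only_hash(uint16_t port_id, struct rte_flow_error *err)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                    { .type = RTE_FLOW_ITEM_TYPE_TCP },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            /* enable the ipv4-tcp hash and restrict its input set to the
             * source IP; no queues are given, so no queue region is set up
             */
            struct rte_flow_action_rss rss = {
                    .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
                    .types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
            };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions, err);
    }

A separate rule with RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ in the .func
field requests symmetric hashing, and a rule with an empty pattern plus a
queue list configures a queue region, as described in the guide update.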
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
doc/guides/nics/i40e.rst | 14 +
drivers/net/i40e/i40e_ethdev.c | 451 ++++++++++++++++++++++++++++++---
drivers/net/i40e/i40e_ethdev.h | 18 ++
drivers/net/i40e/i40e_flow.c | 186 +++++++++++---
4 files changed, 597 insertions(+), 72 deletions(-)
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..9ba87b032 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,20 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+Setting the hash input set and enabling the hash are supported in the generic flow API.
+Because the queue region configuration in i40e applies to all PCTYPEs,
+the pattern must be empty (no PCTYPE) when configuring a queue region.
+The PCTYPE in the pattern and in the actions must match.
+For example, to configure a queue region with queues 0, 1, 2, 3,
+and to enable the ipv4-tcp PCTYPE hash with input set l3-src-only:
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ func end queues 0 1 2 3 end / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queue end / end
+
Limitations or Known issues
---------------------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 1ee60f18e..e32138023 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1653,6 +1653,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize rss rule list */
+ TAILQ_INIT(&pf->rss_info_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -11864,10 +11867,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
+ struct i40e_rte_flow_rss_filter *rss_item;
+
+ TAILQ_FOREACH(rss_item, rss_list, next) {
+ i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
+ }
}
static void
@@ -12491,31 +12496,214 @@ i40e_action_rss_same(const struct rte_flow_action_rss *comp,
sizeof(*with->queue) * with->queue_num));
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* config rss hash input set */
+static int
+i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_type_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
};
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(types & (1ull << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_type_table); j++) {
+ if ((types & inset_type_table[j].type) ==
+ inset_type_table[j].type) {
+ conf.field[conf.inset_size] =
+ inset_type_table[j].field;
+ conf.inset_size++;
+ }
+ }
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
}
+ }
+
+ return ret;
+}
+
+/* set existing rule invalid if it is covered */
+static void
+i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rte_flow_rss_filter *rss_item;
+ uint64_t input_bits;
+
+ /* to compare PCTYPEs, mask out the input set bits */
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* config rss queue rule */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss input set rule */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function symmetric rule */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function xor or toeplitz rule */
+ if (rss_item->rss_filter_info.conf.func !=
+ RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ (rss_item->rss_filter_info.conf.types & input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* config rss hash enable and set hash input set */
+static int
+i40e_config_hash_pctype_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Confirm hash input set */
+ if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss queue region */
+static int
+i40e_config_hash_queue_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -12535,6 +12723,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -12545,29 +12734,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hash function */
+static int
+i40e_config_hash_function_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_hash_global_conf g_cfg;
+ uint64_t input_bits;
+
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ } else {
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ g_cfg.hash_func = conf->conf.func;
+ g_cfg.sym_hash_enable_mask[0] = conf->conf.types & input_bits;
+ g_cfg.valid_bit_mask[0] = conf->conf.types & input_bits;
+ i40e_set_hash_filter_global_config(hw, &g_cfg);
+ }
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hena disable and set hash input set to default */
+static int
+i40e_config_hash_pctype_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* set hash enable register to disable */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
+ i40e_hw_rss_hash_set(pf, &rss_conf);
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash input set default */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ return 0;
+}
+
+/* config rss queue region to default */
+static int
+i40e_config_hash_queue_del(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+/* config rss hash function to default */
+static int
+i40e_config_hash_function_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i;
+ uint16_t j;
+
+ /* set symmetric hash to default status */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 0);
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash global config disable */
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j), 0);
+ }
}
- i40e_hw_rss_hash_set(pf, &rss_conf);
+ return 0;
+}
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* config rss queue region */
+ ret = i40e_config_hash_queue_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+ /* config hash function */
+ ret = i40e_config_hash_function_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* config hash enable and input set for each pctype */
+ ret = i40e_config_hash_pctype_add(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* update rss info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_config_hash_queue_del(pf);
+ else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ i40e_config_hash_function_del(pf, conf);
+ else
+ i40e_config_hash_pctype_del(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 22170dec6..35701c6bc 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
+
+/* rss filter list structure */
+struct i40e_rte_flow_rss_filter {
+ TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1039,6 +1056,7 @@ struct i40e_pf {
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rss_conf_list rss_info_list; /* rss rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..4774fde6d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform the pattern to a pctype. A valid pctype will
+ * be used when parsing the action.
+ * Case 2: only ETH, indicate flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
@@ -4438,15 +4438,66 @@ static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash params, the pctype in the action and pattern must be the same.
+ * Setting a queue index or symmetric hash enable requires empty types.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "no valid rules");
return -rte_errno;
}
@@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues must be empty while"
+ " setting SYMMETRIC hash function");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
{
int ret;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ struct i40e_rss_pattern_info p_info;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* the new rule created is always valid
+ * the existing rule covered by the new rule will be set invalid
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_info_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5352,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
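A minimal usage sketch, not taken from the patch, of the two kinds of rte_flow RSS rules
the parser above accepts: a symmetric Toeplitz rule with an empty pattern, empty types and
no queues, and an ipv4-tcp rule that enables hashing with the l3-src-only input set. The
port id, queue setup and surrounding init code are assumed; errors are only propagated
through the rte_flow_error pointer.

/*
 * Minimal sketch, not part of the patch: building the two kinds of
 * rte_flow RSS rules the new i40e parser accepts.
 */
#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
i40e_rss_rules_example(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };

	/* Rule 1: enable the symmetric Toeplitz hash function.
	 * Pattern, types and queues must all be empty for this case.
	 */
	struct rte_flow_item empty_pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss sym_conf = {
		.func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
	};
	struct rte_flow_action sym_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &sym_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	if (rte_flow_create(port_id, &attr, empty_pattern,
			    sym_actions, err) == NULL)
		return NULL;

	/* Rule 2: ipv4-tcp pattern with a matching pctype in the action,
	 * hashing on the source IP address only (input set l3-src-only).
	 */
	struct rte_flow_item tcp_pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss inset_conf = {
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
		.types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
	};
	struct rte_flow_action inset_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &inset_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, tcp_pattern,
			       inset_actions, err);
}

These calls mirror the testpmd commands added to the i40e guide, with the same constraint
that the pctype implied by the pattern has to match the types given in the RSS action.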
* Re: [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (3 preceding siblings ...)
2020-03-18 1:47 ` [dpdk-dev] [PATCH 4/4] net/i40e: implement hash function in rte flow API Chenxu Di
@ 2020-03-18 3:00 ` Stephen Hemminger
2020-03-19 6:39 ` [dpdk-dev] [PATCH v2] net/i40e: implement hash function in rte flow API Chenxu Di
` (7 subsequent siblings)
12 siblings, 0 replies; 26+ messages in thread
From: Stephen Hemminger @ 2020-03-18 3:00 UTC (permalink / raw)
To: Chenxu Di; +Cc: dev, Yang Qiming
On Wed, 18 Mar 2020 01:47:06 +0000
Chenxu Di <chenxux.di@intel.com> wrote:
> remove legacy filter functions already implemented in rte_flow
> for drivers igb, ixgbe, and i40e.
> implement hash function include set hash function and set hash
> input set in rte_flow for driver i40e.
>
> Chenxu Di (4):
> net/e1000: remove the legacy filter functions
> net/ixgbe: remove the legacy filter functions
> net/i40e: remove the legacy filter functions
> net/i40e: implement hash function in rte flow API
>
> doc/guides/nics/i40e.rst | 14 +
> doc/guides/rel_notes/release_20_05.rst | 9 +
> drivers/net/e1000/igb_ethdev.c | 36 -
> drivers/net/i40e/i40e_ethdev.c | 913 +++++++++++--------------
> drivers/net/i40e/i40e_ethdev.h | 26 +-
> drivers/net/i40e/i40e_fdir.c | 393 -----------
> drivers/net/i40e/i40e_flow.c | 186 ++++-
> drivers/net/ixgbe/ixgbe_ethdev.c | 78 ---
> drivers/net/ixgbe/ixgbe_fdir.c | 11 -
> 9 files changed, 610 insertions(+), 1056 deletions(-)
>
This looks like an API break for users of the legacy filter API, even
though filter_ctrl is marked as deprecated. That removal probably has
to wait for 20.11. At that point, drop the ethdev
ops handle, the rte_eth_dev_filter_ctrl API (etc) and fix all the test
code.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] net/e1000: remove the legacy filter functions
2020-03-18 1:47 ` [dpdk-dev] [PATCH 1/4] net/e1000: remove the legacy filter functions Chenxu Di
@ 2020-03-18 3:15 ` Yang, Qiming
0 siblings, 0 replies; 26+ messages in thread
From: Yang, Qiming @ 2020-03-18 3:15 UTC (permalink / raw)
To: Di, ChenxuX, dev
> -----Original Message-----
> From: Di, ChenxuX <chenxux.di@intel.com>
> Sent: Wednesday, March 18, 2020 09:47
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [PATCH 1/4] net/e1000: remove the legacy filter functions
>
> remove the legacy filter functions in Intel igb driver.
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> doc/guides/rel_notes/release_20_05.rst | 9 +++++++
> drivers/net/e1000/igb_ethdev.c | 36 --------------------------
> 2 files changed, 9 insertions(+), 36 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 2190eaf85..e79f8d841 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -56,6 +56,15 @@ New Features
> Also, make sure to start the actual text at the margin.
>
> =========================================================
>
> +* **remove legacy filter API and switch to rte flow**
> +
> + remove legacy filter API functions and switch to rte_flow in drivers,
> including:
> +
> + * remove legacy filter API functions in the Intel igb driver.
> + * remove legacy filter API functions in the Intel ixgbe driver.
> + * remove legacy filter API functions in the Intel i40 driver.
You only deleted part of the legacy functions.
> + * Added support set hash function and set hash input set in rte flow API.
> +
>
> Removed Items
> -------------
> diff --git a/drivers/net/e1000/igb_ethdev.c
> b/drivers/net/e1000/igb_ethdev.c index 520fba8fa..2d660eb7e 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -3716,16 +3716,6 @@ eth_igb_syn_filter_handle(struct rte_eth_dev
> *dev,
> }
>
> switch (filter_op) {
> - case RTE_ETH_FILTER_ADD:
> - ret = eth_igb_syn_filter_set(dev,
> - (struct rte_eth_syn_filter *)arg,
> - TRUE);
> - break;
> - case RTE_ETH_FILTER_DELETE:
> - ret = eth_igb_syn_filter_set(dev,
> - (struct rte_eth_syn_filter *)arg,
> - FALSE);
> - break;
> case RTE_ETH_FILTER_GET:
> ret = eth_igb_syn_filter_get(dev,
> (struct rte_eth_syn_filter *)arg);
> @@ -4207,12 +4197,6 @@ eth_igb_flex_filter_handle(struct rte_eth_dev
> *dev,
> }
>
> switch (filter_op) {
> - case RTE_ETH_FILTER_ADD:
> - ret = eth_igb_add_del_flex_filter(dev, filter, TRUE);
> - break;
> - case RTE_ETH_FILTER_DELETE:
> - ret = eth_igb_add_del_flex_filter(dev, filter, FALSE);
> - break;
> case RTE_ETH_FILTER_GET:
> ret = eth_igb_get_flex_filter(dev, filter);
> break;
> @@ -4713,16 +4697,6 @@ igb_ntuple_filter_handle(struct rte_eth_dev *dev,
> }
>
> switch (filter_op) {
> - case RTE_ETH_FILTER_ADD:
> - ret = igb_add_del_ntuple_filter(dev,
> - (struct rte_eth_ntuple_filter *)arg,
> - TRUE);
> - break;
> - case RTE_ETH_FILTER_DELETE:
> - ret = igb_add_del_ntuple_filter(dev,
> - (struct rte_eth_ntuple_filter *)arg,
> - FALSE);
> - break;
> case RTE_ETH_FILTER_GET:
> ret = igb_get_ntuple_filter(dev,
> (struct rte_eth_ntuple_filter *)arg); @@ -4894,16
> +4868,6 @@ igb_ethertype_filter_handle(struct rte_eth_dev *dev,
> }
>
> switch (filter_op) {
> - case RTE_ETH_FILTER_ADD:
> - ret = igb_add_del_ethertype_filter(dev,
> - (struct rte_eth_ethertype_filter *)arg,
> - TRUE);
> - break;
> - case RTE_ETH_FILTER_DELETE:
> - ret = igb_add_del_ethertype_filter(dev,
> - (struct rte_eth_ethertype_filter *)arg,
> - FALSE);
> - break;
> case RTE_ETH_FILTER_GET:
> ret = igb_get_ethertype_filter(dev,
> (struct rte_eth_ethertype_filter *)arg);
> --
> 2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
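For reference, a hedged sketch of what replaces the removed RTE_ETH_FILTER_ADD paths on
the application side: the same EtherType match and queue assignment expressed as an
rte_flow rule. The EtherType and queue values, and the helper name, are illustrative only.

/*
 * Sketch only: the legacy ethertype-filter add path expressed through
 * rte_flow as an ETH item matching the EtherType plus a QUEUE action.
 */
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
ethertype_to_queue_rule(uint16_t port_id, uint16_t ether_type,
			uint16_t queue_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = {
		.type = rte_cpu_to_be_16(ether_type),
	};
	struct rte_flow_item_eth eth_mask = {
		.type = rte_cpu_to_be_16(0xffff),	/* match EtherType only */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}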
* [dpdk-dev] [PATCH v2] net/i40e: implement hash function in rte flow API
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (4 preceding siblings ...)
2020-03-18 3:00 ` [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Stephen Hemminger
@ 2020-03-19 6:39 ` Chenxu Di
2020-03-20 1:24 ` [dpdk-dev] [PATCH v3] " Chenxu Di
` (6 subsequent siblings)
12 siblings, 0 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-19 6:39 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, Chenxu Di
implement setting the hash global configuration, symmetric hash enable
and the hash input set in the rte flow API.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
v2:
-dropped the removal of the legacy filter functions.
---
doc/guides/nics/i40e.rst | 14 +
doc/guides/rel_notes/release_20_05.rst | 6 +
drivers/net/i40e/i40e_ethdev.c | 451 ++++++++++++++++++++++---
drivers/net/i40e/i40e_ethdev.h | 18 +
drivers/net/i40e/i40e_flow.c | 186 ++++++++--
5 files changed, 603 insertions(+), 72 deletions(-)
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..9ba87b032 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,20 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+Hash input set and hash enable can be configured through the generic flow API.
+Because the queue region configuration in i40e applies to all PCTYPEs,
+the pctype must be empty while configuring a queue region.
+The pctype in the pattern and in the actions must match.
+For example, to configure a queue region with queues 0, 1, 2, 3,
+and to enable hashing for PCTYPE ipv4-tcp with input set l3-src-only:
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ func end queues 0 1 2 3 end / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queue end / end
+
Limitations or Known issues
---------------------------
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf501..89ce8de6c 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,12 @@ New Features
* Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
+* **Updated Intel i40e driver.**
+
+ Updated i40e PMD with new features and improvements, including:
+
+ * Added support for setting the hash function and hash input set in the rte flow API.
+
Removed Items
-------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9539b0470..62e4ef7a7 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize rss rule list */
+ TAILQ_INIT(&pf->rss_info_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
+ struct i40e_rte_flow_rss_filter *rss_item;
+
+ TAILQ_FOREACH(rss_item, rss_list, next) {
+ i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
+ }
}
static void
@@ -12956,31 +12961,214 @@ i40e_action_rss_same(const struct rte_flow_action_rss *comp,
sizeof(*with->queue) * with->queue_num));
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* config rss hash input set */
+static int
+i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_type_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
};
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(types & (1ull << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_type_table); j++) {
+ if ((types & inset_type_table[j].type) ==
+ inset_type_table[j].type) {
+ conf.field[conf.inset_size] =
+ inset_type_table[j].field;
+ conf.inset_size++;
+ }
+ }
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
}
+ }
+
+ return ret;
+}
+
+/* set existing rule invalid if it is covered */
+static void
+i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rte_flow_rss_filter *rss_item;
+ uint64_t input_bits;
+
+ /* compare pctypes with the input set bits masked out */
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* config rss queue rule */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss input set rule */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function symmetric rule */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function xor or toeplitz rule */
+ if (rss_item->rss_filter_info.conf.func !=
+ RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ (rss_item->rss_filter_info.conf.types & input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* config rss hash enable and set hash input set */
+static int
+i40e_config_hash_pctype_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Confirm hash input set */
+ if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss queue region */
+static int
+i40e_config_hash_queue_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13000,6 +13188,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -13010,29 +13199,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hash function */
+static int
+i40e_config_hash_function_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_hash_global_conf g_cfg;
+ uint64_t input_bits;
+
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ } else {
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ g_cfg.hash_func = conf->conf.func;
+ g_cfg.sym_hash_enable_mask[0] = conf->conf.types & input_bits;
+ g_cfg.valid_bit_mask[0] = conf->conf.types & input_bits;
+ i40e_set_hash_filter_global_config(hw, &g_cfg);
+ }
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hena disable and set hash input set to default */
+static int
+i40e_config_hash_pctype_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* set hash enable register to disable */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
+ i40e_hw_rss_hash_set(pf, &rss_conf);
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash input set default */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ return 0;
+}
+
+/* config rss queue region to default */
+static int
+i40e_config_hash_queue_del(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+/* config rss hash function to default */
+static int
+i40e_config_hash_function_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i;
+ uint16_t j;
+
+ /* set symmetric hash to default status */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 0);
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash global config disable */
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j), 0);
+ }
}
- i40e_hw_rss_hash_set(pf, &rss_conf);
+ return 0;
+}
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* config rss queue region */
+ ret = i40e_config_hash_queue_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+ /* config hash function */
+ ret = i40e_config_hash_function_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* config hash enable and input set for each pctype */
+ ret = i40e_config_hash_pctype_add(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* update rss info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_config_hash_queue_del(pf);
+ else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ i40e_config_hash_function_del(pf, conf);
+ else
+ i40e_config_hash_pctype_del(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..1e4e64ea7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
+
+/* rss filter list structure */
+struct i40e_rte_flow_rss_filter {
+ TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1039,6 +1056,7 @@ struct i40e_pf {
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rss_conf_list rss_info_list; /* rss rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..4774fde6d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform the pattern to a pctype. A valid pctype will
+ * be used when parsing the action.
+ * Case 2: only ETH, indicate flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
@@ -4438,15 +4438,66 @@ static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash params, the pctype in the action and pattern must be the same.
+ * Setting a queue index or symmetric hash enable requires empty types.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "no valid rules");
return -rte_errno;
}
@@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues must be empty while"
+ " setting SYMMETRIC hash function");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
{
int ret;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ struct i40e_rss_pattern_info p_info;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* the new rule created is always valid
+ * the existing rule covered by the new rule will be set invalid
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_info_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5352,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
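A small standalone model, using illustrative names and constants rather than the driver's,
of the covered-rule bookkeeping that i40e_config_rss_invalidate_previous_rule introduces in
the patch above: pctypes are compared with the L3/L4 SRC/DST_ONLY bits masked out, and an
older rule covered by a newly added one is only marked invalid instead of being removed,
which lets i40e_config_rss_filter skip the hardware teardown for rules that are no longer
valid.

/*
 * Simplified, standalone model of the covered-rule bookkeeping; the
 * constants and struct names are stand-ins, not the driver's.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

#define RSS_L3_SRC_ONLY (1ULL << 60)	/* stand-ins for the ETH_RSS_* bits */
#define RSS_L3_DST_ONLY (1ULL << 61)
#define RSS_L4_SRC_ONLY (1ULL << 62)
#define RSS_L4_DST_ONLY (1ULL << 63)
#define RSS_IPV4_TCP	(1ULL << 4)

struct rss_rule {
	TAILQ_ENTRY(rss_rule) next;
	uint64_t types;		/* pctype plus optional input-set bits */
	bool valid;
};

TAILQ_HEAD(rss_rule_list, rss_rule);

/* mark previously added rules invalid when the new rule covers them */
static void
invalidate_covered(struct rss_rule_list *list, uint64_t new_types)
{
	const uint64_t pctype_mask = ~(RSS_L3_SRC_ONLY | RSS_L3_DST_ONLY |
				       RSS_L4_SRC_ONLY | RSS_L4_DST_ONLY);
	struct rss_rule *r;

	TAILQ_FOREACH(r, list, next) {
		if (!r->valid)
			continue;
		if (new_types &&
		    (r->types & pctype_mask) == (new_types & pctype_mask))
			r->valid = false;
	}
}

static void
add_rule(struct rss_rule_list *list, uint64_t types)
{
	struct rss_rule *r = calloc(1, sizeof(*r));

	if (r == NULL)
		exit(EXIT_FAILURE);
	invalidate_covered(list, types);
	r->types = types;
	r->valid = true;	/* the newest rule is always valid */
	TAILQ_INSERT_TAIL(list, r, next);
}

int
main(void)
{
	struct rss_rule_list list = TAILQ_HEAD_INITIALIZER(list);
	struct rss_rule *r;

	add_rule(&list, RSS_IPV4_TCP | RSS_L3_SRC_ONLY);
	add_rule(&list, RSS_IPV4_TCP | RSS_L4_DST_ONLY);	/* covers the first */

	TAILQ_FOREACH(r, &list, next)
		printf("types=0x%llx valid=%d\n",
		       (unsigned long long)r->types, r->valid);
	return 0;
}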
* [dpdk-dev] [PATCH v3] net/i40e: implement hash function in rte flow API
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (5 preceding siblings ...)
2020-03-19 6:39 ` [dpdk-dev] [PATCH v2] net/i40e: implement hash function in rte flow API Chenxu Di
@ 2020-03-20 1:24 ` Chenxu Di
2020-03-23 8:25 ` [dpdk-dev] [PATCH v4] " Chenxu Di
` (5 subsequent siblings)
12 siblings, 0 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-20 1:24 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, bernard.iremonger, Chenxu Di
implement setting the hash global configuration, symmetric hash enable
and the hash input set in the rte flow API.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
v3:
-modified the doc i40e.rst
v2:
-dropped the removal of the legacy filter functions.
---
doc/guides/nics/i40e.rst | 14 +
doc/guides/rel_notes/release_20_05.rst | 6 +
drivers/net/i40e/i40e_ethdev.c | 451 ++++++++++++++++++++++---
drivers/net/i40e/i40e_ethdev.h | 18 +
drivers/net/i40e/i40e_flow.c | 186 ++++++++--
5 files changed, 603 insertions(+), 72 deletions(-)
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..03b117a99 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,20 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+Hash input set and hash enable can be configured through the generic flow API.
+Because the queue region configuration in i40e applies to all PCTYPEs,
+the pctype must be empty while configuring a queue region.
+The pctype in the pattern and in the actions must match.
+For example, to configure a queue region with queues 0, 1, 2, 3,
+and to enable hashing for PCTYPE ipv4-tcp with input set l3-src-only:
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues 0 1 2 3 end / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queues end / end
+
Limitations or Known issues
---------------------------
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf501..89ce8de6c 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,12 @@ New Features
* Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
+* **Updated Intel i40e driver.**
+
+ Updated i40e PMD with new features and improvements, including:
+
+ * Added support for setting the hash function and hash input set in the rte flow API.
+
Removed Items
-------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9539b0470..62e4ef7a7 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize rss rule list */
+ TAILQ_INIT(&pf->rss_info_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
+ struct i40e_rte_flow_rss_filter *rss_item;
+
+ TAILQ_FOREACH(rss_item, rss_list, next) {
+ i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
+ }
}
static void
@@ -12956,31 +12961,214 @@ i40e_action_rss_same(const struct rte_flow_action_rss *comp,
sizeof(*with->queue) * with->queue_num));
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* config rss hash input set */
+static int
+i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_type_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
};
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(types & (1ull << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_type_table); j++) {
+ if ((types & inset_type_table[j].type) ==
+ inset_type_table[j].type) {
+ conf.field[conf.inset_size] =
+ inset_type_table[j].field;
+ conf.inset_size++;
+ }
+ }
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
}
+ }
+
+ return ret;
+}
+
+/* set existing rule invalid if it is covered */
+static void
+i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rte_flow_rss_filter *rss_item;
+ uint64_t input_bits;
+
+ /* compare pctypes with the input set bits masked out */
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* config rss queue rule */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss input set rule */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function symmetric rule */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function xor or toeplitz rule */
+ if (rss_item->rss_filter_info.conf.func !=
+ RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ (rss_item->rss_filter_info.conf.types & input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* config rss hash enable and set hash input set */
+static int
+i40e_config_hash_pctype_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Confirm hash input set */
+ if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss queue region */
+static int
+i40e_config_hash_queue_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13000,6 +13188,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -13010,29 +13199,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hash function */
+static int
+i40e_config_hash_function_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_hash_global_conf g_cfg;
+ uint64_t input_bits;
+
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ } else {
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ g_cfg.hash_func = conf->conf.func;
+ g_cfg.sym_hash_enable_mask[0] = conf->conf.types & input_bits;
+ g_cfg.valid_bit_mask[0] = conf->conf.types & input_bits;
+ i40e_set_hash_filter_global_config(hw, &g_cfg);
+ }
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hena disable and set hash input set to default */
+static int
+i40e_config_hash_pctype_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* set hash enable register to disable */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
+ i40e_hw_rss_hash_set(pf, &rss_conf);
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash input set default */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ return 0;
+}
+
+/* config rss queue region to default */
+static int
+i40e_config_hash_queue_del(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+/* config rss hash function to default */
+static int
+i40e_config_hash_function_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i;
+ uint16_t j;
+
+ /* set symmetric hash to default status */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 0);
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash global config disable */
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j), 0);
+ }
}
- i40e_hw_rss_hash_set(pf, &rss_conf);
+ return 0;
+}
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* config rss queue region */
+ ret = i40e_config_hash_queue_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+ /* config hash function */
+ ret = i40e_config_hash_function_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* config hash enable and input set for each pctype */
+ ret = i40e_config_hash_pctype_add(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* update rss info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_config_hash_queue_del(pf);
+ else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ i40e_config_hash_function_del(pf, conf);
+ else
+ i40e_config_hash_pctype_del(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..1e4e64ea7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
+
+/* rss filter list structure */
+struct i40e_rte_flow_rss_filter {
+ TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1039,6 +1056,7 @@ struct i40e_pf {
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rss_conf_list rss_info_list; /* rss rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..4774fde6d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform the pattern into a pctype; a valid pctype
+ * will be used when parsing the action.
+ * Case 2: only ETH, indicates the flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicates the user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
@@ -4438,15 +4438,66 @@ static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash params, the pctype in the action and in the pattern must be the same.
+ * Setting a queue index or enabling symmetric hash requires empty types.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "no valid rules");
return -rte_errno;
}
@@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues must be empty while"
+ " setting SYMMETRIC hash function");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
{
int ret;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ struct i40e_rss_pattern_info p_info;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* the newly created rule is always valid;
+ * an existing rule covered by the new rule will be set invalid
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_info_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5352,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCH v4] net/i40e: implement hash function in rte flow API
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (6 preceding siblings ...)
2020-03-20 1:24 ` [dpdk-dev] [PATCH v3] " Chenxu Di
@ 2020-03-23 8:25 ` Chenxu Di
2020-03-24 3:28 ` Yang, Qiming
2020-03-24 8:17 ` [dpdk-dev] [PATCH v5] " Chenxu Di
` (4 subsequent siblings)
12 siblings, 1 reply; 26+ messages in thread
From: Chenxu Di @ 2020-03-23 8:25 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, Chenxu Di
implement set hash global configurations, set symmetric hash enable
and set hash input set in rte flow API.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
v4:
-added check for l3 pctype with l4 input set.
v3:
-modified the doc i40e.rst
v2:
-canceled remove legacy filter functions.
---
doc/guides/nics/i40e.rst | 14 +
doc/guides/rel_notes/release_20_05.rst | 6 +
drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
drivers/net/i40e/i40e_ethdev.h | 18 +
drivers/net/i40e/i40e_flow.c | 186 ++++++++--
5 files changed, 623 insertions(+), 72 deletions(-)
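As a rough illustration of the symmetric hash part of this change, an application could request it through the public rte_flow API along the following lines. This is a minimal sketch only: the helper name and port id are illustrative, error handling is abbreviated, and the empty pattern, types and queues follow the constraint in this patch that pctype and queues stay empty when selecting the symmetric hash function.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: ask the PMD to enable the symmetric Toeplitz hash function.
 * Pattern, types and queues are left empty, as required for this case.
 */
static int
enable_symmetric_hash(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(port_id, &attr, pattern, actions, &err) ?
			0 : -rte_errno;
}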
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..03b117a99 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,20 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+Enable set hash input set and hash enable in generic flow API.
+For the reason queue region configuration in i40e is for all PCTYPE,
+pctype must be empty while configuring queue region.
+The pctype in pattern and actions must be matched.
+For exampale, to set queue region configuration queue 0, 1, 2, 3
+and set PCTYPE ipv4-tcp hash enable and set input set l3-src-only:
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues 0 1 2 3 end / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queues end / end
+
Limitations or Known issues
---------------------------
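For reference, the second testpmd command above corresponds roughly to the following application-level rte_flow call. This is a hedged sketch: the helper name is illustrative, the RSS type flags and eth/ipv4/tcp items are taken from the standard rte_flow/ethdev API, and error handling is abbreviated.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: enable the ipv4-tcp hash with only the IPv4 source address
 * in the input set (the "l3-src-only" case from the doc above).
 */
static int
set_ipv4_tcp_l3_src_hash(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss rss = {
		.types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
		/* queue_num is 0: configure hashing only, no queue region */
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(port_id, &attr, pattern, actions, &err) ?
			0 : -rte_errno;
}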
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf501..89ce8de6c 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,12 @@ New Features
* Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
+* **Updated Intel i40e driver.**
+
+ Updated i40e PMD with new features and improvements, including:
+
+ * Added support set hash function and set hash input set in rte flow API.
+
Removed Items
-------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9539b0470..e80553010 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize rss rule list */
+ TAILQ_INIT(&pf->rss_info_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
+ struct i40e_rte_flow_rss_filter *rss_item;
+
+ TAILQ_FOREACH(rss_item, rss_list, next) {
+ i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
+ }
}
static void
@@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct rte_flow_action_rss *comp,
sizeof(*with->queue) * with->queue_num));
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* config rss hash input set */
+static int
+i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_type_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
};
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(types & (1ull << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_type_table); j++) {
+ if ((types & inset_type_table[j].type) ==
+ inset_type_table[j].type) {
+ if (inset_type_table[j].field ==
+ RTE_ETH_INPUT_SET_UNKNOWN) {
+ return -EINVAL;
+ }
+ conf.field[conf.inset_size] =
+ inset_type_table[j].field;
+ conf.inset_size++;
+ }
}
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+/* set existing rule invalid if it is covered */
+static void
+i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rte_flow_rss_filter *rss_item;
+ uint64_t input_bits;
+
+ /* mask out the input set bits so that only the pctype is compared */
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* config rss queue rule */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss input set rule */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function symmetric rule */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function xor or toeplitz rule */
+ if (rss_item->rss_filter_info.conf.func !=
+ RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ (rss_item->rss_filter_info.conf.types & input_bits) ==
+ (conf->conf.types & input_bits))
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* config rss hash enable and set hash input set */
+static int
+i40e_config_hash_pctype_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Confirm hash input set */
+ if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss queue region */
+static int
+i40e_config_hash_queue_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hash function */
+static int
+i40e_config_hash_function_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_hash_global_conf g_cfg;
+ uint64_t input_bits;
+
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ } else {
+ input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ g_cfg.hash_func = conf->conf.func;
+ g_cfg.sym_hash_enable_mask[0] = conf->conf.types & input_bits;
+ g_cfg.valid_bit_mask[0] = conf->conf.types & input_bits;
+ i40e_set_hash_filter_global_config(hw, &g_cfg);
+ }
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hena disable and set hash input set to default */
+static int
+i40e_config_hash_pctype_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* set hash enable register to disable */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
+ i40e_hw_rss_hash_set(pf, &rss_conf);
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash input set default */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ return 0;
+}
+
+/* config rss queue region to default */
+static int
+i40e_config_hash_queue_del(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+/* config rss hash function to default */
+static int
+i40e_config_hash_function_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i;
+ uint16_t j;
+
+ /* set symmetric hash to default status */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 0);
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash global config disable */
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j), 0);
+ }
}
- i40e_hw_rss_hash_set(pf, &rss_conf);
+ return 0;
+}
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* config rss queue region */
+ ret = i40e_config_hash_queue_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+ /* config hash function */
+ ret = i40e_config_hash_function_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* config hash enable and input set for each pctype */
+ ret = i40e_config_hash_pctype_add(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* update rss info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_config_hash_queue_del(pf);
+ else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ i40e_config_hash_function_del(pf, conf);
+ else
+ i40e_config_hash_pctype_del(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..1e4e64ea7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
+
+/* rss filter list structure */
+struct i40e_rte_flow_rss_filter {
+ TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1039,6 +1056,7 @@ struct i40e_pf {
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rss_conf_list rss_info_list; /* rss rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..4774fde6d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform the pattern into a pctype; a valid pctype
+ * will be used when parsing the action.
+ * Case 2: only ETH, indicates the flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicates the user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
@@ -4438,15 +4438,66 @@ static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash params, the pctype in the action and in the pattern must be the same.
+ * Setting a queue index or enabling symmetric hash requires empty types.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "no valid rules");
return -rte_errno;
}
@@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues must be empty while"
+ " setting SYMMETRIC hash function");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
{
int ret;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ struct i40e_rss_pattern_info p_info;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* the newly created rule is always valid;
+ * an existing rule covered by the new rule will be set invalid
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_info_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5352,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v4] net/i40e: implement hash function in rte flow API
2020-03-23 8:25 ` [dpdk-dev] [PATCH v4] " Chenxu Di
@ 2020-03-24 3:28 ` Yang, Qiming
0 siblings, 0 replies; 26+ messages in thread
From: Yang, Qiming @ 2020-03-24 3:28 UTC (permalink / raw)
To: Di, ChenxuX; +Cc: dev
Comments inline. I think many of the names are not suitable.
BTW, you should CC Beilei and Zhaowei to review.
Qiming
> -----Original Message-----
> From: Di, ChenxuX <chenxux.di@intel.com>
> Sent: Monday, March 23, 2020 16:25
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [PATCH v4] net/i40e: implement hash function in rte flow API
>
> implement set hash global configurations, set symmetric hash enable and
> set hash input set in rte flow API.
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> v4:
> -added check for l3 pctype with l4 input set.
> v3:
> -modified the doc i40e.rst
> v2:
> -canceled remove legacy filter functions.
> ---
> doc/guides/nics/i40e.rst | 14 +
> doc/guides/rel_notes/release_20_05.rst | 6 +
> drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
> drivers/net/i40e/i40e_ethdev.h | 18 +
> drivers/net/i40e/i40e_flow.c | 186 ++++++++--
> 5 files changed, 623 insertions(+), 72 deletions(-)
>
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> d6e578eda..03b117a99 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -569,6 +569,20 @@ details please refer
> to :doc:`../testpmd_app_ug/index`.
> testpmd> set port (port_id) queue-region flush (on|off)
> testpmd> show port (port_id) queue-region
>
> +Generic flow API
> +~~~~~~~~~~~~~~~~~~~
> +Enable set hash input set and hash enable in generic flow API.
> +For the reason queue region configuration in i40e is for all PCTYPE,
> +pctype must be empty while configuring queue region.
> +The pctype in pattern and actions must be matched.
> +For exampale, to set queue region configuration queue 0, 1, 2, 3 and
> +set PCTYPE ipv4-tcp hash enable and set input set l3-src-only:
> +
> + testpmd> flow create 0 ingress pattern end actions rss types end \
> + queues 0 1 2 3 end / end
> + testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
> + actions rss types ipv4-tcp l3-src-only end queues end / end
> +
Do we already have a generic flow API section for i40e in the doc? Can you merge this part into that existing section? I think this feature is an addition to the generic flow API, not a new API.
> Limitations or Known issues
> ---------------------------
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 000bbf501..89ce8de6c 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -62,6 +62,12 @@ New Features
>
> * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
>
> +* **Updated Intel i40e driver.**
> +
> + Updated i40e PMD with new features and improvements, including:
> +
> + * Added support set hash function and set hash input set in rte flow API.
Add support for doing sth and sth
> +
>
> Removed Items
> -------------
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 9539b0470..e80553010 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void
> *init_params __rte_unused)
> /* initialize mirror rule list */
> TAILQ_INIT(&pf->mirror_list);
>
> + /* initialize rss rule list */
> + TAILQ_INIT(&pf->rss_info_list);
> +
> /* initialize Traffic Manager configuration */
> i40e_tm_conf_init(dev);
>
> @@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
> static inline void i40e_rss_filter_restore(struct i40e_pf *pf) {
> - struct i40e_rte_flow_rss_conf *conf =
> - &pf->rss_info;
> - if (conf->conf.queue_num)
> - i40e_config_rss_filter(pf, conf, TRUE);
> + struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
> + struct i40e_rte_flow_rss_filter *rss_item;
> +
> + TAILQ_FOREACH(rss_item, rss_list, next) {
> + i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
> + }
> }
>
> static void
> @@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct
> rte_flow_action_rss *comp,
> sizeof(*with->queue) * with->queue_num)); }
>
> -int
> -i40e_config_rss_filter(struct i40e_pf *pf,
> - struct i40e_rte_flow_rss_conf *conf, bool add)
> +/* config rss hash input set */
> +static int
> +i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
> {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> - uint32_t i, lut = 0;
> - uint16_t j, num;
> - struct rte_eth_rss_conf rss_conf = {
> - .rss_key = conf->conf.key_len ?
> - (void *)(uintptr_t)conf->conf.key : NULL,
> - .rss_key_len = conf->conf.key_len,
> - .rss_hf = conf->conf.types,
> + struct rte_eth_input_set_conf conf;
> + int i, ret;
> + uint32_t j;
> + static const struct {
> + uint64_t type;
> + enum rte_eth_input_set_field field;
> + } inset_type_table[] = {
I'm still confused about why this table is defined within the function rather than in the header file or somewhere else.
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> };
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
>
> - if (!add) {
> - if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
> - i40e_pf_disable_rss(pf);
> - memset(rss_info, 0,
> - sizeof(struct i40e_rte_flow_rss_conf));
> - return 0;
> + ret = 0;
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(types & (1ull << i)))
> + continue;
> +
> + conf.op = RTE_ETH_INPUT_SET_SELECT;
> + conf.flow_type = i;
> + conf.inset_size = 0;
> + for (j = 0; j < RTE_DIM(inset_type_table); j++) {
> + if ((types & inset_type_table[j].type) ==
> + inset_type_table[j].type) {
> + if (inset_type_table[j].field ==
> + RTE_ETH_INPUT_SET_UNKNOWN) {
> + return -EINVAL;
> + }
> + conf.field[conf.inset_size] =
> + inset_type_table[j].field;
> + conf.inset_size++;
> + }
> }
> +
> + if (conf.inset_size) {
> + ret = i40e_hash_filter_inset_select(hw, &conf);
> + if (ret)
> + return ret;
> + }
> + }
> +
> + return ret;
> +}
> +
> +/* set existing rule invalid if it is covered */ static void
> +i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_rte_flow_rss_filter *rss_item;
> + uint64_t input_bits;
Why not name it rss_inset?
> +
> + /* to check pctype same need without input set bits */
> + input_bits = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> +
> + TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
> + if (!rss_item->rss_filter_info.valid)
> + continue;
> +
> + /* config rss queue rule */
> + if (conf->conf.queue_num &&
> + rss_item->rss_filter_info.conf.queue_num)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss input set rule */
> + if (conf->conf.types &&
> + (rss_item->rss_filter_info.conf.types &
> + input_bits) ==
> + (conf->conf.types & input_bits))
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function symmetric rule */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
> + rss_item->rss_filter_info.conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function xor or toeplitz rule */
> + if (rss_item->rss_filter_info.conf.func !=
> + RTE_ETH_HASH_FUNCTION_DEFAULT &&
> + conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT
> &&
> + (rss_item->rss_filter_info.conf.types & input_bits) ==
> + (conf->conf.types & input_bits))
> + rss_item->rss_filter_info.valid = false;
> + }
> +}
> +
> +/* config rss hash enable and set hash input set */ static int
> +i40e_config_hash_pctype_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> +
> + if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
> + return -ENOTSUP;
> +
> + /* Confirm hash input set */
> + if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
> return -EINVAL;
> +
> + if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> + /* Random default keys */
> + static uint32_t rss_key_default[] = {0x6b793944,
> + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
> +
> + rss_conf->rss_key = (uint8_t *)rss_key_default;
> + rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1)
> *
> + sizeof(uint32_t);
> + PMD_DRV_LOG(INFO,
> + "No valid RSS key config for i40e, using default\n");
> }
>
> + rss_conf->rss_hf |= rss_info->conf.types;
> + i40e_hw_rss_hash_set(pf, rss_conf);
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss queue region */
> +static int
> +i40e_config_hash_queue_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i, lut;
> + uint16_t j, num;
> +
> /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> * It's necessary to calculate the actual PF queues that are configured.
> */
> @@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> return -ENOTSUP;
> }
>
> + lut = 0;
> /* Fill in redirection table */
> for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> if (j == num)
> @@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> }
>
> - if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
> - i40e_pf_disable_rss(pf);
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hash function */
> +static int
> +i40e_config_hash_function_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct rte_eth_hash_global_conf g_cfg;
> + uint64_t input_bits;
Same as above.
> +
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
> + i40e_set_symmetric_hash_enable_per_port(hw, 1);
> + } else {
> + input_bits = ~(ETH_RSS_L3_SRC_ONLY |
> ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> + g_cfg.hash_func = conf->conf.func;
> + g_cfg.sym_hash_enable_mask[0] = conf->conf.types &
> input_bits;
> + g_cfg.valid_bit_mask[0] = conf->conf.types & input_bits;
> + i40e_set_hash_filter_global_config(hw, &g_cfg);
> + }
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hena disable and set hash input set to default */ static
> +int i40e_config_hash_pctype_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = pf->rss_info.conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = pf->rss_info.conf.key_len,
> + };
> + uint32_t i;
> +
> + /* set hash enable register to disable */
> + rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
> + i40e_hw_rss_hash_set(pf, &rss_conf);
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash input set default */
> + struct rte_eth_input_set_conf input_conf = {
> + .op = RTE_ETH_INPUT_SET_SELECT,
> + .flow_type = i,
> + .inset_size = 1,
> + };
> + input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
> + i40e_hash_filter_inset_select(hw, &input_conf);
> + }
> +
> + rss_info->conf.types = rss_conf.rss_hf;
> +
> + return 0;
> +}
> +
> +/* config rss queue region to default */ static int
> +i40e_config_hash_queue_del(struct i40e_pf *pf) {
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + uint16_t queue[I40E_MAX_Q_PER_TC];
> + uint32_t num_rxq, i, lut;
> + uint16_t j, num;
> +
> + num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues,
> I40E_MAX_Q_PER_TC);
> +
> + for (j = 0; j < num_rxq; j++)
> + queue[j] = j;
> +
> + /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> + * It's necessary to calculate the actual PF queues that are configured.
> + */
> + if (pf->dev_data->dev_conf.rxmode.mq_mode &
> ETH_MQ_RX_VMDQ_FLAG)
> + num = i40e_pf_calc_configured_queues_num(pf);
> + else
> + num = pf->dev_data->nb_rx_queues;
> +
> + num = RTE_MIN(num, num_rxq);
> + PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are
> configured",
> + num);
> +
> + if (num == 0) {
> + PMD_DRV_LOG(ERR,
> + "No PF queues are configured to enable RSS for
> port %u",
> + pf->dev_data->port_id);
> + return -ENOTSUP;
> + }
> +
> + lut = 0;
> + /* Fill in redirection table */
> + for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> + if (j == num)
> + j = 0;
> + lut = (lut << 8) | (queue[j] & ((0x1 <<
> + hw->func_caps.rss_table_entry_width) - 1));
> + if ((i & 3) == 3)
> + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> + }
> +
> + rss_info->conf.queue_num = 0;
> + memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
> +
> + return 0;
> +}
> +
> +/* config rss hash function to default */ static int
> +i40e_config_hash_function_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i;
> + uint16_t j;
> +
> + /* set symmetric hash to default status */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
> + i40e_set_symmetric_hash_enable_per_port(hw, 0);
> +
> return 0;
> }
> - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
> - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> - /* Random default keys */
> - static uint32_t rss_key_default[] = {0x6b793944,
> - 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> - 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> - 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
>
> - rss_conf.rss_key = (uint8_t *)rss_key_default;
> - rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> - sizeof(uint32_t);
> - PMD_DRV_LOG(INFO,
> - "No valid RSS key config for i40e, using default\n");
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash global config disable */
> + for (j = I40E_FILTER_PCTYPE_INVALID + 1;
> + j < I40E_FILTER_PCTYPE_MAX; j++) {
> + if (pf->adapter->pctypes_tbl[i] &
> + (1ULL << j))
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(j), 0);
> + }
> }
>
> - i40e_hw_rss_hash_set(pf, &rss_conf);
> + return 0;
> +}
>
> - if (i40e_rss_conf_init(rss_info, &conf->conf))
> - return -EINVAL;
> +int
> +i40e_config_rss_filter(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf, bool add) {
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_flow_action_rss update_conf = rss_info->conf;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = conf->conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = conf->conf.key_len,
> + .rss_hf = conf->conf.types,
> + };
> + int ret = 0;
> +
> + if (add) {
> + if (conf->conf.queue_num) {
> + /* config rss queue region */
> + ret = i40e_config_hash_queue_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.queue_num = conf->conf.queue_num;
> + update_conf.queue = conf->conf.queue;
> + } else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT) {
> + /* config hash function */
> + ret = i40e_config_hash_function_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.func = conf->conf.func;
> + } else {
> + /* config hash enable and input set for each pctype
> */
> + ret = i40e_config_hash_pctype_add(pf, conf,
> &rss_conf);
> + if (ret)
> + return ret;
> +
> + update_conf.types = rss_conf.rss_hf;
> + update_conf.key = rss_conf.rss_key;
> + update_conf.key_len = rss_conf.rss_key_len;
> + }
> +
> + /* update rss info in pf */
> + if (i40e_rss_conf_init(rss_info, &update_conf))
> + return -EINVAL;
> + } else {
> + if (!conf->valid)
> + return 0;
> +
> + if (conf->conf.queue_num)
> + i40e_config_hash_queue_del(pf);
> + else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT)
> + i40e_config_hash_function_del(pf, conf);
> + else
> + i40e_config_hash_pctype_del(pf, conf);
> + }
>
> return 0;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev.h
> b/drivers/net/i40e/i40e_ethdev.h index aac89de91..1e4e64ea7 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx { #define
> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
> I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
>
> +#define I40E_RSS_TYPE_NONE 0ULL
> +#define I40E_RSS_TYPE_INVALID 1ULL
> +
> #define I40E_INSET_NONE 0x00000000000000000ULL
>
> /* bit0 ~ bit 7 */
> @@ -749,6 +752,11 @@ struct i40e_queue_regions {
> struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX +
> 1]; };
>
> +struct i40e_rss_pattern_info {
> + uint8_t action_flag;
> + uint64_t types;
> +};
> +
> /* Tunnel filter number HW supports */
> #define I40E_MAX_TUNNEL_FILTER_NUM 400
>
> @@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
> I40E_VFQF_HKEY_MAX_INDEX :
> I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t)]; /* Hash key. */
> uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use.
> */
> + bool valid; /* Check if it's valid */
> +};
> +
> +TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
> +
> +/* rss filter list structure */
> +struct i40e_rte_flow_rss_filter {
> + TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
> + struct i40e_rte_flow_rss_conf rss_filter_info;
> };
>
> struct i40e_vf_msg_cfg {
> @@ -1039,6 +1056,7 @@ struct i40e_pf {
> struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
> struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
> struct i40e_rte_flow_rss_conf rss_info; /* rss info */
> + struct i40e_rss_conf_list rss_info_list; /* rss rule list */
> struct i40e_queue_regions queue_region; /* queue region info */
> struct i40e_fc_conf fc_conf; /* Flow control conf */
> struct i40e_mirror_rule_list mirror_list; diff --git
> a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index
> d877ac250..4774fde6d 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev
> *dev,
> * function for RSS, or flowtype for queue region configuration.
> * For example:
> * pattern:
> - * Case 1: only ETH, indicate flowtype for queue region will be parsed.
> - * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
> - * Case 3: none, indicate RSS related will be parsed in action.
> - * Any pattern other the ETH or VLAN will be treated as invalid except END.
> + * Case 1: try to transform patterns to pctype. valid pctype will be
> + * used in parse action.
> + * Case 2: only ETH, indicate flowtype for queue region will be parsed.
> + * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
> * So, pattern choice is depened on the purpose of configuration of
> * that flow.
> * action:
> @@ -4438,15 +4438,66 @@ static int
> i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> const struct rte_flow_item *pattern,
> struct rte_flow_error *error,
> - uint8_t *action_flag,
> + struct i40e_rss_pattern_info *p_info,
> struct i40e_queue_regions *info) {
> const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
> const struct rte_flow_item *item = pattern;
> enum rte_flow_item_type item_type;
> -
> - if (item->type == RTE_FLOW_ITEM_TYPE_END)
> + struct rte_flow_item *items;
> + uint32_t item_num = 0; /* non-void item number of pattern*/
> + uint32_t i = 0;
> + static const struct {
> + enum rte_flow_item_type *item_array;
> + uint64_t type;
> + } i40e_rss_pctype_patterns[] = {
> + { pattern_fdir_ipv4,
> + ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_OTHER },
> + { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
> + { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
> + { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
> + { pattern_fdir_ipv6,
> + ETH_RSS_FRAG_IPV6 |
> ETH_RSS_NONFRAG_IPV6_OTHER },
> + { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
> + { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
> + { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
> + };
> +
> + p_info->types = I40E_RSS_TYPE_INVALID;
> +
> + if (item->type == RTE_FLOW_ITEM_TYPE_END) {
> + p_info->types = I40E_RSS_TYPE_NONE;
> return 0;
> + }
> +
> + /* convert flow to pctype */
> + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
> + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
> + item_num++;
> + i++;
> + }
> + item_num++;
> +
> + items = rte_zmalloc("i40e_pattern",
> + item_num * sizeof(struct rte_flow_item), 0);
> + if (!items) {
> + rte_flow_error_set(error, ENOMEM,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> + NULL, "No memory for PMD internal
> items.");
> + return -ENOMEM;
> + }
> +
> + i40e_pattern_skip_void_item(items, pattern);
> +
> + for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
> + if
> (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
> + items)) {
> + p_info->types = i40e_rss_pctype_patterns[i].type;
> + rte_free(items);
> + return 0;
> + }
> + }
> +
> + rte_free(items);
>
> for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> if (item->last) {
> @@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> item_type = item->type;
> switch (item_type) {
> case RTE_FLOW_ITEM_TYPE_ETH:
> - *action_flag = 1;
> + p_info->action_flag = 1;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> vlan_spec = item->spec;
> @@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> vlan_spec->tci) >> 13) & 0x7;
> info->region[0].user_priority_num =
> 1;
> info->queue_region_number = 1;
> - *action_flag = 0;
> + p_info->action_flag = 0;
> }
> }
> break;
> @@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused
> struct rte_eth_dev *dev,
> * max index should be 7, and so on. And also, queue index should be
> * continuous sequence and queue region index should be part of rss
> * queue index for this port.
> + * For hash params, the pctype in the action and in the pattern must be the same.
> + * Setting the queue index or enabling symmetric hash requires the types to be empty.
> */
> static int
> i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
> const struct rte_flow_action *actions,
> struct rte_flow_error *error,
> - uint8_t action_flag,
> + struct i40e_rss_pattern_info p_info,
> struct i40e_queue_regions *conf_info,
> union i40e_filter_t *filter)
> {
> @@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> struct i40e_rte_flow_rss_conf *rss_config =
> &filter->rss_conf;
> struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> - uint16_t i, j, n, tmp;
> + uint16_t i, j, n, tmp, nb_types;
> uint32_t index = 0;
> uint64_t hf_bit = 1;
>
> @@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> return -rte_errno;
> }
>
> - if (action_flag) {
> + if (p_info.action_flag) {
> for (n = 0; n < 64; n++) {
> if (rss->types & (hf_bit << n)) {
> conf_info->region[0].hw_flowtype[0] = n;
> @@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> if (rss_config->queue_region_conf)
> return 0;
>
> - if (!rss || !rss->queue_num) {
> + if (!rss) {
> rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "no valid queues");
> + "no valid rules");
> return -rte_errno;
> }
>
> @@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> }
> }
>
> - if (rss_info->conf.queue_num) {
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ACTION,
> - act,
> - "rss only allow one valid rule");
> - return -rte_errno;
> + if (rss->queue_num && (p_info.types || rss->types))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype must be empty while configuring queue
> region");
> +
> + /* validate pattern and pctype */
> + if (!(rss->types & p_info.types) &&
> + (rss->types || p_info.types) && !rss->queue_num)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "invaild pctype");
> +
> + nb_types = 0;
> + for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
> + if (rss->types & (hf_bit << n))
> + nb_types++;
> + if (nb_types > 1)
> + return rte_flow_error_set
> + (error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "multi pctype is not supported");
> }
>
> + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ
> &&
> + (p_info.types || rss->types || rss->queue_num))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype and queues must be empty while"
> + " setting SYMMETRIC hash function");
> +
> /* Parse RSS related parameters from configuration */
> - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
> + if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "non-default RSS hash functions are not supported");
> + "RSS hash functions are not supported");
> if (rss->level)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act, @@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev
> *dev, {
> int ret;
> struct i40e_queue_regions info;
> - uint8_t action_flag = 0;
> + struct i40e_rss_pattern_info p_info;
>
> memset(&info, 0, sizeof(struct i40e_queue_regions));
> + memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
>
> ret = i40e_flow_parse_rss_pattern(dev, pattern,
> - error, &action_flag, &info);
> + error, &p_info, &info);
> if (ret)
> return ret;
>
> ret = i40e_flow_parse_rss_action(dev, actions, error,
> - action_flag, &info, filter);
> + p_info, &info, filter);
> if (ret)
> return ret;
>
> @@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rte_flow_rss_filter *rss_filter;
> int ret;
>
> if (conf->queue_region_conf) {
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
> - conf->queue_region_conf = 0;
> } else {
> ret = i40e_config_rss_filter(pf, conf, 1);
> }
> - return ret;
> +
> + if (ret)
> + return ret;
> +
> + rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
> + sizeof(*rss_filter), 0);
> + if (rss_filter == NULL) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + rss_filter->rss_filter_info = *conf;
> + /* the rule newly created is always valid
> + * the existing rule covered by the new rule will be set invalid
> + */
> + rss_filter->rss_filter_info.valid = true;
> +
> + TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
> +
> + return 0;
> }
>
> static int
> @@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rte_flow_rss_filter *rss_filter;
>
> - i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + if (conf->queue_region_conf)
> + i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + else
> + i40e_config_rss_filter(pf, conf, 0);
>
> - i40e_config_rss_filter(pf, conf, 0);
> + TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
> + if (!memcmp(&rss_filter->rss_filter_info, conf,
> + sizeof(struct rte_flow_action_rss))) {
> + TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
> + rte_free(rss_filter);
> + }
> + }
> return 0;
> }
>
> @@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
> &cons_filter.rss_conf);
> if (ret)
> goto free_flow;
> - flow->rule = &pf->rss_info;
> + flow->rule = TAILQ_LAST(&pf->rss_info_list,
> + i40e_rss_conf_list);
> break;
> default:
> goto free_flow;
> @@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
> break;
> case RTE_ETH_FILTER_HASH:
> ret = i40e_config_rss_filter_del(dev,
> - (struct i40e_rte_flow_rss_conf *)flow->rule);
> + &((struct i40e_rte_flow_rss_filter *)flow->rule)-
> >rss_filter_info);
> break;
> default:
> PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
> @@ -5248,13 +5352,27 @@ static int i40e_flow_flush_rss_filter(struct
> rte_eth_dev *dev) {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct rte_flow *flow;
> + void *temp;
> int32_t ret = -EINVAL;
>
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
>
> - if (rss_info->conf.queue_num)
> - ret = i40e_config_rss_filter(pf, rss_info, FALSE);
> + /* Delete rss flows in flow list. */
> + TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
> + if (flow->filter_type != RTE_ETH_FILTER_HASH)
> + continue;
> +
> + if (flow->rule) {
> + ret = i40e_config_rss_filter_del(dev,
> + &((struct i40e_rte_flow_rss_filter *)flow-
> >rule)->rss_filter_info);
> + if (ret)
> + return ret;
> + }
> + TAILQ_REMOVE(&pf->flow_list, flow, node);
> + rte_free(flow);
> + }
> +
> return ret;
> }
> --
> 2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCH v5] net/i40e: implement hash function in rte flow API
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (7 preceding siblings ...)
2020-03-23 8:25 ` [dpdk-dev] [PATCH v4] " Chenxu Di
@ 2020-03-24 8:17 ` Chenxu Di
2020-03-24 12:57 ` Iremonger, Bernard
2020-03-27 12:49 ` Xing, Beilei
2020-03-30 7:40 ` [dpdk-dev] [PATCH v6] " Chenxu Di
` (3 subsequent siblings)
12 siblings, 2 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-24 8:17 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, beilei.xing, wei.zhao1, Chenxu Di
Implement setting the hash global configuration, enabling symmetric hash
and setting the hash input set in the rte_flow API.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
v5:
-Modified the doc i40e.rst and various name.
v4:
-added check for l3 pctype with l4 input set.
v3:
-modified the doc i40e.rst
v2:
-canceled remove legacy filter functions.
---
doc/guides/nics/i40e.rst | 14 +
doc/guides/rel_notes/release_20_05.rst | 6 +
drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
drivers/net/i40e/i40e_ethdev.h | 18 +
drivers/net/i40e/i40e_flow.c | 186 ++++++++--
5 files changed, 623 insertions(+), 72 deletions(-)
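A minimal sketch of how an application could exercise the symmetric hash path
added by this patch through the public rte_flow API (illustrative only, not
part of the patch; as enforced by the parser below, the pattern, types and
queues are left empty in this case):

    #include <stdint.h>
    #include <rte_errno.h>
    #include <rte_flow.h>

    /* Request symmetric Toeplitz hashing on a port; error handling trimmed. */
    static int
    enable_symmetric_hash(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        /* Empty pattern: no PCTYPE may be given with symmetric hash. */
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_rss rss = {
            .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
            .types = 0,        /* must be empty */
            .queue_num = 0,    /* must be empty */
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error flow_err;

        if (rte_flow_create(port_id, &attr, pattern, actions, &flow_err) == NULL)
            return -rte_errno;
        return 0;
    }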
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..03b117a99 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,20 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+Setting the hash input set and enabling hash are supported in the generic flow API.
+Because queue region configuration in i40e applies to all PCTYPEs,
+the pctype must be left empty while configuring a queue region.
+The pctype in the pattern and in the actions must match.
+For example, to configure a queue region with queues 0, 1, 2, 3, and to
+enable the ipv4-tcp PCTYPE hash with the input set l3-src-only:
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues 0 1 2 3 end / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queues end / end
+
Limitations or Known issues
---------------------------
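As a further illustration of the section above (not part of this patch): a
symmetric hash rule keeps the pattern, types and queues empty, so, assuming
testpmd's "func symmetric_toeplitz" token is available in this release, it
would look roughly like:

    testpmd> flow create 0 ingress pattern end actions rss \
             func symmetric_toeplitz types end queues end / end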
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf501..12e85118f 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,12 @@ New Features
* Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
+* **Updated Intel i40e driver.**
+
+ Updated i40e PMD with new features and improvements, including:
+
+ * Added support for RSS using L3/L4 source/destination only.
+
Removed Items
-------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9539b0470..2727eef80 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize rss rule list */
+ TAILQ_INIT(&pf->rss_info_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
+ struct i40e_rte_flow_rss_filter *rss_item;
+
+ TAILQ_FOREACH(rss_item, rss_list, next) {
+ i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
+ }
}
static void
@@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct rte_flow_action_rss *comp,
sizeof(*with->queue) * with->queue_num));
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* config rss hash input set */
+static int
+i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_type_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
};
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(types & (1ull << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_type_table); j++) {
+ if ((types & inset_type_table[j].type) ==
+ inset_type_table[j].type) {
+ if (inset_type_table[j].field ==
+ RTE_ETH_INPUT_SET_UNKNOWN) {
+ return -EINVAL;
+ }
+ conf.field[conf.inset_size] =
+ inset_type_table[j].field;
+ conf.inset_size++;
+ }
}
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+/* set existing rule invalid if it is covered */
+static void
+i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rte_flow_rss_filter *rss_item;
+ uint64_t rss_inset;
+
+ /* to check pctype same need without input set bits */
+ rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* config rss queue rule */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss input set rule */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ rss_inset) ==
+ (conf->conf.types & rss_inset))
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function symmetric rule */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function xor or toeplitz rule */
+ if (rss_item->rss_filter_info.conf.func !=
+ RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ (rss_item->rss_filter_info.conf.types & rss_inset) ==
+ (conf->conf.types & rss_inset))
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* config rss hash enable and set hash input set */
+static int
+i40e_config_hash_pctype_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Confirm hash input set */
+ if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss queue region */
+static int
+i40e_config_hash_queue_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hash function */
+static int
+i40e_config_hash_function_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_hash_global_conf g_cfg;
+ uint64_t rss_inset;
+
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ } else {
+ rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ g_cfg.hash_func = conf->conf.func;
+ g_cfg.sym_hash_enable_mask[0] = conf->conf.types & rss_inset;
+ g_cfg.valid_bit_mask[0] = conf->conf.types & rss_inset;
+ i40e_set_hash_filter_global_config(hw, &g_cfg);
+ }
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hena disable and set hash input set to default */
+static int
+i40e_config_hash_pctype_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* set hash enable register to disable */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
+ i40e_hw_rss_hash_set(pf, &rss_conf);
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash input set default */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ return 0;
+}
+
+/* config rss queue region to default */
+static int
+i40e_config_hash_queue_del(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+/* config rss hash function to default */
+static int
+i40e_config_hash_function_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i;
+ uint16_t j;
+
+ /* set symmetric hash to default status */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 0);
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash global config disable */
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j), 0);
+ }
}
- i40e_hw_rss_hash_set(pf, &rss_conf);
+ return 0;
+}
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* config rss queue region */
+ ret = i40e_config_hash_queue_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+ /* config hash function */
+ ret = i40e_config_hash_function_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* config hash enable and input set for each pctype */
+ ret = i40e_config_hash_pctype_add(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* update rss info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_config_hash_queue_del(pf);
+ else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ i40e_config_hash_function_del(pf, conf);
+ else
+ i40e_config_hash_pctype_del(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..1e4e64ea7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
+
+/* rss filter list structure */
+struct i40e_rte_flow_rss_filter {
+ TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1039,6 +1056,7 @@ struct i40e_pf {
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rss_conf_list rss_info_list; /* rss rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..4774fde6d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform patterns to pctype. valid pctype will be
+ * used in parse action.
+ * Case 2: only ETH, indicate flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
@@ -4438,15 +4438,66 @@ static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash params, the pctype in the action and in the pattern must be the same.
+ * Setting the queue index or enabling symmetric hash requires the types to be empty.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "no valid rules");
return -rte_errno;
}
@@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues must be empty while"
+ " setting SYMMETRIC hash function");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
{
int ret;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ struct i40e_rss_pattern_info p_info;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* the rule newly created is always valid
+ * the existing rule covered by the new rule will be set invalid
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rte_flow_rss_filter *rss_filter;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_info_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5352,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rte_flow_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
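The diff above replaces the single pf->rss_info snapshot with a TAILQ of per-rule nodes (pf->rss_info_list): entries are appended on create and matched and freed on destroy, so several RSS rules can coexist. A minimal, self-contained sketch of that bookkeeping pattern, using illustrative stand-in types rather than the driver's real definitions, could look like this:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/queue.h>

    struct rss_conf {                   /* stand-in for i40e_rte_flow_rss_conf */
            uint64_t types;
            uint16_t queue_num;
            bool valid;
    };

    struct rss_filter {                 /* stand-in for i40e_rte_flow_rss_filter */
            TAILQ_ENTRY(rss_filter) next;
            struct rss_conf info;
    };

    TAILQ_HEAD(rss_conf_list, rss_filter);

    /* Create: a new rule is always inserted as valid, as in the patch. */
    static int
    rss_filter_set(struct rss_conf_list *list, const struct rss_conf *conf)
    {
            struct rss_filter *f = calloc(1, sizeof(*f));

            if (f == NULL)
                    return -1;
            f->info = *conf;
            f->info.valid = true;
            TAILQ_INSERT_TAIL(list, f, next);
            return 0;
    }

    /* Destroy: remove every node whose configuration matches.  The driver
     * compares with memcmp() over the embedded rte_flow_action_rss; here
     * the relevant fields are compared explicitly, and the next pointer is
     * cached so removal during iteration stays safe (the patch relies on
     * TAILQ_FOREACH_SAFE for the same purpose when flushing).
     */
    static void
    rss_filter_del(struct rss_conf_list *list, const struct rss_conf *conf)
    {
            struct rss_filter *f = TAILQ_FIRST(list), *tmp;

            while (f != NULL) {
                    tmp = TAILQ_NEXT(f, next);
                    if (f->info.types == conf->types &&
                        f->info.queue_num == conf->queue_num) {
                            TAILQ_REMOVE(list, f, next);
                            free(f);
                    }
                    f = tmp;
            }
    }

    int
    main(void)
    {
            struct rss_conf_list list = TAILQ_HEAD_INITIALIZER(list);
            struct rss_conf c = { .types = 1ULL << 1, .queue_num = 0, .valid = false };

            rss_filter_set(&list, &c);
            rss_filter_del(&list, &c);
            printf("list empty after delete: %d\n", TAILQ_EMPTY(&list));
            return 0;
    }

Keeping one node per rule is what lets i40e_flow_create() point flow->rule at the last inserted node instead of at the shared pf->rss_info, and lets flush walk the flow list deleting hash rules one by one.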
* Re: [dpdk-dev] [PATCH v5] net/i40e: implement hash function in rte flow API
2020-03-24 8:17 ` [dpdk-dev] [PATCH v5] " Chenxu Di
@ 2020-03-24 12:57 ` Iremonger, Bernard
[not found] ` <87688dbf6ac946d5974a61578be1ed89@intel.com>
2020-03-27 12:49 ` Xing, Beilei
1 sibling, 1 reply; 26+ messages in thread
From: Iremonger, Bernard @ 2020-03-24 12:57 UTC (permalink / raw)
To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Xing, Beilei, Zhao1, Wei, Di, ChenxuX
Hi Chenxu,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenxu Di
> Sent: Tuesday, March 24, 2020 8:18 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [dpdk-dev] [PATCH v5] net/i40e: implement hash function in rte
> flow API
>
> implement set hash global configurations, set symmetric hash enable and
> set hash input set in rte flow API.
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> v5:
> -Modified the doc i40e.rst and various names.
> v4:
> -added check for l3 pctype with l4 input set.
> v3:
> -modified the doc i40e.rst
> v2:
> -dropped the removal of the legacy filter functions.
> ---
> doc/guides/nics/i40e.rst | 14 +
> doc/guides/rel_notes/release_20_05.rst | 6 +
> drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
> drivers/net/i40e/i40e_ethdev.h | 18 +
> drivers/net/i40e/i40e_flow.c | 186 ++++++++--
> 5 files changed, 623 insertions(+), 72 deletions(-)
>
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> d6e578eda..03b117a99 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -569,6 +569,20 @@ details please refer to
> :doc:`../testpmd_app_ug/index`.
> testpmd> set port (port_id) queue-region flush (on|off)
> testpmd> show port (port_id) queue-region
>
> +Generic flow API
> +~~~~~~~~~~~~~~~~~~~
> +Enable set hash input set and hash enable in generic flow API.
> +For the reason queue region configuration in i40e is for all PCTYPE,
> +pctype must be empty while configuring queue region.
> +The pctype in pattern and actions must be matched.
> +For exampale, to set queue region configuration queue 0, 1, 2, 3 and
> +set PCTYPE ipv4-tcp hash enable and set input set l3-src-only:
> +
> + testpmd> flow create 0 ingress pattern end actions rss types end \
> + queues 0 1 2 3 end / end
> + testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
> + actions rss types ipv4-tcp l3-src-only end queues end / end
> +
> Limitations or Known issues
> ---------------------------
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 000bbf501..12e85118f 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -62,6 +62,12 @@ New Features
>
> * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
>
> +* **Updated Intel i40e driver.**
> +
> + Updated i40e PMD with new features and improvements, including:
> +
> + * Added support for RSS using L3/L4 source/destination only.
> +
>
> Removed Items
> -------------
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 9539b0470..2727eef80 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void
> *init_params __rte_unused)
> /* initialize mirror rule list */
> TAILQ_INIT(&pf->mirror_list);
>
> + /* initialize rss rule list */
> + TAILQ_INIT(&pf->rss_info_list);
> +
> /* initialize Traffic Manager configuration */
> i40e_tm_conf_init(dev);
>
> @@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
> static inline void i40e_rss_filter_restore(struct i40e_pf *pf) {
> - struct i40e_rte_flow_rss_conf *conf =
> - &pf->rss_info;
> - if (conf->conf.queue_num)
> - i40e_config_rss_filter(pf, conf, TRUE);
> + struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
> + struct i40e_rte_flow_rss_filter *rss_item;
> +
> + TAILQ_FOREACH(rss_item, rss_list, next) {
> + i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
> + }
> }
>
> static void
> @@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct
> rte_flow_action_rss *comp,
> sizeof(*with->queue) * with->queue_num)); }
>
> -int
> -i40e_config_rss_filter(struct i40e_pf *pf,
> - struct i40e_rte_flow_rss_conf *conf, bool add)
> +/* config rss hash input set */
> +static int
> +i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
> {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> - uint32_t i, lut = 0;
> - uint16_t j, num;
> - struct rte_eth_rss_conf rss_conf = {
> - .rss_key = conf->conf.key_len ?
> - (void *)(uintptr_t)conf->conf.key : NULL,
> - .rss_key_len = conf->conf.key_len,
> - .rss_hf = conf->conf.types,
> + struct rte_eth_input_set_conf conf;
> + int i, ret;
> + uint32_t j;
> + static const struct {
> + uint64_t type;
> + enum rte_eth_input_set_field field;
> + } inset_type_table[] = {
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> };
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
>
> - if (!add) {
> - if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
> - i40e_pf_disable_rss(pf);
> - memset(rss_info, 0,
> - sizeof(struct i40e_rte_flow_rss_conf));
> - return 0;
> + ret = 0;
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> i++) {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(types & (1ull << i)))
> + continue;
> +
> + conf.op = RTE_ETH_INPUT_SET_SELECT;
> + conf.flow_type = i;
> + conf.inset_size = 0;
> + for (j = 0; j < RTE_DIM(inset_type_table); j++) {
> + if ((types & inset_type_table[j].type) ==
> + inset_type_table[j].type) {
> + if (inset_type_table[j].field ==
> + RTE_ETH_INPUT_SET_UNKNOWN) {
> + return -EINVAL;
> + }
> + conf.field[conf.inset_size] =
> + inset_type_table[j].field;
> + conf.inset_size++;
> + }
> }
> +
> + if (conf.inset_size) {
> + ret = i40e_hash_filter_inset_select(hw, &conf);
> + if (ret)
> + return ret;
> + }
> + }
> +
> + return ret;
> +}
> +
> +/* set existing rule invalid if it is covered */ static void
> +i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_rte_flow_rss_filter *rss_item;
> + uint64_t rss_inset;
> +
> + /* to check pctype same need without input set bits */
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> +
> + TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
> + if (!rss_item->rss_filter_info.valid)
> + continue;
> +
> + /* config rss queue rule */
> + if (conf->conf.queue_num &&
> + rss_item->rss_filter_info.conf.queue_num)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss input set rule */
> + if (conf->conf.types &&
> + (rss_item->rss_filter_info.conf.types &
> + rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function symmetric rule */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
> + rss_item->rss_filter_info.conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function xor or toeplitz rule */
> + if (rss_item->rss_filter_info.conf.func !=
> + RTE_ETH_HASH_FUNCTION_DEFAULT &&
> + conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT
> &&
> + (rss_item->rss_filter_info.conf.types & rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> + }
> +}
> +
> +/* config rss hash enable and set hash input set */ static int
> +i40e_config_hash_pctype_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> +
> + if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
> + return -ENOTSUP;
> +
> + /* Confirm hash input set */
> + if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
> return -EINVAL;
> +
> + if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> + /* Random default keys */
> + static uint32_t rss_key_default[] = {0x6b793944,
> + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
> +
> + rss_conf->rss_key = (uint8_t *)rss_key_default;
> + rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1)
> *
> + sizeof(uint32_t);
> + PMD_DRV_LOG(INFO,
> + "No valid RSS key config for i40e, using default\n");
> }
>
> + rss_conf->rss_hf |= rss_info->conf.types;
> + i40e_hw_rss_hash_set(pf, rss_conf);
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss queue region */
> +static int
> +i40e_config_hash_queue_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i, lut;
> + uint16_t j, num;
> +
> /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> * It's necessary to calculate the actual PF queues that are configured.
> */
> @@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> return -ENOTSUP;
> }
>
> + lut = 0;
> /* Fill in redirection table */
> for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> if (j == num)
> @@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> }
>
> - if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
> - i40e_pf_disable_rss(pf);
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hash function */
> +static int
> +i40e_config_hash_function_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct rte_eth_hash_global_conf g_cfg;
> + uint64_t rss_inset;
> +
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
> + i40e_set_symmetric_hash_enable_per_port(hw, 1);
> + } else {
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY |
> ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> + g_cfg.hash_func = conf->conf.func;
> + g_cfg.sym_hash_enable_mask[0] = conf->conf.types &
> rss_inset;
> + g_cfg.valid_bit_mask[0] = conf->conf.types & rss_inset;
> + i40e_set_hash_filter_global_config(hw, &g_cfg);
> + }
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hena disable and set hash input set to default */ static
> +int i40e_config_hash_pctype_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = pf->rss_info.conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = pf->rss_info.conf.key_len,
> + };
> + uint32_t i;
> +
> + /* set hash enable register to disable */
> + rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
> + i40e_hw_rss_hash_set(pf, &rss_conf);
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> i++) {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash input set default */
> + struct rte_eth_input_set_conf input_conf = {
> + .op = RTE_ETH_INPUT_SET_SELECT,
> + .flow_type = i,
> + .inset_size = 1,
> + };
> + input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
> + i40e_hash_filter_inset_select(hw, &input_conf);
> + }
> +
> + rss_info->conf.types = rss_conf.rss_hf;
> +
> + return 0;
> +}
> +
> +/* config rss queue region to default */ static int
> +i40e_config_hash_queue_del(struct i40e_pf *pf) {
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + uint16_t queue[I40E_MAX_Q_PER_TC];
> + uint32_t num_rxq, i, lut;
> + uint16_t j, num;
> +
> + num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues,
> I40E_MAX_Q_PER_TC);
> +
> + for (j = 0; j < num_rxq; j++)
> + queue[j] = j;
> +
> + /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> + * It's necessary to calculate the actual PF queues that are configured.
> + */
> + if (pf->dev_data->dev_conf.rxmode.mq_mode &
> ETH_MQ_RX_VMDQ_FLAG)
> + num = i40e_pf_calc_configured_queues_num(pf);
> + else
> + num = pf->dev_data->nb_rx_queues;
> +
> + num = RTE_MIN(num, num_rxq);
> + PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are
> configured",
> + num);
> +
> + if (num == 0) {
> + PMD_DRV_LOG(ERR,
> + "No PF queues are configured to enable RSS for port
> %u",
> + pf->dev_data->port_id);
> + return -ENOTSUP;
> + }
> +
> + lut = 0;
> + /* Fill in redirection table */
> + for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> + if (j == num)
> + j = 0;
> + lut = (lut << 8) | (queue[j] & ((0x1 <<
> + hw->func_caps.rss_table_entry_width) - 1));
> + if ((i & 3) == 3)
> + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> + }
> +
> + rss_info->conf.queue_num = 0;
> + memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
> +
> + return 0;
> +}
> +
> +/* config rss hash function to default */ static int
> +i40e_config_hash_function_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i;
> + uint16_t j;
> +
> + /* set symmetric hash to default status */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
> + i40e_set_symmetric_hash_enable_per_port(hw, 0);
> +
> return 0;
> }
> - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
> - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> - /* Random default keys */
> - static uint32_t rss_key_default[] = {0x6b793944,
> - 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> - 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> - 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
>
> - rss_conf.rss_key = (uint8_t *)rss_key_default;
> - rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> - sizeof(uint32_t);
> - PMD_DRV_LOG(INFO,
> - "No valid RSS key config for i40e, using default\n");
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> i++) {
> + if (!(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash global config disable */
> + for (j = I40E_FILTER_PCTYPE_INVALID + 1;
> + j < I40E_FILTER_PCTYPE_MAX; j++) {
> + if (pf->adapter->pctypes_tbl[i] &
> + (1ULL << j))
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(j), 0);
> + }
> }
>
> - i40e_hw_rss_hash_set(pf, &rss_conf);
> + return 0;
> +}
>
> - if (i40e_rss_conf_init(rss_info, &conf->conf))
> - return -EINVAL;
> +int
> +i40e_config_rss_filter(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf, bool add) {
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_flow_action_rss update_conf = rss_info->conf;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = conf->conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = conf->conf.key_len,
> + .rss_hf = conf->conf.types,
> + };
> + int ret = 0;
> +
> + if (add) {
> + if (conf->conf.queue_num) {
> + /* config rss queue region */
> + ret = i40e_config_hash_queue_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.queue_num = conf->conf.queue_num;
> + update_conf.queue = conf->conf.queue;
> + } else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT) {
> + /* config hash function */
> + ret = i40e_config_hash_function_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.func = conf->conf.func;
> + } else {
> + /* config hash enable and input set for each pctype
> */
> + ret = i40e_config_hash_pctype_add(pf, conf,
> &rss_conf);
> + if (ret)
> + return ret;
> +
> + update_conf.types = rss_conf.rss_hf;
> + update_conf.key = rss_conf.rss_key;
> + update_conf.key_len = rss_conf.rss_key_len;
> + }
> +
> + /* update rss info in pf */
> + if (i40e_rss_conf_init(rss_info, &update_conf))
> + return -EINVAL;
> + } else {
> + if (!conf->valid)
> + return 0;
> +
> + if (conf->conf.queue_num)
> + i40e_config_hash_queue_del(pf);
> + else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT)
> + i40e_config_hash_function_del(pf, conf);
> + else
> + i40e_config_hash_pctype_del(pf, conf);
> + }
>
> return 0;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev.h
> b/drivers/net/i40e/i40e_ethdev.h index aac89de91..1e4e64ea7 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx { #define
> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
> I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
>
> +#define I40E_RSS_TYPE_NONE 0ULL
> +#define I40E_RSS_TYPE_INVALID 1ULL
> +
> #define I40E_INSET_NONE 0x00000000000000000ULL
>
> /* bit0 ~ bit 7 */
> @@ -749,6 +752,11 @@ struct i40e_queue_regions {
> struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX +
> 1]; };
>
> +struct i40e_rss_pattern_info {
> + uint8_t action_flag;
> + uint64_t types;
> +};
> +
> /* Tunnel filter number HW supports */
> #define I40E_MAX_TUNNEL_FILTER_NUM 400
>
> @@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
> I40E_VFQF_HKEY_MAX_INDEX :
> I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t)]; /* Hash key. */
> uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use.
> */
> + bool valid; /* Check if it's valid */
> +};
> +
> +TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
> +
> +/* rss filter list structure */
> +struct i40e_rte_flow_rss_filter {
> + TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
> + struct i40e_rte_flow_rss_conf rss_filter_info;
> };
>
> struct i40e_vf_msg_cfg {
> @@ -1039,6 +1056,7 @@ struct i40e_pf {
> struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
> struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
> struct i40e_rte_flow_rss_conf rss_info; /* rss info */
> + struct i40e_rss_conf_list rss_info_list; /* rss rule list */
> struct i40e_queue_regions queue_region; /* queue region info */
> struct i40e_fc_conf fc_conf; /* Flow control conf */
> struct i40e_mirror_rule_list mirror_list; diff --git
> a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index
> d877ac250..4774fde6d 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev
> *dev,
> * function for RSS, or flowtype for queue region configuration.
> * For example:
> * pattern:
> - * Case 1: only ETH, indicate flowtype for queue region will be parsed.
> - * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
> - * Case 3: none, indicate RSS related will be parsed in action.
> - * Any pattern other the ETH or VLAN will be treated as invalid except END.
> + * Case 1: try to transform patterns to pctype. valid pctype will be
> + * used in parse action.
> + * Case 2: only ETH, indicate flowtype for queue region will be parsed.
> + * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
> * So, pattern choice depends on the purpose of configuration of
> * that flow.
> * action:
> @@ -4438,15 +4438,66 @@ static int
> i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> const struct rte_flow_item *pattern,
> struct rte_flow_error *error,
> - uint8_t *action_flag,
> + struct i40e_rss_pattern_info *p_info,
> struct i40e_queue_regions *info) {
> const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
> const struct rte_flow_item *item = pattern;
> enum rte_flow_item_type item_type;
> -
> - if (item->type == RTE_FLOW_ITEM_TYPE_END)
> + struct rte_flow_item *items;
> + uint32_t item_num = 0; /* non-void item number of pattern*/
> + uint32_t i = 0;
> + static const struct {
> + enum rte_flow_item_type *item_array;
> + uint64_t type;
> + } i40e_rss_pctype_patterns[] = {
> + { pattern_fdir_ipv4,
> + ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_OTHER },
> + { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
> + { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
> + { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
> + { pattern_fdir_ipv6,
> + ETH_RSS_FRAG_IPV6 |
> ETH_RSS_NONFRAG_IPV6_OTHER },
> + { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
> + { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
> + { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
> + };
> +
> + p_info->types = I40E_RSS_TYPE_INVALID;
> +
> + if (item->type == RTE_FLOW_ITEM_TYPE_END) {
> + p_info->types = I40E_RSS_TYPE_NONE;
> return 0;
> + }
> +
> + /* convert flow to pctype */
> + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
> + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
> + item_num++;
> + i++;
> + }
> + item_num++;
> +
> + items = rte_zmalloc("i40e_pattern",
> + item_num * sizeof(struct rte_flow_item), 0);
> + if (!items) {
> + rte_flow_error_set(error, ENOMEM,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> + NULL, "No memory for PMD internal
> items.");
> + return -ENOMEM;
> + }
> +
> + i40e_pattern_skip_void_item(items, pattern);
> +
> + for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
> + if
> (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
> + items)) {
> + p_info->types = i40e_rss_pctype_patterns[i].type;
> + rte_free(items);
> + return 0;
> + }
> + }
> +
> + rte_free(items);
>
> for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> if (item->last) {
> @@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> item_type = item->type;
> switch (item_type) {
> case RTE_FLOW_ITEM_TYPE_ETH:
> - *action_flag = 1;
> + p_info->action_flag = 1;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> vlan_spec = item->spec;
> @@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> vlan_spec->tci) >> 13) & 0x7;
> info->region[0].user_priority_num =
> 1;
> info->queue_region_number = 1;
> - *action_flag = 0;
> + p_info->action_flag = 0;
> }
> }
> break;
> @@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused
> struct rte_eth_dev *dev,
> * max index should be 7, and so on. And also, queue index should be
> * continuous sequence and queue region index should be part of rss
> * queue index for this port.
> + * For hash params, the pctype in the action and the pattern must be the same.
> + * Setting a queue index or enabling symmetric hash requires empty types.
> */
> static int
> i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
> const struct rte_flow_action *actions,
> struct rte_flow_error *error,
> - uint8_t action_flag,
> + struct i40e_rss_pattern_info p_info,
> struct i40e_queue_regions *conf_info,
> union i40e_filter_t *filter)
> {
> @@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> struct i40e_rte_flow_rss_conf *rss_config =
> &filter->rss_conf;
> struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> - uint16_t i, j, n, tmp;
> + uint16_t i, j, n, tmp, nb_types;
> uint32_t index = 0;
> uint64_t hf_bit = 1;
>
> @@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> return -rte_errno;
> }
>
> - if (action_flag) {
> + if (p_info.action_flag) {
> for (n = 0; n < 64; n++) {
> if (rss->types & (hf_bit << n)) {
> conf_info->region[0].hw_flowtype[0] = n;
> @@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> if (rss_config->queue_region_conf)
> return 0;
>
> - if (!rss || !rss->queue_num) {
> + if (!rss) {
> rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "no valid queues");
> + "no valid rules");
> return -rte_errno;
> }
>
> @@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> }
> }
>
> - if (rss_info->conf.queue_num) {
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ACTION,
> - act,
> - "rss only allow one valid rule");
> - return -rte_errno;
> + if (rss->queue_num && (p_info.types || rss->types))
Should the line above be
if (conf_info->queue_region_number && (p_info.types || rss->types))
to allow RSS configuration of types and queues in a single rule, for example:
flow create 0 ingress pattern eth / end actions rss types udp end queues 2 3 end / end
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype must be empty while configuring queue
> region");
> +
> + /* validate pattern and pctype */
> + if (!(rss->types & p_info.types) &&
> + (rss->types || p_info.types) && !rss->queue_num)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "invaild pctype");
> +
> + nb_types = 0;
> + for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
> + if (rss->types & (hf_bit << n))
> + nb_types++;
> + if (nb_types > 1)
> + return rte_flow_error_set
> + (error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "multi pctype is not supported");
> }
>
> + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ
> &&
> + (p_info.types || rss->types || rss->queue_num))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype and queues must be empty while"
> + " setting SYMMETRIC hash function");
> +
> /* Parse RSS related parameters from configuration */
> - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
> + if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "non-default RSS hash functions are not
> supported");
> + "RSS hash functions are not supported");
> if (rss->level)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act, @@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev
> *dev, {
> int ret;
> struct i40e_queue_regions info;
> - uint8_t action_flag = 0;
> + struct i40e_rss_pattern_info p_info;
>
> memset(&info, 0, sizeof(struct i40e_queue_regions));
> + memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
>
> ret = i40e_flow_parse_rss_pattern(dev, pattern,
> - error, &action_flag, &info);
> + error, &p_info, &info);
> if (ret)
> return ret;
>
> ret = i40e_flow_parse_rss_action(dev, actions, error,
> - action_flag, &info, filter);
> + p_info, &info, filter);
> if (ret)
> return ret;
>
> @@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rte_flow_rss_filter *rss_filter;
> int ret;
>
> if (conf->queue_region_conf) {
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
> - conf->queue_region_conf = 0;
> } else {
> ret = i40e_config_rss_filter(pf, conf, 1);
> }
> - return ret;
> +
> + if (ret)
> + return ret;
> +
> + rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
> + sizeof(*rss_filter), 0);
> + if (rss_filter == NULL) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + rss_filter->rss_filter_info = *conf;
> + /* the newly created rule is always valid
> + * any existing rule covered by the new rule will be set invalid
> + */
> + rss_filter->rss_filter_info.valid = true;
> +
> + TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
> +
> + return 0;
> }
>
> static int
> @@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rte_flow_rss_filter *rss_filter;
>
> - i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + if (conf->queue_region_conf)
> + i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + else
> + i40e_config_rss_filter(pf, conf, 0);
>
> - i40e_config_rss_filter(pf, conf, 0);
> + TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
> + if (!memcmp(&rss_filter->rss_filter_info, conf,
> + sizeof(struct rte_flow_action_rss))) {
> + TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
> + rte_free(rss_filter);
> + }
> + }
> return 0;
> }
>
> @@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
> &cons_filter.rss_conf);
> if (ret)
> goto free_flow;
> - flow->rule = &pf->rss_info;
> + flow->rule = TAILQ_LAST(&pf->rss_info_list,
> + i40e_rss_conf_list);
> break;
> default:
> goto free_flow;
> @@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
> break;
> case RTE_ETH_FILTER_HASH:
> ret = i40e_config_rss_filter_del(dev,
> - (struct i40e_rte_flow_rss_conf *)flow->rule);
> + &((struct i40e_rte_flow_rss_filter *)flow->rule)-
> >rss_filter_info);
> break;
> default:
> PMD_DRV_LOG(WARNING, "Filter type (%d) not
> supported", @@ -5248,13 +5352,27 @@ static int
> i40e_flow_flush_rss_filter(struct rte_eth_dev *dev) {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct rte_flow *flow;
> + void *temp;
> int32_t ret = -EINVAL;
>
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
>
> - if (rss_info->conf.queue_num)
> - ret = i40e_config_rss_filter(pf, rss_info, FALSE);
> + /* Delete rss flows in flow list. */
> + TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
> + if (flow->filter_type != RTE_ETH_FILTER_HASH)
> + continue;
> +
> + if (flow->rule) {
> + ret = i40e_config_rss_filter_del(dev,
> + &((struct i40e_rte_flow_rss_filter *)flow-
> >rule)->rss_filter_info);
> + if (ret)
> + return ret;
> + }
> + TAILQ_REMOVE(&pf->flow_list, flow, node);
> + rte_free(flow);
> + }
> +
> return ret;
> }
> --
> 2.17.1
Regards,
Bernard.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v5] net/i40e: implement hash function in rte flow API
[not found] ` <87688dbf6ac946d5974a61578be1ed89@intel.com>
@ 2020-03-25 9:48 ` Iremonger, Bernard
0 siblings, 0 replies; 26+ messages in thread
From: Iremonger, Bernard @ 2020-03-25 9:48 UTC (permalink / raw)
To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Xing, Beilei, Zhao1, Wei
Hi Chenxu,
<snip>
> [snip]
>
> > > --- a/drivers/net/i40e/i40e_flow.c
> > > +++ b/drivers/net/i40e/i40e_flow.c
> > > @@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct
> > > rte_eth_dev *dev,
> > > * function for RSS, or flowtype for queue region configuration.
> > > * For example:
> > > * pattern:
> > > - * Case 1: only ETH, indicate flowtype for queue region will be parsed.
> > > - * Case 2: only VLAN, indicate user_priority for queue region will be
> parsed.
> > > - * Case 3: none, indicate RSS related will be parsed in action.
> > > - * Any pattern other the ETH or VLAN will be treated as invalid except
> END.
> > > + * Case 1: try to transform patterns to pctype. valid pctype will be
> > > + * used in parse action.
> > > + * Case 2: only ETH, indicate flowtype for queue region will be parsed.
> > > + * Case 3: only VLAN, indicate user_priority for queue region will be
> parsed.
> > > * So, pattern choice depends on the purpose of configuration of
> > > * that flow.
> > > * action:
> > > @@ -4438,15 +4438,66 @@ static int
> > > i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> > > const struct rte_flow_item *pattern,
> > > struct rte_flow_error *error,
> > > - uint8_t *action_flag,
> > > + struct i40e_rss_pattern_info *p_info,
> > > struct i40e_queue_regions *info) { const struct
> > > rte_flow_item_vlan *vlan_spec, *vlan_mask; const struct
> > > rte_flow_item *item = pattern; enum rte_flow_item_type item_type;
> > > -
> > > -if (item->type == RTE_FLOW_ITEM_TYPE_END)
> > > +struct rte_flow_item *items;
> > > +uint32_t item_num = 0; /* non-void item number of pattern*/
> > > +uint32_t i = 0; static const struct { enum rte_flow_item_type
> > > +*item_array; uint64_t type; } i40e_rss_pctype_patterns[] = { {
> > > +pattern_fdir_ipv4,
> > > +ETH_RSS_FRAG_IPV4 |
> > > ETH_RSS_NONFRAG_IPV4_OTHER },
> > > +{ pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP }, {
> > > +pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP }, {
> > > +pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP }, {
> > > +pattern_fdir_ipv6,
> > > +ETH_RSS_FRAG_IPV6 |
> > > ETH_RSS_NONFRAG_IPV6_OTHER },
> > > +{ pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP }, {
> > > +pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP }, {
> > > +pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP }, };
> > > +
> > > +p_info->types = I40E_RSS_TYPE_INVALID;
> > > +
> > > +if (item->type == RTE_FLOW_ITEM_TYPE_END) { p_info->types =
> > > +I40E_RSS_TYPE_NONE;
> > > return 0;
> > > +}
> > > +
> > > +/* convert flow to pctype */
> > > +while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { if
> > > +((pattern
> > > ++ i)->type != RTE_FLOW_ITEM_TYPE_VOID) item_num++;
> > > +i++;
> > > +}
> > > +item_num++;
> > > +
> > > +items = rte_zmalloc("i40e_pattern",
> > > + item_num * sizeof(struct rte_flow_item), 0); if (!items) {
> > > +rte_flow_error_set(error, ENOMEM,
> > > RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> > > + NULL, "No memory for PMD internal
> > > items.");
> > > +return -ENOMEM;
> > > +}
> > > +
> > > +i40e_pattern_skip_void_item(items, pattern);
> > > +
> > > +for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) { if
> > > (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
> > > +items)) {
> > > +p_info->types = i40e_rss_pctype_patterns[i].type; rte_free(items);
> > > +return 0; } }
> > > +
> > > +rte_free(items);
> > >
> > > for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { if
> > > (item->last) { @@ -4459,7 +4510,7 @@
> > > i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> > > item_type = item->type; switch (item_type) { case
> > > RTE_FLOW_ITEM_TYPE_ETH:
> > > -*action_flag = 1;
> > > +p_info->action_flag = 1;
> > > break;
> > > case RTE_FLOW_ITEM_TYPE_VLAN:
> > > vlan_spec = item->spec;
> > > @@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused
> > > struct rte_eth_dev *dev,
> > > vlan_spec->tci) >> 13) & 0x7;
> > > info->region[0].user_priority_num = 1; info->queue_region_number =
> > > 1; -*action_flag = 0;
> > > +p_info->action_flag = 0;
> > > }
> > > }
> > > break;
> > > @@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused
> > > struct rte_eth_dev *dev,
> > > * max index should be 7, and so on. And also, queue index should be
> > > * continuous sequence and queue region index should be part of rss
> > > * queue index for this port.
> > > + * For hash params, the pctype in the action and the pattern must be the same.
> > > + * Setting a queue index or enabling symmetric hash requires empty types.
> > > */
> > > static int
> > > i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
> > > const struct rte_flow_action *actions,
> > > struct rte_flow_error *error,
> > > - uint8_t action_flag,
> > > +struct i40e_rss_pattern_info p_info,
> > > struct i40e_queue_regions *conf_info,
> > > union i40e_filter_t *filter)
> > > {
> > > @@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> > > *dev, struct i40e_rte_flow_rss_conf *rss_config =
> > > &filter->rss_conf; struct i40e_rte_flow_rss_conf *rss_info =
> > > &pf->rss_info; -uint16_t i, j, n, tmp;
> > > +uint16_t i, j, n, tmp, nb_types;
> > > uint32_t index = 0;
> > > uint64_t hf_bit = 1;
> > >
> > > @@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> > > *dev, return -rte_errno; }
> > >
> > > -if (action_flag) {
> > > +if (p_info.action_flag) {
> > > for (n = 0; n < 64; n++) {
> > > if (rss->types & (hf_bit << n)) {
> > > conf_info->region[0].hw_flowtype[0] = n; @@ -4674,11 +4727,11 @@
> > > i40e_flow_parse_rss_action(struct rte_eth_dev *dev, if
> > > (rss_config->queue_region_conf) return 0;
> > >
> > > -if (!rss || !rss->queue_num) {
> > > +if (!rss) {
> > > rte_flow_error_set(error, EINVAL,
> > > RTE_FLOW_ERROR_TYPE_ACTION,
> > > act,
> > > -"no valid queues");
> > > +"no valid rules");
> > > return -rte_errno;
> > > }
> > >
> > > @@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct
> > > rte_eth_dev *dev, } }
> > >
> > > -if (rss_info->conf.queue_num) {
> > > -rte_flow_error_set(error, EINVAL,
> > > -RTE_FLOW_ERROR_TYPE_ACTION,
> > > -act,
> > > -"rss only allow one valid rule");
> > > -return -rte_errno;
> > > +if (rss->queue_num && (p_info.types || rss->types))
> >
> > Should the line above be
> >
> > if (conf_info->queue_region_number && (p_info.types || rss->types))
> >
> > to allow RSS configuration of types and queues in a single rule, for example:
> >
> > flow create 0 ingress pattern eth / end actions rss types udp end
> > queues 2 3 end / end
> >
>
> Regarding conf_info->queue_region_number and rss->queue_num: in the old
> code, if the pattern contains eth or vlan, conf_info->queue_region_number
> is set to 1 while the pattern is parsed.
> The parse action function then checks conf_info->queue_region_number:
> if it is 1, the queue region handling runs and the function returns;
> if it is 0, the other (RSS) handling is done instead.
> After parsing, i40e_flush_queue_region_all_conf() is called when
> conf_info->queue_region_number == 1, while i40e_config_rss_filter() is
> called when conf_info->queue_region_number == 0.
>
> So my change only affects the case where conf_info->queue_region_number == 0.
>
> Btw, in i40e the queue configuration applies to the whole port; it cannot
> be applied to a single rule or a single type.
> So I don't think allowing RSS configuration of types and queues in a
> single rule is a good idea.
Would you suggest two rules as follows?
To configure the queues:
flow create 0 ingress pattern end actions rss queues 2 3 end / end
To configure the hash:
flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4 end key_len 0 queues end / end
> (the above rule is used on the ice PMD)
Regards,
Bernard
^ permalink raw reply [flat|nested] 26+ messages in thread
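For reference, Bernard's two testpmd rules map onto the rte_flow C API roughly as sketched below; port id 0, the queue numbers, and the exact field values are illustrative assumptions taken from the commands above, not something the thread prescribes:

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Rough C-API equivalent of the two testpmd rules suggested above. */
    static int
    configure_rss_rules(uint16_t port_id)
    {
            struct rte_flow_error err;
            struct rte_flow_attr attr = { .ingress = 1 };

            /* Rule 1: queue configuration only -- empty pattern, queues 2 and 3. */
            static const uint16_t queues[] = { 2, 3 };
            struct rte_flow_action_rss queue_conf = {
                    .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
                    .types = 0,                     /* no pctype here */
                    .queue_num = 2,
                    .queue = queues,
            };
            struct rte_flow_item queue_pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action queue_actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &queue_conf },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            if (rte_flow_create(port_id, &attr, queue_pattern,
                                queue_actions, &err) == NULL)
                    return -1;

            /* Rule 2: hash enable for IPv4 -- pattern eth/ipv4, no queues. */
            struct rte_flow_action_rss hash_conf = {
                    .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
                    .types = ETH_RSS_IPV4,
                    .key_len = 0,
                    .queue_num = 0,
            };
            struct rte_flow_item hash_pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action hash_actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &hash_conf },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            if (rte_flow_create(port_id, &attr, hash_pattern,
                                hash_actions, &err) == NULL)
                    return -1;

            return 0;
    }

Splitting the configuration this way keeps the port-wide queue region separate from the per-pctype hash enable, which matches the per-port nature of queue configuration described earlier in the thread.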
* Re: [dpdk-dev] [PATCH v5] net/i40e: implement hash function in rte flow API
2020-03-24 8:17 ` [dpdk-dev] [PATCH v5] " Chenxu Di
2020-03-24 12:57 ` Iremonger, Bernard
@ 2020-03-27 12:49 ` Xing, Beilei
1 sibling, 0 replies; 26+ messages in thread
From: Xing, Beilei @ 2020-03-27 12:49 UTC (permalink / raw)
To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Zhao1, Wei
> -----Original Message-----
> From: Di, ChenxuX <chenxux.di@intel.com>
> Sent: Tuesday, March 24, 2020 4:18 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [PATCH v5] net/i40e: implement hash function in rte flow API
>
> implement set hash global configurations, set symmetric hash enable and
What does the global configuration mean?
> set hash input set in rte flow API.
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> v5:
> -Modified the doc i40e.rst and various names.
> v4:
> -added check for l3 pctype with l4 input set.
> v3:
> -modified the doc i40e.rst
> v2:
> -dropped the removal of the legacy filter functions.
> ---
> doc/guides/nics/i40e.rst | 14 +
> doc/guides/rel_notes/release_20_05.rst | 6 +
> drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
> drivers/net/i40e/i40e_ethdev.h | 18 +
> drivers/net/i40e/i40e_flow.c | 186 ++++++++--
> 5 files changed, 623 insertions(+), 72 deletions(-)
>
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> d6e578eda..03b117a99 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -569,6 +569,20 @@ details please refer
> to :doc:`../testpmd_app_ug/index`.
> testpmd> set port (port_id) queue-region flush (on|off)
> testpmd> show port (port_id) queue-region
>
> +Generic flow API
> +~~~~~~~~~~~~~~~~~~~
> +Enable set hash input set and hash enable in generic flow API.
> +For the reason queue region configuration in i40e is for all PCTYPE,
> +pctype must be empty while configuring queue region.
> +The pctype in pattern and actions must be matched.
> +For exampale, to set queue region configuration queue 0, 1, 2, 3 and
Example.
> +set PCTYPE ipv4-tcp hash enable and set input set l3-src-only:
Enable hash for ipv4-tcp and configure input set with l3-src-only:
> +
> + testpmd> flow create 0 ingress pattern end actions rss types end \
> + queues 0 1 2 3 end / end
> + testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
> + actions rss types ipv4-tcp l3-src-only end queues end / end
> +
> Limitations or Known issues
> ---------------------------
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 000bbf501..12e85118f 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -62,6 +62,12 @@ New Features
>
> * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
>
> +* **Updated Intel i40e driver.**
> +
> + Updated i40e PMD with new features and improvements, including:
> +
> + * Added support for RSS using L3/L4 source/destination only.
This adds not only input set configuration but also other functions, and it is for rte_flow.
> +
>
> Removed Items
> -------------
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 9539b0470..2727eef80 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void
> *init_params __rte_unused)
> /* initialize mirror rule list */
> TAILQ_INIT(&pf->mirror_list);
>
> + /* initialize rss rule list */
> + TAILQ_INIT(&pf->rss_info_list);
> +
> /* initialize Traffic Manager configuration */
> i40e_tm_conf_init(dev);
>
> @@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
> static inline void i40e_rss_filter_restore(struct i40e_pf *pf) {
> - struct i40e_rte_flow_rss_conf *conf =
> - &pf->rss_info;
> - if (conf->conf.queue_num)
> - i40e_config_rss_filter(pf, conf, TRUE);
> + struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
> + struct i40e_rte_flow_rss_filter *rss_item;
> +
> + TAILQ_FOREACH(rss_item, rss_list, next) {
> + i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
> + }
> }
>
> static void
> @@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct
> rte_flow_action_rss *comp,
> sizeof(*with->queue) * with->queue_num)); }
>
> -int
> -i40e_config_rss_filter(struct i40e_pf *pf,
> - struct i40e_rte_flow_rss_conf *conf, bool add)
> +/* config rss hash input set */
> +static int
> +i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
> {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> - uint32_t i, lut = 0;
> - uint16_t j, num;
> - struct rte_eth_rss_conf rss_conf = {
> - .rss_key = conf->conf.key_len ?
> - (void *)(uintptr_t)conf->conf.key : NULL,
> - .rss_key_len = conf->conf.key_len,
> - .rss_hf = conf->conf.types,
> + struct rte_eth_input_set_conf conf;
> + int i, ret;
> + uint32_t j;
> + static const struct {
> + uint64_t type;
> + enum rte_eth_input_set_field field;
> + } inset_type_table[] = {
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> };
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
>
> - if (!add) {
> - if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
> - i40e_pf_disable_rss(pf);
> - memset(rss_info, 0,
> - sizeof(struct i40e_rte_flow_rss_conf));
> - return 0;
Since you defined ret, it would be better to use ret as the return value.
> + ret = 0;
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(types & (1ull << i)))
> + continue;
> +
> + conf.op = RTE_ETH_INPUT_SET_SELECT;
> + conf.flow_type = i;
> + conf.inset_size = 0;
> + for (j = 0; j < RTE_DIM(inset_type_table); j++) {
> + if ((types & inset_type_table[j].type) ==
> + inset_type_table[j].type) {
> + if (inset_type_table[j].field ==
> + RTE_ETH_INPUT_SET_UNKNOWN) {
> + return -EINVAL;
> + }
> + conf.field[conf.inset_size] =
> + inset_type_table[j].field;
> + conf.inset_size++;
> + }
> }
> +
> + if (conf.inset_size) {
> + ret = i40e_hash_filter_inset_select(hw, &conf);
> + if (ret)
> + return ret;
> + }
> + }
> +
> + return ret;
> +}
> +
> +/* set existing rule invalid if it is covered */ static void
> +i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_rte_flow_rss_filter *rss_item;
> + uint64_t rss_inset;
> +
> + /* to check pctype same need without input set bits */
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> +
> + TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
> + if (!rss_item->rss_filter_info.valid)
> + continue;
> +
> + /* config rss queue rule */
> + if (conf->conf.queue_num &&
> + rss_item->rss_filter_info.conf.queue_num)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss input set rule */
> + if (conf->conf.types &&
> + (rss_item->rss_filter_info.conf.types &
> + rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function symmetric rule */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
> + rss_item->rss_filter_info.conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function xor or toeplitz rule */
> + if (rss_item->rss_filter_info.conf.func !=
> + RTE_ETH_HASH_FUNCTION_DEFAULT &&
> + conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT
> &&
> + (rss_item->rss_filter_info.conf.types & rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> + }
> +}
> +
> +/* config rss hash enable and set hash input set */ static int
> +i40e_config_hash_pctype_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> +
> + if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
> + return -ENOTSUP;
> +
> + /* Confirm hash input set */
> + if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
> return -EINVAL;
> +
> + if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> + /* Random default keys */
> + static uint32_t rss_key_default[] = {0x6b793944,
> + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
> +
> + rss_conf->rss_key = (uint8_t *)rss_key_default;
> + rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> + sizeof(uint32_t);
> + PMD_DRV_LOG(INFO,
> + "No valid RSS key config for i40e, using default\n");
> }
>
> + rss_conf->rss_hf |= rss_info->conf.types;
> + i40e_hw_rss_hash_set(pf, rss_conf);
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss queue region */
> +static int
> +i40e_config_hash_queue_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i, lut;
> + uint16_t j, num;
> +
> /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> * It's necessary to calculate the actual PF queues that are configured.
> */
> @@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> return -ENOTSUP;
> }
>
> + lut = 0;
> /* Fill in redirection table */
> for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> if (j == num)
> @@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> }
>
> - if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
> - i40e_pf_disable_rss(pf);
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hash function */
> +static int
> +i40e_config_hash_function_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct rte_eth_hash_global_conf g_cfg;
> + uint64_t rss_inset;
> +
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
> + i40e_set_symmetric_hash_enable_per_port(hw, 1);
> + } else {
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY |
> ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> + g_cfg.hash_func = conf->conf.func;
> + g_cfg.sym_hash_enable_mask[0] = conf->conf.types &
> rss_inset;
> + g_cfg.valid_bit_mask[0] = conf->conf.types & rss_inset;
> + i40e_set_hash_filter_global_config(hw, &g_cfg);
> + }
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hena disable and set hash input set to default */ static
/* Disable RSS and configure with default input set */
> +int i40e_config_hash_pctype_del(struct i40e_pf *pf,
Why use pctype in the function name?
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = pf->rss_info.conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = pf->rss_info.conf.key_len,
> + };
> + uint32_t i;
> +
> + /* set hash enable register to disable */
> + rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
> + i40e_hw_rss_hash_set(pf, &rss_conf);
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash input set default */
> + struct rte_eth_input_set_conf input_conf = {
> + .op = RTE_ETH_INPUT_SET_SELECT,
> + .flow_type = i,
> + .inset_size = 1,
> + };
> + input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
> + i40e_hash_filter_inset_select(hw, &input_conf);
> + }
> +
> + rss_info->conf.types = rss_conf.rss_hf;
> +
> + return 0;
> +}
> +
> +/* config rss queue region to default */
> +static int
> +i40e_config_hash_queue_del(struct i40e_pf *pf)
> +{
The function name does not seem to match the "queue region" wording in the comment above.
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + uint16_t queue[I40E_MAX_Q_PER_TC];
> + uint32_t num_rxq, i, lut;
> + uint16_t j, num;
> +
> + num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues,
> I40E_MAX_Q_PER_TC);
> +
> + for (j = 0; j < num_rxq; j++)
> + queue[j] = j;
> +
> + /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> + * It's necessary to calculate the actual PF queues that are configured.
> + */
> + if (pf->dev_data->dev_conf.rxmode.mq_mode &
> ETH_MQ_RX_VMDQ_FLAG)
> + num = i40e_pf_calc_configured_queues_num(pf);
> + else
> + num = pf->dev_data->nb_rx_queues;
> +
> + num = RTE_MIN(num, num_rxq);
> + PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are
> configured",
> + num);
> +
> + if (num == 0) {
> + PMD_DRV_LOG(ERR,
> + "No PF queues are configured to enable RSS for
> port %u",
> + pf->dev_data->port_id);
> + return -ENOTSUP;
> + }
> +
> + lut = 0;
> + /* Fill in redirection table */
> + for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> + if (j == num)
> + j = 0;
> + lut = (lut << 8) | (queue[j] & ((0x1 <<
> + hw->func_caps.rss_table_entry_width) - 1));
> + if ((i & 3) == 3)
> + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> + }
> +
> + rss_info->conf.queue_num = 0;
> + memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
> +
> + return 0;
> +}
> +
> +/* config rss hash function to default */ static int
> +i40e_config_hash_function_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i;
> + uint16_t j;
> +
> + /* set symmetric hash to default status */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
> + i40e_set_symmetric_hash_enable_per_port(hw, 0);
> +
> return 0;
> }
> - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
> - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> - /* Random default keys */
> - static uint32_t rss_key_default[] = {0x6b793944,
> - 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> - 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> - 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
>
> - rss_conf.rss_key = (uint8_t *)rss_key_default;
> - rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> - sizeof(uint32_t);
> - PMD_DRV_LOG(INFO,
> - "No valid RSS key config for i40e, using default\n");
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash global config disable */
> + for (j = I40E_FILTER_PCTYPE_INVALID + 1;
> + j < I40E_FILTER_PCTYPE_MAX; j++) {
> + if (pf->adapter->pctypes_tbl[i] &
> + (1ULL << j))
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(j), 0);
> + }
> }
>
> - i40e_hw_rss_hash_set(pf, &rss_conf);
> + return 0;
> +}
>
> - if (i40e_rss_conf_init(rss_info, &conf->conf))
> - return -EINVAL;
> +int
> +i40e_config_rss_filter(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf, bool add) {
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_flow_action_rss update_conf = rss_info->conf;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = conf->conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = conf->conf.key_len,
> + .rss_hf = conf->conf.types,
> + };
> + int ret = 0;
> +
> + if (add) {
> + if (conf->conf.queue_num) {
> + /* config rss queue region */
> + ret = i40e_config_hash_queue_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.queue_num = conf->conf.queue_num;
> + update_conf.queue = conf->conf.queue;
> + } else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT) {
> + /* config hash function */
> + ret = i40e_config_hash_function_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.func = conf->conf.func;
> + } else {
> + /* config hash enable and input set for each pctype
> */
> + ret = i40e_config_hash_pctype_add(pf, conf,
> &rss_conf);
> + if (ret)
> + return ret;
> +
> + update_conf.types = rss_conf.rss_hf;
> + update_conf.key = rss_conf.rss_key;
> + update_conf.key_len = rss_conf.rss_key_len;
> + }
> +
> + /* update rss info in pf */
> + if (i40e_rss_conf_init(rss_info, &update_conf))
> + return -EINVAL;
> + } else {
> + if (!conf->valid)
> + return 0;
> +
> + if (conf->conf.queue_num)
> + i40e_config_hash_queue_del(pf);
> + else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT)
> + i40e_config_hash_function_del(pf, conf);
> + else
> + i40e_config_hash_pctype_del(pf, conf);
> + }
>
> return 0;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
> index aac89de91..1e4e64ea7 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx { #define
> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
> I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
>
> +#define I40E_RSS_TYPE_NONE 0ULL
> +#define I40E_RSS_TYPE_INVALID 1ULL
> +
> #define I40E_INSET_NONE 0x00000000000000000ULL
>
> /* bit0 ~ bit 7 */
> @@ -749,6 +752,11 @@ struct i40e_queue_regions {
> struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX +
> 1]; };
>
> +struct i40e_rss_pattern_info {
> + uint8_t action_flag;
Why is 'action_flag' in pattern_info?
> + uint64_t types;
> +};
> +
> /* Tunnel filter number HW supports */
> #define I40E_MAX_TUNNEL_FILTER_NUM 400
>
> @@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
> I40E_VFQF_HKEY_MAX_INDEX :
> I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t)]; /* Hash key. */
> uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use.
> */
> + bool valid; /* Check if it's valid */
> +};
> +
> +TAILQ_HEAD(i40e_rss_conf_list, i40e_rte_flow_rss_filter);
> +
> +/* rss filter list structure */
> +struct i40e_rte_flow_rss_filter {
Don't use _rte_ in PMD.
> + TAILQ_ENTRY(i40e_rte_flow_rss_filter) next;
> + struct i40e_rte_flow_rss_conf rss_filter_info;
> };
>
> struct i40e_vf_msg_cfg {
> @@ -1039,6 +1056,7 @@ struct i40e_pf {
> struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
> struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
> struct i40e_rte_flow_rss_conf rss_info; /* rss info */
> + struct i40e_rss_conf_list rss_info_list; /* rss rull list */
Typo: rule list
> struct i40e_queue_regions queue_region; /* queue region info */
> struct i40e_fc_conf fc_conf; /* Flow control conf */
> struct i40e_mirror_rule_list mirror_list; diff --git
> a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index
> d877ac250..4774fde6d 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev
> *dev,
> * function for RSS, or flowtype for queue region configuration.
> * For example:
> * pattern:
> - * Case 1: only ETH, indicate flowtype for queue region will be parsed.
> - * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
> - * Case 3: none, indicate RSS related will be parsed in action.
> - * Any pattern other the ETH or VLAN will be treated as invalid except END.
> + * Case 1: try to transform patterns to pctype. valid pctype will be
> + * used in parse action.
> + * Case 2: only ETH, indicate flowtype for queue region will be parsed.
> + * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
> * So, pattern choice is depened on the purpose of configuration of
> * that flow.
> * action:
> @@ -4438,15 +4438,66 @@ static int
> i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> const struct rte_flow_item *pattern,
> struct rte_flow_error *error,
> - uint8_t *action_flag,
> + struct i40e_rss_pattern_info *p_info,
> struct i40e_queue_regions *info) {
> const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
> const struct rte_flow_item *item = pattern;
> enum rte_flow_item_type item_type;
> -
> - if (item->type == RTE_FLOW_ITEM_TYPE_END)
> + struct rte_flow_item *items;
> + uint32_t item_num = 0; /* non-void item number of pattern*/
> + uint32_t i = 0;
> + static const struct {
> + enum rte_flow_item_type *item_array;
> + uint64_t type;
> + } i40e_rss_pctype_patterns[] = {
> + { pattern_fdir_ipv4,
> + ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_OTHER },
> + { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
> + { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
> + { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
> + { pattern_fdir_ipv6,
> + ETH_RSS_FRAG_IPV6 |
> ETH_RSS_NONFRAG_IPV6_OTHER },
> + { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
> + { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
> + { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
> + };
> +
> + p_info->types = I40E_RSS_TYPE_INVALID;
> +
> + if (item->type == RTE_FLOW_ITEM_TYPE_END) {
> + p_info->types = I40E_RSS_TYPE_NONE;
> return 0;
> + }
> +
> + /* convert flow to pctype */
> + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
> + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
> + item_num++;
> + i++;
> + }
> + item_num++;
> +
> + items = rte_zmalloc("i40e_pattern",
> + item_num * sizeof(struct rte_flow_item), 0);
> + if (!items) {
> + rte_flow_error_set(error, ENOMEM,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> + NULL, "No memory for PMD internal
> items.");
> + return -ENOMEM;
> + }
> +
> + i40e_pattern_skip_void_item(items, pattern);
> +
> + for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
> + if
> (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
> + items)) {
> + p_info->types = i40e_rss_pctype_patterns[i].type;
> + rte_free(items);
> + return 0;
> + }
> + }
> +
> + rte_free(items);
>
> for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> if (item->last) {
> @@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> item_type = item->type;
> switch (item_type) {
> case RTE_FLOW_ITEM_TYPE_ETH:
> - *action_flag = 1;
> + p_info->action_flag = 1;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> vlan_spec = item->spec;
> @@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> vlan_spec->tci) >> 13) & 0x7;
> info->region[0].user_priority_num =
> 1;
> info->queue_region_number = 1;
> - *action_flag = 0;
> + p_info->action_flag = 0;
> }
> }
> break;
> @@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused
> struct rte_eth_dev *dev,
> * max index should be 7, and so on. And also, queue index should be
> * continuous sequence and queue region index should be part of rss
> * queue index for this port.
> + * For hash params, the pctype in action and pattern must be same.
> + * Set queue index or symmetric hash enable must be with non-types.
> */
> static int
> i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
> const struct rte_flow_action *actions,
> struct rte_flow_error *error,
> - uint8_t action_flag,
> + struct i40e_rss_pattern_info p_info,
> struct i40e_queue_regions *conf_info,
> union i40e_filter_t *filter)
> {
> @@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> struct i40e_rte_flow_rss_conf *rss_config =
> &filter->rss_conf;
> struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> - uint16_t i, j, n, tmp;
> + uint16_t i, j, n, tmp, nb_types;
> uint32_t index = 0;
> uint64_t hf_bit = 1;
>
> @@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> return -rte_errno;
> }
>
> - if (action_flag) {
> + if (p_info.action_flag) {
> for (n = 0; n < 64; n++) {
> if (rss->types & (hf_bit << n)) {
> conf_info->region[0].hw_flowtype[0] = n;
> @@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> if (rss_config->queue_region_conf)
> return 0;
>
> - if (!rss || !rss->queue_num) {
> + if (!rss) {
> rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "no valid queues");
> + "no valid rules");
> return -rte_errno;
> }
>
> @@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> }
> }
>
> - if (rss_info->conf.queue_num) {
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ACTION,
> - act,
> - "rss only allow one valid rule");
> - return -rte_errno;
> + if (rss->queue_num && (p_info.types || rss->types))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype must be empty while configuring queue
> region");
> +
> + /* validate pattern and pctype */
> + if (!(rss->types & p_info.types) &&
> + (rss->types || p_info.types) && !rss->queue_num)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "invaild pctype");
> +
> + nb_types = 0;
> + for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
> + if (rss->types & (hf_bit << n))
> + nb_types++;
> + if (nb_types > 1)
> + return rte_flow_error_set
> + (error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "multi pctype is not supported");
> }
>
> + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ
> &&
> + (p_info.types || rss->types || rss->queue_num))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype and queues must be empty while"
> + " setting SYMMETRIC hash function");
> +
> /* Parse RSS related parameters from configuration */
> - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
> + if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "non-default RSS hash functions are not supported");
> + "RSS hash functions are not supported");
> if (rss->level)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act, @@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev
> *dev, {
> int ret;
> struct i40e_queue_regions info;
> - uint8_t action_flag = 0;
> + struct i40e_rss_pattern_info p_info;
>
> memset(&info, 0, sizeof(struct i40e_queue_regions));
> + memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
>
> ret = i40e_flow_parse_rss_pattern(dev, pattern,
> - error, &action_flag, &info);
> + error, &p_info, &info);
> if (ret)
> return ret;
>
> ret = i40e_flow_parse_rss_action(dev, actions, error,
> - action_flag, &info, filter);
> + p_info, &info, filter);
> if (ret)
> return ret;
>
> @@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rte_flow_rss_filter *rss_filter;
> int ret;
>
> if (conf->queue_region_conf) {
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
> - conf->queue_region_conf = 0;
> } else {
> ret = i40e_config_rss_filter(pf, conf, 1);
> }
> - return ret;
> +
> + if (ret)
> + return ret;
> +
> + rss_filter = rte_zmalloc("i40e_rte_flow_rss_filter",
> + sizeof(*rss_filter), 0);
> + if (rss_filter == NULL) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + rss_filter->rss_filter_info = *conf;
> + /* the rull new created is always valid
> + * the existing rull covered by new rull will be set invalid
> + */
> + rss_filter->rss_filter_info.valid = true;
> +
> + TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
> +
> + return 0;
> }
>
> static int
> @@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rte_flow_rss_filter *rss_filter;
>
> - i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + if (conf->queue_region_conf)
> + i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + else
> + i40e_config_rss_filter(pf, conf, 0);
>
> - i40e_config_rss_filter(pf, conf, 0);
> + TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
> + if (!memcmp(&rss_filter->rss_filter_info, conf,
> + sizeof(struct rte_flow_action_rss))) {
> + TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
> + rte_free(rss_filter);
> + }
> + }
> return 0;
> }
>
> @@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
> &cons_filter.rss_conf);
> if (ret)
> goto free_flow;
> - flow->rule = &pf->rss_info;
> + flow->rule = TAILQ_LAST(&pf->rss_info_list,
> + i40e_rss_conf_list);
> break;
> default:
> goto free_flow;
> @@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
> break;
> case RTE_ETH_FILTER_HASH:
> ret = i40e_config_rss_filter_del(dev,
> - (struct i40e_rte_flow_rss_conf *)flow->rule);
> + &((struct i40e_rte_flow_rss_filter *)flow->rule)-
> >rss_filter_info);
> break;
> default:
> PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
> @@ -5248,13 +5352,27 @@ static int i40e_flow_flush_rss_filter(struct
> rte_eth_dev *dev) {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct rte_flow *flow;
> + void *temp;
> int32_t ret = -EINVAL;
>
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
>
> - if (rss_info->conf.queue_num)
> - ret = i40e_config_rss_filter(pf, rss_info, FALSE);
> + /* Delete rss flows in flow list. */
> + TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
> + if (flow->filter_type != RTE_ETH_FILTER_HASH)
> + continue;
> +
> + if (flow->rule) {
> + ret = i40e_config_rss_filter_del(dev,
> + &((struct i40e_rte_flow_rss_filter *)flow-
> >rule)->rss_filter_info);
> + if (ret)
> + return ret;
> + }
> + TAILQ_REMOVE(&pf->flow_list, flow, node);
> + rte_free(flow);
> + }
> +
> return ret;
> }
> --
> 2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCH v6] net/i40e: implement hash function in rte flow API
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (8 preceding siblings ...)
2020-03-24 8:17 ` [dpdk-dev] [PATCH v5] " Chenxu Di
@ 2020-03-30 7:40 ` Chenxu Di
2020-04-02 16:26 ` Iremonger, Bernard
2020-04-10 1:52 ` Xing, Beilei
2020-04-13 5:31 ` [dpdk-dev] [PATCH v7] net/i40e: enable advanced RSS Chenxu Di
` (2 subsequent siblings)
12 siblings, 2 replies; 26+ messages in thread
From: Chenxu Di @ 2020-03-30 7:40 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, beilei.xing, wei.zhao1, Chenxu Di
Implement setting the hash global configuration, enabling symmetric hash,
and setting the hash input set through the rte_flow API; a usage sketch
follows the file summary below.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
v6:
-Modified the docs and various name.
v5:
-Modified the doc i40e.rst and various name.
v4:
-added check for l3 pctype with l4 input set.
v3:
-modified the doc i40e.rst
v2:
-canceled remove legacy filter functions.
---
doc/guides/nics/i40e.rst | 14 +
doc/guides/rel_notes/release_20_05.rst | 7 +
drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
drivers/net/i40e/i40e_ethdev.h | 18 +
drivers/net/i40e/i40e_flow.c | 186 ++++++++--
5 files changed, 624 insertions(+), 72 deletions(-)
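A minimal usage sketch of the feature from an application's point of view,
using only standard rte_flow calls (illustrative only; the function name is
an assumption, not part of this patch, and error handling is trimmed):

    #include <stdint.h>
    #include <rte_flow.h>

    /* Request symmetric Toeplitz hashing on a port through rte_flow.
     * Per this patch, the RSS types and queues must be left empty for
     * the symmetric case.
     */
    static int
    request_symmetric_hash(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_action_rss rss = {
            .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
            .types = 0,
            .queue_num = 0,
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        return rte_flow_create(port_id, &attr, pattern,
                               actions, &err) ? 0 : -1;
    }
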
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..92590dadc 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,20 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+Hash input set configuration and hash enabling are supported through the
+generic flow API. Because queue region configuration on i40e applies to
+all PCTYPEs, the pctype must be empty when configuring a queue region.
+The pctype in the pattern and in the actions must match.
+For example, configure a queue region with queues 0, 1, 2 and 3, then
+enable hash for ipv4-tcp with the l3-src-only input set:
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues 0 1 2 3 end / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queues end / end
+
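+As a further sketch (assuming testpmd exposes the hash function through
+its ``func symmetric_toeplitz`` token), enable symmetric hash for the
+port; the pattern, pctype and queues must be empty in this case:
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues end func symmetric_toeplitz / end
+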
Limitations or Known issues
---------------------------
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf501..bf5f399fe 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,13 @@ New Features
* Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
+* **Updated Intel i40e driver.**
+
+ Updated i40e PMD with new features and improvements, including:
+
+ * Added support for RSS using L3/L4 source/destination only.
+ * Added support for setting hash function in rte flow.
+
Removed Items
-------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9539b0470..92c314e66 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize rss rule list */
+ TAILQ_INIT(&pf->rss_info_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
+ struct i40e_flow_rss_filter *rss_item;
+
+ TAILQ_FOREACH(rss_item, rss_list, next) {
+ i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
+ }
}
static void
@@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct rte_flow_action_rss *comp,
sizeof(*with->queue) * with->queue_num));
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* config rss hash input set */
+static int
+i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_type_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
};
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(types & (1ull << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_type_table); j++) {
+ if ((types & inset_type_table[j].type) ==
+ inset_type_table[j].type) {
+ if (inset_type_table[j].field ==
+ RTE_ETH_INPUT_SET_UNKNOWN) {
+ return -EINVAL;
+ }
+ conf.field[conf.inset_size] =
+ inset_type_table[j].field;
+ conf.inset_size++;
+ }
}
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+/* set existing rule invalid if it is covered */
+static void
+i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_flow_rss_filter *rss_item;
+ uint64_t rss_inset;
+
+ /* mask out the input set bits so only the pctype bits are compared */
+ rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* config rss queue rule */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss input set rule */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ rss_inset) ==
+ (conf->conf.types & rss_inset))
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function symmetric rule */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ rss_item->rss_filter_info.valid = false;
+
+ /* config rss function xor or toeplitz rule */
+ if (rss_item->rss_filter_info.conf.func !=
+ RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ (rss_item->rss_filter_info.conf.types & rss_inset) ==
+ (conf->conf.types & rss_inset))
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* config rss hash enable and set hash input set */
+static int
+i40e_config_hash_pctype_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Confirm hash input set */
+ if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss queue region */
+static int
+i40e_config_hash_queue_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hash function */
+static int
+i40e_config_hash_function_add(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_hash_global_conf g_cfg;
+ uint64_t rss_inset;
+
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ } else {
+ rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ g_cfg.hash_func = conf->conf.func;
+ g_cfg.sym_hash_enable_mask[0] = conf->conf.types & rss_inset;
+ g_cfg.valid_bit_mask[0] = conf->conf.types & rss_inset;
+ i40e_set_hash_filter_global_config(hw, &g_cfg);
+ }
+
+ i40e_config_rss_invalidate_previous_rule(pf, conf);
+
+ return 0;
+}
+
+/* config rss hena disable and set hash input set to default */
+static int
+i40e_config_hash_pctype_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* set hash enable register to disable */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
+ i40e_hw_rss_hash_set(pf, &rss_conf);
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
+ !(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash input set default */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ return 0;
+}
+
+/* config rss queue region to default */
+static int
+i40e_config_hash_queue_del(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+/* config rss hash function to default */
+static int
+i40e_config_hash_function_del(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i;
+ uint16_t j;
+
+ /* set symmetric hash to default status */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 0);
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(conf->conf.types & (1ull << i)))
+ continue;
+
+ /* set hash global config disable */
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j), 0);
+ }
}
- i40e_hw_rss_hash_set(pf, &rss_conf);
+ return 0;
+}
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* config rss queue region */
+ ret = i40e_config_hash_queue_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+ /* config hash function */
+ ret = i40e_config_hash_function_add(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* config hash enable and input set for each pctype */
+ ret = i40e_config_hash_pctype_add(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* update rss info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_config_hash_queue_del(pf);
+ else if (conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ i40e_config_hash_function_del(pf, conf);
+ else
+ i40e_config_hash_pctype_del(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..929e6b7c7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_flow_rss_filter);
+
+/* rss filter list structure */
+struct i40e_flow_rss_filter {
+ TAILQ_ENTRY(i40e_flow_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1039,6 +1056,7 @@ struct i40e_pf {
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rss_conf_list rss_info_list; /* rss rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..d67cd648e 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform the pattern to a pctype. A valid pctype will
+ * be used when parsing the action.
+ * Case 2: only ETH, indicate flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
@@ -4438,15 +4438,66 @@ static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash params, the pctype in the action and the pattern must match.
+ * Setting a queue index or enabling symmetric hash requires empty types.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "no valid rules");
return -rte_errno;
}
@@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues must be empty while"
+ " setting SYMMETRIC hash function");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
{
int ret;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ struct i40e_rss_pattern_info p_info;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_flow_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_flow_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* the rule newly created is always valid;
+ * an existing rule covered by the new rule will be set invalid
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_flow_rss_filter *rss_filter;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_info_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_flow_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5352,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_flow_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v6] net/i40e: implement hash function in rte flow API
2020-03-30 7:40 ` [dpdk-dev] [PATCH v6] " Chenxu Di
@ 2020-04-02 16:26 ` Iremonger, Bernard
[not found] ` <4a1f49493dc54ef0b3ae9c2bf7018f0d@intel.com>
2020-04-10 1:52 ` Xing, Beilei
1 sibling, 1 reply; 26+ messages in thread
From: Iremonger, Bernard @ 2020-04-02 16:26 UTC (permalink / raw)
To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Xing, Beilei, Zhao1, Wei, Di, ChenxuX
Hi Chenxu,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenxu Di
> Sent: Monday, March 30, 2020 8:40 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [dpdk-dev] [PATCH v6] net/i40e: implement hash function in rte
> flow API
>
> implement set hash global configurations, set symmetric hash enable and
> set hash input set in rte flow API.
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> v6:
> -Modified the docs and various name.
> v5:
> -Modified the doc i40e.rst and various name.
> v4:
> -added check for l3 pctype with l4 input set.
> v3:
> -modified the doc i40e.rst
> v2:
> -canceled remove legacy filter functions.
> ---
> doc/guides/nics/i40e.rst | 14 +
> doc/guides/rel_notes/release_20_05.rst | 7 +
> drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
> drivers/net/i40e/i40e_ethdev.h | 18 +
> drivers/net/i40e/i40e_flow.c | 186 ++++++++--
> 5 files changed, 624 insertions(+), 72 deletions(-)
>
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> d6e578eda..92590dadc 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -569,6 +569,20 @@ details please refer to
> :doc:`../testpmd_app_ug/index`.
> testpmd> set port (port_id) queue-region flush (on|off)
> testpmd> show port (port_id) queue-region
>
> +Generic flow API
> +~~~~~~~~~~~~~~~~~~~
> +Enable set hash input set and hash enable in generic flow API.
> +For the reason queue region configuration in i40e is for all PCTYPE,
> +pctype must be empty while configuring queue region.
> +The pctype in pattern and actions must be matched.
> +Exampale, set queue region configuration queue 0, 1, 2, 3 and Enable
> +hash for ipv4-tcp and configure input set with l3-src-only:
> +
> + testpmd> flow create 0 ingress pattern end actions rss types end \
> + queues 0 1 2 3 end / end
> + testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
> + actions rss types ipv4-tcp l3-src-only end queues end / end
> +
> Limitations or Known issues
> ---------------------------
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 000bbf501..bf5f399fe 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -62,6 +62,13 @@ New Features
>
> * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
>
> +* **Updated Intel i40e driver.**
> +
> + Updated i40e PMD with new features and improvements, including:
> +
> + * Added support for RSS using L3/L4 source/destination only.
> + * Added support for setting hash function in rte flow.
> +
>
> Removed Items
> -------------
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 9539b0470..92c314e66 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void
> *init_params __rte_unused)
> /* initialize mirror rule list */
> TAILQ_INIT(&pf->mirror_list);
>
> + /* initialize rss rule list */
> + TAILQ_INIT(&pf->rss_info_list);
> +
> /* initialize Traffic Manager configuration */
> i40e_tm_conf_init(dev);
>
> @@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
> static inline void i40e_rss_filter_restore(struct i40e_pf *pf) {
> - struct i40e_rte_flow_rss_conf *conf =
> - &pf->rss_info;
> - if (conf->conf.queue_num)
> - i40e_config_rss_filter(pf, conf, TRUE);
> + struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
> + struct i40e_flow_rss_filter *rss_item;
> +
> + TAILQ_FOREACH(rss_item, rss_list, next) {
> + i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
> + }
> }
>
> static void
> @@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct
> rte_flow_action_rss *comp,
> sizeof(*with->queue) * with->queue_num)); }
>
> -int
> -i40e_config_rss_filter(struct i40e_pf *pf,
> - struct i40e_rte_flow_rss_conf *conf, bool add)
> +/* config rss hash input set */
> +static int
> +i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
> {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> - uint32_t i, lut = 0;
> - uint16_t j, num;
> - struct rte_eth_rss_conf rss_conf = {
> - .rss_key = conf->conf.key_len ?
> - (void *)(uintptr_t)conf->conf.key : NULL,
> - .rss_key_len = conf->conf.key_len,
> - .rss_hf = conf->conf.types,
> + struct rte_eth_input_set_conf conf;
> + int i, ret;
> + uint32_t j;
> + static const struct {
> + uint64_t type;
> + enum rte_eth_input_set_field field;
> + } inset_type_table[] = {
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> };
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
>
> - if (!add) {
> - if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
The function i40e_action_rss_same() is no longer used.
Should it be removed from i40e_ethdev.c and i40e_ethdev.h?
> - i40e_pf_disable_rss(pf);
> - memset(rss_info, 0,
> - sizeof(struct i40e_rte_flow_rss_conf));
> - return 0;
> + ret = 0;
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> i++) {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(types & (1ull << i)))
> + continue;
> +
> + conf.op = RTE_ETH_INPUT_SET_SELECT;
> + conf.flow_type = i;
> + conf.inset_size = 0;
> + for (j = 0; j < RTE_DIM(inset_type_table); j++) {
> + if ((types & inset_type_table[j].type) ==
> + inset_type_table[j].type) {
> + if (inset_type_table[j].field ==
> + RTE_ETH_INPUT_SET_UNKNOWN) {
> + return -EINVAL;
> + }
> + conf.field[conf.inset_size] =
> + inset_type_table[j].field;
> + conf.inset_size++;
> + }
> }
> +
> + if (conf.inset_size) {
> + ret = i40e_hash_filter_inset_select(hw, &conf);
> + if (ret)
> + return ret;
> + }
> + }
> +
> + return ret;
> +}
> +
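For context on how the inset_type_table above gets exercised: an application selects
these per-field bits through the standard rte_flow RSS action. A minimal
application-side sketch (public rte_flow API only, not part of this patch; port_id is
assumed to be a started i40e port):

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Request RSS on ipv4-tcp, hashing on the source IP address only. */
    static struct rte_flow *
    create_ipv4_tcp_l3src_rule(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_TCP },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_rss rss = {
            .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
            .types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }

With those types, the loop above should pick RTE_ETH_INPUT_SET_L3_SRC_IP4 for the
ipv4-tcp flow type and program it through i40e_hash_filter_inset_select().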
> +/* set existing rule invalid if it is covered */ static void
> +i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_flow_rss_filter *rss_item;
> + uint64_t rss_inset;
> +
> + /* to check pctype same need without input set bits */
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> +
> + TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
> + if (!rss_item->rss_filter_info.valid)
> + continue;
> +
> + /* config rss queue rule */
> + if (conf->conf.queue_num &&
> + rss_item->rss_filter_info.conf.queue_num)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss input set rule */
> + if (conf->conf.types &&
> + (rss_item->rss_filter_info.conf.types &
> + rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function symmetric rule */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
> + rss_item->rss_filter_info.conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function xor or toeplitz rule */
> + if (rss_item->rss_filter_info.conf.func !=
> + RTE_ETH_HASH_FUNCTION_DEFAULT &&
> + conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT
> &&
> + (rss_item->rss_filter_info.conf.types & rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> + }
> +}
> +
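To make the coverage check above concrete: two rules that reduce to the same pctype
once the *_ONLY bits are masked out, e.g. (testpmd syntax as used in the i40e.rst hunk
of this patch; port and queue numbers purely illustrative):

    testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end \
             actions rss types ipv4-udp l3-src-only end queues end / end
    testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end \
             actions rss types ipv4-udp l4-dst-only end queues end / end

Creating the second rule should mark the first list entry invalid, so only the latest
input-set configuration for ipv4-udp is restored after a reset.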
> +/* config rss hash enable and set hash input set */ static int
> +i40e_config_hash_pctype_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> +
> + if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
> + return -ENOTSUP;
> +
> + /* Confirm hash input set */
> + if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
> return -EINVAL;
> +
> + if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> + /* Random default keys */
> + static uint32_t rss_key_default[] = {0x6b793944,
> + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
> +
> + rss_conf->rss_key = (uint8_t *)rss_key_default;
> + rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1)
> *
> + sizeof(uint32_t);
> + PMD_DRV_LOG(INFO,
> + "No valid RSS key config for i40e, using default\n");
> }
>
> + rss_conf->rss_hf |= rss_info->conf.types;
> + i40e_hw_rss_hash_set(pf, rss_conf);
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
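The key-length check above amounts to 52 bytes on this device, assuming
I40E_PFQF_HKEY_MAX_INDEX is 12 as in the current base code. A hedged sketch of an
application supplying an explicit key through the same RSS action (key bytes are
arbitrary placeholders):

    uint8_t rss_key[52];    /* (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4 bytes */

    memset(rss_key, 0x6d, sizeof(rss_key));    /* placeholder key material */

    struct rte_flow_action_rss rss = {
        .types = ETH_RSS_NONFRAG_IPV4_TCP,
        .key = rss_key,
        .key_len = sizeof(rss_key),
    };

Any shorter key_len should fall through to the rss_key_default path above.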
> +/* config rss queue region */
> +static int
> +i40e_config_hash_queue_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i, lut;
> + uint16_t j, num;
> +
> /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> * It's necessary to calculate the actual PF queues that are configured.
> */
> @@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> return -ENOTSUP;
> }
>
> + lut = 0;
> /* Fill in redirection table */
> for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> if (j == num)
> @@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> }
>
> - if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
> - i40e_pf_disable_rss(pf);
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hash function */
> +static int
> +i40e_config_hash_function_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct rte_eth_hash_global_conf g_cfg;
> + uint64_t rss_inset;
> +
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
> + i40e_set_symmetric_hash_enable_per_port(hw, 1);
> + } else {
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY |
> ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> + g_cfg.hash_func = conf->conf.func;
> + g_cfg.sym_hash_enable_mask[0] = conf->conf.types &
> rss_inset;
> + g_cfg.valid_bit_mask[0] = conf->conf.types & rss_inset;
> + i40e_set_hash_filter_global_config(hw, &g_cfg);
> + }
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hena disable and set hash input set to defalut */ static
Typo: defalut should be default in the comment above.
> +int i40e_config_hash_pctype_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = pf->rss_info.conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = pf->rss_info.conf.key_len,
> + };
> + uint32_t i;
> +
> + /* set hash enable register to disable */
> + rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
> + i40e_hw_rss_hash_set(pf, &rss_conf);
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> i++) {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash input set default */
> + struct rte_eth_input_set_conf input_conf = {
> + .op = RTE_ETH_INPUT_SET_SELECT,
> + .flow_type = i,
> + .inset_size = 1,
> + };
> + input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
> + i40e_hash_filter_inset_select(hw, &input_conf);
> + }
> +
> + rss_info->conf.types = rss_conf.rss_hf;
> +
> + return 0;
> +}
> +
> +/* config rss queue region to default */ static int
> +i40e_config_hash_queue_del(struct i40e_pf *pf) {
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + uint16_t queue[I40E_MAX_Q_PER_TC];
> + uint32_t num_rxq, i, lut;
> + uint16_t j, num;
> +
> + num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues,
> I40E_MAX_Q_PER_TC);
> +
> + for (j = 0; j < num_rxq; j++)
> + queue[j] = j;
> +
> + /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> + * It's necessary to calculate the actual PF queues that are configured.
> + */
> + if (pf->dev_data->dev_conf.rxmode.mq_mode &
> ETH_MQ_RX_VMDQ_FLAG)
> + num = i40e_pf_calc_configured_queues_num(pf);
> + else
> + num = pf->dev_data->nb_rx_queues;
> +
> + num = RTE_MIN(num, num_rxq);
> + PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are
> configured",
> + num);
> +
> + if (num == 0) {
> + PMD_DRV_LOG(ERR,
> + "No PF queues are configured to enable RSS for port
> %u",
> + pf->dev_data->port_id);
> + return -ENOTSUP;
> + }
> +
> + lut = 0;
> + /* Fill in redirection table */
> + for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> + if (j == num)
> + j = 0;
> + lut = (lut << 8) | (queue[j] & ((0x1 <<
> + hw->func_caps.rss_table_entry_width) - 1));
> + if ((i & 3) == 3)
> + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> + }
> +
> + rss_info->conf.queue_num = 0;
> + memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
> +
> + return 0;
> +}
> +
> +/* config rss hash function to default */ static int
> +i40e_config_hash_function_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i;
> + uint16_t j;
> +
> + /* set symmetric hash to default status */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
> + i40e_set_symmetric_hash_enable_per_port(hw, 0);
> +
> return 0;
> }
> - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
> - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> - /* Random default keys */
> - static uint32_t rss_key_default[] = {0x6b793944,
> - 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> - 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> - 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
>
> - rss_conf.rss_key = (uint8_t *)rss_key_default;
> - rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> - sizeof(uint32_t);
> - PMD_DRV_LOG(INFO,
> - "No valid RSS key config for i40e, using default\n");
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> i++) {
> + if (!(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash global config disable */
> + for (j = I40E_FILTER_PCTYPE_INVALID + 1;
> + j < I40E_FILTER_PCTYPE_MAX; j++) {
> + if (pf->adapter->pctypes_tbl[i] &
> + (1ULL << j))
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(j), 0);
> + }
> }
>
> - i40e_hw_rss_hash_set(pf, &rss_conf);
> + return 0;
> +}
>
> - if (i40e_rss_conf_init(rss_info, &conf->conf))
> - return -EINVAL;
> +int
> +i40e_config_rss_filter(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf, bool add) {
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_flow_action_rss update_conf = rss_info->conf;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = conf->conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = conf->conf.key_len,
> + .rss_hf = conf->conf.types,
> + };
> + int ret = 0;
> +
> + if (add) {
> + if (conf->conf.queue_num) {
> + /* config rss queue region */
> + ret = i40e_config_hash_queue_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.queue_num = conf->conf.queue_num;
> + update_conf.queue = conf->conf.queue;
> + } else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT) {
> + /* config hash function */
> + ret = i40e_config_hash_function_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.func = conf->conf.func;
> + } else {
> + /* config hash enable and input set for each pctype
> */
> + ret = i40e_config_hash_pctype_add(pf, conf,
> &rss_conf);
> + if (ret)
> + return ret;
> +
> + update_conf.types = rss_conf.rss_hf;
> + update_conf.key = rss_conf.rss_key;
> + update_conf.key_len = rss_conf.rss_key_len;
> + }
> +
> + /* update rss info in pf */
> + if (i40e_rss_conf_init(rss_info, &update_conf))
> + return -EINVAL;
> + } else {
> + if (!conf->valid)
> + return 0;
> +
> + if (conf->conf.queue_num)
> + i40e_config_hash_queue_del(pf);
> + else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT)
> + i40e_config_hash_function_del(pf, conf);
> + else
> + i40e_config_hash_pctype_del(pf, conf);
> + }
>
> return 0;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev.h
> b/drivers/net/i40e/i40e_ethdev.h index aac89de91..929e6b7c7 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx { #define
> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
> I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
>
> +#define I40E_RSS_TYPE_NONE 0ULL
> +#define I40E_RSS_TYPE_INVALID 1ULL
> +
> #define I40E_INSET_NONE 0x00000000000000000ULL
>
> /* bit0 ~ bit 7 */
> @@ -749,6 +752,11 @@ struct i40e_queue_regions {
> struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX +
> 1]; };
>
> +struct i40e_rss_pattern_info {
> + uint8_t action_flag;
> + uint64_t types;
> +};
> +
> /* Tunnel filter number HW supports */
> #define I40E_MAX_TUNNEL_FILTER_NUM 400
>
> @@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
> I40E_VFQF_HKEY_MAX_INDEX :
> I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t)]; /* Hash key. */
> uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use.
> */
> + bool valid; /* Check if it's valid */
> +};
> +
> +TAILQ_HEAD(i40e_rss_conf_list, i40e_flow_rss_filter);
> +
> +/* rss filter list structure */
> +struct i40e_flow_rss_filter {
> + TAILQ_ENTRY(i40e_flow_rss_filter) next;
> + struct i40e_rte_flow_rss_conf rss_filter_info;
> };
>
> struct i40e_vf_msg_cfg {
> @@ -1039,6 +1056,7 @@ struct i40e_pf {
> struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
> struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
> struct i40e_rte_flow_rss_conf rss_info; /* rss info */
> + struct i40e_rss_conf_list rss_info_list; /* rss rull list */
> struct i40e_queue_regions queue_region; /* queue region info */
> struct i40e_fc_conf fc_conf; /* Flow control conf */
> struct i40e_mirror_rule_list mirror_list; diff --git
> a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index
> d877ac250..d67cd648e 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev
> *dev,
> * function for RSS, or flowtype for queue region configuration.
> * For example:
> * pattern:
> - * Case 1: only ETH, indicate flowtype for queue region will be parsed.
> - * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
> - * Case 3: none, indicate RSS related will be parsed in action.
> - * Any pattern other the ETH or VLAN will be treated as invalid except END.
> + * Case 1: try to transform patterns to pctype. valid pctype will be
> + * used in parse action.
> + * Case 2: only ETH, indicate flowtype for queue region will be parsed.
> + * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
> * So, pattern choice is depened on the purpose of configuration of
> * that flow.
> * action:
> @@ -4438,15 +4438,66 @@ static int
> i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> const struct rte_flow_item *pattern,
> struct rte_flow_error *error,
> - uint8_t *action_flag,
> + struct i40e_rss_pattern_info *p_info,
> struct i40e_queue_regions *info) {
> const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
> const struct rte_flow_item *item = pattern;
> enum rte_flow_item_type item_type;
> -
> - if (item->type == RTE_FLOW_ITEM_TYPE_END)
> + struct rte_flow_item *items;
> + uint32_t item_num = 0; /* non-void item number of pattern*/
> + uint32_t i = 0;
> + static const struct {
> + enum rte_flow_item_type *item_array;
> + uint64_t type;
> + } i40e_rss_pctype_patterns[] = {
> + { pattern_fdir_ipv4,
> + ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_OTHER },
> + { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
> + { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
> + { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
> + { pattern_fdir_ipv6,
> + ETH_RSS_FRAG_IPV6 |
> ETH_RSS_NONFRAG_IPV6_OTHER },
> + { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
> + { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
> + { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
> + };
> +
> + p_info->types = I40E_RSS_TYPE_INVALID;
> +
> + if (item->type == RTE_FLOW_ITEM_TYPE_END) {
> + p_info->types = I40E_RSS_TYPE_NONE;
> return 0;
> + }
> +
> + /* convert flow to pctype */
> + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
> + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
> + item_num++;
> + i++;
> + }
> + item_num++;
> +
> + items = rte_zmalloc("i40e_pattern",
> + item_num * sizeof(struct rte_flow_item), 0);
> + if (!items) {
> + rte_flow_error_set(error, ENOMEM,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> + NULL, "No memory for PMD internal
> items.");
> + return -ENOMEM;
> + }
> +
> + i40e_pattern_skip_void_item(items, pattern);
> +
> + for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
> + if
> (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
> + items)) {
> + p_info->types = i40e_rss_pctype_patterns[i].type;
> + rte_free(items);
> + return 0;
> + }
> + }
> +
> + rte_free(items);
>
> for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> if (item->last) {
> @@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> item_type = item->type;
> switch (item_type) {
> case RTE_FLOW_ITEM_TYPE_ETH:
> - *action_flag = 1;
> + p_info->action_flag = 1;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> vlan_spec = item->spec;
> @@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> vlan_spec->tci) >> 13) & 0x7;
> info->region[0].user_priority_num =
> 1;
> info->queue_region_number = 1;
> - *action_flag = 0;
> + p_info->action_flag = 0;
> }
> }
> break;
> @@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused
> struct rte_eth_dev *dev,
> * max index should be 7, and so on. And also, queue index should be
> * continuous sequence and queue region index should be part of rss
> * queue index for this port.
> + * For hash params, the pctype in action and pattern must be same.
> + * Set queue index or symmetric hash enable must be with non-types.
> */
> static int
> i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
> const struct rte_flow_action *actions,
> struct rte_flow_error *error,
> - uint8_t action_flag,
> + struct i40e_rss_pattern_info p_info,
> struct i40e_queue_regions *conf_info,
> union i40e_filter_t *filter)
> {
> @@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> struct i40e_rte_flow_rss_conf *rss_config =
> &filter->rss_conf;
> struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> - uint16_t i, j, n, tmp;
> + uint16_t i, j, n, tmp, nb_types;
> uint32_t index = 0;
> uint64_t hf_bit = 1;
>
> @@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> return -rte_errno;
> }
>
> - if (action_flag) {
> + if (p_info.action_flag) {
> for (n = 0; n < 64; n++) {
> if (rss->types & (hf_bit << n)) {
> conf_info->region[0].hw_flowtype[0] = n;
> @@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> if (rss_config->queue_region_conf)
> return 0;
>
> - if (!rss || !rss->queue_num) {
> + if (!rss) {
> rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "no valid queues");
> + "no valid rules");
> return -rte_errno;
> }
>
> @@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> }
> }
>
> - if (rss_info->conf.queue_num) {
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ACTION,
> - act,
> - "rss only allow one valid rule");
> - return -rte_errno;
> + if (rss->queue_num && (p_info.types || rss->types))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype must be empty while configuring queue
> region");
> +
> + /* validate pattern and pctype */
> + if (!(rss->types & p_info.types) &&
> + (rss->types || p_info.types) && !rss->queue_num)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> +			act, "invalid pctype");
> +
> + nb_types = 0;
> + for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
> + if (rss->types & (hf_bit << n))
> + nb_types++;
> + if (nb_types > 1)
> + return rte_flow_error_set
> + (error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "multi pctype is not supported");
> }
>
> + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ
> &&
> + (p_info.types || rss->types || rss->queue_num))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype and queues must be empty while"
> + " setting SYMMETRIC hash function");
> +
> /* Parse RSS related parameters from configuration */
> - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
> + if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "non-default RSS hash functions are not
> supported");
> + "RSS hash functions are not supported");
> if (rss->level)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act, @@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev
> *dev, {
> int ret;
> struct i40e_queue_regions info;
> - uint8_t action_flag = 0;
> + struct i40e_rss_pattern_info p_info;
>
> memset(&info, 0, sizeof(struct i40e_queue_regions));
> + memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
>
> ret = i40e_flow_parse_rss_pattern(dev, pattern,
> - error, &action_flag, &info);
> + error, &p_info, &info);
> if (ret)
> return ret;
>
> ret = i40e_flow_parse_rss_action(dev, actions, error,
> - action_flag, &info, filter);
> + p_info, &info, filter);
> if (ret)
> return ret;
>
> @@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_flow_rss_filter *rss_filter;
> int ret;
>
> if (conf->queue_region_conf) {
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
> - conf->queue_region_conf = 0;
> } else {
> ret = i40e_config_rss_filter(pf, conf, 1);
> }
> - return ret;
> +
> + if (ret)
> + return ret;
> +
> + rss_filter = rte_zmalloc("i40e_flow_rss_filter",
> + sizeof(*rss_filter), 0);
> + if (rss_filter == NULL) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + rss_filter->rss_filter_info = *conf;
> +	/* the rule newly created is always valid
> +	 * an existing rule covered by the new rule will be set invalid
> + */
> + rss_filter->rss_filter_info.valid = true;
> +
> + TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
> +
> + return 0;
> }
>
> static int
> @@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_flow_rss_filter *rss_filter;
>
> - i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + if (conf->queue_region_conf)
> + i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + else
> + i40e_config_rss_filter(pf, conf, 0);
>
> - i40e_config_rss_filter(pf, conf, 0);
> + TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
> + if (!memcmp(&rss_filter->rss_filter_info, conf,
> + sizeof(struct rte_flow_action_rss))) {
> + TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
> + rte_free(rss_filter);
> + }
> + }
> return 0;
> }
>
> @@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
> &cons_filter.rss_conf);
> if (ret)
> goto free_flow;
> - flow->rule = &pf->rss_info;
> + flow->rule = TAILQ_LAST(&pf->rss_info_list,
> + i40e_rss_conf_list);
> break;
> default:
> goto free_flow;
> @@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
> break;
> case RTE_ETH_FILTER_HASH:
> ret = i40e_config_rss_filter_del(dev,
> - (struct i40e_rte_flow_rss_conf *)flow->rule);
> + &((struct i40e_flow_rss_filter *)flow->rule)-
> >rss_filter_info);
> break;
> default:
> PMD_DRV_LOG(WARNING, "Filter type (%d) not
> supported", @@ -5248,13 +5352,27 @@ static int
> i40e_flow_flush_rss_filter(struct rte_eth_dev *dev) {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct rte_flow *flow;
> + void *temp;
> int32_t ret = -EINVAL;
>
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
>
> - if (rss_info->conf.queue_num)
> - ret = i40e_config_rss_filter(pf, rss_info, FALSE);
> + /* Delete rss flows in flow list. */
> + TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
> + if (flow->filter_type != RTE_ETH_FILTER_HASH)
> + continue;
> +
> + if (flow->rule) {
> + ret = i40e_config_rss_filter_del(dev,
> + &((struct i40e_flow_rss_filter *)flow->rule)-
> >rss_filter_info);
> + if (ret)
> + return ret;
> + }
> + TAILQ_REMOVE(&pf->flow_list, flow, node);
> + rte_free(flow);
> + }
> +
> return ret;
> }
> --
> 2.17.1
Regards,
Bernard
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v6] net/i40e: implement hash function in rte flow API
[not found] ` <4a1f49493dc54ef0b3ae9c2bf7018f0d@intel.com>
@ 2020-04-08 8:24 ` Iremonger, Bernard
0 siblings, 0 replies; 26+ messages in thread
From: Iremonger, Bernard @ 2020-04-08 8:24 UTC (permalink / raw)
To: Di, ChenxuX, dev, Zhang, Qi Z; +Cc: Yang, Qiming, Xing, Beilei, Zhao1, Wei
Hi Chenxu,
<snip>
> > > -----Original Message-----
> > > From: dev <dev-bounces@dpdk.org> On Behalf Of Chenxu Di
> > > Sent: Monday, March 30, 2020 8:40 AM
> > > To: dev@dpdk.org
> > > Cc: Yang, Qiming <qiming.yang@intel.com>; Xing, Beilei
> > > <beilei.xing@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>; Di,
> > > ChenxuX <chenxux.di@intel.com>
> > > Subject: [dpdk-dev] [PATCH v6] net/i40e: implement hash function in
> > > rte flow API
> > >
> > > implement set hash global configurations, set symmetric hash enable
> > > and set hash input set in rte flow API.
> > >
> > > Signed-off-by: Chenxu Di <chenxux.di@intel.com>
>
> [snip]
>
> > > -struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> > >
> > > -if (!add) {
> > > -if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
> >
> >
> > The function i40e_action_rss_same() is no longer used.
> > Should it be removed from i40e_ethdev.c and i40e_ethdev.h?
> >
>
> It seems that no one uses the function.
> However, I checked the commit that introduced the function and found that
> the commit (ac8d22de23) is quite large.
> I am not sure whether it is OK to remove it.
Best to check with Qi or Beilei if it is ok to remove this function.
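For what it is worth, a quick check such as

    git grep -n i40e_action_rss_same drivers/net/i40e/

run on top of this series should show whether any caller is left; if only the
definition and the prototype in i40e_ethdev.h remain, removing both could be done as a
separate small cleanup patch.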
>
> >
> > > -i40e_pf_disable_rss(pf);
> > > -memset(rss_info, 0,
> > > -sizeof(struct i40e_rte_flow_rss_conf)); -return 0;
> > > +ret = 0;
> > > +
> > > +for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> > > i++) {
> > > +if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> > > + !(types & (1ull << i)))
> > > +continue;
> > > +
> > > +conf.op = RTE_ETH_INPUT_SET_SELECT; conf.flow_type = i;
> > > +conf.inset_size = 0; for (j = 0; j < RTE_DIM(inset_type_table);
> > > +j++) { if ((types &
> > > +inset_type_table[j].type) ==
> > > + inset_type_table[j].type) {
> > > +if (inset_type_table[j].field ==
> > > + RTE_ETH_INPUT_SET_UNKNOWN) {
> > > +return -EINVAL;
> > > +}
> > > +conf.field[conf.inset_size] =
> > > +inset_type_table[j].field;
> > > +conf.inset_size++;
> > > +}
> > > }
> > > +
> > > +if (conf.inset_size) {
> > > +ret = i40e_hash_filter_inset_select(hw, &conf); if (ret) return
> > > +ret; } }
> > > +
> > > +return ret;
> > > +}
> > > +
> > > +/* set existing rule invalid if it is covered */ static void
> > > +i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf, struct
> > > +i40e_rte_flow_rss_conf *conf) { struct i40e_flow_rss_filter
> > > +*rss_item; uint64_t rss_inset;
> > > +
> > > +/* to check pctype same need without input set bits */ rss_inset =
> > > +~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
> > ETH_RSS_L4_SRC_ONLY |
> > > +ETH_RSS_L4_DST_ONLY);
> > > +
> > > +TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) { if
> > > +(!rss_item->rss_filter_info.valid)
> > > +continue;
> > > +
> > > +/* config rss queue rule */
> > > +if (conf->conf.queue_num &&
> > > + rss_item->rss_filter_info.conf.queue_num)
> > > +rss_item->rss_filter_info.valid = false;
> > > +
> > > +/* config rss input set rule */
> > > +if (conf->conf.types &&
> > > + (rss_item->rss_filter_info.conf.types &
> > > + rss_inset) ==
> > > + (conf->conf.types & rss_inset)) rss_item->rss_filter_info.valid
> > > += false;
> > > +
> > > +/* config rss function symmetric rule */ if (conf->conf.func ==
> > > + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
> > > + rss_item->rss_filter_info.conf.func ==
> > > + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> > > +rss_item->rss_filter_info.valid = false;
> > > +
> > > +/* config rss function xor or toeplitz rule */ if
> > > +(rss_item->rss_filter_info.conf.func !=
> > > + RTE_ETH_HASH_FUNCTION_DEFAULT &&
> > > + conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT
> > > &&
> > > + (rss_item->rss_filter_info.conf.types & rss_inset) ==
> > > + (conf->conf.types & rss_inset)) rss_item->rss_filter_info.valid
> > > += false; } }
> > > +
> > > +/* config rss hash enable and set hash input set */ static int
> > > +i40e_config_hash_pctype_add(struct i40e_pf *pf, struct
> > > +i40e_rte_flow_rss_conf *conf, struct rte_eth_rss_conf *rss_conf) {
> > > +struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> > > +
> > > +if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask)) return
> > > +-ENOTSUP;
> > > +
> > > +/* Confirm hash input set */
> > > +if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
> > > return -EINVAL;
> > > +
> > > +if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> > > + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> > > +/* Random default keys */
> > > +static uint32_t rss_key_default[] = {0x6b793944, 0x23504cb5,
> > > +0x5bea75b6, 0x309f4f12, 0x3dc0a2b8, 0x024ddcdf, 0x339b8ca0,
> > > +0x4c4af64a, 0x34fac605, 0x55d85839, 0x3a58997d, 0x2ec938e1,
> > > +0x66031581};
> > > +
> > > +rss_conf->rss_key = (uint8_t *)rss_key_default;
> > > +rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1)
> > > *
> > > +sizeof(uint32_t);
> > > +PMD_DRV_LOG(INFO,
> > > +"No valid RSS key config for i40e, using default\n");
> > > }
> > >
> > > +rss_conf->rss_hf |= rss_info->conf.types; i40e_hw_rss_hash_set(pf,
> > > +rss_conf);
> > > +
> > > +i40e_config_rss_invalidate_previous_rule(pf, conf);
> > > +
> > > +return 0;
> > > +}
> > > +
> > > +/* config rss queue region */
> > > +static int
> > > +i40e_config_hash_queue_add(struct i40e_pf *pf, struct
> > > +i40e_rte_flow_rss_conf *conf) { struct i40e_hw *hw =
> > > +I40E_PF_TO_HW(pf); uint32_t i, lut; uint16_t j, num;
> > > +
> > > /* If both VMDQ and RSS enabled, not all of PF queues are configured.
> > > * It's necessary to calculate the actual PF queues that are configured.
> > > */
> > > @@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> > > return -ENOTSUP; }
> > >
> > > +lut = 0;
> > > /* Fill in redirection table */
> > > for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> > > if (j == num) @@ -13010,29 +13219,215 @@
> > > i40e_config_rss_filter(struct i40e_pf *pf, I40E_WRITE_REG(hw,
> > > I40E_PFQF_HLUT(i >> 2), lut); }
> > >
> > > -if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
> > > -i40e_pf_disable_rss(pf);
> > > +i40e_config_rss_invalidate_previous_rule(pf, conf);
> > > +
> > > +return 0;
> > > +}
> > > +
> > > +/* config rss hash function */
> > > +static int
> > > +i40e_config_hash_function_add(struct i40e_pf *pf, struct
> > > +i40e_rte_flow_rss_conf *conf) { struct i40e_hw *hw =
> > > +I40E_PF_TO_HW(pf); struct rte_eth_hash_global_conf g_cfg; uint64_t
> > > +rss_inset;
> > > +
> > > +if (conf->conf.func ==
> > > +RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
> > > +i40e_set_symmetric_hash_enable_per_port(hw, 1); } else { rss_inset
> > > += ~(ETH_RSS_L3_SRC_ONLY |
> > > ETH_RSS_L3_DST_ONLY |
> > > + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY); g_cfg.hash_func
> =
> > > +conf->conf.func; g_cfg.sym_hash_enable_mask[0] = conf->conf.types
> &
> > > rss_inset;
> > > +g_cfg.valid_bit_mask[0] = conf->conf.types & rss_inset;
> > > +i40e_set_hash_filter_global_config(hw, &g_cfg); }
> > > +
> > > +i40e_config_rss_invalidate_previous_rule(pf, conf);
> > > +
> > > +return 0;
> > > +}
> > > +
> > > +/* config rss hena disable and set hash input set to defalut */
> >
> > Typo: defalut should be default in above comment.
> >
>
> Yeah, I will fix it.
>
> > static
> > > +int i40e_config_hash_pctype_del(struct i40e_pf *pf, struct
> > > +i40e_rte_flow_rss_conf *conf) { struct i40e_hw *hw =
> > > +I40E_PF_TO_HW(pf); struct i40e_rte_flow_rss_conf *rss_info =
> > > +&pf->rss_info; struct rte_eth_rss_conf rss_conf = { .rss_key =
> > > +pf->rss_info.conf.key_len ?
> > > +(void *)(uintptr_t)conf->conf.key : NULL, .rss_key_len =
> > > +pf->rss_info.conf.key_len, }; uint32_t i;
> > > +
>
> [snip]
>
> > > --
> > > 2.17.1
> >
Regards,
Bernard
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v6] net/i40e: implement hash function in rte flow API
2020-03-30 7:40 ` [dpdk-dev] [PATCH v6] " Chenxu Di
2020-04-02 16:26 ` Iremonger, Bernard
@ 2020-04-10 1:52 ` Xing, Beilei
1 sibling, 0 replies; 26+ messages in thread
From: Xing, Beilei @ 2020-04-10 1:52 UTC (permalink / raw)
To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Zhao1, Wei
> -----Original Message-----
> From: Di, ChenxuX <chenxux.di@intel.com>
> Sent: Monday, March 30, 2020 3:40 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [PATCH v6] net/i40e: implement hash function in rte flow API
>
> implement set hash global configurations, set symmetric hash enable and
> set hash input set in rte flow API.
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> v6:
> -Modified the docs and various name.
> v5:
> -Modified the doc i40e.rst and various name.
> v4:
> -added check for l3 pctype with l4 input set.
> v3:
> -modified the doc i40e.rst
> v2:
> -canceled remove legacy filter functions.
> ---
> doc/guides/nics/i40e.rst | 14 +
> doc/guides/rel_notes/release_20_05.rst | 7 +
> drivers/net/i40e/i40e_ethdev.c | 471 +++++++++++++++++++++++--
> drivers/net/i40e/i40e_ethdev.h | 18 +
> drivers/net/i40e/i40e_flow.c | 186 ++++++++--
> 5 files changed, 624 insertions(+), 72 deletions(-)
>
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> d6e578eda..92590dadc 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -569,6 +569,20 @@ details please refer
> to :doc:`../testpmd_app_ug/index`.
> testpmd> set port (port_id) queue-region flush (on|off)
> testpmd> show port (port_id) queue-region
>
> +Generic flow API
> +~~~~~~~~~~~~~~~~~~~
> +Enable setting hash input set and enabling hash in the generic flow API.
> +Because the queue region configuration in i40e applies to all PCTYPEs,
> +the pctype must be empty while configuring a queue region.
> +The pctype in the pattern and in the actions must match.
> +For example, set the queue region configuration to queues 0, 1, 2, 3, enable
> +hash for ipv4-tcp and configure the input set with l3-src-only:
> +
> + testpmd> flow create 0 ingress pattern end actions rss types end \
> + queues 0 1 2 3 end / end
> + testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
> + actions rss types ipv4-tcp l3-src-only end queues end / end
> +
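A possible follow-up example for this section, covering the hash-function part of the
patch (assuming testpmd's rss action accepts the symmetric_toeplitz func token, as the
generic flow command syntax suggests):

    testpmd> flow create 0 ingress pattern end actions rss \
             func symmetric_toeplitz types end queues end / end

Destroying that rule (flow destroy 0 rule <id>) should switch the per-port symmetric
hash setting back to its default.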
> Limitations or Known issues
> ---------------------------
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 000bbf501..bf5f399fe 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -62,6 +62,13 @@ New Features
>
> * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
>
> +* **Updated Intel i40e driver.**
> +
> + Updated i40e PMD with new features and improvements, including:
> +
> + * Added support for RSS using L3/L4 source/destination only.
> + * Added support for setting hash function in rte flow.
> +
>
> Removed Items
> -------------
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 9539b0470..92c314e66 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void
> *init_params __rte_unused)
> /* initialize mirror rule list */
> TAILQ_INIT(&pf->mirror_list);
>
> + /* initialize rss rule list */
> + TAILQ_INIT(&pf->rss_info_list);
> +
> /* initialize Traffic Manager configuration */
> i40e_tm_conf_init(dev);
>
> @@ -12329,10 +12332,12 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
> static inline void i40e_rss_filter_restore(struct i40e_pf *pf) {
> - struct i40e_rte_flow_rss_conf *conf =
> - &pf->rss_info;
> - if (conf->conf.queue_num)
> - i40e_config_rss_filter(pf, conf, TRUE);
> + struct i40e_rss_conf_list *rss_list = &pf->rss_info_list;
> + struct i40e_flow_rss_filter *rss_item;
> +
> + TAILQ_FOREACH(rss_item, rss_list, next) {
> + i40e_config_rss_filter(pf, &rss_item->rss_filter_info, TRUE);
> + }
> }
>
> static void
> @@ -12956,31 +12961,234 @@ i40e_action_rss_same(const struct
> rte_flow_action_rss *comp,
> sizeof(*with->queue) * with->queue_num)); }
>
> -int
> -i40e_config_rss_filter(struct i40e_pf *pf,
> - struct i40e_rte_flow_rss_conf *conf, bool add)
> +/* config rss hash input set */
> +static int
> +i40e_config_rss_inputset(struct i40e_pf *pf, uint64_t types)
> {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> - uint32_t i, lut = 0;
> - uint16_t j, num;
> - struct rte_eth_rss_conf rss_conf = {
> - .rss_key = conf->conf.key_len ?
> - (void *)(uintptr_t)conf->conf.key : NULL,
> - .rss_key_len = conf->conf.key_len,
> - .rss_hf = conf->conf.types,
> + struct rte_eth_input_set_conf conf;
> + int i, ret;
> + uint32_t j;
> + static const struct {
> + uint64_t type;
> + enum rte_eth_input_set_field field;
> + } inset_type_table[] = {
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> };
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
>
> - if (!add) {
> - if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
> - i40e_pf_disable_rss(pf);
> - memset(rss_info, 0,
> - sizeof(struct i40e_rte_flow_rss_conf));
> - return 0;
> + ret = 0;
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(types & (1ull << i)))
> + continue;
> +
> + conf.op = RTE_ETH_INPUT_SET_SELECT;
Is conf.op still used?
> + conf.flow_type = i;
> + conf.inset_size = 0;
> + for (j = 0; j < RTE_DIM(inset_type_table); j++) {
> + if ((types & inset_type_table[j].type) ==
> + inset_type_table[j].type) {
> + if (inset_type_table[j].field ==
> + RTE_ETH_INPUT_SET_UNKNOWN) {
> + return -EINVAL;
> + }
> + conf.field[conf.inset_size] =
> + inset_type_table[j].field;
> + conf.inset_size++;
> + }
> }
> +
> + if (conf.inset_size) {
> + ret = i40e_hash_filter_inset_select(hw, &conf);
> + if (ret)
> + return ret;
> + }
> + }
> +
> + return ret;
> +}
> +
> +/* set existing rule invalid if it is covered */ static void
> +i40e_config_rss_invalidate_previous_rule(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_flow_rss_filter *rss_item;
> + uint64_t rss_inset;
> +
> + /* to check pctype same need without input set bits */
What does it mean?
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> +
> + TAILQ_FOREACH(rss_item, &pf->rss_info_list, next) {
> + if (!rss_item->rss_filter_info.valid)
> + continue;
> +
> + /* config rss queue rule */
> + if (conf->conf.queue_num &&
> + rss_item->rss_filter_info.conf.queue_num)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss input set rule */
It's better to check and rework all the comments.
> + if (conf->conf.types &&
> + (rss_item->rss_filter_info.conf.types &
> + rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function symmetric rule */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
> + rss_item->rss_filter_info.conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* config rss function xor or toeplitz rule */
> + if (rss_item->rss_filter_info.conf.func !=
> + RTE_ETH_HASH_FUNCTION_DEFAULT &&
> + conf->conf.func != RTE_ETH_HASH_FUNCTION_DEFAULT
> &&
> + (rss_item->rss_filter_info.conf.types & rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> + }
> +}
> +
> +/* config rss hash enable and set hash input set */ static int
> +i40e_config_hash_pctype_add(struct i40e_pf *pf,
Is it for configuring the RSS hash?
> + struct i40e_rte_flow_rss_conf *conf,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> +
> + if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
> + return -ENOTSUP;
> +
> + /* Confirm hash input set */
> + if (i40e_config_rss_inputset(pf, rss_conf->rss_hf))
> return -EINVAL;
> +
> + if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> + /* Random default keys */
> + static uint32_t rss_key_default[] = {0x6b793944,
> + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
> +
> + rss_conf->rss_key = (uint8_t *)rss_key_default;
> + rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> + sizeof(uint32_t);
> + PMD_DRV_LOG(INFO,
> + "No valid RSS key config for i40e, using default\n");
> }
>
> + rss_conf->rss_hf |= rss_info->conf.types;
> + i40e_hw_rss_hash_set(pf, rss_conf);
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss queue region */
> +static int
> +i40e_config_hash_queue_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i, lut;
> + uint16_t j, num;
> +
> /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> * It's necessary to calculate the actual PF queues that are configured.
> */
> @@ -13000,6 +13208,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> return -ENOTSUP;
> }
>
> + lut = 0;
> /* Fill in redirection table */
> for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> if (j == num)
> @@ -13010,29 +13219,215 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> }
>
> - if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
> - i40e_pf_disable_rss(pf);
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hash function */
> +static int
> +i40e_config_hash_function_add(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct rte_eth_hash_global_conf g_cfg;
> + uint64_t rss_inset;
> +
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ){
> + i40e_set_symmetric_hash_enable_per_port(hw, 1);
> + } else {
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY |
> ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> + g_cfg.hash_func = conf->conf.func;
> + g_cfg.sym_hash_enable_mask[0] = conf->conf.types &
> rss_inset;
> + g_cfg.valid_bit_mask[0] = conf->conf.types & rss_inset;
> + i40e_set_hash_filter_global_config(hw, &g_cfg);
> + }
> +
> + i40e_config_rss_invalidate_previous_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* config rss hena disable and set hash input set to defalut */ static
Disable RSS hash and configure default input set
> +int i40e_config_hash_pctype_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = pf->rss_info.conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = pf->rss_info.conf.key_len,
> + };
> + uint32_t i;
> +
> + /* set hash enable register to disable */
> + rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
> + i40e_hw_rss_hash_set(pf, &rss_conf);
> +
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(pf->adapter->flow_types_mask & (1ull << i)) ||
> + !(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash input set default */
> + struct rte_eth_input_set_conf input_conf = {
> + .op = RTE_ETH_INPUT_SET_SELECT,
> + .flow_type = i,
> + .inset_size = 1,
> + };
> + input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
> + i40e_hash_filter_inset_select(hw, &input_conf);
> + }
> +
> + rss_info->conf.types = rss_conf.rss_hf;
> +
> + return 0;
> +}
> +
> +/* config rss queue region to default */ static int
> +i40e_config_hash_queue_del(struct i40e_pf *pf) {
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + uint16_t queue[I40E_MAX_Q_PER_TC];
> + uint32_t num_rxq, i, lut;
> + uint16_t j, num;
> +
> + num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues,
> I40E_MAX_Q_PER_TC);
> +
> + for (j = 0; j < num_rxq; j++)
> + queue[j] = j;
> +
> + /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> + * It's necessary to calculate the actual PF queues that are configured.
> + */
> + if (pf->dev_data->dev_conf.rxmode.mq_mode &
> ETH_MQ_RX_VMDQ_FLAG)
> + num = i40e_pf_calc_configured_queues_num(pf);
> + else
> + num = pf->dev_data->nb_rx_queues;
> +
> + num = RTE_MIN(num, num_rxq);
> + PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are
> configured",
> + num);
> +
> + if (num == 0) {
> + PMD_DRV_LOG(ERR,
> + "No PF queues are configured to enable RSS for
> port %u",
> + pf->dev_data->port_id);
> + return -ENOTSUP;
> + }
> +
> + lut = 0;
> + /* Fill in redirection table */
> + for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> + if (j == num)
> + j = 0;
> + lut = (lut << 8) | (queue[j] & ((0x1 <<
> + hw->func_caps.rss_table_entry_width) - 1));
> + if ((i & 3) == 3)
> + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> + }
> +
> + rss_info->conf.queue_num = 0;
> + memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
> +
> + return 0;
> +}
> +
> +/* config rss hash function to default */ static int
> +i40e_config_hash_function_del(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i;
> + uint16_t j;
> +
> + /* set symmetric hash to default status */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
> + i40e_set_symmetric_hash_enable_per_port(hw, 0);
> +
> return 0;
> }
> - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
> - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> - /* Random default keys */
> - static uint32_t rss_key_default[] = {0x6b793944,
> - 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> - 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> - 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
>
> - rss_conf.rss_key = (uint8_t *)rss_key_default;
> - rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> - sizeof(uint32_t);
> - PMD_DRV_LOG(INFO,
> - "No valid RSS key config for i40e, using default\n");
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++)
> {
> + if (!(conf->conf.types & (1ull << i)))
> + continue;
> +
> + /* set hash global config disable */
> + for (j = I40E_FILTER_PCTYPE_INVALID + 1;
> + j < I40E_FILTER_PCTYPE_MAX; j++) {
> + if (pf->adapter->pctypes_tbl[i] &
> + (1ULL << j))
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(j), 0);
> + }
> }
>
> - i40e_hw_rss_hash_set(pf, &rss_conf);
> + return 0;
> +}
>
> - if (i40e_rss_conf_init(rss_info, &conf->conf))
> - return -EINVAL;
> +int
> +i40e_config_rss_filter(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf, bool add) {
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_flow_action_rss update_conf = rss_info->conf;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = conf->conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = conf->conf.key_len,
> + .rss_hf = conf->conf.types,
> + };
> + int ret = 0;
> +
> + if (add) {
> + if (conf->conf.queue_num) {
> + /* config rss queue region */
> + ret = i40e_config_hash_queue_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.queue_num = conf->conf.queue_num;
> + update_conf.queue = conf->conf.queue;
> + } else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT) {
> + /* config hash function */
> + ret = i40e_config_hash_function_add(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.func = conf->conf.func;
> + } else {
> + /* config hash enable and input set for each pctype
> */
> + ret = i40e_config_hash_pctype_add(pf, conf,
> &rss_conf);
> + if (ret)
> + return ret;
> +
> + update_conf.types = rss_conf.rss_hf;
> + update_conf.key = rss_conf.rss_key;
> + update_conf.key_len = rss_conf.rss_key_len;
> + }
> +
> + /* update rss info in pf */
> + if (i40e_rss_conf_init(rss_info, &update_conf))
> + return -EINVAL;
> + } else {
> + if (!conf->valid)
> + return 0;
> +
> + if (conf->conf.queue_num)
> + i40e_config_hash_queue_del(pf);
> + else if (conf->conf.func !=
> RTE_ETH_HASH_FUNCTION_DEFAULT)
> + i40e_config_hash_function_del(pf, conf);
> + else
> + i40e_config_hash_pctype_del(pf, conf);
> + }
>
> return 0;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
> index aac89de91..929e6b7c7 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx { #define
> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
> I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
>
> +#define I40E_RSS_TYPE_NONE 0ULL
> +#define I40E_RSS_TYPE_INVALID 1ULL
> +
> #define I40E_INSET_NONE 0x00000000000000000ULL
>
> /* bit0 ~ bit 7 */
> @@ -749,6 +752,11 @@ struct i40e_queue_regions {
> struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX +
> 1]; };
>
> +struct i40e_rss_pattern_info {
> + uint8_t action_flag;
> + uint64_t types;
> +};
> +
> /* Tunnel filter number HW supports */
> #define I40E_MAX_TUNNEL_FILTER_NUM 400
>
> @@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
> I40E_VFQF_HKEY_MAX_INDEX :
> I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t)]; /* Hash key. */
> uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use.
> */
> + bool valid; /* Check if it's valid */
> +};
> +
> +TAILQ_HEAD(i40e_rss_conf_list, i40e_flow_rss_filter);
> +
> +/* rss filter list structure */
> +struct i40e_flow_rss_filter {
> + TAILQ_ENTRY(i40e_flow_rss_filter) next;
> + struct i40e_rte_flow_rss_conf rss_filter_info;
> };
>
> struct i40e_vf_msg_cfg {
> @@ -1039,6 +1056,7 @@ struct i40e_pf {
> struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
> struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
> struct i40e_rte_flow_rss_conf rss_info; /* rss info */
> + struct i40e_rss_conf_list rss_info_list; /* rss rull list */
Rull->rule?
> struct i40e_queue_regions queue_region; /* queue region info */
> struct i40e_fc_conf fc_conf; /* Flow control conf */
> struct i40e_mirror_rule_list mirror_list; diff --git
> a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index
> d877ac250..d67cd648e 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -4424,10 +4424,10 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev
> *dev,
> * function for RSS, or flowtype for queue region configuration.
> * For example:
> * pattern:
> - * Case 1: only ETH, indicate flowtype for queue region will be parsed.
> - * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
> - * Case 3: none, indicate RSS related will be parsed in action.
> - * Any pattern other the ETH or VLAN will be treated as invalid except END.
> + * Case 1: try to transform patterns to pctype. valid pctype will be
> + * used in parse action.
> + * Case 2: only ETH, indicate flowtype for queue region will be parsed.
> + * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
> * So, pattern choice is depened on the purpose of configuration of
> * that flow.
> * action:
> @@ -4438,15 +4438,66 @@ static int
> i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> const struct rte_flow_item *pattern,
> struct rte_flow_error *error,
> - uint8_t *action_flag,
> + struct i40e_rss_pattern_info *p_info,
> struct i40e_queue_regions *info) {
> const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
> const struct rte_flow_item *item = pattern;
> enum rte_flow_item_type item_type;
> -
> - if (item->type == RTE_FLOW_ITEM_TYPE_END)
> + struct rte_flow_item *items;
> + uint32_t item_num = 0; /* non-void item number of pattern*/
> + uint32_t i = 0;
> + static const struct {
> + enum rte_flow_item_type *item_array;
> + uint64_t type;
> + } i40e_rss_pctype_patterns[] = {
> + { pattern_fdir_ipv4,
> + ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_OTHER },
> + { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
> + { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
> + { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
> + { pattern_fdir_ipv6,
> + ETH_RSS_FRAG_IPV6 |
> ETH_RSS_NONFRAG_IPV6_OTHER },
> + { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
> + { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
> + { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
> + };
> +
> + p_info->types = I40E_RSS_TYPE_INVALID;
> +
> + if (item->type == RTE_FLOW_ITEM_TYPE_END) {
> + p_info->types = I40E_RSS_TYPE_NONE;
> return 0;
> + }
> +
> + /* convert flow to pctype */
> + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
> + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
> + item_num++;
> + i++;
> + }
> + item_num++;
> +
> + items = rte_zmalloc("i40e_pattern",
> + item_num * sizeof(struct rte_flow_item), 0);
> + if (!items) {
> + rte_flow_error_set(error, ENOMEM,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> + NULL, "No memory for PMD internal
> items.");
> + return -ENOMEM;
> + }
> +
> + i40e_pattern_skip_void_item(items, pattern);
> +
> + for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
> + if
> (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
> + items)) {
> + p_info->types = i40e_rss_pctype_patterns[i].type;
> + rte_free(items);
> + return 0;
> + }
> + }
> +
> + rte_free(items);
>
> for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> if (item->last) {
> @@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> item_type = item->type;
> switch (item_type) {
> case RTE_FLOW_ITEM_TYPE_ETH:
> - *action_flag = 1;
> + p_info->action_flag = 1;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> vlan_spec = item->spec;
> @@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> vlan_spec->tci) >> 13) & 0x7;
> info->region[0].user_priority_num =
> 1;
> info->queue_region_number = 1;
> - *action_flag = 0;
> + p_info->action_flag = 0;
> }
> }
> break;
> @@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused
> struct rte_eth_dev *dev,
> * max index should be 7, and so on. And also, queue index should be
> * continuous sequence and queue region index should be part of rss
> * queue index for this port.
> + * For hash params, the pctype in action and pattern must be same.
> + * Set queue index or symmetric hash enable must be with non-types.
> */
> static int
> i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
> const struct rte_flow_action *actions,
> struct rte_flow_error *error,
> - uint8_t action_flag,
> + struct i40e_rss_pattern_info p_info,
> struct i40e_queue_regions *conf_info,
> union i40e_filter_t *filter)
> {
> @@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> struct i40e_rte_flow_rss_conf *rss_config =
> &filter->rss_conf;
> struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> - uint16_t i, j, n, tmp;
> + uint16_t i, j, n, tmp, nb_types;
> uint32_t index = 0;
> uint64_t hf_bit = 1;
>
> @@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> return -rte_errno;
> }
>
> - if (action_flag) {
> + if (p_info.action_flag) {
> for (n = 0; n < 64; n++) {
> if (rss->types & (hf_bit << n)) {
> conf_info->region[0].hw_flowtype[0] = n;
> @@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> if (rss_config->queue_region_conf)
> return 0;
>
> - if (!rss || !rss->queue_num) {
> + if (!rss) {
> rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "no valid queues");
> + "no valid rules");
> return -rte_errno;
> }
>
> @@ -4692,19 +4745,40 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> }
> }
>
> - if (rss_info->conf.queue_num) {
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ACTION,
> - act,
> - "rss only allow one valid rule");
> - return -rte_errno;
> + if (rss->queue_num && (p_info.types || rss->types))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype must be empty while configuring queue
> region");
> +
> + /* validate pattern and pctype */
> + if (!(rss->types & p_info.types) &&
> + (rss->types || p_info.types) && !rss->queue_num)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "invaild pctype");
> +
> + nb_types = 0;
> + for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
> + if (rss->types & (hf_bit << n))
> + nb_types++;
> + if (nb_types > 1)
> + return rte_flow_error_set
> + (error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "multi pctype is not supported");
> }
>
> + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ
> &&
> + (p_info.types || rss->types || rss->queue_num))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype and queues must be empty while"
> + " setting SYMMETRIC hash function");
> +
> /* Parse RSS related parameters from configuration */
> - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
> + if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "non-default RSS hash functions are not supported");
> + "RSS hash functions are not supported");
> if (rss->level)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act, @@ -4748,17 +4822,18 @@ i40e_parse_rss_filter(struct rte_eth_dev
> *dev, {
> int ret;
> struct i40e_queue_regions info;
> - uint8_t action_flag = 0;
> + struct i40e_rss_pattern_info p_info;
>
> memset(&info, 0, sizeof(struct i40e_queue_regions));
> + memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
>
> ret = i40e_flow_parse_rss_pattern(dev, pattern,
> - error, &action_flag, &info);
> + error, &p_info, &info);
> if (ret)
> return ret;
>
> ret = i40e_flow_parse_rss_action(dev, actions, error,
> - action_flag, &info, filter);
> + p_info, &info, filter);
> if (ret)
> return ret;
>
> @@ -4777,15 +4852,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_flow_rss_filter *rss_filter;
> int ret;
>
> if (conf->queue_region_conf) {
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
> - conf->queue_region_conf = 0;
> } else {
> ret = i40e_config_rss_filter(pf, conf, 1);
> }
> - return ret;
> +
> + if (ret)
> + return ret;
> +
> + rss_filter = rte_zmalloc("i40e_flow_rss_filter",
> + sizeof(*rss_filter), 0);
> + if (rss_filter == NULL) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + rss_filter->rss_filter_info = *conf;
> + /* the rull new created is always valid
> + * the existing rull covered by new rull will be set invalid
> + */
Rull-> rule?
> + rss_filter->rss_filter_info.valid = true;
> +
> + TAILQ_INSERT_TAIL(&pf->rss_info_list, rss_filter, next);
> +
> + return 0;
> }
>
> static int
> @@ -4794,10 +4887,20 @@ i40e_config_rss_filter_del(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_flow_rss_filter *rss_filter;
>
> - i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + if (conf->queue_region_conf)
> + i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + else
> + i40e_config_rss_filter(pf, conf, 0);
>
> - i40e_config_rss_filter(pf, conf, 0);
> + TAILQ_FOREACH(rss_filter, &pf->rss_info_list, next) {
Better to use TAILQ_FOREACH_SAFE here, since TAILQ_REMOVE is called inside the loop.
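As a generic illustration of the suggested pattern (not driver code; in DPDK the
macro normally comes from rte_tailq.h, it is re-defined here only so this
fragment builds against a plain sys/queue.h):

#include <stdlib.h>
#include <sys/queue.h>

#ifndef TAILQ_FOREACH_SAFE
#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
	for ((var) = TAILQ_FIRST((head)); \
	     (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
	     (var) = (tvar))
#endif

struct entry {
	int key;
	TAILQ_ENTRY(entry) next;
};
TAILQ_HEAD(entry_list, entry);

static void
drop_odd_keys(struct entry_list *list)
{
	struct entry *cur, *tmp;

	/* The _SAFE variant caches the next pointer in "tmp" before the loop
	 * body runs, so freeing "cur" here is fine; plain TAILQ_FOREACH would
	 * read cur->next after the entry has been freed.
	 */
	TAILQ_FOREACH_SAFE(cur, list, next, tmp) {
		if (cur->key & 1) {
			TAILQ_REMOVE(list, cur, next);
			free(cur);
		}
	}
}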
> + if (!memcmp(&rss_filter->rss_filter_info, conf,
> + sizeof(struct rte_flow_action_rss))) {
> + TAILQ_REMOVE(&pf->rss_info_list, rss_filter, next);
> + rte_free(rss_filter);
> + }
> + }
> return 0;
> }
>
> @@ -4940,7 +5043,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
> &cons_filter.rss_conf);
> if (ret)
> goto free_flow;
> - flow->rule = &pf->rss_info;
> + flow->rule = TAILQ_LAST(&pf->rss_info_list,
> + i40e_rss_conf_list);
> break;
> default:
> goto free_flow;
> @@ -4990,7 +5094,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
> break;
> case RTE_ETH_FILTER_HASH:
> ret = i40e_config_rss_filter_del(dev,
> - (struct i40e_rte_flow_rss_conf *)flow->rule);
> + &((struct i40e_flow_rss_filter *)flow->rule)-
> >rss_filter_info);
> break;
> default:
> PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
> @@ -5248,13 +5352,27 @@ static int i40e_flow_flush_rss_filter(struct
> rte_eth_dev *dev) {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct rte_flow *flow;
> + void *temp;
> int32_t ret = -EINVAL;
>
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
>
> - if (rss_info->conf.queue_num)
> - ret = i40e_config_rss_filter(pf, rss_info, FALSE);
> + /* Delete rss flows in flow list. */
> + TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
> + if (flow->filter_type != RTE_ETH_FILTER_HASH)
> + continue;
> +
> + if (flow->rule) {
> + ret = i40e_config_rss_filter_del(dev,
> + &((struct i40e_flow_rss_filter *)flow->rule)-
> >rss_filter_info);
> + if (ret)
> + return ret;
> + }
> + TAILQ_REMOVE(&pf->flow_list, flow, node);
> + rte_free(flow);
> + }
> +
> return ret;
> }
> --
> 2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCH v7] net/i40e: enable advanced RSS
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (9 preceding siblings ...)
2020-03-30 7:40 ` [dpdk-dev] [PATCH v6] " Chenxu Di
@ 2020-04-13 5:31 ` Chenxu Di
2020-04-14 6:36 ` [dpdk-dev] [PATCH v8] " Chenxu Di
2020-04-15 8:46 ` [dpdk-dev] [PATCH v9] net/i40e: enable hash configuration in RSS flow Chenxu Di
12 siblings, 0 replies; 26+ messages in thread
From: Chenxu Di @ 2020-04-13 5:31 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, beilei.xing, Chenxu Di
This patch supports:
- symmetric hash by rte_flow RSS action.
- input set change by rte_flow RSS action.
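For context, an application requests this through the standard rte_flow API.
A rough sketch (illustrative only; port_id, the error handling and the helper
name are placeholders) matching the symmetric-hash testpmd example documented
below:

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Illustrative sketch: ask for symmetric Toeplitz hashing of ipv4-tcp via
 * the rte_flow RSS action.  port_id is assumed to be a started i40e port;
 * the empty key/queue fields mean "keep the current RSS key and queue set".
 */
static struct rte_flow *
request_symmetric_ipv4_tcp(uint16_t port_id, struct rte_flow_error *err)
{
	static const struct rte_flow_attr attr = { .ingress = 1 };
	static const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	static const struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
		.types = ETH_RSS_NONFRAG_IPV4_TCP,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}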
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
v7:
-Updated code about symmetric hash function
---
doc/guides/nics/i40e.rst | 35 ++
doc/guides/rel_notes/release_20_05.rst | 7 +
drivers/net/i40e/i40e_ethdev.c | 509 ++++++++++++++++++++++---
drivers/net/i40e/i40e_ethdev.h | 22 +-
drivers/net/i40e/i40e_flow.c | 197 ++++++++--
5 files changed, 682 insertions(+), 88 deletions(-)
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..1f8fca285 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,41 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+
+- ``RSS Flow``
+
+ RSS Flow supports setting the hash input set and hash function, enabling
+ hash, and configuring the queue region.
+ For example:
+ Configure queue region as queue 0, 1, 2, 3.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues 0 1 2 3 end / end
+
+ Enable hash and set input set for ipv4-tcp.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queues end / end
+
+ Set symmetric hash enable for flow type ipv4-tcp.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end
+
+ Set hash function as simple xor.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues end func simple_xor / end
+
Limitations or Known issues
---------------------------
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf501..bf5f399fe 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,13 @@ New Features
* Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
+* **Updated Intel i40e driver.**
+
+ Updated i40e PMD with new features and improvements, including:
+
+ * Added support for RSS using L3/L4 source/destination only.
+ * Added support for setting hash function in rte flow.
+
Removed Items
-------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9539b0470..f33c23377 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize RSS rule list */
+ TAILQ_INIT(&pf->rss_config_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -12325,14 +12328,16 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
}
}
-/* Restore rss filter */
+/* Restore RSS filter */
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *list = &pf->rss_config_list;
+ struct i40e_rss_filter *filter;
+
+ TAILQ_FOREACH(filter, list, next) {
+ i40e_config_rss_filter(pf, &filter->rss_filter_info, TRUE);
+ }
}
static void
@@ -12942,45 +12947,274 @@ i40e_rss_conf_init(struct i40e_rte_flow_rss_conf *out,
return 0;
}
-int
-i40e_action_rss_same(const struct rte_flow_action_rss *comp,
- const struct rte_flow_action_rss *with)
+/* Set hash input set */
+static int
+i40e_rss_set_hash_inputset(struct i40e_pf *pf, uint64_t types)
{
- return (comp->func == with->func &&
- comp->level == with->level &&
- comp->types == with->types &&
- comp->key_len == with->key_len &&
- comp->queue_num == with->queue_num &&
- !memcmp(comp->key, with->key, with->key_len) &&
- !memcmp(comp->queue, with->queue,
- sizeof(*with->queue) * with->queue_num));
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_match_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ };
+
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1; i < RTE_ETH_FLOW_MAX; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ULL << i)) ||
+ !(types & (1ULL << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_match_table); j++) {
+ if ((types & inset_match_table[j].type) ==
+ inset_match_table[j].type) {
+ if (inset_match_table[j].field ==
+ RTE_ETH_INPUT_SET_UNKNOWN) {
+ return -EINVAL;
+ }
+ conf.field[conf.inset_size] =
+ inset_match_table[j].field;
+ conf.inset_size++;
+ }
+ }
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return ret;
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* Look up the conflicted rule then mark it as invalid */
+static void
+i40e_rss_mark_invalid_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rss_filter *rss_item;
+ uint64_t rss_inset;
+
+ /* Clear input set bits before comparing the pctype */
+ rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_config_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* Rule for queue region */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* Rule for hash input set */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ rss_inset) ==
+ (conf->conf.types & rss_inset))
+ rss_item->rss_filter_info.valid = false;
+
+ /* Rule for hash function */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* Configure RSS hash function */
+static int
+i40e_rss_config_hash_function(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
- };
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint64_t mask0 = conf->conf.types & pf->adapter->flow_types_mask;
+ uint32_t reg, i;
+ uint16_t j;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1;
+ mask0 && i < UINT64_BIT; i++) {
+ if (!(mask0 & (1UL << i)))
+ continue;
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] & (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j),
+ I40E_GLQF_HSYM_SYMH_ENA_MASK);
+ }
}
+
+ return 0;
+ }
+
+ /* Simple XOR */
+ reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
+ if (!(reg & I40E_GLQF_CTL_HTOEP_MASK)) {
+ PMD_DRV_LOG(DEBUG, "Hash function already set to Simple XOR");
+ goto out;
+ }
+ reg &= ~I40E_GLQF_CTL_HTOEP_MASK;
+
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
+
+out:
+ I40E_WRITE_FLUSH(hw);
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Set hash input set and enable hash */
+static int
+i40e_rss_enable_hash(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Confirm hash input set */
+ if (i40e_rss_set_hash_inputset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ i40e_rss_config_hash_function(pf, conf);
+
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS queue region */
+static int
+i40e_rss_config_queue_region(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13000,6 +13234,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -13010,29 +13245,203 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS hash function to default */
+static int
+i40e_rss_clear_hash_function(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint64_t mask0 = conf->conf.types & pf->adapter->flow_types_mask;
+ uint32_t i, reg;
+ uint16_t j;
+
+ if (conf->conf.func != RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1;
+ mask0 && i < UINT64_BIT; i++) {
+ if (mask0 & (1UL << i)) {
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j),
+ 0);
+ }
+ }
+ }
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ /* Simple XOR */
+ reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
+ if (reg & I40E_GLQF_CTL_HTOEP_MASK) {
+ PMD_DRV_LOG(DEBUG,
+ "Hash function already set to Toeplitz");
+ goto out;
}
+ reg |= I40E_GLQF_CTL_HTOEP_MASK;
+
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
+
+out:
+ I40E_WRITE_FLUSH(hw);
+
+ return 0;
+}
+/* Disable RSS hash and configure default input set */
+static int
+i40e_rss_disable_hash(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* Disable RSS hash */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
i40e_hw_rss_hash_set(pf, &rss_conf);
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ULL << i)) ||
+ !(conf->conf.types & (1ULL << i)))
+ continue;
+
+ /* Configure default input set */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ i40e_rss_clear_hash_function(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS queue region to default */
+static int
+i40e_rss_clear_queue_region(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* Configure RSS queue region */
+ ret = i40e_rss_config_queue_region(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
+ /* Configure hash function */
+ ret = i40e_rss_config_hash_function(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* Configure hash enable and input set */
+ ret = i40e_rss_enable_hash(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* Update RSS info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_rss_clear_queue_region(pf);
+ else if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
+ i40e_rss_clear_hash_function(pf, conf);
+ else
+ i40e_rss_disable_hash(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..e9d90fa35 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rss_filter);
+
+/* RSS filter list structure */
+struct i40e_rss_filter {
+ TAILQ_ENTRY(i40e_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1038,7 +1055,8 @@ struct i40e_pf {
struct i40e_fdir_info fdir; /* flow director info */
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
- struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rte_flow_rss_conf rss_info; /* RSS info */
+ struct i40e_rss_conf_list rss_config_list; /* RSS rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
@@ -1338,8 +1356,6 @@ int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
int i40e_rss_conf_init(struct i40e_rte_flow_rss_conf *out,
const struct rte_flow_action_rss *in);
-int i40e_action_rss_same(const struct rte_flow_action_rss *comp,
- const struct rte_flow_action_rss *with);
int i40e_config_rss_filter(struct i40e_pf *pf,
struct i40e_rte_flow_rss_conf *conf, bool add);
int i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..c7d92ab44 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,29 +4424,80 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform the pattern to a pctype. A valid pctype will
+ * be used when parsing the action.
+ * Case 2: only ETH, indicate flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
- * action RSS will be uaed to transmit valid parameter with
+ * action RSS will be used to transmit valid parameter with
* struct rte_flow_action_rss for all the 3 case.
*/
static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern */
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash params, the pctype in the action and the pattern must match.
+ * A queue index may only be set together with empty RSS types.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "invalid rule");
return -rte_errno;
}
@@ -4692,19 +4745,48 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "rss types must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pattern, type and queues must be empty while"
+ " setting hash function as simple_xor");
+
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ !(p_info.types && rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues can not be empty while"
+ " setting hash function as symmetric toeplitz");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX ||
+ rss->func == RTE_ETH_HASH_FUNCTION_TOEPLITZ)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4748,17 +4830,18 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
{
int ret;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ struct i40e_rss_pattern_info p_info;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4860,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* The newly created rule is always valid.
+ * Any existing rule covered by the new rule will be marked invalid.
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_config_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4895,21 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rss_filter *rss_filter;
+ void *temp;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH_SAFE(rss_filter, &pf->rss_config_list, next, temp) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_config_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5052,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_config_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5103,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5361,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCH v8] net/i40e: enable advanced RSS
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (10 preceding siblings ...)
2020-04-13 5:31 ` [dpdk-dev] [PATCH v7] net/i40e: enable advanced RSS Chenxu Di
@ 2020-04-14 6:36 ` Chenxu Di
2020-04-14 14:55 ` Iremonger, Bernard
2020-04-15 5:31 ` Xing, Beilei
2020-04-15 8:46 ` [dpdk-dev] [PATCH v9] net/i40e: enable hash configuration in RSS flow Chenxu Di
12 siblings, 2 replies; 26+ messages in thread
From: Chenxu Di @ 2020-04-14 6:36 UTC (permalink / raw)
To: dev; +Cc: beilei.xing, Chenxu Di
This patch supports:
- symmetric hash configuration
- input set configuration
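For the input set path, the documented ipv4-tcp l3-src-only case maps onto the
RSS action roughly as in the sketch below (illustrative only; the attr and
pattern mirror the eth/ipv4/tcp ingress items from the v7 sketch earlier in
this thread, and the helper name is a placeholder):

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Illustrative sketch: hash ipv4-tcp traffic on the L3 source address only.
 * Empty key/queue fields mean "keep the current RSS key and queue set".
 */
static struct rte_flow *
request_ipv4_tcp_l3_src_only(uint16_t port_id, struct rte_flow_error *err)
{
	static const struct rte_flow_attr attr = { .ingress = 1 };
	static const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	static const struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
		.types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}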
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
doc/guides/nics/i40e.rst | 35 ++
doc/guides/rel_notes/release_20_05.rst | 7 +
drivers/net/i40e/i40e_ethdev.c | 509 ++++++++++++++++++++++---
drivers/net/i40e/i40e_ethdev.h | 22 +-
drivers/net/i40e/i40e_flow.c | 199 ++++++++--
5 files changed, 683 insertions(+), 89 deletions(-)
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..1f8fca285 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -569,6 +569,41 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+
+- ``RSS Flow``
+
+ RSS Flow supports setting the hash input set and hash function, enabling
+ hash, and configuring the queue region.
+ For example:
+ Configure queue region as queue 0, 1, 2, 3.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues 0 1 2 3 end / end
+
+ Enable hash and set input set for ipv4-tcp.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queues end / end
+
+ Set symmetric hash enable for flow type ipv4-tcp.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end
+
+ Set hash function as simple xor.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues end func simple_xor / end
+
Limitations or Known issues
---------------------------
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf501..bf5f399fe 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,13 @@ New Features
* Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
+* **Updated Intel i40e driver.**
+
+ Updated i40e PMD with new features and improvements, including:
+
+ * Added support for RSS using L3/L4 source/destination only.
+ * Added support for setting hash function in rte flow.
+
Removed Items
-------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9539b0470..efc113842 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize RSS rule list */
+ TAILQ_INIT(&pf->rss_config_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -12325,14 +12328,16 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
}
}
-/* Restore rss filter */
+/* Restore RSS filter */
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *list = &pf->rss_config_list;
+ struct i40e_rss_filter *filter;
+
+ TAILQ_FOREACH(filter, list, next) {
+ i40e_config_rss_filter(pf, &filter->rss_filter_info, TRUE);
+ }
}
static void
@@ -12942,45 +12947,274 @@ i40e_rss_conf_init(struct i40e_rte_flow_rss_conf *out,
return 0;
}
-int
-i40e_action_rss_same(const struct rte_flow_action_rss *comp,
- const struct rte_flow_action_rss *with)
+/* Configure hash input set */
+static int
+i40e_rss_conf_hash_inset(struct i40e_pf *pf, uint64_t types)
{
- return (comp->func == with->func &&
- comp->level == with->level &&
- comp->types == with->types &&
- comp->key_len == with->key_len &&
- comp->queue_num == with->queue_num &&
- !memcmp(comp->key, with->key, with->key_len) &&
- !memcmp(comp->queue, with->queue,
- sizeof(*with->queue) * with->queue_num));
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct rte_eth_input_set_conf conf;
+ int i, ret;
+ uint32_t j;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_match_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ };
+
+ ret = 0;
+
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1; i < RTE_ETH_FLOW_MAX; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ULL << i)) ||
+ !(types & (1ULL << i)))
+ continue;
+
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.flow_type = i;
+ conf.inset_size = 0;
+ for (j = 0; j < RTE_DIM(inset_match_table); j++) {
+ if ((types & inset_match_table[j].type) ==
+ inset_match_table[j].type) {
+ if (inset_match_table[j].field ==
+ RTE_ETH_INPUT_SET_UNKNOWN) {
+ return -EINVAL;
+ }
+ conf.field[conf.inset_size] =
+ inset_match_table[j].field;
+ conf.inset_size++;
+ }
+ }
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return ret;
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* Look up the conflicted rule then mark it as invalid */
+static void
+i40e_rss_mark_invalid_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rss_filter *rss_item;
+ uint64_t rss_inset;
+
+ /* Clear input set bits before comparing the pctype */
+ rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ TAILQ_FOREACH(rss_item, &pf->rss_config_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ /* Rule for queue region */
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ /* Rule for hash input set */
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ rss_inset) ==
+ (conf->conf.types & rss_inset))
+ rss_item->rss_filter_info.valid = false;
+
+ /* Rule for hash function */
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* Configure RSS hash function */
+static int
+i40e_rss_config_hash_function(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
- };
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint64_t mask0 = conf->conf.types & pf->adapter->flow_types_mask;
+ uint32_t reg, i;
+ uint16_t j;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
- return 0;
+ if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1;
+ mask0 && i < UINT64_BIT; i++) {
+ if (!(mask0 & (1UL << i)))
+ continue;
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] & (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j),
+ I40E_GLQF_HSYM_SYMH_ENA_MASK);
+ }
}
+
+ return 0;
+ }
+
+ /* Simple XOR */
+ reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
+ if (!(reg & I40E_GLQF_CTL_HTOEP_MASK)) {
+ PMD_DRV_LOG(DEBUG, "Hash function already set to Simple XOR");
+ goto out;
+ }
+ reg &= ~I40E_GLQF_CTL_HTOEP_MASK;
+
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
+
+out:
+ I40E_WRITE_FLUSH(hw);
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Enable RSS according to the configuration */
+static int
+i40e_rss_enable_hash(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+
+ if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ /* Configure hash input set */
+ if (i40e_rss_conf_hash_inset(pf, rss_conf->rss_hf))
return -EINVAL;
+
+ if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf->rss_key = (uint8_t *)rss_key_default;
+ rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf->rss_hf |= rss_info->conf.types;
+ i40e_hw_rss_hash_set(pf, rss_conf);
+
+ if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ i40e_rss_config_hash_function(pf, conf);
+
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS queue region */
+static int
+i40e_rss_config_queue_region(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, lut;
+ uint16_t j, num;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13000,6 +13234,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
return -ENOTSUP;
}
+ lut = 0;
/* Fill in redirection table */
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -13010,29 +13245,203 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS hash function to default */
+static int
+i40e_rss_clear_hash_function(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint64_t mask0 = conf->conf.types & pf->adapter->flow_types_mask;
+ uint32_t i, reg;
+ uint16_t j;
+
+ if (conf->conf.func != RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1;
+ mask0 && i < UINT64_BIT; i++) {
+ if (mask0 & (1UL << i)) {
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] &
+ (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j),
+ 0);
+ }
+ }
+ }
+
return 0;
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ /* Simple XOR */
+ reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
+ if (reg & I40E_GLQF_CTL_HTOEP_MASK) {
+ PMD_DRV_LOG(DEBUG,
+ "Hash function already set to Toeplitz");
+ goto out;
}
+ reg |= I40E_GLQF_CTL_HTOEP_MASK;
+
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
+
+out:
+ I40E_WRITE_FLUSH(hw);
+
+ return 0;
+}
+/* Disable RSS hash and configure default input set */
+static int
+i40e_rss_disable_hash(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = pf->rss_info.conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = pf->rss_info.conf.key_len,
+ };
+ uint32_t i;
+
+ /* Disable RSS hash */
+ rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
i40e_hw_rss_hash_set(pf, &rss_conf);
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ULL << i)) ||
+ !(conf->conf.types & (1ULL << i)))
+ continue;
+
+ /* Configure default input set */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
+ }
+
+ rss_info->conf.types = rss_conf.rss_hf;
+
+ i40e_rss_clear_hash_function(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS queue region to default */
+static int
+i40e_rss_clear_queue_region(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i, lut;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ lut = 0;
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ struct rte_eth_rss_conf rss_conf = {
+ .rss_key = conf->conf.key_len ?
+ (void *)(uintptr_t)conf->conf.key : NULL,
+ .rss_key_len = conf->conf.key_len,
+ .rss_hf = conf->conf.types,
+ };
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* Configure RSS queue region */
+ ret = i40e_rss_config_queue_region(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
+ /* Configure hash function */
+ ret = i40e_rss_config_hash_function(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* Configure hash enable and input set */
+ ret = i40e_rss_enable_hash(pf, conf, &rss_conf);
+ if (ret)
+ return ret;
+
+ update_conf.types = rss_conf.rss_hf;
+ update_conf.key = rss_conf.rss_key;
+ update_conf.key_len = rss_conf.rss_key_len;
+ }
+
+ /* Update RSS info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_rss_clear_queue_region(pf);
+ else if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
+ i40e_rss_clear_hash_function(pf, conf);
+ else
+ i40e_rss_disable_hash(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index aac89de91..e9d90fa35 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -749,6 +752,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rss_filter);
+
+/* RSS filter list structure */
+struct i40e_rss_filter {
+ TAILQ_ENTRY(i40e_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1038,7 +1055,8 @@ struct i40e_pf {
struct i40e_fdir_info fdir; /* flow director info */
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
- struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rte_flow_rss_conf rss_info; /* RSS info */
+ struct i40e_rss_conf_list rss_config_list; /* RSS rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
@@ -1338,8 +1356,6 @@ int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
int i40e_rss_conf_init(struct i40e_rte_flow_rss_conf *out,
const struct rte_flow_action_rss *in);
-int i40e_action_rss_same(const struct rte_flow_action_rss *comp,
- const struct rte_flow_action_rss *with);
int i40e_config_rss_filter(struct i40e_pf *pf,
struct i40e_rte_flow_rss_conf *conf, bool add);
int i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index d877ac250..f4f3c3abd 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4424,29 +4424,80 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform the pattern into a pctype. A valid pctype will
+ * be used when parsing the action.
+ * Case 2: only ETH, indicate flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
- * action RSS will be uaed to transmit valid parameter with
+ * action RSS will be used to transmit valid parameter with
* struct rte_flow_action_rss for all the 3 case.
*/
static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* convert flow to pctype */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* max index should be 7, and so on. And also, queue index should be
* continuous sequence and queue region index should be part of rss
* queue index for this port.
+ * For hash parameters, the pctype in the action and in the pattern must match.
+ * A queue index may only be set when no RSS types are given.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "invalid rule");
return -rte_errno;
}
@@ -4692,19 +4745,48 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "rss types must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invaild pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pattern, type and queues must be empty while"
+ " setting hash function as simple_xor");
+
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ !(p_info.types && rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues can not be empty while"
+ " setting hash function as symmetric toeplitz");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX ||
+ rss->func == RTE_ETH_HASH_FUNCTION_TOEPLITZ)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4746,19 +4828,20 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
union i40e_filter_t *filter,
struct rte_flow_error *error)
{
- int ret;
+ struct i40e_rss_pattern_info p_info;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ int ret;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4777,15 +4860,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* the newly created rule is always valid;
+ * any existing rule covered by the new rule will be set invalid
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_config_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4794,10 +4895,21 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rss_filter *rss_filter;
+ void *temp;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH_SAFE(rss_filter, &pf->rss_config_list, next, temp) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_config_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4940,7 +5052,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_config_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -4990,7 +5103,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5248,13 +5361,27 @@ static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete rss flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
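
For readers approaching this from the application side, the following is a
minimal sketch of an rte_flow rule that exercises the new symmetric Toeplitz
path for ipv4-tcp, the C equivalent of the testpmd command documented in this
series. It is illustrative only and not part of the patch: the helper name is
made up here, and it assumes the 20.05-era ETH_RSS_* flow type macros and
rte_flow API.

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Create an RSS rule equivalent to:
     *   flow create <port> ingress pattern eth / ipv4 / tcp / end
     *     actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end
     * The pattern is matched against pattern_fdir_ipv4_tcp and mapped to
     * ETH_RSS_NONFRAG_IPV4_TCP by i40e_flow_parse_rss_pattern().
     */
    static struct rte_flow *
    create_symmetric_ipv4_tcp_rss(uint16_t port_id, struct rte_flow_error *error)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                    { .type = RTE_FLOW_ITEM_TYPE_TCP },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            /* One pctype and no queues: this takes the i40e_rss_enable_hash()
             * and i40e_rss_config_hash_function() path rather than the
             * queue-region path.
             */
            struct rte_flow_action_rss rss = {
                    .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
                    .types = ETH_RSS_NONFRAG_IPV4_TCP,
            };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            if (rte_flow_validate(port_id, &attr, pattern, actions, error) != 0)
                    return NULL;
            return rte_flow_create(port_id, &attr, pattern, actions, error);
    }

Destroying the returned flow goes through i40e_flow_destroy() and
i40e_config_rss_filter_del(), which with this patch also removes the matching
entry from pf->rss_config_list.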
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v8] net/i40e: enable advanced RSS
2020-04-14 6:36 ` [dpdk-dev] [PATCH v8] " Chenxu Di
@ 2020-04-14 14:55 ` Iremonger, Bernard
2020-04-15 5:31 ` Xing, Beilei
1 sibling, 0 replies; 26+ messages in thread
From: Iremonger, Bernard @ 2020-04-14 14:55 UTC (permalink / raw)
To: Di, ChenxuX, dev; +Cc: Xing, Beilei, Di, ChenxuX
Hi Chenxu,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenxu Di
> Sent: Tuesday, April 14, 2020 7:37 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [dpdk-dev] [PATCH v8] net/i40e: enable advanced RSS
>
> This patch supports:
>
> - symmetric hash configuration
> - Input set configuration
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> doc/guides/nics/i40e.rst | 35 ++
> doc/guides/rel_notes/release_20_05.rst | 7 +
> drivers/net/i40e/i40e_ethdev.c | 509 ++++++++++++++++++++++---
> drivers/net/i40e/i40e_ethdev.h | 22 +-
> drivers/net/i40e/i40e_flow.c | 199 ++++++++--
> 5 files changed, 683 insertions(+), 89 deletions(-)
>
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> d6e578eda..1f8fca285 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -569,6 +569,41 @@ details please refer to
> :doc:`../testpmd_app_ug/index`.
> testpmd> set port (port_id) queue-region flush (on|off)
> testpmd> show port (port_id) queue-region
>
> +Generic flow API
> +~~~~~~~~~~~~~~~~~~~
> +
> +- ``RSS Flow``
> +
> + RSS Flow supports to set hash input set, hash function, enable hash
> + and configure queue region.
> + For example:
> + Configure queue region as queue 0, 1, 2, 3.
> +
> + .. code-block:: console
> +
> + testpmd> flow create 0 ingress pattern end actions rss types end \
> + queues 0 1 2 3 end / end
> +
> + Enable hash and set input set for ipv4-tcp.
> +
> + .. code-block:: console
> +
> + testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
> + actions rss types ipv4-tcp l3-src-only end queues end / end
> +
/dpdk# make doc-guides-html
sphinx processing guides-html...
/root/dpdk_ipsec_gitlab_1/doc/guides/nics/i40e.rst:603: ERROR: Error in "code-block" directive:
maximum 1 argument(s) allowed, 19 supplied.
.. code-block:: console
testpmd> flow create 0 ingress pattern end actions rss types end \
queues end func simple_xor / end
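(The directive error above most likely comes from the missing blank line
between the ".. code-block:: console" line and the testpmd example in the
simple_xor snippet quoted below; reST then parses the example text as
directive arguments, hence "maximum 1 argument(s) allowed, 19 supplied".
The v9 patch later in this thread adds the blank line.)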
> + Set symmetric hash enable for flow type ipv4-tcp.
> +
> + .. code-block:: console
> +
> + testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
> + actions rss types ipv4-tcp end queues end func symmetric_toeplitz
> + / end
> +
> + Set hash function as simple xor.
> +
> + .. code-block:: console
> + testpmd> flow create 0 ingress pattern end actions rss types end \
> + queues end func simple_xor / end
> +
> Limitations or Known issues
> ---------------------------
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 000bbf501..bf5f399fe 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -62,6 +62,13 @@ New Features
>
> * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
>
> +* **Updated Intel i40e driver.**
> +
> + Updated i40e PMD with new features and improvements, including:
> +
> + * Added support for RSS using L3/L4 source/destination only.
> + * Added support for setting hash function in rte flow.
> +
>
> Removed Items
> -------------
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 9539b0470..efc113842 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -1656,6 +1656,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void
> *init_params __rte_unused)
> /* initialize mirror rule list */
> TAILQ_INIT(&pf->mirror_list);
>
> + /* initialize RSS rule list */
> + TAILQ_INIT(&pf->rss_config_list);
> +
> /* initialize Traffic Manager configuration */
> i40e_tm_conf_init(dev);
>
> @@ -12325,14 +12328,16 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
> }
> }
>
> -/* Restore rss filter */
> +/* Restore RSS filter */
> static inline void
> i40e_rss_filter_restore(struct i40e_pf *pf) {
> - struct i40e_rte_flow_rss_conf *conf =
> - &pf->rss_info;
> - if (conf->conf.queue_num)
> - i40e_config_rss_filter(pf, conf, TRUE);
> + struct i40e_rss_conf_list *list = &pf->rss_config_list;
> + struct i40e_rss_filter *filter;
> +
> + TAILQ_FOREACH(filter, list, next) {
> + i40e_config_rss_filter(pf, &filter->rss_filter_info, TRUE);
> + }
> }
>
> static void
> @@ -12942,45 +12947,274 @@ i40e_rss_conf_init(struct
> i40e_rte_flow_rss_conf *out,
> return 0;
> }
>
> -int
> -i40e_action_rss_same(const struct rte_flow_action_rss *comp,
> - const struct rte_flow_action_rss *with)
> +/* Configure hash input set */
> +static int
> +i40e_rss_conf_hash_inset(struct i40e_pf *pf, uint64_t types)
> {
> - return (comp->func == with->func &&
> - comp->level == with->level &&
> - comp->types == with->types &&
> - comp->key_len == with->key_len &&
> - comp->queue_num == with->queue_num &&
> - !memcmp(comp->key, with->key, with->key_len) &&
> - !memcmp(comp->queue, with->queue,
> - sizeof(*with->queue) * with->queue_num));
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct rte_eth_input_set_conf conf;
> + int i, ret;
> + uint32_t j;
> + static const struct {
> + uint64_t type;
> + enum rte_eth_input_set_field field;
> + } inset_match_table[] = {
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP4},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> +
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
> + {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
> +
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L3_SRC_ONLY,
> + RTE_ETH_INPUT_SET_L3_SRC_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L3_DST_ONLY,
> + RTE_ETH_INPUT_SET_L3_DST_IP6},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L4_SRC_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + {ETH_RSS_NONFRAG_IPV6_OTHER |
> ETH_RSS_L4_DST_ONLY,
> + RTE_ETH_INPUT_SET_UNKNOWN},
> + };
> +
> + ret = 0;
> +
> + for (i = RTE_ETH_FLOW_UNKNOWN + 1; i < RTE_ETH_FLOW_MAX;
> i++) {
> + if (!(pf->adapter->flow_types_mask & (1ULL << i)) ||
> + !(types & (1ULL << i)))
> + continue;
> +
> + conf.op = RTE_ETH_INPUT_SET_SELECT;
> + conf.flow_type = i;
> + conf.inset_size = 0;
> + for (j = 0; j < RTE_DIM(inset_match_table); j++) {
> + if ((types & inset_match_table[j].type) ==
> + inset_match_table[j].type) {
> + if (inset_match_table[j].field ==
> + RTE_ETH_INPUT_SET_UNKNOWN) {
> + return -EINVAL;
> + }
> + conf.field[conf.inset_size] =
> + inset_match_table[j].field;
> + conf.inset_size++;
> + }
> + }
> +
> + if (conf.inset_size) {
> + ret = i40e_hash_filter_inset_select(hw, &conf);
> + if (ret)
> + return ret;
> + }
> + }
> +
> + return ret;
> }
>
> -int
> -i40e_config_rss_filter(struct i40e_pf *pf,
> - struct i40e_rte_flow_rss_conf *conf, bool add)
> +/* Look up the conflicted rule then mark it as invalid */ static void
> +i40e_rss_mark_invalid_rule(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_rss_filter *rss_item;
> + uint64_t rss_inset;
> +
> + /* Clear input set bits before comparing the pctype */
> + rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
> + ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
> +
> + TAILQ_FOREACH(rss_item, &pf->rss_config_list, next) {
> + if (!rss_item->rss_filter_info.valid)
> + continue;
> +
> + /* Rule for queue region */
> + if (conf->conf.queue_num &&
> + rss_item->rss_filter_info.conf.queue_num)
> + rss_item->rss_filter_info.valid = false;
> +
> + /* Rule for hash input set */
> + if (conf->conf.types &&
> + (rss_item->rss_filter_info.conf.types &
> + rss_inset) ==
> + (conf->conf.types & rss_inset))
> + rss_item->rss_filter_info.valid = false;
> +
> + /* Rule for hash function */
> + if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
> + rss_item->rss_filter_info.conf.func ==
> + RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
> + rss_item->rss_filter_info.valid = false;
> + }
> +}
> +
> +/* Configure RSS hash function */
> +static int
> +i40e_rss_config_hash_function(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> {
> struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> - uint32_t i, lut = 0;
> - uint16_t j, num;
> - struct rte_eth_rss_conf rss_conf = {
> - .rss_key = conf->conf.key_len ?
> - (void *)(uintptr_t)conf->conf.key : NULL,
> - .rss_key_len = conf->conf.key_len,
> - .rss_hf = conf->conf.types,
> - };
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + uint64_t mask0 = conf->conf.types & pf->adapter-
> >flow_types_mask;
> + uint32_t reg, i;
> + uint16_t j;
>
> - if (!add) {
> - if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
> - i40e_pf_disable_rss(pf);
> - memset(rss_info, 0,
> - sizeof(struct i40e_rte_flow_rss_conf));
> - return 0;
> + if (conf->conf.func ==
> RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
> + i40e_set_symmetric_hash_enable_per_port(hw, 1);
> + for (i = RTE_ETH_FLOW_UNKNOWN + 1;
> + mask0 && i < UINT64_BIT; i++) {
> + if (!(mask0 & (1UL << i)))
> + continue;
> + for (j = I40E_FILTER_PCTYPE_INVALID + 1;
> + j < I40E_FILTER_PCTYPE_MAX; j++) {
> + if (pf->adapter->pctypes_tbl[i] & (1ULL << j))
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(j),
> +
> I40E_GLQF_HSYM_SYMH_ENA_MASK);
> + }
> }
> +
> + return 0;
> + }
> +
> + /* Simple XOR */
> + reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
> + if (!(reg & I40E_GLQF_CTL_HTOEP_MASK)) {
> + PMD_DRV_LOG(DEBUG, "Hash function already set to Simple
> XOR");
> + goto out;
> + }
> + reg &= ~I40E_GLQF_CTL_HTOEP_MASK;
> +
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
> +
> +out:
> + I40E_WRITE_FLUSH(hw);
> + i40e_rss_mark_invalid_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* Enable RSS according to the configuration */ static int
> +i40e_rss_enable_hash(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> +
> + if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
> + return -ENOTSUP;
> +
> + /* Configure hash input set */
> + if (i40e_rss_conf_hash_inset(pf, rss_conf->rss_hf))
> return -EINVAL;
> +
> + if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> + /* Random default keys */
> + static uint32_t rss_key_default[] = {0x6b793944,
> + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
> +
> + rss_conf->rss_key = (uint8_t *)rss_key_default;
> + rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1)
> *
> + sizeof(uint32_t);
> + PMD_DRV_LOG(INFO,
> + "No valid RSS key config for i40e, using default\n");
> }
>
> + rss_conf->rss_hf |= rss_info->conf.types;
> + i40e_hw_rss_hash_set(pf, rss_conf);
> +
> + if (conf->conf.func ==
> RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + i40e_rss_config_hash_function(pf, conf);
> +
> + i40e_rss_mark_invalid_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* Configure RSS queue region */
> +static int
> +i40e_rss_config_queue_region(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint32_t i, lut;
> + uint16_t j, num;
> +
> /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> * It's necessary to calculate the actual PF queues that are configured.
> */
> @@ -13000,6 +13234,7 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> return -ENOTSUP;
> }
>
> + lut = 0;
> /* Fill in redirection table */
> for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> if (j == num)
> @@ -13010,29 +13245,203 @@ i40e_config_rss_filter(struct i40e_pf *pf,
> I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> }
>
> - if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
> - i40e_pf_disable_rss(pf);
> + i40e_rss_mark_invalid_rule(pf, conf);
> +
> + return 0;
> +}
> +
> +/* Configure RSS hash function to default */ static int
> +i40e_rss_clear_hash_function(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + uint64_t mask0 = conf->conf.types & pf->adapter-
> >flow_types_mask;
> + uint32_t i, reg;
> + uint16_t j;
> +
> + if (conf->conf.func != RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
> + for (i = RTE_ETH_FLOW_UNKNOWN + 1;
> + mask0 && i < UINT64_BIT; i++) {
> + if (mask0 & (1UL << i)) {
> + for (j = I40E_FILTER_PCTYPE_INVALID + 1;
> + j < I40E_FILTER_PCTYPE_MAX; j++) {
> + if (pf->adapter->pctypes_tbl[i] &
> + (1ULL << j))
> + i40e_write_global_rx_ctl(hw,
> + I40E_GLQF_HSYM(j),
> + 0);
> + }
> + }
> + }
> +
> return 0;
> }
> - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
> - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> - /* Random default keys */
> - static uint32_t rss_key_default[] = {0x6b793944,
> - 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> - 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> - 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
>
> - rss_conf.rss_key = (uint8_t *)rss_key_default;
> - rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> - sizeof(uint32_t);
> - PMD_DRV_LOG(INFO,
> - "No valid RSS key config for i40e, using default\n");
> + /* Simple XOR */
> + reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
> + if (reg & I40E_GLQF_CTL_HTOEP_MASK) {
> + PMD_DRV_LOG(DEBUG,
> + "Hash function already set to Toeplitz");
> + goto out;
> }
> + reg |= I40E_GLQF_CTL_HTOEP_MASK;
> +
> + i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
> +
> +out:
> + I40E_WRITE_FLUSH(hw);
> +
> + return 0;
> +}
>
> +/* Disable RSS hash and configure default input set */ static int
> +i40e_rss_disable_hash(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf)
> +{
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = pf->rss_info.conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = pf->rss_info.conf.key_len,
> + };
> + uint32_t i;
> +
> + /* Disable RSS hash */
> + rss_conf.rss_hf = rss_info->conf.types & ~(conf->conf.types);
> i40e_hw_rss_hash_set(pf, &rss_conf);
>
> - if (i40e_rss_conf_init(rss_info, &conf->conf))
> - return -EINVAL;
> + for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD;
> i++) {
> + if (!(pf->adapter->flow_types_mask & (1ULL << i)) ||
> + !(conf->conf.types & (1ULL << i)))
> + continue;
> +
> + /* Configure default input set */
> + struct rte_eth_input_set_conf input_conf = {
> + .op = RTE_ETH_INPUT_SET_SELECT,
> + .flow_type = i,
> + .inset_size = 1,
> + };
> + input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
> + i40e_hash_filter_inset_select(hw, &input_conf);
> + }
> +
> + rss_info->conf.types = rss_conf.rss_hf;
> +
> + i40e_rss_clear_hash_function(pf, conf);
> +
> + return 0;
> +}
> +
> +/* Configure RSS queue region to default */ static int
> +i40e_rss_clear_queue_region(struct i40e_pf *pf) {
> + struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + uint16_t queue[I40E_MAX_Q_PER_TC];
> + uint32_t num_rxq, i, lut;
> + uint16_t j, num;
> +
> + num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues,
> I40E_MAX_Q_PER_TC);
> +
> + for (j = 0; j < num_rxq; j++)
> + queue[j] = j;
> +
> + /* If both VMDQ and RSS enabled, not all of PF queues are
> configured.
> + * It's necessary to calculate the actual PF queues that are configured.
> + */
> + if (pf->dev_data->dev_conf.rxmode.mq_mode &
> ETH_MQ_RX_VMDQ_FLAG)
> + num = i40e_pf_calc_configured_queues_num(pf);
> + else
> + num = pf->dev_data->nb_rx_queues;
> +
> + num = RTE_MIN(num, num_rxq);
> + PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are
> configured",
> + num);
> +
> + if (num == 0) {
> + PMD_DRV_LOG(ERR,
> + "No PF queues are configured to enable RSS for port
> %u",
> + pf->dev_data->port_id);
> + return -ENOTSUP;
> + }
> +
> + lut = 0;
> + /* Fill in redirection table */
> + for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
> + if (j == num)
> + j = 0;
> + lut = (lut << 8) | (queue[j] & ((0x1 <<
> + hw->func_caps.rss_table_entry_width) - 1));
> + if ((i & 3) == 3)
> + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
> + }
> +
> + rss_info->conf.queue_num = 0;
> + memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
> +
> + return 0;
> +}
> +
> +int
> +i40e_config_rss_filter(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf, bool add) {
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> + struct rte_flow_action_rss update_conf = rss_info->conf;
> + struct rte_eth_rss_conf rss_conf = {
> + .rss_key = conf->conf.key_len ?
> + (void *)(uintptr_t)conf->conf.key : NULL,
> + .rss_key_len = conf->conf.key_len,
> + .rss_hf = conf->conf.types,
> + };
> + int ret = 0;
> +
> + if (add) {
> + if (conf->conf.queue_num) {
> + /* Configure RSS queue region */
> + ret = i40e_rss_config_queue_region(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.queue_num = conf->conf.queue_num;
> + update_conf.queue = conf->conf.queue;
> + } else if (conf->conf.func ==
> + RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
> + /* Configure hash function */
> + ret = i40e_rss_config_hash_function(pf, conf);
> + if (ret)
> + return ret;
> +
> + update_conf.func = conf->conf.func;
> + } else {
> + /* Configure hash enable and input set */
> + ret = i40e_rss_enable_hash(pf, conf, &rss_conf);
> + if (ret)
> + return ret;
> +
> + update_conf.types = rss_conf.rss_hf;
> + update_conf.key = rss_conf.rss_key;
> + update_conf.key_len = rss_conf.rss_key_len;
> + }
> +
> + /* Update RSS info in pf */
> + if (i40e_rss_conf_init(rss_info, &update_conf))
> + return -EINVAL;
> + } else {
> + if (!conf->valid)
> + return 0;
> +
> + if (conf->conf.queue_num)
> + i40e_rss_clear_queue_region(pf);
> + else if (conf->conf.func ==
> RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
> + i40e_rss_clear_hash_function(pf, conf);
> + else
> + i40e_rss_disable_hash(pf, conf);
> + }
>
> return 0;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev.h
> b/drivers/net/i40e/i40e_ethdev.h index aac89de91..e9d90fa35 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -194,6 +194,9 @@ enum i40e_flxpld_layer_idx { #define
> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
> I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
>
> +#define I40E_RSS_TYPE_NONE 0ULL
> +#define I40E_RSS_TYPE_INVALID 1ULL
> +
> #define I40E_INSET_NONE 0x00000000000000000ULL
>
> /* bit0 ~ bit 7 */
> @@ -749,6 +752,11 @@ struct i40e_queue_regions {
> struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX +
> 1]; };
>
> +struct i40e_rss_pattern_info {
> + uint8_t action_flag;
> + uint64_t types;
> +};
> +
> /* Tunnel filter number HW supports */
> #define I40E_MAX_TUNNEL_FILTER_NUM 400
>
> @@ -968,6 +976,15 @@ struct i40e_rte_flow_rss_conf {
> I40E_VFQF_HKEY_MAX_INDEX :
> I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t)]; /* Hash key. */
> uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use.
> */
> + bool valid; /* Check if it's valid */
> +};
> +
> +TAILQ_HEAD(i40e_rss_conf_list, i40e_rss_filter);
> +
> +/* RSS filter list structure */
> +struct i40e_rss_filter {
> + TAILQ_ENTRY(i40e_rss_filter) next;
> + struct i40e_rte_flow_rss_conf rss_filter_info;
> };
>
> struct i40e_vf_msg_cfg {
> @@ -1038,7 +1055,8 @@ struct i40e_pf {
> struct i40e_fdir_info fdir; /* flow director info */
> struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
> struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
> - struct i40e_rte_flow_rss_conf rss_info; /* rss info */
> + struct i40e_rte_flow_rss_conf rss_info; /* RSS info */
> + struct i40e_rss_conf_list rss_config_list; /* RSS rule list */
> struct i40e_queue_regions queue_region; /* queue region info */
> struct i40e_fc_conf fc_conf; /* Flow control conf */
> struct i40e_mirror_rule_list mirror_list; @@ -1338,8 +1356,6 @@ int
> i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len); int
> i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size); int
> i40e_rss_conf_init(struct i40e_rte_flow_rss_conf *out,
> const struct rte_flow_action_rss *in); -int
> i40e_action_rss_same(const struct rte_flow_action_rss *comp,
> - const struct rte_flow_action_rss *with);
> int i40e_config_rss_filter(struct i40e_pf *pf,
> struct i40e_rte_flow_rss_conf *conf, bool add); int
> i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
> diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
> index d877ac250..f4f3c3abd 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -4424,29 +4424,80 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev
> *dev,
> * function for RSS, or flowtype for queue region configuration.
> * For example:
> * pattern:
> - * Case 1: only ETH, indicate flowtype for queue region will be parsed.
> - * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
> - * Case 3: none, indicate RSS related will be parsed in action.
> - * Any pattern other the ETH or VLAN will be treated as invalid except END.
> + * Case 1: try to transform patterns to pctype. valid pctype will be
> + * used in parse action.
> + * Case 2: only ETH, indicate flowtype for queue region will be parsed.
> + * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
> * So, pattern choice is depened on the purpose of configuration of
> * that flow.
> * action:
> - * action RSS will be uaed to transmit valid parameter with
> + * action RSS will be used to transmit valid parameter with
> * struct rte_flow_action_rss for all the 3 case.
> */
> static int
> i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
> const struct rte_flow_item *pattern,
> struct rte_flow_error *error,
> - uint8_t *action_flag,
> + struct i40e_rss_pattern_info *p_info,
> struct i40e_queue_regions *info) {
> const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
> const struct rte_flow_item *item = pattern;
> enum rte_flow_item_type item_type;
> -
> - if (item->type == RTE_FLOW_ITEM_TYPE_END)
> + struct rte_flow_item *items;
> + uint32_t item_num = 0; /* non-void item number of pattern*/
> + uint32_t i = 0;
> + static const struct {
> + enum rte_flow_item_type *item_array;
> + uint64_t type;
> + } i40e_rss_pctype_patterns[] = {
> + { pattern_fdir_ipv4,
> + ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_OTHER },
> + { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
> + { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
> + { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
> + { pattern_fdir_ipv6,
> + ETH_RSS_FRAG_IPV6 |
> ETH_RSS_NONFRAG_IPV6_OTHER },
> + { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
> + { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
> + { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
> + };
> +
> + p_info->types = I40E_RSS_TYPE_INVALID;
> +
> + if (item->type == RTE_FLOW_ITEM_TYPE_END) {
> + p_info->types = I40E_RSS_TYPE_NONE;
> return 0;
> + }
> +
> + /* convert flow to pctype */
> + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
> + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
> + item_num++;
> + i++;
> + }
> + item_num++;
> +
> + items = rte_zmalloc("i40e_pattern",
> + item_num * sizeof(struct rte_flow_item), 0);
> + if (!items) {
> + rte_flow_error_set(error, ENOMEM,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> + NULL, "No memory for PMD internal
> items.");
> + return -ENOMEM;
> + }
> +
> + i40e_pattern_skip_void_item(items, pattern);
> +
> + for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
> + if
> (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
> + items)) {
> + p_info->types = i40e_rss_pctype_patterns[i].type;
> + rte_free(items);
> + return 0;
> + }
> + }
> +
> + rte_free(items);
>
> for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> if (item->last) {
> @@ -4459,7 +4510,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> item_type = item->type;
> switch (item_type) {
> case RTE_FLOW_ITEM_TYPE_ETH:
> - *action_flag = 1;
> + p_info->action_flag = 1;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> vlan_spec = item->spec;
> @@ -4472,7 +4523,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct
> rte_eth_dev *dev,
> vlan_spec->tci) >> 13) & 0x7;
> info->region[0].user_priority_num =
> 1;
> info->queue_region_number = 1;
> - *action_flag = 0;
> + p_info->action_flag = 0;
> }
> }
> break;
> @@ -4500,12 +4551,14 @@ i40e_flow_parse_rss_pattern(__rte_unused
> struct rte_eth_dev *dev,
> * max index should be 7, and so on. And also, queue index should be
> * continuous sequence and queue region index should be part of rss
> * queue index for this port.
> + * For hash params, the pctype in action and pattern must be same.
> + * Set queue index must be with non-types.
> */
> static int
> i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
> const struct rte_flow_action *actions,
> struct rte_flow_error *error,
> - uint8_t action_flag,
> + struct i40e_rss_pattern_info p_info,
> struct i40e_queue_regions *conf_info,
> union i40e_filter_t *filter)
> {
> @@ -4516,7 +4569,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> struct i40e_rte_flow_rss_conf *rss_config =
> &filter->rss_conf;
> struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> - uint16_t i, j, n, tmp;
> + uint16_t i, j, n, tmp, nb_types;
> uint32_t index = 0;
> uint64_t hf_bit = 1;
>
> @@ -4535,7 +4588,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> return -rte_errno;
> }
>
> - if (action_flag) {
> + if (p_info.action_flag) {
> for (n = 0; n < 64; n++) {
> if (rss->types & (hf_bit << n)) {
> conf_info->region[0].hw_flowtype[0] = n;
> @@ -4674,11 +4727,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> if (rss_config->queue_region_conf)
> return 0;
>
> - if (!rss || !rss->queue_num) {
> + if (!rss) {
> rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "no valid queues");
> + "invalid rule");
> return -rte_errno;
> }
>
> @@ -4692,19 +4745,48 @@ i40e_flow_parse_rss_action(struct rte_eth_dev
> *dev,
> }
> }
>
> - if (rss_info->conf.queue_num) {
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ACTION,
> - act,
> - "rss only allow one valid rule");
> - return -rte_errno;
> + if (rss->queue_num && (p_info.types || rss->types))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "rss types must be empty while configuring queue
> region");
> +
> + /* validate pattern and pctype */
> + if (!(rss->types & p_info.types) &&
> + (rss->types || p_info.types) && !rss->queue_num)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "invaild pctype");
> +
> + nb_types = 0;
> + for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
> + if (rss->types & (hf_bit << n))
> + nb_types++;
> + if (nb_types > 1)
> + return rte_flow_error_set
> + (error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "multi pctype is not supported");
> }
>
> + if (rss->func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
> + (p_info.types || rss->types || rss->queue_num))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pattern, type and queues must be empty while"
> + " setting hash function as simple_xor");
> +
> + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ
> &&
> + !(p_info.types && rss->types))
> + return rte_flow_error_set
> + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> + "pctype and queues can not be empty while"
> + " setting hash function as symmetric toeplitz");
> +
> /* Parse RSS related parameters from configuration */
> - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
> + if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX ||
> + rss->func == RTE_ETH_HASH_FUNCTION_TOEPLITZ)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act,
> - "non-default RSS hash functions are not
> supported");
> + "RSS hash functions are not supported");
> if (rss->level)
> return rte_flow_error_set
> (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
> act, @@ -4746,19 +4828,20 @@ i40e_parse_rss_filter(struct rte_eth_dev
> *dev,
> union i40e_filter_t *filter,
> struct rte_flow_error *error)
> {
> - int ret;
> + struct i40e_rss_pattern_info p_info;
> struct i40e_queue_regions info;
> - uint8_t action_flag = 0;
> + int ret;
>
> memset(&info, 0, sizeof(struct i40e_queue_regions));
> + memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
>
> ret = i40e_flow_parse_rss_pattern(dev, pattern,
> - error, &action_flag, &info);
> + error, &p_info, &info);
> if (ret)
> return ret;
>
> ret = i40e_flow_parse_rss_action(dev, actions, error,
> - action_flag, &info, filter);
> + p_info, &info, filter);
> if (ret)
> return ret;
>
> @@ -4777,15 +4860,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rss_filter *rss_filter;
> int ret;
>
> if (conf->queue_region_conf) {
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
> - conf->queue_region_conf = 0;
> } else {
> ret = i40e_config_rss_filter(pf, conf, 1);
> }
> - return ret;
> +
> + if (ret)
> + return ret;
> +
> + rss_filter = rte_zmalloc("i40e_rss_filter",
> + sizeof(*rss_filter), 0);
> + if (rss_filter == NULL) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + rss_filter->rss_filter_info = *conf;
> + /* the rule new created is always valid
> + * the existing rule covered by new rule will be set invalid
> + */
> + rss_filter->rss_filter_info.valid = true;
> +
> + TAILQ_INSERT_TAIL(&pf->rss_config_list, rss_filter, next);
> +
> + return 0;
> }
>
> static int
> @@ -4794,10 +4895,21 @@ i40e_config_rss_filter_del(struct rte_eth_dev
> *dev, {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct i40e_rss_filter *rss_filter;
> + void *temp;
>
> - i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + if (conf->queue_region_conf)
> + i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
> + else
> + i40e_config_rss_filter(pf, conf, 0);
>
> - i40e_config_rss_filter(pf, conf, 0);
> + TAILQ_FOREACH_SAFE(rss_filter, &pf->rss_config_list, next, temp) {
> + if (!memcmp(&rss_filter->rss_filter_info, conf,
> + sizeof(struct rte_flow_action_rss))) {
> + TAILQ_REMOVE(&pf->rss_config_list, rss_filter,
> next);
> + rte_free(rss_filter);
> + }
> + }
> return 0;
> }
>
> @@ -4940,7 +5052,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
> &cons_filter.rss_conf);
> if (ret)
> goto free_flow;
> - flow->rule = &pf->rss_info;
> + flow->rule = TAILQ_LAST(&pf->rss_config_list,
> + i40e_rss_conf_list);
> break;
> default:
> goto free_flow;
> @@ -4990,7 +5103,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
> break;
> case RTE_ETH_FILTER_HASH:
> ret = i40e_config_rss_filter_del(dev,
> - (struct i40e_rte_flow_rss_conf *)flow->rule);
> + &((struct i40e_rss_filter *)flow->rule)-
> >rss_filter_info);
> break;
> default:
> PMD_DRV_LOG(WARNING, "Filter type (%d) not
> supported", @@ -5248,13 +5361,27 @@ static int
> i40e_flow_flush_rss_filter(struct rte_eth_dev *dev) {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> + struct rte_flow *flow;
> + void *temp;
> int32_t ret = -EINVAL;
>
> ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
>
> - if (rss_info->conf.queue_num)
> - ret = i40e_config_rss_filter(pf, rss_info, FALSE);
> + /* Delete rss flows in flow list. */
> + TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
> + if (flow->filter_type != RTE_ETH_FILTER_HASH)
> + continue;
> +
> + if (flow->rule) {
> + ret = i40e_config_rss_filter_del(dev,
> + &((struct i40e_rss_filter *)flow->rule)-
> >rss_filter_info);
> + if (ret)
> + return ret;
> + }
> + TAILQ_REMOVE(&pf->flow_list, flow, node);
> + rte_free(flow);
> + }
> +
> return ret;
> }
> --
> 2.17.1
Regards,
Bernard.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v8] net/i40e: enable advanced RSS
2020-04-14 6:36 ` [dpdk-dev] [PATCH v8] " Chenxu Di
2020-04-14 14:55 ` Iremonger, Bernard
@ 2020-04-15 5:31 ` Xing, Beilei
1 sibling, 0 replies; 26+ messages in thread
From: Xing, Beilei @ 2020-04-15 5:31 UTC (permalink / raw)
To: Di, ChenxuX, dev
> -----Original Message-----
> From: Di, ChenxuX <chenxux.di@intel.com>
> Sent: Tuesday, April 14, 2020 2:37 PM
> To: dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>; Di, ChenxuX <chenxux.di@intel.com>
> Subject: [PATCH v8] net/i40e: enable advanced RSS
>
> This patch supports:
>
> - symmetric hash configuration
> - Input set configuration
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
> doc/guides/nics/i40e.rst | 35 ++
> doc/guides/rel_notes/release_20_05.rst | 7 +
> drivers/net/i40e/i40e_ethdev.c | 509 ++++++++++++++++++++++---
> drivers/net/i40e/i40e_ethdev.h | 22 +-
> drivers/net/i40e/i40e_flow.c | 199 ++++++++--
> 5 files changed, 683 insertions(+), 89 deletions(-)
>
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> d6e578eda..1f8fca285 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -569,6 +569,41 @@ details please refer
> to :doc:`../testpmd_app_ug/index`.
> +
> +/* Enable RSS according to the configuration */ static int
> +i40e_rss_enable_hash(struct i40e_pf *pf,
> + struct i40e_rte_flow_rss_conf *conf,
> + struct rte_eth_rss_conf *rss_conf)
I think one parameter for the RSS configuration is enough; why are two parameters needed here?
Beilei
> +{
> + struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
> +
> + if (!(rss_conf->rss_hf & pf->adapter->flow_types_mask))
> + return -ENOTSUP;
> +
> + /* Configure hash input set */
> + if (i40e_rss_conf_hash_inset(pf, rss_conf->rss_hf))
> return -EINVAL;
> +
> + if (rss_conf->rss_key == NULL || rss_conf->rss_key_len <
> + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
> + /* Random default keys */
> + static uint32_t rss_key_default[] = {0x6b793944,
> + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
> + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
> + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
> +
> + rss_conf->rss_key = (uint8_t *)rss_key_default;
> + rss_conf->rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> + sizeof(uint32_t);
> + PMD_DRV_LOG(INFO,
> + "No valid RSS key config for i40e, using default\n");
> }
>
> + rss_conf->rss_hf |= rss_info->conf.types;
> + i40e_hw_rss_hash_set(pf, rss_conf);
> +
> + if (conf->conf.func ==
> RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + i40e_rss_config_hash_function(pf, conf);
> +
> + i40e_rss_mark_invalid_rule(pf, conf);
> +
> + return 0;
> +}
> +
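
As a concrete reading of the question above, a single-parameter variant could
be shaped roughly as follows, deriving the rte_eth_rss_conf locally from
conf->conf in the same way i40e_config_rss_filter() already does. This is only
an illustration of the review comment, not code from v8 or v9.

    static int
    i40e_rss_enable_hash(struct i40e_pf *pf,
                         struct i40e_rte_flow_rss_conf *conf)
    {
            /* Derive the legacy rte_eth_rss_conf from the rte_flow config */
            struct rte_eth_rss_conf rss_conf = {
                    .rss_key = conf->conf.key_len ?
                            (void *)(uintptr_t)conf->conf.key : NULL,
                    .rss_key_len = conf->conf.key_len,
                    .rss_hf = conf->conf.types,
            };

            if (!(rss_conf.rss_hf & pf->adapter->flow_types_mask))
                    return -ENOTSUP;

            /* Configure hash input set */
            if (i40e_rss_conf_hash_inset(pf, rss_conf.rss_hf))
                    return -EINVAL;

            /* ... remainder identical to the v8 body, operating on &rss_conf ... */
            return 0;
    }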
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCH v9] net/i40e: enable hash configuration in RSS flow
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
` (11 preceding siblings ...)
2020-04-14 6:36 ` [dpdk-dev] [PATCH v8] " Chenxu Di
@ 2020-04-15 8:46 ` Chenxu Di
2020-04-15 9:52 ` Xing, Beilei
12 siblings, 1 reply; 26+ messages in thread
From: Chenxu Di @ 2020-04-15 8:46 UTC (permalink / raw)
To: dev; +Cc: Yang Qiming, beilei.xing, Chenxu Di
This patch supports:
- Symmetric hash configuration
- Hash input set configuration
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
v9:
-Updated the hash enable code.
---
doc/guides/nics/i40e.rst | 37 ++
doc/guides/rel_notes/release_20_05.rst | 2 +
drivers/net/i40e/i40e_ethdev.c | 528 ++++++++++++++++++++++---
drivers/net/i40e/i40e_ethdev.h | 22 +-
drivers/net/i40e/i40e_flow.c | 209 ++++++++--
5 files changed, 703 insertions(+), 95 deletions(-)
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index d6e578eda..f72a54ba6 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -44,6 +44,7 @@ Features of the i40e PMD are:
- Queue region configuration
- Virtual Function Port Representors
- Malicious Device Drive event catch and notify
+- Generic flow API
Prerequisites
-------------
@@ -569,6 +570,42 @@ details please refer to :doc:`../testpmd_app_ug/index`.
testpmd> set port (port_id) queue-region flush (on|off)
testpmd> show port (port_id) queue-region
+Generic flow API
+~~~~~~~~~~~~~~~~~~~
+
+- ``RSS Flow``
+
+ RSS Flow supports setting the hash input set and hash function, enabling hash,
+ and configuring the queue region.
+ For example:
+ Configure queue region as queue 0, 1, 2, 3.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues 0 1 2 3 end / end
+
+ Enable hash and set input set for ipv4-tcp.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp l3-src-only end queues end / end
+
+ Set symmetric hash enable for flow type ipv4-tcp.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
+ actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end
+
+ Set hash function as simple xor.
+
+ .. code-block:: console
+
+ testpmd> flow create 0 ingress pattern end actions rss types end \
+ queues end func simple_xor / end
+
Limitations or Known issues
---------------------------
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index e32746291..76e8dfb7d 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -83,6 +83,8 @@ New Features
Updated i40e PMD with new features and improvements, including:
* Enable MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
+ * Added support for RSS using L3/L4 source/destination only.
+ * Added support for setting hash function in rte flow.
* **Updated Amazon ena driver.**
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 60de68fd8..5ac3714be 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1657,6 +1657,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize RSS rule list */
+ TAILQ_INIT(&pf->rss_config_list);
+
/* initialize Traffic Manager configuration */
i40e_tm_conf_init(dev);
@@ -1676,7 +1679,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* initialize queue region configuration */
i40e_init_queue_region_conf(dev);
- /* initialize rss configuration from rte_flow */
+ /* initialize RSS configuration from rte_flow */
memset(&pf->rss_info, 0,
sizeof(struct i40e_rte_flow_rss_conf));
@@ -12329,14 +12332,16 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
}
}
-/* Restore rss filter */
+/* Restore RSS filter */
static inline void
i40e_rss_filter_restore(struct i40e_pf *pf)
{
- struct i40e_rte_flow_rss_conf *conf =
- &pf->rss_info;
- if (conf->conf.queue_num)
- i40e_config_rss_filter(pf, conf, TRUE);
+ struct i40e_rss_conf_list *list = &pf->rss_config_list;
+ struct i40e_rss_filter *filter;
+
+ TAILQ_FOREACH(filter, list, next) {
+ i40e_config_rss_filter(pf, &filter->rss_filter_info, TRUE);
+ }
}
static void
@@ -12946,45 +12951,300 @@ i40e_rss_conf_init(struct i40e_rte_flow_rss_conf *out,
return 0;
}
-int
-i40e_action_rss_same(const struct rte_flow_action_rss *comp,
- const struct rte_flow_action_rss *with)
+/* Write HENA register to enable hash */
+static int
+i40e_rss_hash_set(struct i40e_pf *pf, struct i40e_rte_flow_rss_conf *rss_conf)
{
- return (comp->func == with->func &&
- comp->level == with->level &&
- comp->types == with->types &&
- comp->key_len == with->key_len &&
- comp->queue_num == with->queue_num &&
- !memcmp(comp->key, with->key, with->key_len) &&
- !memcmp(comp->queue, with->queue,
- sizeof(*with->queue) * with->queue_num));
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint8_t *key = (void *)(uintptr_t)rss_conf->conf.key;
+ uint64_t hena;
+ int ret;
+
+ ret = i40e_set_rss_key(pf->main_vsi, key,
+ rss_conf->conf.key_len);
+ if (ret)
+ return ret;
+
+ hena = i40e_config_hena(pf->adapter, rss_conf->conf.types);
+ i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), (uint32_t)hena);
+ i40e_write_rx_ctl(hw, I40E_PFQF_HENA(1), (uint32_t)(hena >> 32));
+ I40E_WRITE_FLUSH(hw);
+
+ return 0;
}
-int
-i40e_config_rss_filter(struct i40e_pf *pf,
- struct i40e_rte_flow_rss_conf *conf, bool add)
+/* Configure hash input set */
+static int
+i40e_rss_conf_hash_inset(struct i40e_pf *pf, uint64_t types)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint32_t i, lut = 0;
- uint16_t j, num;
- struct rte_eth_rss_conf rss_conf = {
- .rss_key = conf->conf.key_len ?
- (void *)(uintptr_t)conf->conf.key : NULL,
- .rss_key_len = conf->conf.key_len,
- .rss_hf = conf->conf.types,
+ struct rte_eth_input_set_conf conf;
+ uint64_t mask0;
+ int ret = 0;
+ uint32_t j;
+ int i;
+ static const struct {
+ uint64_t type;
+ enum rte_eth_input_set_field field;
+ } inset_match_table[] = {
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV4 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP4},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_FRAG_IPV6 | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_TCP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_UDP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT},
+ {ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT},
+
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ RTE_ETH_INPUT_SET_L3_SRC_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ RTE_ETH_INPUT_SET_L3_DST_IP6},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_SRC_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
+ {ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L4_DST_ONLY,
+ RTE_ETH_INPUT_SET_UNKNOWN},
};
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- if (!add) {
- if (i40e_action_rss_same(&rss_info->conf, &conf->conf)) {
- i40e_pf_disable_rss(pf);
- memset(rss_info, 0,
- sizeof(struct i40e_rte_flow_rss_conf));
+ mask0 = types & pf->adapter->flow_types_mask;
+ conf.op = RTE_ETH_INPUT_SET_SELECT;
+ conf.inset_size = 0;
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1; i < RTE_ETH_FLOW_MAX; i++) {
+ if (mask0 & (1ULL << i)) {
+ conf.flow_type = i;
+ break;
+ }
+ }
+
+ for (j = 0; j < RTE_DIM(inset_match_table); j++) {
+ if ((types & inset_match_table[j].type) ==
+ inset_match_table[j].type) {
+ if (inset_match_table[j].field ==
+ RTE_ETH_INPUT_SET_UNKNOWN)
+ return -EINVAL;
+
+ conf.field[conf.inset_size] =
+ inset_match_table[j].field;
+ conf.inset_size++;
+ }
+ }
+
+ if (conf.inset_size) {
+ ret = i40e_hash_filter_inset_select(hw, &conf);
+ if (ret)
+ return ret;
+ }
+
+ return ret;
+}
+
+/* Look up the conflicted rule then mark it as invalid */
+static void
+i40e_rss_mark_invalid_rule(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rss_filter *rss_item;
+ uint64_t rss_inset;
+
+ /* Clear input set bits before comparing the pctype */
+ rss_inset = ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
+ ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+
+ /* Look up the conflicted rule then mark it as invalid */
+ TAILQ_FOREACH(rss_item, &pf->rss_config_list, next) {
+ if (!rss_item->rss_filter_info.valid)
+ continue;
+
+ if (conf->conf.queue_num &&
+ rss_item->rss_filter_info.conf.queue_num)
+ rss_item->rss_filter_info.valid = false;
+
+ if (conf->conf.types &&
+ (rss_item->rss_filter_info.conf.types &
+ rss_inset) ==
+ (conf->conf.types & rss_inset))
+ rss_item->rss_filter_info.valid = false;
+
+ if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
+ rss_item->rss_filter_info.conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
+ rss_item->rss_filter_info.valid = false;
+ }
+}
+
+/* Configure RSS hash function */
+static int
+i40e_rss_config_hash_function(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t reg, i;
+ uint64_t mask0;
+ uint16_t j;
+
+ if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
+ reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
+ if (!(reg & I40E_GLQF_CTL_HTOEP_MASK)) {
+ PMD_DRV_LOG(DEBUG, "Hash function already set to Simple XOR");
+ I40E_WRITE_FLUSH(hw);
+ i40e_rss_mark_invalid_rule(pf, conf);
+
return 0;
}
+ reg &= ~I40E_GLQF_CTL_HTOEP_MASK;
+
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
+ I40E_WRITE_FLUSH(hw);
+ i40e_rss_mark_invalid_rule(pf, conf);
+ } else if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ mask0 = conf->conf.types & pf->adapter->flow_types_mask;
+
+ i40e_set_symmetric_hash_enable_per_port(hw, 1);
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1; i < UINT64_BIT; i++) {
+ if (mask0 & (1UL << i))
+ break;
+ }
+
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] & (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j),
+ I40E_GLQF_HSYM_SYMH_ENA_MASK);
+ }
+ }
+
+ return 0;
+}
+
+/* Enable RSS according to the configuration */
+static int
+i40e_rss_enable_hash(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct i40e_rte_flow_rss_conf rss_conf;
+
+ if (!(conf->conf.types & pf->adapter->flow_types_mask))
+ return -ENOTSUP;
+
+ memset(&rss_conf, 0, sizeof(rss_conf));
+ rte_memcpy(&rss_conf, conf, sizeof(rss_conf));
+
+ /* Configure hash input set */
+ if (i40e_rss_conf_hash_inset(pf, conf->conf.types))
return -EINVAL;
+
+ if (rss_conf.conf.key == NULL || rss_conf.conf.key_len <
+ (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
+ /* Random default keys */
+ static uint32_t rss_key_default[] = {0x6b793944,
+ 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
+ 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
+ 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+
+ rss_conf.conf.key = (uint8_t *)rss_key_default;
+ rss_conf.conf.key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t);
+ PMD_DRV_LOG(INFO,
+ "No valid RSS key config for i40e, using default\n");
}
+ rss_conf.conf.types |= rss_info->conf.types;
+ i40e_rss_hash_set(pf, &rss_conf);
+
+ if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
+ i40e_rss_config_hash_function(pf, conf);
+
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS queue region */
+static int
+i40e_rss_config_queue_region(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t lut = 0;
+ uint16_t j, num;
+ uint32_t i;
+
/* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calculate the actual PF queues that are configured.
*/
@@ -13014,29 +13274,195 @@ i40e_config_rss_filter(struct i40e_pf *pf,
I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
}
- if ((rss_conf.rss_hf & pf->adapter->flow_types_mask) == 0) {
- i40e_pf_disable_rss(pf);
- return 0;
+ i40e_rss_mark_invalid_rule(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS hash function to default */
+static int
+i40e_rss_clear_hash_function(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ uint32_t i, reg;
+ uint64_t mask0;
+ uint16_t j;
+
+ if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
+ reg = i40e_read_rx_ctl(hw, I40E_GLQF_CTL);
+ if (reg & I40E_GLQF_CTL_HTOEP_MASK) {
+ PMD_DRV_LOG(DEBUG,
+ "Hash function already set to Toeplitz");
+ I40E_WRITE_FLUSH(hw);
+
+ return 0;
+ }
+ reg |= I40E_GLQF_CTL_HTOEP_MASK;
+
+ i40e_write_global_rx_ctl(hw, I40E_GLQF_CTL, reg);
+ I40E_WRITE_FLUSH(hw);
+ } else if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ mask0 = conf->conf.types & pf->adapter->flow_types_mask;
+
+ for (i = RTE_ETH_FLOW_UNKNOWN + 1; i < UINT64_BIT; i++) {
+ if (mask0 & (1UL << i))
+ break;
+ }
+
+ for (j = I40E_FILTER_PCTYPE_INVALID + 1;
+ j < I40E_FILTER_PCTYPE_MAX; j++) {
+ if (pf->adapter->pctypes_tbl[i] & (1ULL << j))
+ i40e_write_global_rx_ctl(hw,
+ I40E_GLQF_HSYM(j),
+ 0);
+ }
}
- if (rss_conf.rss_key == NULL || rss_conf.rss_key_len <
- (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) {
- /* Random default keys */
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
- rss_conf.rss_key = (uint8_t *)rss_key_default;
- rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- PMD_DRV_LOG(INFO,
- "No valid RSS key config for i40e, using default\n");
+ return 0;
+}
+
+/* Disable RSS hash and configure default input set */
+static int
+i40e_rss_disable_hash(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf rss_conf;
+ uint32_t i;
+
+ memset(&rss_conf, 0, sizeof(rss_conf));
+ rte_memcpy(&rss_conf, conf, sizeof(rss_conf));
+
+ /* Disable RSS hash */
+ rss_conf.conf.types = rss_info->conf.types & ~(conf->conf.types);
+ i40e_rss_hash_set(pf, &rss_conf);
+
+ for (i = RTE_ETH_FLOW_IPV4; i <= RTE_ETH_FLOW_L2_PAYLOAD; i++) {
+ if (!(pf->adapter->flow_types_mask & (1ULL << i)) ||
+ !(conf->conf.types & (1ULL << i)))
+ continue;
+
+ /* Configure default input set */
+ struct rte_eth_input_set_conf input_conf = {
+ .op = RTE_ETH_INPUT_SET_SELECT,
+ .flow_type = i,
+ .inset_size = 1,
+ };
+ input_conf.field[0] = RTE_ETH_INPUT_SET_DEFAULT;
+ i40e_hash_filter_inset_select(hw, &input_conf);
}
- i40e_hw_rss_hash_set(pf, &rss_conf);
+ rss_info->conf.types = rss_conf.conf.types;
- if (i40e_rss_conf_init(rss_info, &conf->conf))
- return -EINVAL;
+ i40e_rss_clear_hash_function(pf, conf);
+
+ return 0;
+}
+
+/* Configure RSS queue region to default */
+static int
+i40e_rss_clear_queue_region(struct i40e_pf *pf)
+{
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ uint16_t queue[I40E_MAX_Q_PER_TC];
+ uint32_t num_rxq, i;
+ uint32_t lut = 0;
+ uint16_t j, num;
+
+ num_rxq = RTE_MIN(pf->dev_data->nb_rx_queues, I40E_MAX_Q_PER_TC);
+
+ for (j = 0; j < num_rxq; j++)
+ queue[j] = j;
+
+ /* If both VMDQ and RSS enabled, not all of PF queues are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ num = i40e_pf_calc_configured_queues_num(pf);
+ else
+ num = pf->dev_data->nb_rx_queues;
+
+ num = RTE_MIN(num, num_rxq);
+ PMD_DRV_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR,
+ "No PF queues are configured to enable RSS for port %u",
+ pf->dev_data->port_id);
+ return -ENOTSUP;
+ }
+
+ /* Fill in redirection table */
+ for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
+ if (j == num)
+ j = 0;
+ lut = (lut << 8) | (queue[j] & ((0x1 <<
+ hw->func_caps.rss_table_entry_width) - 1));
+ if ((i & 3) == 3)
+ I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut);
+ }
+
+ rss_info->conf.queue_num = 0;
+ memset(&rss_info->conf.queue, 0, sizeof(uint16_t));
+
+ return 0;
+}
+
+int
+i40e_config_rss_filter(struct i40e_pf *pf,
+ struct i40e_rte_flow_rss_conf *conf, bool add)
+{
+ struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
+ struct rte_flow_action_rss update_conf = rss_info->conf;
+ int ret = 0;
+
+ if (add) {
+ if (conf->conf.queue_num) {
+ /* Configure RSS queue region */
+ ret = i40e_rss_config_queue_region(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.queue_num = conf->conf.queue_num;
+ update_conf.queue = conf->conf.queue;
+ } else if (conf->conf.func ==
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) {
+ /* Configure hash function */
+ ret = i40e_rss_config_hash_function(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.func = conf->conf.func;
+ } else {
+ /* Configure hash enable and input set */
+ ret = i40e_rss_enable_hash(pf, conf);
+ if (ret)
+ return ret;
+
+ update_conf.types |= conf->conf.types;
+ update_conf.key = conf->conf.key;
+ update_conf.key_len = conf->conf.key_len;
+ }
+
+ /* Update RSS info in pf */
+ if (i40e_rss_conf_init(rss_info, &update_conf))
+ return -EINVAL;
+ } else {
+ if (!conf->valid)
+ return 0;
+
+ if (conf->conf.queue_num)
+ i40e_rss_clear_queue_region(pf);
+ else if (conf->conf.func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)
+ i40e_rss_clear_hash_function(pf, conf);
+ else
+ i40e_rss_disable_hash(pf, conf);
+ }
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 461959e08..e5d0ce53f 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -192,6 +192,9 @@ enum i40e_flxpld_layer_idx {
#define I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_MASK \
I40E_MASK(0xFFFF, I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT)
+#define I40E_RSS_TYPE_NONE 0ULL
+#define I40E_RSS_TYPE_INVALID 1ULL
+
#define I40E_INSET_NONE 0x00000000000000000ULL
/* bit0 ~ bit 7 */
@@ -754,6 +757,11 @@ struct i40e_queue_regions {
struct i40e_queue_region_info region[I40E_REGION_MAX_INDEX + 1];
};
+struct i40e_rss_pattern_info {
+ uint8_t action_flag;
+ uint64_t types;
+};
+
/* Tunnel filter number HW supports */
#define I40E_MAX_TUNNEL_FILTER_NUM 400
@@ -973,6 +981,15 @@ struct i40e_rte_flow_rss_conf {
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /* Hash key. */
uint16_t queue[I40E_MAX_Q_PER_TC]; /**< Queues indices to use. */
+ bool valid; /* Check if it's valid */
+};
+
+TAILQ_HEAD(i40e_rss_conf_list, i40e_rss_filter);
+
+/* RSS filter list structure */
+struct i40e_rss_filter {
+ TAILQ_ENTRY(i40e_rss_filter) next;
+ struct i40e_rte_flow_rss_conf rss_filter_info;
};
struct i40e_vf_msg_cfg {
@@ -1043,7 +1060,8 @@ struct i40e_pf {
struct i40e_fdir_info fdir; /* flow director info */
struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */
struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */
- struct i40e_rte_flow_rss_conf rss_info; /* rss info */
+ struct i40e_rte_flow_rss_conf rss_info; /* RSS info */
+ struct i40e_rss_conf_list rss_config_list; /* RSS rule list */
struct i40e_queue_regions queue_region; /* queue region info */
struct i40e_fc_conf fc_conf; /* Flow control conf */
struct i40e_mirror_rule_list mirror_list;
@@ -1343,8 +1361,6 @@ int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
int i40e_rss_conf_init(struct i40e_rte_flow_rss_conf *out,
const struct rte_flow_action_rss *in);
-int i40e_action_rss_same(const struct rte_flow_action_rss *comp,
- const struct rte_flow_action_rss *with);
int i40e_config_rss_filter(struct i40e_pf *pf,
struct i40e_rte_flow_rss_conf *conf, bool add);
int i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index b1861a7db..1d0eaf61c 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4475,29 +4475,80 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
* function for RSS, or flowtype for queue region configuration.
* For example:
* pattern:
- * Case 1: only ETH, indicate flowtype for queue region will be parsed.
- * Case 2: only VLAN, indicate user_priority for queue region will be parsed.
- * Case 3: none, indicate RSS related will be parsed in action.
- * Any pattern other the ETH or VLAN will be treated as invalid except END.
+ * Case 1: try to transform patterns to pctype. A valid pctype will be
+ * used when parsing the action.
+ * Case 2: only ETH, indicate flowtype for queue region will be parsed.
+ * Case 3: only VLAN, indicate user_priority for queue region will be parsed.
* So, pattern choice is depened on the purpose of configuration of
* that flow.
* action:
- * action RSS will be uaed to transmit valid parameter with
+ * action RSS will be used to transmit valid parameter with
* struct rte_flow_action_rss for all the 3 case.
*/
static int
i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
- uint8_t *action_flag,
+ struct i40e_rss_pattern_info *p_info,
struct i40e_queue_regions *info)
{
const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
const struct rte_flow_item *item = pattern;
enum rte_flow_item_type item_type;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
+ struct rte_flow_item *items;
+ uint32_t item_num = 0; /* non-void item number of pattern*/
+ uint32_t i = 0;
+ static const struct {
+ enum rte_flow_item_type *item_array;
+ uint64_t type;
+ } i40e_rss_pctype_patterns[] = {
+ { pattern_fdir_ipv4,
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER },
+ { pattern_fdir_ipv4_tcp, ETH_RSS_NONFRAG_IPV4_TCP },
+ { pattern_fdir_ipv4_udp, ETH_RSS_NONFRAG_IPV4_UDP },
+ { pattern_fdir_ipv4_sctp, ETH_RSS_NONFRAG_IPV4_SCTP },
+ { pattern_fdir_ipv6,
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER },
+ { pattern_fdir_ipv6_tcp, ETH_RSS_NONFRAG_IPV6_TCP },
+ { pattern_fdir_ipv6_udp, ETH_RSS_NONFRAG_IPV6_UDP },
+ { pattern_fdir_ipv6_sctp, ETH_RSS_NONFRAG_IPV6_SCTP },
+ };
+
+ p_info->types = I40E_RSS_TYPE_INVALID;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_END) {
+ p_info->types = I40E_RSS_TYPE_NONE;
return 0;
+ }
+
+ /* Convert pattern to RSS offload types */
+ while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
+ if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
+ item_num++;
+ i++;
+ }
+ item_num++;
+
+ items = rte_zmalloc("i40e_pattern",
+ item_num * sizeof(struct rte_flow_item), 0);
+ if (!items) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "No memory for PMD internal items.");
+ return -ENOMEM;
+ }
+
+ i40e_pattern_skip_void_item(items, pattern);
+
+ for (i = 0; i < RTE_DIM(i40e_rss_pctype_patterns); i++) {
+ if (i40e_match_pattern(i40e_rss_pctype_patterns[i].item_array,
+ items)) {
+ p_info->types = i40e_rss_pctype_patterns[i].type;
+ rte_free(items);
+ return 0;
+ }
+ }
+
+ rte_free(items);
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -4510,7 +4561,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
item_type = item->type;
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- *action_flag = 1;
+ p_info->action_flag = 1;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
@@ -4523,7 +4574,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
vlan_spec->tci) >> 13) & 0x7;
info->region[0].user_priority_num = 1;
info->queue_region_number = 1;
- *action_flag = 0;
+ p_info->action_flag = 0;
}
}
break;
@@ -4540,7 +4591,7 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
}
/**
- * This function is used to parse rss queue index, total queue number and
+ * This function is used to parse RSS queue index, total queue number and
* hash functions, If the purpose of this configuration is for queue region
* configuration, it will set queue_region_conf flag to TRUE, else to FALSE.
* In queue region configuration, it also need to parse hardware flowtype
@@ -4549,14 +4600,16 @@ i40e_flow_parse_rss_pattern(__rte_unused struct rte_eth_dev *dev,
* be any of the following values: 1, 2, 4, 8, 16, 32, 64, the
* hw_flowtype or PCTYPE max index should be 63, the user priority
* max index should be 7, and so on. And also, queue index should be
- * continuous sequence and queue region index should be part of rss
+ * continuous sequence and queue region index should be part of RSS
* queue index for this port.
+ * For hash parameters, the pctype in the action and the pattern must match.
+ * Queue indices may only be set when no RSS types are given.
*/
static int
i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
- uint8_t action_flag,
+ struct i40e_rss_pattern_info p_info,
struct i40e_queue_regions *conf_info,
union i40e_filter_t *filter)
{
@@ -4567,7 +4620,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
struct i40e_rte_flow_rss_conf *rss_config =
&filter->rss_conf;
struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
- uint16_t i, j, n, tmp;
+ uint16_t i, j, n, tmp, nb_types;
uint32_t index = 0;
uint64_t hf_bit = 1;
@@ -4575,7 +4628,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
rss = act->conf;
/**
- * rss only supports forwarding,
+ * RSS only supports forwarding,
* check if the first not void action is RSS.
*/
if (act->type != RTE_FLOW_ACTION_TYPE_RSS) {
@@ -4586,7 +4639,7 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
return -rte_errno;
}
- if (action_flag) {
+ if (p_info.action_flag) {
for (n = 0; n < 64; n++) {
if (rss->types & (hf_bit << n)) {
conf_info->region[0].hw_flowtype[0] = n;
@@ -4725,11 +4778,11 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
if (rss_config->queue_region_conf)
return 0;
- if (!rss || !rss->queue_num) {
+ if (!rss) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION,
act,
- "no valid queues");
+ "invalid rule");
return -rte_errno;
}
@@ -4743,19 +4796,48 @@ i40e_flow_parse_rss_action(struct rte_eth_dev *dev,
}
}
- if (rss_info->conf.queue_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "rss only allow one valid rule");
- return -rte_errno;
+ if (rss->queue_num && (p_info.types || rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "RSS types must be empty while configuring queue region");
+
+ /* validate pattern and pctype */
+ if (!(rss->types & p_info.types) &&
+ (rss->types || p_info.types) && !rss->queue_num)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "invalid pctype");
+
+ nb_types = 0;
+ for (n = 0; n < RTE_ETH_FLOW_MAX; n++) {
+ if (rss->types & (hf_bit << n))
+ nb_types++;
+ if (nb_types > 1)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi pctype is not supported");
}
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
+ (p_info.types || rss->types || rss->queue_num))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pattern, type and queues must be empty while"
+ " setting hash function as simple_xor");
+
+ if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ &&
+ !(p_info.types && rss->types))
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "pctype and queues can not be empty while"
+ " setting hash function as symmetric toeplitz");
+
/* Parse RSS related parameters from configuration */
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ if (rss->func >= RTE_ETH_HASH_FUNCTION_MAX ||
+ rss->func == RTE_ETH_HASH_FUNCTION_TOEPLITZ)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions are not supported");
+ "RSS hash functions are not supported");
if (rss->level)
return rte_flow_error_set
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
@@ -4797,19 +4879,20 @@ i40e_parse_rss_filter(struct rte_eth_dev *dev,
union i40e_filter_t *filter,
struct rte_flow_error *error)
{
- int ret;
+ struct i40e_rss_pattern_info p_info;
struct i40e_queue_regions info;
- uint8_t action_flag = 0;
+ int ret;
memset(&info, 0, sizeof(struct i40e_queue_regions));
+ memset(&p_info, 0, sizeof(struct i40e_rss_pattern_info));
ret = i40e_flow_parse_rss_pattern(dev, pattern,
- error, &action_flag, &info);
+ error, &p_info, &info);
if (ret)
return ret;
ret = i40e_flow_parse_rss_action(dev, actions, error,
- action_flag, &info, filter);
+ p_info, &info, filter);
if (ret)
return ret;
@@ -4828,15 +4911,33 @@ i40e_config_rss_filter_set(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rss_filter *rss_filter;
int ret;
if (conf->queue_region_conf) {
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 1);
- conf->queue_region_conf = 0;
} else {
ret = i40e_config_rss_filter(pf, conf, 1);
}
- return ret;
+
+ if (ret)
+ return ret;
+
+ rss_filter = rte_zmalloc("i40e_rss_filter",
+ sizeof(*rss_filter), 0);
+ if (rss_filter == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory.");
+ return -ENOMEM;
+ }
+ rss_filter->rss_filter_info = *conf;
+ /* The newly created rule is always valid;
+ * any existing rule covered by it will be marked invalid.
+ */
+ rss_filter->rss_filter_info.valid = true;
+
+ TAILQ_INSERT_TAIL(&pf->rss_config_list, rss_filter, next);
+
+ return 0;
}
static int
@@ -4845,10 +4946,21 @@ i40e_config_rss_filter_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_rss_filter *rss_filter;
+ void *temp;
- i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ if (conf->queue_region_conf)
+ i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
+ else
+ i40e_config_rss_filter(pf, conf, 0);
- i40e_config_rss_filter(pf, conf, 0);
+ TAILQ_FOREACH_SAFE(rss_filter, &pf->rss_config_list, next, temp) {
+ if (!memcmp(&rss_filter->rss_filter_info, conf,
+ sizeof(struct rte_flow_action_rss))) {
+ TAILQ_REMOVE(&pf->rss_config_list, rss_filter, next);
+ rte_free(rss_filter);
+ }
+ }
return 0;
}
@@ -4991,7 +5103,8 @@ i40e_flow_create(struct rte_eth_dev *dev,
&cons_filter.rss_conf);
if (ret)
goto free_flow;
- flow->rule = &pf->rss_info;
+ flow->rule = TAILQ_LAST(&pf->rss_config_list,
+ i40e_rss_conf_list);
break;
default:
goto free_flow;
@@ -5041,7 +5154,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_HASH:
ret = i40e_config_rss_filter_del(dev,
- (struct i40e_rte_flow_rss_conf *)flow->rule);
+ &((struct i40e_rss_filter *)flow->rule)->rss_filter_info);
break;
default:
PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -5189,7 +5302,7 @@ i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
if (ret) {
rte_flow_error_set(error, -ret,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to flush rss flows.");
+ "Failed to flush RSS flows.");
return -rte_errno;
}
@@ -5294,18 +5407,32 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
return ret;
}
-/* remove the rss filter */
+/* remove the RSS filter */
static int
i40e_flow_flush_rss_filter(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_rte_flow_rss_conf *rss_info = &pf->rss_info;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_flow *flow;
+ void *temp;
int32_t ret = -EINVAL;
ret = i40e_flush_queue_region_all_conf(dev, hw, pf, 0);
- if (rss_info->conf.queue_num)
- ret = i40e_config_rss_filter(pf, rss_info, FALSE);
+ /* Delete RSS flows in flow list. */
+ TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ if (flow->filter_type != RTE_ETH_FILTER_HASH)
+ continue;
+
+ if (flow->rule) {
+ ret = i40e_config_rss_filter_del(dev,
+ &((struct i40e_rss_filter *)flow->rule)->rss_filter_info);
+ if (ret)
+ return ret;
+ }
+ TAILQ_REMOVE(&pf->flow_list, flow, node);
+ rte_free(flow);
+ }
+
return ret;
}
--
2.17.1
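As an illustration of what this patch enables, the sketch below shows roughly how an application could request the symmetric-Toeplitz hash for ipv4-tcp through the public rte_flow C API, matching the testpmd command documented in the i40e.rst update above. This is only a sketch, not part of the patch: the helper name create_symmetric_rss_rule(), the choice of port id and the minimal error handling are assumptions for illustration.

    /* Sketch (assumption, not from the patch): equivalent of
     *   flow create 0 ingress pattern eth / ipv4 / tcp / end
     *     actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end
     */
    #include <rte_errno.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static int
    create_symmetric_rss_rule(uint16_t port_id)
    {
        struct rte_flow_error err;
        struct rte_flow_attr attr = { .ingress = 1 };
        /* Pattern eth / ipv4 / tcp selects the ipv4-tcp pctype. */
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_TCP },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* Queues left empty: this rule only configures the hash function. */
        struct rte_flow_action_rss rss = {
            .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
            .types = ETH_RSS_NONFRAG_IPV4_TCP,
            .queue_num = 0,
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        if (rte_flow_create(port_id, &attr, pattern, actions, &err) == NULL)
            return -rte_errno;
        return 0;
    }

Destroying such a flow later goes through i40e_config_rss_filter() with add == false, which in this patch ends up in i40e_rss_disable_hash()/i40e_rss_clear_hash_function() and restores the default Toeplitz behaviour for the affected pctype.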
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v9] net/i40e: enable hash configuration in RSS flow
2020-04-15 8:46 ` [dpdk-dev] [PATCH v9] net/i40e: enable hash configuration in RSS flow Chenxu Di
@ 2020-04-15 9:52 ` Xing, Beilei
2020-04-15 9:59 ` Ye Xiaolong
0 siblings, 1 reply; 26+ messages in thread
From: Xing, Beilei @ 2020-04-15 9:52 UTC (permalink / raw)
To: Di, ChenxuX, dev; +Cc: Yang, Qiming
> -----Original Message-----
> From: Di, ChenxuX <chenxux.di@intel.com>
> Sent: Wednesday, April 15, 2020 4:46 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Di, ChenxuX <chenxux.di@intel.com>
> Subject: [PATCH v9] net/i40e: enable hash configuration in RSS flow
>
> This patch supports:
>
> - Symmetric hash configuration
> - Hash input set configuration
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCH v9] net/i40e: enable hash configuration in RSS flow
2020-04-15 9:52 ` Xing, Beilei
@ 2020-04-15 9:59 ` Ye Xiaolong
0 siblings, 0 replies; 26+ messages in thread
From: Ye Xiaolong @ 2020-04-15 9:59 UTC (permalink / raw)
To: Xing, Beilei; +Cc: Di, ChenxuX, dev, Yang, Qiming
On 04/15, Xing, Beilei wrote:
>
>
>> -----Original Message-----
>> From: Di, ChenxuX <chenxux.di@intel.com>
>> Sent: Wednesday, April 15, 2020 4:46 PM
>> To: dev@dpdk.org
>> Cc: Yang, Qiming <qiming.yang@intel.com>; Xing, Beilei
>> <beilei.xing@intel.com>; Di, ChenxuX <chenxux.di@intel.com>
>> Subject: [PATCH v9] net/i40e: enable hash configuration in RSS flow
>>
>> This patch supports:
>>
>> - Symmetric hash configuration
>> - Hash input set configuration
>>
>> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
>
>Acked-by: Beilei Xing <beilei.xing@intel.com>
Applied to dpdk-next-net-intel, Thanks.
^ permalink raw reply [flat|nested] 26+ messages in thread
Thread overview: 26+ messages
2020-03-18 1:47 [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 1/4] net/e1000: remove the legacy filter functions Chenxu Di
2020-03-18 3:15 ` Yang, Qiming
2020-03-18 1:47 ` [dpdk-dev] [PATCH 2/4] net/ixgbe: " Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 3/4] net/i40e: " Chenxu Di
2020-03-18 1:47 ` [dpdk-dev] [PATCH 4/4] net/i40e: implement hash function in rte flow API Chenxu Di
2020-03-18 3:00 ` [dpdk-dev] [PATCH 0/4] drivers/net: remove legacy filter API and switch to rte flow Stephen Hemminger
2020-03-19 6:39 ` [dpdk-dev] [PATCH v2] net/i40e: implement hash function in rte flow API Chenxu Di
2020-03-20 1:24 ` [dpdk-dev] [PATCH v3] " Chenxu Di
2020-03-23 8:25 ` [dpdk-dev] [PATCH v4] " Chenxu Di
2020-03-24 3:28 ` Yang, Qiming
2020-03-24 8:17 ` [dpdk-dev] [PATCH v5] " Chenxu Di
2020-03-24 12:57 ` Iremonger, Bernard
[not found] ` <87688dbf6ac946d5974a61578be1ed89@intel.com>
2020-03-25 9:48 ` Iremonger, Bernard
2020-03-27 12:49 ` Xing, Beilei
2020-03-30 7:40 ` [dpdk-dev] [PATCH v6] " Chenxu Di
2020-04-02 16:26 ` Iremonger, Bernard
[not found] ` <4a1f49493dc54ef0b3ae9c2bf7018f0d@intel.com>
2020-04-08 8:24 ` Iremonger, Bernard
2020-04-10 1:52 ` Xing, Beilei
2020-04-13 5:31 ` [dpdk-dev] [PATCH v7] net/i40e: enable advanced RSS Chenxu Di
2020-04-14 6:36 ` [dpdk-dev] [PATCH v8] " Chenxu Di
2020-04-14 14:55 ` Iremonger, Bernard
2020-04-15 5:31 ` Xing, Beilei
2020-04-15 8:46 ` [dpdk-dev] [PATCH v9] net/i40e: enable hash configuration in RSS flow Chenxu Di
2020-04-15 9:52 ` Xing, Beilei
2020-04-15 9:59 ` Ye Xiaolong