* [PATCH 00/39] support full function of DCF
@ 2022-04-07 10:56 Kevin Liu
2022-04-07 10:56 ` [PATCH 01/39] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
` (39 more replies)
0 siblings, 40 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
These functions were customized and implemented on DPDK 20.11;
now it is time to migrate them to DPDK 22.07.
Alvin Zhang (18):
net/ice: support dcf promisc configuration
net/ice: support dcf VLAN filter and offload configuration
net/ice: support DCF new VLAN capabilities
common/iavf: support flushing rules and reporting DCF id
net/ice/base: fix ethertype filter input set
net/iavf: support checking if device is an MDCF instance
net/ice/base: support custom DDP buildin recipe
net/ice: support buildin recipe configuration
net/ice/base: support IPv6 GRE UDP pattern
net/ice: support IPv6 NVGRE tunnel
net/ice: support new pattern of IPv4
net/ice/base: support new patterns of TCP and UDP
net/ice: support new patterns of TCP and UDP
net/ice/base: support IPv4 GRE tunnel
net/ice: support IPv4 GRE raw pattern type
net/ice/base: support custom ddp package version
net/ice: treat unknown package as OS default package
net/ice: fix DCF ACL flow engine
Dapeng Yu (1):
net/ice: enable CVL DCF device reset API
Jie Wang (2):
net/ice: add ops MTU-SET to dcf
net/ice: add ops dev-supported-ptypes-get to dcf
Junfeng Guo (4):
net/ice/base: add VXLAN support for switch filter
net/ice: add VXLAN support for switch filter
net/ice/base: update Profile ID table for VXLAN
net/ice/base: update Protocol ID table to match DVM DDP
Kevin Liu (5):
net/ice: support dcf MAC configuration
net/ice: support MDCF(multi-DCF) instance
net/ice: disable ACL function for MDCF instance
net/ice: add enable/disable queues for DCF large VF
net/ice: fix DCF reset
Qi Zhang (1):
testpmd: force flow flush
Robin Zhang (1):
net/ice: cleanup Tx buffers
Steve Yang (7):
net/ice: enable RSS RETA ops for DCF hardware
net/ice: enable RSS HASH ops for DCF hardware
net/ice: handle virtchnl event message without interrupt
net/ice: add DCF request queues function
net/ice: negotiate large VF and request more queues
net/ice: enable multiple queues configurations for large VF
net/ice: enable IRQ mapping configuration for large VF
app/test-pmd/config.c | 6 +-
drivers/common/iavf/virtchnl.h | 13 +
drivers/net/iavf/iavf_ethdev.c | 2 +-
drivers/net/ice/base/ice_common.c | 29 +-
drivers/net/ice/base/ice_fdir.c | 3 +
drivers/net/ice/base/ice_flex_pipe.c | 41 +-
drivers/net/ice/base/ice_flex_pipe.h | 3 +-
drivers/net/ice/base/ice_protocol_type.h | 22 +
drivers/net/ice/base/ice_switch.c | 626 ++++++++++++-
drivers/net/ice/base/ice_switch.h | 12 +
drivers/net/ice/base/ice_type.h | 2 +
drivers/net/ice/ice_acl_filter.c | 31 +-
drivers/net/ice/ice_dcf.c | 398 ++++++++-
drivers/net/ice/ice_dcf.h | 34 +-
drivers/net/ice/ice_dcf_ethdev.c | 1038 ++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 14 +
drivers/net/ice/ice_dcf_parent.c | 11 +
drivers/net/ice/ice_ethdev.c | 13 +-
drivers/net/ice/ice_generic_flow.c | 91 +-
drivers/net/ice/ice_generic_flow.h | 13 +
drivers/net/ice/ice_switch_filter.c | 168 +++-
21 files changed, 2385 insertions(+), 185 deletions(-)
--
2.33.1
* [PATCH 01/39] net/ice: enable RSS RETA ops for DCF hardware
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 02/39] net/ice: enable RSS HASH " Kevin Liu
` (38 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS RETA should be updated and queried by the application.
Add the related ops ('.reta_update', '.reta_query') for DCF.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++++
3 files changed, 79 insertions(+), 1 deletion(-)
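For reference, a minimal application-level sketch of driving the new ops
through the generic ethdev API (not part of the patch; the port id and the
redirect-everything-to-queue-0 policy are made-up example values):

    #include <string.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    /* Point every RETA entry of the port at queue 0, assuming the
     * device RETA fits in two 64-entry groups. */
    static int
    example_reta_to_queue0(uint16_t port_id)
    {
    	struct rte_eth_dev_info dev_info;
    	struct rte_eth_rss_reta_entry64 reta_conf[2];
    	uint16_t i;
    	int ret;

    	ret = rte_eth_dev_info_get(port_id, &dev_info);
    	if (ret != 0 ||
    	    dev_info.reta_size > RTE_DIM(reta_conf) * RTE_ETH_RETA_GROUP_SIZE)
    		return -1;

    	memset(reta_conf, 0, sizeof(reta_conf));
    	for (i = 0; i < dev_info.reta_size; i++) {
    		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
    			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
    		/* reta[] entries are already 0, i.e. queue 0 */
    	}

    	/* resolves to the new '.reta_update' op */
    	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
    					   dev_info.reta_size);
    }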
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f0c074b01..070d1b71ac 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
return err;
}
-static int
+int
ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_lut *rss_lut;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 6ec766ebda..b2c6aa2684 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59610e058f..1ac66ed990 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint8_t *lut;
+ uint16_t i, idx, shift;
+ int ret;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ /* store the old lut table temporarily */
+ rte_memcpy(lut, hw->rss_lut, reta_size);
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ /* send virtchnl ops to configure RSS */
+ ret = ice_dcf_configure_rss_lut(hw);
+ if (ret) /* revert back */
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint16_t i, idx, shift;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = hw->rss_lut[i];
+ }
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
.tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
};
static int
--
2.33.1
* [PATCH 02/39] net/ice: enable RSS HASH ops for DCF hardware
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
2022-04-07 10:56 ` [PATCH 01/39] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 03/39] net/ice: cleanup Tx buffers Kevin Liu
` (37 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS HASH should be updated and queried by the application.
Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF.
Because DCF doesn't support configuring the RSS hash functions, only the
hash key can be updated within the '.rss_hash_update' op.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 1 deletion(-)
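A corresponding application-level sketch (not part of the patch; the key
buffer is a placeholder whose length must equal the device key size
reported in dev_info.hash_key_size):

    #include <rte_ethdev.h>

    static int
    example_rss_key_update(uint16_t port_id, uint8_t *key, uint8_t key_len)
    {
    	struct rte_eth_rss_conf rss_conf = {
    		.rss_key = key,
    		.rss_key_len = key_len,
    		.rss_hf = 0, /* hash functions are not reconfigurable on DCF */
    	};

    	/* resolves to '.rss_hash_update'; only the key takes effect */
    	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    }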
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 070d1b71ac..89c0203ba3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
hw->ets_config = NULL;
}
-static int
+int
ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_key *rss_key;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index b2c6aa2684..f0b45af5ae 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ac66ed990..ccad7fc304 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* HENA setting, it is enabled by default, no change */
+ if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) {
+ PMD_DRV_LOG(ERR, "The size of hash key configured "
+ "(%d) doesn't match the size of hardware can "
+ "support (%d)", rss_conf->rss_key_len,
+ hw->vf_res->rss_key_size);
+ return -EINVAL;
+ }
+
+ rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ return ice_dcf_configure_rss_key(hw);
+}
+
+static int
+ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* Just set it to default value now. */
+ rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL;
+
+ if (!rss_conf->rss_key)
+ return 0;
+
+ rss_conf->rss_key_len = hw->vf_res->rss_key_size;
+ rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len);
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tm_ops_get = ice_dcf_tm_ops_get,
.reta_update = ice_dcf_dev_rss_reta_update,
.reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
};
static int
--
2.33.1
* [PATCH 03/39] net/ice: cleanup Tx buffers
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
2022-04-07 10:56 ` [PATCH 01/39] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-07 10:56 ` [PATCH 02/39] net/ice: enable RSS HASH " Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 04/39] net/ice: add ops MTU-SET to dcf Kevin Liu
` (36 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Robin Zhang, Kevin Liu
From: Robin Zhang <robinx.zhang@intel.com>
Add support for the rte_eth_tx_done_cleanup op in DCF.
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
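Usage sketch (not part of the patch; the port/queue ids and the count are
arbitrary): ask the PMD to free already-transmitted mbufs on a Tx queue.

    #include <rte_ethdev.h>

    static int
    example_tx_cleanup(uint16_t port_id, uint16_t queue_id)
    {
    	/* free up to 32 done descriptors; returns the count freed or <0
    	 * (-ENOTSUP on DCF before this patch) */
    	return rte_eth_tx_done_cleanup(port_id, queue_id, 32);
    }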
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ccad7fc304..d8b5961514 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.reta_query = ice_dcf_dev_rss_reta_query,
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
};
static int
--
2.33.1
* [PATCH 04/39] net/ice: add ops MTU-SET to dcf
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (2 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 03/39] net/ice: cleanup Tx buffers Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 05/39] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
` (35 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
Add the "mtu_set" op to DCF so that the port MTU can be configured
through the command line.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++
2 files changed, 20 insertions(+)
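Usage sketch (not part of the patch; 1500 is an arbitrary value): the new
op rejects MTU changes while the port is started, so stop it first.

    #include <rte_ethdev.h>

    static int
    example_set_mtu(uint16_t port_id)
    {
    	int ret = rte_eth_dev_stop(port_id);

    	if (ret != 0)
    		return ret;
    	return rte_eth_dev_set_mtu(port_id, 1500); /* '.mtu_set' */
    }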
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d8b5961514..06d752fd61 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &new_link);
}
+static int
+ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* mtu setting is forbidden if port is started */
+ if (dev->data->dev_started != 0) {
+ PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
+ dev->data->port_id);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
bool
ice_dcf_adminq_need_retry(struct ice_adapter *ad)
{
@@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
.tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 11a1305038..f2faf26f58 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -15,6 +15,12 @@
#define ICE_DCF_MAX_RINGS 1
+#define ICE_DCF_FRAME_SIZE_MAX 9728
+#define ICE_DCF_VLAN_TAG_SIZE 4
+#define ICE_DCF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
+#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+
struct ice_dcf_queue {
uint64_t dummy;
};
--
2.33.1
* [PATCH 05/39] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (3 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 04/39] net/ice: add ops MTU-SET to dcf Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 06/39] net/ice: support dcf promisc configuration Kevin Liu
` (34 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
Add the "dev_supported_ptypes_get" op to DCF so that the DCF PMD can
report its supported packet types through the new API.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 80 +++++++++++++++++++-------------
1 file changed, 49 insertions(+), 31 deletions(-)
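Usage sketch (not part of the patch): dump the packet types the DCF port
now reports through the new op.

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf_ptype.h>

    static void
    example_dump_ptypes(uint16_t port_id)
    {
    	uint32_t ptypes[16];
    	int i, num;

    	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
    					       ptypes, RTE_DIM(ptypes));
    	if (num > (int)RTE_DIM(ptypes))
    		num = RTE_DIM(ptypes);
    	for (i = 0; i < num; i++)
    		printf("supported ptype: 0x%08x\n", ptypes[i]);
    }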
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 06d752fd61..6a577a6582 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+static const uint32_t *
+ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+ return ptypes;
+}
+
static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
- .dev_start = ice_dcf_dev_start,
- .dev_stop = ice_dcf_dev_stop,
- .dev_close = ice_dcf_dev_close,
- .dev_reset = ice_dcf_dev_reset,
- .dev_configure = ice_dcf_dev_configure,
- .dev_infos_get = ice_dcf_dev_info_get,
- .rx_queue_setup = ice_rx_queue_setup,
- .tx_queue_setup = ice_tx_queue_setup,
- .rx_queue_release = ice_dev_rx_queue_release,
- .tx_queue_release = ice_dev_tx_queue_release,
- .rx_queue_start = ice_dcf_rx_queue_start,
- .tx_queue_start = ice_dcf_tx_queue_start,
- .rx_queue_stop = ice_dcf_rx_queue_stop,
- .tx_queue_stop = ice_dcf_tx_queue_stop,
- .link_update = ice_dcf_link_update,
- .stats_get = ice_dcf_stats_get,
- .stats_reset = ice_dcf_stats_reset,
- .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
- .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
- .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
- .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
- .flow_ops_get = ice_dcf_dev_flow_ops_get,
- .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
- .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
- .tm_ops_get = ice_dcf_tm_ops_get,
- .reta_update = ice_dcf_dev_rss_reta_update,
- .reta_query = ice_dcf_dev_rss_reta_query,
- .rss_hash_update = ice_dcf_dev_rss_hash_update,
- .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
- .tx_done_cleanup = ice_tx_done_cleanup,
- .mtu_set = ice_dcf_dev_mtu_set,
+ .dev_start = ice_dcf_dev_start,
+ .dev_stop = ice_dcf_dev_stop,
+ .dev_close = ice_dcf_dev_close,
+ .dev_reset = ice_dcf_dev_reset,
+ .dev_configure = ice_dcf_dev_configure,
+ .dev_infos_get = ice_dcf_dev_info_get,
+ .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .rx_queue_release = ice_dev_rx_queue_release,
+ .tx_queue_release = ice_dev_tx_queue_release,
+ .rx_queue_start = ice_dcf_rx_queue_start,
+ .tx_queue_start = ice_dcf_tx_queue_start,
+ .rx_queue_stop = ice_dcf_rx_queue_stop,
+ .tx_queue_stop = ice_dcf_tx_queue_stop,
+ .link_update = ice_dcf_link_update,
+ .stats_get = ice_dcf_stats_get,
+ .stats_reset = ice_dcf_stats_reset,
+ .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
+ .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
+ .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
+ .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .flow_ops_get = ice_dcf_dev_flow_ops_get,
+ .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
+ .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+ .tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
--
2.33.1
* [PATCH 06/39] net/ice: support dcf promisc configuration
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (4 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 05/39] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 07/39] net/ice: support dcf MAC configuration Kevin Liu
` (33 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support configuration of unicast and multicast promiscuous mode on DCF.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 3 ++
2 files changed, 76 insertions(+), 4 deletions(-)
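Usage sketch (not part of the patch): both modes are now driven by the
standard ethdev calls, which end up in dcf_config_promisc().

    #include <rte_ethdev.h>

    static int
    example_promisc(uint16_t port_id)
    {
    	int ret = rte_eth_promiscuous_enable(port_id); /* unicast promisc */

    	if (ret != 0)
    		return ret;
    	return rte_eth_allmulticast_enable(port_id);   /* multicast promisc */
    }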
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6a577a6582..87d281ee93 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
}
static int
-ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+dcf_config_promisc(struct ice_dcf_adapter *adapter,
+ bool enable_unicast,
+ bool enable_multicast)
{
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_promisc_info promisc;
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ promisc.flags = 0;
+ promisc.vsi_id = hw->vsi_res->vsi_id;
+
+ if (enable_unicast)
+ promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+ if (enable_multicast)
+ promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+ args.req_msg = (uint8_t *)&promisc;
+ args.req_msglen = sizeof(promisc);
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE");
+ return err;
+ }
+
+ adapter->promisc_unicast_enabled = enable_unicast;
+ adapter->promisc_multicast_enabled = enable_multicast;
return 0;
}
+static int
+ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, true,
+ adapter->promisc_multicast_enabled);
+}
+
static int
ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, false,
+ adapter->promisc_multicast_enabled);
}
static int
ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ true);
}
static int
ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ false);
}
static int
@@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
+ dcf_config_promisc(adapter, false, false);
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index f2faf26f58..22e450527b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -33,6 +33,9 @@ struct ice_dcf_adapter {
struct ice_adapter parent; /* Must be first */
struct ice_dcf_hw real_hw;
+ bool promisc_unicast_enabled;
+ bool promisc_multicast_enabled;
+
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
* [PATCH 07/39] net/ice: support dcf MAC configuration
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (5 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 06/39] net/ice: support dcf promisc configuration Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 08/39] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
` (32 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
The following PMD ops are supported in this patch:
.mac_addr_add = dcf_dev_add_mac_addr
.mac_addr_remove = dcf_dev_del_mac_addr
.set_mc_addr_list = dcf_set_mc_addr_list
.mac_addr_set = dcf_dev_set_default_mac_addr
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 9 +-
drivers/net/ice/ice_dcf.h | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 5 +-
4 files changed, 226 insertions(+), 10 deletions(-)
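Usage sketch (not part of the patch; the addresses are placeholders):
exercising the new ops through the generic ethdev API.

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static int
    example_mac_ops(uint16_t port_id)
    {
    	struct rte_ether_addr extra = {
    		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
    	struct rte_ether_addr mcast = {
    		.addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } };
    	int ret;

    	ret = rte_eth_dev_mac_addr_add(port_id, &extra, 0); /* .mac_addr_add */
    	if (ret != 0)
    		return ret;
    	ret = rte_eth_dev_set_mc_addr_list(port_id, &mcast, 1); /* .set_mc_addr_list */
    	if (ret != 0)
    		return ret;
    	return rte_eth_dev_default_mac_addr_set(port_id, &extra); /* .mac_addr_set */
    }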
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 89c0203ba3..55ae68c456 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
}
int
-ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr,
+ bool add, uint8_t type)
{
struct virtchnl_ether_addr_list *list;
- struct rte_ether_addr *addr;
struct dcf_virtchnl_cmd args;
int len, err = 0;
@@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
}
len = sizeof(struct virtchnl_ether_addr_list);
- addr = hw->eth_dev->data->mac_addrs;
len += sizeof(struct virtchnl_ether_addr);
list = rte_zmalloc(NULL, len, 0);
@@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
rte_memcpy(list->list[0].addr, addr->addr_bytes,
sizeof(addr->addr_bytes));
+
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
-
+ list->list[0].type = type;
list->vsi_id = hw->vsi_res->vsi_id;
list->num_elements = 1;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index f0b45af5ae..78df202a77 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
-int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr, bool add,
+ uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 87d281ee93..0d944f9fd2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -26,6 +26,12 @@
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#define DCF_NUM_MACADDR_MAX 64
+
+static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add);
+
static int
ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- ret = ice_dcf_add_del_all_mac_addr(hw, true);
+ ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs,
+ true, VIRTCHNL_ETHER_ADDR_PRIMARY);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to add mac addr");
return ret;
}
+ if (dcf_ad->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, true);
+ if (ret)
+ return ret;
+ }
+
+
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
@@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
rte_intr_efd_disable(intr_handle);
rte_intr_vec_list_free(intr_handle);
- ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
+ ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw,
+ dcf_ad->real_hw.eth_dev->data->mac_addrs,
+ false, VIRTCHNL_ETHER_ADDR_PRIMARY);
+
+ if (dcf_ad->mc_addrs_num)
+ /* flush previous addresses */
+ (void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw,
+ dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, false);
+
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- dev_info->max_mac_addrs = 1;
+ dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
@@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
false);
}
+static int
+dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ int err;
+
+ if (rte_is_zero_ether_addr(addr)) {
+ PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+ return -EINVAL;
+ }
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to add MAC address");
+ return err;
+ }
+
+ return 0;
+}
+
+static void
+dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct rte_ether_addr *addr = &dev->data->mac_addrs[index];
+ int err;
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to remove MAC address");
+}
+
+static int
+dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add)
+{
+ struct virtchnl_ether_addr_list *list;
+ struct dcf_virtchnl_cmd args;
+ uint32_t i;
+ int len, err = 0;
+
+ len = sizeof(struct virtchnl_ether_addr_list);
+ len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
+
+ list = rte_zmalloc(NULL, len, 0);
+ if (!list) {
+ PMD_DRV_LOG(ERR, "fail to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
+ sizeof(list->list[i].addr));
+ list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ list->vsi_id = hw->vsi_res->vsi_id;
+ list->num_elements = mc_addrs_num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+ VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.req_msg = (uint8_t *)list;
+ args.req_msglen = len;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" :
+ "OP_DEL_ETHER_ADDRESS");
+ rte_free(list);
+ return err;
+}
+
+static int
+dcf_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i;
+ int ret;
+
+
+ if (mc_addrs_num > DCF_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR,
+ "can't add more than a limited number (%u) of addresses.",
+ (uint32_t)DCF_NUM_MACADDR_MAX);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addrs[i])) {
+ const uint8_t *mac = mc_addrs[i].addr_bytes;
+
+ PMD_DRV_LOG(ERR,
+ "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x",
+ mac[0], mac[1], mac[2], mac[3], mac[4],
+ mac[5]);
+ return -EINVAL;
+ }
+ }
+
+ if (adapter->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num, false);
+ if (ret)
+ return ret;
+ }
+ if (!mc_addrs_num) {
+ adapter->mc_addrs_num = 0;
+ return 0;
+ }
+
+ /* add new ones */
+ ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true);
+ if (ret) {
+ /* if adding mac address list fails, should add the
+ * previous addresses back.
+ */
+ if (adapter->mc_addrs_num)
+ (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num,
+ true);
+ return ret;
+ }
+ adapter->mc_addrs_num = mc_addrs_num;
+ memcpy(adapter->mc_addrs,
+ mc_addrs, mc_addrs_num * sizeof(*mc_addrs));
+
+ return 0;
+}
+
+static int
+dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_ether_addr *old_addr;
+ int ret;
+
+ old_addr = hw->eth_dev->data->mac_addrs;
+ if (rte_is_same_ether_addr(old_addr, mac_addr))
+ return 0;
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ old_addr->addr_bytes[0],
+ old_addr->addr_bytes[1],
+ old_addr->addr_bytes[2],
+ old_addr->addr_bytes[3],
+ old_addr->addr_bytes[4],
+ old_addr->addr_bytes[5]);
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ mac_addr->addr_bytes[0],
+ mac_addr->addr_bytes[1],
+ mac_addr->addr_bytes[2],
+ mac_addr->addr_bytes[3],
+ mac_addr->addr_bytes[4],
+ mac_addr->addr_bytes[5]);
+
+ if (ret)
+ return -EIO;
+
+ rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs);
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
.allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .mac_addr_add = dcf_dev_add_mac_addr,
+ .mac_addr_remove = dcf_dev_del_mac_addr,
+ .set_mc_addr_list = dcf_set_mc_addr_list,
+ .mac_addr_set = dcf_dev_set_default_mac_addr,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 22e450527b..27f6402786 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -14,7 +14,7 @@
#include "ice_dcf.h"
#define ICE_DCF_MAX_RINGS 1
-
+#define DCF_NUM_MACADDR_MAX 64
#define ICE_DCF_FRAME_SIZE_MAX 9728
#define ICE_DCF_VLAN_TAG_SIZE 4
#define ICE_DCF_ETH_OVERHEAD \
@@ -35,7 +35,8 @@ struct ice_dcf_adapter {
bool promisc_unicast_enabled;
bool promisc_multicast_enabled;
-
+ uint32_t mc_addrs_num;
+ struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX];
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
* [PATCH 08/39] net/ice: support dcf VLAN filter and offload configuration
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (6 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 07/39] net/ice: support dcf MAC configuration Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 09/39] net/ice: support DCF new VLAN capabilities Kevin Liu
` (31 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The following PMD ops are supported in this patch:
.vlan_filter_set = dcf_dev_vlan_filter_set
.vlan_offload_set = dcf_dev_vlan_offload_set
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++++++++
1 file changed, 101 insertions(+)
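Usage sketch (not part of the patch; VLAN 100 is arbitrary, and it assumes
RTE_ETH_RX_OFFLOAD_VLAN_FILTER was enabled at configure time):

    #include <rte_ethdev.h>

    static int
    example_vlan(uint16_t port_id)
    {
    	int mask = rte_eth_dev_get_vlan_offload(port_id);
    	int ret;

    	if (mask < 0)
    		return mask;
    	/* resolves to '.vlan_offload_set' */
    	ret = rte_eth_dev_set_vlan_offload(port_id,
    					   mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
    	if (ret != 0)
    		return ret;
    	/* resolves to '.vlan_filter_set' */
    	return rte_eth_dev_vlan_filter(port_id, 100, 1);
    }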
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0d944f9fd2..e58cdf47d2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_filter_list *vlan_list;
+ uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+ sizeof(uint16_t)];
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+ vlan_list->vsi_id = hw->vsi_res->vsi_id;
+ vlan_list->num_elements = 1;
+ vlan_list->vlan_id[0] = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+ args.req_msg = cmd_buffer;
+ args.req_msglen = sizeof(cmd_buffer);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
+
+ return err;
+}
+
+static int
+dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_ENABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static int
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ /* Vlan stripping setting */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ /* Enable or disable VLAN stripping */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ err = dcf_enable_vlan_strip(hw);
+ else
+ err = dcf_disable_vlan_strip(hw);
+
+ if (err)
+ return -EIO;
+ }
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mac_addr_remove = dcf_dev_del_mac_addr,
.set_mc_addr_list = dcf_set_mc_addr_list,
.mac_addr_set = dcf_dev_set_default_mac_addr,
+ .vlan_filter_set = dcf_dev_vlan_filter_set,
+ .vlan_offload_set = dcf_dev_vlan_offload_set,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
--
2.33.1
* [PATCH 09/39] net/ice: support DCF new VLAN capabilities
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (7 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 08/39] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 10/39] net/ice: enable CVL DCF device reset API Kevin Liu
` (30 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The new VLAN virtchnl opcodes introduce new capabilities such as VLAN
filtering, stripping and insertion.
The DCF first needs to query the VLAN capabilities based on the current
device configuration.
Based on the negotiation, DCF is able to configure the inner VLAN filter
when port VLAN is enabled, and the outer VLAN (0x8100) when port VLAN is
disabled, to stay compatible with legacy mode.
When the port VLAN is updated by DCF, the DCF needs to reset in order to
query the new VLAN capabilities.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 27 +++++
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++++++++---
3 files changed, 182 insertions(+), 17 deletions(-)
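Usage sketch (not part of the patch, and it assumes the DCF device reset
support enabled later in this series): once the port VLAN changes, the
application can reset the port so the driver re-queries
VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, then re-apply its VLAN offloads.

    #include <rte_ethdev.h>

    static int
    example_requery_vlan_caps(uint16_t port_id)
    {
    	int ret = rte_eth_dev_reset(port_id);

    	if (ret != 0)
    		return ret;
    	return rte_eth_dev_set_vlan_offload(port_id,
    					    RTE_ETH_VLAN_STRIP_OFFLOAD);
    }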
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..885d58c0f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
return 0;
}
+static int
+dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_vlan_caps vlan_v2_caps;
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS;
+ args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps;
+ args.rsp_buflen = sizeof(vlan_v2_caps);
+
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS");
+ return ret;
+ }
+
+ rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
+ return 0;
+}
+
int
ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
@@ -701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
+ if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) &&
+ dcf_get_vlan_offload_caps_v2(hw))
+ goto err_rss;
+
return 0;
err_rss:
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78df202a77..32e6031bd9 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -107,6 +107,7 @@ struct ice_dcf_hw {
uint16_t nb_msix;
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
+ struct virtchnl_vlan_caps vlan_v2_caps;
/* Link status */
bool link_up;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e58cdf47d2..d4bfa182a4 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,46 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_supported_caps *supported_caps =
+ &hw->vlan_v2_caps.filtering.filtering_support;
+ struct virtchnl_vlan *vlan_setting;
+ struct virtchnl_vlan_filter_list_v2 vlan_filter;
+ struct dcf_virtchnl_cmd args;
+ uint32_t filtering_caps;
+ int err;
+
+ if (supported_caps->outer) {
+ filtering_caps = supported_caps->outer;
+ vlan_setting = &vlan_filter.filters[0].outer;
+ } else {
+ filtering_caps = supported_caps->inner;
+ vlan_setting = &vlan_filter.filters[0].inner;
+ }
+
+ if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
+ return -ENOTSUP;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.vport_id = hw->vsi_res->vsi_id;
+ vlan_filter.num_elements = 1;
+ vlan_setting->tpid = RTE_ETHER_TYPE_VLAN;
+ vlan_setting->tci = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2;
+ args.req_msg = (uint8_t *)&vlan_filter;
+ args.req_msglen = sizeof(vlan_filter);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2");
+
+ return err;
+}
+
static int
dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
{
@@ -1052,6 +1092,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
return err;
}
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+ err = dcf_add_del_vlan_v2(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+ }
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static void
+dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable)
+{
+ struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i, j;
+ uint64_t ids;
+
+ for (i = 0; i < RTE_DIM(vfc->ids); i++) {
+ if (vfc->ids[i] == 0)
+ continue;
+
+ ids = vfc->ids[i];
+ for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) {
+ if (ids & 1)
+ dcf_add_del_vlan_v2(hw, 64 * i + j, enable);
+ }
+ }
+}
+
+static int
+dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable)
+{
+ struct virtchnl_vlan_supported_caps *stripping_caps =
+ &hw->vlan_v2_caps.offloads.stripping_support;
+ struct virtchnl_vlan_setting vlan_strip;
+ struct dcf_virtchnl_cmd args;
+ uint32_t *ethertype;
+ int ret;
+
+ if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.outer_ethertype_setting;
+ else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.inner_ethertype_setting;
+ else
+ return -ENOTSUP;
+
+ memset(&vlan_strip, 0, sizeof(vlan_strip));
+ vlan_strip.vport_id = hw->vsi_res->vsi_id;
+ *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 :
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2;
+ args.req_msg = (uint8_t *)&vlan_strip;
+ args.req_msglen = sizeof(vlan_strip);
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ enable ? "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" :
+ "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ bool enable;
+ int err;
+
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
+
+ dcf_iterate_vlan_filters_v2(dev, enable);
+ }
+
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
+
+ err = dcf_config_vlan_strip_v2(hw, enable);
+ /* If not supported, the stripping is already disabled by PF */
+ if (err == -ENOTSUP && !enable)
+ err = 0;
+ if (err)
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int
dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
{
@@ -1084,30 +1234,17 @@ dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
return ret;
}
-static int
-dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct ice_dcf_adapter *adapter = dev->data->dev_private;
- struct ice_dcf_hw *hw = &adapter->real_hw;
- int err;
-
- if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
- return -ENOTSUP;
-
- err = dcf_add_del_vlan(hw, vlan_id, on);
- if (err)
- return -EIO;
- return 0;
-}
-
static int
dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
int err;
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
+ return dcf_dev_vlan_offload_set_v2(dev, mask);
+
if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
return -ENOTSUP;
--
2.33.1
* [PATCH 10/39] net/ice: enable CVL DCF device reset API
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (8 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 09/39] net/ice: support DCF new VLAN capabilities Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 11/39] net/ice/base: add VXLAN support for switch filter Kevin Liu
` (29 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Dapeng Yu, Kevin Liu
From: Dapeng Yu <dapengx.yu@intel.com>
Enable CVL DCF device reset API.
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 24 ++++++++++++++++++++++++
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 25 insertions(+)
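Usage sketch (not part of the patch): a typical recovery path when the
application sees RTE_ETH_EVENT_INTR_RESET on the DCF port.

    #include <rte_ethdev.h>

    static int
    example_dcf_recover(uint16_t port_id, const struct rte_eth_conf *conf,
    		    uint16_t nb_rxq, uint16_t nb_txq)
    {
    	int ret = rte_eth_dev_reset(port_id); /* resolves to '.dev_reset' */

    	if (ret != 0)
    		return ret;
    	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
    	if (ret != 0)
    		return ret;
    	/* Rx/Tx queue re-setup omitted for brevity */
    	return rte_eth_dev_start(port_id);
    }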
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 885d58c0f4..9c2f13cf72 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1163,3 +1163,27 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
rte_free(list);
return err;
}
+
+int
+ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
+{
+ int ret;
+
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+ ice_dcf_disable_irq0(hw);
+ rte_intr_disable(intr_handle);
+ rte_intr_callback_unregister(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ ret = ice_dcf_mode_disable(hw);
+ if (ret)
+ goto err;
+ ret = ice_dcf_get_vf_resource(hw);
+err:
+ rte_intr_callback_register(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ rte_intr_enable(intr_handle);
+ ice_dcf_enable_irq0(hw);
+ return ret;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 32e6031bd9..8cf17e7700 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -137,6 +137,7 @@ int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
+int ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
void ice_dcf_tm_conf_uninit(struct rte_eth_dev *dev);
int ice_dcf_replay_vf_bw(struct ice_dcf_hw *hw, uint16_t vf_id);
--
2.33.1
* [PATCH 11/39] net/ice/base: add VXLAN support for switch filter
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (9 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 10/39] net/ice: enable CVL DCF device reset API Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 12/39] net/ice: " Kevin Liu
` (28 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
1. Add profile rules for VXLAN on the switch filter, including:
pattern_eth_ipv4_udp_vxlan_any
pattern_eth_ipv6_udp_vxlan_any
pattern_eth_ipv4_udp_vxlan_eth_ipv4
pattern_eth_ipv4_udp_vxlan_eth_ipv6
pattern_eth_ipv6_udp_vxlan_eth_ipv4
pattern_eth_ipv6_udp_vxlan_eth_ipv6
2. Add common rules for VXLAN on the switch filter, including:
+-----------------+-----------------------------------------------------+
| Pattern | Input Set |
+-----------------+-----------------------------------------------------+
| ipv4_vxlan_ipv4 | vni, inner dmac, inner dst/src ip, outer dst/src ip |
| ipv4_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv4 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
+-----------------+-----------------------------------------------------+
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_protocol_type.h | 6 +
drivers/net/ice/base/ice_switch.c | 213 ++++++++++++++++++++++-
drivers/net/ice/base/ice_switch.h | 12 ++
3 files changed, 230 insertions(+), 1 deletion(-)
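For reference, an rte_flow sketch of the ipv4_vxlan_ipv4 row above (not
part of the patch; the VNI, MAC, addresses and target queue are made-up
values, and NULL masks select the items' default masks, which cover the
IPv4 addresses and the VNI):

    #include <rte_flow.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>

    static struct rte_flow *
    example_vxlan_rule(uint16_t port_id, struct rte_flow_error *err)
    {
    	struct rte_flow_attr attr = { .ingress = 1 };
    	struct rte_flow_item_ipv4 outer_ip = { .hdr = {
    		.src_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
    		.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 2)) } };
    	struct rte_flow_item_vxlan vxlan = {
    		.vni = { 0x00, 0x00, 0x64 } }; /* VNI 100 */
    	struct rte_flow_item_eth inner_eth = {
    		.hdr.dst_addr.addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 } };
    	struct rte_flow_item_eth inner_eth_mask = {
    		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff } };
    	struct rte_flow_item_ipv4 inner_ip = { .hdr = {
    		.src_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 1)),
    		.dst_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 2)) } };
    	struct rte_flow_item pattern[] = {
    		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
    		{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &outer_ip },
    		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
    		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan },
    		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
    		  .spec = &inner_eth, .mask = &inner_eth_mask },
    		{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &inner_ip },
    		{ .type = RTE_FLOW_ITEM_TYPE_END },
    	};
    	struct rte_flow_action_queue queue = { .index = 3 };
    	struct rte_flow_action actions[] = {
    		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
    		{ .type = RTE_FLOW_ACTION_TYPE_END },
    	};

    	return rte_flow_create(port_id, &attr, pattern, actions, err);
    }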
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index 0e6e5990be..d6332c5690 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -112,6 +112,12 @@ enum ice_sw_tunnel_type {
ICE_SW_TUN_IPV6_NAT_T,
ICE_SW_TUN_IPV4_L2TPV3,
ICE_SW_TUN_IPV6_L2TPV3,
+ ICE_SW_TUN_PROFID_IPV4_VXLAN,
+ ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4,
+ ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6,
+ ICE_SW_TUN_PROFID_IPV6_VXLAN,
+ ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4,
+ ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6,
ICE_SW_TUN_PROFID_IPV6_ESP,
ICE_SW_TUN_PROFID_IPV6_AH,
ICE_SW_TUN_PROFID_MAC_IPV6_L2TPV3,
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index d4cc664ad7..b0c50c8f40 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -228,6 +228,117 @@ static const u8 dummy_udp_tun_udp_packet[] = {
0x00, 0x08, 0x00, 0x00,
};
+static const
+struct ice_dummy_pkt_offsets dummy_udp_tun_ipv6_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_UDP_OF, 34 },
+ { ICE_VXLAN, 42 },
+ { ICE_GENEVE, 42 },
+ { ICE_VXLAN_GPE, 42 },
+ { ICE_MAC_IL, 50 },
+ { ICE_IPV6_IL, 64 },
+ { ICE_TCP_IL, 104 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_udp_tun_ipv6_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x5a, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+ 0x00, 0x46, 0x00, 0x00,
+
+ 0x00, 0x00, 0x65, 0x58, /* ICE_VXLAN 42 */
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x86, 0xdd,
+
+ 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_IL 64 */
+ 0x00, 0x00, 0x06, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 104 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x50, 0x02, 0x20, 0x00,
+ 0x00, 0x00, 0x00, 0x00
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_udp_tun_ipv6_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_UDP_OF, 34 },
+ { ICE_VXLAN, 42 },
+ { ICE_GENEVE, 42 },
+ { ICE_VXLAN_GPE, 42 },
+ { ICE_MAC_IL, 50 },
+ { ICE_IPV6_IL, 64 },
+ { ICE_UDP_ILOS, 104 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_udp_tun_ipv6_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x4e, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x00, 0x11, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+ 0x00, 0x3a, 0x00, 0x00,
+
+ 0x00, 0x00, 0x65, 0x58, /* ICE_VXLAN 42 */
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x86, 0xdd,
+
+ 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_IL 64 */
+ 0x00, 0x58, 0x11, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 104 */
+ 0x00, 0x08, 0x00, 0x00,
+};
+
/* offset info for MAC + IPv4 + UDP dummy packet */
static const struct ice_dummy_pkt_offsets dummy_udp_packet_offsets[] = {
{ ICE_MAC_OFOS, 0 },
@@ -2001,6 +2112,10 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan)
u8 gre_profile[12] = {13, 14, 15, 19, 20, 21, 28, 29, 30, 31, 32, 33};
u8 pppoe_profile[7] = {34, 35, 36, 37, 38, 39, 40};
u8 non_tun_profile[6] = {4, 5, 6, 7, 8, 9};
+ bool ipv4_vxlan_ipv4_valid = false;
+ bool ipv4_vxlan_ipv6_valid = false;
+ bool ipv6_vxlan_ipv4_valid = false;
+ bool ipv6_vxlan_ipv6_valid = false;
enum ice_sw_tunnel_type tun_type;
u16 i, j, k, profile_num = 0;
bool non_tun_valid = false;
@@ -2022,8 +2137,17 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan)
}
for (i = 0; i < 12; i++) {
- if (vxlan_profile[i] == j)
+ if (vxlan_profile[i] == j) {
vxlan_valid = true;
+ if (i < 3)
+ ipv4_vxlan_ipv4_valid = true;
+ else if (i < 6)
+ ipv6_vxlan_ipv4_valid = true;
+ else if (i < 9)
+ ipv4_vxlan_ipv6_valid = true;
+ else if (i < 12)
+ ipv6_vxlan_ipv6_valid = true;
+ }
}
for (i = 0; i < 7; i++) {
@@ -2083,6 +2207,20 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan)
break;
}
}
+ if (tun_type == ICE_SW_TUN_VXLAN) {
+ if (ipv4_vxlan_ipv4_valid && ipv4_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN;
+ else if (ipv6_vxlan_ipv4_valid && ipv6_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN;
+ else if (ipv4_vxlan_ipv4_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4;
+ else if (ipv4_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6;
+ else if (ipv6_vxlan_ipv4_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4;
+ else if (ipv6_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6;
+ }
if (profile_num == 1 && (flag_valid || non_tun_valid || pppoe_valid)) {
for (j = 0; j < ICE_MAX_NUM_PROFILES; j++) {
@@ -7496,6 +7634,12 @@ static bool ice_tun_type_match_word(enum ice_sw_tunnel_type tun_type, u16 *mask)
case ICE_SW_TUN_VXLAN_GPE:
case ICE_SW_TUN_GENEVE:
case ICE_SW_TUN_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
case ICE_SW_TUN_NVGRE:
case ICE_SW_TUN_UDP:
case ICE_ALL_TUNNELS:
@@ -7613,6 +7757,42 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo,
case ICE_SW_TUN_PPPOE_IPV6_UDP:
ice_set_bit(ICE_PROFID_PPPOE_IPV6_UDP, bm);
return;
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_OTHER, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_OTHER, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_OTHER, bm);
+ return;
case ICE_SW_TUN_PROFID_IPV6_ESP:
case ICE_SW_TUN_IPV6_ESP:
ice_set_bit(ICE_PROFID_IPV6_ESP, bm);
@@ -7780,6 +7960,12 @@ bool ice_is_prof_rule(enum ice_sw_tunnel_type type)
{
switch (type) {
case ICE_SW_TUN_AND_NON_TUN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
case ICE_SW_TUN_PROFID_IPV6_ESP:
case ICE_SW_TUN_PROFID_IPV6_AH:
case ICE_SW_TUN_PROFID_MAC_IPV6_L2TPV3:
@@ -8396,8 +8582,27 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
return;
}
+ if (tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6 ||
+ tun_type == ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6) {
+ if (tcp) {
+ *pkt = dummy_udp_tun_ipv6_tcp_packet;
+ *pkt_len = sizeof(dummy_udp_tun_ipv6_tcp_packet);
+ *offsets = dummy_udp_tun_ipv6_tcp_packet_offsets;
+ return;
+ }
+
+ *pkt = dummy_udp_tun_ipv6_udp_packet;
+ *pkt_len = sizeof(dummy_udp_tun_ipv6_udp_packet);
+ *offsets = dummy_udp_tun_ipv6_udp_packet_offsets;
+ return;
+ }
+
if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
tun_type == ICE_SW_TUN_VXLAN_GPE || tun_type == ICE_SW_TUN_UDP ||
+ tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN ||
+ tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4 ||
+ tun_type == ICE_SW_TUN_PROFID_IPV6_VXLAN ||
+ tun_type == ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4 ||
tun_type == ICE_SW_TUN_GENEVE_VLAN ||
tun_type == ICE_SW_TUN_VXLAN_VLAN) {
if (tcp) {
@@ -8613,6 +8818,12 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type,
case ICE_SW_TUN_AND_NON_TUN:
case ICE_SW_TUN_VXLAN_GPE:
case ICE_SW_TUN_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
case ICE_SW_TUN_VXLAN_VLAN:
case ICE_SW_TUN_UDP:
if (!ice_get_open_tunnel_port(hw, TNL_VXLAN, &open_port))
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index a2b3c80107..efb9399b77 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -20,6 +20,18 @@
#define ICE_PROFID_IPV4_UDP 5
#define ICE_PROFID_IPV6_TCP 7
#define ICE_PROFID_IPV6_UDP 8
+#define ICE_PROFID_IPV4_TUN_M_IPV4_TCP 10
+#define ICE_PROFID_IPV4_TUN_M_IPV4_UDP 11
+#define ICE_PROFID_IPV4_TUN_M_IPV4_OTHER 12
+#define ICE_PROFID_IPV6_TUN_M_IPV4_TCP 16
+#define ICE_PROFID_IPV6_TUN_M_IPV4_UDP 17
+#define ICE_PROFID_IPV6_TUN_M_IPV4_OTHER 18
+#define ICE_PROFID_IPV4_TUN_M_IPV6_TCP 22
+#define ICE_PROFID_IPV4_TUN_M_IPV6_UDP 23
+#define ICE_PROFID_IPV4_TUN_M_IPV6_OTHER 24
+#define ICE_PROFID_IPV6_TUN_M_IPV6_TCP 25
+#define ICE_PROFID_IPV6_TUN_M_IPV6_UDP 26
+#define ICE_PROFID_IPV6_TUN_M_IPV6_OTHER 27
#define ICE_PROFID_PPPOE_PAY 34
#define ICE_PROFID_PPPOE_IPV4_TCP 35
#define ICE_PROFID_PPPOE_IPV4_UDP 36
--
2.33.1
* [PATCH 12/39] net/ice: add VXLAN support for switch filter
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (10 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 11/39] net/ice/base: add VXLAN support for switch filter Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 13/39] common/iavf: support flushing rules and reporting DCF id Kevin Liu
` (27 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
1. Add profile rules for VXLAN to the switch filter, including:
pattern_eth_ipv4_udp_vxlan_any
pattern_eth_ipv6_udp_vxlan_any
pattern_eth_ipv4_udp_vxlan_eth_ipv4
pattern_eth_ipv4_udp_vxlan_eth_ipv6
pattern_eth_ipv6_udp_vxlan_eth_ipv4
pattern_eth_ipv6_udp_vxlan_eth_ipv6
2. Add common rules for VXLAN to the switch filter, including:
+-----------------+-----------------------------------------------------+
| Pattern | Input Set |
+-----------------+-----------------------------------------------------+
| ipv4_vxlan_ipv4 | vni, inner dmac, inner dst/src ip, outer dst/src ip |
| ipv4_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv4 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
+-----------------+-----------------------------------------------------+
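For illustration (testpmd flow syntax; the addresses, VNI and queue
below are made up for this example and are not part of the patch),
an ipv4_vxlan_ipv4 rule covering that input set could look like:
flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.1 / udp /
vxlan vni is 2 / eth dst is 00:11:22:33:44:55 / ipv4 src is 10.0.0.1
dst is 10.0.0.2 / end actions queue index 3 / end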
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_generic_flow.c | 20 ++++++++++
drivers/net/ice/ice_generic_flow.h | 4 ++
drivers/net/ice/ice_switch_filter.c | 59 +++++++++++++++++++++++++++--
3 files changed, 80 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 53b1c0b69a..1433094ed4 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -375,6 +375,26 @@ enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_ipv4_icmp[] = {
RTE_FLOW_ITEM_TYPE_END,
};
+/* IPv4 VXLAN ANY */
+enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_any[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_VXLAN,
+ RTE_FLOW_ITEM_TYPE_ANY,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+/* IPv6 VXLAN ANY */
+enum rte_flow_item_type pattern_eth_ipv6_udp_vxlan_any[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_VXLAN,
+ RTE_FLOW_ITEM_TYPE_ANY,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
/* IPv4 VXLAN MAC IPv4 */
enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv4[] = {
RTE_FLOW_ITEM_TYPE_ETH,
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index 11f51a5c15..def7e2d6d6 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -175,6 +175,10 @@ extern enum rte_flow_item_type pattern_eth_ipv6_icmp6[];
extern enum rte_flow_item_type pattern_eth_vlan_ipv6_icmp6[];
extern enum rte_flow_item_type pattern_eth_qinq_ipv6_icmp6[];
+/* IPv4/IPv6 VXLAN ANY */
+extern enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_any[];
+extern enum rte_flow_item_type pattern_eth_ipv6_udp_vxlan_any[];
+
/* IPv4 VXLAN IPv4 */
extern enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_ipv4[];
extern enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_ipv4_udp[];
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 36c9bffb73..e90e109eca 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -85,6 +85,19 @@
#define ICE_SW_INSET_DIST_VXLAN_IPV4 ( \
ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | ICE_INSET_DMAC | \
ICE_INSET_VXLAN_VNI)
+#define ICE_SW_INSET_DIST_IPV4_VXLAN_IPV4 ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST)
+#define ICE_SW_INSET_DIST_IPV4_VXLAN_IPV6 ( \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_IPV6_SRC | ICE_INSET_IPV6_DST)
+#define ICE_SW_INSET_DIST_IPV6_VXLAN_IPV4 ( \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST)
+#define ICE_SW_INSET_DIST_IPV6_VXLAN_IPV6 ( \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_IPV6_SRC | ICE_INSET_IPV6_DST)
#define ICE_SW_INSET_DIST_NVGRE_IPV4_TCP ( \
ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
ICE_INSET_TCP_SRC_PORT | ICE_INSET_TCP_DST_PORT | \
@@ -112,6 +125,9 @@
ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
ICE_INSET_UDP_SRC_PORT | ICE_INSET_UDP_DST_PORT | \
ICE_INSET_IPV4_TOS)
+#define ICE_SW_INSET_PERM_TUNNEL_IPV6 ( \
+ ICE_INSET_IPV6_SRC | ICE_INSET_IPV6_DST | \
+ ICE_INSET_IPV6_NEXT_HDR | ICE_INSET_IPV6_TC)
#define ICE_SW_INSET_MAC_PPPOE ( \
ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION)
@@ -217,9 +233,14 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_VXLAN_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_VXLAN_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_VXLAN_IPV4_TCP, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV4_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv4, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_NVGRE_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_udp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_tcp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
@@ -301,9 +322,14 @@ ice_pattern_match_item ice_switch_pattern_perm_list[] = {
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV4_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv4, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_udp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_tcp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
@@ -566,6 +592,11 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
bool inner_ipv6_valid = 0;
bool inner_tcp_valid = 0;
bool inner_udp_valid = 0;
+ bool ipv4_ipv4_valid = 0;
+ bool ipv4_ipv6_valid = 0;
+ bool ipv6_ipv4_valid = 0;
+ bool ipv6_ipv6_valid = 0;
+ bool any_valid = 0;
uint16_t j, k, t = 0;
if (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
@@ -586,6 +617,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ANY:
*tun_type = ICE_SW_TUN_AND_NON_TUN;
+ any_valid = 1;
break;
case RTE_FLOW_ITEM_TYPE_ETH:
@@ -654,6 +686,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
case RTE_FLOW_ITEM_TYPE_IPV4:
ipv4_spec = item->spec;
ipv4_mask = item->mask;
+ if (ipv4_valid)
+ ipv4_ipv4_valid = 1;
+ if (ipv6_valid)
+ ipv6_ipv4_valid = 1;
if (tunnel_valid) {
inner_ipv4_valid = 1;
input = &inner_input_set;
@@ -734,6 +770,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
case RTE_FLOW_ITEM_TYPE_IPV6:
ipv6_spec = item->spec;
ipv6_mask = item->mask;
+ if (ipv4_valid)
+ ipv4_ipv6_valid = 1;
+ if (ipv6_valid)
+ ipv6_ipv6_valid = 1;
if (tunnel_valid) {
inner_ipv6_valid = 1;
input = &inner_input_set;
@@ -1577,9 +1617,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
}
if (*tun_type == ICE_NON_TUN) {
- if (vxlan_valid)
- *tun_type = ICE_SW_TUN_VXLAN;
- else if (nvgre_valid)
+ if (nvgre_valid)
*tun_type = ICE_SW_TUN_NVGRE;
else if (ipv4_valid && tcp_valid)
*tun_type = ICE_SW_IPV4_TCP;
@@ -1591,6 +1629,21 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
*tun_type = ICE_SW_IPV6_UDP;
}
+ if (vxlan_valid) {
+ if (ipv4_ipv4_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4;
+ else if (ipv4_ipv6_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6;
+ else if (ipv6_ipv4_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4;
+ else if (ipv6_ipv6_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6;
+ else if (ipv6_valid && any_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN;
+ else if (ipv4_valid && any_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN;
+ }
+
if (input_set_byte > MAX_INPUT_SET_BYTE) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
--
2.33.1
* [PATCH 13/39] common/iavf: support flushing rules and reporting DCF id
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (11 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 12/39] net/ice: " Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 14/39] net/ice/base: fix ethertype filter input set Kevin Liu
` (26 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add a virtual channel opcode for flushing DCF rules.
Add a virtual channel event for the PF to report the DCF id.
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/common/iavf/virtchnl.h | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 3e44eca7d8..6e2a24b281 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -164,6 +164,12 @@ enum virtchnl_ops {
VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
+
+ /**
+ * To reduce the risk of future compatibility issues,
+ * set VIRTCHNL_OP_DCF_RULE_FLUSH carefully by using a special value.
+ */
+ VIRTCHNL_OP_DCF_RULE_FLUSH = 6000,
VIRTCHNL_OP_MAX,
};
@@ -1424,6 +1430,12 @@ enum virtchnl_event_codes {
VIRTCHNL_EVENT_RESET_IMPENDING,
VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE,
+
+ /**
+ * To reduce the risk of future compatibility issues,
+ * set VIRTCHNL_EVENT_DCF_VSI_INFO carefully by using a special value.
+ */
+ VIRTCHNL_EVENT_DCF_VSI_INFO = 1000,
};
#define PF_EVENT_SEVERITY_INFO 0
@@ -2200,6 +2212,7 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
*/
valid_len = msglen;
break;
+ case VIRTCHNL_OP_DCF_RULE_FLUSH:
case VIRTCHNL_OP_DCF_DISABLE:
case VIRTCHNL_OP_DCF_GET_VSI_MAP:
case VIRTCHNL_OP_DCF_GET_PKG_INFO:
--
2.33.1
* [PATCH 14/39] net/ice/base: fix ethertype filter input set
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (12 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 13/39] common/iavf: support flushing rules and reporting DCF id Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 15/39] net/iavf: support checking if device is an MDCF instance Kevin Liu
` (25 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add destination and source MAC addresses to the input set of the ethertype filter.
For example:
flow create 0 ingress pattern eth dst is 00:11:22:33:44:55
type is 0x802 / end actions queue index 2 / end
This flow will result in all matched ingress packets being
forwarded to queue 2.
Fixes: 1f70fb3e958a ("net/ice/base: support flow director for non-IP packets")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_fdir.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index ae76361102..0a1d45a9d7 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -3935,6 +3935,9 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
input->ip.v6.dst_port);
break;
case ICE_FLTR_PTYPE_NON_IP_L2:
+ ice_pkt_insert_mac_addr(loc, input->ext_data.dst_mac);
+ ice_pkt_insert_mac_addr(loc + ETH_ALEN,
+ input->ext_data.src_mac);
ice_pkt_insert_u16(loc, ICE_MAC_ETHTYPE_OFFSET,
input->ext_data.ether_type);
break;
--
2.33.1
* [PATCH 15/39] net/iavf: support checking if device is an MDCF instance
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (13 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 14/39] net/ice/base: fix ethertype filter input set Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 16/39] net/ice: support MDCF(multi-DCF) instance Kevin Liu
` (24 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
An MDCF instance (with 'mdcf' in its device parameter list)
should not be bound to the iavf PMD.
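For example (hypothetical PCI address), a port started with the
following devargs is expected to be skipped by the iavf PMD and
claimed by the DCF PMD instead:
dpdk-testpmd -a 18:01.0,cap=mdcf -- -i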
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/iavf/iavf_ethdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index d6190ac24a..afc1ee53e7 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2678,7 +2678,7 @@ static int
iavf_dcf_cap_check_handler(__rte_unused const char *key,
const char *value, __rte_unused void *opaque)
{
- if (strcmp(value, "dcf"))
+ if (strcmp(value, "dcf") && strcmp(value, "mdcf"))
return -1;
return 0;
--
2.33.1
* [PATCH 16/39] net/ice: support MDCF(multi-DCF) instance
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (14 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 15/39] net/iavf: support checking if device is an MDCF instance Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 17/39] net/ice/base: support custom DDP buildin recipe Kevin Liu
` (23 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Steven Zou, Alvin Zhang
Add MDCF flow rule flushing ops.
Support parsing the command-line device capability 'mdcf'.
Support the PF reporting the current DCF id and disabling the DCF
capability of an MDCF instance.
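A minimal usage sketch (testpmd, hypothetical PCI addresses): with
two MDCF instances, flushing flows on one instance issues
VIRTCHNL_OP_DCF_RULE_FLUSH to the PF, as handled in the flow flush
path below:
dpdk-testpmd -a 18:01.0,cap=mdcf -a 18:01.1,cap=mdcf -- -i
testpmd> flow flush 0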
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 23 ++++++-
drivers/net/ice/ice_dcf.h | 3 +
drivers/net/ice/ice_dcf_ethdev.c | 99 ++++++++++++++++-------------
drivers/net/ice/ice_dcf_parent.c | 8 +++
drivers/net/ice/ice_generic_flow.c | 10 +++
drivers/net/ice/ice_switch_filter.c | 5 +-
6 files changed, 100 insertions(+), 48 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 9c2f13cf72..7987b6261d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -681,7 +681,8 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
if (ice_dcf_get_vf_vsi_map(hw) < 0) {
PMD_INIT_LOG(ERR, "Failed to get VF VSI map");
- ice_dcf_mode_disable(hw);
+ if (!hw->multi_inst)
+ ice_dcf_mode_disable(hw);
goto err_alloc;
}
@@ -759,8 +760,8 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
rte_intr_disable(intr_handle);
rte_intr_callback_unregister(intr_handle,
ice_dcf_dev_interrupt_handler, hw);
-
- ice_dcf_mode_disable(hw);
+ if (!hw->multi_inst)
+ ice_dcf_mode_disable(hw);
iavf_shutdown_adminq(&hw->avf);
rte_free(hw->arq_buf);
@@ -1187,3 +1188,19 @@ ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
ice_dcf_enable_irq0(hw);
return ret;
}
+
+int
+ice_dcf_flush_rules(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int err = 0;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DCF_RULE_FLUSH;
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(WARNING, "fail to execute command OP_DCF_RULE_FLUSH, DCF role must be preempted.");
+
+ return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 8cf17e7700..42f4404a37 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -98,6 +98,8 @@ struct ice_dcf_hw {
uint16_t vsi_id;
struct rte_eth_dev *eth_dev;
+ bool multi_inst;
+ bool dcf_replaced;
uint8_t *rss_lut;
uint8_t *rss_key;
uint64_t supported_rxdid;
@@ -142,5 +144,6 @@ void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
void ice_dcf_tm_conf_uninit(struct rte_eth_dev *dev);
int ice_dcf_replay_vf_bw(struct ice_dcf_hw *hw, uint16_t vf_id);
int ice_dcf_clear_bw(struct ice_dcf_hw *hw);
+int ice_dcf_flush_rules(struct ice_dcf_hw *hw);
#endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d4bfa182a4..90787d8c49 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -19,6 +19,7 @@
#include <rte_malloc.h>
#include <rte_memzone.h>
#include <rte_dev.h>
+#include <rte_ethdev.h>
#include <iavf_devids.h>
@@ -1788,12 +1789,66 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mtu_set = ice_dcf_dev_mtu_set,
};
+static int
+ice_dcf_cap_check_handler(__rte_unused const char *key,
+ const char *value, void *opaque)
+{
+ bool *mi = opaque;
+
+ if (!strcmp(value, "dcf")) {
+ *mi = 0;
+ return 0;
+ }
+ if (!strcmp(value, "mdcf")) {
+ *mi = 1;
+ return 0;
+ }
+
+ return -1;
+}
+
+static int
+ice_dcf_cap_selected(struct ice_dcf_adapter *adapter,
+ struct rte_devargs *devargs)
+{
+ struct ice_adapter *ad = &adapter->parent;
+ struct rte_kvargs *kvlist;
+ const char *key_cap = "cap";
+ int ret = 0;
+
+ if (devargs == NULL)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return 0;
+
+ if (!rte_kvargs_count(kvlist, key_cap))
+ goto exit;
+
+ /* dcf capability selected when there's a key-value pair: cap=dcf */
+ if (rte_kvargs_process(kvlist, key_cap,
+ ice_dcf_cap_check_handler,
+ &adapter->real_hw.multi_inst) < 0)
+ goto exit;
+
+ ret = 1;
+
+exit:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
static int
ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
struct ice_dcf_adapter *adapter = eth_dev->data->dev_private;
struct ice_adapter *parent_adapter = &adapter->parent;
+ if (!ice_dcf_cap_selected(adapter, pci_dev->device.devargs))
+ return 1;
+
eth_dev->dev_ops = &ice_dcf_eth_dev_ops;
eth_dev->rx_pkt_burst = ice_dcf_recv_pkts;
eth_dev->tx_pkt_burst = ice_dcf_xmit_pkts;
@@ -1829,45 +1884,6 @@ ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
return 0;
}
-static int
-ice_dcf_cap_check_handler(__rte_unused const char *key,
- const char *value, __rte_unused void *opaque)
-{
- if (strcmp(value, "dcf"))
- return -1;
-
- return 0;
-}
-
-static int
-ice_dcf_cap_selected(struct rte_devargs *devargs)
-{
- struct rte_kvargs *kvlist;
- const char *key = "cap";
- int ret = 0;
-
- if (devargs == NULL)
- return 0;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return 0;
-
- if (!rte_kvargs_count(kvlist, key))
- goto exit;
-
- /* dcf capability selected when there's a key-value pair: cap=dcf */
- if (rte_kvargs_process(kvlist, key,
- ice_dcf_cap_check_handler, NULL) < 0)
- goto exit;
-
- ret = 1;
-
-exit:
- rte_kvargs_free(kvlist);
- return ret;
-}
-
static int
eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
struct rte_pci_device *pci_dev)
@@ -1880,9 +1896,6 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
uint16_t dcf_vsi_id;
int i, ret;
- if (!ice_dcf_cap_selected(pci_dev->device.devargs))
- return 1;
-
ret = rte_eth_devargs_parse(pci_dev->device.devargs->args, &eth_da);
if (ret)
return ret;
@@ -1995,4 +2008,4 @@ static struct rte_pci_driver rte_ice_dcf_pmd = {
RTE_PMD_REGISTER_PCI(net_ice_dcf, rte_ice_dcf_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_ice_dcf, pci_id_ice_dcf_map);
RTE_PMD_REGISTER_KMOD_DEP(net_ice_dcf, "* igb_uio | vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(net_ice_dcf, "cap=dcf");
+RTE_PMD_REGISTER_PARAM_STRING(net_ice_dcf, "cap=dcf|mdcf");
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 2f96dedcce..2aa69c7368 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -125,6 +125,9 @@ ice_dcf_vsi_update_service_handler(void *param)
pthread_detach(pthread_self());
+ if (hw->multi_inst)
+ return NULL;
+
rte_delay_us(ICE_DCF_VSI_UPDATE_SERVICE_INTERVAL);
rte_spinlock_lock(&vsi_update_lock);
@@ -269,6 +272,10 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
start_vsi_reset_thread(dcf_hw, true,
pf_msg->event_data.vf_vsi_map.vf_id);
break;
+ case VIRTCHNL_EVENT_DCF_VSI_INFO:
+ if (dcf_hw->vsi_id != pf_msg->event_data.vf_vsi_map.vsi_id)
+ dcf_hw->dcf_replaced = true;
+ break;
default:
PMD_DRV_LOG(ERR, "Unknown event received %u", pf_msg->event);
break;
@@ -436,6 +443,7 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
parent_hw->aq_send_cmd_fn = ice_dcf_send_aq_cmd;
parent_hw->aq_send_cmd_param = &adapter->real_hw;
parent_hw->dcf_enabled = true;
+ hw->dcf_replaced = false;
err = ice_dcf_init_parent_hw(parent_hw);
if (err) {
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 1433094ed4..2ebe9a1cce 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -17,6 +17,7 @@
#include "ice_ethdev.h"
#include "ice_generic_flow.h"
+#include "ice_dcf.h"
/**
* Non-pipeline mode, fdir and switch both used as distributor,
@@ -2533,10 +2534,16 @@ ice_flow_flush(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_adapter *ad =
+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct ice_dcf_hw *hw = ad->hw.aq_send_cmd_param;
struct rte_flow *p_flow;
void *temp;
int ret = 0;
+ if (ad->hw.dcf_enabled && hw->dcf_replaced)
+ return ret;
+
RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
ret = ice_flow_destroy(dev, p_flow, error);
if (ret) {
@@ -2547,6 +2554,9 @@ ice_flow_flush(struct rte_eth_dev *dev,
}
}
+ if (ad->hw.dcf_enabled && hw->multi_inst)
+ return ice_dcf_flush_rules(ad->hw.aq_send_cmd_param);
+
return ret;
}
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index e90e109eca..1e8625e71e 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -497,6 +497,7 @@ ice_switch_destroy(struct ice_adapter *ad,
struct rte_flow *flow,
struct rte_flow_error *error)
{
+ struct ice_dcf_hw *dcf_hw = ad->hw.aq_send_cmd_param;
struct ice_hw *hw = &ad->hw;
int ret;
struct ice_switch_filter_conf *filter_conf_ptr;
@@ -524,7 +525,7 @@ ice_switch_destroy(struct ice_adapter *ad,
}
ret = ice_rem_adv_rule_by_id(hw, &filter_conf_ptr->sw_query_data);
- if (ret) {
+ if (ret && !(hw->dcf_enabled && dcf_hw->multi_inst)) {
if (ice_dcf_adminq_need_retry(ad))
ret = -EAGAIN;
else
@@ -537,7 +538,7 @@ ice_switch_destroy(struct ice_adapter *ad,
}
ice_switch_filter_rule_free(flow);
- return ret;
+ return 0;
}
static bool
--
2.33.1
* [PATCH 17/39] net/ice/base: support custom DDP buildin recipe
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (15 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 16/39] net/ice: support MDCF(multi-DCF) instance Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 18/39] net/ice: support buildin recipe configuration Kevin Liu
` (22 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add a control flag and a data pointer for custom DDP package buildin
recipes.
Initialize the data pointer of the buildin recipes.
Support matching rule lookup info against the buildin recipes.
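As a sketch of how the table is meant to be populated (hypothetical;
the patch itself only programs recipe 10, whose single lookup word of
prot_id 32, offset 8 and mask 0x00ff appears to select the IPv4
protocol byte), a custom DDP package pre-programming a second recipe
could be mirrored like this:

/* Hypothetical sketch, not part of this patch: the recipe index,
 * protocol IDs, offsets and masks below are made-up values.
 */
static int ice_buildin_recipe_add_example(struct ice_hw *hw)
{
	struct ice_switch_info *sw = hw->switch_info;
	struct ice_sw_recipe *recipe;

	if (!sw->buildin_recipes)
		return ICE_ERR_PARAM;

	recipe = &sw->buildin_recipes[11];
	recipe->is_root = 1;
	recipe->lkup_exts.n_val_words = 2;
	/* word 0: full 16-bit match at offset 0 of protocol 32 */
	recipe->lkup_exts.field_mask[0] = 0xffff;
	recipe->lkup_exts.fv_words[0].off = 0;
	recipe->lkup_exts.fv_words[0].prot_id = 32;
	/* word 1: low byte of the 16-bit word at offset 8 */
	recipe->lkup_exts.field_mask[1] = 0x00ff;
	recipe->lkup_exts.fv_words[1].off = 8;
	recipe->lkup_exts.fv_words[1].prot_id = 32;

	return ICE_SUCCESS;
}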
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_common.c | 25 +++++++++++++++
drivers/net/ice/base/ice_switch.c | 52 ++++++++++++++++++++++++++++++-
drivers/net/ice/base/ice_type.h | 2 ++
3 files changed, 78 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index db87bacd97..5d5ce894ff 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -732,6 +732,28 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
}
+static int ice_buildin_recipe_init(struct ice_hw *hw)
+{
+ struct ice_switch_info *sw = hw->switch_info;
+ struct ice_sw_recipe *recipe;
+
+ sw->buildin_recipes = ice_malloc(hw,
+ sizeof(sw->buildin_recipes[0]) * ICE_MAX_NUM_RECIPES);
+
+ if (!sw->buildin_recipes)
+ return ICE_ERR_NO_MEMORY;
+
+ recipe = &sw->buildin_recipes[10];
+ recipe->is_root = 1;
+
+ recipe->lkup_exts.n_val_words = 1;
+ recipe->lkup_exts.field_mask[0] = 0x00ff;
+ recipe->lkup_exts.fv_words[0].off = 8;
+ recipe->lkup_exts.fv_words[0].prot_id = 32;
+
+ return ICE_SUCCESS;
+}
+
/**
* ice_init_fltr_mgmt_struct - initializes filter management list and locks
* @hw: pointer to the HW struct
@@ -752,6 +774,8 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
INIT_LIST_HEAD(&sw->vsi_list_map_head);
sw->prof_res_bm_init = 0;
+ ice_buildin_recipe_init(hw);
+
status = ice_init_def_sw_recp(hw, &hw->switch_info->recp_list);
if (status) {
ice_free(hw, hw->switch_info);
@@ -822,6 +846,7 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
ice_free(hw, recps[i].root_buf);
}
ice_rm_sw_replay_rule_info(hw, sw);
+ ice_free(hw, sw->buildin_recipes);
ice_free(hw, sw->recp_list);
ice_free(hw, sw);
}
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index b0c50c8f40..d9bb1e7c31 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -6910,6 +6910,47 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
{ ICE_VLAN_IN, ICE_VLAN_OL_HW },
};
+static u16 buildin_recipe_get(struct ice_switch_info *sw,
+ struct ice_prot_lkup_ext *lkup_exts)
+{
+ int i;
+
+ if (!sw->buildin_recipes)
+ return ICE_MAX_NUM_RECIPES;
+
+ for (i = 10; i < ICE_MAX_NUM_RECIPES; i++) {
+ struct ice_sw_recipe *recp = &sw->buildin_recipes[i];
+ struct ice_fv_word *a = lkup_exts->fv_words;
+ struct ice_fv_word *b = recp->lkup_exts.fv_words;
+ u16 *c = recp->lkup_exts.field_mask;
+ u16 *d = lkup_exts->field_mask;
+ bool found = true;
+ u8 p, q;
+
+ if (!recp->is_root)
+ continue;
+
+ if (recp->lkup_exts.n_val_words != lkup_exts->n_val_words)
+ continue;
+
+ for (p = 0; p < lkup_exts->n_val_words; p++) {
+ for (q = 0; q < recp->lkup_exts.n_val_words; q++) {
+ if (a[p].off == b[q].off &&
+ a[p].prot_id == b[q].prot_id &&
+ d[p] == c[q])
+ break;
+ }
+ if (q >= recp->lkup_exts.n_val_words) {
+ found = false;
+ break;
+ }
+ }
+ if (found)
+ return i;
+ }
+ return ICE_MAX_NUM_RECIPES;
+}
+
/**
* ice_find_recp - find a recipe
* @hw: pointer to the hardware structure
@@ -6922,8 +6963,15 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
{
bool refresh_required = true;
struct ice_sw_recipe *recp;
+ u16 buildin_rid;
u8 i;
+ if (hw->use_buildin_recipe) {
+ buildin_rid = buildin_recipe_get(hw->switch_info, lkup_exts);
+ if (buildin_rid < ICE_MAX_NUM_RECIPES)
+ return buildin_rid;
+ }
+
/* Walk through existing recipes to find a match */
recp = hw->switch_info->recp_list;
for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
@@ -9457,8 +9505,10 @@ ice_rem_adv_rule_by_id(struct ice_hw *hw,
struct ice_switch_info *sw;
sw = hw->switch_info;
- if (!sw->recp_list[remove_entry->rid].recp_created)
+ if (!sw->buildin_recipes[remove_entry->rid].is_root &&
+ !sw->recp_list[remove_entry->rid].recp_created)
return ICE_ERR_PARAM;
+
list_head = &sw->recp_list[remove_entry->rid].filt_rules;
LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
list_entry) {
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index d81984633a..48144ea065 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -1107,6 +1107,7 @@ struct ice_switch_info {
u16 max_used_prof_index;
ice_declare_bitmap(prof_res_bm[ICE_MAX_NUM_PROFILES], ICE_MAX_FV_WORDS);
+ struct ice_sw_recipe *buildin_recipes;
};
/* Port hardware description */
@@ -1263,6 +1264,7 @@ struct ice_hw {
ice_declare_bitmap(hw_ptype, ICE_FLOW_PTYPE_MAX);
u8 dvm_ena;
__le16 io_expander_handle;
+ u8 use_buildin_recipe;
};
/* Statistics collected by each port, VSI, VEB, and S-channel */
--
2.33.1
* [PATCH 18/39] net/ice: support buildin recipe configuration
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (16 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 17/39] net/ice/base: support custom DDP buildin recipe Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 19/39] net/ice/base: support IPv6 GRE UDP pattern Kevin Liu
` (21 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support parsing the 'br' (buildin recipe) parameter in the device parameter list.
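For example (hypothetical PCI address), buildin recipes are enabled
together with the DCF capability via:
dpdk-testpmd -a 18:01.0,cap=dcf,br=1 -- -i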
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 90787d8c49..a165f74e26 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1807,6 +1807,23 @@ ice_dcf_cap_check_handler(__rte_unused const char *key,
return -1;
}
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+ bool *i = (bool *)args;
+ int num = atoi(value);
+
+ if (num != 0 && num != 1) {
+ PMD_DRV_LOG(WARNING,
+ "invalid value:\"%s\" for key:\"%s\", must be 0 or 1",
+ value, key);
+ return -1;
+ }
+
+ *i = (bool)num;
+ return 0;
+}
+
static int
ice_dcf_cap_selected(struct ice_dcf_adapter *adapter,
struct rte_devargs *devargs)
@@ -1814,7 +1831,9 @@ ice_dcf_cap_selected(struct ice_dcf_adapter *adapter,
struct ice_adapter *ad = &adapter->parent;
struct rte_kvargs *kvlist;
const char *key_cap = "cap";
+ const char *key_br = "br";
int ret = 0;
+ bool br = 0;
if (devargs == NULL)
return 0;
@@ -1832,6 +1851,11 @@ ice_dcf_cap_selected(struct ice_dcf_adapter *adapter,
&adapter->real_hw.multi_inst) < 0)
goto exit;
+ /* buildin recipe enabled when there's a key-value pair: br=1 */
+ if (rte_kvargs_process(kvlist, key_br, parse_bool, &br) < 0)
+ goto exit;
+
+ ad->hw.use_buildin_recipe = br;
ret = 1;
exit:
@@ -2008,4 +2032,4 @@ static struct rte_pci_driver rte_ice_dcf_pmd = {
RTE_PMD_REGISTER_PCI(net_ice_dcf, rte_ice_dcf_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_ice_dcf, pci_id_ice_dcf_map);
RTE_PMD_REGISTER_KMOD_DEP(net_ice_dcf, "* igb_uio | vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(net_ice_dcf, "cap=dcf|mdcf");
+RTE_PMD_REGISTER_PARAM_STRING(net_ice_dcf, "cap=dcf|mdcf br=<1|0>");
--
2.33.1
* [PATCH 19/39] net/ice/base: support IPv6 GRE UDP pattern
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (17 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 18/39] net/ice: support buildin recipe configuration Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 20/39] net/ice: support IPv6 NVGRE tunnel Kevin Liu
` (20 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add support (training packet and its offsets, definitions, and
pattern matching) for the IPv6 GRE UDP pattern.
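As a usage sketch (testpmd syntax; the concrete rule is illustrative,
not taken from this patch), a rule steering outer-IPv6 GRE traffic
could be:
flow create 0 ingress pattern eth / ipv6 proto is 47 / end actions
queue index 2 / end
where 47 (0x2F) is the GRE protocol number; the next_hdr mask must be
0xFF for the GRE training packet to be selected.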
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_protocol_type.h | 1 +
drivers/net/ice/base/ice_switch.c | 43 +++++++++++++++++++++++-
2 files changed, 43 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index d6332c5690..eec9f27823 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -44,6 +44,7 @@ enum ice_protocol_type {
ICE_GENEVE,
ICE_VXLAN_GPE,
ICE_NVGRE,
+ ICE_GRE,
ICE_GTP,
ICE_PPPOE,
ICE_PFCP,
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index d9bb1e7c31..e3658117fc 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -12,6 +12,7 @@
#define ICE_MAX_VLAN_ID 0xFFF
#define ICE_IPV6_ETHER_ID 0x86DD
#define ICE_IPV4_NVGRE_PROTO_ID 0x002F
+#define ICE_IPV6_GRE_PROTO_ID 0x002F
#define ICE_PPP_IPV6_PROTO_ID 0x0057
#define ICE_TCP_PROTO_ID 0x06
#define ICE_GTPU_PROFILE 24
@@ -129,6 +130,34 @@ static const u8 dummy_gre_udp_packet[] = {
0x00, 0x08, 0x00, 0x00,
};
+static const struct ice_dummy_pkt_offsets
+dummy_ipv6_gre_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV6_OFOS, 14 },
+ { ICE_GRE, 54 },
+ { ICE_IPV6_IL, 58 },
+ { ICE_UDP_ILOS, 98 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_ipv6_gre_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x86, 0xdd, 0x60, 0x00,
+ 0x00, 0x00, 0x00, 0x36, 0x2f, 0x40, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
+ 0x86, 0xdd, 0x60, 0x00, 0x00, 0x00, 0x00, 0x0a,
+ 0x11, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a,
+ 0xff, 0xd8, 0x00, 0x00,
+};
+
static const struct ice_dummy_pkt_offsets dummy_udp_tun_tcp_packet_offsets[] = {
{ ICE_MAC_OFOS, 0 },
{ ICE_ETYPE_OL, 12 },
@@ -8255,8 +8284,13 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
udp = true;
else if (lkups[i].type == ICE_TCP_IL)
tcp = true;
- else if (lkups[i].type == ICE_IPV6_OFOS)
+ else if (lkups[i].type == ICE_IPV6_OFOS) {
ipv6 = true;
+ if (lkups[i].h_u.ipv6_hdr.next_hdr ==
+ ICE_IPV6_GRE_PROTO_ID &&
+ lkups[i].m_u.ipv6_hdr.next_hdr == 0xFF)
+ gre = true;
+ }
else if (lkups[i].type == ICE_VLAN_OFOS)
vlan = true;
else if (lkups[i].type == ICE_ETYPE_OL &&
@@ -8616,6 +8650,13 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
return;
}
+ if (ipv6 && gre) {
+ *pkt = dummy_ipv6_gre_udp_packet;
+ *pkt_len = sizeof(dummy_ipv6_gre_udp_packet);
+ *offsets = dummy_ipv6_gre_udp_packet_offsets;
+ return;
+ }
+
if (tun_type == ICE_SW_TUN_NVGRE || gre) {
if (tcp) {
*pkt = dummy_gre_tcp_packet;
--
2.33.1
* [PATCH 20/39] net/ice: support IPv6 NVGRE tunnel
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (18 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 19/39] net/ice/base: support IPv6 GRE UDP pattern Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 21/39] net/ice: support new pattern of IPv4 Kevin Liu
` (19 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add a protocol definition and pattern matching for the IPv6 NVGRE tunnel.
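A hedged illustration (testpmd syntax, values made up): when the
outer IPv6 next header matches 0x2F, the rule is parsed with tunnel
type ICE_SW_TUN_AND_NON_TUN, for example:
flow create 0 ingress pattern eth / ipv6 dst is 2001:db8::1 proto is
47 / end actions queue index 1 / end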
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 1e8625e71e..e87baa6234 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -31,6 +31,7 @@
#define ICE_PPP_IPV4_PROTO 0x0021
#define ICE_PPP_IPV6_PROTO 0x0057
#define ICE_IPV4_PROTO_NVGRE 0x002F
+#define ICE_IPV6_PROTO_NVGRE 0x002F
#define ICE_SW_PRI_BASE 6
#define ICE_SW_INSET_ETHER ( \
@@ -804,6 +805,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
break;
}
}
+ if ((ipv6_spec->hdr.proto &
+ ipv6_mask->hdr.proto) ==
+ ICE_IPV6_PROTO_NVGRE)
+ *tun_type = ICE_SW_TUN_AND_NON_TUN;
if (ipv6_mask->hdr.proto)
*input |= ICE_INSET_IPV6_NEXT_HDR;
if (ipv6_mask->hdr.hop_limits)
--
2.33.1
* [PATCH 21/39] net/ice: support new pattern of IPv4
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (19 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 20/39] net/ice: support IPv6 NVGRE tunnel Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 22/39] net/ice/base: support new patterns of TCP and UDP Kevin Liu
` (18 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add a definition and a pattern entry for the IPv4 pattern MAC/VLAN/IPv4.
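For example (illustrative testpmd rule, values made up):
flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / vlan tci
is 100 / ipv4 dst is 192.168.0.2 / end actions queue index 2 / end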
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index e87baa6234..41086d7929 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -38,6 +38,8 @@
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
#define ICE_SW_INSET_MAC_VLAN ( \
ICE_SW_INSET_ETHER | ICE_INSET_VLAN_INNER)
+#define ICE_SW_INSET_MAC_VLAN_IPV4 ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4)
#define ICE_SW_INSET_MAC_QINQ ( \
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_VLAN_INNER | \
ICE_INSET_VLAN_OUTER)
@@ -231,6 +233,7 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv4, ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4, ICE_SW_INSET_MAC_VLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
--
2.33.1
* [PATCH 22/39] net/ice/base: support new patterns of TCP and UDP
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (20 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 21/39] net/ice: support new pattern of IPv4 Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 23/39] net/ice: " Kevin Liu
` (17 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Find training packets for the following TCP and UDP patterns:
MAC/VLAN/IPv4/TCP
MAC/VLAN/IPv4/UDP
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_switch.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index e3658117fc..75cc861e93 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -8616,6 +8616,12 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
if (tun_type == ICE_SW_IPV4_TCP) {
+ if (vlan && tcp) {
+ *pkt = dummy_vlan_tcp_packet;
+ *pkt_len = sizeof(dummy_vlan_tcp_packet);
+ *offsets = dummy_vlan_tcp_packet_offsets;
+ return;
+ }
*pkt = dummy_tcp_packet;
*pkt_len = sizeof(dummy_tcp_packet);
*offsets = dummy_tcp_packet_offsets;
@@ -8623,6 +8629,12 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
if (tun_type == ICE_SW_IPV4_UDP) {
+ if (vlan && udp) {
+ *pkt = dummy_vlan_udp_packet;
+ *pkt_len = sizeof(dummy_vlan_udp_packet);
+ *offsets = dummy_vlan_udp_packet_offsets;
+ return;
+ }
*pkt = dummy_udp_packet;
*pkt_len = sizeof(dummy_udp_packet);
*offsets = dummy_udp_packet_offsets;
--
2.33.1
* [PATCH 23/39] net/ice: support new patterns of TCP and UDP
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (21 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 22/39] net/ice/base: support new patterns of TCP and UDP Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 24/39] net/ice/base: support IPv4 GRE tunnel Kevin Liu
` (16 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definitions and pattern entries for the following TCP and UDP patterns:
MAC/VLAN/IPv4/TCP
MAC/VLAN/IPv4/UDP
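For example (illustrative testpmd rule for the first pattern; values
are made up):
flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / vlan tci
is 100 / ipv4 / tcp dst is 80 / end actions queue index 2 / end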
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 41086d7929..7c2038d089 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -62,6 +62,10 @@
ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS | \
ICE_INSET_UDP_DST_PORT | ICE_INSET_UDP_SRC_PORT)
+#define ICE_SW_INSET_MAC_VLAN_IPV4_TCP ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_VLAN_IPV4_UDP ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4_UDP)
#define ICE_SW_INSET_MAC_IPV6 ( \
ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
ICE_INSET_IPV6_TC | ICE_INSET_IPV6_HOP_LIMIT | \
@@ -234,6 +238,8 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_vlan_ipv4, ICE_SW_INSET_MAC_VLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4_tcp, ICE_SW_INSET_MAC_VLAN_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4_udp, ICE_SW_INSET_MAC_VLAN_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
--
2.33.1
* [PATCH 24/39] net/ice/base: support IPv4 GRE tunnel
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (22 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 23/39] net/ice: " Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 25/39] net/ice: support IPv4 GRE raw pattern type Kevin Liu
` (15 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definitions, training packets and code paths for the IPv4 GRE tunnel.
Ref:
https://www.ietf.org/rfc/rfc1701.html
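For reference, a sketch based on RFC 1701 (not taken from this patch;
the flag masks and helper below are assumptions): the C, K and S flag
bits control whether the checksum/offset, key and sequence number
words of struct ice_gre are present, so the header length varies:

#include <stdint.h>

/* Hypothetical helper, for illustration only. 'flags' is the first
 * 16-bit word of the GRE header in host byte order; the routing (R)
 * bit handling of RFC 1701 is omitted for brevity.
 */
static int gre_hdr_len(uint16_t flags)
{
	int len = 4;        /* mandatory flags + protocol words */

	if (flags & 0x8000) /* C: checksum and offset present */
		len += 4;
	if (flags & 0x2000) /* K: key present */
		len += 4;
	if (flags & 0x1000) /* S: sequence number present */
		len += 4;
	return len;
}

The c1k1 training packets added below carry flags 0xb000 (C, K and S
set), which gives the 16-byte GRE header seen between offsets 34 and
50 in the offset tables.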
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_flex_pipe.c | 37 ++-
drivers/net/ice/base/ice_flex_pipe.h | 3 +-
drivers/net/ice/base/ice_protocol_type.h | 15 ++
drivers/net/ice/base/ice_switch.c | 304 ++++++++++++++++++++++-
4 files changed, 332 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index f6a29f87c5..8672c41c69 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1851,6 +1851,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs,
* @ids_cnt: lookup/protocol count
* @bm: bitmap of field vectors to consider
* @fv_list: Head of a list
+ * @lkup_exts: lookup elements
*
* Finds all the field vector entries from switch block that contain
* a given protocol ID and returns a list of structures of type
@@ -1861,7 +1862,8 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs,
*/
enum ice_status
ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
- ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list)
+ ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list,
+ struct ice_prot_lkup_ext *lkup_exts)
{
struct ice_sw_fv_list_entry *fvl;
struct ice_sw_fv_list_entry *tmp;
@@ -1892,29 +1894,26 @@ ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
if (!ice_is_bit_set(bm, (u16)offset))
continue;
- for (i = 0; i < ids_cnt; i++) {
+ int found = 1;
+ for (i = 0; i < lkup_exts->n_val_words; i++) {
int j;
- /* This code assumes that if a switch field vector line
- * has a matching protocol, then this line will contain
- * the entries necessary to represent every field in
- * that protocol header.
- */
for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
- if (fv->ew[j].prot_id == prot_ids[i])
+ if (fv->ew[j].prot_id ==
+ lkup_exts->fv_words[i].prot_id &&
+ fv->ew[j].off == lkup_exts->fv_words[i].off)
break;
if (j >= hw->blk[ICE_BLK_SW].es.fvw)
- break;
- if (i + 1 == ids_cnt) {
- fvl = (struct ice_sw_fv_list_entry *)
- ice_malloc(hw, sizeof(*fvl));
- if (!fvl)
- goto err;
- fvl->fv_ptr = fv;
- fvl->profile_id = offset;
- LIST_ADD(&fvl->list_entry, fv_list);
- break;
- }
+ found = 0;
+ }
+ if (found) {
+ fvl = (struct ice_sw_fv_list_entry *)
+ ice_malloc(hw, sizeof(*fvl));
+ if (!fvl)
+ goto err;
+ fvl->fv_ptr = fv;
+ fvl->profile_id = offset;
+ LIST_ADD(&fvl->list_entry, fv_list);
}
} while (fv);
if (LIST_EMPTY(fv_list))
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 23ba45564a..a22d66f3cf 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -37,7 +37,8 @@ void
ice_init_prof_result_bm(struct ice_hw *hw);
enum ice_status
ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
- ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list);
+ ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list,
+ struct ice_prot_lkup_ext *lkup_exts);
enum ice_status
ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count);
u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld);
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index eec9f27823..ffd34606e0 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -67,6 +67,7 @@ enum ice_sw_tunnel_type {
ICE_SW_TUN_VXLAN, /* VXLAN matches only non-VLAN pkts */
ICE_SW_TUN_VXLAN_VLAN, /* VXLAN matches both VLAN and non-VLAN pkts */
ICE_SW_TUN_NVGRE,
+ ICE_SW_TUN_GRE,
ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
* and GENEVE
*/
@@ -231,6 +232,10 @@ enum ice_prot_id {
#define ICE_TUN_FLAG_VLAN_MASK 0x01
#define ICE_TUN_FLAG_FV_IND 2
+#define ICE_GRE_FLAG_MDID 22
+#define ICE_GRE_FLAG_MDID_OFF (ICE_MDID_SIZE * ICE_GRE_FLAG_MDID)
+#define ICE_GRE_FLAG_MASK 0x01C0
+
#define ICE_PROTOCOL_MAX_ENTRIES 16
/* Mapping of software defined protocol ID to hardware defined protocol ID */
@@ -371,6 +376,15 @@ struct ice_nvgre {
__be32 tni_flow;
};
+struct ice_gre {
+ __be16 flags;
+ __be16 protocol;
+ __be16 chksum;
+ __be16 offset;
+ __be32 key;
+ __be32 seqnum;
+};
+
union ice_prot_hdr {
struct ice_ether_hdr eth_hdr;
struct ice_ethtype_hdr ethertype;
@@ -381,6 +395,7 @@ union ice_prot_hdr {
struct ice_sctp_hdr sctp_hdr;
struct ice_udp_tnl_hdr tnl_hdr;
struct ice_nvgre nvgre_hdr;
+ struct ice_gre gre_hdr;
struct ice_udp_gtp_hdr gtp_hdr;
struct ice_pppoe_hdr pppoe_hdr;
struct ice_pfcp_hdr pfcp_hdr;
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 75cc861e93..b367efaf02 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -12,6 +12,7 @@
#define ICE_MAX_VLAN_ID 0xFFF
#define ICE_IPV6_ETHER_ID 0x86DD
#define ICE_IPV4_NVGRE_PROTO_ID 0x002F
+#define ICE_IPV4_GRE_PROTO_ID 0x002F
#define ICE_IPV6_GRE_PROTO_ID 0x002F
#define ICE_PPP_IPV6_PROTO_ID 0x0057
#define ICE_TCP_PROTO_ID 0x06
@@ -158,6 +159,188 @@ static const u8 dummy_ipv6_gre_udp_packet[] = {
0xff, 0xd8, 0x00, 0x00,
};
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c1k1_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 50 },
+ { ICE_TCP_IL, 70 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c1k1_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x4e, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x2f, 0x7c, 0x7e,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0xb0, 0x00, 0x08, 0x00, /* ICE_GRE 34 */
+ 0x46, 0x1e, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x45, 0x00, 0x00, 0x2a, /* ICE_IPV4_IL 50 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x06, 0x7c, 0xcb,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0x00, 0x14, 0x00, 0x50, /* ICE_TCP_IL 70 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x50, 0x02, 0x20, 0x00,
+ 0x91, 0x7a, 0x00, 0x00,
+
+ 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c1k1_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 50 },
+ { ICE_UDP_ILOS, 70 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c1k1_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x42, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x2f, 0x7c, 0x8a,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0xb0, 0x00, 0x08, 0x00, /* ICE_GRE 34 */
+ 0x46, 0x1d, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x45, 0x00, 0x00, 0x1e, /* ICE_IPV4_IL 50 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x7c, 0xcc,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0x00, 0x35, 0x00, 0x35, /* ICE_UDP_ILOS 70 */
+ 0x00, 0x0a, 0x01, 0x6e,
+
+ 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k1_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 46 },
+ { ICE_TCP_IL, 66 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k1_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x4a, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x82, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x30, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x45, 0x00,
+ 0x00, 0x2a, 0x00, 0x01, 0x00, 0x00, 0x40, 0x06,
+ 0x7c, 0xcb, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x00, 0x14, 0x00, 0x50, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x50, 0x02,
+ 0x20, 0x00, 0x91, 0x7a, 0x00, 0x00, 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k1_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 46 },
+ { ICE_UDP_ILOS, 66 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k1_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x3e, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x8e, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x30, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x45, 0x00,
+ 0x00, 0x1e, 0x00, 0x01, 0x00, 0x00, 0x40, 0x11,
+ 0x7c, 0xcc, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x00, 0x35, 0x00, 0x35, 0x00, 0x0a,
+ 0x01, 0x6e, 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k0_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 42 },
+ { ICE_TCP_IL, 62 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k0_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x46, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x86, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x10, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x45, 0x00, 0x00, 0x2a, 0x00, 0x01,
+ 0x00, 0x00, 0x40, 0x06, 0x7c, 0xcb, 0x7f, 0x00,
+ 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0x00, 0x14,
+ 0x00, 0x50, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x50, 0x02, 0x20, 0x00, 0x91, 0x7a,
+ 0x00, 0x00, 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k0_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 42 },
+ { ICE_UDP_ILOS, 62 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k0_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x3a, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x92, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x10, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x45, 0x00, 0x00, 0x1e, 0x00, 0x01,
+ 0x00, 0x00, 0x40, 0x11, 0x7c, 0xcc, 0x7f, 0x00,
+ 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0x00, 0x35,
+ 0x00, 0x35, 0x00, 0x0a, 0x01, 0x6e, 0x00, 0x00,
+};
+
static const struct ice_dummy_pkt_offsets dummy_udp_tun_tcp_packet_offsets[] = {
{ ICE_MAC_OFOS, 0 },
{ ICE_ETYPE_OL, 12 },
@@ -173,7 +356,7 @@ static const struct ice_dummy_pkt_offsets dummy_udp_tun_tcp_packet_offsets[] = {
};
static const u8 dummy_udp_tun_tcp_packet[] = {
- 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
@@ -224,7 +407,7 @@ static const struct ice_dummy_pkt_offsets dummy_udp_tun_udp_packet_offsets[] = {
};
static const u8 dummy_udp_tun_udp_packet[] = {
- 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
@@ -6892,6 +7075,7 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[ICE_PROTOCOL_LAST] = {
{ ICE_GENEVE, { 8, 10, 12, 14 } },
{ ICE_VXLAN_GPE, { 8, 10, 12, 14 } },
{ ICE_NVGRE, { 0, 2, 4, 6 } },
+ { ICE_GRE, { 0, 2, 4, 6, 8, 10, 12, 14 } },
{ ICE_GTP, { 8, 10, 12, 14, 16, 18, 20, 22 } },
{ ICE_PPPOE, { 0, 2, 4, 6 } },
{ ICE_PFCP, { 8, 10, 12, 14, 16, 18, 20, 22 } },
@@ -6927,6 +7111,7 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
{ ICE_GENEVE, ICE_UDP_OF_HW },
{ ICE_VXLAN_GPE, ICE_UDP_OF_HW },
{ ICE_NVGRE, ICE_GRE_OF_HW },
+ { ICE_GRE, ICE_GRE_OF_HW },
{ ICE_GTP, ICE_UDP_OF_HW },
{ ICE_PPPOE, ICE_PPPOE_HW },
{ ICE_PFCP, ICE_UDP_ILOS_HW },
@@ -7113,6 +7298,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
struct ice_prot_lkup_ext *lkup_exts)
{
u8 j, word, prot_id, ret_val;
+ u8 extra_byte = 0;
if (!ice_prot_type_to_id(rule->type, &prot_id))
return 0;
@@ -7125,8 +7311,15 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
/* No more space to accommodate */
if (word >= ICE_MAX_CHAIN_WORDS)
return 0;
+ if (rule->type == ICE_GRE) {
+ if (ice_prot_ext[rule->type].offs[j] == 0) {
+ if (((u16 *)&rule->h_u)[j] == 0x20)
+ extra_byte = 4;
+ continue;
+ }
+ }
lkup_exts->fv_words[word].off =
- ice_prot_ext[rule->type].offs[j];
+ ice_prot_ext[rule->type].offs[j] - extra_byte;
lkup_exts->fv_words[word].prot_id =
ice_prot_id_tbl[rule->type].protocol_id;
lkup_exts->field_mask[word] =
@@ -7670,10 +7863,12 @@ ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
* @lkups_cnt: number of protocols
* @bm: bitmap of field vectors to consider
* @fv_list: pointer to a list that holds the returned field vectors
+ * @lkup_exts: lookup elements
*/
static enum ice_status
ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
- ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list)
+ ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list,
+ struct ice_prot_lkup_ext *lkup_exts)
{
enum ice_status status;
u8 *prot_ids;
@@ -7693,7 +7888,8 @@ ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
/* Find field vectors that include all specified protocol types */
- status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, bm, fv_list);
+ status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, bm, fv_list,
+ lkup_exts);
free_mem:
ice_free(hw, prot_ids);
@@ -7729,6 +7925,10 @@ static bool ice_tun_type_match_word(enum ice_sw_tunnel_type tun_type, u16 *mask)
*mask = ICE_TUN_FLAG_MASK;
return true;
+ case ICE_SW_TUN_GRE:
+ *mask = ICE_GRE_FLAG_MASK;
+ return true;
+
case ICE_SW_TUN_GENEVE_VLAN:
case ICE_SW_TUN_VXLAN_VLAN:
*mask = ICE_TUN_FLAG_MASK & ~ICE_TUN_FLAG_VLAN_MASK;
@@ -7750,6 +7950,12 @@ ice_add_special_words(struct ice_adv_rule_info *rinfo,
struct ice_prot_lkup_ext *lkup_exts)
{
u16 mask;
+ u8 has_gre_key = 0;
+ u8 i;
+
+ for (i = 0; i < lkup_exts->n_val_words; i++)
+ if (lkup_exts->fv_words[i].prot_id == 0x40)
+ has_gre_key = 1;
/* If this is a tunneled packet, then add recipe index to match the
* tunnel bit in the packet metadata flags.
@@ -7761,6 +7967,13 @@ ice_add_special_words(struct ice_adv_rule_info *rinfo,
lkup_exts->fv_words[word].prot_id = ICE_META_DATA_ID_HW;
lkup_exts->fv_words[word].off = ICE_TUN_FLAG_MDID_OFF;
lkup_exts->field_mask[word] = mask;
+
+ if (rinfo->tun_type == ICE_SW_TUN_GRE)
+ lkup_exts->fv_words[word].off =
+ ICE_GRE_FLAG_MDID_OFF;
+
+ if (!has_gre_key)
+ lkup_exts->field_mask[word] = 0x0140;
} else {
return ICE_ERR_MAX_LIMIT;
}
@@ -7802,6 +8015,9 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo,
case ICE_SW_TUN_NVGRE:
prof_type = ICE_PROF_TUN_GRE;
break;
+ case ICE_SW_TUN_GRE:
+ prof_type = ICE_PROF_TUN_GRE;
+ break;
case ICE_SW_TUN_PPPOE:
case ICE_SW_TUN_PPPOE_QINQ:
prof_type = ICE_PROF_TUN_PPPOE;
@@ -8127,7 +8343,8 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
*/
ice_get_compat_fv_bitmap(hw, rinfo, fv_bitmap);
- status = ice_get_fv(hw, lkups, lkups_cnt, fv_bitmap, &rm->fv_list);
+ status = ice_get_fv(hw, lkups, lkups_cnt, fv_bitmap, &rm->fv_list,
+ lkup_exts);
if (status)
goto err_unroll;
@@ -8276,6 +8493,8 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
const struct ice_dummy_pkt_offsets **offsets)
{
bool tcp = false, udp = false, ipv6 = false, vlan = false;
+ bool gre_c_bit = false;
+ bool gre_k_bit = false;
bool gre = false, mpls = false;
u16 i;
@@ -8293,6 +8512,17 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
else if (lkups[i].type == ICE_VLAN_OFOS)
vlan = true;
+ else if (lkups[i].type == ICE_GRE) {
+ if (lkups[i].h_u.gre_hdr.flags & 0x20)
+ gre_k_bit = true;
+ if (lkups[i].h_u.gre_hdr.flags & 0x80)
+ gre_c_bit = true;
+ } else if (lkups[i].type == ICE_IPV4_OFOS &&
+ lkups[i].h_u.ipv4_hdr.protocol ==
+ ICE_IPV4_GRE_PROTO_ID &&
+ lkups[i].m_u.ipv4_hdr.protocol ==
+ 0xFF)
+ gre = true;
else if (lkups[i].type == ICE_ETYPE_OL &&
lkups[i].h_u.ethertype.ethtype_id ==
CPU_TO_BE16(ICE_IPV6_ETHER_ID) &&
@@ -8698,6 +8928,46 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
return;
}
+ if (tun_type == ICE_SW_TUN_GRE && tcp) {
+ if (gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c1k1_tcp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c1k1_tcp_packet);
+ *offsets = dummy_gre_rfc1701_c1k1_tcp_packet_offsets;
+ return;
+ }
+ if (!gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c0k1_tcp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k1_tcp_packet);
+ *offsets = dummy_gre_rfc1701_c0k1_tcp_packet_offsets;
+ return;
+ }
+
+ *pkt = dummy_gre_rfc1701_c0k0_tcp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k0_tcp_packet);
+ *offsets = dummy_gre_rfc1701_c0k0_tcp_packet_offsets;
+ return;
+ }
+
+ if (tun_type == ICE_SW_TUN_GRE) {
+ if (gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c1k1_udp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c1k1_udp_packet);
+ *offsets = dummy_gre_rfc1701_c1k1_udp_packet_offsets;
+ return;
+ }
+ if (!gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c0k1_udp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k1_udp_packet);
+ *offsets = dummy_gre_rfc1701_c0k1_udp_packet_offsets;
+ return;
+ }
+
+ *pkt = dummy_gre_rfc1701_c0k0_udp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k0_udp_packet);
+ *offsets = dummy_gre_rfc1701_c0k0_udp_packet_offsets;
+ return;
+ }
+
if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
tun_type == ICE_SW_TUN_VXLAN_GPE || tun_type == ICE_SW_TUN_UDP ||
tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN ||
@@ -8848,6 +9118,9 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
case ICE_NVGRE:
len = sizeof(struct ice_nvgre);
break;
+ case ICE_GRE:
+ len = sizeof(struct ice_gre);
+ break;
case ICE_VXLAN:
case ICE_GENEVE:
case ICE_VXLAN_GPE:
@@ -8881,6 +9154,20 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
if (len % ICE_BYTES_PER_WORD)
return ICE_ERR_CFG;
+ if (lkups[i].type == ICE_GRE) {
+ if (lkups[i].h_u.gre_hdr.flags == 0x20)
+ offset -= 4;
+
+ for (j = 1; j < len / sizeof(u16); j++)
+ if (((u16 *)&lkups[i].m_u)[j])
+ ((u16 *)(pkt + offset))[j] =
+ (((u16 *)(pkt + offset))[j] &
+ ~((u16 *)&lkups[i].m_u)[j]) |
+ (((u16 *)&lkups[i].h_u)[j] &
+ ((u16 *)&lkups[i].m_u)[j]);
+ continue;
+ }
+
/* We have the offset to the header start, the length, the
* caller's header values and mask. Use this information to
* copy the data into the dummy packet appropriately based on
@@ -9468,8 +9755,11 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
return ICE_ERR_CFG;
count = ice_fill_valid_words(&lkups[i], &lkup_exts);
- if (!count)
+ if (!count) {
+ if (lkups[i].type == ICE_GRE)
+ continue;
return ICE_ERR_CFG;
+ }
}
/* Create any special protocol/offset pairs, such as looking at tunnel
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 25/39] net/ice: support IPv4 GRE raw pattern type
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (23 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 24/39] net/ice/base: support IPv4 GRE tunnel Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 26/39] net/ice/base: support custom ddp package version Kevin Liu
` (14 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definitions, matching entries and parsers for the patterns below
(a usage sketch follows the list):
ETH/IPV4/GRE/RAW/IPV4
ETH/IPV4/GRE/RAW/IPV4/UDP
ETH/IPV4/GRE/RAW/IPV4/TCP
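For illustration, a hedged sketch of how an application might build the
GRE+RAW pattern through rte_flow; the key value, flag mask and field
choices are examples, not values mandated by the patch. The parser added
below expects the RAW pattern to carry the GRE key as a hex string:

    /* Sketch: match inner IPv4 over GRE with key 0x12345678 (K bit set). */
    struct rte_flow_item_gre gre_spec = {
            .c_rsvd0_ver = RTE_BE16(0x2000),        /* K bit */
    };
    struct rte_flow_item_gre gre_mask = {
            .c_rsvd0_ver = RTE_BE16(0x2000),
    };
    struct rte_flow_item_raw raw_spec = {
            .offset  = 0,           /* must be 4 when the C bit is set */
            .length  = 8,
            .pattern = (const uint8_t *)"12345678", /* key as hex text */
    };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_GRE,
              .spec = &gre_spec, .mask = &gre_mask },
            { .type = RTE_FLOW_ITEM_TYPE_RAW, .spec = &raw_spec },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    /* ...then validate/create as usual:
     * rte_flow_create(port_id, &attr, pattern, actions, &error);
     */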
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_generic_flow.c | 27 +++++++++
drivers/net/ice/ice_generic_flow.h | 9 +++
drivers/net/ice/ice_switch_filter.c | 90 +++++++++++++++++++++++++++++
3 files changed, 126 insertions(+)
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 2ebe9a1cce..2d7e4c19f8 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1085,6 +1085,33 @@ enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_icmp6[] = {
RTE_FLOW_ITEM_TYPE_ICMP6,
RTE_FLOW_ITEM_TYPE_END,
};
+/* IPv4 GRE RAW IPv4 */
+enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_GRE,
+ RTE_FLOW_ITEM_TYPE_RAW,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_udp[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_GRE,
+ RTE_FLOW_ITEM_TYPE_RAW,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_tcp[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_GRE,
+ RTE_FLOW_ITEM_TYPE_RAW,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_TCP,
+ RTE_FLOW_ITEM_TYPE_END,
+};
/*IPv4 GTPU (EH) */
enum rte_flow_item_type pattern_eth_ipv4_gtpu[] = {
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index def7e2d6d6..12193cbd9d 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -27,6 +27,7 @@
#define ICE_PROT_L2TPV3OIP BIT_ULL(16)
#define ICE_PROT_PFCP BIT_ULL(17)
#define ICE_PROT_NAT_T_ESP BIT_ULL(18)
+#define ICE_PROT_GRE BIT_ULL(19)
/* field */
@@ -54,6 +55,7 @@
#define ICE_PFCP_SEID BIT_ULL(42)
#define ICE_PFCP_S_FIELD BIT_ULL(41)
#define ICE_IP_PK_ID BIT_ULL(40)
+#define ICE_RAW_PATTERN BIT_ULL(39)
/* input set */
@@ -104,6 +106,8 @@
(ICE_PROT_GTPU | ICE_GTPU_TEID)
#define ICE_INSET_GTPU_QFI \
(ICE_PROT_GTPU | ICE_GTPU_QFI)
+#define ICE_INSET_RAW \
+ (ICE_PROT_GRE | ICE_RAW_PATTERN)
#define ICE_INSET_PPPOE_SESSION \
(ICE_PROT_PPPOE_S | ICE_PPPOE_SESSION)
#define ICE_INSET_PPPOE_PROTO \
@@ -291,6 +295,11 @@ extern enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_udp[];
extern enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_sctp[];
extern enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_icmp6[];
+/* IPv4 GRE RAW IPv4 */
+extern enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4[];
+extern enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_udp[];
+extern enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_tcp[];
+
/* IPv4 GTPU (EH) */
extern enum rte_flow_item_type pattern_eth_ipv4_gtpu[];
extern enum rte_flow_item_type pattern_eth_ipv4_gtpu_eh[];
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7c2038d089..a61d3d0aaa 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -196,6 +196,22 @@
#define ICE_SW_INSET_GTPU_IPV6_TCP ( \
ICE_SW_INSET_GTPU_IPV6 | ICE_INSET_TCP_SRC_PORT | \
ICE_INSET_TCP_DST_PORT)
+#define ICE_SW_INSET_DIST_GRE_RAW_IPV4 ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_RAW)
+#define ICE_SW_INSET_DIST_GRE_RAW_IPV4_TCP ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_TCP_SRC_PORT | ICE_INSET_TCP_DST_PORT | \
+ ICE_INSET_RAW)
+#define ICE_SW_INSET_DIST_GRE_RAW_IPV4_UDP ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_UDP_SRC_PORT | ICE_INSET_UDP_DST_PORT | \
+ ICE_INSET_RAW)
+
+#define CUSTOM_GRE_KEY_OFFSET 4
+#define GRE_CFLAG 0x80
+#define GRE_KFLAG 0x20
+#define GRE_SFLAG 0x10
struct sw_meta {
struct ice_adv_lkup_elem *list;
@@ -317,6 +333,9 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv6_gtpu_eh_ipv6_udp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_UDP, ICE_INSET_NONE},
{pattern_eth_ipv6_gtpu_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE},
{pattern_eth_ipv6_gtpu_eh_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE},
+ {pattern_eth_ipv4_gre_raw_ipv4, ICE_SW_INSET_DIST_GRE_RAW_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_gre_raw_ipv4_tcp, ICE_SW_INSET_DIST_GRE_RAW_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_gre_raw_ipv4_udp, ICE_SW_INSET_DIST_GRE_RAW_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
};
static struct
@@ -609,6 +628,11 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
bool ipv6_ipv6_valid = 0;
bool any_valid = 0;
uint16_t j, k, t = 0;
+ uint16_t c_rsvd0_ver = 0;
+ bool gre_valid = 0;
+
+#define set_cur_item_einval(msg) \
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, (msg))
if (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
*tun_type == ICE_NON_TUN_QINQ)
@@ -1101,6 +1125,70 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
}
break;
+ case RTE_FLOW_ITEM_TYPE_GRE: {
+ const struct rte_flow_item_gre *gre_spec = item->spec;
+ const struct rte_flow_item_gre *gre_mask = item->mask;
+
+ gre_valid = 1;
+ tunnel_valid = 1;
+ if (gre_spec && gre_mask) {
+ list[t].type = ICE_GRE;
+ if (gre_mask->c_rsvd0_ver) {
+ /* GRE RFC1701 */
+ list[t].h_u.gre_hdr.flags =
+ gre_spec->c_rsvd0_ver;
+ list[t].m_u.gre_hdr.flags =
+ gre_mask->c_rsvd0_ver;
+ c_rsvd0_ver = gre_spec->c_rsvd0_ver &
+ gre_mask->c_rsvd0_ver;
+ }
+ }
+ break;
+ }
+
+ case RTE_FLOW_ITEM_TYPE_RAW: {
+ const struct rte_flow_item_raw *raw_spec;
+ char *endp = NULL;
+ unsigned long key;
+ char s[sizeof("0x12345678")];
+
+ raw_spec = item->spec;
+
+ if (list[t].type != ICE_GRE)
+ return set_cur_item_einval("RAW must follow GRE.");
+
+ if (!(c_rsvd0_ver & GRE_KFLAG)) {
+ if (!raw_spec)
+ break;
+
+ return set_cur_item_einval("Invalid pattern! k_bit is 0 while raw pattern exists.");
+ }
+
+ if (!raw_spec)
+ return set_cur_item_einval("Invalid pattern! k_bit is 1 while raw pattern doesn't exist.");
+
+ if ((c_rsvd0_ver & GRE_CFLAG) == GRE_CFLAG &&
+ raw_spec->offset != CUSTOM_GRE_KEY_OFFSET)
+ return set_cur_item_einval("Invalid pattern! c_bit is 1 while offset is not 4.");
+
+ if (raw_spec->length >= sizeof(s))
+ return set_cur_item_einval("Invalid key");
+
+ memcpy(s, raw_spec->pattern, raw_spec->length);
+ s[raw_spec->length] = '\0';
+ key = strtol(s, &endp, 16);
+ if (*endp != '\0' || key > UINT32_MAX)
+ return set_cur_item_einval("Invalid key");
+
+ list[t].h_u.gre_hdr.key = (uint32_t)key;
+ list[t].m_u.gre_hdr.key = UINT32_MAX;
+ *input |= ICE_INSET_RAW;
+ input_set_byte += 2;
+ t++;
+
+ break;
+ }
+
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
vlan_mask = item->mask;
@@ -1634,6 +1722,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
if (*tun_type == ICE_NON_TUN) {
if (nvgre_valid)
*tun_type = ICE_SW_TUN_NVGRE;
+ else if (gre_valid)
+ *tun_type = ICE_SW_TUN_GRE;
else if (ipv4_valid && tcp_valid)
*tun_type = ICE_SW_IPV4_TCP;
else if (ipv4_valid && udp_valid)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 26/39] net/ice/base: support custom ddp package version
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (24 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 25/39] net/ice: support IPv4 GRE raw pattern type Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 27/39] net/ice: disable ACL function for MDCF instance Kevin Liu
` (13 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add a check for whether the current DDP package is a custom package.
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_flex_pipe.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 8672c41c69..1827993f44 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1467,6 +1467,10 @@ static void ice_init_pkg_regs(struct ice_hw *hw)
*/
static enum ice_status ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver)
{
+ /* 0xFF indicates a custom pkg */
+ if (pkg_ver->major == 0xFF)
+ return ICE_SUCCESS;
+
if (pkg_ver->major != ICE_PKG_SUPP_VER_MAJ ||
pkg_ver->minor != ICE_PKG_SUPP_VER_MNR)
return ICE_ERR_NOT_SUPPORTED;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 27/39] net/ice: disable ACL function for MDCF instance
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (25 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 26/39] net/ice/base: support custom ddp package version Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 28/39] net/ice: treat unknown package as OS default package Kevin Liu
` (12 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Steven Zou, Alvin Zhang
An MDCF instance does not support ACL, so disable it.
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_acl_filter.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0..61bb016395 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -25,6 +25,7 @@
#include "ice_ethdev.h"
#include "ice_generic_flow.h"
#include "base/ice_flow.h"
+#include "ice_dcf_ethdev.h"
#define MAX_ACL_SLOTS_ID 2048
@@ -994,8 +995,11 @@ ice_acl_init(struct ice_adapter *ad)
struct ice_pf *pf = &ad->pf;
struct ice_hw *hw = ICE_PF_TO_HW(pf);
struct ice_flow_parser *parser = &ice_acl_parser;
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[ad->pf.dev_data->port_id];
+ struct ice_dcf_adapter *dcf_adapter = eth_dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &dcf_adapter->real_hw;
- if (!ad->hw.dcf_enabled)
+ if (!ad->hw.dcf_enabled || dcf_hw->multi_inst)
return 0;
ret = ice_acl_prof_alloc(hw);
@@ -1041,8 +1045,11 @@ ice_acl_uninit(struct ice_adapter *ad)
struct ice_pf *pf = &ad->pf;
struct ice_hw *hw = ICE_PF_TO_HW(pf);
struct ice_flow_parser *parser = &ice_acl_parser;
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[ad->pf.dev_data->port_id];
+ struct ice_dcf_adapter *dcf_adapter = eth_dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &dcf_adapter->real_hw;
- if (ad->hw.dcf_enabled) {
+ if (ad->hw.dcf_enabled && !dcf_hw->multi_inst) {
ice_unregister_parser(parser, ad);
ice_deinit_acl(pf);
ice_acl_prof_free(hw);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 28/39] net/ice: treat unknown package as OS default package
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (26 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 27/39] net/ice: disable ACL function for MDCF instance Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 29/39] net/ice/base: update Profile ID table for VXLAN Kevin Liu
` (11 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
In order to use a custom package, an unknown package should be treated
as the OS default package.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 73e550f5fb..ad9b09d081 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1710,13 +1710,16 @@ ice_load_pkg_type(struct ice_hw *hw)
/* store the activated package type (OS default or Comms) */
if (!strncmp((char *)hw->active_pkg_name, ICE_OS_DEFAULT_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_OS_DEFAULT;
- else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ } else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_COMMS;
- else
- package_type = ICE_PKG_TYPE_UNKNOWN;
+ } else {
+ PMD_INIT_LOG(WARNING,
+ "The package type is not identified, treaded as OS default type");
+ package_type = ICE_PKG_TYPE_OS_DEFAULT;
+ }
PMD_INIT_LOG(NOTICE, "Active package is: %d.%d.%d.%d, %s (%s VLAN mode)",
hw->active_pkg_ver.major, hw->active_pkg_ver.minor,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 29/39] net/ice/base: update Profile ID table for VXLAN
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (27 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 28/39] net/ice: treat unknown package as OS default package Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 30/39] net/ice/base: update Protocol ID table to match DVM DDP Kevin Liu
` (10 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
Update the Profile ID table for VXLAN to align with the Tencent
customized DDP.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_switch.h | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index efb9399b77..c8071aa50d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -23,15 +23,15 @@
#define ICE_PROFID_IPV4_TUN_M_IPV4_TCP 10
#define ICE_PROFID_IPV4_TUN_M_IPV4_UDP 11
#define ICE_PROFID_IPV4_TUN_M_IPV4_OTHER 12
-#define ICE_PROFID_IPV6_TUN_M_IPV4_TCP 16
-#define ICE_PROFID_IPV6_TUN_M_IPV4_UDP 17
-#define ICE_PROFID_IPV6_TUN_M_IPV4_OTHER 18
-#define ICE_PROFID_IPV4_TUN_M_IPV6_TCP 22
-#define ICE_PROFID_IPV4_TUN_M_IPV6_UDP 23
-#define ICE_PROFID_IPV4_TUN_M_IPV6_OTHER 24
-#define ICE_PROFID_IPV6_TUN_M_IPV6_TCP 25
-#define ICE_PROFID_IPV6_TUN_M_IPV6_UDP 26
-#define ICE_PROFID_IPV6_TUN_M_IPV6_OTHER 27
+#define ICE_PROFID_IPV6_TUN_M_IPV4_TCP 34
+#define ICE_PROFID_IPV6_TUN_M_IPV4_UDP 35
+#define ICE_PROFID_IPV6_TUN_M_IPV4_OTHER 36
+#define ICE_PROFID_IPV4_TUN_M_IPV6_TCP 40
+#define ICE_PROFID_IPV4_TUN_M_IPV6_UDP 41
+#define ICE_PROFID_IPV4_TUN_M_IPV6_OTHER 42
+#define ICE_PROFID_IPV6_TUN_M_IPV6_TCP 43
+#define ICE_PROFID_IPV6_TUN_M_IPV6_UDP 44
+#define ICE_PROFID_IPV6_TUN_M_IPV6_OTHER 45
#define ICE_PROFID_PPPOE_PAY 34
#define ICE_PROFID_PPPOE_IPV4_TCP 35
#define ICE_PROFID_PPPOE_IPV4_UDP 36
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 30/39] net/ice/base: update Protocol ID table to match DVM DDP
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (28 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 29/39] net/ice/base: update Profile ID table for VXLAN Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 31/39] net/ice: handle virtchnl event message without interrupt Kevin Liu
` (9 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
The ice kernel driver and DDP work in Double VLAN Mode (DVM), but
DVM is not supported by this PMD. Thus, update the SW-to-HW Protocol
ID table for VLAN to support common switch filtering with a single
VLAN layer.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_switch.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index b367efaf02..3bb9e28898 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -7098,7 +7098,7 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
{ ICE_MAC_OFOS, ICE_MAC_OFOS_HW },
{ ICE_MAC_IL, ICE_MAC_IL_HW },
{ ICE_ETYPE_OL, ICE_ETYPE_OL_HW },
- { ICE_VLAN_OFOS, ICE_VLAN_OL_HW },
+ { ICE_VLAN_OFOS, ICE_VLAN_OF_HW },
{ ICE_IPV4_OFOS, ICE_IPV4_OFOS_HW },
{ ICE_IPV4_IL, ICE_IPV4_IL_HW },
{ ICE_IPV6_OFOS, ICE_IPV6_OFOS_HW },
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 31/39] net/ice: handle virtchnl event message without interrupt
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (29 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 30/39] net/ice/base: update Protocol ID table to match DVM DDP Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:56 ` [PATCH 32/39] net/ice: add DCF request queues function Kevin Liu
` (8 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Currently, the VF can only handle virtchnl event messages through the
interrupt path. That does not work in two cases:
1. If an event message arrives during VF initialization, before the
interrupt is enabled, the message will not be handled correctly.
2. Some virtchnl commands need to receive and handle the event message
while the interrupt is disabled.
To solve this, add virtchnl event message handling to the process of
reading virtchnl messages from the PF adminq.
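Schematically, the no-IRQ read path now looks like this (a simplified
sketch with hypothetical helper names, not the literal driver code):

    /* Sketch: fold event handling into the polled adminq read. */
    for (;;) {
            if (read_one_adminq_msg(&event) != 0)
                    continue;               /* retry until timeout */
            if (msg_opcode(&event) == VIRTCHNL_OP_EVENT) {
                    if (event_code(&event) == VIRTCHNL_EVENT_RESET_IMPENDING) {
                            hw->resetting = true;   /* remember the reset */
                            return IAVF_SUCCESS;    /* abort pending cmd */
                    }
                    continue;               /* skip other events */
            }
            if (msg_opcode(&event) == expected_op)
                    return msg_retval(&event);      /* the awaited reply */
            /* mismatched reply: warn and keep polling */
    }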
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7987b6261d..8c47f96341 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -63,11 +63,32 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
goto again;
v_op = rte_le_to_cpu_32(event.desc.cookie_high);
- if (v_op != op)
- goto again;
+
+ if (v_op == VIRTCHNL_OP_EVENT) {
+ struct virtchnl_pf_event *vpe =
+ (struct virtchnl_pf_event *)event.msg_buf;
+ switch (vpe->event) {
+ case VIRTCHNL_EVENT_RESET_IMPENDING:
+ hw->resetting = true;
+ if (rsp_msglen)
+ *rsp_msglen = 0;
+ return IAVF_SUCCESS;
+ default:
+ goto again;
+ }
+ } else {
+ /* async reply msg on command issued by vf previously */
+ if (v_op != op) {
+ PMD_DRV_LOG(WARNING,
+ "command mismatch, expect %u, get %u",
+ op, v_op);
+ goto again;
+ }
+ }
if (rsp_msglen != NULL)
*rsp_msglen = event.msg_len;
+
return rte_le_to_cpu_32(event.desc.cookie_low);
again:
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 32/39] net/ice: add DCF request queues function
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (30 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 31/39] net/ice: handle virtchnl event message without interrupt Kevin Liu
@ 2022-04-07 10:56 ` Kevin Liu
2022-04-07 10:57 ` [PATCH 33/39] net/ice: negotiate large VF and request more queues Kevin Liu
` (7 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:56 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Add a new virtchnl function to request additional queues from the PF.
The current default number of queue pairs is 16. In order to support a
DCF port with up to 256 queue pairs, enable this request-queues function.
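As a usage note (the device address and queue counts here are
assumptions for illustration, not part of the patch): once the PF grants
the extra queues, a DCF port can simply be started with more than the
default 16 queue pairs from the application side, e.g. with testpmd:

    dpdk-testpmd -a 18:01.0,cap=dcf -- -i --rxq=64 --txq=64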
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 98 +++++++++++++++++++++++++++++++++------
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 86 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8c47f96341..2e651adda7 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -468,18 +468,38 @@ ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
goto ret;
}
- do {
- if (!cmd->pending)
- break;
-
- rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
- } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
-
- if (cmd->v_ret != IAVF_SUCCESS) {
- err = -1;
- PMD_DRV_LOG(ERR,
- "No response (%d times) or return failure (%d) for cmd %d",
- i, cmd->v_ret, cmd->v_op);
+ switch (cmd->v_op) {
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ err = ice_dcf_recv_cmd_rsp_no_irq(hw,
+ VIRTCHNL_OP_REQUEST_QUEUES,
+ cmd->rsp_msgbuf,
+ cmd->rsp_buflen,
+ NULL);
+ if (err != IAVF_SUCCESS || !hw->resetting) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "Failed to get response of "
+ "VIRTCHNL_OP_REQUEST_QUEUES %d",
+ err);
+ }
+ break;
+ default:
+ /* For other virtchnl ops in running time,
+ * wait for the cmd done flag.
+ */
+ do {
+ if (!cmd->pending)
+ break;
+ rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+ } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
+
+ if (cmd->v_ret != IAVF_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "No response (%d times) or "
+ "return failure (%d) for cmd %d",
+ i, cmd->v_ret, cmd->v_op);
+ }
}
ret:
@@ -1012,6 +1032,58 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
+{
+ struct virtchnl_vf_res_request vfres;
+ struct dcf_virtchnl_cmd args;
+ uint16_t num_queue_pairs;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) {
+ PMD_DRV_LOG(ERR, "request queues not supported");
+ return -1;
+ }
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR, "queue number cannot be zero");
+ return -1;
+ }
+ vfres.num_queue_pairs = num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_REQUEST_QUEUES;
+
+ args.req_msg = (u8 *)&vfres;
+ args.req_msglen = sizeof(vfres);
+
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ /*
+ * disable interrupt to avoid the admin queue message to be read
+ * before iavf_read_msg_from_pf.
+ */
+ rte_intr_disable(hw->eth_dev->intr_handle);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ rte_intr_enable(hw->eth_dev->intr_handle);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
+ return err;
+ }
+
+ /* request queues succeeded, and the vf is resetting */
+ if (hw->resetting) {
+ PMD_DRV_LOG(INFO, "vf is resetting");
+ return 0;
+ }
+
+ /* request additional queues failed, return available number */
+ num_queue_pairs = ((struct virtchnl_vf_res_request *)
+ args.rsp_msgbuf)->num_queue_pairs;
+ PMD_DRV_LOG(ERR,
+ "request queues failed, only %u queues available",
+ num_queue_pairs);
+
+ return -1;
+}
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 42f4404a37..46e0010848 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -129,6 +129,7 @@ int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 33/39] net/ice: negotiate large VF and request more queues
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (31 preceding siblings ...)
2022-04-07 10:56 ` [PATCH 32/39] net/ice: add DCF request queues function Kevin Liu
@ 2022-04-07 10:57 ` Kevin Liu
2022-04-07 10:57 ` [PATCH 34/39] net/ice: enable multiple queues configurations for large VF Kevin Liu
` (6 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:57 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Negotiate the large VF capability with the PF during VF initialization.
If large VF is supported and more than 16 queues are required, the VF
requests additional queues from the PF and marks the state that large
VF is enabled.
If the number of allocated queues is larger than 16, the max RSS queue
region can no longer be 16. Add a function to query the max RSS queue
region from the PF, and use it in RSS initialization and future filter
configuration.
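Summarized, the sequence added in this patch is roughly the following
(an illustrative sketch of the dev_configure path; fail() is a
placeholder, not driver code):

    /* Sketch of the large-VF negotiation on dev_configure. */
    if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_DFLT) {
            if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_LARGE_NUM_QPAIRS))
                    return fail("large VF not supported");
            ice_dcf_request_queues(hw, num_queue_pairs); /* then VF reset */
            ice_dcf_get_max_rss_queue_region(hw);   /* 1 << qregion_width */
            hw->lv_enabled = true;
    } else {
            hw->max_rss_qregion = ICE_DCF_MAX_NUM_QUEUES_DFLT; /* 16 */
    }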
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 34 +++++++++++++++-
drivers/net/ice/ice_dcf.h | 4 ++
drivers/net/ice/ice_dcf_ethdev.c | 69 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 2 +
4 files changed, 106 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 2e651adda7..8807308bb2 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,8 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -1084,6 +1085,37 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return -1;
}
+int
+ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ uint16_t qregion_width;
+ int err;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_MAX_RSS_QREGION;
+ args.req_msg = NULL;
+ args.req_msglen = 0;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of "
+ "VIRTCHNL_OP_GET_MAX_RSS_QREGION");
+ return err;
+ }
+
+ qregion_width = ((struct virtchnl_max_rss_qregion *)
+ args.rsp_msgbuf)->qregion_width;
+ hw->max_rss_qregion = (uint16_t)(1 << qregion_width);
+
+ return 0;
+}
+
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 46e0010848..8efa3e5b23 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -107,6 +107,7 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
+ uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -116,6 +117,8 @@ struct ice_dcf_hw {
uint32_t link_speed;
bool resetting;
+ /* Indicate large VF support enabled or not */
+ bool lv_enabled;
};
int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -130,6 +133,7 @@ int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
+int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a165f74e26..4ffc10b0de 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -40,6 +40,8 @@ static int
ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num);
+
static int
ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
@@ -664,6 +666,11 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
{
struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
struct ice_adapter *ad = &dcf_ad->parent;
+ struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ int ret;
+
+ uint16_t num_queue_pairs =
+ RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
@@ -671,6 +678,47 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ /* Large VF setting */
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_DFLT) {
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS)) {
+ PMD_DRV_LOG(ERR, "large VF is not supported");
+ return -1;
+ }
+
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_LV) {
+ PMD_DRV_LOG(ERR,
+ "queue pairs number cannot be larger than %u",
+ ICE_DCF_MAX_NUM_QUEUES_LV);
+ return -1;
+ }
+
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ ret = ice_dcf_get_max_rss_queue_region(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "get max rss queue region failed");
+ return ret;
+ }
+
+ hw->lv_enabled = true;
+ } else {
+ /* Check if large VF is already enabled. If so, disable and
+ * release redundant queue resource.
+ */
+ if (hw->lv_enabled) {
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ hw->lv_enabled = false;
+ }
+ /* if large VF is not required, use default rss queue region */
+ hw->max_rss_qregion = ICE_DCF_MAX_NUM_QUEUES_DFLT;
+ }
+
return 0;
}
@@ -682,8 +730,8 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_hw *hw = &adapter->real_hw;
dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
- dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
- dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+ dev_info->max_rx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
+ dev_info->max_tx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
dev_info->hash_key_size = hw->vf_res->rss_key_size;
@@ -1908,6 +1956,23 @@ ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
return 0;
}
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int ret;
+
+ ret = ice_dcf_request_queues(hw, num);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "request queues from PF failed");
+ return ret;
+ }
+ PMD_DRV_LOG(INFO, "change queue pairs from %u to %u",
+ hw->vsi_res->num_queue_pairs, num);
+
+ return ice_dcf_dev_reset(dev);
+}
+
static int
eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
struct rte_pci_device *pci_dev)
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 27f6402786..4a08d32e0c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,6 +20,8 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 34/39] net/ice: enable multiple queues configurations for large VF
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (32 preceding siblings ...)
2022-04-07 10:57 ` [PATCH 33/39] net/ice: negotiate large VF and request more queues Kevin Liu
@ 2022-04-07 10:57 ` Kevin Liu
2022-04-07 10:57 ` [PATCH 35/39] net/ice: enable IRQ mapping configuration " Kevin Liu
` (5 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:57 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Since the adminq buffer has a 4K size limitation, the virtchnl command
VIRTCHNL_OP_CONFIG_VSI_QUEUES cannot configure up to 256 queues in a
single message. In this patch, the message is sent multiple times when
needed, keeping the buffer below 4K each time.
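The chunk size chosen below (ICE_DCF_CFG_Q_NUM_PER_BUF = 32) makes the
bound easy to check; the struct size here is approximate, as it depends
on the virtchnl ABI:

    256 queue pairs / 32 per message = 8 VIRTCHNL_OP_CONFIG_VSI_QUEUES msgs
    per-message payload ~ header + 32 * sizeof(struct virtchnl_queue_pair_info)
                        < the 4096-byte adminq buffer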
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 11 ++++++-----
drivers/net/ice/ice_dcf.h | 3 ++-
drivers/net/ice/ice_dcf_ethdev.c | 20 ++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8807308bb2..7a0a9a3534 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -950,7 +950,8 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
#define IAVF_RXDID_COMMS_OVS_1 22
int
-ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
@@ -963,16 +964,16 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
int err;
size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+ sizeof(vc_config->qpair[0]) * num_queue_pairs;
vc_config = rte_zmalloc("cfg_queue", size, 0);
if (!vc_config)
return -ENOMEM;
vc_config->vsi_id = hw->vsi_res->vsi_id;
- vc_config->num_queue_pairs = hw->num_queue_pairs;
+ vc_config->num_queue_pairs = num_queue_pairs;
- for (i = 0, vc_qp = vc_config->qpair;
- i < hw->num_queue_pairs;
+ for (i = index, vc_qp = vc_config->qpair;
+ i < index + num_queue_pairs;
i++, vc_qp++) {
vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
vc_qp->txq.queue_id = i;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 8efa3e5b23..1f45881315 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,8 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
-int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4ffc10b0de..211a2510fa 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -514,6 +514,8 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
struct ice_adapter *ad = &dcf_ad->parent;
struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ uint16_t num_queue_pairs;
+ uint16_t index = 0;
int ret;
if (hw->resetting) {
@@ -532,6 +534,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
+ num_queue_pairs = hw->num_queue_pairs;
ret = ice_dcf_init_rx_queues(dev);
if (ret) {
@@ -547,7 +550,20 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
}
}
- ret = ice_dcf_configure_queues(hw);
+ /* If needed, send configure queues msg multiple times to make the
+ * adminq buffer length smaller than the 4K limitation.
+ */
+ while (num_queue_pairs > ICE_DCF_CFG_Q_NUM_PER_BUF) {
+ if (ice_dcf_configure_queues(hw,
+ ICE_DCF_CFG_Q_NUM_PER_BUF, index) != 0) {
+ PMD_DRV_LOG(ERR, "configure queues failed");
+ goto err_queue;
+ }
+ num_queue_pairs -= ICE_DCF_CFG_Q_NUM_PER_BUF;
+ index += ICE_DCF_CFG_Q_NUM_PER_BUF;
+ }
+
+ ret = ice_dcf_configure_queues(hw, num_queue_pairs, index);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to config queues");
return ret;
@@ -587,7 +603,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
+err_queue:
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 4a08d32e0c..2fac1e5b21 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -22,6 +22,7 @@
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH 35/39] net/ice: enable IRQ mapping configuration for large VF
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (33 preceding siblings ...)
2022-04-07 10:57 ` [PATCH 34/39] net/ice: enable multiple queues configurations for large VF Kevin Liu
@ 2022-04-07 10:57 ` Kevin Liu
2022-04-07 10:57 ` [PATCH 36/39] net/ice: add enable/disable queues for DCF " Kevin Liu
` (4 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:57 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
The current IRQ mapping configuration only supports a maximum of 16
queues and 16 MSI-X vectors. Change the queue-vector mapping structure
to address up to 256 queues. A new opcode is used to handle the case
with a large number of queues. To stay within the adminq buffer size
limitation, the virtchnl message is sent multiple times if needed.
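A sketch of how the chunked mapping could be driven from the caller (the
chunk size and wrapper name here are illustrative; the patch's actual
per-buffer constant and caller logic live in ice_dcf_ethdev.c):

    /* Sketch: map nb_rx queue-vector pairs in adminq-sized chunks. */
    static int map_irqs_chunked(struct ice_dcf_hw *hw, uint16_t nb_rx)
    {
            const uint16_t chunk = 16;      /* illustrative chunk size */
            uint16_t index = 0;

            while (nb_rx > chunk) {
                    if (ice_dcf_config_irq_map_lv(hw, chunk, index))
                            return -1;
                    nb_rx -= chunk;
                    index += chunk;
            }
            return ice_dcf_config_irq_map_lv(hw, nb_rx, index);
    }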
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 50 +++++++++++++++++++++++++++----
drivers/net/ice/ice_dcf.h | 10 ++++++-
drivers/net/ice/ice_dcf_ethdev.c | 51 +++++++++++++++++++++++++++-----
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 99 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7a0a9a3534..90af99f8d0 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1116,7 +1116,6 @@ ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
return 0;
}
-
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
@@ -1133,13 +1132,14 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return -ENOMEM;
map_info->num_vectors = hw->nb_msix;
- for (i = 0; i < hw->nb_msix; i++) {
- vecmap = &map_info->vecmap[i];
+ for (i = 0; i < hw->eth_dev->data->nb_rx_queues; i++) {
+ vecmap =
+ &map_info->vecmap[hw->qv_map[i].vector_id - hw->msix_base];
vecmap->vsi_id = hw->vsi_res->vsi_id;
vecmap->rxitr_idx = 0;
- vecmap->vector_id = hw->msix_base + i;
+ vecmap->vector_id = hw->qv_map[i].vector_id;
vecmap->txq_map = 0;
- vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+ vecmap->rxq_map |= 1 << hw->qv_map[i].queue_id;
}
memset(&args, 0, sizeof(args));
@@ -1155,6 +1155,46 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index)
+{
+ struct virtchnl_queue_vector_maps *map_info;
+ struct virtchnl_queue_vector *qv_maps;
+ struct dcf_virtchnl_cmd args;
+ int len, i, err;
+ int count = 0;
+
+ len = sizeof(struct virtchnl_queue_vector_maps) +
+ sizeof(struct virtchnl_queue_vector) * (num - 1);
+
+ map_info = rte_zmalloc("map_info", len, 0);
+ if (!map_info)
+ return -ENOMEM;
+
+ map_info->vport_id = hw->vsi_res->vsi_id;
+ map_info->num_qv_maps = num;
+ for (i = index; i < index + map_info->num_qv_maps; i++) {
+ qv_maps = &map_info->qv_maps[count++];
+ qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
+ qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
+ qv_maps->queue_id = hw->qv_map[i].queue_id;
+ qv_maps->vector_id = hw->qv_map[i].vector_id;
+ }
+
+ args.v_op = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
+ args.req_msg = (u8 *)map_info;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
+
+ rte_free(map_info);
+ return err;
+}
+
int
ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 1f45881315..bd88424034 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -74,6 +74,11 @@ struct ice_dcf_tm_conf {
bool committed;
};
+struct ice_dcf_qv_map {
+ uint16_t queue_id;
+ uint16_t vector_id;
+};
+
struct ice_dcf_hw {
struct iavf_hw avf;
@@ -108,7 +113,8 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
- uint16_t rxq_map[16];
+
+ struct ice_dcf_qv_map *qv_map; /* queue vector mapping */
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -136,6 +142,8 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 211a2510fa..82d97fd049 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -144,6 +144,7 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
{
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct ice_dcf_qv_map *qv_map;
uint16_t interval, i;
int vec;
@@ -162,6 +163,14 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
}
+ qv_map = rte_zmalloc("qv_map",
+ dev->data->nb_rx_queues * sizeof(struct ice_dcf_qv_map), 0);
+ if (!qv_map) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+ dev->data->nb_rx_queues);
+ return -1;
+ }
+
if (!dev->data->dev_conf.intr_conf.rxq ||
!rte_intr_dp_is_en(intr_handle)) {
/* Rx interrupt disabled, Map interrupt only for writeback */
@@ -197,17 +206,22 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
IAVF_WRITE_FLUSH(&hw->avf);
/* map all queues to the same interrupt */
- for (i = 0; i < dev->data->nb_rx_queues; i++)
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
+ }
+ hw->qv_map = qv_map;
} else {
if (!rte_intr_allow_others(intr_handle)) {
hw->nb_msix = 1;
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
rte_intr_vec_list_index_set(intr_handle,
i, IAVF_MISC_VEC_ID);
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
hw->msix_base);
@@ -220,21 +234,44 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[vec] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = vec;
rte_intr_vec_list_index_set(intr_handle,
i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"%u vectors are mapping to %u Rx queues",
hw->nb_msix, dev->data->nb_rx_queues);
}
}
- if (ice_dcf_config_irq_map(hw)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping failed");
- return -1;
+ if (!hw->lv_enabled) {
+ if (ice_dcf_config_irq_map(hw)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+ return -1;
+ }
+ } else {
+ uint16_t num_qv_maps = dev->data->nb_rx_queues;
+ uint16_t index = 0;
+
+ while (num_qv_maps > ICE_DCF_IRQ_MAP_NUM_PER_BUF) {
+ if (ice_dcf_config_irq_map_lv(hw,
+ ICE_DCF_IRQ_MAP_NUM_PER_BUF, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+ num_qv_maps -= ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ index += ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ }
+
+ if (ice_dcf_config_irq_map_lv(hw, num_qv_maps, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+
}
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 2fac1e5b21..9ef524c97c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -23,6 +23,7 @@
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
* [PATCH 36/39] net/ice: add enable/disable queues for DCF large VF
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (34 preceding siblings ...)
2022-04-07 10:57 ` [PATCH 35/39] net/ice: enable IRQ mapping configuration " Kevin Liu
@ 2022-04-07 10:57 ` Kevin Liu
2022-04-07 10:57 ` [PATCH 37/39] net/ice: fix DCF ACL flow engine Kevin Liu
` (3 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:57 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The current virtchnl structure for enabling/disabling queues only
supports up to 32 queue pairs. Use a new opcode and structure that can
address up to 256 queue pairs, in order to enable/disable queues in the
large VF case.
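A minimal sketch of the resulting dispatch, assuming stub helpers in
place of ice_dcf_switch_queue() and ice_dcf_switch_queue_lv() from the
diff below: the legacy opcode is kept for small VFs, and the V2 opcode
is used once large-VF mode has been negotiated.

#include <stdbool.h>
#include <stdint.h>

/* Stubs standing in for the two driver helpers below. */
static int
switch_queue(uint16_t qid, bool rx, bool on)
{
	(void)qid; (void)rx; (void)on;
	return 0;	/* VIRTCHNL_OP_ENABLE/DISABLE_QUEUES */
}

static int
switch_queue_lv(uint16_t qid, bool rx, bool on)
{
	(void)qid; (void)rx; (void)on;
	return 0;	/* VIRTCHNL_OP_ENABLE/DISABLE_QUEUES_V2 */
}

static int
start_rx_queue(bool lv_enabled, uint16_t qid)
{
	return lv_enabled ? switch_queue_lv(qid, true, true) :
			    switch_queue(qid, true, true);
}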
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 99 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf.h | 5 ++
drivers/net/ice/ice_dcf_ethdev.c | 26 +++++++--
drivers/net/ice/ice_dcf_ethdev.h | 8 +--
4 files changed, 125 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 90af99f8d0..6b210176a0 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -90,7 +90,6 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
*rsp_msglen = event.msg_len;
return rte_le_to_cpu_32(event.desc.cookie_low);
-
again:
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
@@ -897,7 +896,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
{
struct rte_eth_dev *dev = hw->eth_dev;
struct rte_eth_rss_conf *rss_conf;
- uint8_t i, j, nb_q;
+ uint16_t i, j, nb_q;
int ret;
rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
@@ -1076,6 +1075,12 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return err;
}
+ /* request queues succeeded, vf is resetting */
+ if (hw->resetting) {
+ PMD_DRV_LOG(INFO, "vf is resetting");
+ return 0;
+ }
+
/* request additional queues failed, return available number */
num_queue_pairs = ((struct virtchnl_vf_res_request *)
args.rsp_msgbuf)->num_queue_pairs;
@@ -1186,7 +1191,8 @@ ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
args.req_msg = (u8 *)map_info;
args.req_msglen = len;
args.rsp_msgbuf = hw->arq_buf;
- args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
@@ -1226,6 +1232,50 @@ ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
return err;
}
+int
+ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ if (rx) {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ } else {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ }
+
+ if (on)
+ args.v_op = VIRTCHNL_OP_ENABLE_QUEUES_V2;
+ else
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+ on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_disable_queues(struct ice_dcf_hw *hw)
{
@@ -1255,6 +1305,49 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues) +
+ sizeof(struct virtchnl_queue_chunk) *
+ (ICE_DCF_RXTX_QUEUE_CHUNKS_NUM - 1);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = ICE_DCF_RXTX_QUEUE_CHUNKS_NUM;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].num_queues =
+ hw->eth_dev->data->nb_tx_queues;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].num_queues =
+ hw->eth_dev->data->nb_rx_queues;
+
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats)
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index bd88424034..a6dec86b9b 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,8 @@
#include "base/ice_type.h"
#include "ice_logs.h"
+#define ICE_DCF_RXTX_QUEUE_CHUNKS_NUM 2
+
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -145,7 +147,10 @@ int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw,
+ uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 82d97fd049..b5381cdfc4 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -318,6 +318,7 @@ static int
ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_rx_queue *rxq;
int err = 0;
@@ -340,7 +341,11 @@ ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, rx_queue_id, true, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, rx_queue_id, true, true);
+
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
rx_queue_id);
@@ -449,6 +454,7 @@ static int
ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_tx_queue *txq;
int err = 0;
@@ -464,7 +470,10 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, tx_queue_id, false, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, tx_queue_id, false, true);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
@@ -651,12 +660,17 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
struct ice_tx_queue *txq;
- int ret, i;
+ int i;
/* Stop All queues */
- ret = ice_dcf_disable_queues(hw);
- if (ret)
- PMD_DRV_LOG(WARNING, "Fail to stop queues");
+ if (!hw->lv_enabled) {
+ if (ice_dcf_disable_queues(hw))
+ PMD_DRV_LOG(WARNING, "Fail to stop queues");
+ } else {
+ if (ice_dcf_disable_queues_lv(hw))
+ PMD_DRV_LOG(WARNING,
+ "Fail to stop queues for large VF");
+ }
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 9ef524c97c..3f740e2c7b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,10 +20,10 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
-#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
-#define ICE_DCF_MAX_NUM_QUEUES_LV 256
-#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
-#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
* [PATCH 37/39] net/ice: fix DCF ACL flow engine
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (35 preceding siblings ...)
2022-04-07 10:57 ` [PATCH 36/39] net/ice: add enable/disable queues for DCF " Kevin Liu
@ 2022-04-07 10:57 ` Kevin Liu
2022-04-07 10:57 ` [PATCH 38/39] testpmd: force flow flush Kevin Liu
` (2 subsequent siblings)
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:57 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
ACL is not a necessary feature for DCF and may not be supported by
the ice kernel driver. With this patch, an ACL initialization failure
is no longer returned to the higher-level functions; instead, the
driver prints error logs, cleans up the related resources and
unregisters the ACL engine.
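The error path follows a common init-with-rollback shape. Here is a
minimal standalone sketch of that pattern, with hypothetical
step_a()/step_b()/undo_all() stubs standing in for the real setup and
teardown helpers in the diff below.

#include <stdio.h>

/* Hypothetical stubs for the real setup/teardown helpers. */
static int step_a(void) { return 0; }
static int step_b(void) { return 0; }
static void undo_all(void) { }

/* On any failure, roll back and hand the error to the caller, which
 * may then unregister the engine instead of failing device init. */
static int
acl_engine_init(void)
{
	int ret;

	ret = step_a();
	if (ret)
		goto deinit;
	ret = step_b();
	if (ret)
		goto deinit;
	return 0;

deinit:
	undo_all();
	fprintf(stderr, "ACL init failed, may not be supported\n");
	return ret;
}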
Fixes: 40d466fa9f76 ("net/ice: support ACL filter in DCF")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_acl_filter.c | 20 ++++++++++++++----
drivers/net/ice/ice_generic_flow.c | 34 +++++++++++++++++++++++-------
2 files changed, 42 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 61bb016395..58ccdb53d7 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -57,6 +57,8 @@ ice_pattern_match_item ice_acl_pattern[] = {
{pattern_eth_ipv4_sctp, ICE_ACL_INSET_ETH_IPV4_SCTP, ICE_INSET_NONE, ICE_INSET_NONE},
};
+static void ice_acl_prof_free(struct ice_hw *hw);
+
static int
ice_acl_prof_alloc(struct ice_hw *hw)
{
@@ -1011,17 +1013,27 @@ ice_acl_init(struct ice_adapter *ad)
ret = ice_acl_setup(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_bitmap_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_prof_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
- return ice_register_parser(parser, ad);
+ ret = ice_register_parser(parser, ad);
+ if (ret)
+ goto deinit_acl;
+
+ return 0;
+
+deinit_acl:
+ ice_deinit_acl(pf);
+ ice_acl_prof_free(hw);
+ PMD_DRV_LOG(ERR, "ACL init failed, may not supported!");
+ return ret;
}
static void
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 2d7e4c19f8..18183bb5e6 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1865,6 +1865,12 @@ ice_register_flow_engine(struct ice_flow_engine *engine)
TAILQ_INSERT_TAIL(&engine_list, engine, node);
}
+static void
+ice_unregister_flow_engine(struct ice_flow_engine *engine)
+{
+ TAILQ_REMOVE(&engine_list, engine, node);
+}
+
int
ice_flow_init(struct ice_adapter *ad)
{
@@ -1888,9 +1894,18 @@ ice_flow_init(struct ice_adapter *ad)
ret = engine->init(ad);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to initialize engine %d",
- engine->type);
- return ret;
+ /**
+ * ACL may not supported in kernel driver,
+ * so just unregister the engine.
+ */
+ if (engine->type == ICE_FLOW_ENGINE_ACL) {
+ ice_unregister_flow_engine(engine);
+ } else {
+ PMD_INIT_LOG(ERR,
+ "Failed to initialize engine %d",
+ engine->type);
+ return ret;
+ }
}
}
return 0;
@@ -1977,7 +1992,7 @@ ice_register_parser(struct ice_flow_parser *parser,
list = ice_get_parser_list(parser, ad);
if (list == NULL)
- return -EINVAL;
+ goto err;
if (ad->devargs.pipe_mode_support) {
TAILQ_INSERT_TAIL(list, parser_node, node);
@@ -1989,7 +2004,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -2000,7 +2015,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_SWITCH) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -2009,11 +2024,14 @@ ice_register_parser(struct ice_flow_parser *parser,
} else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_HEAD(list, parser_node, node);
} else {
- return -EINVAL;
+ goto err;
}
}
-DONE:
return 0;
+err:
+ rte_free(parser_node);
+ PMD_DRV_LOG(ERR, "%s failed.", __func__);
+ return -EINVAL;
}
void
--
2.33.1
* [PATCH 38/39] testpmd: force flow flush
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (36 preceding siblings ...)
2022-04-07 10:57 ` [PATCH 37/39] net/ice: fix DCF ACL flow engine Kevin Liu
@ 2022-04-07 10:57 ` Kevin Liu
2022-04-07 10:57 ` [PATCH 39/39] net/ice: fix DCF reset Kevin Liu
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:57 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Qi Zhang <qi.z.zhang@intel.com>
For MDCF, rte_flow_flush still needs to be invoked even if there are
no flows created in the current instance.
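In application terms, the change means the hardware flush always runs
before the local bookkeeping check. A minimal usage sketch (assumed
application code):

#include <string.h>
#include <rte_flow.h>

/* Flush hardware rules unconditionally; an MDCF instance may need
 * the flush even when its own flow list is empty. */
static int
flush_port_flows(uint16_t port_id)
{
	struct rte_flow_error error;

	memset(&error, 0, sizeof(error));
	return rte_flow_flush(port_id, &error);
}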
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
app/test-pmd/config.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cc8e7aa138..3d40e3e43d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2923,15 +2923,15 @@ port_flow_flush(portid_t port_id)
port = &ports[port_id];
- if (port->flow_list == NULL)
- return ret;
-
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x44, sizeof(error));
if (rte_flow_flush(port_id, &error)) {
port_flow_complain(&error);
}
+ if (port->flow_list == NULL)
+ return ret;
+
while (port->flow_list) {
struct port_flow *pf = port->flow_list->next;
--
2.33.1
* [PATCH 39/39] net/ice: fix DCF reset
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (37 preceding siblings ...)
2022-04-07 10:57 ` [PATCH 38/39] testpmd: force flow flush Kevin Liu
@ 2022-04-07 10:57 ` Kevin Liu
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
39 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-07 10:57 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
After the PF triggers a VF reset, the VF PMD must reinitialize all
resources before it can perform any operations on the hardware.
This patch adds a flag to indicate whether the VF has been reset by
the PF, and updates the DCF resetting operations according to this flag.
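Since the diff below now raises RTE_ETH_EVENT_INTR_RESET on
VIRTCHNL_EVENT_RESET_IMPENDING, an application can recover the port
from its event callback. A minimal sketch (assumed application code;
real applications usually defer the reset out of the interrupt thread):

#include <rte_ethdev.h>

static int
reset_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	       void *cb_arg, void *ret_param)
{
	(void)cb_arg;
	(void)ret_param;

	if (type == RTE_ETH_EVENT_INTR_RESET)
		return rte_eth_dev_reset(port_id); /* reinit all resources */
	return 0;
}

/* Registration, typically done once after probe:
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
 *				 reset_event_cb, NULL);
 */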
Fixes: 1a86f4dbdf42 ("net/ice: support DCF device reset")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_common.c | 4 +++-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 17 ++++++++++++++++-
drivers/net/ice/ice_dcf_parent.c | 3 +++
4 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 5d5ce894ff..530e766abf 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -779,6 +779,7 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
status = ice_init_def_sw_recp(hw, &hw->switch_info->recp_list);
if (status) {
ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
return status;
}
return ICE_SUCCESS;
@@ -848,7 +849,6 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
ice_rm_sw_replay_rule_info(hw, sw);
ice_free(hw, sw->buildin_recipes);
ice_free(hw, sw->recp_list);
- ice_free(hw, sw);
}
/**
@@ -858,6 +858,8 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
{
ice_cleanup_fltr_mgmt_single(hw, hw->switch_info);
+ ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
}
/**
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 6b210176a0..dfd6d5ff64 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1430,7 +1430,7 @@ ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
int ret;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
ice_dcf_disable_irq0(hw);
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b5381cdfc4..e09570cd40 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1005,6 +1005,15 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
uint32_t i;
int len, err = 0;
+ if (hw->resetting) {
+ if (!add)
+ return 0;
+
+ PMD_DRV_LOG(ERR,
+ "fail to add multicast MACs for VF resetting");
+ return -EIO;
+ }
+
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
@@ -1643,7 +1652,13 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
- (void)ice_dcf_dev_stop(dev);
+ if (adapter->parent.pf.adapter_stopped)
+ (void)ice_dcf_dev_stop(dev);
+
+ if (adapter->real_hw.resetting) {
+ ice_dcf_uninit_hw(dev, &adapter->real_hw);
+ ice_dcf_init_hw(dev, &adapter->real_hw);
+ }
ice_free_queues(dev);
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 2aa69c7368..2a936bd2c1 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -243,6 +243,9 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
case VIRTCHNL_EVENT_RESET_IMPENDING:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
dcf_hw->resetting = true;
+ rte_eth_dev_callback_process(dcf_hw->eth_dev,
+ RTE_ETH_EVENT_INTR_RESET,
+ NULL);
break;
case VIRTCHNL_EVENT_LINK_CHANGE:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
--
2.33.1
* [PATCH v2 00/33] support full function of DCF
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
` (38 preceding siblings ...)
2022-04-07 10:57 ` [PATCH 39/39] net/ice: fix DCF reset Kevin Liu
@ 2022-04-13 16:08 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 01/33] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
` (33 more replies)
39 siblings, 34 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:08 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
These functions have been customized and implemented on DPDK 20.11;
now it is time to migrate them to DPDK 22.07.
v2:
* remove patches:
1.net/iavf: support checking if device is an MDCF instance
2.net/ice: support MDCF(multi-DCF) instance
3.net/ice/base: support custom DDP buildin recipe
4.net/ice: support buildin recipe configuration
5.net/ice/base: support custom ddp package version
6.net/ice: disable ACL function for MDCF instance
Alvin Zhang (14):
net/ice: support dcf promisc configuration
net/ice: support dcf VLAN filter and offload configuration
net/ice: support DCF new VLAN capabilities
common/iavf: support flushing rules and reporting DCF id
net/ice/base: fix ethertype filter input set
net/ice/base: support IPv6 GRE UDP pattern
net/ice: support IPv6 NVGRE tunnel
net/ice: support new pattern of IPv4
net/ice/base: support new patterns of TCP and UDP
net/ice: support new patterns of TCP and UDP
net/ice/base: support IPv4 GRE tunnel
net/ice: support IPv4 GRE raw pattern type
net/ice: treat unknown package as OS default package
net/ice: fix DCF ACL flow engine
Dapeng Yu (1):
net/ice: enable CVL DCF device reset API
Jie Wang (2):
net/ice: add ops MTU-SET to dcf
net/ice: add ops dev-supported-ptypes-get to dcf
Junfeng Guo (4):
net/ice/base: add VXLAN support for switch filter
net/ice: add VXLAN support for switch filter
net/ice/base: update Profile ID table for VXLAN
net/ice/base: update Protocol ID table to match DVM DDP
Kevin Liu (3):
net/ice: support dcf MAC configuration
net/ice: add enable/disable queues for DCF large VF
net/ice: fix DCF reset
Qi Zhang (1):
testpmd: force flow flush
Robin Zhang (1):
net/ice: cleanup Tx buffers
Steve Yang (7):
net/ice: enable RSS RETA ops for DCF hardware
net/ice: enable RSS HASH ops for DCF hardware
net/ice: handle virtchnl event message without interrupt
net/ice: add DCF request queues function
net/ice: negotiate large VF and request more queues
net/ice: enable multiple queues configurations for large VF
net/ice: enable IRQ mapping configuration for large VF
app/test-pmd/config.c | 6 +-
drivers/common/iavf/virtchnl.h | 13 +
drivers/net/ice/base/ice_common.c | 4 +-
drivers/net/ice/base/ice_fdir.c | 3 +
drivers/net/ice/base/ice_flex_pipe.c | 37 +-
drivers/net/ice/base/ice_flex_pipe.h | 3 +-
drivers/net/ice/base/ice_protocol_type.h | 22 +
drivers/net/ice/base/ice_switch.c | 574 +++++++++++++-
drivers/net/ice/base/ice_switch.h | 12 +
drivers/net/ice/ice_acl_filter.c | 20 +-
drivers/net/ice/ice_dcf.c | 375 ++++++++-
drivers/net/ice/ice_dcf.h | 31 +-
drivers/net/ice/ice_dcf_ethdev.c | 925 +++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 14 +
drivers/net/ice/ice_dcf_parent.c | 3 +
drivers/net/ice/ice_ethdev.c | 13 +-
drivers/net/ice/ice_generic_flow.c | 81 +-
drivers/net/ice/ice_generic_flow.h | 13 +
drivers/net/ice/ice_switch_filter.c | 163 +++-
19 files changed, 2174 insertions(+), 138 deletions(-)
--
2.33.1
* [PATCH v2 01/33] net/ice: enable RSS RETA ops for DCF hardware
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 02/33] net/ice: enable RSS HASH " Kevin Liu
` (32 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS RETA should be updated and queried by the application.
Add the related ops ('.reta_update', '.reta_query') for DCF.
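A usage sketch from the application side (assumed code; the queue-0
target and the four-group buffer are illustrative):

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

/* Rewrite the whole redirection table so all traffic hits queue 0. */
static int
reta_all_to_queue0(uint16_t port_id)
{
	struct rte_eth_rss_reta_entry64 reta_conf[4]; /* up to 256 entries */
	struct rte_eth_dev_info info;
	uint16_t i;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (info.reta_size > RTE_DIM(reta_conf) * RTE_ETH_RETA_GROUP_SIZE)
		return -EINVAL;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < info.reta_size; i++) {
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta
			[i % RTE_ETH_RETA_GROUP_SIZE] = 0;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
					   info.reta_size);
}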
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++++
3 files changed, 79 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f0c074b01..070d1b71ac 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
return err;
}
-static int
+int
ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_lut *rss_lut;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 6ec766ebda..b2c6aa2684 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59610e058f..1ac66ed990 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint8_t *lut;
+ uint16_t i, idx, shift;
+ int ret;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ /* store the old lut table temporarily */
+ rte_memcpy(lut, hw->rss_lut, reta_size);
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ /* send virtchnnl ops to configure rss*/
+ ret = ice_dcf_configure_rss_lut(hw);
+ if (ret) /* revert back */
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint16_t i, idx, shift;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = hw->rss_lut[i];
+ }
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
.tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
};
static int
--
2.33.1
* [PATCH v2 02/33] net/ice: enable RSS HASH ops for DCF hardware
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
2022-04-13 16:09 ` [PATCH v2 01/33] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 03/33] net/ice: cleanup Tx buffers Kevin Liu
` (31 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS HASH should be updated and queried by the application.
Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF.
Because DCF doesn't support configuring the RSS hash functions, only
the hash key can be updated within the '.rss_hash_update' ops.
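A usage sketch from the application side (assumed code; a real
application should take the key length from rte_eth_dev_info_get(),
field hash_key_size):

#include <string.h>
#include <rte_ethdev.h>

/* Update only the RSS key; per the note above, DCF will not act on
 * rss_hf changes here. */
static int
update_rss_key(uint16_t port_id, uint8_t *key, uint8_t key_len)
{
	struct rte_eth_rss_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rss_key = key;
	conf.rss_key_len = key_len;
	return rte_eth_dev_rss_hash_update(port_id, &conf);
}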
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 070d1b71ac..89c0203ba3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
hw->ets_config = NULL;
}
-static int
+int
ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_key *rss_key;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index b2c6aa2684..f0b45af5ae 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ac66ed990..ccad7fc304 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* HENA setting, it is enabled by default, no change */
+ if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) {
+ PMD_DRV_LOG(ERR, "The size of hash key configured "
+ "(%d) doesn't match the size of hardware can "
+ "support (%d)", rss_conf->rss_key_len,
+ hw->vf_res->rss_key_size);
+ return -EINVAL;
+ }
+
+ rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ return ice_dcf_configure_rss_key(hw);
+}
+
+static int
+ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* Just set it to default value now. */
+ rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL;
+
+ if (!rss_conf->rss_key)
+ return 0;
+
+ rss_conf->rss_key_len = hw->vf_res->rss_key_size;
+ rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len);
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tm_ops_get = ice_dcf_tm_ops_get,
.reta_update = ice_dcf_dev_rss_reta_update,
.reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
};
static int
--
2.33.1
* [PATCH v2 03/33] net/ice: cleanup Tx buffers
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
2022-04-13 16:09 ` [PATCH v2 01/33] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-13 16:09 ` [PATCH v2 02/33] net/ice: enable RSS HASH " Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 04/33] net/ice: add ops MTU-SET to dcf Kevin Liu
` (30 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Robin Zhang, Kevin Liu
From: Robin Zhang <robinx.zhang@intel.com>
Add support for the ops rte_eth_tx_done_cleanup in DCF.
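A one-call usage sketch (assumed application code):

#include <rte_ethdev.h>

/* Ask the PMD to free up to 256 already-transmitted mbufs from Tx
 * queue 0; a free_cnt of 0 means "free as many as possible". */
static int
reclaim_tx_mbufs(uint16_t port_id)
{
	return rte_eth_tx_done_cleanup(port_id, 0, 256);
}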
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ccad7fc304..d8b5961514 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.reta_query = ice_dcf_dev_rss_reta_query,
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
};
static int
--
2.33.1
* [PATCH v2 04/33] net/ice: add ops MTU-SET to dcf
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (2 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 03/33] net/ice: cleanup Tx buffers Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 05/33] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
` (29 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "mtu_set" to dcf, and it can configure the port mtu through
cmdline.
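Note that the handler below rejects the call with -EBUSY while the
port is started, so an application has to stop the port first. A
minimal sketch (assumed application code):

#include <rte_ethdev.h>

static int
set_port_mtu(uint16_t port_id, uint16_t mtu)
{
	int ret;

	ret = rte_eth_dev_stop(port_id); /* DCF requires a stopped port */
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_set_mtu(port_id, mtu);
	if (ret != 0)
		return ret;
	return rte_eth_dev_start(port_id);
}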
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++
2 files changed, 20 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d8b5961514..06d752fd61 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &new_link);
}
+static int
+ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* mtu setting is forbidden if port is start */
+ if (dev->data->dev_started != 0) {
+ PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
+ dev->data->port_id);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
bool
ice_dcf_adminq_need_retry(struct ice_adapter *ad)
{
@@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
.tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 11a1305038..f2faf26f58 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -15,6 +15,12 @@
#define ICE_DCF_MAX_RINGS 1
+#define ICE_DCF_FRAME_SIZE_MAX 9728
+#define ICE_DCF_VLAN_TAG_SIZE 4
+#define ICE_DCF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
+#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+
struct ice_dcf_queue {
uint64_t dummy;
};
--
2.33.1
* [PATCH v2 05/33] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (3 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 04/33] net/ice: add ops MTU-SET to dcf Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 06/33] net/ice: support dcf promisc configuration Kevin Liu
` (28 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "dev_supported_ptypes_get" to dcf, that dcf pmd can get
ptypes through the new API.
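A usage sketch from the application side (assumed code):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

/* Print the L4 packet types the port reports. */
static void
print_l4_ptypes(uint16_t port_id)
{
	uint32_t ptypes[16];
	int i, num;

	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
						ptypes, RTE_DIM(ptypes));
	for (i = 0; i < num && i < (int)RTE_DIM(ptypes); i++)
		printf("ptype: %s\n", rte_get_ptype_l4_name(ptypes[i]));
}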
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 80 +++++++++++++++++++-------------
1 file changed, 49 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 06d752fd61..6a577a6582 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+static const uint32_t *
+ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+ return ptypes;
+}
+
static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
- .dev_start = ice_dcf_dev_start,
- .dev_stop = ice_dcf_dev_stop,
- .dev_close = ice_dcf_dev_close,
- .dev_reset = ice_dcf_dev_reset,
- .dev_configure = ice_dcf_dev_configure,
- .dev_infos_get = ice_dcf_dev_info_get,
- .rx_queue_setup = ice_rx_queue_setup,
- .tx_queue_setup = ice_tx_queue_setup,
- .rx_queue_release = ice_dev_rx_queue_release,
- .tx_queue_release = ice_dev_tx_queue_release,
- .rx_queue_start = ice_dcf_rx_queue_start,
- .tx_queue_start = ice_dcf_tx_queue_start,
- .rx_queue_stop = ice_dcf_rx_queue_stop,
- .tx_queue_stop = ice_dcf_tx_queue_stop,
- .link_update = ice_dcf_link_update,
- .stats_get = ice_dcf_stats_get,
- .stats_reset = ice_dcf_stats_reset,
- .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
- .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
- .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
- .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
- .flow_ops_get = ice_dcf_dev_flow_ops_get,
- .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
- .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
- .tm_ops_get = ice_dcf_tm_ops_get,
- .reta_update = ice_dcf_dev_rss_reta_update,
- .reta_query = ice_dcf_dev_rss_reta_query,
- .rss_hash_update = ice_dcf_dev_rss_hash_update,
- .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
- .tx_done_cleanup = ice_tx_done_cleanup,
- .mtu_set = ice_dcf_dev_mtu_set,
+ .dev_start = ice_dcf_dev_start,
+ .dev_stop = ice_dcf_dev_stop,
+ .dev_close = ice_dcf_dev_close,
+ .dev_reset = ice_dcf_dev_reset,
+ .dev_configure = ice_dcf_dev_configure,
+ .dev_infos_get = ice_dcf_dev_info_get,
+ .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .rx_queue_release = ice_dev_rx_queue_release,
+ .tx_queue_release = ice_dev_tx_queue_release,
+ .rx_queue_start = ice_dcf_rx_queue_start,
+ .tx_queue_start = ice_dcf_tx_queue_start,
+ .rx_queue_stop = ice_dcf_rx_queue_stop,
+ .tx_queue_stop = ice_dcf_tx_queue_stop,
+ .link_update = ice_dcf_link_update,
+ .stats_get = ice_dcf_stats_get,
+ .stats_reset = ice_dcf_stats_reset,
+ .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
+ .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
+ .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
+ .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .flow_ops_get = ice_dcf_dev_flow_ops_get,
+ .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
+ .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+ .tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
--
2.33.1
* [PATCH v2 06/33] net/ice: support dcf promisc configuration
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (4 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 05/33] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 07/33] net/ice: support dcf MAC configuration Kevin Liu
` (27 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support configuration of unicast and multicast promiscuous mode on DCF.
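With this in place, the standard ethdev calls reach the
VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE handler added below. A minimal
usage sketch (assumed application code):

#include <rte_ethdev.h>

static int
enable_promisc(uint16_t port_id)
{
	int ret;

	ret = rte_eth_promiscuous_enable(port_id); /* unicast promisc */
	if (ret != 0)
		return ret;
	return rte_eth_allmulticast_enable(port_id); /* multicast promisc */
}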
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 3 ++
2 files changed, 76 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6a577a6582..87d281ee93 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
}
static int
-ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+dcf_config_promisc(struct ice_dcf_adapter *adapter,
+ bool enable_unicast,
+ bool enable_multicast)
{
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_promisc_info promisc;
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ promisc.flags = 0;
+ promisc.vsi_id = hw->vsi_res->vsi_id;
+
+ if (enable_unicast)
+ promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+ if (enable_multicast)
+ promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+ args.req_msg = (uint8_t *)&promisc;
+ args.req_msglen = sizeof(promisc);
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE");
+ return err;
+ }
+
+ adapter->promisc_unicast_enabled = enable_unicast;
+ adapter->promisc_multicast_enabled = enable_multicast;
return 0;
}
+static int
+ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, true,
+ adapter->promisc_multicast_enabled);
+}
+
static int
ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, false,
+ adapter->promisc_multicast_enabled);
}
static int
ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ true);
}
static int
ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ false);
}
static int
@@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
+ dcf_config_promisc(adapter, false, false);
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index f2faf26f58..22e450527b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -33,6 +33,9 @@ struct ice_dcf_adapter {
struct ice_adapter parent; /* Must be first */
struct ice_dcf_hw real_hw;
+ bool promisc_unicast_enabled;
+ bool promisc_multicast_enabled;
+
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
* [PATCH v2 07/33] net/ice: support dcf MAC configuration
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (5 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 06/33] net/ice: support dcf promisc configuration Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 08/33] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
` (26 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
Below PMD ops are supported in this patch (a usage sketch follows the list):
.mac_addr_add = dcf_dev_add_mac_addr
.mac_addr_remove = dcf_dev_del_mac_addr
.set_mc_addr_list = dcf_set_mc_addr_list
.mac_addr_set = dcf_dev_set_default_mac_addr
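A usage sketch exercising these ops from the application side (assumed
code; the locally administered MAC addresses are illustrative):

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
configure_macs(uint16_t port_id)
{
	struct rte_ether_addr extra   = {{0x02, 0x00, 0x00, 0x00, 0x00, 0x01}};
	struct rte_ether_addr primary = {{0x02, 0x00, 0x00, 0x00, 0x00, 0x02}};
	int ret;

	/* Add an extra unicast address (.mac_addr_add). */
	ret = rte_eth_dev_mac_addr_add(port_id, &extra, 0);
	if (ret != 0)
		return ret;
	/* Replace the default/primary address (.mac_addr_set). */
	return rte_eth_dev_default_mac_addr_set(port_id, &primary);
}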
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 9 +-
drivers/net/ice/ice_dcf.h | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 5 +-
4 files changed, 226 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 89c0203ba3..55ae68c456 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
}
int
-ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr,
+ bool add, uint8_t type)
{
struct virtchnl_ether_addr_list *list;
- struct rte_ether_addr *addr;
struct dcf_virtchnl_cmd args;
int len, err = 0;
@@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
}
len = sizeof(struct virtchnl_ether_addr_list);
- addr = hw->eth_dev->data->mac_addrs;
len += sizeof(struct virtchnl_ether_addr);
list = rte_zmalloc(NULL, len, 0);
@@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
rte_memcpy(list->list[0].addr, addr->addr_bytes,
sizeof(addr->addr_bytes));
+
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
-
+ list->list[0].type = type;
list->vsi_id = hw->vsi_res->vsi_id;
list->num_elements = 1;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index f0b45af5ae..78df202a77 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
-int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr, bool add,
+ uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 87d281ee93..0d944f9fd2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -26,6 +26,12 @@
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#define DCF_NUM_MACADDR_MAX 64
+
+static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add);
+
static int
ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- ret = ice_dcf_add_del_all_mac_addr(hw, true);
+ ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs,
+ true, VIRTCHNL_ETHER_ADDR_PRIMARY);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to add mac addr");
return ret;
}
+ if (dcf_ad->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, true);
+ if (ret)
+ return ret;
+ }
+
+
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
@@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
rte_intr_efd_disable(intr_handle);
rte_intr_vec_list_free(intr_handle);
- ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
+ ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw,
+ dcf_ad->real_hw.eth_dev->data->mac_addrs,
+ false, VIRTCHNL_ETHER_ADDR_PRIMARY);
+
+ if (dcf_ad->mc_addrs_num)
+ /* flush previous addresses */
+ (void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw,
+ dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, false);
+
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- dev_info->max_mac_addrs = 1;
+ dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
@@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
false);
}
+static int
+dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ int err;
+
+ if (rte_is_zero_ether_addr(addr)) {
+ PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+ return -EINVAL;
+ }
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to add MAC address");
+ return err;
+ }
+
+ return 0;
+}
+
+static void
+dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct rte_ether_addr *addr = &dev->data->mac_addrs[index];
+ int err;
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to remove MAC address");
+}
+
+static int
+dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add)
+{
+ struct virtchnl_ether_addr_list *list;
+ struct dcf_virtchnl_cmd args;
+ uint32_t i;
+ int len, err = 0;
+
+ len = sizeof(struct virtchnl_ether_addr_list);
+ len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
+
+ list = rte_zmalloc(NULL, len, 0);
+ if (!list) {
+ PMD_DRV_LOG(ERR, "fail to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
+ sizeof(list->list[i].addr));
+ list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ list->vsi_id = hw->vsi_res->vsi_id;
+ list->num_elements = mc_addrs_num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+ VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.req_msg = (uint8_t *)list;
+ args.req_msglen = len;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" :
+ "OP_DEL_ETHER_ADDRESS");
+ rte_free(list);
+ return err;
+}
+
+static int
+dcf_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i;
+ int ret;
+
+
+ if (mc_addrs_num > DCF_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR,
+ "can't add more than a limited number (%u) of addresses.",
+ (uint32_t)DCF_NUM_MACADDR_MAX);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addrs[i])) {
+ const uint8_t *mac = mc_addrs[i].addr_bytes;
+
+ PMD_DRV_LOG(ERR,
+ "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x",
+ mac[0], mac[1], mac[2], mac[3], mac[4],
+ mac[5]);
+ return -EINVAL;
+ }
+ }
+
+ if (adapter->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num, false);
+ if (ret)
+ return ret;
+ }
+ if (!mc_addrs_num) {
+ adapter->mc_addrs_num = 0;
+ return 0;
+ }
+
+ /* add new ones */
+ ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true);
+ if (ret) {
+ /* if adding mac address list fails, should add the
+ * previous addresses back.
+ */
+ if (adapter->mc_addrs_num)
+ (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num,
+ true);
+ return ret;
+ }
+ adapter->mc_addrs_num = mc_addrs_num;
+ memcpy(adapter->mc_addrs,
+ mc_addrs, mc_addrs_num * sizeof(*mc_addrs));
+
+ return 0;
+}
+
+static int
+dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_ether_addr *old_addr;
+ int ret;
+
+ old_addr = hw->eth_dev->data->mac_addrs;
+ if (rte_is_same_ether_addr(old_addr, mac_addr))
+ return 0;
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ old_addr->addr_bytes[0],
+ old_addr->addr_bytes[1],
+ old_addr->addr_bytes[2],
+ old_addr->addr_bytes[3],
+ old_addr->addr_bytes[4],
+ old_addr->addr_bytes[5]);
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ mac_addr->addr_bytes[0],
+ mac_addr->addr_bytes[1],
+ mac_addr->addr_bytes[2],
+ mac_addr->addr_bytes[3],
+ mac_addr->addr_bytes[4],
+ mac_addr->addr_bytes[5]);
+
+ if (ret)
+ return -EIO;
+
+ rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs);
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
.allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .mac_addr_add = dcf_dev_add_mac_addr,
+ .mac_addr_remove = dcf_dev_del_mac_addr,
+ .set_mc_addr_list = dcf_set_mc_addr_list,
+ .mac_addr_set = dcf_dev_set_default_mac_addr,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 22e450527b..27f6402786 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -14,7 +14,7 @@
#include "ice_dcf.h"
#define ICE_DCF_MAX_RINGS 1
-
+#define DCF_NUM_MACADDR_MAX 64
#define ICE_DCF_FRAME_SIZE_MAX 9728
#define ICE_DCF_VLAN_TAG_SIZE 4
#define ICE_DCF_ETH_OVERHEAD \
@@ -35,7 +35,8 @@ struct ice_dcf_adapter {
bool promisc_unicast_enabled;
bool promisc_multicast_enabled;
-
+ uint32_t mc_addrs_num;
+ struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX];
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 08/33] net/ice: support dcf VLAN filter and offload configuration
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (6 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 07/33] net/ice: support dcf MAC configuration Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 09/33] net/ice: support DCF new VLAN capabilities Kevin Liu
` (25 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The following PMD ops are supported in this patch:
.vlan_filter_set = dcf_dev_vlan_filter_set
.vlan_offload_set = dcf_dev_vlan_offload_set
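As a usage illustration (not part of the patch): with these ops wired up,
the standard testpmd VLAN commands exercise the new paths on a
hypothetical port 0:

  testpmd> vlan set filter on 0
  testpmd> rx_vlan add 10 0
  testpmd> vlan set strip on 0

The first and third commands reach dcf_dev_vlan_offload_set() through the
VLAN filter/strip offload masks; the second reaches dcf_dev_vlan_filter_set()
with VLAN id 10.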
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++++++++
1 file changed, 101 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0d944f9fd2..e58cdf47d2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_filter_list *vlan_list;
+ uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+ sizeof(uint16_t)];
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+ vlan_list->vsi_id = hw->vsi_res->vsi_id;
+ vlan_list->num_elements = 1;
+ vlan_list->vlan_id[0] = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+ args.req_msg = cmd_buffer;
+ args.req_msglen = sizeof(cmd_buffer);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
+
+ return err;
+}
+
+static int
+dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_ENABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static int
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ /* Vlan stripping setting */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ /* Enable or disable VLAN stripping */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ err = dcf_enable_vlan_strip(hw);
+ else
+ err = dcf_disable_vlan_strip(hw);
+
+ if (err)
+ return -EIO;
+ }
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mac_addr_remove = dcf_dev_del_mac_addr,
.set_mc_addr_list = dcf_set_mc_addr_list,
.mac_addr_set = dcf_dev_set_default_mac_addr,
+ .vlan_filter_set = dcf_dev_vlan_filter_set,
+ .vlan_offload_set = dcf_dev_vlan_offload_set,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 09/33] net/ice: support DCF new VLAN capabilities
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (7 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 08/33] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 10/33] net/ice: enable CVL DCF device reset API Kevin Liu
` (24 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The new VLAN virtchnl opcodes introduce new capabilities such as VLAN
filtering, stripping and insertion.
The DCF first needs to query the VLAN capabilities based on the current
device configuration.
Based on negotiation, the DCF can configure the inner VLAN filter when
port VLAN is enabled, and it can configure the outer VLAN (0x8100) when
port VLAN is disabled, to stay compatible with legacy mode.
When the port VLAN is updated by the DCF, the DCF needs to reset in
order to query the new VLAN capabilities.
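A condensed sketch of the capability-driven selection implemented by
dcf_add_del_vlan_v2() below (illustration only, not a separate API):

  /* Prefer the outer-VLAN filtering caps when the PF reports any,
   * otherwise fall back to inner; only ethertype 0x8100 is handled. */
  struct virtchnl_vlan_supported_caps *caps =
          &hw->vlan_v2_caps.filtering.filtering_support;
  uint32_t filtering_caps = caps->outer ? caps->outer : caps->inner;

  if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
          return -ENOTSUP;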
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 27 +++++
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++++++++---
3 files changed, 182 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..885d58c0f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
return 0;
}
+static int
+dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_vlan_caps vlan_v2_caps;
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS;
+ args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps;
+ args.rsp_buflen = sizeof(vlan_v2_caps);
+
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS");
+ return ret;
+ }
+
+ rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
+ return 0;
+}
+
int
ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
@@ -701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
+ if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) &&
+ dcf_get_vlan_offload_caps_v2(hw))
+ goto err_rss;
+
return 0;
err_rss:
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78df202a77..32e6031bd9 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -107,6 +107,7 @@ struct ice_dcf_hw {
uint16_t nb_msix;
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
+ struct virtchnl_vlan_caps vlan_v2_caps;
/* Link status */
bool link_up;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e58cdf47d2..d4bfa182a4 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,46 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_supported_caps *supported_caps =
+ &hw->vlan_v2_caps.filtering.filtering_support;
+ struct virtchnl_vlan *vlan_setting;
+ struct virtchnl_vlan_filter_list_v2 vlan_filter;
+ struct dcf_virtchnl_cmd args;
+ uint32_t filtering_caps;
+ int err;
+
+ if (supported_caps->outer) {
+ filtering_caps = supported_caps->outer;
+ vlan_setting = &vlan_filter.filters[0].outer;
+ } else {
+ filtering_caps = supported_caps->inner;
+ vlan_setting = &vlan_filter.filters[0].inner;
+ }
+
+ if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
+ return -ENOTSUP;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.vport_id = hw->vsi_res->vsi_id;
+ vlan_filter.num_elements = 1;
+ vlan_setting->tpid = RTE_ETHER_TYPE_VLAN;
+ vlan_setting->tci = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2;
+ args.req_msg = (uint8_t *)&vlan_filter;
+ args.req_msglen = sizeof(vlan_filter);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2");
+
+ return err;
+}
+
static int
dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
{
@@ -1052,6 +1092,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
return err;
}
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+ err = dcf_add_del_vlan_v2(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+ }
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static void
+dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable)
+{
+ struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i, j;
+ uint64_t ids;
+
+ for (i = 0; i < RTE_DIM(vfc->ids); i++) {
+ if (vfc->ids[i] == 0)
+ continue;
+
+ ids = vfc->ids[i];
+ for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) {
+ if (ids & 1)
+ dcf_add_del_vlan_v2(hw, 64 * i + j, enable);
+ }
+ }
+}
+
+static int
+dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable)
+{
+ struct virtchnl_vlan_supported_caps *stripping_caps =
+ &hw->vlan_v2_caps.offloads.stripping_support;
+ struct virtchnl_vlan_setting vlan_strip;
+ struct dcf_virtchnl_cmd args;
+ uint32_t *ethertype;
+ int ret;
+
+ if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.outer_ethertype_setting;
+ else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.inner_ethertype_setting;
+ else
+ return -ENOTSUP;
+
+ memset(&vlan_strip, 0, sizeof(vlan_strip));
+ vlan_strip.vport_id = hw->vsi_res->vsi_id;
+ *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 :
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2;
+ args.req_msg = (uint8_t *)&vlan_strip;
+ args.req_msglen = sizeof(vlan_strip);
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ enable ? "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" :
+ "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ bool enable;
+ int err;
+
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
+
+ dcf_iterate_vlan_filters_v2(dev, enable);
+ }
+
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
+
+ err = dcf_config_vlan_strip_v2(hw, enable);
+ /* If not support, the stripping is already disabled by PF */
+ if (err == -ENOTSUP && !enable)
+ err = 0;
+ if (err)
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int
dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
{
@@ -1084,30 +1234,17 @@ dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
return ret;
}
-static int
-dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct ice_dcf_adapter *adapter = dev->data->dev_private;
- struct ice_dcf_hw *hw = &adapter->real_hw;
- int err;
-
- if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
- return -ENOTSUP;
-
- err = dcf_add_del_vlan(hw, vlan_id, on);
- if (err)
- return -EIO;
- return 0;
-}
-
static int
dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
int err;
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
+ return dcf_dev_vlan_offload_set_v2(dev, mask);
+
if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
return -ENOTSUP;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 10/33] net/ice: enable CVL DCF device reset API
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (8 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 09/33] net/ice: support DCF new VLAN capabilities Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 11/33] net/ice/base: add VXLAN support for switch filter Kevin Liu
` (23 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Dapeng Yu, Kevin Liu
From: Dapeng Yu <dapengx.yu@intel.com>
Enable CVL DCF device reset API.
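From the application side, a port reset is driven through the generic
ethdev API; a minimal sketch, assuming this helper is hooked into the
DCF's reset path and using a hypothetical port id 0:

  #include <rte_ethdev.h>

  int ret = rte_eth_dev_reset(0);
  if (ret == 0) {
          /* On success the port must be rebuilt from scratch:
           * rte_eth_dev_configure(), queue setup, rte_eth_dev_start(). */
  }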
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 24 ++++++++++++++++++++++++
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 25 insertions(+)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 885d58c0f4..9c2f13cf72 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1163,3 +1163,27 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
rte_free(list);
return err;
}
+
+int
+ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
+{
+ int ret;
+
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+ ice_dcf_disable_irq0(hw);
+ rte_intr_disable(intr_handle);
+ rte_intr_callback_unregister(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ ret = ice_dcf_mode_disable(hw);
+ if (ret)
+ goto err;
+ ret = ice_dcf_get_vf_resource(hw);
+err:
+ rte_intr_callback_register(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ rte_intr_enable(intr_handle);
+ ice_dcf_enable_irq0(hw);
+ return ret;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 32e6031bd9..8cf17e7700 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -137,6 +137,7 @@ int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
+int ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
void ice_dcf_tm_conf_uninit(struct rte_eth_dev *dev);
int ice_dcf_replay_vf_bw(struct ice_dcf_hw *hw, uint16_t vf_id);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 11/33] net/ice/base: add VXLAN support for switch filter
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (9 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 10/33] net/ice: enable CVL DCF device reset API Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 12/33] net/ice: " Kevin Liu
` (22 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
1. Add profile rules for VXLAN on Switch Filter, including
pattern_eth_ipv4_udp_vxlan_any
pattern_eth_ipv6_udp_vxlan_any
pattern_eth_ipv4_udp_vxlan_eth_ipv4
pattern_eth_ipv4_udp_vxlan_eth_ipv6
pattern_eth_ipv6_udp_vxlan_eth_ipv4
pattern_eth_ipv6_udp_vxlan_eth_ipv6
2. Add common rules for VXLAN on Switch Filter, including
+-----------------+-----------------------------------------------------+
| Pattern | Input Set |
+-----------------+-----------------------------------------------------+
| ipv4_vxlan_ipv4 | vni, inner dmac, inner dst/src ip, outer dst/src ip |
| ipv4_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv4 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
+-----------------+-----------------------------------------------------+
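A note on the mechanics (illustrative; example_offsets is a hypothetical
name): each trainer packet added below is paired with an
ice_dummy_pkt_offsets table mapping protocol layers to byte offsets in
the packet, terminated by ICE_PROTOCOL_LAST:

  static const struct ice_dummy_pkt_offsets example_offsets[] = {
          { ICE_MAC_OFOS, 0 },    /* outer Ethernet at byte 0 */
          { ICE_IPV4_OFOS, 14 },  /* outer IPv4 after the 14-byte L2 header */
          { ICE_PROTOCOL_LAST, 0 },
  };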
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_protocol_type.h | 6 +
drivers/net/ice/base/ice_switch.c | 213 ++++++++++++++++++++++-
drivers/net/ice/base/ice_switch.h | 12 ++
3 files changed, 230 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index 0e6e5990be..d6332c5690 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -112,6 +112,12 @@ enum ice_sw_tunnel_type {
ICE_SW_TUN_IPV6_NAT_T,
ICE_SW_TUN_IPV4_L2TPV3,
ICE_SW_TUN_IPV6_L2TPV3,
+ ICE_SW_TUN_PROFID_IPV4_VXLAN,
+ ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4,
+ ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6,
+ ICE_SW_TUN_PROFID_IPV6_VXLAN,
+ ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4,
+ ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6,
ICE_SW_TUN_PROFID_IPV6_ESP,
ICE_SW_TUN_PROFID_IPV6_AH,
ICE_SW_TUN_PROFID_MAC_IPV6_L2TPV3,
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index d4cc664ad7..b0c50c8f40 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -228,6 +228,117 @@ static const u8 dummy_udp_tun_udp_packet[] = {
0x00, 0x08, 0x00, 0x00,
};
+static const
+struct ice_dummy_pkt_offsets dummy_udp_tun_ipv6_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_UDP_OF, 34 },
+ { ICE_VXLAN, 42 },
+ { ICE_GENEVE, 42 },
+ { ICE_VXLAN_GPE, 42 },
+ { ICE_MAC_IL, 50 },
+ { ICE_IPV6_IL, 64 },
+ { ICE_TCP_IL, 104 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_udp_tun_ipv6_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x5a, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+ 0x00, 0x46, 0x00, 0x00,
+
+ 0x00, 0x00, 0x65, 0x58, /* ICE_VXLAN 42 */
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x86, 0xdd,
+
+ 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_IL 64 */
+ 0x00, 0x00, 0x06, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 104 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x50, 0x02, 0x20, 0x00,
+ 0x00, 0x00, 0x00, 0x00
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_udp_tun_ipv6_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_UDP_OF, 34 },
+ { ICE_VXLAN, 42 },
+ { ICE_GENEVE, 42 },
+ { ICE_VXLAN_GPE, 42 },
+ { ICE_MAC_IL, 50 },
+ { ICE_IPV6_IL, 64 },
+ { ICE_UDP_ILOS, 104 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_udp_tun_ipv6_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x4e, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x00, 0x11, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+ 0x00, 0x3a, 0x00, 0x00,
+
+ 0x00, 0x00, 0x65, 0x58, /* ICE_VXLAN 42 */
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x86, 0xdd,
+
+ 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_IL 64 */
+ 0x00, 0x58, 0x11, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 104 */
+ 0x00, 0x08, 0x00, 0x00,
+};
+
/* offset info for MAC + IPv4 + UDP dummy packet */
static const struct ice_dummy_pkt_offsets dummy_udp_packet_offsets[] = {
{ ICE_MAC_OFOS, 0 },
@@ -2001,6 +2112,10 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan)
u8 gre_profile[12] = {13, 14, 15, 19, 20, 21, 28, 29, 30, 31, 32, 33};
u8 pppoe_profile[7] = {34, 35, 36, 37, 38, 39, 40};
u8 non_tun_profile[6] = {4, 5, 6, 7, 8, 9};
+ bool ipv4_vxlan_ipv4_valid = false;
+ bool ipv4_vxlan_ipv6_valid = false;
+ bool ipv6_vxlan_ipv4_valid = false;
+ bool ipv6_vxlan_ipv6_valid = false;
enum ice_sw_tunnel_type tun_type;
u16 i, j, k, profile_num = 0;
bool non_tun_valid = false;
@@ -2022,8 +2137,17 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan)
}
for (i = 0; i < 12; i++) {
- if (vxlan_profile[i] == j)
+ if (vxlan_profile[i] == j) {
vxlan_valid = true;
+ if (i < 3)
+ ipv4_vxlan_ipv4_valid = true;
+ else if (i < 6)
+ ipv6_vxlan_ipv4_valid = true;
+ else if (i < 9)
+ ipv4_vxlan_ipv6_valid = true;
+ else if (i < 12)
+ ipv6_vxlan_ipv6_valid = true;
+ }
}
for (i = 0; i < 7; i++) {
@@ -2083,6 +2207,20 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan)
break;
}
}
+ if (tun_type == ICE_SW_TUN_VXLAN) {
+ if (ipv4_vxlan_ipv4_valid && ipv4_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN;
+ else if (ipv6_vxlan_ipv4_valid && ipv6_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN;
+ else if (ipv4_vxlan_ipv4_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4;
+ else if (ipv4_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6;
+ else if (ipv6_vxlan_ipv4_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4;
+ else if (ipv6_vxlan_ipv6_valid)
+ tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6;
+ }
if (profile_num == 1 && (flag_valid || non_tun_valid || pppoe_valid)) {
for (j = 0; j < ICE_MAX_NUM_PROFILES; j++) {
@@ -7496,6 +7634,12 @@ static bool ice_tun_type_match_word(enum ice_sw_tunnel_type tun_type, u16 *mask)
case ICE_SW_TUN_VXLAN_GPE:
case ICE_SW_TUN_GENEVE:
case ICE_SW_TUN_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
case ICE_SW_TUN_NVGRE:
case ICE_SW_TUN_UDP:
case ICE_ALL_TUNNELS:
@@ -7613,6 +7757,42 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo,
case ICE_SW_TUN_PPPOE_IPV6_UDP:
ice_set_bit(ICE_PROFID_PPPOE_IPV6_UDP, bm);
return;
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_OTHER, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV4_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV4_TUN_M_IPV6_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_OTHER, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV4_OTHER, bm);
+ return;
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_TCP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_UDP, bm);
+ ice_set_bit(ICE_PROFID_IPV6_TUN_M_IPV6_OTHER, bm);
+ return;
case ICE_SW_TUN_PROFID_IPV6_ESP:
case ICE_SW_TUN_IPV6_ESP:
ice_set_bit(ICE_PROFID_IPV6_ESP, bm);
@@ -7780,6 +7960,12 @@ bool ice_is_prof_rule(enum ice_sw_tunnel_type type)
{
switch (type) {
case ICE_SW_TUN_AND_NON_TUN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
case ICE_SW_TUN_PROFID_IPV6_ESP:
case ICE_SW_TUN_PROFID_IPV6_AH:
case ICE_SW_TUN_PROFID_MAC_IPV6_L2TPV3:
@@ -8396,8 +8582,27 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
return;
}
+ if (tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6 ||
+ tun_type == ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6) {
+ if (tcp) {
+ *pkt = dummy_udp_tun_ipv6_tcp_packet;
+ *pkt_len = sizeof(dummy_udp_tun_ipv6_tcp_packet);
+ *offsets = dummy_udp_tun_ipv6_tcp_packet_offsets;
+ return;
+ }
+
+ *pkt = dummy_udp_tun_ipv6_udp_packet;
+ *pkt_len = sizeof(dummy_udp_tun_ipv6_udp_packet);
+ *offsets = dummy_udp_tun_ipv6_udp_packet_offsets;
+ return;
+ }
+
if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
tun_type == ICE_SW_TUN_VXLAN_GPE || tun_type == ICE_SW_TUN_UDP ||
+ tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN ||
+ tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4 ||
+ tun_type == ICE_SW_TUN_PROFID_IPV6_VXLAN ||
+ tun_type == ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4 ||
tun_type == ICE_SW_TUN_GENEVE_VLAN ||
tun_type == ICE_SW_TUN_VXLAN_VLAN) {
if (tcp) {
@@ -8613,6 +8818,12 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type,
case ICE_SW_TUN_AND_NON_TUN:
case ICE_SW_TUN_VXLAN_GPE:
case ICE_SW_TUN_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4:
+ case ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6:
case ICE_SW_TUN_VXLAN_VLAN:
case ICE_SW_TUN_UDP:
if (!ice_get_open_tunnel_port(hw, TNL_VXLAN, &open_port))
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index a2b3c80107..efb9399b77 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -20,6 +20,18 @@
#define ICE_PROFID_IPV4_UDP 5
#define ICE_PROFID_IPV6_TCP 7
#define ICE_PROFID_IPV6_UDP 8
+#define ICE_PROFID_IPV4_TUN_M_IPV4_TCP 10
+#define ICE_PROFID_IPV4_TUN_M_IPV4_UDP 11
+#define ICE_PROFID_IPV4_TUN_M_IPV4_OTHER 12
+#define ICE_PROFID_IPV6_TUN_M_IPV4_TCP 16
+#define ICE_PROFID_IPV6_TUN_M_IPV4_UDP 17
+#define ICE_PROFID_IPV6_TUN_M_IPV4_OTHER 18
+#define ICE_PROFID_IPV4_TUN_M_IPV6_TCP 22
+#define ICE_PROFID_IPV4_TUN_M_IPV6_UDP 23
+#define ICE_PROFID_IPV4_TUN_M_IPV6_OTHER 24
+#define ICE_PROFID_IPV6_TUN_M_IPV6_TCP 25
+#define ICE_PROFID_IPV6_TUN_M_IPV6_UDP 26
+#define ICE_PROFID_IPV6_TUN_M_IPV6_OTHER 27
#define ICE_PROFID_PPPOE_PAY 34
#define ICE_PROFID_PPPOE_IPV4_TCP 35
#define ICE_PROFID_PPPOE_IPV4_UDP 36
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 12/33] net/ice: add VXLAN support for switch filter
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (10 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 11/33] net/ice/base: add VXLAN support for switch filter Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 13/33] common/iavf: support flushing rules and reporting DCF id Kevin Liu
` (21 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
1. Add profile rules for VXLAN on Switch Filter, including
pattern_eth_ipv4_udp_vxlan_any
pattern_eth_ipv6_udp_vxlan_any
pattern_eth_ipv4_udp_vxlan_eth_ipv4
pattern_eth_ipv4_udp_vxlan_eth_ipv6
pattern_eth_ipv6_udp_vxlan_eth_ipv4
pattern_eth_ipv6_udp_vxlan_eth_ipv6
2. Add common rules for VXLAN on Switch Filter, including
+-----------------+-----------------------------------------------------+
| Pattern | Input Set |
+-----------------+-----------------------------------------------------+
| ipv4_vxlan_ipv4 | vni, inner dmac, inner dst/src ip, outer dst/src ip |
| ipv4_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv4 | vni, inner dmac, inner dst/src ip |
| ipv6_vxlan_ipv6 | vni, inner dmac, inner dst/src ip |
+-----------------+-----------------------------------------------------+
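As an illustration (hypothetical addresses and VNI), an ipv4_vxlan_ipv4
rule covering the full input set above could be created from testpmd as:

  flow create 0 ingress pattern eth / ipv4 src is 10.0.0.1 dst is 10.0.0.2 /
       udp / vxlan vni is 100 / eth dst is 00:11:22:33:44:55 /
       ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end
       actions queue index 3 / end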
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_generic_flow.c | 20 ++++++++++
drivers/net/ice/ice_generic_flow.h | 4 ++
drivers/net/ice/ice_switch_filter.c | 59 +++++++++++++++++++++++++++--
3 files changed, 80 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 53b1c0b69a..1433094ed4 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -375,6 +375,26 @@ enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_ipv4_icmp[] = {
RTE_FLOW_ITEM_TYPE_END,
};
+/* IPv4 VXLAN ANY */
+enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_any[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_VXLAN,
+ RTE_FLOW_ITEM_TYPE_ANY,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+/* IPv6 VXLAN ANY */
+enum rte_flow_item_type pattern_eth_ipv6_udp_vxlan_any[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_VXLAN,
+ RTE_FLOW_ITEM_TYPE_ANY,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
/* IPv4 VXLAN MAC IPv4 */
enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv4[] = {
RTE_FLOW_ITEM_TYPE_ETH,
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index 11f51a5c15..def7e2d6d6 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -175,6 +175,10 @@ extern enum rte_flow_item_type pattern_eth_ipv6_icmp6[];
extern enum rte_flow_item_type pattern_eth_vlan_ipv6_icmp6[];
extern enum rte_flow_item_type pattern_eth_qinq_ipv6_icmp6[];
+/* IPv4/IPv6 VXLAN ANY */
+extern enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_any[];
+extern enum rte_flow_item_type pattern_eth_ipv6_udp_vxlan_any[];
+
/* IPv4 VXLAN IPv4 */
extern enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_ipv4[];
extern enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_ipv4_udp[];
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 36c9bffb73..e90e109eca 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -85,6 +85,19 @@
#define ICE_SW_INSET_DIST_VXLAN_IPV4 ( \
ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | ICE_INSET_DMAC | \
ICE_INSET_VXLAN_VNI)
+#define ICE_SW_INSET_DIST_IPV4_VXLAN_IPV4 ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST)
+#define ICE_SW_INSET_DIST_IPV4_VXLAN_IPV6 ( \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_IPV6_SRC | ICE_INSET_IPV6_DST)
+#define ICE_SW_INSET_DIST_IPV6_VXLAN_IPV4 ( \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST)
+#define ICE_SW_INSET_DIST_IPV6_VXLAN_IPV6 ( \
+ ICE_INSET_DMAC | ICE_INSET_VXLAN_VNI | \
+ ICE_INSET_IPV6_SRC | ICE_INSET_IPV6_DST)
#define ICE_SW_INSET_DIST_NVGRE_IPV4_TCP ( \
ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
ICE_INSET_TCP_SRC_PORT | ICE_INSET_TCP_DST_PORT | \
@@ -112,6 +125,9 @@
ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
ICE_INSET_UDP_SRC_PORT | ICE_INSET_UDP_DST_PORT | \
ICE_INSET_IPV4_TOS)
+#define ICE_SW_INSET_PERM_TUNNEL_IPV6 ( \
+ ICE_INSET_IPV6_SRC | ICE_INSET_IPV6_DST | \
+ ICE_INSET_IPV6_NEXT_HDR | ICE_INSET_IPV6_TC)
#define ICE_SW_INSET_MAC_PPPOE ( \
ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION)
@@ -217,9 +233,14 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_VXLAN_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_VXLAN_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_VXLAN_IPV4_TCP, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV4_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv4, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_NVGRE_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_udp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_tcp, ICE_INSET_IPV4_DST, ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
@@ -301,9 +322,14 @@ ice_pattern_match_item ice_switch_pattern_perm_list[] = {
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
+ {pattern_eth_ipv4_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV4_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv4, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv6_udp_vxlan_eth_ipv6, ICE_SW_INSET_DIST_IPV6_VXLAN_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_udp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
{pattern_eth_ipv4_nvgre_eth_ipv4_tcp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
@@ -566,6 +592,11 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
bool inner_ipv6_valid = 0;
bool inner_tcp_valid = 0;
bool inner_udp_valid = 0;
+ bool ipv4_ipv4_valid = 0;
+ bool ipv4_ipv6_valid = 0;
+ bool ipv6_ipv4_valid = 0;
+ bool ipv6_ipv6_valid = 0;
+ bool any_valid = 0;
uint16_t j, k, t = 0;
if (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
@@ -586,6 +617,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ANY:
*tun_type = ICE_SW_TUN_AND_NON_TUN;
+ any_valid = 1;
break;
case RTE_FLOW_ITEM_TYPE_ETH:
@@ -654,6 +686,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
case RTE_FLOW_ITEM_TYPE_IPV4:
ipv4_spec = item->spec;
ipv4_mask = item->mask;
+ if (ipv4_valid)
+ ipv4_ipv4_valid = 1;
+ if (ipv6_valid)
+ ipv6_ipv4_valid = 1;
if (tunnel_valid) {
inner_ipv4_valid = 1;
input = &inner_input_set;
@@ -734,6 +770,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
case RTE_FLOW_ITEM_TYPE_IPV6:
ipv6_spec = item->spec;
ipv6_mask = item->mask;
+ if (ipv4_valid)
+ ipv4_ipv6_valid = 1;
+ if (ipv6_valid)
+ ipv6_ipv6_valid = 1;
if (tunnel_valid) {
inner_ipv6_valid = 1;
input = &inner_input_set;
@@ -1577,9 +1617,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
}
if (*tun_type == ICE_NON_TUN) {
- if (vxlan_valid)
- *tun_type = ICE_SW_TUN_VXLAN;
- else if (nvgre_valid)
+ if (nvgre_valid)
*tun_type = ICE_SW_TUN_NVGRE;
else if (ipv4_valid && tcp_valid)
*tun_type = ICE_SW_IPV4_TCP;
@@ -1591,6 +1629,21 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
*tun_type = ICE_SW_IPV6_UDP;
}
+ if (vxlan_valid) {
+ if (ipv4_ipv4_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV4;
+ else if (ipv4_ipv6_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN_IPV6;
+ else if (ipv6_ipv4_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV4;
+ else if (ipv6_ipv6_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN_IPV6;
+ else if (ipv6_valid && any_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV6_VXLAN;
+ else if (ipv4_valid && any_valid)
+ *tun_type = ICE_SW_TUN_PROFID_IPV4_VXLAN;
+ }
+
if (input_set_byte > MAX_INPUT_SET_BYTE) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 13/33] common/iavf: support flushing rules and reporting DCF id
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (11 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 12/33] net/ice: " Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 14/33] net/ice/base: fix ethertype filter input set Kevin Liu
` (20 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add a virtual channel opcode for the DCF to flush rules.
Add a virtual channel event for the PF to report the DCF id.
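A sketch of how a caller could issue the new opcode, following the
dcf_virtchnl_cmd pattern used throughout this series (hypothetical call
site; the opcode carries no payload, as the validation hunk below shows):

  struct dcf_virtchnl_cmd args;
  int err;

  memset(&args, 0, sizeof(args));
  args.v_op = VIRTCHNL_OP_DCF_RULE_FLUSH;
  err = ice_dcf_execute_virtchnl_cmd(hw, &args);
  if (err)
          PMD_DRV_LOG(ERR, "fail to execute command OP_DCF_RULE_FLUSH");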
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/common/iavf/virtchnl.h | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 3e44eca7d8..6e2a24b281 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -164,6 +164,12 @@ enum virtchnl_ops {
VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
+
+ /**
+ * To reduce the risk of future compatibility issues,
+ * set VIRTCHNL_OP_DCF_RULE_FLUSH carefully by using a special value.
+ */
+ VIRTCHNL_OP_DCF_RULE_FLUSH = 6000,
VIRTCHNL_OP_MAX,
};
@@ -1424,6 +1430,12 @@ enum virtchnl_event_codes {
VIRTCHNL_EVENT_RESET_IMPENDING,
VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE,
+
+ /**
+ * To reduce the risk of future compatibility issues,
+ * set VIRTCHNL_EVENT_DCF_VSI_INFO carefully by using a special value.
+ */
+ VIRTCHNL_EVENT_DCF_VSI_INFO = 1000,
};
#define PF_EVENT_SEVERITY_INFO 0
@@ -2200,6 +2212,7 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
*/
valid_len = msglen;
break;
+ case VIRTCHNL_OP_DCF_RULE_FLUSH:
case VIRTCHNL_OP_DCF_DISABLE:
case VIRTCHNL_OP_DCF_GET_VSI_MAP:
case VIRTCHNL_OP_DCF_GET_PKG_INFO:
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 14/33] net/ice/base: fix ethertype filter input set
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (12 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 13/33] common/iavf: support flushing rules and reporting DCF id Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 15/33] net/ice/base: support IPv6 GRE UDP pattern Kevin Liu
` (19 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add destination and source MAC addresses as input sets to the
ethertype filter.
For example:
flow create 0 ingress pattern eth dst is 00:11:22:33:44:55
type is 0x802 / end actions queue index 2 / end
This flow forwards all matched ingress packets to queue 2.
Fixes: 1f70fb3e958a ("net/ice/base: support flow director for non-IP packets")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_fdir.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index ae76361102..0a1d45a9d7 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -3935,6 +3935,9 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
input->ip.v6.dst_port);
break;
case ICE_FLTR_PTYPE_NON_IP_L2:
+ ice_pkt_insert_mac_addr(loc, input->ext_data.dst_mac);
+ ice_pkt_insert_mac_addr(loc + ETH_ALEN,
+ input->ext_data.src_mac);
ice_pkt_insert_u16(loc, ICE_MAC_ETHTYPE_OFFSET,
input->ext_data.ether_type);
break;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 15/33] net/ice/base: support IPv6 GRE UDP pattern
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (13 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 14/33] net/ice/base: fix ethertype filter input set Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 16/33] net/ice: support IPv6 NVGRE tunnel Kevin Liu
` (18 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add support (trainer packet and its offsets, definitions, and
pattern matching) for the IPv6 GRE UDP pattern.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_protocol_type.h | 1 +
drivers/net/ice/base/ice_switch.c | 43 +++++++++++++++++++++++-
2 files changed, 43 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index d6332c5690..eec9f27823 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -44,6 +44,7 @@ enum ice_protocol_type {
ICE_GENEVE,
ICE_VXLAN_GPE,
ICE_NVGRE,
+ ICE_GRE,
ICE_GTP,
ICE_PPPOE,
ICE_PFCP,
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index b0c50c8f40..f444a2da07 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -12,6 +12,7 @@
#define ICE_MAX_VLAN_ID 0xFFF
#define ICE_IPV6_ETHER_ID 0x86DD
#define ICE_IPV4_NVGRE_PROTO_ID 0x002F
+#define ICE_IPV6_GRE_PROTO_ID 0x002F
#define ICE_PPP_IPV6_PROTO_ID 0x0057
#define ICE_TCP_PROTO_ID 0x06
#define ICE_GTPU_PROFILE 24
@@ -129,6 +130,34 @@ static const u8 dummy_gre_udp_packet[] = {
0x00, 0x08, 0x00, 0x00,
};
+static const struct ice_dummy_pkt_offsets
+dummy_ipv6_gre_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV6_OFOS, 14 },
+ { ICE_GRE, 54 },
+ { ICE_IPV6_IL, 58 },
+ { ICE_UDP_ILOS, 98 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_ipv6_gre_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x86, 0xdd, 0x60, 0x00,
+ 0x00, 0x00, 0x00, 0x36, 0x2f, 0x40, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
+ 0x86, 0xdd, 0x60, 0x00, 0x00, 0x00, 0x00, 0x0a,
+ 0x11, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a,
+ 0xff, 0xd8, 0x00, 0x00,
+};
+
static const struct ice_dummy_pkt_offsets dummy_udp_tun_tcp_packet_offsets[] = {
{ ICE_MAC_OFOS, 0 },
{ ICE_ETYPE_OL, 12 },
@@ -8207,8 +8236,13 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
udp = true;
else if (lkups[i].type == ICE_TCP_IL)
tcp = true;
- else if (lkups[i].type == ICE_IPV6_OFOS)
+ else if (lkups[i].type == ICE_IPV6_OFOS) {
ipv6 = true;
+ if (lkups[i].h_u.ipv6_hdr.next_hdr ==
+ ICE_IPV6_GRE_PROTO_ID &&
+ lkups[i].m_u.ipv6_hdr.next_hdr == 0xFF)
+ gre = true;
+ }
else if (lkups[i].type == ICE_VLAN_OFOS)
vlan = true;
else if (lkups[i].type == ICE_ETYPE_OL &&
@@ -8568,6 +8602,13 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
return;
}
+ if (ipv6 && gre) {
+ *pkt = dummy_ipv6_gre_udp_packet;
+ *pkt_len = sizeof(dummy_ipv6_gre_udp_packet);
+ *offsets = dummy_ipv6_gre_udp_packet_offsets;
+ return;
+ }
+
if (tun_type == ICE_SW_TUN_NVGRE || gre) {
if (tcp) {
*pkt = dummy_gre_tcp_packet;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 16/33] net/ice: support IPv6 NVGRE tunnel
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (14 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 15/33] net/ice/base: support IPv6 GRE UDP pattern Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 17/33] net/ice: support new pattern of IPv4 Kevin Liu
` (17 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add protocol definition and pattern matching for IPv6 NVGRE tunnel.
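An illustrative testpmd rule that keys on the IPv6 next header for
GRE/NVGRE (protocol 47); values are hypothetical:

  flow create 0 ingress pattern eth / ipv6 proto is 47 / end
       actions queue index 1 / end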
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index e90e109eca..4e9c85aed4 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -31,6 +31,7 @@
#define ICE_PPP_IPV4_PROTO 0x0021
#define ICE_PPP_IPV6_PROTO 0x0057
#define ICE_IPV4_PROTO_NVGRE 0x002F
+#define ICE_IPV6_PROTO_NVGRE 0x002F
#define ICE_SW_PRI_BASE 6
#define ICE_SW_INSET_ETHER ( \
@@ -803,6 +804,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
break;
}
}
+ if ((ipv6_spec->hdr.proto &
+ ipv6_mask->hdr.proto) ==
+ ICE_IPV6_PROTO_NVGRE)
+ *tun_type = ICE_SW_TUN_AND_NON_TUN;
if (ipv6_mask->hdr.proto)
*input |= ICE_INSET_IPV6_NEXT_HDR;
if (ipv6_mask->hdr.hop_limits)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 17/33] net/ice: support new pattern of IPv4
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (15 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 16/33] net/ice: support IPv6 NVGRE tunnel Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 18/33] net/ice/base: support new patterns of TCP and UDP Kevin Liu
` (16 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definition and pattern entry for IPv4 pattern: MAC/VLAN/IPv4
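An illustrative testpmd rule for the new pattern (hypothetical values):

  flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 /
       vlan vid is 10 / ipv4 dst is 192.168.0.1 / end
       actions queue index 2 / end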
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 4e9c85aed4..a8cb70ee0c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -38,6 +38,8 @@
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
#define ICE_SW_INSET_MAC_VLAN ( \
ICE_SW_INSET_ETHER | ICE_INSET_VLAN_INNER)
+#define ICE_SW_INSET_MAC_VLAN_IPV4 ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4)
#define ICE_SW_INSET_MAC_QINQ ( \
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_VLAN_INNER | \
ICE_INSET_VLAN_OUTER)
@@ -231,6 +233,7 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv4, ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4, ICE_SW_INSET_MAC_VLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 18/33] net/ice/base: support new patterns of TCP and UDP
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (16 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 17/33] net/ice: support new pattern of IPv4 Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 19/33] net/ice: " Kevin Liu
` (15 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Find training packets for the TCP and UDP patterns below:
MAC/VLAN/IPv4/TCP
MAC/VLAN/IPv4/UDP
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_switch.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index f444a2da07..c742dba138 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -8568,6 +8568,12 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
if (tun_type == ICE_SW_IPV4_TCP) {
+ if (vlan && tcp) {
+ *pkt = dummy_vlan_tcp_packet;
+ *pkt_len = sizeof(dummy_vlan_tcp_packet);
+ *offsets = dummy_vlan_tcp_packet_offsets;
+ return;
+ }
*pkt = dummy_tcp_packet;
*pkt_len = sizeof(dummy_tcp_packet);
*offsets = dummy_tcp_packet_offsets;
@@ -8575,6 +8581,12 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
if (tun_type == ICE_SW_IPV4_UDP) {
+ if (vlan && udp) {
+ *pkt = dummy_vlan_udp_packet;
+ *pkt_len = sizeof(dummy_vlan_udp_packet);
+ *offsets = dummy_vlan_udp_packet_offsets;
+ return;
+ }
*pkt = dummy_udp_packet;
*pkt_len = sizeof(dummy_udp_packet);
*offsets = dummy_udp_packet_offsets;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 19/33] net/ice: support new patterns of TCP and UDP
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (17 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 18/33] net/ice/base: support new patterns of TCP and UDP Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 20/33] net/ice/base: support IPv4 GRE tunnel Kevin Liu
` (14 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definitions and pattern entries for the below TCP and UDP patterns:
MAC/VLAN/IPv4/TCP
MAC/VLAN/IPv4/UDP
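For reference, a minimal sketch of an rte_flow pattern that maps to the
new ICE_SW_INSET_MAC_VLAN_IPV4_TCP entry (spec/mask omitted, values
illustrative only):

  /* requires <rte_flow.h> */
  struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_VLAN },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
          { .type = RTE_FLOW_ITEM_TYPE_TCP },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };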
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index a8cb70ee0c..44046f803c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -62,6 +62,10 @@
ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS | \
ICE_INSET_UDP_DST_PORT | ICE_INSET_UDP_SRC_PORT)
+#define ICE_SW_INSET_MAC_VLAN_IPV4_TCP ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_VLAN_IPV4_UDP ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4_UDP)
#define ICE_SW_INSET_MAC_IPV6 ( \
ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
ICE_INSET_IPV6_TC | ICE_INSET_IPV6_HOP_LIMIT | \
@@ -234,6 +238,8 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_vlan_ipv4, ICE_SW_INSET_MAC_VLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4_tcp, ICE_SW_INSET_MAC_VLAN_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4_udp, ICE_SW_INSET_MAC_VLAN_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 20/33] net/ice/base: support IPv4 GRE tunnel
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (18 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 19/33] net/ice: " Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 21/33] net/ice: support IPv4 GRE raw pattern type Kevin Liu
` (13 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definitions, training packets and the routine path for the IPv4 GRE tunnel.
Ref:
https://www.ietf.org/rfc/rfc1701.html
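As a quick reference, RFC 1701 places the C (checksum), R (routing),
K (key) and S (sequence) flags in the top bits of the 16-bit flags
field, so in the first byte of the network-order header they read as
0x80, 0x40, 0x20 and 0x10. A minimal sketch of testing the K bit on the
new ice_gre header (the helper name is illustrative, not part of this
patch):

  /* K bit (key present) sits in the first byte of the
   * big-endian flags field: C=0x80, K=0x20, S=0x10 */
  static inline bool ice_gre_has_key(const struct ice_gre *gre)
  {
          return (((const u8 *)&gre->flags)[0] & 0x20) != 0;
  }

The presence of these optional fields changes the inner-header offsets,
which is why separate c1k1/c0k1/c0k0 training packets are added below.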
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_flex_pipe.c | 37 ++-
drivers/net/ice/base/ice_flex_pipe.h | 3 +-
drivers/net/ice/base/ice_protocol_type.h | 15 ++
drivers/net/ice/base/ice_switch.c | 304 ++++++++++++++++++++++-
4 files changed, 332 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index f6a29f87c5..8672c41c69 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1851,6 +1851,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs,
* @ids_cnt: lookup/protocol count
* @bm: bitmap of field vectors to consider
* @fv_list: Head of a list
+ * @lkup_exts: lookup elements
*
* Finds all the field vector entries from switch block that contain
* a given protocol ID and returns a list of structures of type
@@ -1861,7 +1862,8 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs,
*/
enum ice_status
ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
- ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list)
+ ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list,
+ struct ice_prot_lkup_ext *lkup_exts)
{
struct ice_sw_fv_list_entry *fvl;
struct ice_sw_fv_list_entry *tmp;
@@ -1892,29 +1894,26 @@ ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
if (!ice_is_bit_set(bm, (u16)offset))
continue;
- for (i = 0; i < ids_cnt; i++) {
+ int found = 1;
+ for (i = 0; i < lkup_exts->n_val_words; i++) {
int j;
- /* This code assumes that if a switch field vector line
- * has a matching protocol, then this line will contain
- * the entries necessary to represent every field in
- * that protocol header.
- */
for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
- if (fv->ew[j].prot_id == prot_ids[i])
+ if (fv->ew[j].prot_id ==
+ lkup_exts->fv_words[i].prot_id &&
+ fv->ew[j].off == lkup_exts->fv_words[i].off)
break;
if (j >= hw->blk[ICE_BLK_SW].es.fvw)
- break;
- if (i + 1 == ids_cnt) {
- fvl = (struct ice_sw_fv_list_entry *)
- ice_malloc(hw, sizeof(*fvl));
- if (!fvl)
- goto err;
- fvl->fv_ptr = fv;
- fvl->profile_id = offset;
- LIST_ADD(&fvl->list_entry, fv_list);
- break;
- }
+ found = 0;
+ }
+ if (found) {
+ fvl = (struct ice_sw_fv_list_entry *)
+ ice_malloc(hw, sizeof(*fvl));
+ if (!fvl)
+ goto err;
+ fvl->fv_ptr = fv;
+ fvl->profile_id = offset;
+ LIST_ADD(&fvl->list_entry, fv_list);
}
} while (fv);
if (LIST_EMPTY(fv_list))
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 23ba45564a..a22d66f3cf 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -37,7 +37,8 @@ void
ice_init_prof_result_bm(struct ice_hw *hw);
enum ice_status
ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
- ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list);
+ ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list,
+ struct ice_prot_lkup_ext *lkup_exts);
enum ice_status
ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count);
u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld);
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index eec9f27823..ffd34606e0 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -67,6 +67,7 @@ enum ice_sw_tunnel_type {
ICE_SW_TUN_VXLAN, /* VXLAN matches only non-VLAN pkts */
ICE_SW_TUN_VXLAN_VLAN, /* VXLAN matches both VLAN and non-VLAN pkts */
ICE_SW_TUN_NVGRE,
+ ICE_SW_TUN_GRE,
ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
* and GENEVE
*/
@@ -231,6 +232,10 @@ enum ice_prot_id {
#define ICE_TUN_FLAG_VLAN_MASK 0x01
#define ICE_TUN_FLAG_FV_IND 2
+#define ICE_GRE_FLAG_MDID 22
+#define ICE_GRE_FLAG_MDID_OFF (ICE_MDID_SIZE * ICE_GRE_FLAG_MDID)
+#define ICE_GRE_FLAG_MASK 0x01C0
+
#define ICE_PROTOCOL_MAX_ENTRIES 16
/* Mapping of software defined protocol ID to hardware defined protocol ID */
@@ -371,6 +376,15 @@ struct ice_nvgre {
__be32 tni_flow;
};
+struct ice_gre {
+ __be16 flags;
+ __be16 protocol;
+ __be16 chksum;
+ __be16 offset;
+ __be32 key;
+ __be32 seqnum;
+};
+
union ice_prot_hdr {
struct ice_ether_hdr eth_hdr;
struct ice_ethtype_hdr ethertype;
@@ -381,6 +395,7 @@ union ice_prot_hdr {
struct ice_sctp_hdr sctp_hdr;
struct ice_udp_tnl_hdr tnl_hdr;
struct ice_nvgre nvgre_hdr;
+ struct ice_gre gre_hdr;
struct ice_udp_gtp_hdr gtp_hdr;
struct ice_pppoe_hdr pppoe_hdr;
struct ice_pfcp_hdr pfcp_hdr;
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index c742dba138..1b51cd4321 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -12,6 +12,7 @@
#define ICE_MAX_VLAN_ID 0xFFF
#define ICE_IPV6_ETHER_ID 0x86DD
#define ICE_IPV4_NVGRE_PROTO_ID 0x002F
+#define ICE_IPV4_GRE_PROTO_ID 0x002F
#define ICE_IPV6_GRE_PROTO_ID 0x002F
#define ICE_PPP_IPV6_PROTO_ID 0x0057
#define ICE_TCP_PROTO_ID 0x06
@@ -158,6 +159,188 @@ static const u8 dummy_ipv6_gre_udp_packet[] = {
0xff, 0xd8, 0x00, 0x00,
};
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c1k1_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 50 },
+ { ICE_TCP_IL, 70 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c1k1_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x4e, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x2f, 0x7c, 0x7e,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0xb0, 0x00, 0x08, 0x00, /* ICE_GRE 34 */
+ 0x46, 0x1e, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x45, 0x00, 0x00, 0x2a, /* ICE_IPV4_IL 50 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x06, 0x7c, 0xcb,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0x00, 0x14, 0x00, 0x50, /* ICE_TCP_IL 70 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x50, 0x02, 0x20, 0x00,
+ 0x91, 0x7a, 0x00, 0x00,
+
+ 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c1k1_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 50 },
+ { ICE_UDP_ILOS, 70 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c1k1_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x08, 0x00, /* ICE_ETYPE_OL 12 */
+
+ 0x45, 0x00, 0x00, 0x42, /* ICE_IPV4_OFOS 14 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x2f, 0x7c, 0x8a,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0xb0, 0x00, 0x08, 0x00, /* ICE_GRE 34 */
+ 0x46, 0x1d, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+
+ 0x45, 0x00, 0x00, 0x1e, /* ICE_IPV4_IL 50 */
+ 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x7c, 0xcc,
+ 0x7f, 0x00, 0x00, 0x01,
+ 0x7f, 0x00, 0x00, 0x01,
+
+ 0x00, 0x35, 0x00, 0x35, /* ICE_UDP_ILOS 70 */
+ 0x00, 0x0a, 0x01, 0x6e,
+
+ 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k1_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 46 },
+ { ICE_TCP_IL, 66 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k1_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x4a, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x82, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x30, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x45, 0x00,
+ 0x00, 0x2a, 0x00, 0x01, 0x00, 0x00, 0x40, 0x06,
+ 0x7c, 0xcb, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x00, 0x14, 0x00, 0x50, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x50, 0x02,
+ 0x20, 0x00, 0x91, 0x7a, 0x00, 0x00, 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k1_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 46 },
+ { ICE_UDP_ILOS, 66 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k1_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x3e, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x8e, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x30, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x45, 0x00,
+ 0x00, 0x1e, 0x00, 0x01, 0x00, 0x00, 0x40, 0x11,
+ 0x7c, 0xcc, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x00, 0x35, 0x00, 0x35, 0x00, 0x0a,
+ 0x01, 0x6e, 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k0_tcp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 42 },
+ { ICE_TCP_IL, 62 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k0_tcp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x46, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x86, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x10, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x45, 0x00, 0x00, 0x2a, 0x00, 0x01,
+ 0x00, 0x00, 0x40, 0x06, 0x7c, 0xcb, 0x7f, 0x00,
+ 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0x00, 0x14,
+ 0x00, 0x50, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x50, 0x02, 0x20, 0x00, 0x91, 0x7a,
+ 0x00, 0x00, 0x00, 0x00,
+};
+
+static const struct ice_dummy_pkt_offsets
+dummy_gre_rfc1701_c0k0_udp_packet_offsets[] = {
+ { ICE_MAC_OFOS, 0 },
+ { ICE_ETYPE_OL, 12 },
+ { ICE_IPV4_OFOS, 14 },
+ { ICE_GRE, 34 },
+ { ICE_IPV4_IL, 42 },
+ { ICE_UDP_ILOS, 62 },
+ { ICE_PROTOCOL_LAST, 0 },
+};
+
+static const u8 dummy_gre_rfc1701_c0k0_udp_packet[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00,
+ 0x00, 0x3a, 0x00, 0x01, 0x00, 0x00, 0x40, 0x2f,
+ 0x7c, 0x92, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00,
+ 0x00, 0x01, 0x10, 0x00, 0x08, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x45, 0x00, 0x00, 0x1e, 0x00, 0x01,
+ 0x00, 0x00, 0x40, 0x11, 0x7c, 0xcc, 0x7f, 0x00,
+ 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0x00, 0x35,
+ 0x00, 0x35, 0x00, 0x0a, 0x01, 0x6e, 0x00, 0x00,
+};
+
static const struct ice_dummy_pkt_offsets dummy_udp_tun_tcp_packet_offsets[] = {
{ ICE_MAC_OFOS, 0 },
{ ICE_ETYPE_OL, 12 },
@@ -173,7 +356,7 @@ static const struct ice_dummy_pkt_offsets dummy_udp_tun_tcp_packet_offsets[] = {
};
static const u8 dummy_udp_tun_tcp_packet[] = {
- 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
@@ -224,7 +407,7 @@ static const struct ice_dummy_pkt_offsets dummy_udp_tun_udp_packet_offsets[] = {
};
static const u8 dummy_udp_tun_udp_packet[] = {
- 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+ 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
@@ -6892,6 +7075,7 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[ICE_PROTOCOL_LAST] = {
{ ICE_GENEVE, { 8, 10, 12, 14 } },
{ ICE_VXLAN_GPE, { 8, 10, 12, 14 } },
{ ICE_NVGRE, { 0, 2, 4, 6 } },
+ { ICE_GRE, { 0, 2, 4, 6, 8, 10, 12, 14 } },
{ ICE_GTP, { 8, 10, 12, 14, 16, 18, 20, 22 } },
{ ICE_PPPOE, { 0, 2, 4, 6 } },
{ ICE_PFCP, { 8, 10, 12, 14, 16, 18, 20, 22 } },
@@ -6927,6 +7111,7 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
{ ICE_GENEVE, ICE_UDP_OF_HW },
{ ICE_VXLAN_GPE, ICE_UDP_OF_HW },
{ ICE_NVGRE, ICE_GRE_OF_HW },
+ { ICE_GRE, ICE_GRE_OF_HW },
{ ICE_GTP, ICE_UDP_OF_HW },
{ ICE_PPPOE, ICE_PPPOE_HW },
{ ICE_PFCP, ICE_UDP_ILOS_HW },
@@ -7065,6 +7250,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
struct ice_prot_lkup_ext *lkup_exts)
{
u8 j, word, prot_id, ret_val;
+ u8 extra_byte = 0;
if (!ice_prot_type_to_id(rule->type, &prot_id))
return 0;
@@ -7077,8 +7263,15 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
/* No more space to accommodate */
if (word >= ICE_MAX_CHAIN_WORDS)
return 0;
+ if (rule->type == ICE_GRE) {
+ if (ice_prot_ext[rule->type].offs[j] == 0) {
+ if (((u16 *)&rule->h_u)[j] == 0x20)
+ extra_byte = 4;
+ continue;
+ }
+ }
lkup_exts->fv_words[word].off =
- ice_prot_ext[rule->type].offs[j];
+ ice_prot_ext[rule->type].offs[j] - extra_byte;
lkup_exts->fv_words[word].prot_id =
ice_prot_id_tbl[rule->type].protocol_id;
lkup_exts->field_mask[word] =
@@ -7622,10 +7815,12 @@ ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
* @lkups_cnt: number of protocols
* @bm: bitmap of field vectors to consider
* @fv_list: pointer to a list that holds the returned field vectors
+ * @lkup_exts: lookup elements
*/
static enum ice_status
ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
- ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list)
+ ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list,
+ struct ice_prot_lkup_ext *lkup_exts)
{
enum ice_status status;
u8 *prot_ids;
@@ -7645,7 +7840,8 @@ ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
/* Find field vectors that include all specified protocol types */
- status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, bm, fv_list);
+ status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, bm, fv_list,
+ lkup_exts);
free_mem:
ice_free(hw, prot_ids);
@@ -7681,6 +7877,10 @@ static bool ice_tun_type_match_word(enum ice_sw_tunnel_type tun_type, u16 *mask)
*mask = ICE_TUN_FLAG_MASK;
return true;
+ case ICE_SW_TUN_GRE:
+ *mask = ICE_GRE_FLAG_MASK;
+ return true;
+
case ICE_SW_TUN_GENEVE_VLAN:
case ICE_SW_TUN_VXLAN_VLAN:
*mask = ICE_TUN_FLAG_MASK & ~ICE_TUN_FLAG_VLAN_MASK;
@@ -7702,6 +7902,12 @@ ice_add_special_words(struct ice_adv_rule_info *rinfo,
struct ice_prot_lkup_ext *lkup_exts)
{
u16 mask;
+ u8 has_gre_key = 0;
+ u8 i;
+
+ for (i = 0; i < lkup_exts->n_val_words; i++)
+ if (lkup_exts->fv_words[i].prot_id == 0x40)
+ has_gre_key = 1;
/* If this is a tunneled packet, then add recipe index to match the
* tunnel bit in the packet metadata flags.
@@ -7713,6 +7919,13 @@ ice_add_special_words(struct ice_adv_rule_info *rinfo,
lkup_exts->fv_words[word].prot_id = ICE_META_DATA_ID_HW;
lkup_exts->fv_words[word].off = ICE_TUN_FLAG_MDID_OFF;
lkup_exts->field_mask[word] = mask;
+
+ if (rinfo->tun_type == ICE_SW_TUN_GRE)
+ lkup_exts->fv_words[word].off =
+ ICE_GRE_FLAG_MDID_OFF;
+
+ if (!has_gre_key)
+ lkup_exts->field_mask[word] = 0x0140;
} else {
return ICE_ERR_MAX_LIMIT;
}
@@ -7754,6 +7967,9 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo,
case ICE_SW_TUN_NVGRE:
prof_type = ICE_PROF_TUN_GRE;
break;
+ case ICE_SW_TUN_GRE:
+ prof_type = ICE_PROF_TUN_GRE;
+ break;
case ICE_SW_TUN_PPPOE:
case ICE_SW_TUN_PPPOE_QINQ:
prof_type = ICE_PROF_TUN_PPPOE;
@@ -8079,7 +8295,8 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
*/
ice_get_compat_fv_bitmap(hw, rinfo, fv_bitmap);
- status = ice_get_fv(hw, lkups, lkups_cnt, fv_bitmap, &rm->fv_list);
+ status = ice_get_fv(hw, lkups, lkups_cnt, fv_bitmap, &rm->fv_list,
+ lkup_exts);
if (status)
goto err_unroll;
@@ -8228,6 +8445,8 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
const struct ice_dummy_pkt_offsets **offsets)
{
bool tcp = false, udp = false, ipv6 = false, vlan = false;
+ bool gre_c_bit = false;
+ bool gre_k_bit = false;
bool gre = false, mpls = false;
u16 i;
@@ -8245,6 +8464,17 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
}
else if (lkups[i].type == ICE_VLAN_OFOS)
vlan = true;
+ else if (lkups[i].type == ICE_GRE) {
+ if (lkups[i].h_u.gre_hdr.flags & 0x20)
+ gre_k_bit = true;
+ if (lkups[i].h_u.gre_hdr.flags & 0x80)
+ gre_c_bit = true;
+ } else if (lkups[i].type == ICE_IPV4_OFOS &&
+ lkups[i].h_u.ipv4_hdr.protocol ==
+ ICE_IPV4_GRE_PROTO_ID &&
+ lkups[i].m_u.ipv4_hdr.protocol ==
+ 0xFF)
+ gre = true;
else if (lkups[i].type == ICE_ETYPE_OL &&
lkups[i].h_u.ethertype.ethtype_id ==
CPU_TO_BE16(ICE_IPV6_ETHER_ID) &&
@@ -8650,6 +8880,46 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
return;
}
+ if (tun_type == ICE_SW_TUN_GRE && tcp) {
+ if (gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c1k1_tcp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c1k1_tcp_packet);
+ *offsets = dummy_gre_rfc1701_c1k1_tcp_packet_offsets;
+ return;
+ }
+ if (!gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c0k1_tcp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k1_tcp_packet);
+ *offsets = dummy_gre_rfc1701_c0k1_tcp_packet_offsets;
+ return;
+ }
+
+ *pkt = dummy_gre_rfc1701_c0k0_tcp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k0_tcp_packet);
+ *offsets = dummy_gre_rfc1701_c0k0_tcp_packet_offsets;
+ return;
+ }
+
+ if (tun_type == ICE_SW_TUN_GRE) {
+ if (gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c1k1_udp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c1k1_udp_packet);
+ *offsets = dummy_gre_rfc1701_c1k1_udp_packet_offsets;
+ return;
+ }
+ if (!gre_c_bit && gre_k_bit) {
+ *pkt = dummy_gre_rfc1701_c0k1_udp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k1_udp_packet);
+ *offsets = dummy_gre_rfc1701_c0k1_udp_packet_offsets;
+ return;
+ }
+
+ *pkt = dummy_gre_rfc1701_c0k0_udp_packet;
+ *pkt_len = sizeof(dummy_gre_rfc1701_c0k0_udp_packet);
+ *offsets = dummy_gre_rfc1701_c0k0_udp_packet_offsets;
+ return;
+ }
+
if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
tun_type == ICE_SW_TUN_VXLAN_GPE || tun_type == ICE_SW_TUN_UDP ||
tun_type == ICE_SW_TUN_PROFID_IPV4_VXLAN ||
@@ -8800,6 +9070,9 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
case ICE_NVGRE:
len = sizeof(struct ice_nvgre);
break;
+ case ICE_GRE:
+ len = sizeof(struct ice_gre);
+ break;
case ICE_VXLAN:
case ICE_GENEVE:
case ICE_VXLAN_GPE:
@@ -8833,6 +9106,20 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
if (len % ICE_BYTES_PER_WORD)
return ICE_ERR_CFG;
+ if (lkups[i].type == ICE_GRE) {
+ if (lkups[i].h_u.gre_hdr.flags == 0x20)
+ offset -= 4;
+
+ for (j = 1; j < len / sizeof(u16); j++)
+ if (((u16 *)&lkups[i].m_u)[j])
+ ((u16 *)(pkt + offset))[j] =
+ (((u16 *)(pkt + offset))[j] &
+ ~((u16 *)&lkups[i].m_u)[j]) |
+ (((u16 *)&lkups[i].h_u)[j] &
+ ((u16 *)&lkups[i].m_u)[j]);
+ continue;
+ }
+
/* We have the offset to the header start, the length, the
* caller's header values and mask. Use this information to
* copy the data into the dummy packet appropriately based on
@@ -9420,8 +9707,11 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
return ICE_ERR_CFG;
count = ice_fill_valid_words(&lkups[i], &lkup_exts);
- if (!count)
+ if (!count) {
+ if (lkups[i].type == ICE_GRE)
+ continue;
return ICE_ERR_CFG;
+ }
}
/* Create any special protocol/offset pairs, such as looking at tunnel
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 21/33] net/ice: support IPv4 GRE raw pattern type
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (19 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 20/33] net/ice/base: support IPv4 GRE tunnel Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 22/33] net/ice: treat unknown package as OS default package Kevin Liu
` (12 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Steven Zou, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definitions, matching entries and parsers for the below patterns:
ETH/IPV4/GRE/RAW/IPV4
ETH/IPV4/GRE/RAW/IPV4/UDP
ETH/IPV4/GRE/RAW/IPV4/TCP
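Note that the RAW item in these patterns carries the GRE key as an
ASCII hex string rather than as raw bytes, as the parser below expects.
A minimal illustrative spec (field values are examples only):

  static const struct rte_flow_item_raw gre_key = {
          .offset  = 4,  /* must be 4 when the GRE C bit is set */
          .length  = 10, /* strlen("0x12345678") */
          .pattern = (const uint8_t *)"0x12345678",
  };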
Signed-off-by: Steven Zou <steven.zou@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_generic_flow.c | 27 +++++++++
drivers/net/ice/ice_generic_flow.h | 9 +++
drivers/net/ice/ice_switch_filter.c | 90 +++++++++++++++++++++++++++++
3 files changed, 126 insertions(+)
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 1433094ed4..6663a85ed0 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1084,6 +1084,33 @@ enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_icmp6[] = {
RTE_FLOW_ITEM_TYPE_ICMP6,
RTE_FLOW_ITEM_TYPE_END,
};
+/* IPv4 GRE RAW IPv4 */
+enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_GRE,
+ RTE_FLOW_ITEM_TYPE_RAW,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_udp[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_GRE,
+ RTE_FLOW_ITEM_TYPE_RAW,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_tcp[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_GRE,
+ RTE_FLOW_ITEM_TYPE_RAW,
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_TCP,
+ RTE_FLOW_ITEM_TYPE_END,
+};
/*IPv4 GTPU (EH) */
enum rte_flow_item_type pattern_eth_ipv4_gtpu[] = {
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index def7e2d6d6..12193cbd9d 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -27,6 +27,7 @@
#define ICE_PROT_L2TPV3OIP BIT_ULL(16)
#define ICE_PROT_PFCP BIT_ULL(17)
#define ICE_PROT_NAT_T_ESP BIT_ULL(18)
+#define ICE_PROT_GRE BIT_ULL(19)
/* field */
@@ -54,6 +55,7 @@
#define ICE_PFCP_SEID BIT_ULL(42)
#define ICE_PFCP_S_FIELD BIT_ULL(41)
#define ICE_IP_PK_ID BIT_ULL(40)
+#define ICE_RAW_PATTERN BIT_ULL(39)
/* input set */
@@ -104,6 +106,8 @@
(ICE_PROT_GTPU | ICE_GTPU_TEID)
#define ICE_INSET_GTPU_QFI \
(ICE_PROT_GTPU | ICE_GTPU_QFI)
+#define ICE_INSET_RAW \
+ (ICE_PROT_GRE | ICE_RAW_PATTERN)
#define ICE_INSET_PPPOE_SESSION \
(ICE_PROT_PPPOE_S | ICE_PPPOE_SESSION)
#define ICE_INSET_PPPOE_PROTO \
@@ -291,6 +295,11 @@ extern enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_udp[];
extern enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_sctp[];
extern enum rte_flow_item_type pattern_eth_ipv6_nvgre_eth_ipv6_icmp6[];
+/* IPv4 GRE RAW IPv4 */
+extern enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4[];
+extern enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_udp[];
+extern enum rte_flow_item_type pattern_eth_ipv4_gre_raw_ipv4_tcp[];
+
/* IPv4 GTPU (EH) */
extern enum rte_flow_item_type pattern_eth_ipv4_gtpu[];
extern enum rte_flow_item_type pattern_eth_ipv4_gtpu_eh[];
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 44046f803c..435ca5a05c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -196,6 +196,22 @@
#define ICE_SW_INSET_GTPU_IPV6_TCP ( \
ICE_SW_INSET_GTPU_IPV6 | ICE_INSET_TCP_SRC_PORT | \
ICE_INSET_TCP_DST_PORT)
+#define ICE_SW_INSET_DIST_GRE_RAW_IPV4 ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_RAW)
+#define ICE_SW_INSET_DIST_GRE_RAW_IPV4_TCP ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_TCP_SRC_PORT | ICE_INSET_TCP_DST_PORT | \
+ ICE_INSET_RAW)
+#define ICE_SW_INSET_DIST_GRE_RAW_IPV4_UDP ( \
+ ICE_INSET_IPV4_SRC | ICE_INSET_IPV4_DST | \
+ ICE_INSET_UDP_SRC_PORT | ICE_INSET_UDP_DST_PORT | \
+ ICE_INSET_RAW)
+
+#define CUSTOM_GRE_KEY_OFFSET 4
+#define GRE_CFLAG 0x80
+#define GRE_KFLAG 0x20
+#define GRE_SFLAG 0x10
struct sw_meta {
struct ice_adv_lkup_elem *list;
@@ -317,6 +333,9 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv6_gtpu_eh_ipv6_udp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_UDP, ICE_INSET_NONE},
{pattern_eth_ipv6_gtpu_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE},
{pattern_eth_ipv6_gtpu_eh_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE},
+ {pattern_eth_ipv4_gre_raw_ipv4, ICE_SW_INSET_DIST_GRE_RAW_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_gre_raw_ipv4_tcp, ICE_SW_INSET_DIST_GRE_RAW_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_ipv4_gre_raw_ipv4_udp, ICE_SW_INSET_DIST_GRE_RAW_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
};
static struct
@@ -608,6 +627,11 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
bool ipv6_ipv6_valid = 0;
bool any_valid = 0;
uint16_t j, k, t = 0;
+ uint16_t c_rsvd0_ver = 0;
+ bool gre_valid = 0;
+
+#define set_cur_item_einval(msg) \
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, (msg))
if (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
*tun_type == ICE_NON_TUN_QINQ)
@@ -1100,6 +1124,70 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
}
break;
+ case RTE_FLOW_ITEM_TYPE_GRE: {
+ const struct rte_flow_item_gre *gre_spec = item->spec;
+ const struct rte_flow_item_gre *gre_mask = item->mask;
+
+ gre_valid = 1;
+ tunnel_valid = 1;
+ if (gre_spec && gre_mask) {
+ list[t].type = ICE_GRE;
+ if (gre_mask->c_rsvd0_ver) {
+ /* GRE RFC1701 */
+ list[t].h_u.gre_hdr.flags =
+ gre_spec->c_rsvd0_ver;
+ list[t].m_u.gre_hdr.flags =
+ gre_mask->c_rsvd0_ver;
+ c_rsvd0_ver = gre_spec->c_rsvd0_ver &
+ gre_mask->c_rsvd0_ver;
+ }
+ }
+ break;
+ }
+
+ case RTE_FLOW_ITEM_TYPE_RAW: {
+ const struct rte_flow_item_raw *raw_spec;
+ char *endp = NULL;
+ unsigned long key;
+ char s[sizeof("0x12345678")];
+
+ raw_spec = item->spec;
+
+ if (list[t].type != ICE_GRE)
+ return set_cur_item_einval("RAW must follow GRE.");
+
+ if (!(c_rsvd0_ver & GRE_KFLAG)) {
+ if (!raw_spec)
+ break;
+
+ return set_cur_item_einval("Invalid pattern! k_bit is 0 while raw pattern exists.");
+ }
+
+ if (!raw_spec)
+ return set_cur_item_einval("Invalid pattern! k_bit is 1 while raw pattern doesn't exist.");
+
+ if ((c_rsvd0_ver & GRE_CFLAG) == GRE_CFLAG &&
+ raw_spec->offset != CUSTOM_GRE_KEY_OFFSET)
+ return set_cur_item_einval("Invalid pattern! c_bit is 1 while offset is not 4.");
+
+ if (raw_spec->length >= sizeof(s))
+ return set_cur_item_einval("Invalid key");
+
+ memcpy(s, raw_spec->pattern, raw_spec->length);
+ s[raw_spec->length] = '\0';
+ key = strtol(s, &endp, 16);
+ if (*endp != '\0' || key > UINT32_MAX)
+ return set_cur_item_einval("Invalid key");
+
+ list[t].h_u.gre_hdr.key = (uint32_t)key;
+ list[t].m_u.gre_hdr.key = UINT32_MAX;
+ *input |= ICE_INSET_RAW;
+ input_set_byte += 2;
+ t++;
+
+ break;
+ }
+
case RTE_FLOW_ITEM_TYPE_VLAN:
vlan_spec = item->spec;
vlan_mask = item->mask;
@@ -1633,6 +1721,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
if (*tun_type == ICE_NON_TUN) {
if (nvgre_valid)
*tun_type = ICE_SW_TUN_NVGRE;
+ else if (gre_valid)
+ *tun_type = ICE_SW_TUN_GRE;
else if (ipv4_valid && tcp_valid)
*tun_type = ICE_SW_IPV4_TCP;
else if (ipv4_valid && udp_valid)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 22/33] net/ice: treat unknown package as OS default package
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (20 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 21/33] net/ice: support IPv4 GRE raw pattern type Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 23/33] net/ice/base: update Profile ID table for VXLAN Kevin Liu
` (11 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
In order to use a custom package, an unknown package should be treated
as the OS default package.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 73e550f5fb..ad9b09d081 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1710,13 +1710,16 @@ ice_load_pkg_type(struct ice_hw *hw)
/* store the activated package type (OS default or Comms) */
if (!strncmp((char *)hw->active_pkg_name, ICE_OS_DEFAULT_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_OS_DEFAULT;
- else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ } else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_COMMS;
- else
- package_type = ICE_PKG_TYPE_UNKNOWN;
+ } else {
+ PMD_INIT_LOG(WARNING,
+ "The package type is not identified, treaded as OS default type");
+ package_type = ICE_PKG_TYPE_OS_DEFAULT;
+ }
PMD_INIT_LOG(NOTICE, "Active package is: %d.%d.%d.%d, %s (%s VLAN mode)",
hw->active_pkg_ver.major, hw->active_pkg_ver.minor,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 23/33] net/ice/base: update Profile ID table for VXLAN
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (21 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 22/33] net/ice: treat unknown package as OS default package Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 24/33] net/ice/base: update Protocol ID table to match DVM DDP Kevin Liu
` (10 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
Update the Profile ID table for VXLAN to align with the Tencent customized DDP.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_switch.h | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index efb9399b77..c8071aa50d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -23,15 +23,15 @@
#define ICE_PROFID_IPV4_TUN_M_IPV4_TCP 10
#define ICE_PROFID_IPV4_TUN_M_IPV4_UDP 11
#define ICE_PROFID_IPV4_TUN_M_IPV4_OTHER 12
-#define ICE_PROFID_IPV6_TUN_M_IPV4_TCP 16
-#define ICE_PROFID_IPV6_TUN_M_IPV4_UDP 17
-#define ICE_PROFID_IPV6_TUN_M_IPV4_OTHER 18
-#define ICE_PROFID_IPV4_TUN_M_IPV6_TCP 22
-#define ICE_PROFID_IPV4_TUN_M_IPV6_UDP 23
-#define ICE_PROFID_IPV4_TUN_M_IPV6_OTHER 24
-#define ICE_PROFID_IPV6_TUN_M_IPV6_TCP 25
-#define ICE_PROFID_IPV6_TUN_M_IPV6_UDP 26
-#define ICE_PROFID_IPV6_TUN_M_IPV6_OTHER 27
+#define ICE_PROFID_IPV6_TUN_M_IPV4_TCP 34
+#define ICE_PROFID_IPV6_TUN_M_IPV4_UDP 35
+#define ICE_PROFID_IPV6_TUN_M_IPV4_OTHER 36
+#define ICE_PROFID_IPV4_TUN_M_IPV6_TCP 40
+#define ICE_PROFID_IPV4_TUN_M_IPV6_UDP 41
+#define ICE_PROFID_IPV4_TUN_M_IPV6_OTHER 42
+#define ICE_PROFID_IPV6_TUN_M_IPV6_TCP 43
+#define ICE_PROFID_IPV6_TUN_M_IPV6_UDP 44
+#define ICE_PROFID_IPV6_TUN_M_IPV6_OTHER 45
#define ICE_PROFID_PPPOE_PAY 34
#define ICE_PROFID_PPPOE_IPV4_TCP 35
#define ICE_PROFID_PPPOE_IPV4_UDP 36
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 24/33] net/ice/base: update Protocol ID table to match DVM DDP
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (22 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 23/33] net/ice/base: update Profile ID table for VXLAN Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 25/33] net/ice: handle virtchnl event message without interrupt Kevin Liu
` (9 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Junfeng Guo, Kevin Liu
From: Junfeng Guo <junfeng.guo@intel.com>
The ice kernel driver and the DDP work in Double VLAN Mode (DVM),
but DVM is not supported by this PMD. Thus, update the SW-to-HW
protocol ID table for VLAN to support common switch filtering with a
single VLAN layer.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_switch.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 1b51cd4321..64302b1617 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -7098,7 +7098,7 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
{ ICE_MAC_OFOS, ICE_MAC_OFOS_HW },
{ ICE_MAC_IL, ICE_MAC_IL_HW },
{ ICE_ETYPE_OL, ICE_ETYPE_OL_HW },
- { ICE_VLAN_OFOS, ICE_VLAN_OL_HW },
+ { ICE_VLAN_OFOS, ICE_VLAN_OF_HW },
{ ICE_IPV4_OFOS, ICE_IPV4_OFOS_HW },
{ ICE_IPV4_IL, ICE_IPV4_IL_HW },
{ ICE_IPV6_OFOS, ICE_IPV6_OFOS_HW },
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 25/33] net/ice: handle virtchnl event message without interrupt
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (23 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 24/33] net/ice/base: update Protocol ID table to match DVM DDP Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 26/33] net/ice: add DCF request queues function Kevin Liu
` (8 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Currently, the VF can only handle virtchnl event messages in the
interrupt handler. That does not work in two cases:
1. If an event message arrives during VF initialization, before the
interrupt is enabled, the message will not be handled correctly.
2. Some virtchnl commands need to receive and handle the event message
while the interrupt is disabled.
To solve this issue, add virtchnl event message handling to the path
that reads virtchnl messages from the PF admin queue.
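For example, VIRTCHNL_OP_REQUEST_QUEUES (added later in this series) is
executed with the interrupt disabled, and the PF acknowledges it with a
VIRTCHNL_EVENT_RESET_IMPENDING event; without this change that event
would only be visible to the interrupt handler and would be lost.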
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 9c2f13cf72..1415f26ac3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -63,11 +63,32 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
goto again;
v_op = rte_le_to_cpu_32(event.desc.cookie_high);
- if (v_op != op)
- goto again;
+
+ if (v_op == VIRTCHNL_OP_EVENT) {
+ struct virtchnl_pf_event *vpe =
+ (struct virtchnl_pf_event *)event.msg_buf;
+ switch (vpe->event) {
+ case VIRTCHNL_EVENT_RESET_IMPENDING:
+ hw->resetting = true;
+ if (rsp_msglen)
+ *rsp_msglen = 0;
+ return IAVF_SUCCESS;
+ default:
+ goto again;
+ }
+ } else {
+ /* async reply msg on command issued by vf previously */
+ if (v_op != op) {
+ PMD_DRV_LOG(WARNING,
+ "command mismatch, expect %u, get %u",
+ op, v_op);
+ goto again;
+ }
+ }
if (rsp_msglen != NULL)
*rsp_msglen = event.msg_len;
+
return rte_le_to_cpu_32(event.desc.cookie_low);
again:
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 26/33] net/ice: add DCF request queues function
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (24 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 25/33] net/ice: handle virtchnl event message without interrupt Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 27/33] net/ice: negotiate large VF and request more queues Kevin Liu
` (7 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Add a new virtchnl function to request additional queues from the PF.
The current default number of queue pairs is 16. In order to support a
DCF port with up to 256 queue pairs, enable this request-queues function.
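A sketch of the expected calling convention (hypothetical caller; the
next patches wire this up via ice_dcf_queues_req_reset()):

  /* ask the PF for 64 queue pairs */
  if (ice_dcf_request_queues(hw, 64) == 0) {
          /* the PF granted the request by resetting the VF, so the
           * port must now be reset and re-initialized */
  }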
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 98 +++++++++++++++++++++++++++++++++------
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 86 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1415f26ac3..6aeafa6681 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -468,18 +468,38 @@ ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
goto ret;
}
- do {
- if (!cmd->pending)
- break;
-
- rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
- } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
-
- if (cmd->v_ret != IAVF_SUCCESS) {
- err = -1;
- PMD_DRV_LOG(ERR,
- "No response (%d times) or return failure (%d) for cmd %d",
- i, cmd->v_ret, cmd->v_op);
+ switch (cmd->v_op) {
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ err = ice_dcf_recv_cmd_rsp_no_irq(hw,
+ VIRTCHNL_OP_REQUEST_QUEUES,
+ cmd->rsp_msgbuf,
+ cmd->rsp_buflen,
+ NULL);
+ if (err != IAVF_SUCCESS || !hw->resetting) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "Failed to get response of "
+ "VIRTCHNL_OP_REQUEST_QUEUES %d",
+ err);
+ }
+ break;
+ default:
+ /* For other virtchnl ops in running time,
+ * wait for the cmd done flag.
+ */
+ do {
+ if (!cmd->pending)
+ break;
+ rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+ } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
+
+ if (cmd->v_ret != IAVF_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "No response (%d times) or "
+ "return failure (%d) for cmd %d",
+ i, cmd->v_ret, cmd->v_op);
+ }
}
ret:
@@ -1011,6 +1031,58 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
+{
+ struct virtchnl_vf_res_request vfres;
+ struct dcf_virtchnl_cmd args;
+ uint16_t num_queue_pairs;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) {
+ PMD_DRV_LOG(ERR, "request queues not supported");
+ return -1;
+ }
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR, "queue number cannot be zero");
+ return -1;
+ }
+ vfres.num_queue_pairs = num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_REQUEST_QUEUES;
+
+ args.req_msg = (u8 *)&vfres;
+ args.req_msglen = sizeof(vfres);
+
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ /*
+ * disable interrupt to avoid the admin queue message to be read
+ * before iavf_read_msg_from_pf.
+ */
+ rte_intr_disable(hw->eth_dev->intr_handle);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ rte_intr_enable(hw->eth_dev->intr_handle);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
+ return err;
+ }
+
+ /* request queues succeeded, vf is resetting */
+ if (hw->resetting) {
+ PMD_DRV_LOG(INFO, "vf is resetting");
+ return 0;
+ }
+
+ /* request additional queues failed, return available number */
+ num_queue_pairs = ((struct virtchnl_vf_res_request *)
+ args.rsp_msgbuf)->num_queue_pairs;
+ PMD_DRV_LOG(ERR,
+ "request queues failed, only %u queues available",
+ num_queue_pairs);
+
+ return -1;
+}
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 8cf17e7700..99498e2184 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -127,6 +127,7 @@ int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 27/33] net/ice: negotiate large VF and request more queues
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (25 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 26/33] net/ice: add DCF request queues function Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 28/33] net/ice: enable multiple queues configurations for large VF Kevin Liu
` (6 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Negotiate the large VF capability with the PF during VF initialization.
If large VF is supported and more than 16 queues are required, the VF
requests additional queues from the PF and marks that large VF is
enabled.
If the number of allocated queues is larger than 16, the max RSS queue
region can no longer be 16. Add a function to query the max RSS queue
region from the PF, and use it in RSS initialization and future filter
configuration.
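For example, if the PF reports a qregion_width of 6 in the
VIRTCHNL_OP_GET_MAX_RSS_QREGION reply, the driver derives
max_rss_qregion = 1 << 6 = 64, i.e. RSS may spread traffic over at
most 64 queues.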
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 34 +++++++++++++++-
drivers/net/ice/ice_dcf.h | 4 ++
drivers/net/ice/ice_dcf_ethdev.c | 69 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 2 +
4 files changed, 106 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 6aeafa6681..7091658841 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,8 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -1083,6 +1084,37 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return -1;
}
+int
+ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ uint16_t qregion_width;
+ int err;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_MAX_RSS_QREGION;
+ args.req_msg = NULL;
+ args.req_msglen = 0;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of "
+ "VIRTCHNL_OP_GET_MAX_RSS_QREGION");
+ return err;
+ }
+
+ qregion_width = ((struct virtchnl_max_rss_qregion *)
+ args.rsp_msgbuf)->qregion_width;
+ hw->max_rss_qregion = (uint16_t)(1 << qregion_width);
+
+ return 0;
+}
+
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 99498e2184..05ea91d2a5 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -105,6 +105,7 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
+ uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -114,6 +115,8 @@ struct ice_dcf_hw {
uint32_t link_speed;
bool resetting;
+ /* Indicate large VF support enabled or not */
+ bool lv_enabled;
};
int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -128,6 +131,7 @@ int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
+int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d4bfa182a4..a43c5a320d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -39,6 +39,8 @@ static int
ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num);
+
static int
ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
@@ -663,6 +665,11 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
{
struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
struct ice_adapter *ad = &dcf_ad->parent;
+ struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ int ret;
+
+ uint16_t num_queue_pairs =
+ RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
@@ -670,6 +677,47 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ /* Large VF setting */
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_DFLT) {
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS)) {
+ PMD_DRV_LOG(ERR, "large VF is not supported");
+ return -1;
+ }
+
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_LV) {
+ PMD_DRV_LOG(ERR,
+ "queue pairs number cannot be larger than %u",
+ ICE_DCF_MAX_NUM_QUEUES_LV);
+ return -1;
+ }
+
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ ret = ice_dcf_get_max_rss_queue_region(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "get max rss queue region failed");
+ return ret;
+ }
+
+ hw->lv_enabled = true;
+ } else {
+ /* Check if large VF is already enabled. If so, disable and
+ * release redundant queue resource.
+ */
+ if (hw->lv_enabled) {
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ hw->lv_enabled = false;
+ }
+ /* if large VF is not required, use default rss queue region */
+ hw->max_rss_qregion = ICE_DCF_MAX_NUM_QUEUES_DFLT;
+ }
+
return 0;
}
@@ -681,8 +729,8 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_hw *hw = &adapter->real_hw;
dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
- dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
- dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+ dev_info->max_rx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
+ dev_info->max_tx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
dev_info->hash_key_size = hw->vf_res->rss_key_size;
@@ -1829,6 +1877,23 @@ ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
return 0;
}
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int ret;
+
+ ret = ice_dcf_request_queues(hw, num);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "request queues from PF failed");
+ return ret;
+ }
+ PMD_DRV_LOG(INFO, "change queue pairs from %u to %u",
+ hw->vsi_res->num_queue_pairs, num);
+
+ return ice_dcf_dev_reset(dev);
+}
+
static int
ice_dcf_cap_check_handler(__rte_unused const char *key,
const char *value, __rte_unused void *opaque)
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 27f6402786..4a08d32e0c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,6 +20,8 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 28/33] net/ice: enable multiple queues configurations for large VF
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (26 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 27/33] net/ice: negotiate large VF and request more queues Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 29/33] net/ice: enable IRQ mapping configuration " Kevin Liu
` (5 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Since the adminq buffer size has a 4K limitation, a single
VIRTCHNL_OP_CONFIG_VSI_QUEUES message cannot configure up to 256 queues.
In this patch, the message is sent multiple times, keeping each buffer
below the 4K limit.
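With the ICE_DCF_CFG_Q_NUM_PER_BUF chunk size of 32 defined below, a
256-queue port is configured with 8 messages: the loop sends 7 full
chunks of 32 queue pairs, and the final ice_dcf_configure_queues() call
sends the remaining 32.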
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 11 ++++++-----
drivers/net/ice/ice_dcf.h | 3 ++-
drivers/net/ice/ice_dcf_ethdev.c | 20 ++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7091658841..7004c00f1c 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -949,7 +949,8 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
#define IAVF_RXDID_COMMS_OVS_1 22
int
-ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
@@ -962,16 +963,16 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
int err;
size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+ sizeof(vc_config->qpair[0]) * num_queue_pairs;
vc_config = rte_zmalloc("cfg_queue", size, 0);
if (!vc_config)
return -ENOMEM;
vc_config->vsi_id = hw->vsi_res->vsi_id;
- vc_config->num_queue_pairs = hw->num_queue_pairs;
+ vc_config->num_queue_pairs = num_queue_pairs;
- for (i = 0, vc_qp = vc_config->qpair;
- i < hw->num_queue_pairs;
+ for (i = index, vc_qp = vc_config->qpair;
+ i < index + num_queue_pairs;
i++, vc_qp++) {
vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
vc_qp->txq.queue_id = i;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 05ea91d2a5..e36428a92a 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -129,7 +129,8 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
-int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a43c5a320d..78df82d5b5 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -513,6 +513,8 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
struct ice_adapter *ad = &dcf_ad->parent;
struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ uint16_t num_queue_pairs;
+ uint16_t index = 0;
int ret;
if (hw->resetting) {
@@ -531,6 +533,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
+ num_queue_pairs = hw->num_queue_pairs;
ret = ice_dcf_init_rx_queues(dev);
if (ret) {
@@ -546,7 +549,20 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
}
}
- ret = ice_dcf_configure_queues(hw);
+ /* If needed, send configure queues msg multiple times to make the
+ * adminq buffer length smaller than the 4K limitation.
+ */
+ while (num_queue_pairs > ICE_DCF_CFG_Q_NUM_PER_BUF) {
+ if (ice_dcf_configure_queues(hw,
+ ICE_DCF_CFG_Q_NUM_PER_BUF, index) != 0) {
+ PMD_DRV_LOG(ERR, "configure queues failed");
+ goto err_queue;
+ }
+ num_queue_pairs -= ICE_DCF_CFG_Q_NUM_PER_BUF;
+ index += ICE_DCF_CFG_Q_NUM_PER_BUF;
+ }
+
+ ret = ice_dcf_configure_queues(hw, num_queue_pairs, index);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to config queues");
return ret;
@@ -586,7 +602,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
+err_queue:
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 4a08d32e0c..2fac1e5b21 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -22,6 +22,7 @@
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 29/33] net/ice: enable IRQ mapping configuration for large VF
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (27 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 28/33] net/ice: enable multiple queues configurations for large VF Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 30/33] net/ice: add enable/disable queues for DCF " Kevin Liu
` (4 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
The current IRQ mapping configuration only supports a maximum of 16
queues and 16 MSI-X vectors. Change the queue-vector mapping structure
to indicate up to 256 queues. A new opcode is used to handle the case
with a large number of queues. To stay within the adminq buffer size
limit, the virtchnl message is sent multiple times if needed.
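For clarity, the chunking pattern this introduces boils down to the
following minimal sketch (send_irq_map() is a hypothetical stand-in for
the ice_dcf_config_irq_map_lv() added below; the real per-buffer limit
is ICE_DCF_IRQ_MAP_NUM_PER_BUF = 128):

#define IRQ_MAP_NUM_PER_BUF 128 /* mirrors ICE_DCF_IRQ_MAP_NUM_PER_BUF */

static int
config_irq_map_chunked(struct ice_dcf_hw *hw, uint16_t total)
{
	uint16_t index = 0;

	/* Each virtchnl message carries at most IRQ_MAP_NUM_PER_BUF
	 * queue-vector maps so that it fits into one 4K adminq buffer.
	 */
	while (total > IRQ_MAP_NUM_PER_BUF) {
		if (send_irq_map(hw, IRQ_MAP_NUM_PER_BUF, index) != 0)
			return -1;
		total -= IRQ_MAP_NUM_PER_BUF;
		index += IRQ_MAP_NUM_PER_BUF;
	}
	return send_irq_map(hw, total, index); /* remaining tail */
}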
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 50 +++++++++++++++++++++++++++----
drivers/net/ice/ice_dcf.h | 10 ++++++-
drivers/net/ice/ice_dcf_ethdev.c | 51 +++++++++++++++++++++++++++-----
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 99 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7004c00f1c..290f754049 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1115,7 +1115,6 @@ ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
return 0;
}
-
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
@@ -1132,13 +1131,14 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return -ENOMEM;
map_info->num_vectors = hw->nb_msix;
- for (i = 0; i < hw->nb_msix; i++) {
- vecmap = &map_info->vecmap[i];
+ for (i = 0; i < hw->eth_dev->data->nb_rx_queues; i++) {
+ vecmap =
+ &map_info->vecmap[hw->qv_map[i].vector_id - hw->msix_base];
vecmap->vsi_id = hw->vsi_res->vsi_id;
vecmap->rxitr_idx = 0;
- vecmap->vector_id = hw->msix_base + i;
+ vecmap->vector_id = hw->qv_map[i].vector_id;
vecmap->txq_map = 0;
- vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+ vecmap->rxq_map |= 1 << hw->qv_map[i].queue_id;
}
memset(&args, 0, sizeof(args));
@@ -1154,6 +1154,46 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index)
+{
+ struct virtchnl_queue_vector_maps *map_info;
+ struct virtchnl_queue_vector *qv_maps;
+ struct dcf_virtchnl_cmd args;
+ int len, i, err;
+ int count = 0;
+
+ len = sizeof(struct virtchnl_queue_vector_maps) +
+ sizeof(struct virtchnl_queue_vector) * (num - 1);
+
+ map_info = rte_zmalloc("map_info", len, 0);
+ if (!map_info)
+ return -ENOMEM;
+
+ map_info->vport_id = hw->vsi_res->vsi_id;
+ map_info->num_qv_maps = num;
+ for (i = index; i < index + map_info->num_qv_maps; i++) {
+ qv_maps = &map_info->qv_maps[count++];
+ qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
+ qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
+ qv_maps->queue_id = hw->qv_map[i].queue_id;
+ qv_maps->vector_id = hw->qv_map[i].vector_id;
+ }
+
+ args.v_op = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
+ args.req_msg = (u8 *)map_info;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
+
+ rte_free(map_info);
+ return err;
+}
+
int
ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index e36428a92a..ce57a687ab 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -74,6 +74,11 @@ struct ice_dcf_tm_conf {
bool committed;
};
+struct ice_dcf_qv_map {
+ uint16_t queue_id;
+ uint16_t vector_id;
+};
+
struct ice_dcf_hw {
struct iavf_hw avf;
@@ -106,7 +111,8 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
- uint16_t rxq_map[16];
+
+ struct ice_dcf_qv_map *qv_map; /* queue vector mapping */
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -134,6 +140,8 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 78df82d5b5..1ddba02ebb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -143,6 +143,7 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
{
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct ice_dcf_qv_map *qv_map;
uint16_t interval, i;
int vec;
@@ -161,6 +162,14 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
}
+ qv_map = rte_zmalloc("qv_map",
+ dev->data->nb_rx_queues * sizeof(struct ice_dcf_qv_map), 0);
+ if (!qv_map) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+ dev->data->nb_rx_queues);
+ return -1;
+ }
+
if (!dev->data->dev_conf.intr_conf.rxq ||
!rte_intr_dp_is_en(intr_handle)) {
/* Rx interrupt disabled, Map interrupt only for writeback */
@@ -196,17 +205,22 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
IAVF_WRITE_FLUSH(&hw->avf);
/* map all queues to the same interrupt */
- for (i = 0; i < dev->data->nb_rx_queues; i++)
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
+ }
+ hw->qv_map = qv_map;
} else {
if (!rte_intr_allow_others(intr_handle)) {
hw->nb_msix = 1;
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
rte_intr_vec_list_index_set(intr_handle,
i, IAVF_MISC_VEC_ID);
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
hw->msix_base);
@@ -219,21 +233,44 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[vec] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = vec;
rte_intr_vec_list_index_set(intr_handle,
i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"%u vectors are mapping to %u Rx queues",
hw->nb_msix, dev->data->nb_rx_queues);
}
}
- if (ice_dcf_config_irq_map(hw)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping failed");
- return -1;
+ if (!hw->lv_enabled) {
+ if (ice_dcf_config_irq_map(hw)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+ return -1;
+ }
+ } else {
+ uint16_t num_qv_maps = dev->data->nb_rx_queues;
+ uint16_t index = 0;
+
+ while (num_qv_maps > ICE_DCF_IRQ_MAP_NUM_PER_BUF) {
+ if (ice_dcf_config_irq_map_lv(hw,
+ ICE_DCF_IRQ_MAP_NUM_PER_BUF, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+ num_qv_maps -= ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ index += ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ }
+
+ if (ice_dcf_config_irq_map_lv(hw, num_qv_maps, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+
}
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 2fac1e5b21..9ef524c97c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -23,6 +23,7 @@
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 30/33] net/ice: add enable/disable queues for DCF large VF
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (28 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 29/33] net/ice: enable IRQ mapping configuration " Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 31/33] net/ice: fix DCF ACL flow engine Kevin Liu
` (3 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The current virtchnl structure for enabling/disabling queues only
supports a maximum of 32 queue pairs. Use a new opcode and structure to
indicate up to 256 queue pairs, in order to enable/disable queues in the
large VF case.
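As an illustration, describing one Rx queue with the new structures
reduces to filling a single chunk (condensed from the
ice_dcf_switch_queue_lv() hunk below; vsi_id and qid are assumed to be
in scope, and the virtchnl send itself is omitted):

struct virtchnl_del_ena_dis_queues qs = { 0 };
struct virtchnl_queue_chunk *chunk = qs.chunks.chunks;

qs.vport_id = vsi_id;                 /* VSI owning the queue */
qs.chunks.num_chunks = 1;
chunk->type = VIRTCHNL_QUEUE_TYPE_RX; /* or _TX for a Tx queue */
chunk->start_queue_id = qid;          /* 16-bit id: covers up to 256 queues */
chunk->num_queues = 1;
/* sent with VIRTCHNL_OP_ENABLE_QUEUES_V2 or VIRTCHNL_OP_DISABLE_QUEUES_V2 */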
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 99 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf.h | 5 ++
drivers/net/ice/ice_dcf_ethdev.c | 26 +++++++--
drivers/net/ice/ice_dcf_ethdev.h | 8 +--
4 files changed, 125 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 290f754049..23edfd09b1 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -90,7 +90,6 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
*rsp_msglen = event.msg_len;
return rte_le_to_cpu_32(event.desc.cookie_low);
-
again:
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
@@ -896,7 +895,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
{
struct rte_eth_dev *dev = hw->eth_dev;
struct rte_eth_rss_conf *rss_conf;
- uint8_t i, j, nb_q;
+ uint16_t i, j, nb_q;
int ret;
rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
@@ -1075,6 +1074,12 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return err;
}
+ /* request queues succeeded, vf is resetting */
+ if (hw->resetting) {
+ PMD_DRV_LOG(INFO, "vf is resetting");
+ return 0;
+ }
+
/* request additional queues failed, return available number */
num_queue_pairs = ((struct virtchnl_vf_res_request *)
args.rsp_msgbuf)->num_queue_pairs;
@@ -1185,7 +1190,8 @@ ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
args.req_msg = (u8 *)map_info;
args.req_msglen = len;
args.rsp_msgbuf = hw->arq_buf;
- args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
@@ -1225,6 +1231,50 @@ ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
return err;
}
+int
+ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ if (rx) {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ } else {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ }
+
+ if (on)
+ args.v_op = VIRTCHNL_OP_ENABLE_QUEUES_V2;
+ else
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+ on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_disable_queues(struct ice_dcf_hw *hw)
{
@@ -1254,6 +1304,49 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues) +
+ sizeof(struct virtchnl_queue_chunk) *
+ (ICE_DCF_RXTX_QUEUE_CHUNKS_NUM - 1);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = ICE_DCF_RXTX_QUEUE_CHUNKS_NUM;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].num_queues =
+ hw->eth_dev->data->nb_tx_queues;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].num_queues =
+ hw->eth_dev->data->nb_rx_queues;
+
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats)
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index ce57a687ab..78ab23aaa6 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,8 @@
#include "base/ice_type.h"
#include "ice_logs.h"
+#define ICE_DCF_RXTX_QUEUE_CHUNKS_NUM 2
+
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -143,7 +145,10 @@ int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw,
+ uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ddba02ebb..e46c8405aa 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -317,6 +317,7 @@ static int
ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_rx_queue *rxq;
int err = 0;
@@ -339,7 +340,11 @@ ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, rx_queue_id, true, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, rx_queue_id, true, true);
+
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
rx_queue_id);
@@ -448,6 +453,7 @@ static int
ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_tx_queue *txq;
int err = 0;
@@ -463,7 +469,10 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, tx_queue_id, false, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, tx_queue_id, false, true);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
@@ -650,12 +659,17 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
struct ice_tx_queue *txq;
- int ret, i;
+ int i;
/* Stop All queues */
- ret = ice_dcf_disable_queues(hw);
- if (ret)
- PMD_DRV_LOG(WARNING, "Fail to stop queues");
+ if (!hw->lv_enabled) {
+ if (ice_dcf_disable_queues(hw))
+ PMD_DRV_LOG(WARNING, "Fail to stop queues");
+ } else {
+ if (ice_dcf_disable_queues_lv(hw))
+ PMD_DRV_LOG(WARNING,
+ "Fail to stop queues for large VF");
+ }
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 9ef524c97c..3f740e2c7b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,10 +20,10 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
-#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
-#define ICE_DCF_MAX_NUM_QUEUES_LV 256
-#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
-#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 31/33] net/ice: fix DCF ACL flow engine
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (29 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 30/33] net/ice: add enable/disable queues for DCF " Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 32/33] testpmd: force flow flush Kevin Liu
` (2 subsequent siblings)
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
ACL is not a necessary feature for DCF and may not be supported by
the ice kernel driver, so in this patch the driver does not report an
ACL initialization failure to higher-level functions; instead, it
prints error logs, cleans up the related resources and unregisters
the ACL engine.
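The resulting initialization policy can be summarized as (condensed
from the ice_flow_init() hunk below):

ret = engine->init(ad);
if (ret) {
	if (engine->type == ICE_FLOW_ENGINE_ACL) {
		/* ACL may be unsupported by the kernel driver:
		 * drop this engine instead of failing flow init.
		 */
		ice_unregister_flow_engine(engine);
	} else {
		return ret; /* failures of other engines stay fatal */
	}
}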
Fixes: 40d466fa9f76 ("net/ice: support ACL filter in DCF")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_acl_filter.c | 20 ++++++++++++++----
drivers/net/ice/ice_generic_flow.c | 34 +++++++++++++++++++++++-------
2 files changed, 42 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0..20a1f86c43 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -56,6 +56,8 @@ ice_pattern_match_item ice_acl_pattern[] = {
{pattern_eth_ipv4_sctp, ICE_ACL_INSET_ETH_IPV4_SCTP, ICE_INSET_NONE, ICE_INSET_NONE},
};
+static void ice_acl_prof_free(struct ice_hw *hw);
+
static int
ice_acl_prof_alloc(struct ice_hw *hw)
{
@@ -1007,17 +1009,27 @@ ice_acl_init(struct ice_adapter *ad)
ret = ice_acl_setup(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_bitmap_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_prof_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
- return ice_register_parser(parser, ad);
+ ret = ice_register_parser(parser, ad);
+ if (ret)
+ goto deinit_acl;
+
+ return 0;
+
+deinit_acl:
+ ice_deinit_acl(pf);
+ ice_acl_prof_free(hw);
+ PMD_DRV_LOG(ERR, "ACL init failed, may not supported!");
+ return ret;
}
static void
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 6663a85ed0..e9e4d776b2 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1864,6 +1864,12 @@ ice_register_flow_engine(struct ice_flow_engine *engine)
TAILQ_INSERT_TAIL(&engine_list, engine, node);
}
+static void
+ice_unregister_flow_engine(struct ice_flow_engine *engine)
+{
+ TAILQ_REMOVE(&engine_list, engine, node);
+}
+
int
ice_flow_init(struct ice_adapter *ad)
{
@@ -1887,9 +1893,18 @@ ice_flow_init(struct ice_adapter *ad)
ret = engine->init(ad);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to initialize engine %d",
- engine->type);
- return ret;
+ /**
+ * ACL may not supported in kernel driver,
+ * so just unregister the engine.
+ */
+ if (engine->type == ICE_FLOW_ENGINE_ACL) {
+ ice_unregister_flow_engine(engine);
+ } else {
+ PMD_INIT_LOG(ERR,
+ "Failed to initialize engine %d",
+ engine->type);
+ return ret;
+ }
}
}
return 0;
@@ -1976,7 +1991,7 @@ ice_register_parser(struct ice_flow_parser *parser,
list = ice_get_parser_list(parser, ad);
if (list == NULL)
- return -EINVAL;
+ goto err;
if (ad->devargs.pipe_mode_support) {
TAILQ_INSERT_TAIL(list, parser_node, node);
@@ -1988,7 +2003,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -1999,7 +2014,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_SWITCH) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -2008,11 +2023,14 @@ ice_register_parser(struct ice_flow_parser *parser,
} else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_HEAD(list, parser_node, node);
} else {
- return -EINVAL;
+ goto err;
}
}
-DONE:
return 0;
+err:
+ rte_free(parser_node);
+ PMD_DRV_LOG(ERR, "%s failed.", __func__);
+ return -EINVAL;
}
void
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 32/33] testpmd: force flow flush
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (30 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 31/33] net/ice: fix DCF ACL flow engine Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 16:09 ` [PATCH v2 33/33] net/ice: fix DCF reset Kevin Liu
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Qi Zhang <qi.z.zhang@intel.com>
For MDCF, rte_flow_flush still needs to be invoked even if no flows
have been created in the current instance.
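In rte_flow API terms, the flush path becomes unconditional (port_id
is the port under test):

struct rte_flow_error error;

/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x44, sizeof(error));
if (rte_flow_flush(port_id, &error) != 0)
	port_flow_complain(&error); /* runs even when flow_list is empty */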
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
app/test-pmd/config.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cc8e7aa138..3d40e3e43d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2923,15 +2923,15 @@ port_flow_flush(portid_t port_id)
port = &ports[port_id];
- if (port->flow_list == NULL)
- return ret;
-
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x44, sizeof(error));
if (rte_flow_flush(port_id, &error)) {
port_flow_complain(&error);
}
+ if (port->flow_list == NULL)
+ return ret;
+
while (port->flow_list) {
struct port_flow *pf = port->flow_list->next;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v2 33/33] net/ice: fix DCF reset
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (31 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 32/33] testpmd: force flow flush Kevin Liu
@ 2022-04-13 16:09 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
33 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 16:09 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
After the PF triggers a VF reset, the VF PMD must reinitialize all
resources before it can perform any operations on the hardware.
This patch adds a flag to indicate whether the VF has been reset by
the PF, and updates the DCF resetting operations according to this flag.
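A minimal sketch of how an application can consume the reset event the
PMD now raises (the flag/main-loop split is an assumption about the
application's structure, not part of this patch):

#include <stdbool.h>
#include <rte_ethdev.h>

static volatile bool port_need_reset; /* polled by the app's main loop */

static int
reset_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	       void *cb_arg, void *ret_param)
{
	RTE_SET_USED(port_id);
	RTE_SET_USED(type);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);
	port_need_reset = true; /* defer rte_eth_dev_reset() to the main loop */
	return 0;
}

/* during setup: */
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
			      reset_event_cb, NULL);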
Fixes: 1a86f4dbdf42 ("net/ice: support DCF device reset")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_common.c | 4 +++-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 17 ++++++++++++++++-
drivers/net/ice/ice_dcf_parent.c | 3 +++
4 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index db87bacd97..13feb55469 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -755,6 +755,7 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
status = ice_init_def_sw_recp(hw, &hw->switch_info->recp_list);
if (status) {
ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
return status;
}
return ICE_SUCCESS;
@@ -823,7 +824,6 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
}
ice_rm_sw_replay_rule_info(hw, sw);
ice_free(hw, sw->recp_list);
- ice_free(hw, sw);
}
/**
@@ -833,6 +833,8 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
{
ice_cleanup_fltr_mgmt_single(hw, hw->switch_info);
+ ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
}
/**
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 23edfd09b1..35773e2acd 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1429,7 +1429,7 @@ ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
int ret;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
ice_dcf_disable_irq0(hw);
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e46c8405aa..0315e694d7 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1004,6 +1004,15 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
uint32_t i;
int len, err = 0;
+ if (hw->resetting) {
+ if (!add)
+ return 0;
+
+ PMD_DRV_LOG(ERR,
+ "fail to add multicast MACs for VF resetting");
+ return -EIO;
+ }
+
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
@@ -1642,7 +1651,13 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
- (void)ice_dcf_dev_stop(dev);
+ if (adapter->parent.pf.adapter_stopped)
+ (void)ice_dcf_dev_stop(dev);
+
+ if (adapter->real_hw.resetting) {
+ ice_dcf_uninit_hw(dev, &adapter->real_hw);
+ ice_dcf_init_hw(dev, &adapter->real_hw);
+ }
ice_free_queues(dev);
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 2f96dedcce..7f7ed796e2 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -240,6 +240,9 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
case VIRTCHNL_EVENT_RESET_IMPENDING:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
dcf_hw->resetting = true;
+ rte_eth_dev_callback_process(dcf_hw->eth_dev,
+ RTE_ETH_EVENT_INTR_RESET,
+ NULL);
break;
case VIRTCHNL_EVENT_LINK_CHANGE:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 00/22] support full function of DCF
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
` (32 preceding siblings ...)
2022-04-13 16:09 ` [PATCH v2 33/33] net/ice: fix DCF reset Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 01/22] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
` (23 more replies)
33 siblings, 24 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
v3:
* remove patch:
1.net/ice/base: add VXLAN support for switch filter
2.net/ice: add VXLAN support for switch filter
3.common/iavf: support flushing rules and reporting DCF id
4.net/ice/base: fix ethertype filter input set
5.net/ice/base: support IPv6 GRE UDP pattern
6.net/ice/base: support new patterns of TCP and UDP
7.net/ice: support new patterns of TCP and UDP
8.net/ice/base: support IPv4 GRE tunnel
9.net/ice: support IPv4 GRE raw pattern type
10.net/ice/base: update Profile ID table for VXLAN
11.net/ice/base: update Protocol ID table to match DVM DDP
v2:
* remove patch:
1.net/iavf: support checking if device is an MDCF instance
2.net/ice: support MDCF(multi-DCF) instance
3.net/ice/base: support custom DDP buildin recipe
4.net/ice: support buildin recipe configuration
5.net/ice/base: support custom ddp package version
6.net/ice: disable ACL function for MDCF instance
Alvin Zhang (7):
net/ice: support dcf promisc configuration
net/ice: support dcf VLAN filter and offload configuration
net/ice: support DCF new VLAN capabilities
net/ice: support IPv6 NVGRE tunnel
net/ice: support new pattern of IPv4
net/ice: treat unknown package as OS default package
net/ice: fix DCF ACL flow engine
Dapeng Yu (1):
net/ice: enable CVL DCF device reset API
Jie Wang (2):
net/ice: add ops MTU-SET to dcf
net/ice: add ops dev-supported-ptypes-get to dcf
Kevin Liu (3):
net/ice: support dcf MAC configuration
net/ice: add enable/disable queues for DCF large VF
net/ice: fix DCF reset
Qi Zhang (1):
testpmd: force flow flush
Robin Zhang (1):
net/ice: cleanup Tx buffers
Steve Yang (7):
net/ice: enable RSS RETA ops for DCF hardware
net/ice: enable RSS HASH ops for DCF hardware
net/ice: handle virtchnl event message without interrupt
net/ice: add DCF request queues function
net/ice: negotiate large VF and request more queues
net/ice: enable multiple queues configurations for large VF
net/ice: enable IRQ mapping configuration for large VF
app/test-pmd/config.c | 6 +-
drivers/net/ice/base/ice_common.c | 4 +-
drivers/net/ice/ice_acl_filter.c | 20 +-
drivers/net/ice/ice_dcf.c | 375 ++++++++++-
drivers/net/ice/ice_dcf.h | 31 +-
drivers/net/ice/ice_dcf_ethdev.c | 925 ++++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 14 +
drivers/net/ice/ice_dcf_parent.c | 3 +
drivers/net/ice/ice_ethdev.c | 13 +-
drivers/net/ice/ice_generic_flow.c | 34 +-
drivers/net/ice/ice_switch_filter.c | 8 +
11 files changed, 1328 insertions(+), 105 deletions(-)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 01/22] net/ice: enable RSS RETA ops for DCF hardware
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 02/22] net/ice: enable RSS HASH " Kevin Liu
` (22 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
The RSS RETA should be updatable and queryable by the application.
Add the related ops ('.reta_update', '.reta_query') for DCF.
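A minimal application-side sketch of exercising these ops (reta_size is
assumed to be 64 and must match the dev_info.reta_size reported by the
port; nb_rx_queues is the configured Rx queue count):

struct rte_eth_rss_reta_entry64 reta_conf[1]; /* one 64-entry group */
uint16_t reta_size = 64, i;
int ret;

memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++) {
	reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
	reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_rx_queues; /* round-robin over Rx queues */
}
ret = rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
if (ret == 0)
	ret = rte_eth_dev_rss_reta_query(port_id, reta_conf, reta_size);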
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++++
3 files changed, 79 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f0c074b01..070d1b71ac 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
return err;
}
-static int
+int
ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_lut *rss_lut;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 6ec766ebda..b2c6aa2684 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59610e058f..1ac66ed990 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint8_t *lut;
+ uint16_t i, idx, shift;
+ int ret;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ /* store the old lut table temporarily */
+ rte_memcpy(lut, hw->rss_lut, reta_size);
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ /* send virtchnnl ops to configure rss*/
+ ret = ice_dcf_configure_rss_lut(hw);
+ if (ret) /* revert back */
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint16_t i, idx, shift;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = hw->rss_lut[i];
+ }
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
.tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 02/22] net/ice: enable RSS HASH ops for DCF hardware
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
2022-04-13 17:10 ` [PATCH v3 01/22] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 03/22] net/ice: cleanup Tx buffers Kevin Liu
` (21 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
The RSS hash should be updatable and queryable by the application.
Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF.
Because DCF doesn't support configuring the RSS hash functions, only the
hash key can be updated within the '.rss_hash_update' op.
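A minimal application-side sketch (the 52-byte key length is an
assumption; it must match the hash_key_size the port reports):

uint8_t key[52] = { /* application-chosen key bytes */ };
struct rte_eth_rss_conf rss_conf = {
	.rss_key = key,
	.rss_key_len = sizeof(key),
	.rss_hf = RTE_ETH_RSS_IP, /* ignored by DCF: hash functions are fixed */
};
int ret;

ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);  /* key only */
if (ret == 0)
	ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);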
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 070d1b71ac..89c0203ba3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
hw->ets_config = NULL;
}
-static int
+int
ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_key *rss_key;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index b2c6aa2684..f0b45af5ae 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ac66ed990..ccad7fc304 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* HENA setting, it is enabled by default, no change */
+ if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) {
+ PMD_DRV_LOG(ERR, "The size of hash key configured "
+ "(%d) doesn't match the size of hardware can "
+ "support (%d)", rss_conf->rss_key_len,
+ hw->vf_res->rss_key_size);
+ return -EINVAL;
+ }
+
+ rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ return ice_dcf_configure_rss_key(hw);
+}
+
+static int
+ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* Just set it to default value now. */
+ rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL;
+
+ if (!rss_conf->rss_key)
+ return 0;
+
+ rss_conf->rss_key_len = hw->vf_res->rss_key_size;
+ rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len);
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tm_ops_get = ice_dcf_tm_ops_get,
.reta_update = ice_dcf_dev_rss_reta_update,
.reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 03/22] net/ice: cleanup Tx buffers
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
2022-04-13 17:10 ` [PATCH v3 01/22] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-13 17:10 ` [PATCH v3 02/22] net/ice: enable RSS HASH " Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 04/22] net/ice: add ops MTU-SET to dcf Kevin Liu
` (20 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Robin Zhang, Kevin Liu
From: Robin Zhang <robinx.zhang@intel.com>
Add support for the rte_eth_tx_done_cleanup op in DCF.
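A minimal usage sketch from the application side:

/* Ask the PMD to free up to 64 already-transmitted mbufs on Tx queue 0;
 * returns the number freed, or a negative errno on failure.
 */
int freed = rte_eth_tx_done_cleanup(port_id, 0, 64);

if (freed < 0)
	printf("tx_done_cleanup failed: %d\n", freed);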
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ccad7fc304..d8b5961514 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.reta_query = ice_dcf_dev_rss_reta_query,
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 04/22] net/ice: add ops MTU-SET to dcf
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (2 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 03/22] net/ice: cleanup Tx buffers Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 05/22] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
` (19 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
Add the "mtu_set" op to DCF so that the port MTU can be configured
from the command line.
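A minimal usage sketch; note that the new op requires the port to be
stopped before the MTU is changed:

(void)rte_eth_dev_stop(port_id);
ret = rte_eth_dev_set_mtu(port_id, 1500); /* -EBUSY if port is started */
(void)rte_eth_dev_start(port_id);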
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++
2 files changed, 20 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d8b5961514..06d752fd61 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &new_link);
}
+static int
+ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* mtu setting is forbidden if port is start */
+ if (dev->data->dev_started != 0) {
+ PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
+ dev->data->port_id);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
bool
ice_dcf_adminq_need_retry(struct ice_adapter *ad)
{
@@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
.tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 11a1305038..f2faf26f58 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -15,6 +15,12 @@
#define ICE_DCF_MAX_RINGS 1
+#define ICE_DCF_FRAME_SIZE_MAX 9728
+#define ICE_DCF_VLAN_TAG_SIZE 4
+#define ICE_DCF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
+#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+
struct ice_dcf_queue {
uint64_t dummy;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 05/22] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (3 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 04/22] net/ice: add ops MTU-SET to dcf Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 06/22] net/ice: support dcf promisc configuration Kevin Liu
` (18 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
Add the "dev_supported_ptypes_get" op to DCF so that the DCF PMD can
report the supported packet types through the new API.
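A minimal usage sketch querying, for example, the recognized L4 types:

uint32_t ptypes[16];
int i, num;

num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
					ptypes, RTE_DIM(ptypes));
for (i = 0; i < num; i++)
	printf("supported ptype: 0x%08x\n", (unsigned int)ptypes[i]);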
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 80 +++++++++++++++++++-------------
1 file changed, 49 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 06d752fd61..6a577a6582 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+static const uint32_t *
+ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+ return ptypes;
+}
+
static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
- .dev_start = ice_dcf_dev_start,
- .dev_stop = ice_dcf_dev_stop,
- .dev_close = ice_dcf_dev_close,
- .dev_reset = ice_dcf_dev_reset,
- .dev_configure = ice_dcf_dev_configure,
- .dev_infos_get = ice_dcf_dev_info_get,
- .rx_queue_setup = ice_rx_queue_setup,
- .tx_queue_setup = ice_tx_queue_setup,
- .rx_queue_release = ice_dev_rx_queue_release,
- .tx_queue_release = ice_dev_tx_queue_release,
- .rx_queue_start = ice_dcf_rx_queue_start,
- .tx_queue_start = ice_dcf_tx_queue_start,
- .rx_queue_stop = ice_dcf_rx_queue_stop,
- .tx_queue_stop = ice_dcf_tx_queue_stop,
- .link_update = ice_dcf_link_update,
- .stats_get = ice_dcf_stats_get,
- .stats_reset = ice_dcf_stats_reset,
- .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
- .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
- .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
- .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
- .flow_ops_get = ice_dcf_dev_flow_ops_get,
- .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
- .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
- .tm_ops_get = ice_dcf_tm_ops_get,
- .reta_update = ice_dcf_dev_rss_reta_update,
- .reta_query = ice_dcf_dev_rss_reta_query,
- .rss_hash_update = ice_dcf_dev_rss_hash_update,
- .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
- .tx_done_cleanup = ice_tx_done_cleanup,
- .mtu_set = ice_dcf_dev_mtu_set,
+ .dev_start = ice_dcf_dev_start,
+ .dev_stop = ice_dcf_dev_stop,
+ .dev_close = ice_dcf_dev_close,
+ .dev_reset = ice_dcf_dev_reset,
+ .dev_configure = ice_dcf_dev_configure,
+ .dev_infos_get = ice_dcf_dev_info_get,
+ .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .rx_queue_release = ice_dev_rx_queue_release,
+ .tx_queue_release = ice_dev_tx_queue_release,
+ .rx_queue_start = ice_dcf_rx_queue_start,
+ .tx_queue_start = ice_dcf_tx_queue_start,
+ .rx_queue_stop = ice_dcf_rx_queue_stop,
+ .tx_queue_stop = ice_dcf_tx_queue_stop,
+ .link_update = ice_dcf_link_update,
+ .stats_get = ice_dcf_stats_get,
+ .stats_reset = ice_dcf_stats_reset,
+ .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
+ .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
+ .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
+ .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .flow_ops_get = ice_dcf_dev_flow_ops_get,
+ .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
+ .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+ .tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 06/22] net/ice: support dcf promisc configuration
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (4 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 05/22] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 07/22] net/ice: support dcf MAC configuration Kevin Liu
` (17 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support configuration of unicast and multicast promiscuous mode on DCF.
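A minimal usage sketch of the modes this patch wires up:

ret = rte_eth_promiscuous_enable(port_id);  /* unicast + multicast promisc */
ret = rte_eth_allmulticast_enable(port_id); /* multicast promisc only */
ret = rte_eth_promiscuous_disable(port_id);
ret = rte_eth_allmulticast_disable(port_id);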
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 3 ++
2 files changed, 76 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6a577a6582..87d281ee93 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
}
static int
-ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+dcf_config_promisc(struct ice_dcf_adapter *adapter,
+ bool enable_unicast,
+ bool enable_multicast)
{
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_promisc_info promisc;
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ promisc.flags = 0;
+ promisc.vsi_id = hw->vsi_res->vsi_id;
+
+ if (enable_unicast)
+ promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+ if (enable_multicast)
+ promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+ args.req_msg = (uint8_t *)&promisc;
+ args.req_msglen = sizeof(promisc);
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE");
+ return err;
+ }
+
+ adapter->promisc_unicast_enabled = enable_unicast;
+ adapter->promisc_multicast_enabled = enable_multicast;
return 0;
}
+static int
+ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, true,
+ adapter->promisc_multicast_enabled);
+}
+
static int
ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, false,
+ adapter->promisc_multicast_enabled);
}
static int
ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ true);
}
static int
ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ false);
}
static int
@@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
+ dcf_config_promisc(adapter, false, false);
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index f2faf26f58..22e450527b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -33,6 +33,9 @@ struct ice_dcf_adapter {
struct ice_adapter parent; /* Must be first */
struct ice_dcf_hw real_hw;
+ bool promisc_unicast_enabled;
+ bool promisc_multicast_enabled;
+
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 07/22] net/ice: support dcf MAC configuration
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (5 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 06/22] net/ice: support dcf promisc configuration Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 08/22] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
` (16 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
The following PMD ops are supported in this patch (a usage sketch
follows the list):
.mac_addr_add = dcf_dev_add_mac_addr
.mac_addr_remove = dcf_dev_del_mac_addr
.set_mc_addr_list = dcf_set_mc_addr_list
.mac_addr_set = dcf_dev_set_default_mac_addr
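A minimal application-side sketch of the four ops (the addresses are
arbitrary examples):

struct rte_ether_addr extra = {
	.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
};
struct rte_ether_addr mc[1] = {
	{ .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } }
};
int ret;

ret = rte_eth_dev_mac_addr_add(port_id, &extra, 0);      /* .mac_addr_add */
ret = rte_eth_dev_set_mc_addr_list(port_id, mc, 1);      /* .set_mc_addr_list */
ret = rte_eth_dev_default_mac_addr_set(port_id, &extra); /* .mac_addr_set */
ret = rte_eth_dev_mac_addr_remove(port_id, &extra);      /* .mac_addr_remove */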
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 9 +-
drivers/net/ice/ice_dcf.h | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 5 +-
4 files changed, 226 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 89c0203ba3..55ae68c456 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
}
int
-ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr,
+ bool add, uint8_t type)
{
struct virtchnl_ether_addr_list *list;
- struct rte_ether_addr *addr;
struct dcf_virtchnl_cmd args;
int len, err = 0;
@@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
}
len = sizeof(struct virtchnl_ether_addr_list);
- addr = hw->eth_dev->data->mac_addrs;
len += sizeof(struct virtchnl_ether_addr);
list = rte_zmalloc(NULL, len, 0);
@@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
rte_memcpy(list->list[0].addr, addr->addr_bytes,
sizeof(addr->addr_bytes));
+
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
-
+ list->list[0].type = type;
list->vsi_id = hw->vsi_res->vsi_id;
list->num_elements = 1;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index f0b45af5ae..78df202a77 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
-int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr, bool add,
+ uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 87d281ee93..0d944f9fd2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -26,6 +26,12 @@
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#define DCF_NUM_MACADDR_MAX 64
+
+static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add);
+
static int
ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- ret = ice_dcf_add_del_all_mac_addr(hw, true);
+ ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs,
+ true, VIRTCHNL_ETHER_ADDR_PRIMARY);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to add mac addr");
return ret;
}
+ if (dcf_ad->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, true);
+ if (ret)
+ return ret;
+ }
+
+
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
@@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
rte_intr_efd_disable(intr_handle);
rte_intr_vec_list_free(intr_handle);
- ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
+ ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw,
+ dcf_ad->real_hw.eth_dev->data->mac_addrs,
+ false, VIRTCHNL_ETHER_ADDR_PRIMARY);
+
+ if (dcf_ad->mc_addrs_num)
+ /* flush previous addresses */
+ (void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw,
+ dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, false);
+
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- dev_info->max_mac_addrs = 1;
+ dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
@@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
false);
}
+static int
+dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ int err;
+
+ if (rte_is_zero_ether_addr(addr)) {
+ PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+ return -EINVAL;
+ }
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to add MAC address");
+ return err;
+ }
+
+ return 0;
+}
+
+static void
+dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct rte_ether_addr *addr = &dev->data->mac_addrs[index];
+ int err;
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to remove MAC address");
+}
+
+static int
+dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add)
+{
+ struct virtchnl_ether_addr_list *list;
+ struct dcf_virtchnl_cmd args;
+ uint32_t i;
+ int len, err = 0;
+
+ len = sizeof(struct virtchnl_ether_addr_list);
+ len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
+
+ list = rte_zmalloc(NULL, len, 0);
+ if (!list) {
+ PMD_DRV_LOG(ERR, "fail to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
+ sizeof(list->list[i].addr));
+ list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ list->vsi_id = hw->vsi_res->vsi_id;
+ list->num_elements = mc_addrs_num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+ VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.req_msg = (uint8_t *)list;
+ args.req_msglen = len;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" :
+ "OP_DEL_ETHER_ADDRESS");
+ rte_free(list);
+ return err;
+}
+
+static int
+dcf_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i;
+ int ret;
+
+
+ if (mc_addrs_num > DCF_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR,
+ "can't add more than a limited number (%u) of addresses.",
+ (uint32_t)DCF_NUM_MACADDR_MAX);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addrs[i])) {
+ const uint8_t *mac = mc_addrs[i].addr_bytes;
+
+ PMD_DRV_LOG(ERR,
+ "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x",
+ mac[0], mac[1], mac[2], mac[3], mac[4],
+ mac[5]);
+ return -EINVAL;
+ }
+ }
+
+ if (adapter->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num, false);
+ if (ret)
+ return ret;
+ }
+ if (!mc_addrs_num) {
+ adapter->mc_addrs_num = 0;
+ return 0;
+ }
+
+ /* add new ones */
+ ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true);
+ if (ret) {
+ /* if adding mac address list fails, should add the
+ * previous addresses back.
+ */
+ if (adapter->mc_addrs_num)
+ (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num,
+ true);
+ return ret;
+ }
+ adapter->mc_addrs_num = mc_addrs_num;
+ memcpy(adapter->mc_addrs,
+ mc_addrs, mc_addrs_num * sizeof(*mc_addrs));
+
+ return 0;
+}
+
+static int
+dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_ether_addr *old_addr;
+ int ret;
+
+ old_addr = hw->eth_dev->data->mac_addrs;
+ if (rte_is_same_ether_addr(old_addr, mac_addr))
+ return 0;
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ old_addr->addr_bytes[0],
+ old_addr->addr_bytes[1],
+ old_addr->addr_bytes[2],
+ old_addr->addr_bytes[3],
+ old_addr->addr_bytes[4],
+ old_addr->addr_bytes[5]);
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ mac_addr->addr_bytes[0],
+ mac_addr->addr_bytes[1],
+ mac_addr->addr_bytes[2],
+ mac_addr->addr_bytes[3],
+ mac_addr->addr_bytes[4],
+ mac_addr->addr_bytes[5]);
+
+ if (ret)
+ return -EIO;
+
+ rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs);
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
.allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .mac_addr_add = dcf_dev_add_mac_addr,
+ .mac_addr_remove = dcf_dev_del_mac_addr,
+ .set_mc_addr_list = dcf_set_mc_addr_list,
+ .mac_addr_set = dcf_dev_set_default_mac_addr,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 22e450527b..27f6402786 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -14,7 +14,7 @@
#include "ice_dcf.h"
#define ICE_DCF_MAX_RINGS 1
-
+#define DCF_NUM_MACADDR_MAX 64
#define ICE_DCF_FRAME_SIZE_MAX 9728
#define ICE_DCF_VLAN_TAG_SIZE 4
#define ICE_DCF_ETH_OVERHEAD \
@@ -35,7 +35,8 @@ struct ice_dcf_adapter {
bool promisc_unicast_enabled;
bool promisc_multicast_enabled;
-
+ uint32_t mc_addrs_num;
+ struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX];
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
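For context, the four MAC ops wired in by the patch above are reached through the standard ethdev API. A minimal, hedged application-side sketch (port_id is assumed to be a configured DCF port; addresses are arbitrary, error handling trimmed):

#include <rte_ethdev.h>
#include <rte_ether.h>

static void
dcf_mac_demo(uint16_t port_id)
{
        struct rte_ether_addr extra = {
                .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
        struct rte_ether_addr primary = {
                .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 } };
        struct rte_ether_addr mc[1] = {
                { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } } };

        /* .mac_addr_add -> ice_dcf_add_del_all_mac_addr(..., EXTRA) */
        rte_eth_dev_mac_addr_add(port_id, &extra, 0);
        /* .set_mc_addr_list -> dcf_set_mc_addr_list() */
        rte_eth_dev_set_mc_addr_list(port_id, mc, 1);
        /* .mac_addr_set -> dcf_dev_set_default_mac_addr() */
        rte_eth_dev_default_mac_addr_set(port_id, &primary);
}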
* [PATCH v3 08/22] net/ice: support dcf VLAN filter and offload configuration
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (6 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 07/22] net/ice: support dcf MAC configuration Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 09/22] net/ice: support DCF new VLAN capabilities Kevin Liu
` (15 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The following PMD ops are supported in this patch:
.vlan_filter_set = dcf_dev_vlan_filter_set
.vlan_offload_set = dcf_dev_vlan_offload_set
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
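A minimal, hedged sketch of how an application reaches the two new ops through the ethdev API (port_id is assumed to be a started DCF port):

#include <rte_ethdev.h>

static int
dcf_vlan_demo(uint16_t port_id)
{
        int mask, ret;

        /* .vlan_filter_set: admit VLAN 100 on the VF */
        ret = rte_eth_dev_vlan_filter(port_id, 100, 1);
        if (ret != 0)
                return ret;

        /* .vlan_offload_set: turn on Rx VLAN stripping */
        mask = rte_eth_dev_get_vlan_offload(port_id);
        mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
        return rte_eth_dev_set_vlan_offload(port_id, mask);
}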
drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++++++++
1 file changed, 101 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0d944f9fd2..e58cdf47d2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_filter_list *vlan_list;
+ uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+ sizeof(uint16_t)];
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+ vlan_list->vsi_id = hw->vsi_res->vsi_id;
+ vlan_list->num_elements = 1;
+ vlan_list->vlan_id[0] = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+ args.req_msg = cmd_buffer;
+ args.req_msglen = sizeof(cmd_buffer);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
+
+ return err;
+}
+
+static int
+dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_ENABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static int
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ /* Vlan stripping setting */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ /* Enable or disable VLAN stripping */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ err = dcf_enable_vlan_strip(hw);
+ else
+ err = dcf_disable_vlan_strip(hw);
+
+ if (err)
+ return -EIO;
+ }
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mac_addr_remove = dcf_dev_del_mac_addr,
.set_mc_addr_list = dcf_set_mc_addr_list,
.mac_addr_set = dcf_dev_set_default_mac_addr,
+ .vlan_filter_set = dcf_dev_vlan_filter_set,
+ .vlan_offload_set = dcf_dev_vlan_offload_set,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 09/22] net/ice: support DCF new VLAN capabilities
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (7 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 08/22] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 10/22] net/ice: enable CVL DCF device reset API Kevin Liu
` (14 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The new VLAN virtchnl opcodes introduce new capabilities like VLAN
filtering, stripping and insertion.
The DCF first needs to query the VLAN capabilities based on the current
device configuration.
Based on the negotiated capabilities, the DCF can configure the inner
VLAN filter when port VLAN is enabled, and can configure the outer VLAN
(0x8100) when port VLAN is disabled, to stay compatible with legacy mode.
When the port VLAN is updated by the DCF, the DCF needs to reset to
query the new VLAN capabilities.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
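As a reading aid, the capability check performed below before issuing an OP_ADD_VLAN_V2 request boils down to the following sketch (a hypothetical helper, not a driver symbol; the virtchnl.h include path is assumed):

#include <stdbool.h>
#include <stdint.h>
#include <virtchnl.h>

static bool
dcf_vlan_v2_8100_supported(const struct virtchnl_vlan_caps *caps)
{
        const struct virtchnl_vlan_supported_caps *f =
                &caps->filtering.filtering_support;
        /* outer filtering wins when present, otherwise fall back to inner */
        uint32_t support = f->outer ? f->outer : f->inner;

        return (support & VIRTCHNL_VLAN_ETHERTYPE_8100) != 0;
}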
drivers/net/ice/ice_dcf.c | 27 +++++
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++++++++---
3 files changed, 182 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..885d58c0f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
return 0;
}
+static int
+dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_vlan_caps vlan_v2_caps;
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS;
+ args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps;
+ args.rsp_buflen = sizeof(vlan_v2_caps);
+
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS");
+ return ret;
+ }
+
+ rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
+ return 0;
+}
+
int
ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
@@ -701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
+ if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) &&
+ dcf_get_vlan_offload_caps_v2(hw))
+ goto err_rss;
+
return 0;
err_rss:
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78df202a77..32e6031bd9 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -107,6 +107,7 @@ struct ice_dcf_hw {
uint16_t nb_msix;
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
+ struct virtchnl_vlan_caps vlan_v2_caps;
/* Link status */
bool link_up;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e58cdf47d2..d4bfa182a4 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,46 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_supported_caps *supported_caps =
+ &hw->vlan_v2_caps.filtering.filtering_support;
+ struct virtchnl_vlan *vlan_setting;
+ struct virtchnl_vlan_filter_list_v2 vlan_filter;
+ struct dcf_virtchnl_cmd args;
+ uint32_t filtering_caps;
+ int err;
+
+ if (supported_caps->outer) {
+ filtering_caps = supported_caps->outer;
+ vlan_setting = &vlan_filter.filters[0].outer;
+ } else {
+ filtering_caps = supported_caps->inner;
+ vlan_setting = &vlan_filter.filters[0].inner;
+ }
+
+ if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
+ return -ENOTSUP;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.vport_id = hw->vsi_res->vsi_id;
+ vlan_filter.num_elements = 1;
+ vlan_setting->tpid = RTE_ETHER_TYPE_VLAN;
+ vlan_setting->tci = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2;
+ args.req_msg = (uint8_t *)&vlan_filter;
+ args.req_msglen = sizeof(vlan_filter);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2");
+
+ return err;
+}
+
static int
dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
{
@@ -1052,6 +1092,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
return err;
}
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+ err = dcf_add_del_vlan_v2(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+ }
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static void
+dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable)
+{
+ struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i, j;
+ uint64_t ids;
+
+ for (i = 0; i < RTE_DIM(vfc->ids); i++) {
+ if (vfc->ids[i] == 0)
+ continue;
+
+ ids = vfc->ids[i];
+ for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) {
+ if (ids & 1)
+ dcf_add_del_vlan_v2(hw, 64 * i + j, enable);
+ }
+ }
+}
+
+static int
+dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable)
+{
+ struct virtchnl_vlan_supported_caps *stripping_caps =
+ &hw->vlan_v2_caps.offloads.stripping_support;
+ struct virtchnl_vlan_setting vlan_strip;
+ struct dcf_virtchnl_cmd args;
+ uint32_t *ethertype;
+ int ret;
+
+ if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.outer_ethertype_setting;
+ else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.inner_ethertype_setting;
+ else
+ return -ENOTSUP;
+
+ memset(&vlan_strip, 0, sizeof(vlan_strip));
+ vlan_strip.vport_id = hw->vsi_res->vsi_id;
+ *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 :
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2;
+ args.req_msg = (uint8_t *)&vlan_strip;
+ args.req_msglen = sizeof(vlan_strip);
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ enable ? "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" :
+ "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ bool enable;
+ int err;
+
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
+
+ dcf_iterate_vlan_filters_v2(dev, enable);
+ }
+
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
+
+ err = dcf_config_vlan_strip_v2(hw, enable);
+ /* If not support, the stripping is already disabled by PF */
+ if (err == -ENOTSUP && !enable)
+ err = 0;
+ if (err)
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int
dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
{
@@ -1084,30 +1234,17 @@ dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
return ret;
}
-static int
-dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct ice_dcf_adapter *adapter = dev->data->dev_private;
- struct ice_dcf_hw *hw = &adapter->real_hw;
- int err;
-
- if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
- return -ENOTSUP;
-
- err = dcf_add_del_vlan(hw, vlan_id, on);
- if (err)
- return -EIO;
- return 0;
-}
-
static int
dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
int err;
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
+ return dcf_dev_vlan_offload_set_v2(dev, mask);
+
if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
return -ENOTSUP;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 10/22] net/ice: enable CVL DCF device reset API
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (8 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 09/22] net/ice: support DCF new VLAN capabilities Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 11/22] net/ice: support IPv6 NVGRE tunnel Kevin Liu
` (13 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Dapeng Yu, Kevin Liu
From: Dapeng Yu <dapengx.yu@intel.com>
Enable CVL DCF device reset API.
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
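A hedged sketch of the application-side recovery flow that ends up in the new ice_dcf_cap_reset() helper via the port's .dev_reset op (per-queue setup after reconfigure is elided):

#include <rte_ethdev.h>

static int
dcf_recover(uint16_t port_id, const struct rte_eth_conf *conf,
            uint16_t nb_rxq, uint16_t nb_txq)
{
        int ret;

        ret = rte_eth_dev_reset(port_id);
        if (ret != 0)
                return ret;
        ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
        if (ret != 0)
                return ret;
        /* rte_eth_rx/tx_queue_setup() for each queue would go here */
        return rte_eth_dev_start(port_id);
}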
drivers/net/ice/ice_dcf.c | 24 ++++++++++++++++++++++++
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 25 insertions(+)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 885d58c0f4..9c2f13cf72 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1163,3 +1163,27 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
rte_free(list);
return err;
}
+
+int
+ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
+{
+ int ret;
+
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+ ice_dcf_disable_irq0(hw);
+ rte_intr_disable(intr_handle);
+ rte_intr_callback_unregister(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ ret = ice_dcf_mode_disable(hw);
+ if (ret)
+ goto err;
+ ret = ice_dcf_get_vf_resource(hw);
+err:
+ rte_intr_callback_register(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ rte_intr_enable(intr_handle);
+ ice_dcf_enable_irq0(hw);
+ return ret;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 32e6031bd9..8cf17e7700 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -137,6 +137,7 @@ int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
+int ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
void ice_dcf_tm_conf_uninit(struct rte_eth_dev *dev);
int ice_dcf_replay_vf_bw(struct ice_dcf_hw *hw, uint16_t vf_id);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 11/22] net/ice: support IPv6 NVGRE tunnel
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (9 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 10/22] net/ice: enable CVL DCF device reset API Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 12/22] net/ice: support new pattern of IPv4 Kevin Liu
` (12 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add protocol definition and pattern matching for IPv6 NVGRE tunnel.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
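For illustration, an rte_flow pattern that exercises the new match (IPv6 next header 0x2F, i.e. GRE); flow attributes and actions are elided:

#include <rte_flow.h>

struct rte_flow_item_ipv6 ipv6_spec = { .hdr = { .proto = 0x2f } };
struct rte_flow_item_ipv6 ipv6_mask = { .hdr = { .proto = 0xff } };
struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6,
          .spec = &ipv6_spec, .mask = &ipv6_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};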
drivers/net/ice/ice_switch_filter.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 36c9bffb73..c04547235c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -31,6 +31,7 @@
#define ICE_PPP_IPV4_PROTO 0x0021
#define ICE_PPP_IPV6_PROTO 0x0057
#define ICE_IPV4_PROTO_NVGRE 0x002F
+#define ICE_IPV6_PROTO_NVGRE 0x002F
#define ICE_SW_PRI_BASE 6
#define ICE_SW_INSET_ETHER ( \
@@ -763,6 +764,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
break;
}
}
+ if ((ipv6_spec->hdr.proto &
+ ipv6_mask->hdr.proto) ==
+ ICE_IPV6_PROTO_NVGRE)
+ *tun_type = ICE_SW_TUN_AND_NON_TUN;
if (ipv6_mask->hdr.proto)
*input |= ICE_INSET_IPV6_NEXT_HDR;
if (ipv6_mask->hdr.hop_limits)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 12/22] net/ice: support new pattern of IPv4
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (10 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 11/22] net/ice: support IPv6 NVGRE tunnel Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 13/22] net/ice: treat unknown package as OS default package Kevin Liu
` (11 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add definition and pattern entry for IPv4 pattern: MAC/VLAN/IPv4
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
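For illustration, the new MAC/VLAN/IPv4 pattern could be exercised from testpmd with a rule along these lines (the vf action is one common choice for DCF switch rules, picked here purely for illustration):

        flow create 0 ingress pattern eth / vlan vid is 100 / ipv4 / end actions vf id 1 / end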
drivers/net/ice/ice_switch_filter.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c04547235c..4db7021e3f 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -38,6 +38,8 @@
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
#define ICE_SW_INSET_MAC_VLAN ( \
ICE_SW_INSET_ETHER | ICE_INSET_VLAN_INNER)
+#define ICE_SW_INSET_MAC_VLAN_IPV4 ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4)
#define ICE_SW_INSET_MAC_QINQ ( \
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_VLAN_INNER | \
ICE_INSET_VLAN_OUTER)
@@ -215,6 +217,7 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv4, ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4, ICE_SW_INSET_MAC_VLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 13/22] net/ice: treat unknown package as OS default package
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (11 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 12/22] net/ice: support new pattern of IPv4 Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 14/22] net/ice: handle virtchnl event message without interrupt Kevin Liu
` (10 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
In order to use a custom package, an unknown package should be treated
as the OS default package.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 73e550f5fb..ad9b09d081 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1710,13 +1710,16 @@ ice_load_pkg_type(struct ice_hw *hw)
/* store the activated package type (OS default or Comms) */
if (!strncmp((char *)hw->active_pkg_name, ICE_OS_DEFAULT_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_OS_DEFAULT;
- else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ } else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_COMMS;
- else
- package_type = ICE_PKG_TYPE_UNKNOWN;
+ } else {
+ PMD_INIT_LOG(WARNING,
+ "The package type is not identified, treaded as OS default type");
+ package_type = ICE_PKG_TYPE_OS_DEFAULT;
+ }
PMD_INIT_LOG(NOTICE, "Active package is: %d.%d.%d.%d, %s (%s VLAN mode)",
hw->active_pkg_ver.major, hw->active_pkg_ver.minor,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 14/22] net/ice: handle virtchnl event message without interrupt
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (12 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 13/22] net/ice: treat unknown package as OS default package Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 15/22] net/ice: add DCF request queues function Kevin Liu
` (9 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Currently, the VF can only handle a virtchnl event message in the
interrupt handler. This does not work in two cases:
1. If the event message arrives during VF initialization, before the
interrupt is enabled, the message will not be handled correctly.
2. Some virtchnl commands need to receive and handle the event message
with the interrupt disabled.
To solve this issue, add virtchnl event message handling to the path
that reads virtchnl messages from the PF adminq.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
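In essence, the polling loop gains a classification rule like the sketch below (a simplified, hypothetical helper, not the driver's exact code; the virtchnl.h include path is assumed):

#include <stdint.h>
#include <virtchnl.h>

/* 1: PF event to consume inline (e.g. latch the resetting flag on
 * VIRTCHNL_EVENT_RESET_IMPENDING); 0: the awaited reply; -1: keep polling */
static int
classify_adminq_msg(uint32_t v_op, uint32_t expected_op)
{
        if (v_op == VIRTCHNL_OP_EVENT)
                return 1;
        if (v_op != expected_op)
                return -1;
        return 0;
}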
drivers/net/ice/ice_dcf.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 9c2f13cf72..1415f26ac3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -63,11 +63,32 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
goto again;
v_op = rte_le_to_cpu_32(event.desc.cookie_high);
- if (v_op != op)
- goto again;
+
+ if (v_op == VIRTCHNL_OP_EVENT) {
+ struct virtchnl_pf_event *vpe =
+ (struct virtchnl_pf_event *)event.msg_buf;
+ switch (vpe->event) {
+ case VIRTCHNL_EVENT_RESET_IMPENDING:
+ hw->resetting = true;
+ if (rsp_msglen)
+ *rsp_msglen = 0;
+ return IAVF_SUCCESS;
+ default:
+ goto again;
+ }
+ } else {
+ /* async reply msg on command issued by vf previously */
+ if (v_op != op) {
+ PMD_DRV_LOG(WARNING,
+ "command mismatch, expect %u, get %u",
+ op, v_op);
+ goto again;
+ }
+ }
if (rsp_msglen != NULL)
*rsp_msglen = event.msg_len;
+
return rte_le_to_cpu_32(event.desc.cookie_low);
again:
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 15/22] net/ice: add DCF request queues function
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (13 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 14/22] net/ice: handle virtchnl event message without interrupt Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 16/22] net/ice: negotiate large VF and request more queues Kevin Liu
` (8 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Add a new virtchnl function to request additional queues from the PF.
The current default number of queue pairs is 16. In order to support a
DCF port with up to 256 queue pairs, enable this request-queues function.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
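A hedged sketch of the intended call pattern: on success the PF starts a VF reset (hw->resetting is latched while polling the reply, see the previous patch), so a full port re-initialization must follow. Patch 16 packages exactly this as ice_dcf_queues_req_reset():

static int
dcf_request_and_reset(struct rte_eth_dev *dev, uint16_t num)
{
        struct ice_dcf_adapter *ad = dev->data->dev_private;
        int ret;

        ret = ice_dcf_request_queues(&ad->real_hw, num);
        if (ret != 0)
                return ret;     /* PF refused; the log shows the available count */

        /* success implies a VF reset is in flight: re-init the port */
        return ice_dcf_dev_reset(dev);
}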
drivers/net/ice/ice_dcf.c | 98 +++++++++++++++++++++++++++++++++------
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 86 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1415f26ac3..6aeafa6681 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -468,18 +468,38 @@ ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
goto ret;
}
- do {
- if (!cmd->pending)
- break;
-
- rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
- } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
-
- if (cmd->v_ret != IAVF_SUCCESS) {
- err = -1;
- PMD_DRV_LOG(ERR,
- "No response (%d times) or return failure (%d) for cmd %d",
- i, cmd->v_ret, cmd->v_op);
+ switch (cmd->v_op) {
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ err = ice_dcf_recv_cmd_rsp_no_irq(hw,
+ VIRTCHNL_OP_REQUEST_QUEUES,
+ cmd->rsp_msgbuf,
+ cmd->rsp_buflen,
+ NULL);
+ if (err != IAVF_SUCCESS || !hw->resetting) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "Failed to get response of "
+ "VIRTCHNL_OP_REQUEST_QUEUES %d",
+ err);
+ }
+ break;
+ default:
+ /* For other virtchnl ops in running time,
+ * wait for the cmd done flag.
+ */
+ do {
+ if (!cmd->pending)
+ break;
+ rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+ } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
+
+ if (cmd->v_ret != IAVF_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "No response (%d times) or "
+ "return failure (%d) for cmd %d",
+ i, cmd->v_ret, cmd->v_op);
+ }
}
ret:
@@ -1011,6 +1031,58 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
+{
+ struct virtchnl_vf_res_request vfres;
+ struct dcf_virtchnl_cmd args;
+ uint16_t num_queue_pairs;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) {
+ PMD_DRV_LOG(ERR, "request queues not supported");
+ return -1;
+ }
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR, "queue number cannot be zero");
+ return -1;
+ }
+ vfres.num_queue_pairs = num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_REQUEST_QUEUES;
+
+ args.req_msg = (u8 *)&vfres;
+ args.req_msglen = sizeof(vfres);
+
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ /*
+ * disable interrupt to avoid the admin queue message to be read
+ * before iavf_read_msg_from_pf.
+ */
+ rte_intr_disable(hw->eth_dev->intr_handle);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ rte_intr_enable(hw->eth_dev->intr_handle);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
+ return err;
+ }
+
+ /* request additional queues failed, return available number */
+ num_queue_pairs = ((struct virtchnl_vf_res_request *)
+ args.rsp_msgbuf)->num_queue_pairs;
+ PMD_DRV_LOG(ERR,
+ "request queues failed, only %u queues available",
+ num_queue_pairs);
+
+ return -1;
+}
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 8cf17e7700..99498e2184 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -127,6 +127,7 @@ int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 16/22] net/ice: negotiate large VF and request more queues
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (14 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 15/22] net/ice: add DCF request queues function Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 17/22] net/ice: enable multiple queues configurations for large VF Kevin Liu
` (7 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Negotiate the large VF capability with the PF during VF initialization.
If large VF is supported and more than 16 queues are required, the VF
requests additional queues from the PF and marks large VF support as
enabled.
If the number of allocated queues is larger than 16, the max RSS queue
region can no longer be 16. Add a function to query the max RSS queue
region from the PF, and use it in RSS initialization and future filter
configuration.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
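From the application's point of view, the large-VF path is triggered simply by configuring more than 16 queues (a sketch with a default port configuration):

#include <string.h>
#include <rte_ethdev.h>

static int
dcf_configure_large(uint16_t port_id)
{
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        /* > ICE_DCF_MAX_NUM_QUEUES_DFLT (16) engages the large-VF path:
         * VIRTCHNL_OP_REQUEST_QUEUES, VF reset, then max-RSS-qregion query */
        return rte_eth_dev_configure(port_id, 64, 64, &conf);
}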
drivers/net/ice/ice_dcf.c | 34 +++++++++++++++-
drivers/net/ice/ice_dcf.h | 4 ++
drivers/net/ice/ice_dcf_ethdev.c | 69 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 2 +
4 files changed, 106 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 6aeafa6681..7091658841 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,8 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -1083,6 +1084,37 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return -1;
}
+int
+ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ uint16_t qregion_width;
+ int err;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_MAX_RSS_QREGION;
+ args.req_msg = NULL;
+ args.req_msglen = 0;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of "
+ "VIRTCHNL_OP_GET_MAX_RSS_QREGION");
+ return err;
+ }
+
+ qregion_width = ((struct virtchnl_max_rss_qregion *)
+ args.rsp_msgbuf)->qregion_width;
+ hw->max_rss_qregion = (uint16_t)(1 << qregion_width);
+
+ return 0;
+}
+
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 99498e2184..05ea91d2a5 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -105,6 +105,7 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
+ uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -114,6 +115,8 @@ struct ice_dcf_hw {
uint32_t link_speed;
bool resetting;
+ /* Indicate large VF support enabled or not */
+ bool lv_enabled;
};
int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -128,6 +131,7 @@ int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
+int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d4bfa182a4..a43c5a320d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -39,6 +39,8 @@ static int
ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num);
+
static int
ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
@@ -663,6 +665,11 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
{
struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
struct ice_adapter *ad = &dcf_ad->parent;
+ struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ int ret;
+
+ uint16_t num_queue_pairs =
+ RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
@@ -670,6 +677,47 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ /* Large VF setting */
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_DFLT) {
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS)) {
+ PMD_DRV_LOG(ERR, "large VF is not supported");
+ return -1;
+ }
+
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_LV) {
+ PMD_DRV_LOG(ERR,
+ "queue pairs number cannot be larger than %u",
+ ICE_DCF_MAX_NUM_QUEUES_LV);
+ return -1;
+ }
+
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ ret = ice_dcf_get_max_rss_queue_region(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "get max rss queue region failed");
+ return ret;
+ }
+
+ hw->lv_enabled = true;
+ } else {
+ /* Check if large VF is already enabled. If so, disable and
+ * release redundant queue resource.
+ */
+ if (hw->lv_enabled) {
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ hw->lv_enabled = false;
+ }
+ /* if large VF is not required, use default rss queue region */
+ hw->max_rss_qregion = ICE_DCF_MAX_NUM_QUEUES_DFLT;
+ }
+
return 0;
}
@@ -681,8 +729,8 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_hw *hw = &adapter->real_hw;
dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
- dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
- dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+ dev_info->max_rx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
+ dev_info->max_tx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
dev_info->hash_key_size = hw->vf_res->rss_key_size;
@@ -1829,6 +1877,23 @@ ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
return 0;
}
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int ret;
+
+ ret = ice_dcf_request_queues(hw, num);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "request queues from PF failed");
+ return ret;
+ }
+ PMD_DRV_LOG(INFO, "change queue pairs from %u to %u",
+ hw->vsi_res->num_queue_pairs, num);
+
+ return ice_dcf_dev_reset(dev);
+}
+
static int
ice_dcf_cap_check_handler(__rte_unused const char *key,
const char *value, __rte_unused void *opaque)
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 27f6402786..4a08d32e0c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,6 +20,8 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 17/22] net/ice: enable multiple queues configurations for large VF
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (15 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 16/22] net/ice: negotiate large VF and request more queues Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 18/22] net/ice: enable IRQ mapping configuration " Kevin Liu
` (6 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Since the adminq buffer size is limited to 4K, a single
VIRTCHNL_OP_CONFIG_VSI_QUEUES message cannot configure up to 256 queues.
In this patch, the message is sent multiple times so that each buffer
stays below the 4K limit.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
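Back-of-envelope for the 4K limit, matching the ICE_DCF_CFG_Q_NUM_PER_BUF = 32 constant introduced below (the per-entry size is an estimate, not the exact virtchnl struct size):

#include <stdio.h>

int
main(void)
{
        /* at roughly 120 B per queue-pair entry (estimate), 32 entries
         * plus the list header stay safely under a 4096 B buffer */
        unsigned int per_buf = 32;      /* ICE_DCF_CFG_Q_NUM_PER_BUF */
        unsigned int queues = 256;
        unsigned int msgs = (queues + per_buf - 1) / per_buf;

        printf("%u CONFIG_VSI_QUEUES messages\n", msgs);        /* 8 */
        return 0;
}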
drivers/net/ice/ice_dcf.c | 11 ++++++-----
drivers/net/ice/ice_dcf.h | 3 ++-
drivers/net/ice/ice_dcf_ethdev.c | 20 ++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7091658841..7004c00f1c 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -949,7 +949,8 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
#define IAVF_RXDID_COMMS_OVS_1 22
int
-ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
@@ -962,16 +963,16 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
int err;
size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+ sizeof(vc_config->qpair[0]) * num_queue_pairs;
vc_config = rte_zmalloc("cfg_queue", size, 0);
if (!vc_config)
return -ENOMEM;
vc_config->vsi_id = hw->vsi_res->vsi_id;
- vc_config->num_queue_pairs = hw->num_queue_pairs;
+ vc_config->num_queue_pairs = num_queue_pairs;
- for (i = 0, vc_qp = vc_config->qpair;
- i < hw->num_queue_pairs;
+ for (i = index, vc_qp = vc_config->qpair;
+ i < index + num_queue_pairs;
i++, vc_qp++) {
vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
vc_qp->txq.queue_id = i;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 05ea91d2a5..e36428a92a 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -129,7 +129,8 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
-int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a43c5a320d..78df82d5b5 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -513,6 +513,8 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
struct ice_adapter *ad = &dcf_ad->parent;
struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ uint16_t num_queue_pairs;
+ uint16_t index = 0;
int ret;
if (hw->resetting) {
@@ -531,6 +533,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
+ num_queue_pairs = hw->num_queue_pairs;
ret = ice_dcf_init_rx_queues(dev);
if (ret) {
@@ -546,7 +549,20 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
}
}
- ret = ice_dcf_configure_queues(hw);
+ /* If needed, send configure queues msg multiple times to make the
+ * adminq buffer length smaller than the 4K limitation.
+ */
+ while (num_queue_pairs > ICE_DCF_CFG_Q_NUM_PER_BUF) {
+ if (ice_dcf_configure_queues(hw,
+ ICE_DCF_CFG_Q_NUM_PER_BUF, index) != 0) {
+ PMD_DRV_LOG(ERR, "configure queues failed");
+ goto err_queue;
+ }
+ num_queue_pairs -= ICE_DCF_CFG_Q_NUM_PER_BUF;
+ index += ICE_DCF_CFG_Q_NUM_PER_BUF;
+ }
+
+ ret = ice_dcf_configure_queues(hw, num_queue_pairs, index);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to config queues");
return ret;
@@ -586,7 +602,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
+err_queue:
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 4a08d32e0c..2fac1e5b21 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -22,6 +22,7 @@
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 18/22] net/ice: enable IRQ mapping configuration for large VF
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (16 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 17/22] net/ice: enable multiple queues configurations for large VF Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 19/22] net/ice: add enable/disable queues for DCF " Kevin Liu
` (5 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
The current IRQ mapping configuration only supports up to 16 queues and
16 MSIX vectors. Change the queue-vector mapping structure to indicate
up to 256 queues. A new opcode is used to handle the case with a large
number of queues. To stay within the adminq buffer size limitation, the
virtchnl message is sent multiple times if needed.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
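The send-in-slices pattern used here (and in the previous patch for VIRTCHNL_OP_CONFIG_VSI_QUEUES) generalizes to the sketch below; dcf_send_chunked is a hypothetical helper whose callback matches the new ice_dcf_config_irq_map_lv(hw, num, index) signature:

static int
dcf_send_chunked(struct ice_dcf_hw *hw, uint16_t total, uint16_t per_buf,
                 int (*send)(struct ice_dcf_hw *hw, uint16_t num,
                             uint16_t index))
{
        uint16_t index = 0;
        int ret;

        while (total > per_buf) {
                ret = send(hw, per_buf, index);
                if (ret != 0)
                        return ret;
                total -= per_buf;
                index += per_buf;
        }
        /* final (possibly short) slice */
        return send(hw, total, index);
}

e.g. dcf_send_chunked(hw, dev->data->nb_rx_queues, ICE_DCF_IRQ_MAP_NUM_PER_BUF, ice_dcf_config_irq_map_lv);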
drivers/net/ice/ice_dcf.c | 50 +++++++++++++++++++++++++++----
drivers/net/ice/ice_dcf.h | 10 ++++++-
drivers/net/ice/ice_dcf_ethdev.c | 51 +++++++++++++++++++++++++++-----
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 99 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7004c00f1c..290f754049 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1115,7 +1115,6 @@ ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
return 0;
}
-
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
@@ -1132,13 +1131,14 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return -ENOMEM;
map_info->num_vectors = hw->nb_msix;
- for (i = 0; i < hw->nb_msix; i++) {
- vecmap = &map_info->vecmap[i];
+ for (i = 0; i < hw->eth_dev->data->nb_rx_queues; i++) {
+ vecmap =
+ &map_info->vecmap[hw->qv_map[i].vector_id - hw->msix_base];
vecmap->vsi_id = hw->vsi_res->vsi_id;
vecmap->rxitr_idx = 0;
- vecmap->vector_id = hw->msix_base + i;
+ vecmap->vector_id = hw->qv_map[i].vector_id;
vecmap->txq_map = 0;
- vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+ vecmap->rxq_map |= 1 << hw->qv_map[i].queue_id;
}
memset(&args, 0, sizeof(args));
@@ -1154,6 +1154,46 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index)
+{
+ struct virtchnl_queue_vector_maps *map_info;
+ struct virtchnl_queue_vector *qv_maps;
+ struct dcf_virtchnl_cmd args;
+ int len, i, err;
+ int count = 0;
+
+ len = sizeof(struct virtchnl_queue_vector_maps) +
+ sizeof(struct virtchnl_queue_vector) * (num - 1);
+
+ map_info = rte_zmalloc("map_info", len, 0);
+ if (!map_info)
+ return -ENOMEM;
+
+ map_info->vport_id = hw->vsi_res->vsi_id;
+ map_info->num_qv_maps = num;
+ for (i = index; i < index + map_info->num_qv_maps; i++) {
+ qv_maps = &map_info->qv_maps[count++];
+ qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
+ qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
+ qv_maps->queue_id = hw->qv_map[i].queue_id;
+ qv_maps->vector_id = hw->qv_map[i].vector_id;
+ }
+
+ args.v_op = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
+ args.req_msg = (u8 *)map_info;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
+
+ rte_free(map_info);
+ return err;
+}
+
int
ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index e36428a92a..ce57a687ab 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -74,6 +74,11 @@ struct ice_dcf_tm_conf {
bool committed;
};
+struct ice_dcf_qv_map {
+ uint16_t queue_id;
+ uint16_t vector_id;
+};
+
struct ice_dcf_hw {
struct iavf_hw avf;
@@ -106,7 +111,8 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
- uint16_t rxq_map[16];
+
+ struct ice_dcf_qv_map *qv_map; /* queue vector mapping */
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -134,6 +140,8 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 78df82d5b5..1ddba02ebb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -143,6 +143,7 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
{
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct ice_dcf_qv_map *qv_map;
uint16_t interval, i;
int vec;
@@ -161,6 +162,14 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
}
+ qv_map = rte_zmalloc("qv_map",
+ dev->data->nb_rx_queues * sizeof(struct ice_dcf_qv_map), 0);
+ if (!qv_map) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+ dev->data->nb_rx_queues);
+ return -1;
+ }
+
if (!dev->data->dev_conf.intr_conf.rxq ||
!rte_intr_dp_is_en(intr_handle)) {
/* Rx interrupt disabled, Map interrupt only for writeback */
@@ -196,17 +205,22 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
IAVF_WRITE_FLUSH(&hw->avf);
/* map all queues to the same interrupt */
- for (i = 0; i < dev->data->nb_rx_queues; i++)
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
+ }
+ hw->qv_map = qv_map;
} else {
if (!rte_intr_allow_others(intr_handle)) {
hw->nb_msix = 1;
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
rte_intr_vec_list_index_set(intr_handle,
i, IAVF_MISC_VEC_ID);
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
hw->msix_base);
@@ -219,21 +233,44 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[vec] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = vec;
rte_intr_vec_list_index_set(intr_handle,
i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"%u vectors are mapping to %u Rx queues",
hw->nb_msix, dev->data->nb_rx_queues);
}
}
- if (ice_dcf_config_irq_map(hw)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping failed");
- return -1;
+ if (!hw->lv_enabled) {
+ if (ice_dcf_config_irq_map(hw)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+ return -1;
+ }
+ } else {
+ uint16_t num_qv_maps = dev->data->nb_rx_queues;
+ uint16_t index = 0;
+
+ while (num_qv_maps > ICE_DCF_IRQ_MAP_NUM_PER_BUF) {
+ if (ice_dcf_config_irq_map_lv(hw,
+ ICE_DCF_IRQ_MAP_NUM_PER_BUF, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+ num_qv_maps -= ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ index += ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ }
+
+ if (ice_dcf_config_irq_map_lv(hw, num_qv_maps, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+
}
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 2fac1e5b21..9ef524c97c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -23,6 +23,7 @@
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 19/22] net/ice: add enable/disable queues for DCF large VF
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (17 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 18/22] net/ice: enable IRQ mapping configuration " Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 20/22] net/ice: fix DCF ACL flow engine Kevin Liu
` (4 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The current virtchnl structure for enabling/disabling queues supports a
maximum of 32 queue pairs. Use a new opcode and structure that can indicate
up to 256 queue pairs, in order to enable/disable queues in the large VF case.
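As a minimal illustration (paraphrasing the structures used in the diff
below; the values are only examples), one chunk in the new message
describes a whole contiguous queue range, whereas the legacy
virtchnl_queue_select op is bounded by its 32-bit queue bitmaps:
    /* Illustrative sketch only: a single VIRTCHNL_OP_ENABLE_QUEUES_V2
     * message can cover up to 256 queue pairs via contiguous ranges,
     * instead of the 32 addressable by a bitmap. */
    struct virtchnl_queue_chunk chunk = {
        .type = VIRTCHNL_QUEUE_TYPE_RX,
        .start_queue_id = 0,
        .num_queues = 256,
    };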
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 99 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf.h | 5 ++
drivers/net/ice/ice_dcf_ethdev.c | 26 +++++++--
drivers/net/ice/ice_dcf_ethdev.h | 8 +--
4 files changed, 125 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 290f754049..23edfd09b1 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -90,7 +90,6 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
*rsp_msglen = event.msg_len;
return rte_le_to_cpu_32(event.desc.cookie_low);
-
again:
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
@@ -896,7 +895,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
{
struct rte_eth_dev *dev = hw->eth_dev;
struct rte_eth_rss_conf *rss_conf;
- uint8_t i, j, nb_q;
+ uint16_t i, j, nb_q;
int ret;
rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
@@ -1075,6 +1074,12 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return err;
}
+ /* request queues succeeded, vf is resetting */
+ if (hw->resetting) {
+ PMD_DRV_LOG(INFO, "vf is resetting");
+ return 0;
+ }
+
/* request additional queues failed, return available number */
num_queue_pairs = ((struct virtchnl_vf_res_request *)
args.rsp_msgbuf)->num_queue_pairs;
@@ -1185,7 +1190,8 @@ ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
args.req_msg = (u8 *)map_info;
args.req_msglen = len;
args.rsp_msgbuf = hw->arq_buf;
- args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
@@ -1225,6 +1231,50 @@ ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
return err;
}
+int
+ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ if (rx) {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ } else {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ }
+
+ if (on)
+ args.v_op = VIRTCHNL_OP_ENABLE_QUEUES_V2;
+ else
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+ on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_disable_queues(struct ice_dcf_hw *hw)
{
@@ -1254,6 +1304,49 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues) +
+ sizeof(struct virtchnl_queue_chunk) *
+ (ICE_DCF_RXTX_QUEUE_CHUNKS_NUM - 1);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = ICE_DCF_RXTX_QUEUE_CHUNKS_NUM;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].num_queues =
+ hw->eth_dev->data->nb_tx_queues;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].num_queues =
+ hw->eth_dev->data->nb_rx_queues;
+
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats)
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index ce57a687ab..78ab23aaa6 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,8 @@
#include "base/ice_type.h"
#include "ice_logs.h"
+#define ICE_DCF_RXTX_QUEUE_CHUNKS_NUM 2
+
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -143,7 +145,10 @@ int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw,
+ uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ddba02ebb..e46c8405aa 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -317,6 +317,7 @@ static int
ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_rx_queue *rxq;
int err = 0;
@@ -339,7 +340,11 @@ ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, rx_queue_id, true, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, rx_queue_id, true, true);
+
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
rx_queue_id);
@@ -448,6 +453,7 @@ static int
ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_tx_queue *txq;
int err = 0;
@@ -463,7 +469,10 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, tx_queue_id, false, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, tx_queue_id, false, true);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
@@ -650,12 +659,17 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
struct ice_tx_queue *txq;
- int ret, i;
+ int i;
/* Stop All queues */
- ret = ice_dcf_disable_queues(hw);
- if (ret)
- PMD_DRV_LOG(WARNING, "Fail to stop queues");
+ if (!hw->lv_enabled) {
+ if (ice_dcf_disable_queues(hw))
+ PMD_DRV_LOG(WARNING, "Fail to stop queues");
+ } else {
+ if (ice_dcf_disable_queues_lv(hw))
+ PMD_DRV_LOG(WARNING,
+ "Fail to stop queues for large VF");
+ }
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 9ef524c97c..3f740e2c7b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,10 +20,10 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
-#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
-#define ICE_DCF_MAX_NUM_QUEUES_LV 256
-#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
-#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 20/22] net/ice: fix DCF ACL flow engine
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (18 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 19/22] net/ice: add enable/disable queues for DCF " Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 21/22] testpmd: force flow flush Kevin Liu
` (3 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
ACL is not a necessary feature for DCF and may not be supported by
the ice kernel driver, so this patch does not propagate an ACL
initialization failure to higher-level functions; instead it prints
error logs, cleans up the related resources and unregisters the
ACL engine.
Fixes: 40d466fa9f76 ("net/ice: support ACL filter in DCF")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_acl_filter.c | 20 ++++++++++++++----
drivers/net/ice/ice_generic_flow.c | 34 +++++++++++++++++++++++-------
2 files changed, 42 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0..20a1f86c43 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -56,6 +56,8 @@ ice_pattern_match_item ice_acl_pattern[] = {
{pattern_eth_ipv4_sctp, ICE_ACL_INSET_ETH_IPV4_SCTP, ICE_INSET_NONE, ICE_INSET_NONE},
};
+static void ice_acl_prof_free(struct ice_hw *hw);
+
static int
ice_acl_prof_alloc(struct ice_hw *hw)
{
@@ -1007,17 +1009,27 @@ ice_acl_init(struct ice_adapter *ad)
ret = ice_acl_setup(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_bitmap_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_prof_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
- return ice_register_parser(parser, ad);
+ ret = ice_register_parser(parser, ad);
+ if (ret)
+ goto deinit_acl;
+
+ return 0;
+
+deinit_acl:
+ ice_deinit_acl(pf);
+ ice_acl_prof_free(hw);
+ PMD_DRV_LOG(ERR, "ACL init failed, may not supported!");
+ return ret;
}
static void
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 53b1c0b69a..205ba5d21b 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1817,6 +1817,12 @@ ice_register_flow_engine(struct ice_flow_engine *engine)
TAILQ_INSERT_TAIL(&engine_list, engine, node);
}
+static void
+ice_unregister_flow_engine(struct ice_flow_engine *engine)
+{
+ TAILQ_REMOVE(&engine_list, engine, node);
+}
+
int
ice_flow_init(struct ice_adapter *ad)
{
@@ -1840,9 +1846,18 @@ ice_flow_init(struct ice_adapter *ad)
ret = engine->init(ad);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to initialize engine %d",
- engine->type);
- return ret;
+ /**
+ * ACL may not supported in kernel driver,
+ * so just unregister the engine.
+ */
+ if (engine->type == ICE_FLOW_ENGINE_ACL) {
+ ice_unregister_flow_engine(engine);
+ } else {
+ PMD_INIT_LOG(ERR,
+ "Failed to initialize engine %d",
+ engine->type);
+ return ret;
+ }
}
}
return 0;
@@ -1929,7 +1944,7 @@ ice_register_parser(struct ice_flow_parser *parser,
list = ice_get_parser_list(parser, ad);
if (list == NULL)
- return -EINVAL;
+ goto err;
if (ad->devargs.pipe_mode_support) {
TAILQ_INSERT_TAIL(list, parser_node, node);
@@ -1941,7 +1956,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -1952,7 +1967,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_SWITCH) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -1961,11 +1976,14 @@ ice_register_parser(struct ice_flow_parser *parser,
} else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_HEAD(list, parser_node, node);
} else {
- return -EINVAL;
+ goto err;
}
}
-DONE:
return 0;
+err:
+ rte_free(parser_node);
+ PMD_DRV_LOG(ERR, "%s failed.", __func__);
+ return -EINVAL;
}
void
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 21/22] testpmd: force flow flush
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (19 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 20/22] net/ice: fix DCF ACL flow engine Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-13 17:10 ` [PATCH v3 22/22] net/ice: fix DCF reset Kevin Liu
` (2 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Qi Zhang <qi.z.zhang@intel.com>
For MDCF, rte_flow_flush still needs to be invoked even if no flows
have been created in the current instance.
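For reference, a minimal sketch of the call this change guarantees
(assuming the usual testpmd includes and a valid port_id):
    struct rte_flow_error error;

    memset(&error, 0, sizeof(error));
    /* With this patch the flush is issued unconditionally, so the PMD
     * always gets a chance to clear rules it may hold internally. */
    if (rte_flow_flush(port_id, &error) != 0)
        printf("flow flush failed: %s\n",
               error.message ? error.message : "(no message)");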
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
app/test-pmd/config.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cc8e7aa138..3d40e3e43d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2923,15 +2923,15 @@ port_flow_flush(portid_t port_id)
port = &ports[port_id];
- if (port->flow_list == NULL)
- return ret;
-
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x44, sizeof(error));
if (rte_flow_flush(port_id, &error)) {
port_flow_complain(&error);
}
+ if (port->flow_list == NULL)
+ return ret;
+
while (port->flow_list) {
struct port_flow *pf = port->flow_list->next;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v3 22/22] net/ice: fix DCF reset
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (20 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 21/22] testpmd: force flow flush Kevin Liu
@ 2022-04-13 17:10 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
2022-04-19 16:01 ` [PATCH v4 0/2] fix DCF function defect Kevin Liu
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-13 17:10 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
After the PF triggers a VF reset, the VF PMD must reinitialize all
resources before it can perform any operation on the hardware.
This patch adds a flag to indicate whether the VF has been reset by
the PF, and updates the DCF reset operations according to this flag.
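A minimal application-side sketch of consuming the event this patch now
raises (the callback is hypothetical; real applications usually defer
the actual reset to a service thread instead of doing it in the callback):
    /* Fired by the PMD on VIRTCHNL_EVENT_RESET_IMPENDING. */
    static int
    reset_event_cb(uint16_t port_id, enum rte_eth_event_type type,
                   void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(type);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        /* rte_eth_dev_reset() stops and resets the port, leaving it
         * ready to be reconfigured and restarted. */
        return rte_eth_dev_reset(port_id);
    }

    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
                                  reset_event_cb, NULL);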
Fixes: 1a86f4dbdf42 ("net/ice: support DCF device reset")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_common.c | 4 +++-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 17 ++++++++++++++++-
drivers/net/ice/ice_dcf_parent.c | 3 +++
4 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index db87bacd97..13feb55469 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -755,6 +755,7 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
status = ice_init_def_sw_recp(hw, &hw->switch_info->recp_list);
if (status) {
ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
return status;
}
return ICE_SUCCESS;
@@ -823,7 +824,6 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
}
ice_rm_sw_replay_rule_info(hw, sw);
ice_free(hw, sw->recp_list);
- ice_free(hw, sw);
}
/**
@@ -833,6 +833,8 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
{
ice_cleanup_fltr_mgmt_single(hw, hw->switch_info);
+ ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
}
/**
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 23edfd09b1..35773e2acd 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1429,7 +1429,7 @@ ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
int ret;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
ice_dcf_disable_irq0(hw);
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e46c8405aa..0315e694d7 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1004,6 +1004,15 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
uint32_t i;
int len, err = 0;
+ if (hw->resetting) {
+ if (!add)
+ return 0;
+
+ PMD_DRV_LOG(ERR,
+ "fail to add multicast MACs for VF resetting");
+ return -EIO;
+ }
+
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
@@ -1642,7 +1651,13 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
- (void)ice_dcf_dev_stop(dev);
+ if (adapter->parent.pf.adapter_stopped)
+ (void)ice_dcf_dev_stop(dev);
+
+ if (adapter->real_hw.resetting) {
+ ice_dcf_uninit_hw(dev, &adapter->real_hw);
+ ice_dcf_init_hw(dev, &adapter->real_hw);
+ }
ice_free_queues(dev);
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 2f96dedcce..7f7ed796e2 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -240,6 +240,9 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
case VIRTCHNL_EVENT_RESET_IMPENDING:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
dcf_hw->resetting = true;
+ rte_eth_dev_callback_process(dcf_hw->eth_dev,
+ RTE_ETH_EVENT_INTR_RESET,
+ NULL);
break;
case VIRTCHNL_EVENT_LINK_CHANGE:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 00/23] complete common VF features for DCF
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (21 preceding siblings ...)
2022-04-13 17:10 ` [PATCH v3 22/22] net/ice: fix DCF reset Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 01/23] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
` (23 more replies)
2022-04-19 16:01 ` [PATCH v4 0/2] fix DCF function defect Kevin Liu
23 siblings, 24 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The DCF PMD supports the below dev ops:
dev_supported_ptypes_get
dev_link_update
xstats_get
xstats_get_names
xstats_reset
promiscuous_enable
promiscuous_disable
allmulticast_enable
allmulticast_disable
mac_addr_add
mac_addr_remove
set_mc_addr_list
vlan_filter_set
vlan_offload_set
mac_addr_set
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
rxq_info_get
txq_info_get
mtu_set
tx_done_cleanup
get_monitor_addr
v4:
* remove patch:
1.testpmd: force flow flush
2.net/ice: fix DCF ACL flow engine
3.net/ice: fix DCF reset
* add patch:
1.net/ice: add extended stats
2.net/ice: support queue information getting
3.net/ice: implement power management
4.doc: update for ice DCF datapath configuration
v3:
* remove patch:
1.net/ice/base: add VXLAN support for switch filter
2.net/ice: add VXLAN support for switch filter
3.common/iavf: support flushing rules and reporting DCF id
4.net/ice/base: fix ethertype filter input set
5.net/ice/base: support IPv6 GRE UDP pattern
6.net/ice/base: support new patterns of TCP and UDP
7.net/ice: support new patterns of TCP and UDP
8.net/ice/base: support IPv4 GRE tunnel
9.net/ice: support IPv4 GRE raw pattern type
10.net/ice/base: update Profile ID table for VXLAN
11.net/ice/base: update Protocol ID table to match DVM DDP
v2:
* remove patch:
1.net/iavf: support checking if device is an MDCF instance
2.net/ice: support MDCF(multi-DCF) instance
3.net/ice/base: support custom DDP buildin recipe
4.net/ice: support buildin recipe configuration
5.net/ice/base: support custom ddp package version
6.net/ice: disable ACL function for MDCF instance
Alvin Zhang (6):
net/ice: support dcf promisc configuration
net/ice: support dcf VLAN filter and offload configuration
net/ice: support DCF new VLAN capabilities
net/ice: support IPv6 NVGRE tunnel
net/ice: support new pattern of IPv4
net/ice: treat unknown package as OS default package
Dapeng Yu (1):
net/ice: enable CVL DCF device reset API
Jie Wang (2):
net/ice: add ops MTU-SET to dcf
net/ice: add ops dev-supported-ptypes-get to dcf
Kevin Liu (6):
net/ice: support dcf MAC configuration
net/ice: add enable/disable queues for DCF large VF
net/ice: add extended stats
net/ice: support queue information getting
net/ice: implement power management
doc: update for ice DCF datapath configuration
Robin Zhang (1):
net/ice: cleanup Tx buffers
Steve Yang (7):
net/ice: enable RSS RETA ops for DCF hardware
net/ice: enable RSS HASH ops for DCF hardware
net/ice: handle virtchnl event message without interrupt
net/ice: add DCF request queues function
net/ice: negotiate large VF and request more queues
net/ice: enable multiple queues configurations for large VF
net/ice: enable IRQ mapping configuration for large VF
doc/guides/nics/features/ice_dcf.ini | 15 +
drivers/net/ice/ice_dcf.c | 375 +++++++++-
drivers/net/ice/ice_dcf.h | 52 +-
drivers/net/ice/ice_dcf_ethdev.c | 986 +++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 14 +
drivers/net/ice/ice_ethdev.c | 13 +-
drivers/net/ice/ice_switch_filter.c | 8 +
7 files changed, 1375 insertions(+), 88 deletions(-)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 01/23] net/ice: enable RSS RETA ops for DCF hardware
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 02/23] net/ice: enable RSS HASH " Kevin Liu
` (22 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS RETA should be updated and queried by the application.
Add the related ops ('.reta_update', '.reta_query') for DCF.
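A minimal application-side usage sketch (assuming <rte_ethdev.h> is
included and reta_size/nb_rxq come from rte_eth_dev_info_get() on a
configured port):
    struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
    uint16_t i;
    int ret;

    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < reta_size; i++) {
        uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
        uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

        reta_conf[idx].mask |= 1ULL << shift;
        reta_conf[idx].reta[shift] = i % nb_rxq; /* spread over Rx queues */
    }
    ret = rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size); /* .reta_update */
    if (ret == 0)
        ret = rte_eth_dev_rss_reta_query(port_id, reta_conf, reta_size); /* .reta_query */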
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++++
3 files changed, 79 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f0c074b01..070d1b71ac 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
return err;
}
-static int
+int
ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_lut *rss_lut;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 6ec766ebda..b2c6aa2684 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59610e058f..1ac66ed990 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint8_t *lut;
+ uint16_t i, idx, shift;
+ int ret;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ /* store the old lut table temporarily */
+ rte_memcpy(lut, hw->rss_lut, reta_size);
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ /* send virtchnnl ops to configure rss*/
+ ret = ice_dcf_configure_rss_lut(hw);
+ if (ret) /* revert back */
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint16_t i, idx, shift;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = hw->rss_lut[i];
+ }
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
.tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 02/23] net/ice: enable RSS HASH ops for DCF hardware
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
2022-04-19 15:45 ` [PATCH v4 01/23] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 03/23] net/ice: cleanup Tx buffers Kevin Liu
` (21 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS HASH should be updated and queried by the application.
Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF.
Because DCF doesn't support configuring the RSS hash, only the hash key
can be updated within the '.rss_hash_update' op.
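A minimal application-side usage sketch (the 52-byte key length is an
assumption; it must match the rss_key_size the hardware reports):
    uint8_t rss_key[52] = { /* application-chosen key bytes */ };
    struct rte_eth_rss_conf rss_conf = {
        .rss_key = rss_key,
        .rss_key_len = sizeof(rss_key),
        /* rss_hf is not programmable on DCF; only the key is set. */
    };
    int ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);   /* .rss_hash_update */
    if (ret == 0)
        ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf); /* .rss_hash_conf_get */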
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 070d1b71ac..89c0203ba3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
hw->ets_config = NULL;
}
-static int
+int
ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_key *rss_key;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index b2c6aa2684..f0b45af5ae 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ac66ed990..ccad7fc304 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* HENA setting, it is enabled by default, no change */
+ if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) {
+ PMD_DRV_LOG(ERR, "The size of hash key configured "
+ "(%d) doesn't match the size of hardware can "
+ "support (%d)", rss_conf->rss_key_len,
+ hw->vf_res->rss_key_size);
+ return -EINVAL;
+ }
+
+ rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ return ice_dcf_configure_rss_key(hw);
+}
+
+static int
+ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* Just set it to default value now. */
+ rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL;
+
+ if (!rss_conf->rss_key)
+ return 0;
+
+ rss_conf->rss_key_len = hw->vf_res->rss_key_size;
+ rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len);
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tm_ops_get = ice_dcf_tm_ops_get,
.reta_update = ice_dcf_dev_rss_reta_update,
.reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 03/23] net/ice: cleanup Tx buffers
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
2022-04-19 15:45 ` [PATCH v4 01/23] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-19 15:45 ` [PATCH v4 02/23] net/ice: enable RSS HASH " Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 04/23] net/ice: add ops MTU-SET to dcf Kevin Liu
` (20 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Robin Zhang, Kevin Liu
From: Robin Zhang <robinx.zhang@intel.com>
Add support for the rte_eth_tx_done_cleanup op in DCF.
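A minimal usage sketch (port and queue ids are examples):
    /* Ask the PMD to free up to 32 mbufs already transmitted on Tx
     * queue 0; passing 0 as free_cnt means "free as many as possible". */
    int nb_freed = rte_eth_tx_done_cleanup(port_id, 0, 32);
    if (nb_freed < 0)
        printf("tx_done_cleanup failed: %d\n", nb_freed);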
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ccad7fc304..d8b5961514 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.reta_query = ice_dcf_dev_rss_reta_query,
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 04/23] net/ice: add ops MTU-SET to dcf
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (2 preceding siblings ...)
2022-04-19 15:45 ` [PATCH v4 03/23] net/ice: cleanup Tx buffers Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 05/23] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
` (19 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "mtu_set" to dcf, and it can configure the port mtu through
cmdline.
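A minimal usage sketch; as the op below enforces, the port must be
stopped first (in testpmd this corresponds to "port stop <id>" followed
by "port config mtu <id> <value>"):
    int ret = rte_eth_dev_stop(port_id);
    if (ret == 0)
        ret = rte_eth_dev_set_mtu(port_id, 1500); /* ends up in .mtu_set */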
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++
2 files changed, 20 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d8b5961514..06d752fd61 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &new_link);
}
+static int
+ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* mtu setting is forbidden if port is start */
+ if (dev->data->dev_started != 0) {
+ PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
+ dev->data->port_id);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
bool
ice_dcf_adminq_need_retry(struct ice_adapter *ad)
{
@@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
.tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 11a1305038..f2faf26f58 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -15,6 +15,12 @@
#define ICE_DCF_MAX_RINGS 1
+#define ICE_DCF_FRAME_SIZE_MAX 9728
+#define ICE_DCF_VLAN_TAG_SIZE 4
+#define ICE_DCF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
+#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+
struct ice_dcf_queue {
uint64_t dummy;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 05/23] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (3 preceding siblings ...)
2022-04-19 15:45 ` [PATCH v4 04/23] net/ice: add ops MTU-SET to dcf Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 06/23] net/ice: support dcf promisc configuration Kevin Liu
` (18 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "dev_supported_ptypes_get" to dcf, that dcf pmd can get
ptypes through the new API.
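A minimal usage sketch (the buffer size is arbitrary):
    uint32_t ptypes[16];
    int j, num;

    num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
                                           ptypes, RTE_DIM(ptypes));
    for (j = 0; j < num && j < (int)RTE_DIM(ptypes); j++)
        printf("supported ptype: 0x%08x\n", ptypes[j]);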
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 80 +++++++++++++++++++-------------
1 file changed, 49 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 06d752fd61..6a577a6582 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+static const uint32_t *
+ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+ return ptypes;
+}
+
static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
- .dev_start = ice_dcf_dev_start,
- .dev_stop = ice_dcf_dev_stop,
- .dev_close = ice_dcf_dev_close,
- .dev_reset = ice_dcf_dev_reset,
- .dev_configure = ice_dcf_dev_configure,
- .dev_infos_get = ice_dcf_dev_info_get,
- .rx_queue_setup = ice_rx_queue_setup,
- .tx_queue_setup = ice_tx_queue_setup,
- .rx_queue_release = ice_dev_rx_queue_release,
- .tx_queue_release = ice_dev_tx_queue_release,
- .rx_queue_start = ice_dcf_rx_queue_start,
- .tx_queue_start = ice_dcf_tx_queue_start,
- .rx_queue_stop = ice_dcf_rx_queue_stop,
- .tx_queue_stop = ice_dcf_tx_queue_stop,
- .link_update = ice_dcf_link_update,
- .stats_get = ice_dcf_stats_get,
- .stats_reset = ice_dcf_stats_reset,
- .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
- .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
- .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
- .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
- .flow_ops_get = ice_dcf_dev_flow_ops_get,
- .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
- .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
- .tm_ops_get = ice_dcf_tm_ops_get,
- .reta_update = ice_dcf_dev_rss_reta_update,
- .reta_query = ice_dcf_dev_rss_reta_query,
- .rss_hash_update = ice_dcf_dev_rss_hash_update,
- .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
- .tx_done_cleanup = ice_tx_done_cleanup,
- .mtu_set = ice_dcf_dev_mtu_set,
+ .dev_start = ice_dcf_dev_start,
+ .dev_stop = ice_dcf_dev_stop,
+ .dev_close = ice_dcf_dev_close,
+ .dev_reset = ice_dcf_dev_reset,
+ .dev_configure = ice_dcf_dev_configure,
+ .dev_infos_get = ice_dcf_dev_info_get,
+ .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .rx_queue_release = ice_dev_rx_queue_release,
+ .tx_queue_release = ice_dev_tx_queue_release,
+ .rx_queue_start = ice_dcf_rx_queue_start,
+ .tx_queue_start = ice_dcf_tx_queue_start,
+ .rx_queue_stop = ice_dcf_rx_queue_stop,
+ .tx_queue_stop = ice_dcf_tx_queue_stop,
+ .link_update = ice_dcf_link_update,
+ .stats_get = ice_dcf_stats_get,
+ .stats_reset = ice_dcf_stats_reset,
+ .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
+ .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
+ .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
+ .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .flow_ops_get = ice_dcf_dev_flow_ops_get,
+ .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
+ .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+ .tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 06/23] net/ice: support dcf promisc configuration
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (4 preceding siblings ...)
2022-04-19 15:45 ` [PATCH v4 05/23] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 07/23] net/ice: support dcf MAC configuration Kevin Liu
` (17 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support configuration of unicast and multicast promiscuous mode on DCF.
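A minimal application-side sketch (with this patch, both calls translate
to VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE messages to the PF):
    int ret = rte_eth_promiscuous_enable(port_id);      /* unicast promisc */
    if (ret == 0)
        ret = rte_eth_allmulticast_enable(port_id);     /* multicast promisc */
    /* rte_eth_promiscuous_disable()/rte_eth_allmulticast_disable()
     * clear each flag independently. */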
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 3 ++
2 files changed, 76 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6a577a6582..87d281ee93 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
}
static int
-ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+dcf_config_promisc(struct ice_dcf_adapter *adapter,
+ bool enable_unicast,
+ bool enable_multicast)
{
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_promisc_info promisc;
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ promisc.flags = 0;
+ promisc.vsi_id = hw->vsi_res->vsi_id;
+
+ if (enable_unicast)
+ promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+ if (enable_multicast)
+ promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+ args.req_msg = (uint8_t *)&promisc;
+ args.req_msglen = sizeof(promisc);
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE");
+ return err;
+ }
+
+ adapter->promisc_unicast_enabled = enable_unicast;
+ adapter->promisc_multicast_enabled = enable_multicast;
return 0;
}
+static int
+ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, true,
+ adapter->promisc_multicast_enabled);
+}
+
static int
ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, false,
+ adapter->promisc_multicast_enabled);
}
static int
ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ true);
}
static int
ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ false);
}
static int
@@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
+ dcf_config_promisc(adapter, false, false);
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index f2faf26f58..22e450527b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -33,6 +33,9 @@ struct ice_dcf_adapter {
struct ice_adapter parent; /* Must be first */
struct ice_dcf_hw real_hw;
+ bool promisc_unicast_enabled;
+ bool promisc_multicast_enabled;
+
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 07/23] net/ice: support dcf MAC configuration
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (5 preceding siblings ...)
2022-04-19 15:45 ` [PATCH v4 06/23] net/ice: support dcf promisc configuration Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:45 ` [PATCH v4 08/23] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
` (16 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
Below PMD ops are supported in this patch:
.mac_addr_add = dcf_dev_add_mac_addr
.mac_addr_remove = dcf_dev_del_mac_addr
.set_mc_addr_list = dcf_set_mc_addr_list
.mac_addr_set = dcf_dev_set_default_mac_addr
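A minimal application-side sketch exercising these ops (the address is a
made-up locally administered MAC):
    struct rte_ether_addr addr = {
        .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
    };
    int ret = rte_eth_dev_mac_addr_add(port_id, &addr, 0);          /* .mac_addr_add */
    if (ret == 0)
        ret = rte_eth_dev_default_mac_addr_set(port_id, &addr);     /* .mac_addr_set */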
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 9 +-
drivers/net/ice/ice_dcf.h | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 5 +-
4 files changed, 226 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 89c0203ba3..55ae68c456 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
}
int
-ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr,
+ bool add, uint8_t type)
{
struct virtchnl_ether_addr_list *list;
- struct rte_ether_addr *addr;
struct dcf_virtchnl_cmd args;
int len, err = 0;
@@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
}
len = sizeof(struct virtchnl_ether_addr_list);
- addr = hw->eth_dev->data->mac_addrs;
len += sizeof(struct virtchnl_ether_addr);
list = rte_zmalloc(NULL, len, 0);
@@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
rte_memcpy(list->list[0].addr, addr->addr_bytes,
sizeof(addr->addr_bytes));
+
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
-
+ list->list[0].type = type;
list->vsi_id = hw->vsi_res->vsi_id;
list->num_elements = 1;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index f0b45af5ae..78df202a77 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
-int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr, bool add,
+ uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 87d281ee93..0d944f9fd2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -26,6 +26,12 @@
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#define DCF_NUM_MACADDR_MAX 64
+
+static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add);
+
static int
ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- ret = ice_dcf_add_del_all_mac_addr(hw, true);
+ ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs,
+ true, VIRTCHNL_ETHER_ADDR_PRIMARY);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to add mac addr");
return ret;
}
+ if (dcf_ad->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, true);
+ if (ret)
+ return ret;
+ }
+
+
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
@@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
rte_intr_efd_disable(intr_handle);
rte_intr_vec_list_free(intr_handle);
- ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
+ ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw,
+ dcf_ad->real_hw.eth_dev->data->mac_addrs,
+ false, VIRTCHNL_ETHER_ADDR_PRIMARY);
+
+ if (dcf_ad->mc_addrs_num)
+ /* flush previous addresses */
+ (void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw,
+ dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, false);
+
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- dev_info->max_mac_addrs = 1;
+ dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
@@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
false);
}
+static int
+dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ int err;
+
+ if (rte_is_zero_ether_addr(addr)) {
+ PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+ return -EINVAL;
+ }
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to add MAC address");
+ return err;
+ }
+
+ return 0;
+}
+
+static void
+dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct rte_ether_addr *addr = &dev->data->mac_addrs[index];
+ int err;
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to remove MAC address");
+}
+
+static int
+dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add)
+{
+ struct virtchnl_ether_addr_list *list;
+ struct dcf_virtchnl_cmd args;
+ uint32_t i;
+ int len, err = 0;
+
+ len = sizeof(struct virtchnl_ether_addr_list);
+ len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
+
+ list = rte_zmalloc(NULL, len, 0);
+ if (!list) {
+ PMD_DRV_LOG(ERR, "fail to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
+ sizeof(list->list[i].addr));
+ list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ list->vsi_id = hw->vsi_res->vsi_id;
+ list->num_elements = mc_addrs_num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+ VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.req_msg = (uint8_t *)list;
+ args.req_msglen = len;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" :
+ "OP_DEL_ETHER_ADDRESS");
+ rte_free(list);
+ return err;
+}
+
+static int
+dcf_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i;
+ int ret;
+
+
+ if (mc_addrs_num > DCF_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR,
+ "can't add more than a limited number (%u) of addresses.",
+ (uint32_t)DCF_NUM_MACADDR_MAX);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addrs[i])) {
+ const uint8_t *mac = mc_addrs[i].addr_bytes;
+
+ PMD_DRV_LOG(ERR,
+ "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x",
+ mac[0], mac[1], mac[2], mac[3], mac[4],
+ mac[5]);
+ return -EINVAL;
+ }
+ }
+
+ if (adapter->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num, false);
+ if (ret)
+ return ret;
+ }
+ if (!mc_addrs_num) {
+ adapter->mc_addrs_num = 0;
+ return 0;
+ }
+
+ /* add new ones */
+ ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true);
+ if (ret) {
+ /* if adding mac address list fails, should add the
+ * previous addresses back.
+ */
+ if (adapter->mc_addrs_num)
+ (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num,
+ true);
+ return ret;
+ }
+ adapter->mc_addrs_num = mc_addrs_num;
+ memcpy(adapter->mc_addrs,
+ mc_addrs, mc_addrs_num * sizeof(*mc_addrs));
+
+ return 0;
+}
+
+static int
+dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_ether_addr *old_addr;
+ int ret;
+
+ old_addr = hw->eth_dev->data->mac_addrs;
+ if (rte_is_same_ether_addr(old_addr, mac_addr))
+ return 0;
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ old_addr->addr_bytes[0],
+ old_addr->addr_bytes[1],
+ old_addr->addr_bytes[2],
+ old_addr->addr_bytes[3],
+ old_addr->addr_bytes[4],
+ old_addr->addr_bytes[5]);
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ mac_addr->addr_bytes[0],
+ mac_addr->addr_bytes[1],
+ mac_addr->addr_bytes[2],
+ mac_addr->addr_bytes[3],
+ mac_addr->addr_bytes[4],
+ mac_addr->addr_bytes[5]);
+
+ if (ret)
+ return -EIO;
+
+ rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs);
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
.allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .mac_addr_add = dcf_dev_add_mac_addr,
+ .mac_addr_remove = dcf_dev_del_mac_addr,
+ .set_mc_addr_list = dcf_set_mc_addr_list,
+ .mac_addr_set = dcf_dev_set_default_mac_addr,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 22e450527b..27f6402786 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -14,7 +14,7 @@
#include "ice_dcf.h"
#define ICE_DCF_MAX_RINGS 1
-
+#define DCF_NUM_MACADDR_MAX 64
#define ICE_DCF_FRAME_SIZE_MAX 9728
#define ICE_DCF_VLAN_TAG_SIZE 4
#define ICE_DCF_ETH_OVERHEAD \
@@ -35,7 +35,8 @@ struct ice_dcf_adapter {
bool promisc_unicast_enabled;
bool promisc_multicast_enabled;
-
+ uint32_t mc_addrs_num;
+ struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX];
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 08/23] net/ice: support dcf VLAN filter and offload configuration
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (6 preceding siblings ...)
2022-04-19 15:45 ` [PATCH v4 07/23] net/ice: support dcf MAC configuration Kevin Liu
@ 2022-04-19 15:45 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 09/23] net/ice: support DCF new VLAN capabilities Kevin Liu
` (15 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:45 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Below PMD ops are supported in this patch:
.vlan_filter_set = dcf_dev_vlan_filter_set
.vlan_offload_set = dcf_dev_vlan_offload_set
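A minimal application-side sketch (the VLAN id is an example; note that
rte_eth_dev_set_vlan_offload() programs the complete VLAN offload state
from the one mask it is given):
    int ret = rte_eth_dev_vlan_filter(port_id, 100, 1); /* .vlan_filter_set */
    if (ret == 0)
        ret = rte_eth_dev_set_vlan_offload(port_id,
                        RTE_ETH_VLAN_STRIP_OFFLOAD);    /* .vlan_offload_set */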
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++++++++
1 file changed, 101 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0d944f9fd2..e58cdf47d2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_filter_list *vlan_list;
+ uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+ sizeof(uint16_t)];
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+ vlan_list->vsi_id = hw->vsi_res->vsi_id;
+ vlan_list->num_elements = 1;
+ vlan_list->vlan_id[0] = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+ args.req_msg = cmd_buffer;
+ args.req_msglen = sizeof(cmd_buffer);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
+
+ return err;
+}
+
+static int
+dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_ENABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static int
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ /* Vlan stripping setting */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ /* Enable or disable VLAN stripping */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ err = dcf_enable_vlan_strip(hw);
+ else
+ err = dcf_disable_vlan_strip(hw);
+
+ if (err)
+ return -EIO;
+ }
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mac_addr_remove = dcf_dev_del_mac_addr,
.set_mc_addr_list = dcf_set_mc_addr_list,
.mac_addr_set = dcf_dev_set_default_mac_addr,
+ .vlan_filter_set = dcf_dev_vlan_filter_set,
+ .vlan_offload_set = dcf_dev_vlan_offload_set,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 09/23] net/ice: support DCF new VLAN capabilities
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (7 preceding siblings ...)
2022-04-19 15:45 ` [PATCH v4 08/23] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 10/23] net/ice: enable CVL DCF device reset API Kevin Liu
` (14 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The new VLAN virtchnl opcodes introduce new capabilities like VLAN
filtering, stripping and insertion.
The DCF first needs to query the VLAN capabilities based on the current
device configuration.
Based on the negotiated capabilities, the DCF can configure an inner VLAN
filter when port VLAN is enabled, and can configure an outer VLAN (0x8100)
when port VLAN is disabled, to stay compatible with legacy mode.
When the port VLAN is updated by the DCF, the DCF needs to reset to query
the new VLAN capabilities.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 27 +++++
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++++++++---
3 files changed, 182 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..885d58c0f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
return 0;
}
+static int
+dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_vlan_caps vlan_v2_caps;
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS;
+ args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps;
+ args.rsp_buflen = sizeof(vlan_v2_caps);
+
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS");
+ return ret;
+ }
+
+ rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
+ return 0;
+}
+
int
ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
@@ -701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
+ if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) &&
+ dcf_get_vlan_offload_caps_v2(hw))
+ goto err_rss;
+
return 0;
err_rss:
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78df202a77..32e6031bd9 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -107,6 +107,7 @@ struct ice_dcf_hw {
uint16_t nb_msix;
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
+ struct virtchnl_vlan_caps vlan_v2_caps;
/* Link status */
bool link_up;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e58cdf47d2..d4bfa182a4 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,46 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_supported_caps *supported_caps =
+ &hw->vlan_v2_caps.filtering.filtering_support;
+ struct virtchnl_vlan *vlan_setting;
+ struct virtchnl_vlan_filter_list_v2 vlan_filter;
+ struct dcf_virtchnl_cmd args;
+ uint32_t filtering_caps;
+ int err;
+
+ if (supported_caps->outer) {
+ filtering_caps = supported_caps->outer;
+ vlan_setting = &vlan_filter.filters[0].outer;
+ } else {
+ filtering_caps = supported_caps->inner;
+ vlan_setting = &vlan_filter.filters[0].inner;
+ }
+
+ if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
+ return -ENOTSUP;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.vport_id = hw->vsi_res->vsi_id;
+ vlan_filter.num_elements = 1;
+ vlan_setting->tpid = RTE_ETHER_TYPE_VLAN;
+ vlan_setting->tci = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2;
+ args.req_msg = (uint8_t *)&vlan_filter;
+ args.req_msglen = sizeof(vlan_filter);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2");
+
+ return err;
+}
+
static int
dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
{
@@ -1052,6 +1092,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
return err;
}
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+ err = dcf_add_del_vlan_v2(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+ }
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static void
+dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable)
+{
+ struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i, j;
+ uint64_t ids;
+
+ for (i = 0; i < RTE_DIM(vfc->ids); i++) {
+ if (vfc->ids[i] == 0)
+ continue;
+
+ ids = vfc->ids[i];
+ for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) {
+ if (ids & 1)
+ dcf_add_del_vlan_v2(hw, 64 * i + j, enable);
+ }
+ }
+}
+
+static int
+dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable)
+{
+ struct virtchnl_vlan_supported_caps *stripping_caps =
+ &hw->vlan_v2_caps.offloads.stripping_support;
+ struct virtchnl_vlan_setting vlan_strip;
+ struct dcf_virtchnl_cmd args;
+ uint32_t *ethertype;
+ int ret;
+
+ if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.outer_ethertype_setting;
+ else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.inner_ethertype_setting;
+ else
+ return -ENOTSUP;
+
+ memset(&vlan_strip, 0, sizeof(vlan_strip));
+ vlan_strip.vport_id = hw->vsi_res->vsi_id;
+ *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 :
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2;
+ args.req_msg = (uint8_t *)&vlan_strip;
+ args.req_msglen = sizeof(vlan_strip);
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ enable ? "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" :
+ "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ bool enable;
+ int err;
+
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
+
+ dcf_iterate_vlan_filters_v2(dev, enable);
+ }
+
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
+
+ err = dcf_config_vlan_strip_v2(hw, enable);
+ /* If not support, the stripping is already disabled by PF */
+ if (err == -ENOTSUP && !enable)
+ err = 0;
+ if (err)
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int
dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
{
@@ -1084,30 +1234,17 @@ dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
return ret;
}
-static int
-dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct ice_dcf_adapter *adapter = dev->data->dev_private;
- struct ice_dcf_hw *hw = &adapter->real_hw;
- int err;
-
- if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
- return -ENOTSUP;
-
- err = dcf_add_del_vlan(hw, vlan_id, on);
- if (err)
- return -EIO;
- return 0;
-}
-
static int
dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
int err;
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
+ return dcf_dev_vlan_offload_set_v2(dev, mask);
+
if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
return -ENOTSUP;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 10/23] net/ice: enable CVL DCF device reset API
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (8 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 09/23] net/ice: support DCF new VLAN capabilities Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 11/23] net/ice: support IPv6 NVGRE tunnel Kevin Liu
` (13 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Dapeng Yu, Kevin Liu
From: Dapeng Yu <dapengx.yu@intel.com>
Enable CVL DCF device reset API.
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 24 ++++++++++++++++++++++++
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 25 insertions(+)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 885d58c0f4..9c2f13cf72 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1163,3 +1163,27 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
rte_free(list);
return err;
}
+
+int
+ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
+{
+ int ret;
+
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+ ice_dcf_disable_irq0(hw);
+ rte_intr_disable(intr_handle);
+ rte_intr_callback_unregister(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ ret = ice_dcf_mode_disable(hw);
+ if (ret)
+ goto err;
+ ret = ice_dcf_get_vf_resource(hw);
+err:
+ rte_intr_callback_register(intr_handle, ice_dcf_dev_interrupt_handler,
+ hw);
+ rte_intr_enable(intr_handle);
+ ice_dcf_enable_irq0(hw);
+ return ret;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 32e6031bd9..8cf17e7700 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -137,6 +137,7 @@ int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
+int ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
void ice_dcf_tm_conf_uninit(struct rte_eth_dev *dev);
int ice_dcf_replay_vf_bw(struct ice_dcf_hw *hw, uint16_t vf_id);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 11/23] net/ice: support IPv6 NVGRE tunnel
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (9 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 10/23] net/ice: enable CVL DCF device reset API Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 12/23] net/ice: support new pattern of IPv4 Kevin Liu
` (12 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add protocol definition and pattern matching for IPv6 NVGRE tunnel.
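As an illustration, a rule exercising this path could be created through
rte_flow roughly as below (a hypothetical sketch, not part of this patch;
whether the rule is accepted also depends on the loaded DDP package, and
the port id and queue index are placeholders):

#include <rte_flow.h>

static struct rte_flow *
create_ipv6_nvgre_rule(uint16_t port_id, struct rte_flow_error *error)
{
	static const struct rte_flow_attr attr = { .ingress = 1 };
	/* Match IPv6 whose next header is GRE (0x2F, ICE_IPV6_PROTO_NVGRE) */
	static const struct rte_flow_item_ipv6 ipv6_spec = {
		.hdr.proto = 0x2F,
	};
	static const struct rte_flow_item_ipv6 ipv6_mask = {
		.hdr.proto = 0xFF,
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV6,
		  .spec = &ipv6_spec, .mask = &ipv6_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_NVGRE },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	static const struct rte_flow_action_queue queue = { .index = 3 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}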
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 36c9bffb73..c04547235c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -31,6 +31,7 @@
#define ICE_PPP_IPV4_PROTO 0x0021
#define ICE_PPP_IPV6_PROTO 0x0057
#define ICE_IPV4_PROTO_NVGRE 0x002F
+#define ICE_IPV6_PROTO_NVGRE 0x002F
#define ICE_SW_PRI_BASE 6
#define ICE_SW_INSET_ETHER ( \
@@ -763,6 +764,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
break;
}
}
+ if ((ipv6_spec->hdr.proto &
+ ipv6_mask->hdr.proto) ==
+ ICE_IPV6_PROTO_NVGRE)
+ *tun_type = ICE_SW_TUN_AND_NON_TUN;
if (ipv6_mask->hdr.proto)
*input |= ICE_INSET_IPV6_NEXT_HDR;
if (ipv6_mask->hdr.hop_limits)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 12/23] net/ice: support new pattern of IPv4
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (10 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 11/23] net/ice: support IPv6 NVGRE tunnel Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 13/23] net/ice: treat unknown package as OS default package Kevin Liu
` (11 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev
Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Junfeng Guo,
Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Add the definition and a pattern entry for the IPv4 pattern MAC/VLAN/IPv4.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_switch_filter.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c04547235c..4db7021e3f 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -38,6 +38,8 @@
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
#define ICE_SW_INSET_MAC_VLAN ( \
ICE_SW_INSET_ETHER | ICE_INSET_VLAN_INNER)
+#define ICE_SW_INSET_MAC_VLAN_IPV4 ( \
+ ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4)
#define ICE_SW_INSET_MAC_QINQ ( \
ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_VLAN_INNER | \
ICE_INSET_VLAN_OUTER)
@@ -215,6 +217,7 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
{pattern_eth_ipv4, ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
+ {pattern_eth_vlan_ipv4, ICE_SW_INSET_MAC_VLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE},
{pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 13/23] net/ice: treat unknown package as OS default package
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (11 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 12/23] net/ice: support new pattern of IPv4 Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 14/23] net/ice: handle virtchnl event message without interrupt Kevin Liu
` (10 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
In order to allow a custom DDP package to be used, an unknown package
should be treated as the OS default package.
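The resulting classification is plain string matching with a fallback (a
standalone sketch mirroring the updated ice_load_pkg_type(); the package
name strings are assumptions based on the ice driver's
ICE_OS_DEFAULT_PKG_NAME and ICE_COMMS_PKG_NAME definitions):

#include <stdio.h>
#include <string.h>

#define PKG_NAME_SIZE 32	/* stand-in for ICE_PKG_NAME_SIZE */

static const char *
classify_pkg(const char *active_pkg_name)
{
	if (!strncmp(active_pkg_name, "ICE OS Default Package", PKG_NAME_SIZE))
		return "OS default";
	if (!strncmp(active_pkg_name, "ICE COMMS Package", PKG_NAME_SIZE))
		return "Comms";
	/* Anything else (e.g. a custom DDP package) now falls back here */
	return "OS default (unidentified package)";
}

int
main(void)
{
	printf("%s\n", classify_pkg("My Custom DDP Package"));
	return 0;
}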
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8bb34b874b..f868d12d7c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1710,13 +1710,16 @@ ice_load_pkg_type(struct ice_hw *hw)
/* store the activated package type (OS default or Comms) */
if (!strncmp((char *)hw->active_pkg_name, ICE_OS_DEFAULT_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_OS_DEFAULT;
- else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
- ICE_PKG_NAME_SIZE))
+ } else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME,
+ ICE_PKG_NAME_SIZE)) {
package_type = ICE_PKG_TYPE_COMMS;
- else
- package_type = ICE_PKG_TYPE_UNKNOWN;
+ } else {
+ PMD_INIT_LOG(WARNING,
+ "The package type is not identified, treaded as OS default type");
+ package_type = ICE_PKG_TYPE_OS_DEFAULT;
+ }
PMD_INIT_LOG(NOTICE, "Active package is: %d.%d.%d.%d, %s (%s VLAN mode)",
hw->active_pkg_ver.major, hw->active_pkg_ver.minor,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 14/23] net/ice: handle virtchnl event message without interrupt
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (12 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 13/23] net/ice: treat unknown package as OS default package Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 15/23] net/ice: add DCF request queues function Kevin Liu
` (9 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Currently, the VF can only handle virtchnl event messages in the interrupt
handler. This does not work in two cases:
1. If an event message arrives during VF initialization, before the
interrupt is enabled, the message will not be handled correctly.
2. Some virtchnl commands need to receive the event message and handle
it with the interrupt disabled.
To solve this issue, we add virtchnl event message handling to the path
that reads virtchnl messages from the PF adminq.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 9c2f13cf72..1415f26ac3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -63,11 +63,32 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
goto again;
v_op = rte_le_to_cpu_32(event.desc.cookie_high);
- if (v_op != op)
- goto again;
+
+ if (v_op == VIRTCHNL_OP_EVENT) {
+ struct virtchnl_pf_event *vpe =
+ (struct virtchnl_pf_event *)event.msg_buf;
+ switch (vpe->event) {
+ case VIRTCHNL_EVENT_RESET_IMPENDING:
+ hw->resetting = true;
+ if (rsp_msglen)
+ *rsp_msglen = 0;
+ return IAVF_SUCCESS;
+ default:
+ goto again;
+ }
+ } else {
+ /* async reply msg on command issued by vf previously */
+ if (v_op != op) {
+ PMD_DRV_LOG(WARNING,
+ "command mismatch, expect %u, get %u",
+ op, v_op);
+ goto again;
+ }
+ }
if (rsp_msglen != NULL)
*rsp_msglen = event.msg_len;
+
return rte_le_to_cpu_32(event.desc.cookie_low);
again:
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 15/23] net/ice: add DCF request queues function
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (13 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 14/23] net/ice: handle virtchnl event message without interrupt Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 16/23] net/ice: negotiate large VF and request more queues Kevin Liu
` (8 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Add a new virtchnl function to request additional queues from the PF. The
current default number of queue pairs is 16. In order to support a DCF
port with up to 256 queue pairs, enable this request-queues function.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 98 +++++++++++++++++++++++++++++++++------
drivers/net/ice/ice_dcf.h | 1 +
2 files changed, 86 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 1415f26ac3..6aeafa6681 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -468,18 +468,38 @@ ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
goto ret;
}
- do {
- if (!cmd->pending)
- break;
-
- rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
- } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
-
- if (cmd->v_ret != IAVF_SUCCESS) {
- err = -1;
- PMD_DRV_LOG(ERR,
- "No response (%d times) or return failure (%d) for cmd %d",
- i, cmd->v_ret, cmd->v_op);
+ switch (cmd->v_op) {
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ err = ice_dcf_recv_cmd_rsp_no_irq(hw,
+ VIRTCHNL_OP_REQUEST_QUEUES,
+ cmd->rsp_msgbuf,
+ cmd->rsp_buflen,
+ NULL);
+ if (err != IAVF_SUCCESS || !hw->resetting) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "Failed to get response of "
+ "VIRTCHNL_OP_REQUEST_QUEUES %d",
+ err);
+ }
+ break;
+ default:
+ /* For other virtchnl ops in running time,
+ * wait for the cmd done flag.
+ */
+ do {
+ if (!cmd->pending)
+ break;
+ rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+ } while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
+
+ if (cmd->v_ret != IAVF_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR,
+ "No response (%d times) or "
+ "return failure (%d) for cmd %d",
+ i, cmd->v_ret, cmd->v_op);
+ }
}
ret:
@@ -1011,6 +1031,58 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
+{
+ struct virtchnl_vf_res_request vfres;
+ struct dcf_virtchnl_cmd args;
+ uint16_t num_queue_pairs;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) {
+ PMD_DRV_LOG(ERR, "request queues not supported");
+ return -1;
+ }
+
+ if (num == 0) {
+ PMD_DRV_LOG(ERR, "queue number cannot be zero");
+ return -1;
+ }
+ vfres.num_queue_pairs = num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_REQUEST_QUEUES;
+
+ args.req_msg = (u8 *)&vfres;
+ args.req_msglen = sizeof(vfres);
+
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ /*
+ * disable interrupt to avoid the admin queue message to be read
+ * before iavf_read_msg_from_pf.
+ */
+ rte_intr_disable(hw->eth_dev->intr_handle);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ rte_intr_enable(hw->eth_dev->intr_handle);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
+ return err;
+ }
+
+ /* request additional queues failed, return available number */
+ num_queue_pairs = ((struct virtchnl_vf_res_request *)
+ args.rsp_msgbuf)->num_queue_pairs;
+ PMD_DRV_LOG(ERR,
+ "request queues failed, only %u queues available",
+ num_queue_pairs);
+
+ return -1;
+}
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 8cf17e7700..99498e2184 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -127,6 +127,7 @@ int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 16/23] net/ice: negotiate large VF and request more queues
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (14 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 15/23] net/ice: add DCF request queues function Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 17/23] net/ice: enable multiple queues configurations for large VF Kevin Liu
` (7 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Negotiate the large VF capability with the PF during VF initialization.
If large VF is supported and more than 16 queues are required, the VF
requests additional queues from the PF and marks that large VF is
enabled.
If the number of allocated queues is larger than 16, the max RSS queue
region can no longer be 16. Add a function to query the max RSS queue
region from the PF, and use it in RSS initialization and future filter
configuration.
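For reference, the queue-region width returned by the PF maps to a
power-of-two queue count, as the new ice_dcf_get_max_rss_queue_region()
computes (a standalone sketch of that arithmetic):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* hw->max_rss_qregion = 1 << qregion_width */
	for (uint16_t width = 4; width <= 8; width++)
		printf("qregion_width %u -> max RSS queue region %u\n",
		       width, (uint16_t)(1 << width));
	/* width 8 gives 256, matching ICE_DCF_MAX_NUM_QUEUES_LV */
	return 0;
}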
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 34 +++++++++++++++-
drivers/net/ice/ice_dcf.h | 4 ++
drivers/net/ice/ice_dcf_ethdev.c | 69 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 2 +
4 files changed, 106 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 6aeafa6681..7091658841 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -257,7 +257,8 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
- VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
+ VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS;
err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
(uint8_t *)&caps, sizeof(caps));
@@ -1083,6 +1084,37 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return -1;
}
+int
+ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ uint16_t qregion_width;
+ int err;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_MAX_RSS_QREGION;
+ args.req_msg = NULL;
+ args.req_msglen = 0;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of "
+ "VIRTCHNL_OP_GET_MAX_RSS_QREGION");
+ return err;
+ }
+
+ qregion_width = ((struct virtchnl_max_rss_qregion *)
+ args.rsp_msgbuf)->qregion_width;
+ hw->max_rss_qregion = (uint16_t)(1 << qregion_width);
+
+ return 0;
+}
+
+
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 99498e2184..05ea91d2a5 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -105,6 +105,7 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
+ uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -114,6 +115,8 @@ struct ice_dcf_hw {
uint32_t link_speed;
bool resetting;
+ /* Indicate large VF support enabled or not */
+ bool lv_enabled;
};
int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -128,6 +131,7 @@ int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
+int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d4bfa182a4..a43c5a320d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -39,6 +39,8 @@ static int
ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num);
+
static int
ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
@@ -663,6 +665,11 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
{
struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
struct ice_adapter *ad = &dcf_ad->parent;
+ struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ int ret;
+
+ uint16_t num_queue_pairs =
+ RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
@@ -670,6 +677,47 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ /* Large VF setting */
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_DFLT) {
+ if (!(hw->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_LARGE_NUM_QPAIRS)) {
+ PMD_DRV_LOG(ERR, "large VF is not supported");
+ return -1;
+ }
+
+ if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_LV) {
+ PMD_DRV_LOG(ERR,
+ "queue pairs number cannot be larger than %u",
+ ICE_DCF_MAX_NUM_QUEUES_LV);
+ return -1;
+ }
+
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ ret = ice_dcf_get_max_rss_queue_region(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "get max rss queue region failed");
+ return ret;
+ }
+
+ hw->lv_enabled = true;
+ } else {
+ /* Check if large VF is already enabled. If so, disable and
+ * release redundant queue resource.
+ */
+ if (hw->lv_enabled) {
+ ret = ice_dcf_queues_req_reset(dev, num_queue_pairs);
+ if (ret)
+ return ret;
+
+ hw->lv_enabled = false;
+ }
+ /* if large VF is not required, use default rss queue region */
+ hw->max_rss_qregion = ICE_DCF_MAX_NUM_QUEUES_DFLT;
+ }
+
return 0;
}
@@ -681,8 +729,8 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_hw *hw = &adapter->real_hw;
dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
- dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
- dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+ dev_info->max_rx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
+ dev_info->max_tx_queues = ICE_DCF_MAX_NUM_QUEUES_LV;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
dev_info->hash_key_size = hw->vf_res->rss_key_size;
@@ -1829,6 +1877,23 @@ ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
return 0;
}
+static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int ret;
+
+ ret = ice_dcf_request_queues(hw, num);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "request queues from PF failed");
+ return ret;
+ }
+ PMD_DRV_LOG(INFO, "change queue pairs from %u to %u",
+ hw->vsi_res->num_queue_pairs, num);
+
+ return ice_dcf_dev_reset(dev);
+}
+
static int
ice_dcf_cap_check_handler(__rte_unused const char *key,
const char *value, __rte_unused void *opaque)
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 27f6402786..4a08d32e0c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,6 +20,8 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 17/23] net/ice: enable multiple queues configurations for large VF
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (15 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 16/23] net/ice: negotiate large VF and request more queues Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 18/23] net/ice: enable IRQ mapping configuration " Kevin Liu
` (6 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
Since the adminq buffer size is limited to 4K, the current virtchnl
command VIRTCHNL_OP_CONFIG_VSI_QUEUES cannot configure up to 256 queues
with a single message. In this patch, we send the message multiple times
when needed, making sure each buffer stays under the 4K limit.
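The chunking arithmetic is straightforward (a standalone sketch of the
loop added to ice_dcf_dev_start(); ICE_DCF_CFG_Q_NUM_PER_BUF is the
32-queue batch size defined in this patch):

#include <stdint.h>
#include <stdio.h>

#define ICE_DCF_CFG_Q_NUM_PER_BUF 32

int
main(void)
{
	uint16_t num_queue_pairs = 256;	/* large VF maximum */
	uint16_t index = 0;

	/* Full 32-queue batches first, then one final message for the
	 * remainder: 256 queues -> 8 messages, each well under 4K.
	 */
	while (num_queue_pairs > ICE_DCF_CFG_Q_NUM_PER_BUF) {
		printf("configure queues [%u, %u)\n",
		       index, index + ICE_DCF_CFG_Q_NUM_PER_BUF);
		num_queue_pairs -= ICE_DCF_CFG_Q_NUM_PER_BUF;
		index += ICE_DCF_CFG_Q_NUM_PER_BUF;
	}
	printf("configure queues [%u, %u)\n", index, index + num_queue_pairs);
	return 0;
}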
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 11 ++++++-----
drivers/net/ice/ice_dcf.h | 3 ++-
drivers/net/ice/ice_dcf_ethdev.c | 20 ++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7091658841..7004c00f1c 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -949,7 +949,8 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
#define IAVF_RXDID_COMMS_OVS_1 22
int
-ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index)
{
struct ice_rx_queue **rxq =
(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
@@ -962,16 +963,16 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw)
int err;
size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+ sizeof(vc_config->qpair[0]) * num_queue_pairs;
vc_config = rte_zmalloc("cfg_queue", size, 0);
if (!vc_config)
return -ENOMEM;
vc_config->vsi_id = hw->vsi_res->vsi_id;
- vc_config->num_queue_pairs = hw->num_queue_pairs;
+ vc_config->num_queue_pairs = num_queue_pairs;
- for (i = 0, vc_qp = vc_config->qpair;
- i < hw->num_queue_pairs;
+ for (i = index, vc_qp = vc_config->qpair;
+ i < index + num_queue_pairs;
i++, vc_qp++) {
vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
vc_qp->txq.queue_id = i;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 05ea91d2a5..e36428a92a 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -129,7 +129,8 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
-int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
+ uint16_t num_queue_pairs, uint16_t index);
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a43c5a320d..78df82d5b5 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -513,6 +513,8 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
struct ice_adapter *ad = &dcf_ad->parent;
struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+ uint16_t num_queue_pairs;
+ uint16_t index = 0;
int ret;
if (hw->resetting) {
@@ -531,6 +533,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
+ num_queue_pairs = hw->num_queue_pairs;
ret = ice_dcf_init_rx_queues(dev);
if (ret) {
@@ -546,7 +549,20 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
}
}
- ret = ice_dcf_configure_queues(hw);
+ /* If needed, send configure queues msg multiple times to make the
+ * adminq buffer length smaller than the 4K limitation.
+ */
+ while (num_queue_pairs > ICE_DCF_CFG_Q_NUM_PER_BUF) {
+ if (ice_dcf_configure_queues(hw,
+ ICE_DCF_CFG_Q_NUM_PER_BUF, index) != 0) {
+ PMD_DRV_LOG(ERR, "configure queues failed");
+ goto err_queue;
+ }
+ num_queue_pairs -= ICE_DCF_CFG_Q_NUM_PER_BUF;
+ index += ICE_DCF_CFG_Q_NUM_PER_BUF;
+ }
+
+ ret = ice_dcf_configure_queues(hw, num_queue_pairs, index);
if (ret) {
PMD_DRV_LOG(ERR, "Fail to config queues");
return ret;
@@ -586,7 +602,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
+err_queue:
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 4a08d32e0c..2fac1e5b21 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -22,6 +22,7 @@
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 18/23] net/ice: enable IRQ mapping configuration for large VF
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (16 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 17/23] net/ice: enable multiple queues configurations for large VF Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 19/23] net/ice: add enable/disable queues for DCF " Kevin Liu
` (5 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
The current IRQ mapping configuration only supports a maximum of 16
queues and 16 MSIX vectors. Change the queue-vector mapping structure
to indicate up to 256 queues. A new opcode is used to handle the case
with a large number of queues. To stay within the adminq buffer size
limit, we support sending the virtchnl message multiple times when
needed.
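The per-queue vector assignment itself stays a simple round robin over the
data vectors (a standalone sketch of the mapping built in
ice_dcf_config_rx_queues_irqs(); the two vector-id constants are stand-ins
for IAVF_MISC_VEC_ID and IAVF_RX_VEC_START):

#include <stdint.h>
#include <stdio.h>

#define MISC_VEC_ID	0
#define RX_VEC_START	1

int
main(void)
{
	uint16_t nb_rx_queues = 8, nb_msix = 4;
	uint16_t vec = MISC_VEC_ID;

	/* Each Rx queue takes the next vector, wrapping back to the
	 * first data vector once all nb_msix vectors are used.
	 */
	for (uint16_t q = 0; q < nb_rx_queues; q++) {
		printf("queue %u -> vector %u\n", q, vec++);
		if (vec >= nb_msix)
			vec = RX_VEC_START;
	}
	return 0;
}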
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 50 +++++++++++++++++++++++++++----
drivers/net/ice/ice_dcf.h | 10 ++++++-
drivers/net/ice/ice_dcf_ethdev.c | 51 +++++++++++++++++++++++++++-----
drivers/net/ice/ice_dcf_ethdev.h | 1 +
4 files changed, 99 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7004c00f1c..290f754049 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1115,7 +1115,6 @@ ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw)
return 0;
}
-
int
ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
{
@@ -1132,13 +1131,14 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return -ENOMEM;
map_info->num_vectors = hw->nb_msix;
- for (i = 0; i < hw->nb_msix; i++) {
- vecmap = &map_info->vecmap[i];
+ for (i = 0; i < hw->eth_dev->data->nb_rx_queues; i++) {
+ vecmap =
+ &map_info->vecmap[hw->qv_map[i].vector_id - hw->msix_base];
vecmap->vsi_id = hw->vsi_res->vsi_id;
vecmap->rxitr_idx = 0;
- vecmap->vector_id = hw->msix_base + i;
+ vecmap->vector_id = hw->qv_map[i].vector_id;
vecmap->txq_map = 0;
- vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+ vecmap->rxq_map |= 1 << hw->qv_map[i].queue_id;
}
memset(&args, 0, sizeof(args));
@@ -1154,6 +1154,46 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index)
+{
+ struct virtchnl_queue_vector_maps *map_info;
+ struct virtchnl_queue_vector *qv_maps;
+ struct dcf_virtchnl_cmd args;
+ int len, i, err;
+ int count = 0;
+
+ len = sizeof(struct virtchnl_queue_vector_maps) +
+ sizeof(struct virtchnl_queue_vector) * (num - 1);
+
+ map_info = rte_zmalloc("map_info", len, 0);
+ if (!map_info)
+ return -ENOMEM;
+
+ map_info->vport_id = hw->vsi_res->vsi_id;
+ map_info->num_qv_maps = num;
+ for (i = index; i < index + map_info->num_qv_maps; i++) {
+ qv_maps = &map_info->qv_maps[count++];
+ qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
+ qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
+ qv_maps->queue_id = hw->qv_map[i].queue_id;
+ qv_maps->vector_id = hw->qv_map[i].vector_id;
+ }
+
+ args.v_op = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
+ args.req_msg = (u8 *)map_info;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
+
+ rte_free(map_info);
+ return err;
+}
+
int
ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
{
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index e36428a92a..ce57a687ab 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -74,6 +74,11 @@ struct ice_dcf_tm_conf {
bool committed;
};
+struct ice_dcf_qv_map {
+ uint16_t queue_id;
+ uint16_t vector_id;
+};
+
struct ice_dcf_hw {
struct iavf_hw avf;
@@ -106,7 +111,8 @@ struct ice_dcf_hw {
uint16_t msix_base;
uint16_t nb_msix;
uint16_t max_rss_qregion; /* max RSS queue region supported by PF */
- uint16_t rxq_map[16];
+
+ struct ice_dcf_qv_map *qv_map; /* queue vector mapping */
struct virtchnl_eth_stats eth_stats_offset;
struct virtchnl_vlan_caps vlan_v2_caps;
@@ -134,6 +140,8 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw,
int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num);
int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
+ uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 78df82d5b5..1ddba02ebb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -143,6 +143,7 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
{
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct ice_dcf_qv_map *qv_map;
uint16_t interval, i;
int vec;
@@ -161,6 +162,14 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
}
+ qv_map = rte_zmalloc("qv_map",
+ dev->data->nb_rx_queues * sizeof(struct ice_dcf_qv_map), 0);
+ if (!qv_map) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+ dev->data->nb_rx_queues);
+ return -1;
+ }
+
if (!dev->data->dev_conf.intr_conf.rxq ||
!rte_intr_dp_is_en(intr_handle)) {
/* Rx interrupt disabled, Map interrupt only for writeback */
@@ -196,17 +205,22 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
}
IAVF_WRITE_FLUSH(&hw->avf);
/* map all queues to the same interrupt */
- for (i = 0; i < dev->data->nb_rx_queues; i++)
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
+ }
+ hw->qv_map = qv_map;
} else {
if (!rte_intr_allow_others(intr_handle)) {
hw->nb_msix = 1;
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[hw->msix_base] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = hw->msix_base;
rte_intr_vec_list_index_set(intr_handle,
i, IAVF_MISC_VEC_ID);
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
hw->msix_base);
@@ -219,21 +233,44 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
- hw->rxq_map[vec] |= 1 << i;
+ qv_map[i].queue_id = i;
+ qv_map[i].vector_id = vec;
rte_intr_vec_list_index_set(intr_handle,
i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
+ hw->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
"%u vectors are mapping to %u Rx queues",
hw->nb_msix, dev->data->nb_rx_queues);
}
}
- if (ice_dcf_config_irq_map(hw)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping failed");
- return -1;
+ if (!hw->lv_enabled) {
+ if (ice_dcf_config_irq_map(hw)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+ return -1;
+ }
+ } else {
+ uint16_t num_qv_maps = dev->data->nb_rx_queues;
+ uint16_t index = 0;
+
+ while (num_qv_maps > ICE_DCF_IRQ_MAP_NUM_PER_BUF) {
+ if (ice_dcf_config_irq_map_lv(hw,
+ ICE_DCF_IRQ_MAP_NUM_PER_BUF, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+ num_qv_maps -= ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ index += ICE_DCF_IRQ_MAP_NUM_PER_BUF;
+ }
+
+ if (ice_dcf_config_irq_map_lv(hw, num_qv_maps, index)) {
+ PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
+ return -1;
+ }
+
}
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 2fac1e5b21..9ef524c97c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -23,6 +23,7 @@
#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
#define ICE_DCF_MAX_NUM_QUEUES_LV 256
#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v4 19/23] net/ice: add enable/disable queues for DCF large VF
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (17 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 18/23] net/ice: enable IRQ mapping configuration " Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 20/23] net/ice: add extended stats Kevin Liu
` (4 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The current virtchnl structure for enabling/disabling queues only
supports a maximum of 32 queue pairs. Use a new opcode and structure
that can indicate up to 256 queue pairs, in order to enable/disable
queues in the large VF case.
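From the application side these paths are reached through the standard
per-queue start API (a minimal sketch, not part of this patch; port and
queue ids are placeholders):

#include <rte_ethdev.h>

static int
start_queue_pair(uint16_t port_id, uint16_t queue_id)
{
	int ret;

	/* Both land in ice_dcf_rx/tx_queue_start(), which pick the
	 * legacy or the V2 (large VF) switch-queue opcode based on
	 * hw->lv_enabled.
	 */
	ret = rte_eth_dev_rx_queue_start(port_id, queue_id);
	if (ret != 0)
		return ret;
	return rte_eth_dev_tx_queue_start(port_id, queue_id);
}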
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 99 +++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf.h | 5 ++
drivers/net/ice/ice_dcf_ethdev.c | 26 +++++++--
drivers/net/ice/ice_dcf_ethdev.h | 8 +--
4 files changed, 125 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 290f754049..23edfd09b1 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -90,7 +90,6 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op,
*rsp_msglen = event.msg_len;
return rte_le_to_cpu_32(event.desc.cookie_low);
-
again:
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
@@ -896,7 +895,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
{
struct rte_eth_dev *dev = hw->eth_dev;
struct rte_eth_rss_conf *rss_conf;
- uint8_t i, j, nb_q;
+ uint16_t i, j, nb_q;
int ret;
rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
@@ -1075,6 +1074,12 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num)
return err;
}
+ /* request queues succeeded, vf is resetting */
+ if (hw->resetting) {
+ PMD_DRV_LOG(INFO, "vf is resetting");
+ return 0;
+ }
+
/* request additional queues failed, return available number */
num_queue_pairs = ((struct virtchnl_vf_res_request *)
args.rsp_msgbuf)->num_queue_pairs;
@@ -1185,7 +1190,8 @@ ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
args.req_msg = (u8 *)map_info;
args.req_msglen = len;
args.rsp_msgbuf = hw->arq_buf;
- args.req_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
@@ -1225,6 +1231,50 @@ ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
return err;
}
+int
+ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ if (rx) {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ } else {
+ queue_chunk->type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+ }
+
+ if (on)
+ args.v_op = VIRTCHNL_OP_ENABLE_QUEUES_V2;
+ else
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+ on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_disable_queues(struct ice_dcf_hw *hw)
{
@@ -1254,6 +1304,49 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
return err;
}
+int
+ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_del_ena_dis_queues *queue_select;
+ struct virtchnl_queue_chunk *queue_chunk;
+ struct dcf_virtchnl_cmd args;
+ int err, len;
+
+ len = sizeof(struct virtchnl_del_ena_dis_queues) +
+ sizeof(struct virtchnl_queue_chunk) *
+ (ICE_DCF_RXTX_QUEUE_CHUNKS_NUM - 1);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = ICE_DCF_RXTX_QUEUE_CHUNKS_NUM;
+ queue_select->vport_id = hw->vsi_res->vsi_id;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].type = VIRTCHNL_QUEUE_TYPE_TX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].num_queues =
+ hw->eth_dev->data->nb_tx_queues;
+
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].type = VIRTCHNL_QUEUE_TYPE_RX;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].start_queue_id = 0;
+ queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].num_queues =
+ hw->eth_dev->data->nb_rx_queues;
+
+ args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2;
+ args.req_msg = (u8 *)queue_select;
+ args.req_msglen = len;
+ args.rsp_msgbuf = hw->arq_buf;
+ args.rsp_msglen = ICE_DCF_AQ_BUF_SZ;
+ args.rsp_buflen = ICE_DCF_AQ_BUF_SZ;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_QUEUES_V2");
+ rte_free(queue_select);
+ return err;
+}
+
int
ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats)
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index ce57a687ab..78ab23aaa6 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,8 @@
#include "base/ice_type.h"
#include "ice_logs.h"
+#define ICE_DCF_RXTX_QUEUE_CHUNKS_NUM 2
+
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -143,7 +145,10 @@ int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw,
uint16_t num, uint16_t index);
int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw,
+ uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ddba02ebb..e46c8405aa 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -317,6 +317,7 @@ static int
ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_rx_queue *rxq;
int err = 0;
@@ -339,7 +340,11 @@ ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, rx_queue_id, true, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, rx_queue_id, true, true);
+
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
rx_queue_id);
@@ -448,6 +453,7 @@ static int
ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct ice_dcf_adapter *ad = dev->data->dev_private;
+ struct ice_dcf_hw *dcf_hw = &ad->real_hw;
struct iavf_hw *hw = &ad->real_hw.avf;
struct ice_tx_queue *txq;
int err = 0;
@@ -463,7 +469,10 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
IAVF_WRITE_FLUSH(hw);
/* Ready to switch the queue on */
- err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+ if (!dcf_hw->lv_enabled)
+ err = ice_dcf_switch_queue(dcf_hw, tx_queue_id, false, true);
+ else
+ err = ice_dcf_switch_queue_lv(dcf_hw, tx_queue_id, false, true);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
@@ -650,12 +659,17 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
struct ice_dcf_hw *hw = &ad->real_hw;
struct ice_rx_queue *rxq;
struct ice_tx_queue *txq;
- int ret, i;
+ int i;
/* Stop All queues */
- ret = ice_dcf_disable_queues(hw);
- if (ret)
- PMD_DRV_LOG(WARNING, "Fail to stop queues");
+ if (!hw->lv_enabled) {
+ if (ice_dcf_disable_queues(hw))
+			PMD_DRV_LOG(WARNING, "Failed to stop queues");
+ } else {
+ if (ice_dcf_disable_queues_lv(hw))
+ PMD_DRV_LOG(WARNING,
+				    "Failed to stop queues for large VF");
+ }
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 9ef524c97c..3f740e2c7b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -20,10 +20,10 @@
#define ICE_DCF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
-#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
-#define ICE_DCF_MAX_NUM_QUEUES_LV 256
-#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
-#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
+#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16
+#define ICE_DCF_MAX_NUM_QUEUES_LV 256
+#define ICE_DCF_CFG_Q_NUM_PER_BUF 32
+#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128
struct ice_dcf_queue {
uint64_t dummy;
--
2.33.1
* [PATCH v4 20/23] net/ice: add extended stats
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (18 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 19/23] net/ice: add enable/disable queues for DCF " Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 21/23] net/ice: support queue information getting Kevin Liu
` (3 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add implementation of xstats() functions in DCF PMD.
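As a reference, below is a minimal application-side sketch of consuming
these ops through the generic ethdev API (the port id and buffer sizing
are assumptions; error handling is trimmed; not part of this patch):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Dump all extended stats of an already-initialized port. */
static void
dump_xstats(uint16_t port_id)
{
    int i, nb = rte_eth_xstats_get_names(port_id, NULL, 0);
    struct rte_eth_xstat_name *names;
    struct rte_eth_xstat *vals;

    if (nb <= 0)
        return;
    names = calloc(nb, sizeof(*names));
    vals = calloc(nb, sizeof(*vals));
    if (names != NULL && vals != NULL &&
        rte_eth_xstats_get_names(port_id, names, nb) == nb &&
        rte_eth_xstats_get(port_id, vals, nb) == nb) {
        for (i = 0; i < nb; i++)
            printf("%s: %" PRIu64 "\n",
                   names[vals[i].id].name, vals[i].value);
    }
    free(names);
    free(vals);
}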
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.h | 23 +++++++++-
drivers/net/ice/ice_dcf_ethdev.c | 75 ++++++++++++++++++++++++++++++++
2 files changed, 97 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78ab23aaa6..8bdad679b1 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -16,7 +16,11 @@
#include "ice_logs.h"
#define ICE_DCF_RXTX_QUEUE_CHUNKS_NUM 2
-
+/* ICE_DCF_DEV_PRIVATE_TO */
+#define ICE_DCF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+ ((struct ice_dcf_adapter *)adapter)
+#define ICE_DCF_DEV_PRIVATE_TO_VF(adapter) \
+ (&((struct ice_dcf_adapter *)adapter)->vf)
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -81,6 +85,23 @@ struct ice_dcf_qv_map {
uint16_t vector_id;
};
+struct ice_dcf_eth_stats {
+ u64 rx_bytes; /* gorc */
+ u64 rx_unicast; /* uprc */
+ u64 rx_multicast; /* mprc */
+ u64 rx_broadcast; /* bprc */
+ u64 rx_discards; /* rdpc */
+ u64 rx_unknown_protocol; /* rupp */
+ u64 tx_bytes; /* gotc */
+ u64 tx_unicast; /* uptc */
+ u64 tx_multicast; /* mptc */
+ u64 tx_broadcast; /* bptc */
+ u64 tx_discards; /* tdpc */
+ u64 tx_errors; /* tepc */
+ u64 rx_no_desc; /* repc */
+ u64 rx_errors; /* repc */
+};
+
struct ice_dcf_hw {
struct iavf_hw avf;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e46c8405aa..a4f0ec36a1 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -47,6 +47,30 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
static int
ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev);
+struct rte_ice_dcf_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+static const struct rte_ice_dcf_xstats_name_off rte_ice_dcf_stats_strings[] = {
+ {"rx_bytes", offsetof(struct ice_dcf_eth_stats, rx_bytes)},
+ {"rx_unicast_packets", offsetof(struct ice_dcf_eth_stats, rx_unicast)},
+ {"rx_multicast_packets", offsetof(struct ice_dcf_eth_stats, rx_multicast)},
+ {"rx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, rx_broadcast)},
+ {"rx_dropped_packets", offsetof(struct ice_dcf_eth_stats, rx_discards)},
+ {"rx_unknown_protocol_packets", offsetof(struct ice_dcf_eth_stats,
+ rx_unknown_protocol)},
+ {"tx_bytes", offsetof(struct ice_dcf_eth_stats, tx_bytes)},
+ {"tx_unicast_packets", offsetof(struct ice_dcf_eth_stats, tx_unicast)},
+ {"tx_multicast_packets", offsetof(struct ice_dcf_eth_stats, tx_multicast)},
+ {"tx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, tx_broadcast)},
+ {"tx_dropped_packets", offsetof(struct ice_dcf_eth_stats, tx_discards)},
+ {"tx_error_packets", offsetof(struct ice_dcf_eth_stats, tx_errors)},
+};
+
+#define ICE_DCF_NB_XSTATS (sizeof(rte_ice_dcf_stats_strings) / \
+ sizeof(rte_ice_dcf_stats_strings[0]))
+
static uint16_t
ice_dcf_recv_pkts(__rte_unused void *rx_queue,
__rte_unused struct rte_mbuf **bufs,
@@ -1610,6 +1634,54 @@ ice_dcf_stats_reset(struct rte_eth_dev *dev)
return 0;
}
+static int ice_dcf_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ unsigned int i;
+
+ if (xstats_names != NULL)
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ snprintf(xstats_names[i].name,
+ sizeof(xstats_names[i].name),
+ "%s", rte_ice_dcf_stats_strings[i].name);
+ }
+ return ICE_DCF_NB_XSTATS;
+}
+
+static int ice_dcf_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n)
+{
+ int ret;
+ unsigned int i;
+ struct ice_dcf_adapter *adapter =
+ ICE_DCF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_eth_stats *postats = &hw->eth_stats_offset;
+ struct virtchnl_eth_stats pnstats;
+
+ if (n < ICE_DCF_NB_XSTATS)
+ return ICE_DCF_NB_XSTATS;
+
+ ret = ice_dcf_query_stats(hw, &pnstats);
+ if (ret != 0)
+ return 0;
+
+ if (!xstats)
+ return 0;
+
+ ice_dcf_update_stats(postats, &pnstats);
+
+ /* loop over xstats array and values from pstats */
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ xstats[i].id = i;
+ xstats[i].value = *(uint64_t *)(((char *)&pnstats) +
+ rte_ice_dcf_stats_strings[i].offset);
+ }
+
+ return ICE_DCF_NB_XSTATS;
+}
+
static void
ice_dcf_free_repr_info(struct ice_dcf_adapter *dcf_adapter)
{
@@ -1881,6 +1953,9 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
+ .xstats_get = ice_dcf_xstats_get,
+ .xstats_get_names = ice_dcf_xstats_get_names,
+ .xstats_reset = ice_dcf_stats_reset,
.promiscuous_enable = ice_dcf_dev_promiscuous_enable,
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
--
2.33.1
* [PATCH v4 21/23] net/ice: support queue information getting
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (19 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 20/23] net/ice: add extended stats Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 22/23] net/ice: implement power management Kevin Liu
` (2 subsequent siblings)
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add the below ops (a usage sketch follows the list):
rxq_info_get
txq_info_get
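A minimal sketch, assuming the port and queue ids and that the queues
have already been set up; both calls land on the new DCF ops:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print a few fields reported by the rxq/txq info ops. */
static void
show_queue_info(uint16_t port_id, uint16_t queue_id)
{
    struct rte_eth_rxq_info rxq;
    struct rte_eth_txq_info txq;

    if (rte_eth_rx_queue_info_get(port_id, queue_id, &rxq) == 0)
        printf("rxq %u: nb_desc=%u scattered=%u\n", queue_id,
               rxq.nb_desc, rxq.scattered_rx);
    if (rte_eth_tx_queue_info_get(port_id, queue_id, &txq) == 0)
        printf("txq %u: nb_desc=%u\n", queue_id, txq.nb_desc);
}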
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a4f0ec36a1..02d9bd0fa7 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1950,6 +1950,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_start = ice_dcf_tx_queue_start,
.rx_queue_stop = ice_dcf_rx_queue_stop,
.tx_queue_stop = ice_dcf_tx_queue_stop,
+ .rxq_info_get = ice_rxq_info_get,
+ .txq_info_get = ice_txq_info_get,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
* [PATCH v4 22/23] net/ice: implement power management
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (20 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 21/23] net/ice: support queue information getting Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-19 15:46 ` [PATCH v4 23/23] doc: update for ice DCF datapath configuration Kevin Liu
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add support for the power management API by implementing the
'get_monitor_addr' callback, which returns the address of an Rx ring's
status bit.
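A minimal sketch of how an application could exercise this callback via
the rte_power PMD management API (the lcore/port/queue ids are
assumptions; this illustrates the intent, not a tested setup):

#include <rte_ethdev.h>
#include <rte_power_pmd_mgmt.h>

/* Let the power library sleep on the address returned by the PMD's
 * get_monitor_addr whenever the Rx queue is idle (UMWAIT-capable CPU). */
static int
enable_rx_monitor(unsigned int lcore_id, uint16_t port_id, uint16_t queue_id)
{
    return rte_power_ethdev_pmgmt_queue_enable(lcore_id, port_id,
            queue_id, RTE_POWER_MGMT_TYPE_MONITOR);
}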
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 02d9bd0fa7..0a7ae54079 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1952,6 +1952,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_stop = ice_dcf_tx_queue_stop,
.rxq_info_get = ice_rxq_info_get,
.txq_info_get = ice_txq_info_get,
+ .get_monitor_addr = ice_get_monitor_addr,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
* [PATCH v4 23/23] doc: update for ice DCF datapath configuration
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (21 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 22/23] net/ice: implement power management Kevin Liu
@ 2022-04-19 15:46 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
23 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 15:46 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Update "ice_dcf" driver feature list.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 54073f0b88..2f3e14a24e 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -15,6 +15,20 @@ L3 checksum offload = P
L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
+Promiscuous mode = Y
+Allmulticast mode = Y
+Unicast MAC filter = Y
+Link status = Y
+Link status event = Y
+Packet type parsing = Y
+VLAN filter = Y
+VLAN offload = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Extended stats = Y
+MTU update = Y
+Power mgmt address monitor = Y
Basic stats = Y
Linux = Y
x86-32 = Y
--
2.33.1
* [PATCH v4 0/2] fix DCF function defect
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
` (22 preceding siblings ...)
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
@ 2022-04-19 16:01 ` Kevin Liu
2022-04-19 16:01 ` [PATCH v4 1/2] net/ice: fix DCF ACL flow engine Kevin Liu
2022-04-19 16:01 ` [PATCH v4 2/2] net/ice: fix DCF reset Kevin Liu
23 siblings, 2 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 16:01 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
This series carries fixes for defects found while completing the
common VF features for DCF.
v4:
Transfer the fix patch from the feature cover-letter to here.
v3:
* remove patch:
1.net/ice/base: add VXLAN support for switch filter
2.net/ice: add VXLAN support for switch filter
3.common/iavf: support flushing rules and reporting DCF id
4.net/ice/base: fix ethertype filter input set
5.net/ice/base: support IPv6 GRE UDP pattern
6.net/ice/base: support new patterns of TCP and UDP
7.net/ice: support new patterns of TCP and UDP
8.net/ice/base: support IPv4 GRE tunnel
9.net/ice: support IPv4 GRE raw pattern type
10.net/ice/base: update Profile ID table for VXLAN
11.net/ice/base: update Protocol ID table to match DVM DDP
v2:
* remove patch:
1.net/iavf: support checking if device is an MDCF instance
2.net/ice: support MDCF(multi-DCF) instance
3.net/ice/base: support custom DDP buildin recipe
4.net/ice: support buildin recipe configuration
5.net/ice/base: support custom ddp package version
6.net/ice: disable ACL function for MDCF instance
Alvin Zhang (1):
net/ice: fix DCF ACL flow engine
Kevin Liu (1):
net/ice: fix DCF reset
drivers/net/ice/base/ice_common.c | 4 +++-
drivers/net/ice/ice_acl_filter.c | 20 ++++++++++++++----
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 17 ++++++++++++++-
drivers/net/ice/ice_dcf_parent.c | 3 +++
drivers/net/ice/ice_generic_flow.c | 34 +++++++++++++++++++++++-------
6 files changed, 65 insertions(+), 15 deletions(-)
--
2.33.1
* [PATCH v4 1/2] net/ice: fix DCF ACL flow engine
2022-04-19 16:01 ` [PATCH v4 0/2] fix DCF function defect Kevin Liu
@ 2022-04-19 16:01 ` Kevin Liu
2022-04-20 12:01 ` Zhang, Qi Z
2022-04-19 16:01 ` [PATCH v4 2/2] net/ice: fix DCF reset Kevin Liu
1 sibling, 1 reply; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 16:01 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
ACL is not a necessary feature for DCF and may not be supported by
the ice kernel driver, so in this patch the program does not return
the ACL initialization failure to higher-level functions; instead, it
prints some error logs, cleans up the related resources and unregisters
the ACL engine.
Fixes: 40d466fa9f76 ("net/ice: support ACL filter in DCF")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_acl_filter.c | 20 ++++++++++++++----
drivers/net/ice/ice_generic_flow.c | 34 +++++++++++++++++++++++-------
2 files changed, 42 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0..20a1f86c43 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -56,6 +56,8 @@ ice_pattern_match_item ice_acl_pattern[] = {
{pattern_eth_ipv4_sctp, ICE_ACL_INSET_ETH_IPV4_SCTP, ICE_INSET_NONE, ICE_INSET_NONE},
};
+static void ice_acl_prof_free(struct ice_hw *hw);
+
static int
ice_acl_prof_alloc(struct ice_hw *hw)
{
@@ -1007,17 +1009,27 @@ ice_acl_init(struct ice_adapter *ad)
ret = ice_acl_setup(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_bitmap_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
ret = ice_acl_prof_init(pf);
if (ret)
- return ret;
+ goto deinit_acl;
- return ice_register_parser(parser, ad);
+ ret = ice_register_parser(parser, ad);
+ if (ret)
+ goto deinit_acl;
+
+ return 0;
+
+deinit_acl:
+ ice_deinit_acl(pf);
+ ice_acl_prof_free(hw);
+ PMD_DRV_LOG(ERR, "ACL init failed, may not supported!");
+ return ret;
}
static void
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 57eb002bde..cfdc4bd697 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1817,6 +1817,12 @@ ice_register_flow_engine(struct ice_flow_engine *engine)
TAILQ_INSERT_TAIL(&engine_list, engine, node);
}
+static void
+ice_unregister_flow_engine(struct ice_flow_engine *engine)
+{
+ TAILQ_REMOVE(&engine_list, engine, node);
+}
+
int
ice_flow_init(struct ice_adapter *ad)
{
@@ -1843,9 +1849,18 @@ ice_flow_init(struct ice_adapter *ad)
ret = engine->init(ad);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to initialize engine %d",
- engine->type);
- return ret;
+ /**
+ * ACL may not supported in kernel driver,
+ * so just unregister the engine.
+ */
+ if (engine->type == ICE_FLOW_ENGINE_ACL) {
+ ice_unregister_flow_engine(engine);
+ } else {
+ PMD_INIT_LOG(ERR,
+ "Failed to initialize engine %d",
+ engine->type);
+ return ret;
+ }
}
}
return 0;
@@ -1937,7 +1952,7 @@ ice_register_parser(struct ice_flow_parser *parser,
list = ice_get_parser_list(parser, ad);
if (list == NULL)
- return -EINVAL;
+ goto err;
if (ad->devargs.pipe_mode_support) {
TAILQ_INSERT_TAIL(list, parser_node, node);
@@ -1949,7 +1964,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -1960,7 +1975,7 @@ ice_register_parser(struct ice_flow_parser *parser,
ICE_FLOW_ENGINE_SWITCH) {
TAILQ_INSERT_AFTER(list, existing_node,
parser_node, node);
- goto DONE;
+ return 0;
}
}
TAILQ_INSERT_HEAD(list, parser_node, node);
@@ -1969,11 +1984,14 @@ ice_register_parser(struct ice_flow_parser *parser,
} else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) {
TAILQ_INSERT_HEAD(list, parser_node, node);
} else {
- return -EINVAL;
+ goto err;
}
}
-DONE:
return 0;
+err:
+ rte_free(parser_node);
+ PMD_DRV_LOG(ERR, "%s failed.", __func__);
+ return -EINVAL;
}
void
--
2.33.1
* [PATCH v4 2/2] net/ice: fix DCF reset
2022-04-19 16:01 ` [PATCH v4 0/2] fix DCF function defect Kevin Liu
2022-04-19 16:01 ` [PATCH v4 1/2] net/ice: fix DCF ACL flow engine Kevin Liu
@ 2022-04-19 16:01 ` Kevin Liu
1 sibling, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-19 16:01 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
After the PF triggers a VF reset, the VF PMD must reinitialize all
resources before it can perform any operation on the hardware.
This patch adds a flag to indicate whether the VF has been reset by
the PF, and updates the DCF reset operations according to this flag.
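For context, a minimal sketch of the application side that the new
RTE_ETH_EVENT_INTR_RESET notification targets (the callback and the
recovery flow are assumptions, trimmed of error handling):

#include <stdbool.h>
#include <rte_ethdev.h>

static volatile bool port_needs_reset;

/* Runs in interrupt context; defer the heavy work to the main loop,
 * which should call rte_eth_dev_reset() and then reconfigure/restart. */
static int
reset_event_cb(uint16_t port_id __rte_unused, enum rte_eth_event_type type,
               void *param __rte_unused, void *ret_param __rte_unused)
{
    if (type == RTE_ETH_EVENT_INTR_RESET)
        port_needs_reset = true;
    return 0;
}

/* Registration, done once after probe:
 *   rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
 *                                 reset_event_cb, NULL);
 */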
Fixes: 1a86f4dbdf42 ("net/ice: support DCF device reset")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/base/ice_common.c | 4 +++-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 17 ++++++++++++++++-
drivers/net/ice/ice_dcf_parent.c | 3 +++
4 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index db87bacd97..13feb55469 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -755,6 +755,7 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
status = ice_init_def_sw_recp(hw, &hw->switch_info->recp_list);
if (status) {
ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
return status;
}
return ICE_SUCCESS;
@@ -823,7 +824,6 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
}
ice_rm_sw_replay_rule_info(hw, sw);
ice_free(hw, sw->recp_list);
- ice_free(hw, sw);
}
/**
@@ -833,6 +833,8 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw)
void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
{
ice_cleanup_fltr_mgmt_single(hw, hw->switch_info);
+ ice_free(hw, hw->switch_info);
+ hw->switch_info = NULL;
}
/**
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 23edfd09b1..35773e2acd 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1429,7 +1429,7 @@ ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
int ret;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
ice_dcf_disable_irq0(hw);
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0a7ae54079..08306442a2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1028,6 +1028,15 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
uint32_t i;
int len, err = 0;
+ if (hw->resetting) {
+ if (!add)
+ return 0;
+
+ PMD_DRV_LOG(ERR,
+			    "failed to add multicast MACs while VF is resetting");
+ return -EIO;
+ }
+
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
@@ -1714,7 +1723,13 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
- (void)ice_dcf_dev_stop(dev);
+ if (adapter->parent.pf.adapter_stopped)
+ (void)ice_dcf_dev_stop(dev);
+
+ if (adapter->real_hw.resetting) {
+ ice_dcf_uninit_hw(dev, &adapter->real_hw);
+ ice_dcf_init_hw(dev, &adapter->real_hw);
+ }
ice_free_queues(dev);
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 2f96dedcce..7f7ed796e2 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -240,6 +240,9 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
case VIRTCHNL_EVENT_RESET_IMPENDING:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
dcf_hw->resetting = true;
+ rte_eth_dev_callback_process(dcf_hw->eth_dev,
+ RTE_ETH_EVENT_INTR_RESET,
+ NULL);
break;
case VIRTCHNL_EVENT_LINK_CHANGE:
PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
--
2.33.1
* RE: [PATCH v4 1/2] net/ice: fix DCF ACL flow engine
2022-04-19 16:01 ` [PATCH v4 1/2] net/ice: fix DCF ACL flow engine Kevin Liu
@ 2022-04-20 12:01 ` Zhang, Qi Z
0 siblings, 0 replies; 170+ messages in thread
From: Zhang, Qi Z @ 2022-04-20 12:01 UTC (permalink / raw)
To: Liu, KevinX, dev; +Cc: Yang, Qiming, Yang, SteveX, Alvin Zhang
> -----Original Message-----
> From: Liu, KevinX <kevinx.liu@intel.com>
> Sent: Wednesday, April 20, 2022 12:02 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Yang, SteveX <stevex.yang@intel.com>; Alvin Zhang
> <alvinx.zhang@intel.com>; Liu, KevinX <kevinx.liu@intel.com>
> Subject: [PATCH v4 1/2] net/ice: fix DCF ACL flow engine
>
> From: Alvin Zhang <alvinx.zhang@intel.com>
>
> ACL is not a necessary feature for DCF and may not be supported by the
> ice kernel driver, so in this patch the program does not return the ACL
> initialization failure to higher-level functions; instead, it prints some
> error logs, cleans up the related resources and unregisters the ACL engine.
>
> Fixes: 40d466fa9f76 ("net/ice: support ACL filter in DCF")
>
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
> ---
> drivers/net/ice/ice_acl_filter.c | 20 ++++++++++++++----
> drivers/net/ice/ice_generic_flow.c | 34 +++++++++++++++++++++++-------
> 2 files changed, 42 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
> index 8fe6f5aeb0..20a1f86c43 100644
> --- a/drivers/net/ice/ice_acl_filter.c
> +++ b/drivers/net/ice/ice_acl_filter.c
> @@ -56,6 +56,8 @@ ice_pattern_match_item ice_acl_pattern[] = {
> {pattern_eth_ipv4_sctp, ICE_ACL_INSET_ETH_IPV4_SCTP,
> ICE_INSET_NONE, ICE_INSET_NONE},
> };
>
> +static void ice_acl_prof_free(struct ice_hw *hw);
> +
> static int
> ice_acl_prof_alloc(struct ice_hw *hw)
> {
> @@ -1007,17 +1009,27 @@ ice_acl_init(struct ice_adapter *ad)
>
> ret = ice_acl_setup(pf);
> if (ret)
> - return ret;
> + goto deinit_acl;
>
> ret = ice_acl_bitmap_init(pf);
> if (ret)
> - return ret;
> + goto deinit_acl;
>
> ret = ice_acl_prof_init(pf);
> if (ret)
> - return ret;
> + goto deinit_acl;
>
> - return ice_register_parser(parser, ad);
> + ret = ice_register_parser(parser, ad);
> + if (ret)
> + goto deinit_acl;
> +
> + return 0;
> +
> +deinit_acl:
> + ice_deinit_acl(pf);
> + ice_acl_prof_free(hw);
> + PMD_DRV_LOG(ERR, "ACL init failed, may not supported!");
Better to print the error message at the place where the error happens, for easy debugging.
> + return ret;
> }
>
> static void
> diff --git a/drivers/net/ice/ice_generic_flow.c
> b/drivers/net/ice/ice_generic_flow.c
> index 57eb002bde..cfdc4bd697 100644
> --- a/drivers/net/ice/ice_generic_flow.c
> +++ b/drivers/net/ice/ice_generic_flow.c
> @@ -1817,6 +1817,12 @@ ice_register_flow_engine(struct ice_flow_engine
> *engine)
> TAILQ_INSERT_TAIL(&engine_list, engine, node); }
>
> +static void
> +ice_unregister_flow_engine(struct ice_flow_engine *engine) {
> + TAILQ_REMOVE(&engine_list, engine, node); }
> +
> int
> ice_flow_init(struct ice_adapter *ad)
> {
> @@ -1843,9 +1849,18 @@ ice_flow_init(struct ice_adapter *ad)
>
> ret = engine->init(ad);
> if (ret) {
> - PMD_INIT_LOG(ERR, "Failed to initialize engine %d",
> - engine->type);
> - return ret;
> + /**
> + * ACL may not supported in kernel driver,
may not be supported
> + * so just unregister the engine.
> + */
> + if (engine->type == ICE_FLOW_ENGINE_ACL) {
> + ice_unregister_flow_engine(engine);
> + } else {
> + PMD_INIT_LOG(ERR,
> + "Failed to initialize engine %d",
> + engine->type);
> + return ret;
> + }
> }
> }
> return 0;
> @@ -1937,7 +1952,7 @@ ice_register_parser(struct ice_flow_parser *parser,
>
> list = ice_get_parser_list(parser, ad);
> if (list == NULL)
> - return -EINVAL;
> + goto err;
>
> if (ad->devargs.pipe_mode_support) {
> TAILQ_INSERT_TAIL(list, parser_node, node); @@ -1949,7
> +1964,7 @@ ice_register_parser(struct ice_flow_parser *parser,
> ICE_FLOW_ENGINE_ACL) {
> TAILQ_INSERT_AFTER(list,
> existing_node,
> parser_node, node);
> - goto DONE;
> + return 0;
> }
> }
> TAILQ_INSERT_HEAD(list, parser_node, node); @@ -
> 1960,7 +1975,7 @@ ice_register_parser(struct ice_flow_parser *parser,
> ICE_FLOW_ENGINE_SWITCH) {
> TAILQ_INSERT_AFTER(list,
> existing_node,
> parser_node, node);
> - goto DONE;
> + return 0;
> }
> }
> TAILQ_INSERT_HEAD(list, parser_node, node); @@ -
> 1969,11 +1984,14 @@ ice_register_parser(struct ice_flow_parser *parser,
> } else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) {
> TAILQ_INSERT_HEAD(list, parser_node, node);
> } else {
> - return -EINVAL;
> + goto err;
> }
> }
> -DONE:
> return 0;
> +err:
> + rte_free(parser_node);
> + PMD_DRV_LOG(ERR, "%s failed.", __func__);
> + return -EINVAL;
> }
>
> void
> --
> 2.33.1
* [PATCH v5 00/12] complete common VF features for DCF
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
` (22 preceding siblings ...)
2022-04-19 15:46 ` [PATCH v4 23/23] doc: update for ice DCF datapath configuration Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
` (12 more replies)
23 siblings, 13 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The DCF PMD supports the below dev ops,
dev_supported_ptypes_get
dev_link_update
xstats_get
xstats_get_names
xstats_reset
promiscuous_enable
promiscuous_disable
allmulticast_enable
allmulticast_disable
mac_addr_add
mac_addr_remove
set_mc_addr_list
vlan_filter_set
vlan_offload_set
mac_addr_set
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
rxq_info_get
txq_info_get
mtu_set
tx_done_cleanup
get_monitor_addr
v5:
* remove patch:
1.complete common VF features for DCF
2.net/ice: enable CVL DCF device reset API
3.net/ice: support IPv6 NVGRE tunnel
4.net/ice: support new pattern of IPv4
5.net/ice: treat unknown package as OS default package
6.net/ice: handle virtchnl event message without interrupt
7.net/ice: add DCF request queues function
8.net/ice: negotiate large VF and request more queues
9.net/ice: enable multiple queues configurations for large VF
10.net/ice: enable IRQ mapping configuration for large VF
11.net/ice: add enable/disable queues for DCF large VF
v4:
* remove patch:
1.testpmd: force flow flush
2.net/ice: fix DCF ACL flow engine
3.net/ice: fix DCF reset
* add patch:
1.net/ice: add extended stats
2.net/ice: support queue information getting
3.net/ice: implement power management
4.doc: update for ice DCF datapath configuration
v3:
* remove patch:
1.net/ice/base: add VXLAN support for switch filter
2.net/ice: add VXLAN support for switch filter
3.common/iavf: support flushing rules and reporting DCF id
4.net/ice/base: fix ethertype filter input set
5.net/ice/base: support IPv6 GRE UDP pattern
6.net/ice/base: support new patterns of TCP and UDP
7.net/ice: support new patterns of TCP and UDP
8.net/ice/base: support IPv4 GRE tunnel
9.net/ice: support IPv4 GRE raw pattern type
10.net/ice/base: update Profile ID table for VXLAN
11.net/ice/base: update Protocol ID table to match DVM DDP
v2:
* remove patch:
1.net/iavf: support checking if device is an MDCF instance
2.net/ice: support MDCF(multi-DCF) instance
3.net/ice/base: support custom DDP buildin recipe
4.net/ice: support buildin recipe configuration
5.net/ice/base: support custom ddp package version
6.net/ice: disable ACL function for MDCF instance
Alvin Zhang (2):
net/ice: support dcf promisc configuration
net/ice: support dcf VLAN filter and offload configuration
Jie Wang (2):
net/ice: add ops MTU-SET to dcf
net/ice: add ops dev-supported-ptypes-get to dcf
Kevin Liu (5):
net/ice: support dcf MAC configuration
net/ice: add extended stats
net/ice: support queue information getting
net/ice: implement power management
doc: update for ice DCF datapath configuration
Robin Zhang (1):
net/ice: cleanup Tx buffers
Steve Yang (2):
net/ice: enable RSS RETA ops for DCF hardware
net/ice: enable RSS HASH ops for DCF hardware
doc/guides/nics/features/ice_dcf.ini | 15 +
drivers/net/ice/ice_dcf.c | 13 +-
drivers/net/ice/ice_dcf.h | 28 +-
drivers/net/ice/ice_dcf_ethdev.c | 683 +++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 10 +
5 files changed, 711 insertions(+), 38 deletions(-)
--
2.33.1
* [PATCH v5 01/12] net/ice: enable RSS RETA ops for DCF hardware
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 02/12] net/ice: enable RSS HASH " Kevin Liu
` (11 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS RETA should be updated and queried by the application.
Add the related ops ('.reta_update', '.reta_query') for DCF.
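A minimal sketch of an application updating the RETA through this ops,
spreading entries round-robin over nb_q queues (the 8-entry conf array,
covering up to 512 RETA entries, is an assumption sized for the DCF LUT):

#include <string.h>
#include <rte_ethdev.h>

static int
reta_round_robin(uint16_t port_id, uint16_t nb_q)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_rss_reta_entry64 reta_conf[8];
    uint16_t i, reta_size;
    int ret;

    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;
    reta_size = RTE_MIN(dev_info.reta_size,
                        (uint16_t)(8 * RTE_ETH_RETA_GROUP_SIZE));
    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < reta_size; i++) {
        uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
        uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

        reta_conf[idx].mask |= 1ULL << shift;
        reta_conf[idx].reta[shift] = i % nb_q;
    }
    return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}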
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++++
3 files changed, 79 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f0c074b01..070d1b71ac 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
return err;
}
-static int
+int
ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_lut *rss_lut;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 6ec766ebda..b2c6aa2684 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59610e058f..1ac66ed990 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint8_t *lut;
+ uint16_t i, idx, shift;
+ int ret;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ /* store the old lut table temporarily */
+ rte_memcpy(lut, hw->rss_lut, reta_size);
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ /* send virtchnnl ops to configure rss*/
+ ret = ice_dcf_configure_rss_lut(hw);
+ if (ret) /* revert back */
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint16_t i, idx, shift;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = hw->rss_lut[i];
+ }
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
.tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
};
static int
--
2.33.1
* [PATCH v5 02/12] net/ice: enable RSS HASH ops for DCF hardware
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
2022-04-21 11:13 ` [PATCH v5 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 03/12] net/ice: cleanup Tx buffers Kevin Liu
` (10 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS HASH should be updated and queried by the application.
Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF.
Because DCF doesn't support configuring the RSS hash functions, only the
hash key can be updated within the '.rss_hash_update' ops.
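A minimal sketch of updating the key from an application (the key length
must match dev_info.hash_key_size; querying it first is assumed):

#include <rte_ethdev.h>

static int
update_rss_key(uint16_t port_id, uint8_t *key, uint8_t key_len)
{
    struct rte_eth_rss_conf rss_conf = {
        .rss_key = key,
        .rss_key_len = key_len,
        .rss_hf = 0,    /* hash functions are not configurable here */
    };

    return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}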
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 070d1b71ac..89c0203ba3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
hw->ets_config = NULL;
}
-static int
+int
ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_key *rss_key;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index b2c6aa2684..f0b45af5ae 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ac66ed990..ccad7fc304 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* HENA setting, it is enabled by default, no change */
+ if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) {
+ PMD_DRV_LOG(ERR, "The size of hash key configured "
+ "(%d) doesn't match the size of hardware can "
+ "support (%d)", rss_conf->rss_key_len,
+ hw->vf_res->rss_key_size);
+ return -EINVAL;
+ }
+
+ rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ return ice_dcf_configure_rss_key(hw);
+}
+
+static int
+ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* Just set it to default value now. */
+ rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL;
+
+ if (!rss_conf->rss_key)
+ return 0;
+
+ rss_conf->rss_key_len = hw->vf_res->rss_key_size;
+ rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len);
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tm_ops_get = ice_dcf_tm_ops_get,
.reta_update = ice_dcf_dev_rss_reta_update,
.reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
};
static int
--
2.33.1
* [PATCH v5 03/12] net/ice: cleanup Tx buffers
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
2022-04-21 11:13 ` [PATCH v5 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-21 11:13 ` [PATCH v5 02/12] net/ice: enable RSS HASH " Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 04/12] net/ice: add ops MTU-SET to dcf Kevin Liu
` (9 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Robin Zhang, Kevin Liu
From: Robin Zhang <robinx.zhang@intel.com>
Add support for the rte_eth_tx_done_cleanup ops in the DCF PMD.
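A minimal usage sketch (the queue id and the free count of 64 are
assumptions):

#include <rte_ethdev.h>

/* Ask the PMD to free up to 64 mbufs already sent on one Tx queue;
 * returns the number freed or a negative errno. */
static int
reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id)
{
    return rte_eth_tx_done_cleanup(port_id, queue_id, 64);
}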
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ccad7fc304..d8b5961514 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.reta_query = ice_dcf_dev_rss_reta_query,
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
};
static int
--
2.33.1
* [PATCH v5 04/12] net/ice: add ops MTU-SET to dcf
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (2 preceding siblings ...)
2022-04-21 11:13 ` [PATCH v5 03/12] net/ice: cleanup Tx buffers Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 05/12] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
` (8 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
Add the "mtu_set" ops to the DCF PMD so that the port MTU can be
configured from the command line.
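Since this implementation rejects the call on a started port, a minimal
application-side sketch stops the port first (values are assumptions;
error handling is trimmed):

#include <rte_ethdev.h>

static int
set_port_mtu(uint16_t port_id, uint16_t mtu)
{
    int ret = rte_eth_dev_stop(port_id);

    if (ret != 0)
        return ret;
    ret = rte_eth_dev_set_mtu(port_id, mtu);
    return ret != 0 ? ret : rte_eth_dev_start(port_id);
}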
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++
2 files changed, 20 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d8b5961514..06d752fd61 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &new_link);
}
+static int
+ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* mtu setting is forbidden if port is start */
+ if (dev->data->dev_started != 0) {
+ PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
+ dev->data->port_id);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
bool
ice_dcf_adminq_need_retry(struct ice_adapter *ad)
{
@@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
.tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 11a1305038..f2faf26f58 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -15,6 +15,12 @@
#define ICE_DCF_MAX_RINGS 1
+#define ICE_DCF_FRAME_SIZE_MAX 9728
+#define ICE_DCF_VLAN_TAG_SIZE 4
+#define ICE_DCF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
+#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+
struct ice_dcf_queue {
uint64_t dummy;
};
--
2.33.1
* [PATCH v5 05/12] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (3 preceding siblings ...)
2022-04-21 11:13 ` [PATCH v5 04/12] net/ice: add ops MTU-SET to dcf Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 06/12] net/ice: support dcf promisc configuration Kevin Liu
` (7 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
Add the "dev_supported_ptypes_get" ops to the DCF PMD so that
applications can query the packet types the PMD can parse.
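A minimal sketch of querying them from an application (the L4 mask and
the array size are illustrative assumptions):

#include <stdio.h>
#include <rte_common.h>
#include <rte_mbuf_ptype.h>
#include <rte_ethdev.h>

static void
show_l4_ptypes(uint16_t port_id)
{
    uint32_t ptypes[16];
    int i, n;

    n = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
                                         ptypes, RTE_DIM(ptypes));
    for (i = 0; i < n && i < (int)RTE_DIM(ptypes); i++)
        printf("ptype: 0x%08x\n", ptypes[i]);
}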
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 80 +++++++++++++++++++-------------
1 file changed, 49 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 06d752fd61..6a577a6582 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+static const uint32_t *
+ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+ return ptypes;
+}
+
static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
- .dev_start = ice_dcf_dev_start,
- .dev_stop = ice_dcf_dev_stop,
- .dev_close = ice_dcf_dev_close,
- .dev_reset = ice_dcf_dev_reset,
- .dev_configure = ice_dcf_dev_configure,
- .dev_infos_get = ice_dcf_dev_info_get,
- .rx_queue_setup = ice_rx_queue_setup,
- .tx_queue_setup = ice_tx_queue_setup,
- .rx_queue_release = ice_dev_rx_queue_release,
- .tx_queue_release = ice_dev_tx_queue_release,
- .rx_queue_start = ice_dcf_rx_queue_start,
- .tx_queue_start = ice_dcf_tx_queue_start,
- .rx_queue_stop = ice_dcf_rx_queue_stop,
- .tx_queue_stop = ice_dcf_tx_queue_stop,
- .link_update = ice_dcf_link_update,
- .stats_get = ice_dcf_stats_get,
- .stats_reset = ice_dcf_stats_reset,
- .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
- .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
- .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
- .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
- .flow_ops_get = ice_dcf_dev_flow_ops_get,
- .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
- .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
- .tm_ops_get = ice_dcf_tm_ops_get,
- .reta_update = ice_dcf_dev_rss_reta_update,
- .reta_query = ice_dcf_dev_rss_reta_query,
- .rss_hash_update = ice_dcf_dev_rss_hash_update,
- .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
- .tx_done_cleanup = ice_tx_done_cleanup,
- .mtu_set = ice_dcf_dev_mtu_set,
+ .dev_start = ice_dcf_dev_start,
+ .dev_stop = ice_dcf_dev_stop,
+ .dev_close = ice_dcf_dev_close,
+ .dev_reset = ice_dcf_dev_reset,
+ .dev_configure = ice_dcf_dev_configure,
+ .dev_infos_get = ice_dcf_dev_info_get,
+ .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .rx_queue_release = ice_dev_rx_queue_release,
+ .tx_queue_release = ice_dev_tx_queue_release,
+ .rx_queue_start = ice_dcf_rx_queue_start,
+ .tx_queue_start = ice_dcf_tx_queue_start,
+ .rx_queue_stop = ice_dcf_rx_queue_stop,
+ .tx_queue_stop = ice_dcf_tx_queue_stop,
+ .link_update = ice_dcf_link_update,
+ .stats_get = ice_dcf_stats_get,
+ .stats_reset = ice_dcf_stats_reset,
+ .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
+ .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
+ .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
+ .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .flow_ops_get = ice_dcf_dev_flow_ops_get,
+ .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
+ .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+ .tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
--
2.33.1
* [PATCH v5 06/12] net/ice: support dcf promisc configuration
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (4 preceding siblings ...)
2022-04-21 11:13 ` [PATCH v5 05/12] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 07/12] net/ice: support dcf MAC configuration Kevin Liu
` (6 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support configuring unicast and multicast promiscuous mode on the DCF.
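From the application side the new paths map onto the standard ethdev
calls; a minimal sketch (each call below results in one
VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE message carrying both flags):

#include <rte_ethdev.h>

static int
enable_all_promisc(uint16_t port_id)
{
    int ret = rte_eth_promiscuous_enable(port_id);

    if (ret != 0)
        return ret;
    return rte_eth_allmulticast_enable(port_id);
}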
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 3 ++
2 files changed, 76 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6a577a6582..87d281ee93 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
}
static int
-ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+dcf_config_promisc(struct ice_dcf_adapter *adapter,
+ bool enable_unicast,
+ bool enable_multicast)
{
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_promisc_info promisc;
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ promisc.flags = 0;
+ promisc.vsi_id = hw->vsi_res->vsi_id;
+
+ if (enable_unicast)
+ promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+ if (enable_multicast)
+ promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+ args.req_msg = (uint8_t *)&promisc;
+ args.req_msglen = sizeof(promisc);
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE");
+ return err;
+ }
+
+ adapter->promisc_unicast_enabled = enable_unicast;
+ adapter->promisc_multicast_enabled = enable_multicast;
return 0;
}
+static int
+ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, true,
+ adapter->promisc_multicast_enabled);
+}
+
static int
ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, false,
+ adapter->promisc_multicast_enabled);
}
static int
ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ true);
}
static int
ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ false);
}
static int
@@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
+ dcf_config_promisc(adapter, false, false);
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index f2faf26f58..22e450527b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -33,6 +33,9 @@ struct ice_dcf_adapter {
struct ice_adapter parent; /* Must be first */
struct ice_dcf_hw real_hw;
+ bool promisc_unicast_enabled;
+ bool promisc_multicast_enabled;
+
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
* [PATCH v5 07/12] net/ice: support dcf MAC configuration
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (5 preceding siblings ...)
2022-04-21 11:13 ` [PATCH v5 06/12] net/ice: support dcf promisc configuration Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:13 ` [PATCH v5 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
` (5 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
Below PMD ops are supported in this patch (a usage sketch follows the list):
.mac_addr_add = dcf_dev_add_mac_addr
.mac_addr_remove = dcf_dev_del_mac_addr
.set_mc_addr_list = dcf_set_mc_addr_list
.mac_addr_set = dcf_dev_set_default_mac_addr
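A minimal application-side sketch exercising the new ops (the MAC
addresses are made up; error handling is trimmed):

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
config_macs(uint16_t port_id)
{
    struct rte_ether_addr extra = {{0x02, 0x00, 0x00, 0x00, 0x00, 0x01}};
    struct rte_ether_addr mcast = {{0x01, 0x00, 0x5e, 0x00, 0x00, 0x01}};
    int ret;

    ret = rte_eth_dev_mac_addr_add(port_id, &extra, 0);
    if (ret != 0)
        return ret;
    return rte_eth_dev_set_mc_addr_list(port_id, &mcast, 1);
}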
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 9 +-
drivers/net/ice/ice_dcf.h | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 5 +-
4 files changed, 226 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 89c0203ba3..55ae68c456 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
}
int
-ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr,
+ bool add, uint8_t type)
{
struct virtchnl_ether_addr_list *list;
- struct rte_ether_addr *addr;
struct dcf_virtchnl_cmd args;
int len, err = 0;
@@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
}
len = sizeof(struct virtchnl_ether_addr_list);
- addr = hw->eth_dev->data->mac_addrs;
len += sizeof(struct virtchnl_ether_addr);
list = rte_zmalloc(NULL, len, 0);
@@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
rte_memcpy(list->list[0].addr, addr->addr_bytes,
sizeof(addr->addr_bytes));
+
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
-
+ list->list[0].type = type;
list->vsi_id = hw->vsi_res->vsi_id;
list->num_elements = 1;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index f0b45af5ae..78df202a77 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
-int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr, bool add,
+ uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 87d281ee93..0d944f9fd2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -26,6 +26,12 @@
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#define DCF_NUM_MACADDR_MAX 64
+
+static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add);
+
static int
ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- ret = ice_dcf_add_del_all_mac_addr(hw, true);
+ ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs,
+ true, VIRTCHNL_ETHER_ADDR_PRIMARY);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to add mac addr");
return ret;
}
+ if (dcf_ad->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, true);
+ if (ret)
+ return ret;
+ }
+
+
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
@@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
rte_intr_efd_disable(intr_handle);
rte_intr_vec_list_free(intr_handle);
- ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
+ ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw,
+ dcf_ad->real_hw.eth_dev->data->mac_addrs,
+ false, VIRTCHNL_ETHER_ADDR_PRIMARY);
+
+ if (dcf_ad->mc_addrs_num)
+ /* flush previous addresses */
+ (void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw,
+ dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, false);
+
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- dev_info->max_mac_addrs = 1;
+ dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
@@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
false);
}
+static int
+dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ int err;
+
+ if (rte_is_zero_ether_addr(addr)) {
+ PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+ return -EINVAL;
+ }
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to add MAC address");
+ return err;
+ }
+
+ return 0;
+}
+
+static void
+dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct rte_ether_addr *addr = &dev->data->mac_addrs[index];
+ int err;
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to remove MAC address");
+}
+
+static int
+dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add)
+{
+ struct virtchnl_ether_addr_list *list;
+ struct dcf_virtchnl_cmd args;
+ uint32_t i;
+ int len, err = 0;
+
+ len = sizeof(struct virtchnl_ether_addr_list);
+ len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
+
+ list = rte_zmalloc(NULL, len, 0);
+ if (!list) {
+ PMD_DRV_LOG(ERR, "fail to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
+ sizeof(list->list[i].addr));
+ list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ list->vsi_id = hw->vsi_res->vsi_id;
+ list->num_elements = mc_addrs_num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+ VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.req_msg = (uint8_t *)list;
+ args.req_msglen = len;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" :
+ "OP_DEL_ETHER_ADDRESS");
+ rte_free(list);
+ return err;
+}
+
+static int
+dcf_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i;
+ int ret;
+
+ if (mc_addrs_num > DCF_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR,
+ "can't add more than a limited number (%u) of addresses.",
+ (uint32_t)DCF_NUM_MACADDR_MAX);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addrs[i])) {
+ const uint8_t *mac = mc_addrs[i].addr_bytes;
+
+ PMD_DRV_LOG(ERR,
+ "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x",
+ mac[0], mac[1], mac[2], mac[3], mac[4],
+ mac[5]);
+ return -EINVAL;
+ }
+ }
+
+ if (adapter->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num, false);
+ if (ret)
+ return ret;
+ }
+ if (!mc_addrs_num) {
+ adapter->mc_addrs_num = 0;
+ return 0;
+ }
+
+ /* add new ones */
+ ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true);
+ if (ret) {
+ /* if adding mac address list fails, should add the
+ * previous addresses back.
+ */
+ if (adapter->mc_addrs_num)
+ (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num,
+ true);
+ return ret;
+ }
+ adapter->mc_addrs_num = mc_addrs_num;
+ memcpy(adapter->mc_addrs,
+ mc_addrs, mc_addrs_num * sizeof(*mc_addrs));
+
+ return 0;
+}
+
+static int
+dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_ether_addr *old_addr;
+ int ret;
+
+ old_addr = hw->eth_dev->data->mac_addrs;
+ if (rte_is_same_ether_addr(old_addr, mac_addr))
+ return 0;
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ old_addr->addr_bytes[0],
+ old_addr->addr_bytes[1],
+ old_addr->addr_bytes[2],
+ old_addr->addr_bytes[3],
+ old_addr->addr_bytes[4],
+ old_addr->addr_bytes[5]);
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ mac_addr->addr_bytes[0],
+ mac_addr->addr_bytes[1],
+ mac_addr->addr_bytes[2],
+ mac_addr->addr_bytes[3],
+ mac_addr->addr_bytes[4],
+ mac_addr->addr_bytes[5]);
+
+ if (ret)
+ return -EIO;
+
+ rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs);
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
.allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .mac_addr_add = dcf_dev_add_mac_addr,
+ .mac_addr_remove = dcf_dev_del_mac_addr,
+ .set_mc_addr_list = dcf_set_mc_addr_list,
+ .mac_addr_set = dcf_dev_set_default_mac_addr,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 22e450527b..27f6402786 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -14,7 +14,7 @@
#include "ice_dcf.h"
#define ICE_DCF_MAX_RINGS 1
-
+#define DCF_NUM_MACADDR_MAX 64
#define ICE_DCF_FRAME_SIZE_MAX 9728
#define ICE_DCF_VLAN_TAG_SIZE 4
#define ICE_DCF_ETH_OVERHEAD \
@@ -35,7 +35,8 @@ struct ice_dcf_adapter {
bool promisc_unicast_enabled;
bool promisc_multicast_enabled;
-
+ uint32_t mc_addrs_num;
+ struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX];
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
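For reference, a minimal application-side sketch of how the MAC ops added in the patch above are reached through the generic ethdev API. The port id and the addresses below are placeholders, not part of the patch:
#include <rte_ethdev.h>
#include <rte_ether.h>
/* Hypothetical helper: program one extra unicast MAC and one multicast
 * MAC on an already configured port. The ethdev layer routes these
 * calls to the new .mac_addr_add and .set_mc_addr_list ops.
 */
static int
dcf_example_config_macs(uint16_t port_id)
{
        struct rte_ether_addr extra = {{0x02, 0x00, 0x00, 0x00, 0x00, 0x01}};
        struct rte_ether_addr mcast = {{0x01, 0x00, 0x5e, 0x00, 0x00, 0x01}};
        int ret;
        ret = rte_eth_dev_mac_addr_add(port_id, &extra, 0);
        if (ret != 0)
                return ret;
        return rte_eth_dev_set_mc_addr_list(port_id, &mcast, 1);
}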
* [PATCH v5 08/12] net/ice: support dcf VLAN filter and offload configuration
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (6 preceding siblings ...)
2022-04-21 11:13 ` [PATCH v5 07/12] net/ice: support dcf MAC configuration Kevin Liu
@ 2022-04-21 11:13 ` Kevin Liu
2022-04-21 11:14 ` [PATCH v5 09/12] net/ice: add extended stats Kevin Liu
` (4 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Below PMD ops are supported in this patch:
.vlan_filter_set = dcf_dev_vlan_filter_set
.vlan_offload_set = dcf_dev_vlan_offload_set
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
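As an aside, a minimal application-side sketch of reaching these ops through the generic ethdev API; the port id and VLAN id are placeholders, and VLAN filtering is assumed to be permitted by the PF:
#include <rte_ethdev.h>
/* Hypothetical helper: enable VLAN stripping and filtering, then accept
 * VLAN 100. rte_eth_dev_set_vlan_offload() lands in the new
 * .vlan_offload_set, rte_eth_dev_vlan_filter() in the new
 * .vlan_filter_set.
 */
static int
dcf_example_setup_vlan(uint16_t port_id)
{
        int ret;
        ret = rte_eth_dev_set_vlan_offload(port_id,
                        RTE_ETH_VLAN_STRIP_OFFLOAD |
                        RTE_ETH_VLAN_FILTER_OFFLOAD);
        if (ret != 0)
                return ret;
        return rte_eth_dev_vlan_filter(port_id, 100, 1);
}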
---
drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++++++++
1 file changed, 101 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0d944f9fd2..e58cdf47d2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_filter_list *vlan_list;
+ uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+ sizeof(uint16_t)];
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+ vlan_list->vsi_id = hw->vsi_res->vsi_id;
+ vlan_list->num_elements = 1;
+ vlan_list->vlan_id[0] = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+ args.req_msg = cmd_buffer;
+ args.req_msglen = sizeof(cmd_buffer);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
+
+ return err;
+}
+
+static int
+dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_ENABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static int
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ /* Vlan stripping setting */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ /* Enable or disable VLAN stripping */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ err = dcf_enable_vlan_strip(hw);
+ else
+ err = dcf_disable_vlan_strip(hw);
+
+ if (err)
+ return -EIO;
+ }
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mac_addr_remove = dcf_dev_del_mac_addr,
.set_mc_addr_list = dcf_set_mc_addr_list,
.mac_addr_set = dcf_dev_set_default_mac_addr,
+ .vlan_filter_set = dcf_dev_vlan_filter_set,
+ .vlan_offload_set = dcf_dev_vlan_offload_set,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v5 09/12] net/ice: add extended stats
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (7 preceding siblings ...)
2022-04-21 11:13 ` [PATCH v5 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
@ 2022-04-21 11:14 ` Kevin Liu
2022-04-21 11:14 ` [PATCH v5 10/12] net/ice: support queue information getting Kevin Liu
` (3 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:14 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add an implementation of the xstats() functions to the DCF PMD.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
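For illustration, a sketch of how an application might dump these stats via the generic xstats API; the names and ids come from the driver, everything else below is a placeholder:
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>
/* Hypothetical helper: query the number of xstats, fetch names and
 * values, then print them. The driver side is served by the new
 * ice_dcf_xstats_get()/ice_dcf_xstats_get_names().
 */
static void
dcf_example_dump_xstats(uint16_t port_id)
{
        struct rte_eth_xstat_name *names = NULL;
        struct rte_eth_xstat *xstats = NULL;
        int n, i;
        /* A NULL array asks only for the number of xstats. */
        n = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (n <= 0)
                return;
        names = malloc(sizeof(*names) * n);
        xstats = malloc(sizeof(*xstats) * n);
        if (names == NULL || xstats == NULL)
                goto out;
        if (rte_eth_xstats_get_names(port_id, names, n) != n ||
            rte_eth_xstats_get(port_id, xstats, n) != n)
                goto out;
        for (i = 0; i < n; i++)
                printf("%s: %" PRIu64 "\n",
                       names[xstats[i].id].name, xstats[i].value);
out:
        free(names);
        free(xstats);
}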
---
drivers/net/ice/ice_dcf.h | 22 ++++++++++
drivers/net/ice/ice_dcf_ethdev.c | 75 ++++++++++++++++++++++++++++++++
2 files changed, 97 insertions(+)
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78df202a77..44a61404c3 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,12 @@
#include "base/ice_type.h"
#include "ice_logs.h"
+/* ICE_DCF_DEV_PRIVATE_TO */
+#define ICE_DCF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+ ((struct ice_dcf_adapter *)adapter)
+#define ICE_DCF_DEV_PRIVATE_TO_VF(adapter) \
+ (&((struct ice_dcf_adapter *)adapter)->vf)
+
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -74,6 +80,22 @@ struct ice_dcf_tm_conf {
bool committed;
};
+struct ice_dcf_eth_stats {
+ u64 rx_bytes; /* gorc */
+ u64 rx_unicast; /* uprc */
+ u64 rx_multicast; /* mprc */
+ u64 rx_broadcast; /* bprc */
+ u64 rx_discards; /* rdpc */
+ u64 rx_unknown_protocol; /* rupp */
+ u64 tx_bytes; /* gotc */
+ u64 tx_unicast; /* uptc */
+ u64 tx_multicast; /* mptc */
+ u64 tx_broadcast; /* bptc */
+ u64 tx_discards; /* tdpc */
+ u64 tx_errors; /* tepc */
+ u64 rx_no_desc; /* repc */
+ u64 rx_errors; /* repc */
+};
struct ice_dcf_hw {
struct iavf_hw avf;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e58cdf47d2..6503700e02 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,30 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
static int
ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev);
+struct rte_ice_dcf_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+static const struct rte_ice_dcf_xstats_name_off rte_ice_dcf_stats_strings[] = {
+ {"rx_bytes", offsetof(struct ice_dcf_eth_stats, rx_bytes)},
+ {"rx_unicast_packets", offsetof(struct ice_dcf_eth_stats, rx_unicast)},
+ {"rx_multicast_packets", offsetof(struct ice_dcf_eth_stats, rx_multicast)},
+ {"rx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, rx_broadcast)},
+ {"rx_dropped_packets", offsetof(struct ice_dcf_eth_stats, rx_discards)},
+ {"rx_unknown_protocol_packets", offsetof(struct ice_dcf_eth_stats,
+ rx_unknown_protocol)},
+ {"tx_bytes", offsetof(struct ice_dcf_eth_stats, tx_bytes)},
+ {"tx_unicast_packets", offsetof(struct ice_dcf_eth_stats, tx_unicast)},
+ {"tx_multicast_packets", offsetof(struct ice_dcf_eth_stats, tx_multicast)},
+ {"tx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, tx_broadcast)},
+ {"tx_dropped_packets", offsetof(struct ice_dcf_eth_stats, tx_discards)},
+ {"tx_error_packets", offsetof(struct ice_dcf_eth_stats, tx_errors)},
+};
+
+#define ICE_DCF_NB_XSTATS (sizeof(rte_ice_dcf_stats_strings) / \
+ sizeof(rte_ice_dcf_stats_strings[0]))
+
static uint16_t
ice_dcf_recv_pkts(__rte_unused void *rx_queue,
__rte_unused struct rte_mbuf **bufs,
@@ -1358,6 +1382,54 @@ ice_dcf_stats_reset(struct rte_eth_dev *dev)
return 0;
}
+static int ice_dcf_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ unsigned int i;
+
+ if (xstats_names != NULL)
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ snprintf(xstats_names[i].name,
+ sizeof(xstats_names[i].name),
+ "%s", rte_ice_dcf_stats_strings[i].name);
+ }
+ return ICE_DCF_NB_XSTATS;
+}
+
+static int ice_dcf_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n)
+{
+ int ret;
+ unsigned int i;
+ struct ice_dcf_adapter *adapter =
+ ICE_DCF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_eth_stats *postats = &hw->eth_stats_offset;
+ struct virtchnl_eth_stats pnstats;
+
+ if (n < ICE_DCF_NB_XSTATS)
+ return ICE_DCF_NB_XSTATS;
+
+ ret = ice_dcf_query_stats(hw, &pnstats);
+ if (ret != 0)
+ return 0;
+
+ if (!xstats)
+ return 0;
+
+ ice_dcf_update_stats(postats, &pnstats);
+
+ /* loop over the xstats array and copy values from pnstats */
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ xstats[i].id = i;
+ xstats[i].value = *(uint64_t *)(((char *)&pnstats) +
+ rte_ice_dcf_stats_strings[i].offset);
+ }
+
+ return ICE_DCF_NB_XSTATS;
+}
+
static void
ice_dcf_free_repr_info(struct ice_dcf_adapter *dcf_adapter)
{
@@ -1629,6 +1701,9 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
+ .xstats_get = ice_dcf_xstats_get,
+ .xstats_get_names = ice_dcf_xstats_get_names,
+ .xstats_reset = ice_dcf_stats_reset,
.promiscuous_enable = ice_dcf_dev_promiscuous_enable,
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v5 10/12] net/ice: support queue information getting
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (8 preceding siblings ...)
2022-04-21 11:14 ` [PATCH v5 09/12] net/ice: add extended stats Kevin Liu
@ 2022-04-21 11:14 ` Kevin Liu
2022-04-21 11:14 ` [PATCH v5 11/12] net/ice: implement power management Kevin Liu
` (2 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:14 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add the below ops:
rxq_info_get
txq_info_get
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
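For illustration, a sketch of the application-side calls these ops serve; the port and queue ids are placeholders:
#include <stdio.h>
#include <rte_ethdev.h>
/* Hypothetical helper: print the descriptor counts of one Rx/Tx queue
 * pair, served by the ice_rxq_info_get/ice_txq_info_get ops wired up
 * in this patch.
 */
static void
dcf_example_show_queue_info(uint16_t port_id, uint16_t qid)
{
        struct rte_eth_rxq_info rxq;
        struct rte_eth_txq_info txq;
        if (rte_eth_rx_queue_info_get(port_id, qid, &rxq) == 0)
                printf("rxq %u: %u descriptors\n", qid, rxq.nb_desc);
        if (rte_eth_tx_queue_info_get(port_id, qid, &txq) == 0)
                printf("txq %u: %u descriptors\n", qid, txq.nb_desc);
}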
---
drivers/net/ice/ice_dcf_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6503700e02..9217392d04 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1698,6 +1698,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_start = ice_dcf_tx_queue_start,
.rx_queue_stop = ice_dcf_rx_queue_stop,
.tx_queue_stop = ice_dcf_tx_queue_stop,
+ .rxq_info_get = ice_rxq_info_get,
+ .txq_info_get = ice_txq_info_get,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v5 11/12] net/ice: implement power management
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (9 preceding siblings ...)
2022-04-21 11:14 ` [PATCH v5 10/12] net/ice: support queue information getting Kevin Liu
@ 2022-04-21 11:14 ` Kevin Liu
2022-04-21 11:14 ` [PATCH v5 12/12] doc: update for ice DCF datapath configuration Kevin Liu
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:14 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Implement support for the power management API by implementing a
'get_monitor_addr' function that will return the address of an RX ring's
status bit.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
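For illustration, a sketch of how an application could consume this; the lcore, port and queue ids are placeholders:
#include <rte_power_pmd_mgmt.h>
/* Hypothetical helper: let the PMD power-management library use the new
 * .get_monitor_addr ops so the polling lcore can sleep (monitor/umwait)
 * on the Rx ring's status bit when there is no traffic.
 */
static int
dcf_example_enable_rx_monitoring(void)
{
        return rte_power_pmd_mgmt_queue_enable(1 /* lcore */, 0 /* port */,
                                               0 /* queue */,
                                               RTE_POWER_MGMT_TYPE_MONITOR);
}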
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 9217392d04..236c0395e0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1700,6 +1700,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_stop = ice_dcf_tx_queue_stop,
.rxq_info_get = ice_rxq_info_get,
.txq_info_get = ice_txq_info_get,
+ .get_monitor_addr = ice_get_monitor_addr,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v5 12/12] doc: update for ice DCF datapath configuration
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (10 preceding siblings ...)
2022-04-21 11:14 ` [PATCH v5 11/12] net/ice: implement power management Kevin Liu
@ 2022-04-21 11:14 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-21 11:14 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Update "ice_dcf" driver feature list.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 54073f0b88..2f3e14a24e 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -15,6 +15,20 @@ L3 checksum offload = P
L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
+Promiscuous mode = Y
+Allmulticast mode = Y
+Unicast MAC filter = Y
+Link status = Y
+Link status event = Y
+Packet type parsing = Y
+VLAN filter = Y
+VLAN offload = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Extended stats = Y
+MTU update = Y
+Power mgmt address monitor = Y
Basic stats = Y
Linux = Y
x86-32 = Y
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* RE: [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware
2022-04-27 18:12 ` [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
@ 2022-04-27 10:38 ` Zhang, Qi Z
0 siblings, 0 replies; 170+ messages in thread
From: Zhang, Qi Z @ 2022-04-27 10:38 UTC (permalink / raw)
To: Liu, KevinX, dev; +Cc: Yang, Qiming, Yang, SteveX
> -----Original Message-----
> From: Liu, KevinX <kevinx.liu@intel.com>
> Sent: Thursday, April 28, 2022 2:13 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Yang, SteveX <stevex.yang@intel.com>; Liu, KevinX
> <kevinx.liu@intel.com>
> Subject: [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware
>
> From: Steve Yang <stevex.yang@intel.com>
>
> RSS RETA should be updated and queried by the application. Add the related
> ops ('.reta_update', '.reta_query') for DCF.
>
> Signed-off-by: Steve Yang <stevex.yang@intel.com>
> Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
> ---
> doc/guides/nics/features/ice_dcf.ini | 1 +
> doc/guides/rel_notes/release_22_07.rst | 3 +
> drivers/net/ice/ice_dcf.c | 2 +-
> drivers/net/ice/ice_dcf.h | 1 +
> drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++
> 5 files changed, 83 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/nics/features/ice_dcf.ini
> b/doc/guides/nics/features/ice_dcf.ini
> index 54073f0b88..5221c99a9c 100644
> --- a/doc/guides/nics/features/ice_dcf.ini
> +++ b/doc/guides/nics/features/ice_dcf.ini
> @@ -15,6 +15,7 @@ L3 checksum offload = P
> L4 checksum offload = P
> Inner L3 checksum = P
> Inner L4 checksum = P
> +RSS reta update = Y
> Basic stats = Y
> Linux = Y
> x86-32 = Y
> diff --git a/doc/guides/rel_notes/release_22_07.rst
> b/doc/guides/rel_notes/release_22_07.rst
> index 90123bb807..cbdc90760c 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -60,6 +60,9 @@ New Features
> * Added Tx QoS queue rate limitation support.
> * Added quanta size configuration support.
>
> +* **Updated Intel ice driver.**
> +
> + * Added enable RSS RETA ops for DCF hardware.
There is no "DCF hardware"; better to change it to:
Added support for RSS RETA configure in DCF mode.
^ permalink raw reply [flat|nested] 170+ messages in thread
* RE: [PATCH v6 03/12] net/ice: cleanup Tx buffers
2022-04-27 18:12 ` [PATCH v6 03/12] net/ice: cleanup Tx buffers Kevin Liu
@ 2022-04-27 10:41 ` Zhang, Qi Z
0 siblings, 0 replies; 170+ messages in thread
From: Zhang, Qi Z @ 2022-04-27 10:41 UTC (permalink / raw)
To: Liu, KevinX, dev; +Cc: Yang, Qiming, Yang, SteveX, Zhang, RobinX
> -----Original Message-----
> From: Liu, KevinX <kevinx.liu@intel.com>
> Sent: Thursday, April 28, 2022 2:13 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Yang, SteveX <stevex.yang@intel.com>; Zhang,
> RobinX <robinx.zhang@intel.com>; Liu, KevinX <kevinx.liu@intel.com>
> Subject: [PATCH v6 03/12] net/ice: cleanup Tx buffers
>
> From: Robin Zhang <robinx.zhang@intel.com>
>
> Add support for the rte_eth_tx_done_cleanup ops in DCF
>
> Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
> Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
> ---
> doc/guides/rel_notes/release_22_07.rst | 1 +
> drivers/net/ice/ice_dcf_ethdev.c | 1 +
> 2 files changed, 2 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_22_07.rst
> b/doc/guides/rel_notes/release_22_07.rst
> index cc2c243e81..bbd3d296de 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -64,6 +64,7 @@ New Features
>
> * Added enable RSS RETA ops for DCF hardware.
> * Added enable RSS HASH ops for DCF hardware.
> + * Added cleanup Tx buffers.
Please keep the pattern consistent:
Added support for Tx buffer cleanup in DCF mode.
Anyway, this is not worth a release note update; you can remove it.
>
> Removed Items
> -------------
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index ccad7fc304..d8b5961514 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -1235,6 +1235,7 @@ static const struct eth_dev_ops
> ice_dcf_eth_dev_ops = {
> .reta_query = ice_dcf_dev_rss_reta_query,
> .rss_hash_update = ice_dcf_dev_rss_hash_update,
> .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
> + .tx_done_cleanup = ice_tx_done_cleanup,
> };
>
> static int
> --
> 2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* RE: [PATCH v6 05/12] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-27 18:12 ` [PATCH v6 05/12] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
@ 2022-04-27 10:44 ` Zhang, Qi Z
0 siblings, 0 replies; 170+ messages in thread
From: Zhang, Qi Z @ 2022-04-27 10:44 UTC (permalink / raw)
To: Liu, KevinX, dev; +Cc: Yang, Qiming, Yang, SteveX, Wang, Jie1X
> -----Original Message-----
> From: Liu, KevinX <kevinx.liu@intel.com>
> Sent: Thursday, April 28, 2022 2:13 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Yang, SteveX <stevex.yang@intel.com>; Wang, Jie1X
> <jie1x.wang@intel.com>; Liu, KevinX <kevinx.liu@intel.com>
> Subject: [PATCH v6 05/12] net/ice: add ops dev-supported-ptypes-get to dcf
>
> From: Jie Wang <jie1x.wang@intel.com>
>
> Add the "dev_supported_ptypes_get" API to DCF so that the DCF PMD can get
> ptypes through the new API.
>
> Signed-off-by: Jie Wang <jie1x.wang@intel.com>
> Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
> ---
> doc/guides/rel_notes/release_22_07.rst | 1 +
> drivers/net/ice/ice_dcf_ethdev.c | 80 ++++++++++++++++----------
> 2 files changed, 50 insertions(+), 31 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_22_07.rst
> b/doc/guides/rel_notes/release_22_07.rst
> index dc37de85f3..a39196c605 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -66,6 +66,7 @@ New Features
> * Added enable RSS HASH ops for DCF hardware.
> * Added cleanup Tx buffers.
> * Added add ops MTU-SET to dcf.
> + * Added add ops dev-supported-ptypes-get to dcf.
A misc feature is not necessary for a release notes update; please remove this.
>
> Removed Items
> -------------
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index 06d752fd61..6a577a6582 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
> return ret;
> }
>
> +static const uint32_t *
> +ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
> +{
> + static const uint32_t ptypes[] = {
> + RTE_PTYPE_L2_ETHER,
> + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
> + RTE_PTYPE_L4_FRAG,
> + RTE_PTYPE_L4_ICMP,
> + RTE_PTYPE_L4_NONFRAG,
> + RTE_PTYPE_L4_SCTP,
> + RTE_PTYPE_L4_TCP,
> + RTE_PTYPE_L4_UDP,
> + RTE_PTYPE_UNKNOWN
> + };
> + return ptypes;
> +}
> +
> static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
> - .dev_start = ice_dcf_dev_start,
> - .dev_stop = ice_dcf_dev_stop,
> - .dev_close = ice_dcf_dev_close,
> - .dev_reset = ice_dcf_dev_reset,
> - .dev_configure = ice_dcf_dev_configure,
> - .dev_infos_get = ice_dcf_dev_info_get,
> - .rx_queue_setup = ice_rx_queue_setup,
> - .tx_queue_setup = ice_tx_queue_setup,
> - .rx_queue_release = ice_dev_rx_queue_release,
> - .tx_queue_release = ice_dev_tx_queue_release,
> - .rx_queue_start = ice_dcf_rx_queue_start,
> - .tx_queue_start = ice_dcf_tx_queue_start,
> - .rx_queue_stop = ice_dcf_rx_queue_stop,
> - .tx_queue_stop = ice_dcf_tx_queue_stop,
> - .link_update = ice_dcf_link_update,
> - .stats_get = ice_dcf_stats_get,
> - .stats_reset = ice_dcf_stats_reset,
> - .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
> - .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
> - .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
> - .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
> - .flow_ops_get = ice_dcf_dev_flow_ops_get,
> - .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
> - .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
> - .tm_ops_get = ice_dcf_tm_ops_get,
> - .reta_update = ice_dcf_dev_rss_reta_update,
> - .reta_query = ice_dcf_dev_rss_reta_query,
> - .rss_hash_update = ice_dcf_dev_rss_hash_update,
> - .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
> - .tx_done_cleanup = ice_tx_done_cleanup,
> - .mtu_set = ice_dcf_dev_mtu_set,
> + .dev_start = ice_dcf_dev_start,
> + .dev_stop = ice_dcf_dev_stop,
> + .dev_close = ice_dcf_dev_close,
> + .dev_reset = ice_dcf_dev_reset,
> + .dev_configure = ice_dcf_dev_configure,
> + .dev_infos_get = ice_dcf_dev_info_get,
> + .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
> + .rx_queue_setup = ice_rx_queue_setup,
> + .tx_queue_setup = ice_tx_queue_setup,
> + .rx_queue_release = ice_dev_rx_queue_release,
> + .tx_queue_release = ice_dev_tx_queue_release,
> + .rx_queue_start = ice_dcf_rx_queue_start,
> + .tx_queue_start = ice_dcf_tx_queue_start,
> + .rx_queue_stop = ice_dcf_rx_queue_stop,
> + .tx_queue_stop = ice_dcf_tx_queue_stop,
> + .link_update = ice_dcf_link_update,
> + .stats_get = ice_dcf_stats_get,
> + .stats_reset = ice_dcf_stats_reset,
> + .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
> + .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
> + .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
> + .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
> + .flow_ops_get = ice_dcf_dev_flow_ops_get,
> + .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
> + .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
> + .tm_ops_get = ice_dcf_tm_ops_get,
> + .reta_update = ice_dcf_dev_rss_reta_update,
> + .reta_query = ice_dcf_dev_rss_reta_query,
> + .rss_hash_update = ice_dcf_dev_rss_hash_update,
> + .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
> + .tx_done_cleanup = ice_tx_done_cleanup,
> + .mtu_set = ice_dcf_dev_mtu_set,
> };
>
> static int
> --
> 2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* RE: [PATCH v6 12/12] net/ice: support DCF new VLAN capabilities
2022-04-27 18:13 ` [PATCH v6 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
@ 2022-04-27 10:46 ` Zhang, Qi Z
0 siblings, 0 replies; 170+ messages in thread
From: Zhang, Qi Z @ 2022-04-27 10:46 UTC (permalink / raw)
To: Liu, KevinX, dev; +Cc: Yang, Qiming, Yang, SteveX, Alvin Zhang
> -----Original Message-----
> From: Liu, KevinX <kevinx.liu@intel.com>
> Sent: Thursday, April 28, 2022 2:13 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Yang, SteveX <stevex.yang@intel.com>; Alvin Zhang
> <alvinx.zhang@intel.com>; Liu, KevinX <kevinx.liu@intel.com>
> Subject: [PATCH v6 12/12] net/ice: support DCF new VLAN capabilities
>
> From: Alvin Zhang <alvinx.zhang@intel.com>
>
> The new VLAN virtchnl opcodes introduce new capabilities like VLAN filtering,
> stripping and insertion.
>
> The DCF needs to query the VLAN capabilities based on the current device
> configuration first.
>
> DCF is able to configure inner VLAN filter when port VLAN is enabled based on
> negotiation; and DCF is able to configure outer VLAN (0x8100) if port VLAN is
> disabled to be compatible with legacy mode.
>
> When port VLAN is updated by DCF, the DCF needs to reset to query the new
> VLAN capabilities.
>
> Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
> ---
> doc/guides/rel_notes/release_22_07.rst | 1 +
> drivers/net/ice/ice_dcf.c | 27 ++++
> drivers/net/ice/ice_dcf.h | 1 +
> drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++---
> 4 files changed, 183 insertions(+), 17 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_22_07.rst
> b/doc/guides/rel_notes/release_22_07.rst
> index 004a6d3343..7c932a7c8a 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -73,6 +73,7 @@ New Features
> * Added add extended stats.
> * Added support queue information getting.
> * Added implement power management.
> + * Added support DCF new VLAN capabilities.
This feature is not exposed to the user, so no release note update is needed.
>
> Removed Items
> -------------
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index
> 55ae68c456..885d58c0f4 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
> return 0;
> }
>
> +static int
> +dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw) {
> + struct virtchnl_vlan_caps vlan_v2_caps;
> + struct dcf_virtchnl_cmd args;
> + int ret;
> +
> + memset(&args, 0, sizeof(args));
> + args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS;
> + args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps;
> + args.rsp_buflen = sizeof(vlan_v2_caps);
> +
> + ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
> + if (ret) {
> + PMD_DRV_LOG(ERR,
> + "Failed to execute command of
> VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS");
> + return ret;
> + }
> +
> + rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps,
> sizeof(vlan_v2_caps));
> + return 0;
> +}
> +
> int
> ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) { @@ -
> 701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct
> ice_dcf_hw *hw)
> rte_intr_enable(pci_dev->intr_handle);
> ice_dcf_enable_irq0(hw);
>
> + if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
> &&
> + dcf_get_vlan_offload_caps_v2(hw))
> + goto err_rss;
> +
> return 0;
>
> err_rss:
> diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index
> 44a61404c3..7f42ebabe9 100644
> --- a/drivers/net/ice/ice_dcf.h
> +++ b/drivers/net/ice/ice_dcf.h
> @@ -129,6 +129,7 @@ struct ice_dcf_hw {
> uint16_t nb_msix;
> uint16_t rxq_map[16];
> struct virtchnl_eth_stats eth_stats_offset;
> + struct virtchnl_vlan_caps vlan_v2_caps;
>
> /* Link status */
> bool link_up;
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index 236c0395e0..8005eb2ab8 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -1050,6 +1050,46 @@ dcf_dev_set_default_mac_addr(struct
> rte_eth_dev *dev,
> return 0;
> }
>
> +static int
> +dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add) {
> + struct virtchnl_vlan_supported_caps *supported_caps =
> + &hw->vlan_v2_caps.filtering.filtering_support;
> + struct virtchnl_vlan *vlan_setting;
> + struct virtchnl_vlan_filter_list_v2 vlan_filter;
> + struct dcf_virtchnl_cmd args;
> + uint32_t filtering_caps;
> + int err;
> +
> + if (supported_caps->outer) {
> + filtering_caps = supported_caps->outer;
> + vlan_setting = &vlan_filter.filters[0].outer;
> + } else {
> + filtering_caps = supported_caps->inner;
> + vlan_setting = &vlan_filter.filters[0].inner;
> + }
> +
> + if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
> + return -ENOTSUP;
> +
> + memset(&vlan_filter, 0, sizeof(vlan_filter));
> + vlan_filter.vport_id = hw->vsi_res->vsi_id;
> + vlan_filter.num_elements = 1;
> + vlan_setting->tpid = RTE_ETHER_TYPE_VLAN;
> + vlan_setting->tci = vlanid;
> +
> + memset(&args, 0, sizeof(args));
> + args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 :
> VIRTCHNL_OP_DEL_VLAN_V2;
> + args.req_msg = (uint8_t *)&vlan_filter;
> + args.req_msglen = sizeof(vlan_filter);
> + err = ice_dcf_execute_virtchnl_cmd(hw, &args);
> + if (err)
> + PMD_DRV_LOG(ERR, "fail to execute command %s",
> + add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2");
> +
> + return err;
> +}
> +
> static int
> dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add) { @@ -
> 1076,6 +1116,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t
> vlanid, bool add)
> return err;
> }
>
> +static int
> +dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int
> +on) {
> + struct ice_dcf_adapter *adapter = dev->data->dev_private;
> + struct ice_dcf_hw *hw = &adapter->real_hw;
> + int err;
> +
> + if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
> + err = dcf_add_del_vlan_v2(hw, vlan_id, on);
> + if (err)
> + return -EIO;
> + return 0;
> + }
> +
> + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
> + return -ENOTSUP;
> +
> + err = dcf_add_del_vlan(hw, vlan_id, on);
> + if (err)
> + return -EIO;
> + return 0;
> +}
> +
> +static void
> +dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable) {
> + struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf;
> + struct ice_dcf_adapter *adapter = dev->data->dev_private;
> + struct ice_dcf_hw *hw = &adapter->real_hw;
> + uint32_t i, j;
> + uint64_t ids;
> +
> + for (i = 0; i < RTE_DIM(vfc->ids); i++) {
> + if (vfc->ids[i] == 0)
> + continue;
> +
> + ids = vfc->ids[i];
> + for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) {
> + if (ids & 1)
> + dcf_add_del_vlan_v2(hw, 64 * i + j, enable);
> + }
> + }
> +}
> +
> +static int
> +dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable) {
> + struct virtchnl_vlan_supported_caps *stripping_caps =
> + &hw->vlan_v2_caps.offloads.stripping_support;
> + struct virtchnl_vlan_setting vlan_strip;
> + struct dcf_virtchnl_cmd args;
> + uint32_t *ethertype;
> + int ret;
> +
> + if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
> + (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE))
> + ethertype = &vlan_strip.outer_ethertype_setting;
> + else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100)
> &&
> + (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE))
> + ethertype = &vlan_strip.inner_ethertype_setting;
> + else
> + return -ENOTSUP;
> +
> + memset(&vlan_strip, 0, sizeof(vlan_strip));
> + vlan_strip.vport_id = hw->vsi_res->vsi_id;
> + *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100;
> +
> + memset(&args, 0, sizeof(args));
> + args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 :
> + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2;
> + args.req_msg = (uint8_t *)&vlan_strip;
> + args.req_msglen = sizeof(vlan_strip);
> + ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
> + if (ret)
> + PMD_DRV_LOG(ERR, "fail to execute command %s",
> + enable ?
> "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" :
> +
> "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2");
> +
> + return ret;
> +}
> +
> +static int
> +dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask) {
> + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> + struct ice_dcf_adapter *adapter = dev->data->dev_private;
> + struct ice_dcf_hw *hw = &adapter->real_hw;
> + bool enable;
> + int err;
> +
> + if (mask & RTE_ETH_VLAN_FILTER_MASK) {
> + enable = !!(rxmode->offloads &
> RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
> +
> + dcf_iterate_vlan_filters_v2(dev, enable);
> + }
> +
> + if (mask & RTE_ETH_VLAN_STRIP_MASK) {
> + enable = !!(rxmode->offloads &
> RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
> +
> + err = dcf_config_vlan_strip_v2(hw, enable);
> + /* If not support, the stripping is already disabled by PF */
> + if (err == -ENOTSUP && !enable)
> + err = 0;
> + if (err)
> + return -EIO;
> + }
> +
> + return 0;
> +}
> +
> static int
> dcf_enable_vlan_strip(struct ice_dcf_hw *hw) { @@ -1108,30 +1258,17 @@
> dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
> return ret;
> }
>
> -static int
> -dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) -{
> - struct ice_dcf_adapter *adapter = dev->data->dev_private;
> - struct ice_dcf_hw *hw = &adapter->real_hw;
> - int err;
> -
> - if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
> - return -ENOTSUP;
> -
> - err = dcf_add_del_vlan(hw, vlan_id, on);
> - if (err)
> - return -EIO;
> - return 0;
> -}
> -
> static int
> dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) {
> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> struct ice_dcf_adapter *adapter = dev->data->dev_private;
> struct ice_dcf_hw *hw = &adapter->real_hw;
> - struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> int err;
>
> + if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
> + return dcf_dev_vlan_offload_set_v2(dev, mask);
> +
> if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
> return -ENOTSUP;
>
> --
> 2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 00/12] complete common VF features for DCF
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
` (11 preceding siblings ...)
2022-04-21 11:14 ` [PATCH v5 12/12] doc: update for ice DCF datapath configuration Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
` (12 more replies)
12 siblings, 13 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The DCF PMD supports the below dev ops:
dev_supported_ptypes_get
dev_link_update
xstats_get
xstats_get_names
xstats_reset
promiscuous_enable
promiscuous_disable
allmulticast_enable
allmulticast_disable
mac_addr_add
mac_addr_remove
set_mc_addr_list
vlan_filter_set
vlan_offload_set
mac_addr_set
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
rxq_info_get
txq_info_get
mtu_set
tx_done_cleanup
get_monitor_addr
v6:
* add patch:
1.net/ice: support DCF new VLAN capabilities
* remove patch:
1.doc: update for ice DCF datapath configuration
* Split the doc update into the specific patches.
v5:
* remove patch:
1.complete common VF features for DCF
2.net/ice: enable CVL DCF device reset API
3.net/ice: support IPv6 NVGRE tunnel
4.net/ice: support new pattern of IPv4
5.net/ice: treat unknown package as OS default package
6.net/ice: handle virtchnl event message without interrupt
7.net/ice: add DCF request queues function
8.net/ice: negotiate large VF and request more queues
9.net/ice: enable multiple queues configurations for large VF
10.net/ice: enable IRQ mapping configuration for large VF
11.net/ice: add enable/disable queues for DCF large VF
v4:
* remove patch:
1.testpmd: force flow flush
2.net/ice: fix DCF ACL flow engine
3.net/ice: fix DCF reset
* add patch:
1.net/ice: add extended stats
2.net/ice: support queue information getting
3.net/ice: implement power management
4.doc: update for ice DCF datapath configuration
v3:
* remove patch:
1.net/ice/base: add VXLAN support for switch filter
2.net/ice: add VXLAN support for switch filter
3.common/iavf: support flushing rules and reporting DCF id
4.net/ice/base: fix ethertype filter input set
5.net/ice/base: support IPv6 GRE UDP pattern
6.net/ice/base: support new patterns of TCP and UDP
7.net/ice: support new patterns of TCP and UDP
8.net/ice/base: support IPv4 GRE tunnel
9.net/ice: support IPv4 GRE raw pattern type
10.net/ice/base: update Profile ID table for VXLAN
11.net/ice/base: update Protocol ID table to match DVM DDP
v2:
* remove patch:
1.net/iavf: support checking if device is an MDCF instance
2.net/ice: support MDCF(multi-DCF) instance
3.net/ice/base: support custom DDP buildin recipe
4.net/ice: support buildin recipe configuration
5.net/ice/base: support custom ddp package version
6.net/ice: disable ACL function for MDCF instance
Alvin Zhang (3):
net/ice: support dcf promisc configuration
net/ice: support dcf VLAN filter and offload configuration
net/ice: support DCF new VLAN capabilities
Jie Wang (2):
net/ice: add ops MTU-SET to dcf
net/ice: add ops dev-supported-ptypes-get to dcf
Kevin Liu (4):
net/ice: support dcf MAC configuration
net/ice: add extended stats
net/ice: support queue information getting
net/ice: implement power management
Robin Zhang (1):
net/ice: cleanup Tx buffers
Steve Yang (2):
net/ice: enable RSS RETA ops for DCF hardware
net/ice: enable RSS HASH ops for DCF hardware
doc/guides/nics/features/ice_dcf.ini | 10 +
doc/guides/rel_notes/release_22_07.rst | 14 +
drivers/net/ice/ice_dcf.c | 40 +-
drivers/net/ice/ice_dcf.h | 29 +-
drivers/net/ice/ice_dcf_ethdev.c | 820 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 10 +
6 files changed, 885 insertions(+), 38 deletions(-)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 10:38 ` Zhang, Qi Z
2022-04-27 18:12 ` [PATCH v6 02/12] net/ice: enable RSS HASH " Kevin Liu
` (11 subsequent siblings)
12 siblings, 1 reply; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS RETA should be updated and queried by the application.
Add the related ops ('.reta_update', '.reta_query') for DCF.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
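For illustration, a sketch of the application-side usage these ops enable; the round-robin layout and the port id are placeholders:
#include <errno.h>
#include <string.h>
#include <rte_common.h>
#include <rte_ethdev.h>
/* Hypothetical helper: spread the redirection table round-robin over
 * nb_q queues via the new .reta_update ops. Assumes the device RETA
 * size fits in 8 groups (512 entries).
 */
static int
dcf_example_spread_reta(uint16_t port_id, uint16_t nb_q)
{
        struct rte_eth_rss_reta_entry64 conf[8];
        struct rte_eth_dev_info info;
        uint16_t i, idx, shift;
        int ret;
        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0 ||
            info.reta_size > RTE_DIM(conf) * RTE_ETH_RETA_GROUP_SIZE)
                return -EINVAL;
        memset(conf, 0, sizeof(conf));
        for (i = 0; i < info.reta_size; i++) {
                idx = i / RTE_ETH_RETA_GROUP_SIZE;
                shift = i % RTE_ETH_RETA_GROUP_SIZE;
                conf[idx].mask |= 1ULL << shift;
                conf[idx].reta[shift] = i % nb_q;
        }
        return rte_eth_dev_rss_reta_update(port_id, conf, info.reta_size);
}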
---
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 3 +
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++
5 files changed, 83 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 54073f0b88..5221c99a9c 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -15,6 +15,7 @@ L3 checksum offload = P
L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
+RSS reta update = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 90123bb807..cbdc90760c 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -60,6 +60,9 @@ New Features
* Added Tx QoS queue rate limitation support.
* Added quanta size configuration support.
+* **Updated Intel ice driver.**
+
+ * Added enable RSS RETA ops for DCF hardware.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f0c074b01..070d1b71ac 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
return err;
}
-static int
+int
ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_lut *rss_lut;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 6ec766ebda..b2c6aa2684 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59610e058f..1ac66ed990 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint8_t *lut;
+ uint16_t i, idx, shift;
+ int ret;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ /* store the old LUT temporarily */
+ rte_memcpy(lut, hw->rss_lut, reta_size);
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ /* send virtchnl ops to configure RSS */
+ ret = ice_dcf_configure_rss_lut(hw);
+ if (ret) /* revert back */
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint16_t i, idx, shift;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number of hardware can "
+ "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = hw->rss_lut[i];
+ }
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
.tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 02/12] net/ice: enable RSS HASH ops for DCF hardware
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
2022-04-27 18:12 ` [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 03/12] net/ice: cleanup Tx buffers Kevin Liu
` (10 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS HASH should be updated and queried by the application.
Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF.
Because DCF doesn't support configuring the RSS hash, only the hash key can
be updated within the '.rss_hash_update' ops.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
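For illustration, a sketch of updating only the hash key from an application, consistent with the limitation above; the key buffer and port id are caller-provided placeholders:
#include <rte_ethdev.h>
/* Hypothetical helper: update only the RSS hash key via the new
 * .rss_hash_update ops; rss_hf is left at 0 since DCF does not allow
 * reconfiguring the hash functions here.
 */
static int
dcf_example_update_rss_key(uint16_t port_id, uint8_t *key, uint8_t key_len)
{
        struct rte_eth_rss_conf conf = {
                .rss_key = key,
                .rss_key_len = key_len,
                .rss_hf = 0,
        };
        return rte_eth_dev_rss_hash_update(port_id, &conf);
}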
---
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++
5 files changed, 55 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 5221c99a9c..d9c1b25407 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -16,6 +16,7 @@ L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
RSS reta update = Y
+RSS key update = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index cbdc90760c..cc2c243e81 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -63,6 +63,7 @@ New Features
* **Updated Intel ice driver.**
* Added enable RSS RETA ops for DCF hardware.
+ * Added enable RSS HASH ops for DCF hardware.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 070d1b71ac..89c0203ba3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
hw->ets_config = NULL;
}
-static int
+int
ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_key *rss_key;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index b2c6aa2684..f0b45af5ae 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ac66ed990..ccad7fc304 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* HENA setting, it is enabled by default, no change */
+ if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) {
+ PMD_DRV_LOG(ERR, "The size of hash key configured "
+ "(%d) doesn't match the size of hardware can "
+ "support (%d)", rss_conf->rss_key_len,
+ hw->vf_res->rss_key_size);
+ return -EINVAL;
+ }
+
+ rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ return ice_dcf_configure_rss_key(hw);
+}
+
+static int
+ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* Just set it to default value now. */
+ rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL;
+
+ if (!rss_conf->rss_key)
+ return 0;
+
+ rss_conf->rss_key_len = hw->vf_res->rss_key_size;
+ rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len);
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tm_ops_get = ice_dcf_tm_ops_get,
.reta_update = ice_dcf_dev_rss_reta_update,
.reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 03/12] net/ice: cleanup Tx buffers
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
2022-04-27 18:12 ` [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-27 18:12 ` [PATCH v6 02/12] net/ice: enable RSS HASH " Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 10:41 ` Zhang, Qi Z
2022-04-27 18:12 ` [PATCH v6 04/12] net/ice: add ops MTU-SET to dcf Kevin Liu
` (9 subsequent siblings)
12 siblings, 1 reply; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Robin Zhang, Kevin Liu
From: Robin Zhang <robinx.zhang@intel.com>
Add support for the rte_eth_tx_done_cleanup ops in DCF
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
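For illustration, a sketch of the application-side call this ops serves; the port/queue ids and the count are placeholders:
#include <rte_ethdev.h>
/* Hypothetical helper: ask the driver to free up to 32 already
 * transmitted mbufs on Tx queue 0; a free_cnt of 0 would mean "free as
 * many as possible".
 */
static int
dcf_example_reclaim_tx_mbufs(uint16_t port_id)
{
        return rte_eth_tx_done_cleanup(port_id, 0, 32);
}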
---
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index cc2c243e81..bbd3d296de 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -64,6 +64,7 @@ New Features
* Added enable RSS RETA ops for DCF hardware.
* Added enable RSS HASH ops for DCF hardware.
+ * Added cleanup Tx buffers.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ccad7fc304..d8b5961514 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.reta_query = ice_dcf_dev_rss_reta_query,
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 04/12] net/ice: add ops MTU-SET to dcf
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (2 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 03/12] net/ice: cleanup Tx buffers Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 05/12] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
` (8 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "mtu_set" to dcf, and it can configure the port mtu through
cmdline.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
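For reference, a minimal usage sketch (not part of the commit; the port id
and MTU value are examples, and the port must be stopped because the new op
returns -EBUSY on a started port). From testpmd the same path is reached
with "port config mtu <port_id> <value>".
    #include <rte_ethdev.h>
    /* Set a 1500-byte MTU on a stopped port. */
    static int
    demo_set_mtu(uint16_t port_id)
    {
        return rte_eth_dev_set_mtu(port_id, 1500);
    }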
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++
4 files changed, 22 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index d9c1b25407..be34ab4692 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -17,6 +17,7 @@ Inner L3 checksum = P
Inner L4 checksum = P
RSS reta update = Y
RSS key update = Y
+MTU update = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index bbd3d296de..dc37de85f3 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,7 @@ New Features
* Added enable RSS RETA ops for DCF hardware.
* Added enable RSS HASH ops for DCF hardware.
* Added cleanup Tx buffers.
+ * Added add ops MTU-SET to dcf.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d8b5961514..06d752fd61 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &new_link);
}
+static int
+ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* MTU setting is forbidden if the port is started */
+ if (dev->data->dev_started != 0) {
+ PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
+ dev->data->port_id);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
bool
ice_dcf_adminq_need_retry(struct ice_adapter *ad)
{
@@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
.tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 11a1305038..f2faf26f58 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -15,6 +15,12 @@
#define ICE_DCF_MAX_RINGS 1
+#define ICE_DCF_FRAME_SIZE_MAX 9728
+#define ICE_DCF_VLAN_TAG_SIZE 4
+#define ICE_DCF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
+#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+
struct ice_dcf_queue {
uint64_t dummy;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 05/12] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (3 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 04/12] net/ice: add ops MTU-SET to dcf Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 10:44 ` Zhang, Qi Z
2022-04-27 18:12 ` [PATCH v6 06/12] net/ice: support dcf promisc configuration Kevin Liu
` (7 subsequent siblings)
12 siblings, 1 reply; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "dev_supported_ptypes_get" to dcf, that dcf pmd can get
ptypes through the new API.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
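For reference, a minimal sketch that dumps the ptypes reported by the new
op (not part of the commit; the port id and buffer sizes are examples):
    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf_ptype.h>
    static void
    demo_ptypes(uint16_t port_id)
    {
        uint32_t ptypes[32];
        char name[128];
        int i, num;
        num = rte_eth_dev_get_supported_ptypes(port_id,
                RTE_PTYPE_ALL_MASK, ptypes, RTE_DIM(ptypes));
        for (i = 0; i < num; i++) {
            rte_get_ptype_name(ptypes[i], name, sizeof(name));
            printf("0x%08x: %s\n", ptypes[i], name);
        }
    }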
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 80 ++++++++++++++++----------
2 files changed, 50 insertions(+), 31 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index dc37de85f3..a39196c605 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -66,6 +66,7 @@ New Features
* Added enable RSS HASH ops for DCF hardware.
* Added cleanup Tx buffers.
* Added add ops MTU-SET to dcf.
+ * Added add ops dev-supported-ptypes-get to dcf.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 06d752fd61..6a577a6582 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+static const uint32_t *
+ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+ return ptypes;
+}
+
static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
- .dev_start = ice_dcf_dev_start,
- .dev_stop = ice_dcf_dev_stop,
- .dev_close = ice_dcf_dev_close,
- .dev_reset = ice_dcf_dev_reset,
- .dev_configure = ice_dcf_dev_configure,
- .dev_infos_get = ice_dcf_dev_info_get,
- .rx_queue_setup = ice_rx_queue_setup,
- .tx_queue_setup = ice_tx_queue_setup,
- .rx_queue_release = ice_dev_rx_queue_release,
- .tx_queue_release = ice_dev_tx_queue_release,
- .rx_queue_start = ice_dcf_rx_queue_start,
- .tx_queue_start = ice_dcf_tx_queue_start,
- .rx_queue_stop = ice_dcf_rx_queue_stop,
- .tx_queue_stop = ice_dcf_tx_queue_stop,
- .link_update = ice_dcf_link_update,
- .stats_get = ice_dcf_stats_get,
- .stats_reset = ice_dcf_stats_reset,
- .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
- .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
- .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
- .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
- .flow_ops_get = ice_dcf_dev_flow_ops_get,
- .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
- .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
- .tm_ops_get = ice_dcf_tm_ops_get,
- .reta_update = ice_dcf_dev_rss_reta_update,
- .reta_query = ice_dcf_dev_rss_reta_query,
- .rss_hash_update = ice_dcf_dev_rss_hash_update,
- .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
- .tx_done_cleanup = ice_tx_done_cleanup,
- .mtu_set = ice_dcf_dev_mtu_set,
+ .dev_start = ice_dcf_dev_start,
+ .dev_stop = ice_dcf_dev_stop,
+ .dev_close = ice_dcf_dev_close,
+ .dev_reset = ice_dcf_dev_reset,
+ .dev_configure = ice_dcf_dev_configure,
+ .dev_infos_get = ice_dcf_dev_info_get,
+ .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .rx_queue_release = ice_dev_rx_queue_release,
+ .tx_queue_release = ice_dev_tx_queue_release,
+ .rx_queue_start = ice_dcf_rx_queue_start,
+ .tx_queue_start = ice_dcf_tx_queue_start,
+ .rx_queue_stop = ice_dcf_rx_queue_stop,
+ .tx_queue_stop = ice_dcf_tx_queue_stop,
+ .link_update = ice_dcf_link_update,
+ .stats_get = ice_dcf_stats_get,
+ .stats_reset = ice_dcf_stats_reset,
+ .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
+ .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
+ .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
+ .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .flow_ops_get = ice_dcf_dev_flow_ops_get,
+ .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
+ .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+ .tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 06/12] net/ice: support dcf promisc configuration
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (4 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 05/12] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 07/12] net/ice: support dcf MAC configuration Kevin Liu
` (6 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support configuring unicast and multicast promiscuous mode on the DCF.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
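For reference, a minimal usage sketch of the four toggles wired up here
(not part of the commit; the port id is an example). Each call lands in
dcf_config_promisc() and issues VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE to
the PF:
    #include <rte_ethdev.h>
    static void
    demo_promisc(uint16_t port_id)
    {
        /* unicast promisc on/off; the multicast state is preserved */
        rte_eth_promiscuous_enable(port_id);
        rte_eth_promiscuous_disable(port_id);
        /* multicast promisc on/off; the unicast state is preserved */
        rte_eth_allmulticast_enable(port_id);
        rte_eth_allmulticast_disable(port_id);
    }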
doc/guides/nics/features/ice_dcf.ini | 2 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 3 +
4 files changed, 79 insertions(+), 4 deletions(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index be34ab4692..fe3ada8733 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -18,6 +18,8 @@ Inner L4 checksum = P
RSS reta update = Y
RSS key update = Y
MTU update = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a39196c605..c7ba4453ff 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -67,6 +67,7 @@ New Features
* Added cleanup Tx buffers.
* Added add ops MTU-SET to dcf.
* Added add ops dev-supported-ptypes-get to dcf.
+ * Added support dcf promisc configuration.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6a577a6582..87d281ee93 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
}
static int
-ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+dcf_config_promisc(struct ice_dcf_adapter *adapter,
+ bool enable_unicast,
+ bool enable_multicast)
{
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_promisc_info promisc;
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ promisc.flags = 0;
+ promisc.vsi_id = hw->vsi_res->vsi_id;
+
+ if (enable_unicast)
+ promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+ if (enable_multicast)
+ promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+ args.req_msg = (uint8_t *)&promisc;
+ args.req_msglen = sizeof(promisc);
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE");
+ return err;
+ }
+
+ adapter->promisc_unicast_enabled = enable_unicast;
+ adapter->promisc_multicast_enabled = enable_multicast;
return 0;
}
+static int
+ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, true,
+ adapter->promisc_multicast_enabled);
+}
+
static int
ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, false,
+ adapter->promisc_multicast_enabled);
}
static int
ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ true);
}
static int
ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ false);
}
static int
@@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
+ dcf_config_promisc(adapter, false, false);
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index f2faf26f58..22e450527b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -33,6 +33,9 @@ struct ice_dcf_adapter {
struct ice_adapter parent; /* Must be first */
struct ice_dcf_hw real_hw;
+ bool promisc_unicast_enabled;
+ bool promisc_multicast_enabled;
+
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 07/12] net/ice: support dcf MAC configuration
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (5 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 06/12] net/ice: support dcf promisc configuration Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
` (5 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
Below PMD ops are supported in this patch:
.mac_addr_add = dcf_dev_add_mac_addr
.mac_addr_remove = dcf_dev_del_mac_addr
.set_mc_addr_list = dcf_set_mc_addr_list
.mac_addr_set = dcf_dev_set_default_mac_addr
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
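For reference, a minimal usage sketch of the four new ops (not part of the
commit; the port id and addresses are made-up examples):
    #include <rte_ethdev.h>
    #include <rte_ether.h>
    static void
    demo_mac_ops(uint16_t port_id)
    {
        struct rte_ether_addr extra = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
        struct rte_ether_addr mcast = {
            .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } };
        /* VIRTCHNL_ETHER_ADDR_EXTRA add/remove */
        rte_eth_dev_mac_addr_add(port_id, &extra, 0);
        rte_eth_dev_mac_addr_remove(port_id, &extra);
        /* replaces the whole multicast list in one call */
        rte_eth_dev_set_mc_addr_list(port_id, &mcast, 1);
        /* VIRTCHNL_ETHER_ADDR_PRIMARY: delete old address, add new one */
        rte_eth_dev_default_mac_addr_set(port_id, &extra);
    }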
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf.c | 9 +-
drivers/net/ice/ice_dcf.h | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 5 +-
6 files changed, 228 insertions(+), 10 deletions(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index fe3ada8733..c9bdbcd6cc 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -20,6 +20,7 @@ RSS key update = Y
MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index c7ba4453ff..e29ec16720 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -68,6 +68,7 @@ New Features
* Added add ops MTU-SET to dcf.
* Added add ops dev-supported-ptypes-get to dcf.
* Added support dcf promisc configuration.
+ * Added support dcf MAC configuration.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 89c0203ba3..55ae68c456 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
}
int
-ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr,
+ bool add, uint8_t type)
{
struct virtchnl_ether_addr_list *list;
- struct rte_ether_addr *addr;
struct dcf_virtchnl_cmd args;
int len, err = 0;
@@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
}
len = sizeof(struct virtchnl_ether_addr_list);
- addr = hw->eth_dev->data->mac_addrs;
len += sizeof(struct virtchnl_ether_addr);
list = rte_zmalloc(NULL, len, 0);
@@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
rte_memcpy(list->list[0].addr, addr->addr_bytes,
sizeof(addr->addr_bytes));
+
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
-
+ list->list[0].type = type;
list->vsi_id = hw->vsi_res->vsi_id;
list->num_elements = 1;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index f0b45af5ae..78df202a77 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
-int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr, bool add,
+ uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 87d281ee93..0d944f9fd2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -26,6 +26,12 @@
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#define DCF_NUM_MACADDR_MAX 64
+
+static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add);
+
static int
ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- ret = ice_dcf_add_del_all_mac_addr(hw, true);
+ ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs,
+ true, VIRTCHNL_ETHER_ADDR_PRIMARY);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to add mac addr");
return ret;
}
+ if (dcf_ad->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, true);
+ if (ret)
+ return ret;
+ }
+
+
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
@@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
rte_intr_efd_disable(intr_handle);
rte_intr_vec_list_free(intr_handle);
- ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
+ ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw,
+ dcf_ad->real_hw.eth_dev->data->mac_addrs,
+ false, VIRTCHNL_ETHER_ADDR_PRIMARY);
+
+ if (dcf_ad->mc_addrs_num)
+ /* flush previous addresses */
+ (void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw,
+ dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, false);
+
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- dev_info->max_mac_addrs = 1;
+ dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
@@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
false);
}
+static int
+dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ int err;
+
+ if (rte_is_zero_ether_addr(addr)) {
+ PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+ return -EINVAL;
+ }
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to add MAC address");
+ return err;
+ }
+
+ return 0;
+}
+
+static void
+dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct rte_ether_addr *addr = &dev->data->mac_addrs[index];
+ int err;
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to remove MAC address");
+}
+
+static int
+dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add)
+{
+ struct virtchnl_ether_addr_list *list;
+ struct dcf_virtchnl_cmd args;
+ uint32_t i;
+ int len, err = 0;
+
+ len = sizeof(struct virtchnl_ether_addr_list);
+ len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
+
+ list = rte_zmalloc(NULL, len, 0);
+ if (!list) {
+ PMD_DRV_LOG(ERR, "fail to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
+ sizeof(list->list[i].addr));
+ list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ list->vsi_id = hw->vsi_res->vsi_id;
+ list->num_elements = mc_addrs_num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+ VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.req_msg = (uint8_t *)list;
+ args.req_msglen = len;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" :
+ "OP_DEL_ETHER_ADDRESS");
+ rte_free(list);
+ return err;
+}
+
+static int
+dcf_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i;
+ int ret;
+
+
+ if (mc_addrs_num > DCF_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR,
+ "can't add more than a limited number (%u) of addresses.",
+ (uint32_t)DCF_NUM_MACADDR_MAX);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addrs[i])) {
+ const uint8_t *mac = mc_addrs[i].addr_bytes;
+
+ PMD_DRV_LOG(ERR,
+ "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x",
+ mac[0], mac[1], mac[2], mac[3], mac[4],
+ mac[5]);
+ return -EINVAL;
+ }
+ }
+
+ if (adapter->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num, false);
+ if (ret)
+ return ret;
+ }
+ if (!mc_addrs_num) {
+ adapter->mc_addrs_num = 0;
+ return 0;
+ }
+
+ /* add new ones */
+ ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true);
+ if (ret) {
+ /* if adding mac address list fails, should add the
+ * previous addresses back.
+ */
+ if (adapter->mc_addrs_num)
+ (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num,
+ true);
+ return ret;
+ }
+ adapter->mc_addrs_num = mc_addrs_num;
+ memcpy(adapter->mc_addrs,
+ mc_addrs, mc_addrs_num * sizeof(*mc_addrs));
+
+ return 0;
+}
+
+static int
+dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_ether_addr *old_addr;
+ int ret;
+
+ old_addr = hw->eth_dev->data->mac_addrs;
+ if (rte_is_same_ether_addr(old_addr, mac_addr))
+ return 0;
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ old_addr->addr_bytes[0],
+ old_addr->addr_bytes[1],
+ old_addr->addr_bytes[2],
+ old_addr->addr_bytes[3],
+ old_addr->addr_bytes[4],
+ old_addr->addr_bytes[5]);
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ mac_addr->addr_bytes[0],
+ mac_addr->addr_bytes[1],
+ mac_addr->addr_bytes[2],
+ mac_addr->addr_bytes[3],
+ mac_addr->addr_bytes[4],
+ mac_addr->addr_bytes[5]);
+
+ if (ret)
+ return -EIO;
+
+ rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs);
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
.allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .mac_addr_add = dcf_dev_add_mac_addr,
+ .mac_addr_remove = dcf_dev_del_mac_addr,
+ .set_mc_addr_list = dcf_set_mc_addr_list,
+ .mac_addr_set = dcf_dev_set_default_mac_addr,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 22e450527b..27f6402786 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -14,7 +14,7 @@
#include "ice_dcf.h"
#define ICE_DCF_MAX_RINGS 1
-
+#define DCF_NUM_MACADDR_MAX 64
#define ICE_DCF_FRAME_SIZE_MAX 9728
#define ICE_DCF_VLAN_TAG_SIZE 4
#define ICE_DCF_ETH_OVERHEAD \
@@ -35,7 +35,8 @@ struct ice_dcf_adapter {
bool promisc_unicast_enabled;
bool promisc_multicast_enabled;
-
+ uint32_t mc_addrs_num;
+ struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX];
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 08/12] net/ice: support dcf VLAN filter and offload configuration
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (6 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 07/12] net/ice: support dcf MAC configuration Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 09/12] net/ice: add extended stats Kevin Liu
` (4 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Below PMD ops are supported in this patch:
.vlan_filter_set = dcf_dev_vlan_filter_set
.vlan_offload_set = dcf_dev_vlan_offload_set
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
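For reference, a minimal usage sketch of the two new ops (not part of the
commit; the port id and VLAN id are examples):
    #include <rte_ethdev.h>
    static void
    demo_vlan(uint16_t port_id)
    {
        /* dcf_dev_vlan_filter_set(): add VLAN 100, then remove it */
        rte_eth_dev_vlan_filter(port_id, 100, 1);
        rte_eth_dev_vlan_filter(port_id, 100, 0);
        /* dcf_dev_vlan_offload_set(): turn Rx VLAN stripping on */
        rte_eth_dev_set_vlan_offload(port_id,
                rte_eth_dev_get_vlan_offload(port_id) |
                RTE_ETH_VLAN_STRIP_OFFLOAD);
    }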
doc/guides/nics/features/ice_dcf.ini | 2 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++
3 files changed, 104 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index c9bdbcd6cc..01e7527915 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -21,6 +21,8 @@ MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+VLAN filter = Y
+VLAN offload = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index e29ec16720..268f3bba9a 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -69,6 +69,7 @@ New Features
* Added add ops dev-supported-ptypes-get to dcf.
* Added support dcf promisc configuration.
* Added support dcf MAC configuration.
+ * Added support dcf VLAN filter and offload configuration.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0d944f9fd2..e58cdf47d2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_filter_list *vlan_list;
+ uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+ sizeof(uint16_t)];
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+ vlan_list->vsi_id = hw->vsi_res->vsi_id;
+ vlan_list->num_elements = 1;
+ vlan_list->vlan_id[0] = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+ args.req_msg = cmd_buffer;
+ args.req_msglen = sizeof(cmd_buffer);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
+
+ return err;
+}
+
+static int
+dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_ENABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static int
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ /* Vlan stripping setting */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ /* Enable or disable VLAN stripping */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ err = dcf_enable_vlan_strip(hw);
+ else
+ err = dcf_disable_vlan_strip(hw);
+
+ if (err)
+ return -EIO;
+ }
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mac_addr_remove = dcf_dev_del_mac_addr,
.set_mc_addr_list = dcf_set_mc_addr_list,
.mac_addr_set = dcf_dev_set_default_mac_addr,
+ .vlan_filter_set = dcf_dev_vlan_filter_set,
+ .vlan_offload_set = dcf_dev_vlan_offload_set,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 09/12] net/ice: add extended stats
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (7 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:12 ` [PATCH v6 10/12] net/ice: support queue information getting Kevin Liu
` (3 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add an implementation of the xstats functions to the DCF PMD.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
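For reference, a minimal sketch that dumps the new counters (not part of
the commit; the port id is an example and error handling is trimmed):
    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>
    static void
    demo_xstats(uint16_t port_id)
    {
        int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (n <= 0)
            return;
        struct rte_eth_xstat_name names[n];
        struct rte_eth_xstat vals[n];
        rte_eth_xstats_get_names(port_id, names, n);
        if (rte_eth_xstats_get(port_id, vals, n) != n)
            return;
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
    }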
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf.h | 22 ++++++++
drivers/net/ice/ice_dcf_ethdev.c | 75 ++++++++++++++++++++++++++
4 files changed, 99 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 01e7527915..54ea7f150c 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -23,6 +23,7 @@ Allmulticast mode = Y
Unicast MAC filter = Y
VLAN filter = Y
VLAN offload = Y
+Extended stats = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 268f3bba9a..1f404a6ee5 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -70,6 +70,7 @@ New Features
* Added support dcf promisc configuration.
* Added support dcf MAC configuration.
* Added support dcf VLAN filter and offload configuration.
+ * Added add extended stats.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78df202a77..44a61404c3 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,12 @@
#include "base/ice_type.h"
#include "ice_logs.h"
+/* ICE_DCF_DEV_PRIVATE_TO */
+#define ICE_DCF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+ ((struct ice_dcf_adapter *)adapter)
+#define ICE_DCF_DEV_PRIVATE_TO_VF(adapter) \
+ (&((struct ice_dcf_adapter *)adapter)->vf)
+
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -74,6 +80,22 @@ struct ice_dcf_tm_conf {
bool committed;
};
+struct ice_dcf_eth_stats {
+ u64 rx_bytes; /* gorc */
+ u64 rx_unicast; /* uprc */
+ u64 rx_multicast; /* mprc */
+ u64 rx_broadcast; /* bprc */
+ u64 rx_discards; /* rdpc */
+ u64 rx_unknown_protocol; /* rupp */
+ u64 tx_bytes; /* gotc */
+ u64 tx_unicast; /* uptc */
+ u64 tx_multicast; /* mptc */
+ u64 tx_broadcast; /* bptc */
+ u64 tx_discards; /* tdpc */
+ u64 tx_errors; /* tepc */
+ u64 rx_no_desc; /* repc */
+ u64 rx_errors; /* repc */
+};
struct ice_dcf_hw {
struct iavf_hw avf;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e58cdf47d2..6503700e02 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,30 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
static int
ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev);
+struct rte_ice_dcf_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+static const struct rte_ice_dcf_xstats_name_off rte_ice_dcf_stats_strings[] = {
+ {"rx_bytes", offsetof(struct ice_dcf_eth_stats, rx_bytes)},
+ {"rx_unicast_packets", offsetof(struct ice_dcf_eth_stats, rx_unicast)},
+ {"rx_multicast_packets", offsetof(struct ice_dcf_eth_stats, rx_multicast)},
+ {"rx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, rx_broadcast)},
+ {"rx_dropped_packets", offsetof(struct ice_dcf_eth_stats, rx_discards)},
+ {"rx_unknown_protocol_packets", offsetof(struct ice_dcf_eth_stats,
+ rx_unknown_protocol)},
+ {"tx_bytes", offsetof(struct ice_dcf_eth_stats, tx_bytes)},
+ {"tx_unicast_packets", offsetof(struct ice_dcf_eth_stats, tx_unicast)},
+ {"tx_multicast_packets", offsetof(struct ice_dcf_eth_stats, tx_multicast)},
+ {"tx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, tx_broadcast)},
+ {"tx_dropped_packets", offsetof(struct ice_dcf_eth_stats, tx_discards)},
+ {"tx_error_packets", offsetof(struct ice_dcf_eth_stats, tx_errors)},
+};
+
+#define ICE_DCF_NB_XSTATS (sizeof(rte_ice_dcf_stats_strings) / \
+ sizeof(rte_ice_dcf_stats_strings[0]))
+
static uint16_t
ice_dcf_recv_pkts(__rte_unused void *rx_queue,
__rte_unused struct rte_mbuf **bufs,
@@ -1358,6 +1382,54 @@ ice_dcf_stats_reset(struct rte_eth_dev *dev)
return 0;
}
+static int ice_dcf_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ unsigned int i;
+
+ if (xstats_names != NULL)
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ snprintf(xstats_names[i].name,
+ sizeof(xstats_names[i].name),
+ "%s", rte_ice_dcf_stats_strings[i].name);
+ }
+ return ICE_DCF_NB_XSTATS;
+}
+
+static int ice_dcf_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n)
+{
+ int ret;
+ unsigned int i;
+ struct ice_dcf_adapter *adapter =
+ ICE_DCF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_eth_stats *postats = &hw->eth_stats_offset;
+ struct virtchnl_eth_stats pnstats;
+
+ if (n < ICE_DCF_NB_XSTATS)
+ return ICE_DCF_NB_XSTATS;
+
+ ret = ice_dcf_query_stats(hw, &pnstats);
+ if (ret != 0)
+ return 0;
+
+ if (!xstats)
+ return 0;
+
+ ice_dcf_update_stats(postats, &pnstats);
+
+ /* loop over xstats array and values from pstats */
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ xstats[i].id = i;
+ xstats[i].value = *(uint64_t *)(((char *)&pnstats) +
+ rte_ice_dcf_stats_strings[i].offset);
+ }
+
+ return ICE_DCF_NB_XSTATS;
+}
+
static void
ice_dcf_free_repr_info(struct ice_dcf_adapter *dcf_adapter)
{
@@ -1629,6 +1701,9 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
+ .xstats_get = ice_dcf_xstats_get,
+ .xstats_get_names = ice_dcf_xstats_get_names,
+ .xstats_reset = ice_dcf_stats_reset,
.promiscuous_enable = ice_dcf_dev_promiscuous_enable,
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 10/12] net/ice: support queue information getting
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (8 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 09/12] net/ice: add extended stats Kevin Liu
@ 2022-04-27 18:12 ` Kevin Liu
2022-04-27 18:13 ` [PATCH v6 11/12] net/ice: implement power management Kevin Liu
` (2 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:12 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add the below ops:
rxq_info_get
txq_info_get
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
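For reference, a minimal usage sketch (not part of the commit; the port id
and queue 0 are examples):
    #include <stdio.h>
    #include <rte_ethdev.h>
    static void
    demo_queue_info(uint16_t port_id)
    {
        struct rte_eth_rxq_info rxq;
        struct rte_eth_txq_info txq;
        if (rte_eth_rx_queue_info_get(port_id, 0, &rxq) == 0)
            printf("rxq0 nb_desc=%u\n", rxq.nb_desc);
        if (rte_eth_tx_queue_info_get(port_id, 0, &txq) == 0)
            printf("txq0 nb_desc=%u\n", txq.nb_desc);
    }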
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 2 ++
2 files changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 1f404a6ee5..0d6577cd74 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -71,6 +71,7 @@ New Features
* Added support dcf MAC configuration.
* Added support dcf VLAN filter and offload configuration.
* Added add extended stats.
+ * Added support queue information getting.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6503700e02..9217392d04 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1698,6 +1698,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_start = ice_dcf_tx_queue_start,
.rx_queue_stop = ice_dcf_rx_queue_stop,
.tx_queue_stop = ice_dcf_tx_queue_stop,
+ .rxq_info_get = ice_rxq_info_get,
+ .txq_info_get = ice_txq_info_get,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 11/12] net/ice: implement power management
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (9 preceding siblings ...)
2022-04-27 18:12 ` [PATCH v6 10/12] net/ice: support queue information getting Kevin Liu
@ 2022-04-27 18:13 ` Kevin Liu
2022-04-27 18:13 ` [PATCH v6 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Implement support for the power management API by adding a
'get_monitor_addr' callback that returns the address of an Rx ring's
status bit.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
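For reference, a minimal sketch of how an application can consume the new
callback through the generic PMD power management API (not part of the
commit; the lcore/port/queue ids are examples):
    #include <rte_ethdev.h>
    #include <rte_power_pmd_mgmt.h>
    /* With get_monitor_addr in place, the polling lcore can sleep on the
     * Rx descriptor DD bit (e.g. via UMWAIT) until traffic arrives. */
    static int
    demo_power_monitor(unsigned int lcore_id, uint16_t port_id)
    {
        return rte_power_ethdev_pmgmt_queue_enable(lcore_id, port_id,
                0, RTE_POWER_MGMT_TYPE_MONITOR);
    }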
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 1 +
3 files changed, 3 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 54ea7f150c..3b11622d4c 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -25,6 +25,7 @@ VLAN filter = Y
VLAN offload = Y
Extended stats = Y
Basic stats = Y
+Power mgmt address monitor = Y
Linux = Y
x86-32 = Y
x86-64 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 0d6577cd74..004a6d3343 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -72,6 +72,7 @@ New Features
* Added support dcf VLAN filter and offload configuration.
* Added add extended stats.
* Added support queue information getting.
+ * Added implement power management.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 9217392d04..236c0395e0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1700,6 +1700,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_stop = ice_dcf_tx_queue_stop,
.rxq_info_get = ice_rxq_info_get,
.txq_info_get = ice_txq_info_get,
+ .get_monitor_addr = ice_get_monitor_addr,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v6 12/12] net/ice: support DCF new VLAN capabilities
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (10 preceding siblings ...)
2022-04-27 18:13 ` [PATCH v6 11/12] net/ice: implement power management Kevin Liu
@ 2022-04-27 18:13 ` Kevin Liu
2022-04-27 10:46 ` Zhang, Qi Z
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
12 siblings, 1 reply; 170+ messages in thread
From: Kevin Liu @ 2022-04-27 18:13 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The new VLAN virtchnl opcodes introduce new capabilities such as VLAN
filtering, stripping and insertion.
The DCF first needs to query the VLAN capabilities based on the current
device configuration.
Based on the negotiated capabilities, the DCF can configure an inner
VLAN filter when a port VLAN is enabled, and can configure an outer
VLAN (0x8100) when the port VLAN is disabled, to stay compatible with
legacy mode.
When the port VLAN is updated by the DCF, the DCF needs to reset in
order to query the new VLAN capabilities.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
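For reference, a condensed sketch of the capability selection described
above, mirroring what dcf_add_del_vlan_v2() below does (not part of the
commit; it relies on the virtchnl definitions from common/iavf):
    #include <errno.h>
    #include <stdint.h>
    #include <virtchnl.h>
    /* Prefer the outer filtering caps when the PF reports them, fall
     * back to inner, and require 0x8100 ethertype filtering support. */
    static int
    pick_vlan_filter_caps(struct virtchnl_vlan_supported_caps *caps)
    {
        uint32_t filtering = caps->outer ? caps->outer : caps->inner;
        if (!(filtering & VIRTCHNL_VLAN_ETHERTYPE_8100))
            return -ENOTSUP;
        return 0;
    }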
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf.c | 27 ++++
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++---
4 files changed, 183 insertions(+), 17 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 004a6d3343..7c932a7c8a 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -73,6 +73,7 @@ New Features
* Added add extended stats.
* Added support queue information getting.
* Added implement power management.
+ * Added support DCF new VLAN capabilities.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..885d58c0f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
return 0;
}
+static int
+dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_vlan_caps vlan_v2_caps;
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS;
+ args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps;
+ args.rsp_buflen = sizeof(vlan_v2_caps);
+
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS");
+ return ret;
+ }
+
+ rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
+ return 0;
+}
+
int
ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
@@ -701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
+ if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) &&
+ dcf_get_vlan_offload_caps_v2(hw))
+ goto err_rss;
+
return 0;
err_rss:
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 44a61404c3..7f42ebabe9 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -129,6 +129,7 @@ struct ice_dcf_hw {
uint16_t nb_msix;
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
+ struct virtchnl_vlan_caps vlan_v2_caps;
/* Link status */
bool link_up;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 236c0395e0..8005eb2ab8 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1050,6 +1050,46 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_supported_caps *supported_caps =
+ &hw->vlan_v2_caps.filtering.filtering_support;
+ struct virtchnl_vlan *vlan_setting;
+ struct virtchnl_vlan_filter_list_v2 vlan_filter;
+ struct dcf_virtchnl_cmd args;
+ uint32_t filtering_caps;
+ int err;
+
+ if (supported_caps->outer) {
+ filtering_caps = supported_caps->outer;
+ vlan_setting = &vlan_filter.filters[0].outer;
+ } else {
+ filtering_caps = supported_caps->inner;
+ vlan_setting = &vlan_filter.filters[0].inner;
+ }
+
+ if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
+ return -ENOTSUP;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.vport_id = hw->vsi_res->vsi_id;
+ vlan_filter.num_elements = 1;
+ vlan_setting->tpid = RTE_ETHER_TYPE_VLAN;
+ vlan_setting->tci = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2;
+ args.req_msg = (uint8_t *)&vlan_filter;
+ args.req_msglen = sizeof(vlan_filter);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2");
+
+ return err;
+}
+
static int
dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
{
@@ -1076,6 +1116,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
return err;
}
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+ err = dcf_add_del_vlan_v2(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+ }
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static void
+dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable)
+{
+ struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i, j;
+ uint64_t ids;
+
+ for (i = 0; i < RTE_DIM(vfc->ids); i++) {
+ if (vfc->ids[i] == 0)
+ continue;
+
+ ids = vfc->ids[i];
+ for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) {
+ if (ids & 1)
+ dcf_add_del_vlan_v2(hw, 64 * i + j, enable);
+ }
+ }
+}
+
+static int
+dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable)
+{
+ struct virtchnl_vlan_supported_caps *stripping_caps =
+ &hw->vlan_v2_caps.offloads.stripping_support;
+ struct virtchnl_vlan_setting vlan_strip;
+ struct dcf_virtchnl_cmd args;
+ uint32_t *ethertype;
+ int ret;
+
+ if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.outer_ethertype_setting;
+ else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.inner_ethertype_setting;
+ else
+ return -ENOTSUP;
+
+ memset(&vlan_strip, 0, sizeof(vlan_strip));
+ vlan_strip.vport_id = hw->vsi_res->vsi_id;
+ *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 :
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2;
+ args.req_msg = (uint8_t *)&vlan_strip;
+ args.req_msglen = sizeof(vlan_strip);
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ enable ? "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" :
+ "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ bool enable;
+ int err;
+
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
+
+ dcf_iterate_vlan_filters_v2(dev, enable);
+ }
+
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
+
+ err = dcf_config_vlan_strip_v2(hw, enable);
+ /* If not support, the stripping is already disabled by PF */
+ if (err == -ENOTSUP && !enable)
+ err = 0;
+ if (err)
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int
dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
{
@@ -1108,30 +1258,17 @@ dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
return ret;
}
-static int
-dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct ice_dcf_adapter *adapter = dev->data->dev_private;
- struct ice_dcf_hw *hw = &adapter->real_hw;
- int err;
-
- if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
- return -ENOTSUP;
-
- err = dcf_add_del_vlan(hw, vlan_id, on);
- if (err)
- return -EIO;
- return 0;
-}
-
static int
dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
int err;
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
+ return dcf_dev_vlan_offload_set_v2(dev, mask);
+
if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
return -ENOTSUP;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* RE: [PATCH v7 00/12] complete common VF features for DCF
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
@ 2022-04-29 2:32 ` Zhang, Qi Z
2022-04-29 9:19 ` [PATCH v7 01/12] net/ice: support for RSS RETA configure in DCF mode Kevin Liu
` (11 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Zhang, Qi Z @ 2022-04-29 2:32 UTC (permalink / raw)
To: Liu, KevinX, dev; +Cc: Yang, Qiming, Yang, SteveX
> -----Original Message-----
> From: Liu, KevinX <kevinx.liu@intel.com>
> Sent: Friday, April 29, 2022 5:20 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Yang, SteveX <stevex.yang@intel.com>; Liu, KevinX
> <kevinx.liu@intel.com>
> Subject: [PATCH v7 00/12] complete common VF features for DCF
>
> The DCF PMD support the below dev ops,
> dev_supported_ptypes_get
> dev_link_update
> xstats_get
> xstats_get_names
> xstats_reset
> promiscuous_enable
> promiscuous_disable
> allmulticast_enable
> allmulticast_disable
> mac_addr_add
> mac_addr_remove
> set_mc_addr_list
> vlan_filter_set
> vlan_offload_set
> mac_addr_set
> reta_update
> reta_query
> rss_hash_update
> rss_hash_conf_get
> rxq_info_get
> txq_info_get
> mtu_set
> tx_done_cleanup
> get_monitor_addr
>
> v7:
> * Update release note and patch title.
>
> v6:
> * add patch:
> 1.net/ice: support DCF new VLAN capabilities
> * remove patch:
> 1.doc: update for ice DCF datapath configuration
> * Split doc into specific patch.
>
> v5:
> * remove patch:
> 1.complete common VF features for DCF
> 2.net/ice: enable CVL DCF device reset API
> 3.net/ice: support IPv6 NVGRE tunnel
> 4.net/ice: support new pattern of IPv4
> 5.net/ice: treat unknown package as OS default package
> 6.net/ice: handle virtchnl event message without interrupt
> 7.net/ice: add DCF request queues function
> 8.net/ice: negotiate large VF and request more queues
> 9.net/ice: enable multiple queues configurations for large VF
> 10.net/ice: enable IRQ mapping configuration for large VF
> 11.net/ice: add enable/disable queues for DCF large VF
>
> v4:
> * remove patch:
> 1.testpmd: force flow flush
> 2.net/ice: fix DCF ACL flow engine
> 3.net/ice: fix DCF reset
> * add patch:
> 1.net/ice: add extended stats
> 2.net/ice: support queue information getting
> 3.net/ice: implement power management
> 4.doc: update for ice DCF datapath configuration
>
> v3:
> * remove patch:
> 1.net/ice/base: add VXLAN support for switch filter
> 2.net/ice: add VXLAN support for switch filter
> 3.common/iavf: support flushing rules and reporting DCF id
> 4.net/ice/base: fix ethertype filter input set
> 5.net/ice/base: support IPv6 GRE UDP pattern
> 6.net/ice/base: support new patterns of TCP and UDP
> 7.net/ice: support new patterns of TCP and UDP
> 8.net/ice/base: support IPv4 GRE tunnel
> 9.net/ice: support IPv4 GRE raw pattern type
> 10.net/ice/base: update Profile ID table for VXLAN
> 11.net/ice/base: update Protocol ID table to match DVM DDP
>
> v2:
> * remove patch:
> 1.net/iavf: support checking if device is an MDCF instance
> 2.net/ice: support MDCF(multi-DCF) instance
> 3.net/ice/base: support custom DDP buildin recipe
> 4.net/ice: support buildin recipe configuration
> 5.net/ice/base: support custom ddp package version
> 6.net/ice: disable ACL function for MDCF instance
>
> Alvin Zhang (3):
> net/ice: support dcf promisc configuration
> net/ice: support dcf VLAN filter and offload configuration
> net/ice: support DCF new VLAN capabilities
>
> Jie Wang (2):
> net/ice: support for MTU configure in DCF mode
> net/ice: add ops dev-supported-ptypes-get to dcf
>
> Kevin Liu (4):
> net/ice: support dcf MAC configuration
> net/ice: add extended stats
> net/ice: support queue information getting
> net/ice: add implement power management
>
> Robin Zhang (1):
> net/ice: support cleanup Tx buffers in DCF mode
>
> Steve Yang (2):
> net/ice: support for RSS RETA configure in DCF mode
> net/ice: support for RSS HASH configure in DCF mode
>
> doc/guides/nics/features/ice_dcf.ini | 10 +
> doc/guides/rel_notes/release_22_07.rst | 8 +
> drivers/net/ice/ice_dcf.c | 40 +-
> drivers/net/ice/ice_dcf.h | 29 +-
> drivers/net/ice/ice_dcf_ethdev.c | 820 ++++++++++++++++++++++++-
> drivers/net/ice/ice_dcf_ethdev.h | 10 +
> 6 files changed, 879 insertions(+), 38 deletions(-)
>
> --
> 2.33.1
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Applied to dpdk-next-net-intel.
Thanks
Qi
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 00/12] complete common VF features for DCF
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
` (11 preceding siblings ...)
2022-04-27 18:13 ` [PATCH v6 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 2:32 ` Zhang, Qi Z
` (12 more replies)
12 siblings, 13 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
The DCF PMD supports the below dev ops:
dev_supported_ptypes_get
dev_link_update
xstats_get
xstats_get_names
xstats_reset
promiscuous_enable
promiscuous_disable
allmulticast_enable
allmulticast_disable
mac_addr_add
mac_addr_remove
set_mc_addr_list
vlan_filter_set
vlan_offload_set
mac_addr_set
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
rxq_info_get
txq_info_get
mtu_set
tx_done_cleanup
get_monitor_addr
v7:
* Update release note and patch title.
v6:
* add patch:
1.net/ice: support DCF new VLAN capabilities
* remove patch:
1.doc: update for ice DCF datapath configuration
* Split doc into specific patch.
v5:
* remove patch:
1.complete common VF features for DCF
2.net/ice: enable CVL DCF device reset API
3.net/ice: support IPv6 NVGRE tunnel
4.net/ice: support new pattern of IPv4
5.net/ice: treat unknown package as OS default package
6.net/ice: handle virtchnl event message without interrupt
7.net/ice: add DCF request queues function
8.net/ice: negotiate large VF and request more queues
9.net/ice: enable multiple queues configurations for large VF
10.net/ice: enable IRQ mapping configuration for large VF
11.net/ice: add enable/disable queues for DCF large VF
v4:
* remove patch:
1.testpmd: force flow flush
2.net/ice: fix DCF ACL flow engine
3.net/ice: fix DCF reset
* add patch:
1.net/ice: add extended stats
2.net/ice: support queue information getting
3.net/ice: implement power management
4.doc: update for ice DCF datapath configuration
v3:
* remove patch:
1.net/ice/base: add VXLAN support for switch filter
2.net/ice: add VXLAN support for switch filter
3.common/iavf: support flushing rules and reporting DCF id
4.net/ice/base: fix ethertype filter input set
5.net/ice/base: support IPv6 GRE UDP pattern
6.net/ice/base: support new patterns of TCP and UDP
7.net/ice: support new patterns of TCP and UDP
8.net/ice/base: support IPv4 GRE tunnel
9.net/ice: support IPv4 GRE raw pattern type
10.net/ice/base: update Profile ID table for VXLAN
11.net/ice/base: update Protocol ID table to match DVM DDP
v2:
* remove patch:
1.net/iavf: support checking if device is an MDCF instance
2.net/ice: support MDCF(multi-DCF) instance
3.net/ice/base: support custom DDP buildin recipe
4.net/ice: support buildin recipe configuration
5.net/ice/base: support custom ddp package version
6.net/ice: disable ACL function for MDCF instance
Alvin Zhang (3):
net/ice: support dcf promisc configuration
net/ice: support dcf VLAN filter and offload configuration
net/ice: support DCF new VLAN capabilities
Jie Wang (2):
net/ice: support for MTU configure in DCF mode
net/ice: add ops dev-supported-ptypes-get to dcf
Kevin Liu (4):
net/ice: support dcf MAC configuration
net/ice: add extended stats
net/ice: support queue information getting
net/ice: add implement power management
Robin Zhang (1):
net/ice: support cleanup Tx buffers in DCF mode
Steve Yang (2):
net/ice: support for RSS RETA configure in DCF mode
net/ice: support for RSS HASH configure in DCF mode
doc/guides/nics/features/ice_dcf.ini | 10 +
doc/guides/rel_notes/release_22_07.rst | 8 +
drivers/net/ice/ice_dcf.c | 40 +-
drivers/net/ice/ice_dcf.h | 29 +-
drivers/net/ice/ice_dcf_ethdev.c | 820 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 10 +
6 files changed, 879 insertions(+), 38 deletions(-)
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 01/12] net/ice: support for RSS RETA configure in DCF mode
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
2022-04-29 2:32 ` Zhang, Qi Z
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 02/12] net/ice: support for RSS HASH " Kevin Liu
` (10 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS RETA should be updated and queried by the application.
Add the related ops ('.reta_update', '.reta_query') for DCF.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 3 +
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++
5 files changed, 83 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 54073f0b88..5221c99a9c 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -15,6 +15,7 @@ L3 checksum offload = P
L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
+RSS reta update = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 90123bb807..1f07d3e1b3 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -60,6 +60,9 @@ New Features
* Added Tx QoS queue rate limitation support.
* Added quanta size configuration support.
+* **Updated Intel ice driver.**
+
+ * Added support for RSS RETA configure in DCF mode.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f0c074b01..070d1b71ac 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
return err;
}
-static int
+int
ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_lut *rss_lut;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 6ec766ebda..b2c6aa2684 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59610e058f..1ac66ed990 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint8_t *lut;
+ uint16_t i, idx, shift;
+ int ret;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+			    "(%d) doesn't match what the hardware can "
+			    "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ /* store the old lut table temporarily */
+ rte_memcpy(lut, hw->rss_lut, reta_size);
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+	/* send virtchnl op to configure RSS */
+ ret = ice_dcf_configure_rss_lut(hw);
+ if (ret) /* revert back */
+ rte_memcpy(hw->rss_lut, lut, reta_size);
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint16_t i, idx, shift;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ if (reta_size != hw->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+			    "(%d) doesn't match what the hardware can "
+			    "support (%d)", reta_size, hw->vf_res->rss_lut_size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = hw->rss_lut[i];
+ }
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
.tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
};
static int
--
2.33.1
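A minimal usage sketch (not part of the patch) of the new '.reta_update'
ops through the generic ethdev API; the two-queue spread and the local
array bound are assumptions:

    #include <string.h>
    #include <rte_ethdev.h>

    /* Spread the redirection table across queues 0 and 1. */
    static int
    dcf_reta_spread(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rss_reta_entry64 conf[512 / RTE_ETH_RETA_GROUP_SIZE];
        uint16_t i, idx, shift, reta_size;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;
        reta_size = dev_info.reta_size; /* must match hw rss_lut_size */

        memset(conf, 0, sizeof(conf));
        for (i = 0; i < reta_size; i++) {
            idx = i / RTE_ETH_RETA_GROUP_SIZE;
            shift = i % RTE_ETH_RETA_GROUP_SIZE;
            conf[idx].mask |= 1ULL << shift;
            conf[idx].reta[shift] = i % 2; /* queue 0 or queue 1 */
        }

        return rte_eth_dev_rss_reta_update(port_id, conf, reta_size);
    }

ethdev validates reta_size against the device, and the new ops then
pushes the updated LUT to the PF over virtchnl.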
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 02/12] net/ice: support for RSS HASH configure in DCF mode
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
2022-04-29 2:32 ` Zhang, Qi Z
2022-04-29 9:19 ` [PATCH v7 01/12] net/ice: support for RSS RETA configure in DCF mode Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 03/12] net/ice: support cleanup Tx buffers " Kevin Liu
` (9 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
From: Steve Yang <stevex.yang@intel.com>
RSS HASH should be updated and queried by the application.
Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF.
Because DCF doesn't support configuring the RSS hash functions, only the
hash key can be updated within the '.rss_hash_update' ops.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++
5 files changed, 55 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 5221c99a9c..d9c1b25407 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -16,6 +16,7 @@ L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
RSS reta update = Y
+RSS key update = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 1f07d3e1b3..866af8c0b3 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -63,6 +63,7 @@ New Features
* **Updated Intel ice driver.**
* Added support for RSS RETA configure in DCF mode.
+ * Added support for RSS HASH configure in DCF mode.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 070d1b71ac..89c0203ba3 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
hw->ets_config = NULL;
}
-static int
+int
ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
{
struct virtchnl_rss_key *rss_key;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index b2c6aa2684..f0b45af5ae 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw);
int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw);
int ice_dcf_init_rss(struct ice_dcf_hw *hw);
int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1ac66ed990..ccad7fc304 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* HENA setting, it is enabled by default, no change */
+ if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash key "
+			    "(%d) doesn't match what the hardware can "
+			    "support (%d)", rss_conf->rss_key_len,
+			    hw->vf_res->rss_key_size);
+ return -EINVAL;
+ }
+
+ rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ return ice_dcf_configure_rss_key(hw);
+}
+
+static int
+ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+ return -ENOTSUP;
+
+ /* Just set it to default value now. */
+ rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL;
+
+ if (!rss_conf->rss_key)
+ return 0;
+
+ rss_conf->rss_key_len = hw->vf_res->rss_key_size;
+ rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len);
+
+ return 0;
+}
+
#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
@@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tm_ops_get = ice_dcf_tm_ops_get,
.reta_update = ice_dcf_dev_rss_reta_update,
.reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
};
static int
--
2.33.1
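A usage sketch (not part of the patch); since the DCF ops only honours
hash-key updates, the example installs a new key and reads the
configuration back. The caller-supplied key length is assumed to equal
dev_info.hash_key_size:

    #include <string.h>
    #include <rte_ethdev.h>

    /* Install a new RSS key, then read the active configuration back. */
    static int
    dcf_rss_key_update(uint16_t port_id, uint8_t *key, uint8_t key_len)
    {
        struct rte_eth_rss_conf rss_conf;
        int ret;

        memset(&rss_conf, 0, sizeof(rss_conf));
        rss_conf.rss_key = key;         /* rss_hf changes are ignored */
        rss_conf.rss_key_len = key_len; /* must equal rss_key_size */

        ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
        if (ret != 0)
            return ret;

        /* the PMD reports rss_hf as ICE_RSS_OFFLOAD_ALL on read-back */
        return rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
    }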
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 03/12] net/ice: support cleanup Tx buffers in DCF mode
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (2 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 02/12] net/ice: support for RSS HASH " Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 04/12] net/ice: support for MTU configure " Kevin Liu
` (8 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Robin Zhang, Kevin Liu
From: Robin Zhang <robinx.zhang@intel.com>
Add support for the rte_eth_tx_done_cleanup ops in DCF mode.
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ccad7fc304..d8b5961514 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.reta_query = ice_dcf_dev_rss_reta_query,
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
};
static int
--
2.33.1
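A usage sketch (not part of the patch), assuming queue 0 on a started
port; the generic API dispatches to the ice_tx_done_cleanup handler
wired up above:

    #include <rte_ethdev.h>

    /* Reclaim completed Tx mbufs; free_cnt == 0 frees as many as possible. */
    static int
    dcf_reclaim_txq(uint16_t port_id, uint16_t queue_id)
    {
        int nb_freed = rte_eth_tx_done_cleanup(port_id, queue_id, 0);

        return nb_freed < 0 ? nb_freed : 0; /* negative errno on failure */
    }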
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 04/12] net/ice: support for MTU configure in DCF mode
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (3 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 03/12] net/ice: support cleanup Tx buffers " Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 05/12] net/ice: add ops dev-supported-ptypes-get to dcf Kevin Liu
` (7 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "mtu_set" to dcf, and it can configure the port mtu through
cmdline.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++
4 files changed, 22 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index d9c1b25407..be34ab4692 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -17,6 +17,7 @@ Inner L3 checksum = P
Inner L4 checksum = P
RSS reta update = Y
RSS key update = Y
+MTU update = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 866af8c0b3..3c8412c82e 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -64,6 +64,7 @@ New Features
* Added support for RSS RETA configure in DCF mode.
* Added support for RSS HASH configure in DCF mode.
+ * Added support for MTU configure in DCF mode.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d8b5961514..06d752fd61 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &new_link);
}
+static int
+ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* mtu setting is forbidden if port is start */
+	/* MTU setting is forbidden if the port is started */
+ PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
+ dev->data->port_id);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
bool
ice_dcf_adminq_need_retry(struct ice_adapter *ad)
{
@@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.rss_hash_update = ice_dcf_dev_rss_hash_update,
.rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
.tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 11a1305038..f2faf26f58 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -15,6 +15,12 @@
#define ICE_DCF_MAX_RINGS 1
+#define ICE_DCF_FRAME_SIZE_MAX 9728
+#define ICE_DCF_VLAN_TAG_SIZE 4
+#define ICE_DCF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2)
+#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD)
+
struct ice_dcf_queue {
uint64_t dummy;
};
--
2.33.1
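A usage sketch (not part of the patch); because the new ops rejects MTU
changes on a started port with -EBUSY, the port is stopped around the
call. Restarting afterwards is an assumption about the application's
flow:

    #include <rte_ethdev.h>

    /* Resize the port MTU, honouring the stopped-port requirement. */
    static int
    dcf_set_mtu(uint16_t port_id, uint16_t mtu)
    {
        int ret;

        ret = rte_eth_dev_stop(port_id);
        if (ret != 0)
            return ret;

        ret = rte_eth_dev_set_mtu(port_id, mtu); /* -EBUSY if started */
        if (ret != 0)
            return ret;

        return rte_eth_dev_start(port_id);
    }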
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 05/12] net/ice: add ops dev-supported-ptypes-get to dcf
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (4 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 04/12] net/ice: support for MTU configure " Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 06/12] net/ice: support dcf promisc configuration Kevin Liu
` (6 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Jie Wang, Kevin Liu
From: Jie Wang <jie1x.wang@intel.com>
add API "dev_supported_ptypes_get" to dcf, that dcf pmd can get
ptypes through the new API.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 80 +++++++++++++++++++-------------
1 file changed, 49 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 06d752fd61..6a577a6582 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+static const uint32_t *
+ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+ return ptypes;
+}
+
static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
- .dev_start = ice_dcf_dev_start,
- .dev_stop = ice_dcf_dev_stop,
- .dev_close = ice_dcf_dev_close,
- .dev_reset = ice_dcf_dev_reset,
- .dev_configure = ice_dcf_dev_configure,
- .dev_infos_get = ice_dcf_dev_info_get,
- .rx_queue_setup = ice_rx_queue_setup,
- .tx_queue_setup = ice_tx_queue_setup,
- .rx_queue_release = ice_dev_rx_queue_release,
- .tx_queue_release = ice_dev_tx_queue_release,
- .rx_queue_start = ice_dcf_rx_queue_start,
- .tx_queue_start = ice_dcf_tx_queue_start,
- .rx_queue_stop = ice_dcf_rx_queue_stop,
- .tx_queue_stop = ice_dcf_tx_queue_stop,
- .link_update = ice_dcf_link_update,
- .stats_get = ice_dcf_stats_get,
- .stats_reset = ice_dcf_stats_reset,
- .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
- .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
- .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
- .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
- .flow_ops_get = ice_dcf_dev_flow_ops_get,
- .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
- .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
- .tm_ops_get = ice_dcf_tm_ops_get,
- .reta_update = ice_dcf_dev_rss_reta_update,
- .reta_query = ice_dcf_dev_rss_reta_query,
- .rss_hash_update = ice_dcf_dev_rss_hash_update,
- .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
- .tx_done_cleanup = ice_tx_done_cleanup,
- .mtu_set = ice_dcf_dev_mtu_set,
+ .dev_start = ice_dcf_dev_start,
+ .dev_stop = ice_dcf_dev_stop,
+ .dev_close = ice_dcf_dev_close,
+ .dev_reset = ice_dcf_dev_reset,
+ .dev_configure = ice_dcf_dev_configure,
+ .dev_infos_get = ice_dcf_dev_info_get,
+ .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .rx_queue_release = ice_dev_rx_queue_release,
+ .tx_queue_release = ice_dev_tx_queue_release,
+ .rx_queue_start = ice_dcf_rx_queue_start,
+ .tx_queue_start = ice_dcf_tx_queue_start,
+ .rx_queue_stop = ice_dcf_rx_queue_stop,
+ .tx_queue_stop = ice_dcf_tx_queue_stop,
+ .link_update = ice_dcf_link_update,
+ .stats_get = ice_dcf_stats_get,
+ .stats_reset = ice_dcf_stats_reset,
+ .promiscuous_enable = ice_dcf_dev_promiscuous_enable,
+ .promiscuous_disable = ice_dcf_dev_promiscuous_disable,
+ .allmulticast_enable = ice_dcf_dev_allmulticast_enable,
+ .allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .flow_ops_get = ice_dcf_dev_flow_ops_get,
+ .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
+ .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+ .tm_ops_get = ice_dcf_tm_ops_get,
+ .reta_update = ice_dcf_dev_rss_reta_update,
+ .reta_query = ice_dcf_dev_rss_reta_query,
+ .rss_hash_update = ice_dcf_dev_rss_hash_update,
+ .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get,
+ .tx_done_cleanup = ice_tx_done_cleanup,
+ .mtu_set = ice_dcf_dev_mtu_set,
};
static int
--
2.33.1
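A usage sketch (not part of the patch) that lists the L4 ptypes the
table above advertises; the 16-entry buffer and stdout output are
assumptions:

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf_ptype.h>

    /* Print the supported L4 packet types of a port. */
    static void
    dcf_dump_l4_ptypes(uint16_t port_id)
    {
        uint32_t ptypes[16];
        char name[64];
        int i, num;

        num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
                                               ptypes, RTE_DIM(ptypes));
        for (i = 0; i < num && i < (int)RTE_DIM(ptypes); i++) {
            rte_get_ptype_name(ptypes[i], name, sizeof(name));
            printf("port %u ptype: %s\n", port_id, name);
        }
    }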
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 06/12] net/ice: support dcf promisc configuration
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (5 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 05/12] net/ice: add ops dev-supported-ptypes-get to dcf Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 07/12] net/ice: support dcf MAC configuration Kevin Liu
` (5 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Support configuration of unicast and multicast promiscuous mode on DCF.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 2 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++--
drivers/net/ice/ice_dcf_ethdev.h | 3 +
4 files changed, 79 insertions(+), 4 deletions(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index be34ab4692..fe3ada8733 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -18,6 +18,8 @@ Inner L4 checksum = P
RSS reta update = Y
RSS key update = Y
MTU update = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 3c8412c82e..f23e5cafd1 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,7 @@ New Features
* Added support for RSS RETA configure in DCF mode.
* Added support for RSS HASH configure in DCF mode.
* Added support for MTU configure in DCF mode.
+ * Added support for promisc configuration in DCF mode.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6a577a6582..87d281ee93 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
}
static int
-ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+dcf_config_promisc(struct ice_dcf_adapter *adapter,
+ bool enable_unicast,
+ bool enable_multicast)
{
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_promisc_info promisc;
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ promisc.flags = 0;
+ promisc.vsi_id = hw->vsi_res->vsi_id;
+
+ if (enable_unicast)
+ promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+ if (enable_multicast)
+ promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+ args.req_msg = (uint8_t *)&promisc;
+ args.req_msglen = sizeof(promisc);
+
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE");
+ return err;
+ }
+
+ adapter->promisc_unicast_enabled = enable_unicast;
+ adapter->promisc_multicast_enabled = enable_multicast;
return 0;
}
+static int
+ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, true,
+ adapter->promisc_multicast_enabled);
+}
+
static int
ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_unicast_enabled) {
+ PMD_DRV_LOG(INFO, "promiscuous has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, false,
+ adapter->promisc_multicast_enabled);
}
static int
ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been enabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ true);
}
static int
ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
{
- return 0;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+
+ if (!adapter->promisc_multicast_enabled) {
+ PMD_DRV_LOG(INFO, "allmulticast has been disabled");
+ return 0;
+ }
+
+ return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled,
+ false);
}
static int
@@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
+ dcf_config_promisc(adapter, false, false);
return 0;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index f2faf26f58..22e450527b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -33,6 +33,9 @@ struct ice_dcf_adapter {
struct ice_adapter parent; /* Must be first */
struct ice_dcf_hw real_hw;
+ bool promisc_unicast_enabled;
+ bool promisc_multicast_enabled;
+
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 07/12] net/ice: support dcf MAC configuration
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (6 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 06/12] net/ice: support dcf promisc configuration Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
` (4 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu, Alvin Zhang
Below PMD ops are supported in this patch:
.mac_addr_add = dcf_dev_add_mac_addr
.mac_addr_remove = dcf_dev_del_mac_addr
.set_mc_addr_list = dcf_set_mc_addr_list
.mac_addr_set = dcf_dev_set_default_mac_addr
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf.c | 9 +-
drivers/net/ice/ice_dcf.h | 4 +-
drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++-
drivers/net/ice/ice_dcf_ethdev.h | 5 +-
6 files changed, 228 insertions(+), 10 deletions(-)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index fe3ada8733..c9bdbcd6cc 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -20,6 +20,7 @@ RSS key update = Y
MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index f23e5cafd1..97517d303e 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -66,6 +66,7 @@ New Features
* Added support for RSS HASH configure in DCF mode.
* Added support for MTU configure in DCF mode.
* Added support for promisc configuration in DCF mode.
+ * Added support for MAC configuration in DCF mode.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 89c0203ba3..55ae68c456 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
}
int
-ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr,
+ bool add, uint8_t type)
{
struct virtchnl_ether_addr_list *list;
- struct rte_ether_addr *addr;
struct dcf_virtchnl_cmd args;
int len, err = 0;
@@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
}
len = sizeof(struct virtchnl_ether_addr_list);
- addr = hw->eth_dev->data->mac_addrs;
len += sizeof(struct virtchnl_ether_addr);
list = rte_zmalloc(NULL, len, 0);
@@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
rte_memcpy(list->list[0].addr, addr->addr_bytes,
sizeof(addr->addr_bytes));
+
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
-
+ list->list[0].type = type;
list->vsi_id = hw->vsi_res->vsi_id;
list->num_elements = 1;
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index f0b45af5ae..78df202a77 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
int ice_dcf_query_stats(struct ice_dcf_hw *hw,
struct virtchnl_eth_stats *pstats);
-int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *addr, bool add,
+ uint8_t type);
int ice_dcf_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 87d281ee93..0d944f9fd2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -26,6 +26,12 @@
#include "ice_dcf_ethdev.h"
#include "ice_rxtx.h"
+#define DCF_NUM_MACADDR_MAX 64
+
+static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add);
+
static int
ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
struct rte_eth_udp_tunnel *udp_tunnel);
@@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- ret = ice_dcf_add_del_all_mac_addr(hw, true);
+ ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs,
+ true, VIRTCHNL_ETHER_ADDR_PRIMARY);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to add mac addr");
return ret;
}
+ if (dcf_ad->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, true);
+ if (ret)
+ return ret;
+ }
+
+
dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
@@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
rte_intr_efd_disable(intr_handle);
rte_intr_vec_list_free(intr_handle);
- ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
+ ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw,
+ dcf_ad->real_hw.eth_dev->data->mac_addrs,
+ false, VIRTCHNL_ETHER_ADDR_PRIMARY);
+
+ if (dcf_ad->mc_addrs_num)
+ /* flush previous addresses */
+ (void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw,
+ dcf_ad->mc_addrs,
+ dcf_ad->mc_addrs_num, false);
+
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
hw->tm_conf.committed = false;
@@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- dev_info->max_mac_addrs = 1;
+ dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX;
dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
@@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev)
false);
}
+static int
+dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ int err;
+
+ if (rte_is_zero_ether_addr(addr)) {
+ PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+ return -EINVAL;
+ }
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to add MAC address");
+ return err;
+ }
+
+ return 0;
+}
+
+static void
+dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct rte_ether_addr *addr = &dev->data->mac_addrs[index];
+ int err;
+
+ err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false,
+ VIRTCHNL_ETHER_ADDR_EXTRA);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to remove MAC address");
+}
+
+static int
+dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num, bool add)
+{
+ struct virtchnl_ether_addr_list *list;
+ struct dcf_virtchnl_cmd args;
+ uint32_t i;
+ int len, err = 0;
+
+ len = sizeof(struct virtchnl_ether_addr_list);
+ len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
+
+ list = rte_zmalloc(NULL, len, 0);
+ if (!list) {
+ PMD_DRV_LOG(ERR, "fail to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
+ sizeof(list->list[i].addr));
+ list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ list->vsi_id = hw->vsi_res->vsi_id;
+ list->num_elements = mc_addrs_num;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+ VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.req_msg = (uint8_t *)list;
+ args.req_msglen = len;
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" :
+ "OP_DEL_ETHER_ADDRESS");
+ rte_free(list);
+ return err;
+}
+
+static int
+dcf_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addrs,
+ uint32_t mc_addrs_num)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i;
+ int ret;
+
+
+ if (mc_addrs_num > DCF_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR,
+ "can't add more than a limited number (%u) of addresses.",
+ (uint32_t)DCF_NUM_MACADDR_MAX);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mc_addrs_num; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addrs[i])) {
+ const uint8_t *mac = mc_addrs[i].addr_bytes;
+
+ PMD_DRV_LOG(ERR,
+ "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x",
+ mac[0], mac[1], mac[2], mac[3], mac[4],
+ mac[5]);
+ return -EINVAL;
+ }
+ }
+
+ if (adapter->mc_addrs_num) {
+ /* flush previous addresses */
+ ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num, false);
+ if (ret)
+ return ret;
+ }
+ if (!mc_addrs_num) {
+ adapter->mc_addrs_num = 0;
+ return 0;
+ }
+
+ /* add new ones */
+ ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true);
+ if (ret) {
+ /* if adding mac address list fails, should add the
+ * previous addresses back.
+ */
+ if (adapter->mc_addrs_num)
+ (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs,
+ adapter->mc_addrs_num,
+ true);
+ return ret;
+ }
+ adapter->mc_addrs_num = mc_addrs_num;
+ memcpy(adapter->mc_addrs,
+ mc_addrs, mc_addrs_num * sizeof(*mc_addrs));
+
+ return 0;
+}
+
+static int
+dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_ether_addr *old_addr;
+ int ret;
+
+ old_addr = hw->eth_dev->data->mac_addrs;
+ if (rte_is_same_ether_addr(old_addr, mac_addr))
+ return 0;
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ old_addr->addr_bytes[0],
+ old_addr->addr_bytes[1],
+ old_addr->addr_bytes[2],
+ old_addr->addr_bytes[3],
+ old_addr->addr_bytes[4],
+ old_addr->addr_bytes[5]);
+
+ ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true,
+ VIRTCHNL_ETHER_ADDR_PRIMARY);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+ " %02X:%02X:%02X:%02X:%02X:%02X",
+ mac_addr->addr_bytes[0],
+ mac_addr->addr_bytes[1],
+ mac_addr->addr_bytes[2],
+ mac_addr->addr_bytes[3],
+ mac_addr->addr_bytes[4],
+ mac_addr->addr_bytes[5]);
+
+ if (ret)
+ return -EIO;
+
+ rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs);
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
.allmulticast_disable = ice_dcf_dev_allmulticast_disable,
+ .mac_addr_add = dcf_dev_add_mac_addr,
+ .mac_addr_remove = dcf_dev_del_mac_addr,
+ .set_mc_addr_list = dcf_set_mc_addr_list,
+ .mac_addr_set = dcf_dev_set_default_mac_addr,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 22e450527b..27f6402786 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -14,7 +14,7 @@
#include "ice_dcf.h"
#define ICE_DCF_MAX_RINGS 1
-
+#define DCF_NUM_MACADDR_MAX 64
#define ICE_DCF_FRAME_SIZE_MAX 9728
#define ICE_DCF_VLAN_TAG_SIZE 4
#define ICE_DCF_ETH_OVERHEAD \
@@ -35,7 +35,8 @@ struct ice_dcf_adapter {
bool promisc_unicast_enabled;
bool promisc_multicast_enabled;
-
+ uint32_t mc_addrs_num;
+ struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX];
int num_reprs;
struct ice_dcf_repr_info *repr_infos;
};
--
2.33.1
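A usage sketch (not part of the patch) for the new MAC ops; the locally
administered unicast address and the one-entry multicast list are
placeholders:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Add an extra unicast MAC and install a multicast list. */
    static int
    dcf_mac_setup(uint16_t port_id)
    {
        struct rte_ether_addr uc = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
        struct rte_ether_addr mc[] = {
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } } };
        int ret;

        /* installed as a VIRTCHNL_ETHER_ADDR_EXTRA filter by the PMD */
        ret = rte_eth_dev_mac_addr_add(port_id, &uc, 0);
        if (ret != 0)
            return ret;

        /* up to DCF_NUM_MACADDR_MAX (64) entries are accepted */
        return rte_eth_dev_set_mc_addr_list(port_id, mc, RTE_DIM(mc));
    }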
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 08/12] net/ice: support dcf VLAN filter and offload configuration
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (7 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 07/12] net/ice: support dcf MAC configuration Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 09/12] net/ice: add extended stats Kevin Liu
` (3 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
Below PMD ops are supported in this patch:
.vlan_filter_set = dcf_dev_vlan_filter_set
.vlan_offload_set = dcf_dev_vlan_offload_set
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 2 +
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++
3 files changed, 104 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index c9bdbcd6cc..01e7527915 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -21,6 +21,8 @@ MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+VLAN filter = Y
+VLAN offload = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 97517d303e..e66f84db9c 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -67,6 +67,7 @@ New Features
* Added support for MTU configure in DCF mode.
* Added support for promisc configuration in DCF mode.
* Added support for MAC configuration in DCF mode.
+ * Added support for VLAN filter and offload configuration in DCF mode.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0d944f9fd2..e58cdf47d2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_filter_list *vlan_list;
+ uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+ sizeof(uint16_t)];
+ struct dcf_virtchnl_cmd args;
+ int err;
+
+ vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+ vlan_list->vsi_id = hw->vsi_res->vsi_id;
+ vlan_list->num_elements = 1;
+ vlan_list->vlan_id[0] = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+ args.req_msg = cmd_buffer;
+ args.req_msglen = sizeof(cmd_buffer);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
+
+ return err;
+}
+
+static int
+dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_ENABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
+{
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of OP_DISABLE_VLAN_STRIPPING");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static int
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ int err;
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ /* Vlan stripping setting */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ /* Enable or disable VLAN stripping */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ err = dcf_enable_vlan_strip(hw);
+ else
+ err = dcf_disable_vlan_strip(hw);
+
+ if (err)
+ return -EIO;
+ }
+ return 0;
+}
+
static int
ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev,
const struct rte_flow_ops **ops)
@@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.mac_addr_remove = dcf_dev_del_mac_addr,
.set_mc_addr_list = dcf_set_mc_addr_list,
.mac_addr_set = dcf_dev_set_default_mac_addr,
+ .vlan_filter_set = dcf_dev_vlan_filter_set,
+ .vlan_offload_set = dcf_dev_vlan_offload_set,
.flow_ops_get = ice_dcf_dev_flow_ops_get,
.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
--
2.33.1
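A usage sketch (not part of the patch), assuming the port was configured
with RTE_ETH_RX_OFFLOAD_VLAN_STRIP available; VLAN id 100 is a
placeholder:

    #include <rte_ethdev.h>

    /* Admit VLAN 100 and turn on Rx VLAN stripping. */
    static int
    dcf_vlan_setup(uint16_t port_id)
    {
        int mask, ret;

        /* dispatched to the new '.vlan_filter_set' ops */
        ret = rte_eth_dev_vlan_filter(port_id, 100, 1);
        if (ret != 0)
            return ret;

        mask = rte_eth_dev_get_vlan_offload(port_id);
        if (mask < 0)
            return mask;

        /* only the strip bit is acted on by '.vlan_offload_set' here */
        return rte_eth_dev_set_vlan_offload(port_id,
                                            mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
    }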
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 09/12] net/ice: add extended stats
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (8 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 10/12] net/ice: support queue information getting Kevin Liu
` (2 subsequent siblings)
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add an implementation of the xstats functions to the DCF PMD.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 1 +
drivers/net/ice/ice_dcf.h | 22 ++++++++
drivers/net/ice/ice_dcf_ethdev.c | 75 ++++++++++++++++++++++++++++
3 files changed, 98 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 01e7527915..54ea7f150c 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -23,6 +23,7 @@ Allmulticast mode = Y
Unicast MAC filter = Y
VLAN filter = Y
VLAN offload = Y
+Extended stats = Y
Basic stats = Y
Linux = Y
x86-32 = Y
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 78df202a77..44a61404c3 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -15,6 +15,12 @@
#include "base/ice_type.h"
#include "ice_logs.h"
+/* ICE_DCF_DEV_PRIVATE_TO */
+#define ICE_DCF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+ ((struct ice_dcf_adapter *)adapter)
+#define ICE_DCF_DEV_PRIVATE_TO_VF(adapter) \
+ (&((struct ice_dcf_adapter *)adapter)->vf)
+
struct dcf_virtchnl_cmd {
TAILQ_ENTRY(dcf_virtchnl_cmd) next;
@@ -74,6 +80,22 @@ struct ice_dcf_tm_conf {
bool committed;
};
+struct ice_dcf_eth_stats {
+ u64 rx_bytes; /* gorc */
+ u64 rx_unicast; /* uprc */
+ u64 rx_multicast; /* mprc */
+ u64 rx_broadcast; /* bprc */
+ u64 rx_discards; /* rdpc */
+ u64 rx_unknown_protocol; /* rupp */
+ u64 tx_bytes; /* gotc */
+ u64 tx_unicast; /* uptc */
+ u64 tx_multicast; /* mptc */
+ u64 tx_broadcast; /* bptc */
+ u64 tx_discards; /* tdpc */
+ u64 tx_errors; /* tepc */
+ u64 rx_no_desc; /* repc */
+ u64 rx_errors; /* repc */
+};
struct ice_dcf_hw {
struct iavf_hw avf;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e58cdf47d2..6503700e02 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,30 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
static int
ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev);
+struct rte_ice_dcf_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+static const struct rte_ice_dcf_xstats_name_off rte_ice_dcf_stats_strings[] = {
+ {"rx_bytes", offsetof(struct ice_dcf_eth_stats, rx_bytes)},
+ {"rx_unicast_packets", offsetof(struct ice_dcf_eth_stats, rx_unicast)},
+ {"rx_multicast_packets", offsetof(struct ice_dcf_eth_stats, rx_multicast)},
+ {"rx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, rx_broadcast)},
+ {"rx_dropped_packets", offsetof(struct ice_dcf_eth_stats, rx_discards)},
+ {"rx_unknown_protocol_packets", offsetof(struct ice_dcf_eth_stats,
+ rx_unknown_protocol)},
+ {"tx_bytes", offsetof(struct ice_dcf_eth_stats, tx_bytes)},
+ {"tx_unicast_packets", offsetof(struct ice_dcf_eth_stats, tx_unicast)},
+ {"tx_multicast_packets", offsetof(struct ice_dcf_eth_stats, tx_multicast)},
+ {"tx_broadcast_packets", offsetof(struct ice_dcf_eth_stats, tx_broadcast)},
+ {"tx_dropped_packets", offsetof(struct ice_dcf_eth_stats, tx_discards)},
+ {"tx_error_packets", offsetof(struct ice_dcf_eth_stats, tx_errors)},
+};
+
+#define ICE_DCF_NB_XSTATS (sizeof(rte_ice_dcf_stats_strings) / \
+ sizeof(rte_ice_dcf_stats_strings[0]))
+
static uint16_t
ice_dcf_recv_pkts(__rte_unused void *rx_queue,
__rte_unused struct rte_mbuf **bufs,
@@ -1358,6 +1382,54 @@ ice_dcf_stats_reset(struct rte_eth_dev *dev)
return 0;
}
+static int ice_dcf_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ unsigned int i;
+
+ if (xstats_names != NULL)
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ snprintf(xstats_names[i].name,
+ sizeof(xstats_names[i].name),
+ "%s", rte_ice_dcf_stats_strings[i].name);
+ }
+ return ICE_DCF_NB_XSTATS;
+}
+
+static int ice_dcf_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n)
+{
+ int ret;
+ unsigned int i;
+ struct ice_dcf_adapter *adapter =
+ ICE_DCF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ struct virtchnl_eth_stats *postats = &hw->eth_stats_offset;
+ struct virtchnl_eth_stats pnstats;
+
+ if (n < ICE_DCF_NB_XSTATS)
+ return ICE_DCF_NB_XSTATS;
+
+ ret = ice_dcf_query_stats(hw, &pnstats);
+ if (ret != 0)
+ return 0;
+
+ if (!xstats)
+ return 0;
+
+ ice_dcf_update_stats(postats, &pnstats);
+
+ /* loop over xstats array and values from pstats */
+ for (i = 0; i < ICE_DCF_NB_XSTATS; i++) {
+ xstats[i].id = i;
+ xstats[i].value = *(uint64_t *)(((char *)&pnstats) +
+ rte_ice_dcf_stats_strings[i].offset);
+ }
+
+ return ICE_DCF_NB_XSTATS;
+}
+
static void
ice_dcf_free_repr_info(struct ice_dcf_adapter *dcf_adapter)
{
@@ -1629,6 +1701,9 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
+ .xstats_get = ice_dcf_xstats_get,
+ .xstats_get_names = ice_dcf_xstats_get_names,
+ .xstats_reset = ice_dcf_stats_reset,
.promiscuous_enable = ice_dcf_dev_promiscuous_enable,
.promiscuous_disable = ice_dcf_dev_promiscuous_disable,
.allmulticast_enable = ice_dcf_dev_allmulticast_enable,
--
2.33.1
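A usage sketch (not part of the patch) that fetches and prints every
extended statistic exposed by the new xstats ops; error handling is
minimal:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    /* Dump all extended statistics of a port. */
    static void
    dcf_dump_xstats(uint16_t port_id)
    {
        struct rte_eth_xstat *xstats;
        struct rte_eth_xstat_name *names;
        int i, n;

        n = rte_eth_xstats_get(port_id, NULL, 0); /* count query */
        if (n <= 0)
            return;

        xstats = calloc(n, sizeof(*xstats));
        names = calloc(n, sizeof(*names));
        if (xstats != NULL && names != NULL &&
            rte_eth_xstats_get_names(port_id, names, n) == n &&
            rte_eth_xstats_get(port_id, xstats, n) == n) {
            for (i = 0; i < n; i++)
                printf("%s: %" PRIu64 "\n",
                       names[xstats[i].id].name, xstats[i].value);
        }

        free(xstats);
        free(names);
    }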
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 10/12] net/ice: support queue information getting
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (9 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 09/12] net/ice: add extended stats Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 11/12] net/ice: add implement power management Kevin Liu
2022-04-29 9:19 ` [PATCH v7 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Add the below ops:
rxq_info_get
txq_info_get
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6503700e02..9217392d04 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1698,6 +1698,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_start = ice_dcf_tx_queue_start,
.rx_queue_stop = ice_dcf_rx_queue_stop,
.tx_queue_stop = ice_dcf_tx_queue_stop,
+ .rxq_info_get = ice_rxq_info_get,
+ .txq_info_get = ice_txq_info_get,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
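A usage sketch (not part of the patch), assuming queue 0 was already set
up on the port:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Read back the configuration of Rx and Tx queue 0. */
    static void
    dcf_dump_queue_info(uint16_t port_id)
    {
        struct rte_eth_rxq_info rxq;
        struct rte_eth_txq_info txq;

        if (rte_eth_rx_queue_info_get(port_id, 0, &rxq) == 0)
            printf("rxq0: nb_desc=%u\n", rxq.nb_desc);

        if (rte_eth_tx_queue_info_get(port_id, 0, &txq) == 0)
            printf("txq0: nb_desc=%u\n", txq.nb_desc);
    }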
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 11/12] net/ice: add implement power management
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (10 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 10/12] net/ice: support queue information getting Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-04-29 9:19 ` [PATCH v7 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
12 siblings, 0 replies; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Kevin Liu
Implement support for the power management API by adding a
'get_monitor_addr' callback that returns the address of an Rx ring's
status bit.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
doc/guides/nics/features/ice_dcf.ini | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 54ea7f150c..3b11622d4c 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -25,6 +25,7 @@ VLAN filter = Y
VLAN offload = Y
Extended stats = Y
Basic stats = Y
+Power mgmt address monitor = Y
Linux = Y
x86-32 = Y
x86-64 = Y
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 9217392d04..236c0395e0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1700,6 +1700,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
.tx_queue_stop = ice_dcf_tx_queue_stop,
.rxq_info_get = ice_rxq_info_get,
.txq_info_get = ice_txq_info_get,
+ .get_monitor_addr = ice_get_monitor_addr,
.link_update = ice_dcf_link_update,
.stats_get = ice_dcf_stats_get,
.stats_reset = ice_dcf_stats_reset,
--
2.33.1
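A usage sketch (not part of the patch) pairing the new callback with the
PMD-managed power API; the queue id and the monitor policy are
assumptions, and rte_lcore_id() assumes the call is made from the lcore
that polls the queue:

    #include <rte_lcore.h>
    #include <rte_power_pmd_mgmt.h>

    /* Sleep on the Rx descriptor status bit that 'get_monitor_addr'
     * exposes whenever queue 0 goes idle.
     */
    static int
    dcf_enable_rx_monitor(uint16_t port_id)
    {
        return rte_power_ethdev_pmgmt_queue_enable(rte_lcore_id(),
                                                   port_id, 0,
                                                   RTE_POWER_MGMT_TYPE_MONITOR);
    }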
^ permalink raw reply [flat|nested] 170+ messages in thread
* [PATCH v7 12/12] net/ice: support DCF new VLAN capabilities
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
` (11 preceding siblings ...)
2022-04-29 9:19 ` [PATCH v7 11/12] net/ice: add implement power management Kevin Liu
@ 2022-04-29 9:19 ` Kevin Liu
2022-05-11 0:06 ` Zhang, Qi Z
12 siblings, 1 reply; 170+ messages in thread
From: Kevin Liu @ 2022-04-29 9:19 UTC (permalink / raw)
To: dev; +Cc: qiming.yang, qi.z.zhang, stevex.yang, Alvin Zhang, Kevin Liu
From: Alvin Zhang <alvinx.zhang@intel.com>
The new VLAN virtchnl opcodes introduce new capabilities like VLAN
filtering, stripping and insertion.
The DCF first needs to query the VLAN capabilities based on the current
device configuration.
DCF is able to configure the inner VLAN filter when port VLAN is enabled,
based on negotiation; and it is able to configure the outer VLAN (0x8100)
when port VLAN is disabled, to stay compatible with legacy mode.
When the port VLAN is updated by the DCF, the DCF needs to reset in order
to query the new VLAN capabilities.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
---
drivers/net/ice/ice_dcf.c | 27 +++++
drivers/net/ice/ice_dcf.h | 1 +
drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++++++++---
3 files changed, 182 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..885d58c0f4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
return 0;
}
+static int
+dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw)
+{
+ struct virtchnl_vlan_caps vlan_v2_caps;
+ struct dcf_virtchnl_cmd args;
+ int ret;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS;
+ args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps;
+ args.rsp_buflen = sizeof(vlan_v2_caps);
+
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS");
+ return ret;
+ }
+
+ rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
+ return 0;
+}
+
int
ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
@@ -701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
+ if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) &&
+ dcf_get_vlan_offload_caps_v2(hw))
+ goto err_rss;
+
return 0;
err_rss:
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 44a61404c3..7f42ebabe9 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -129,6 +129,7 @@ struct ice_dcf_hw {
uint16_t nb_msix;
uint16_t rxq_map[16];
struct virtchnl_eth_stats eth_stats_offset;
+ struct virtchnl_vlan_caps vlan_v2_caps;
/* Link status */
bool link_up;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 236c0395e0..8005eb2ab8 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1050,6 +1050,46 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
return 0;
}
+static int
+dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
+{
+ struct virtchnl_vlan_supported_caps *supported_caps =
+ &hw->vlan_v2_caps.filtering.filtering_support;
+ struct virtchnl_vlan *vlan_setting;
+ struct virtchnl_vlan_filter_list_v2 vlan_filter;
+ struct dcf_virtchnl_cmd args;
+ uint32_t filtering_caps;
+ int err;
+
+ if (supported_caps->outer) {
+ filtering_caps = supported_caps->outer;
+ vlan_setting = &vlan_filter.filters[0].outer;
+ } else {
+ filtering_caps = supported_caps->inner;
+ vlan_setting = &vlan_filter.filters[0].inner;
+ }
+
+ if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100))
+ return -ENOTSUP;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.vport_id = hw->vsi_res->vsi_id;
+ vlan_filter.num_elements = 1;
+ vlan_setting->tpid = RTE_ETHER_TYPE_VLAN;
+ vlan_setting->tci = vlanid;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2;
+ args.req_msg = (uint8_t *)&vlan_filter;
+ args.req_msglen = sizeof(vlan_filter);
+ err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2");
+
+ return err;
+}
+
static int
dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
{
@@ -1076,6 +1116,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add)
return err;
}
+static int
+dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ int err;
+
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+ err = dcf_add_del_vlan_v2(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+ }
+
+ if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+ return -ENOTSUP;
+
+ err = dcf_add_del_vlan(hw, vlan_id, on);
+ if (err)
+ return -EIO;
+ return 0;
+}
+
+static void
+dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable)
+{
+ struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ uint32_t i, j;
+ uint64_t ids;
+
+ for (i = 0; i < RTE_DIM(vfc->ids); i++) {
+ if (vfc->ids[i] == 0)
+ continue;
+
+ ids = vfc->ids[i];
+ for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) {
+ if (ids & 1)
+ dcf_add_del_vlan_v2(hw, 64 * i + j, enable);
+ }
+ }
+}
+
+static int
+dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable)
+{
+ struct virtchnl_vlan_supported_caps *stripping_caps =
+ &hw->vlan_v2_caps.offloads.stripping_support;
+ struct virtchnl_vlan_setting vlan_strip;
+ struct dcf_virtchnl_cmd args;
+ uint32_t *ethertype;
+ int ret;
+
+ if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.outer_ethertype_setting;
+ else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100) &&
+ (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE))
+ ethertype = &vlan_strip.inner_ethertype_setting;
+ else
+ return -ENOTSUP;
+
+ memset(&vlan_strip, 0, sizeof(vlan_strip));
+ vlan_strip.vport_id = hw->vsi_res->vsi_id;
+ *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100;
+
+ memset(&args, 0, sizeof(args));
+ args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 :
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2;
+ args.req_msg = (uint8_t *)&vlan_strip;
+ args.req_msglen = sizeof(vlan_strip);
+ ret = ice_dcf_execute_virtchnl_cmd(hw, &args);
+ if (ret)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ enable ? "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" :
+ "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2");
+
+ return ret;
+}
+
+static int
+dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ struct ice_dcf_adapter *adapter = dev->data->dev_private;
+ struct ice_dcf_hw *hw = &adapter->real_hw;
+ bool enable;
+ int err;
+
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
+
+ dcf_iterate_vlan_filters_v2(dev, enable);
+ }
+
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
+
+ err = dcf_config_vlan_strip_v2(hw, enable);
+ /* If not supported, the stripping is already disabled by PF */
+ if (err == -ENOTSUP && !enable)
+ err = 0;
+ if (err)
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int
dcf_enable_vlan_strip(struct ice_dcf_hw *hw)
{
@@ -1108,30 +1258,17 @@ dcf_disable_vlan_strip(struct ice_dcf_hw *hw)
return ret;
}
-static int
-dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct ice_dcf_adapter *adapter = dev->data->dev_private;
- struct ice_dcf_hw *hw = &adapter->real_hw;
- int err;
-
- if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
- return -ENOTSUP;
-
- err = dcf_add_del_vlan(hw, vlan_id, on);
- if (err)
- return -EIO;
- return 0;
-}
-
static int
dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
struct ice_dcf_adapter *adapter = dev->data->dev_private;
struct ice_dcf_hw *hw = &adapter->real_hw;
- struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
int err;
+ if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
+ return dcf_dev_vlan_offload_set_v2(dev, mask);
+
if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
return -ENOTSUP;
--
2.33.1
^ permalink raw reply [flat|nested] 170+ messages in thread
* RE: [PATCH v7 12/12] net/ice: support DCF new VLAN capabilities
2022-04-29 9:19 ` [PATCH v7 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
@ 2022-05-11 0:06 ` Zhang, Qi Z
0 siblings, 0 replies; 170+ messages in thread
From: Zhang, Qi Z @ 2022-05-11 0:06 UTC (permalink / raw)
To: Liu, KevinX, dev; +Cc: Yang, Qiming, Yang, SteveX, Alvin Zhang
> -----Original Message-----
> From: Liu, KevinX <kevinx.liu@intel.com>
> Sent: Friday, April 29, 2022 5:20 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Yang, SteveX <stevex.yang@intel.com>; Alvin Zhang
> <alvinx.zhang@intel.com>; Liu, KevinX <kevinx.liu@intel.com>
> Subject: [PATCH v7 12/12] net/ice: support DCF new VLAN capabilities
Refined the title to "complete VLAN offload capability for DCF" in dpdk-next-net-intel,
as it does not introduce any new VLAN offload capability.
^ permalink raw reply [flat|nested] 170+ messages in thread
end of thread, other threads:[~2022-05-11 0:06 UTC | newest]
Thread overview: 170+ messages
2022-04-07 10:56 [PATCH 00/39] support full function of DCF Kevin Liu
2022-04-07 10:56 ` [PATCH 01/39] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-07 10:56 ` [PATCH 02/39] net/ice: enable RSS HASH " Kevin Liu
2022-04-07 10:56 ` [PATCH 03/39] net/ice: cleanup Tx buffers Kevin Liu
2022-04-07 10:56 ` [PATCH 04/39] net/ice: add ops MTU-SET to dcf Kevin Liu
2022-04-07 10:56 ` [PATCH 05/39] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
2022-04-07 10:56 ` [PATCH 06/39] net/ice: support dcf promisc configuration Kevin Liu
2022-04-07 10:56 ` [PATCH 07/39] net/ice: support dcf MAC configuration Kevin Liu
2022-04-07 10:56 ` [PATCH 08/39] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
2022-04-07 10:56 ` [PATCH 09/39] net/ice: support DCF new VLAN capabilities Kevin Liu
2022-04-07 10:56 ` [PATCH 10/39] net/ice: enable CVL DCF device reset API Kevin Liu
2022-04-07 10:56 ` [PATCH 11/39] net/ice/base: add VXLAN support for switch filter Kevin Liu
2022-04-07 10:56 ` [PATCH 12/39] net/ice: " Kevin Liu
2022-04-07 10:56 ` [PATCH 13/39] common/iavf: support flushing rules and reporting DCF id Kevin Liu
2022-04-07 10:56 ` [PATCH 14/39] net/ice/base: fix ethertype filter input set Kevin Liu
2022-04-07 10:56 ` [PATCH 15/39] net/iavf: support checking if device is an MDCF instance Kevin Liu
2022-04-07 10:56 ` [PATCH 16/39] net/ice: support MDCF(multi-DCF) instance Kevin Liu
2022-04-07 10:56 ` [PATCH 17/39] net/ice/base: support custom DDP buildin recipe Kevin Liu
2022-04-07 10:56 ` [PATCH 18/39] net/ice: support buildin recipe configuration Kevin Liu
2022-04-07 10:56 ` [PATCH 19/39] net/ice/base: support IPv6 GRE UDP pattern Kevin Liu
2022-04-07 10:56 ` [PATCH 20/39] net/ice: support IPv6 NVGRE tunnel Kevin Liu
2022-04-07 10:56 ` [PATCH 21/39] net/ice: support new pattern of IPv4 Kevin Liu
2022-04-07 10:56 ` [PATCH 22/39] net/ice/base: support new patterns of TCP and UDP Kevin Liu
2022-04-07 10:56 ` [PATCH 23/39] net/ice: " Kevin Liu
2022-04-07 10:56 ` [PATCH 24/39] net/ice/base: support IPv4 GRE tunnel Kevin Liu
2022-04-07 10:56 ` [PATCH 25/39] net/ice: support IPv4 GRE raw pattern type Kevin Liu
2022-04-07 10:56 ` [PATCH 26/39] net/ice/base: support custom ddp package version Kevin Liu
2022-04-07 10:56 ` [PATCH 27/39] net/ice: disable ACL function for MDCF instance Kevin Liu
2022-04-07 10:56 ` [PATCH 28/39] net/ice: treat unknown package as OS default package Kevin Liu
2022-04-07 10:56 ` [PATCH 29/39] net/ice/base: update Profile ID table for VXLAN Kevin Liu
2022-04-07 10:56 ` [PATCH 30/39] net/ice/base: update Protocol ID table to match DVM DDP Kevin Liu
2022-04-07 10:56 ` [PATCH 31/39] net/ice: handle virtchnl event message without interrupt Kevin Liu
2022-04-07 10:56 ` [PATCH 32/39] net/ice: add DCF request queues function Kevin Liu
2022-04-07 10:57 ` [PATCH 33/39] net/ice: negotiate large VF and request more queues Kevin Liu
2022-04-07 10:57 ` [PATCH 34/39] net/ice: enable multiple queues configurations for large VF Kevin Liu
2022-04-07 10:57 ` [PATCH 35/39] net/ice: enable IRQ mapping configuration " Kevin Liu
2022-04-07 10:57 ` [PATCH 36/39] net/ice: add enable/disable queues for DCF " Kevin Liu
2022-04-07 10:57 ` [PATCH 37/39] net/ice: fix DCF ACL flow engine Kevin Liu
2022-04-07 10:57 ` [PATCH 38/39] testpmd: force flow flush Kevin Liu
2022-04-07 10:57 ` [PATCH 39/39] net/ice: fix DCF reset Kevin Liu
2022-04-13 16:08 ` [PATCH v2 00/33] support full function of DCF Kevin Liu
2022-04-13 16:09 ` [PATCH v2 01/33] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-13 16:09 ` [PATCH v2 02/33] net/ice: enable RSS HASH " Kevin Liu
2022-04-13 16:09 ` [PATCH v2 03/33] net/ice: cleanup Tx buffers Kevin Liu
2022-04-13 16:09 ` [PATCH v2 04/33] net/ice: add ops MTU-SET to dcf Kevin Liu
2022-04-13 16:09 ` [PATCH v2 05/33] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
2022-04-13 16:09 ` [PATCH v2 06/33] net/ice: support dcf promisc configuration Kevin Liu
2022-04-13 16:09 ` [PATCH v2 07/33] net/ice: support dcf MAC configuration Kevin Liu
2022-04-13 16:09 ` [PATCH v2 08/33] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
2022-04-13 16:09 ` [PATCH v2 09/33] net/ice: support DCF new VLAN capabilities Kevin Liu
2022-04-13 16:09 ` [PATCH v2 10/33] net/ice: enable CVL DCF device reset API Kevin Liu
2022-04-13 16:09 ` [PATCH v2 11/33] net/ice/base: add VXLAN support for switch filter Kevin Liu
2022-04-13 16:09 ` [PATCH v2 12/33] net/ice: " Kevin Liu
2022-04-13 16:09 ` [PATCH v2 13/33] common/iavf: support flushing rules and reporting DCF id Kevin Liu
2022-04-13 16:09 ` [PATCH v2 14/33] net/ice/base: fix ethertype filter input set Kevin Liu
2022-04-13 16:09 ` [PATCH v2 15/33] net/ice/base: support IPv6 GRE UDP pattern Kevin Liu
2022-04-13 16:09 ` [PATCH v2 16/33] net/ice: support IPv6 NVGRE tunnel Kevin Liu
2022-04-13 16:09 ` [PATCH v2 17/33] net/ice: support new pattern of IPv4 Kevin Liu
2022-04-13 16:09 ` [PATCH v2 18/33] net/ice/base: support new patterns of TCP and UDP Kevin Liu
2022-04-13 16:09 ` [PATCH v2 19/33] net/ice: " Kevin Liu
2022-04-13 16:09 ` [PATCH v2 20/33] net/ice/base: support IPv4 GRE tunnel Kevin Liu
2022-04-13 16:09 ` [PATCH v2 21/33] net/ice: support IPv4 GRE raw pattern type Kevin Liu
2022-04-13 16:09 ` [PATCH v2 22/33] net/ice: treat unknown package as OS default package Kevin Liu
2022-04-13 16:09 ` [PATCH v2 23/33] net/ice/base: update Profile ID table for VXLAN Kevin Liu
2022-04-13 16:09 ` [PATCH v2 24/33] net/ice/base: update Protocol ID table to match DVM DDP Kevin Liu
2022-04-13 16:09 ` [PATCH v2 25/33] net/ice: handle virtchnl event message without interrupt Kevin Liu
2022-04-13 16:09 ` [PATCH v2 26/33] net/ice: add DCF request queues function Kevin Liu
2022-04-13 16:09 ` [PATCH v2 27/33] net/ice: negotiate large VF and request more queues Kevin Liu
2022-04-13 16:09 ` [PATCH v2 28/33] net/ice: enable multiple queues configurations for large VF Kevin Liu
2022-04-13 16:09 ` [PATCH v2 29/33] net/ice: enable IRQ mapping configuration " Kevin Liu
2022-04-13 16:09 ` [PATCH v2 30/33] net/ice: add enable/disable queues for DCF " Kevin Liu
2022-04-13 16:09 ` [PATCH v2 31/33] net/ice: fix DCF ACL flow engine Kevin Liu
2022-04-13 16:09 ` [PATCH v2 32/33] testpmd: force flow flush Kevin Liu
2022-04-13 16:09 ` [PATCH v2 33/33] net/ice: fix DCF reset Kevin Liu
2022-04-13 17:10 ` [PATCH v3 00/22] support full function of DCF Kevin Liu
2022-04-13 17:10 ` [PATCH v3 01/22] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-13 17:10 ` [PATCH v3 02/22] net/ice: enable RSS HASH " Kevin Liu
2022-04-13 17:10 ` [PATCH v3 03/22] net/ice: cleanup Tx buffers Kevin Liu
2022-04-13 17:10 ` [PATCH v3 04/22] net/ice: add ops MTU-SET to dcf Kevin Liu
2022-04-13 17:10 ` [PATCH v3 05/22] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
2022-04-13 17:10 ` [PATCH v3 06/22] net/ice: support dcf promisc configuration Kevin Liu
2022-04-13 17:10 ` [PATCH v3 07/22] net/ice: support dcf MAC configuration Kevin Liu
2022-04-13 17:10 ` [PATCH v3 08/22] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
2022-04-13 17:10 ` [PATCH v3 09/22] net/ice: support DCF new VLAN capabilities Kevin Liu
2022-04-13 17:10 ` [PATCH v3 10/22] net/ice: enable CVL DCF device reset API Kevin Liu
2022-04-13 17:10 ` [PATCH v3 11/22] net/ice: support IPv6 NVGRE tunnel Kevin Liu
2022-04-13 17:10 ` [PATCH v3 12/22] net/ice: support new pattern of IPv4 Kevin Liu
2022-04-13 17:10 ` [PATCH v3 13/22] net/ice: treat unknown package as OS default package Kevin Liu
2022-04-13 17:10 ` [PATCH v3 14/22] net/ice: handle virtchnl event message without interrupt Kevin Liu
2022-04-13 17:10 ` [PATCH v3 15/22] net/ice: add DCF request queues function Kevin Liu
2022-04-13 17:10 ` [PATCH v3 16/22] net/ice: negotiate large VF and request more queues Kevin Liu
2022-04-13 17:10 ` [PATCH v3 17/22] net/ice: enable multiple queues configurations for large VF Kevin Liu
2022-04-13 17:10 ` [PATCH v3 18/22] net/ice: enable IRQ mapping configuration " Kevin Liu
2022-04-13 17:10 ` [PATCH v3 19/22] net/ice: add enable/disable queues for DCF " Kevin Liu
2022-04-13 17:10 ` [PATCH v3 20/22] net/ice: fix DCF ACL flow engine Kevin Liu
2022-04-13 17:10 ` [PATCH v3 21/22] testpmd: force flow flush Kevin Liu
2022-04-13 17:10 ` [PATCH v3 22/22] net/ice: fix DCF reset Kevin Liu
2022-04-19 15:45 ` [PATCH v4 00/23] complete common VF features for DCF Kevin Liu
2022-04-19 15:45 ` [PATCH v4 01/23] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-19 15:45 ` [PATCH v4 02/23] net/ice: enable RSS HASH " Kevin Liu
2022-04-19 15:45 ` [PATCH v4 03/23] net/ice: cleanup Tx buffers Kevin Liu
2022-04-19 15:45 ` [PATCH v4 04/23] net/ice: add ops MTU-SET to dcf Kevin Liu
2022-04-19 15:45 ` [PATCH v4 05/23] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
2022-04-19 15:45 ` [PATCH v4 06/23] net/ice: support dcf promisc configuration Kevin Liu
2022-04-19 15:45 ` [PATCH v4 07/23] net/ice: support dcf MAC configuration Kevin Liu
2022-04-19 15:45 ` [PATCH v4 08/23] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
2022-04-19 15:46 ` [PATCH v4 09/23] net/ice: support DCF new VLAN capabilities Kevin Liu
2022-04-19 15:46 ` [PATCH v4 10/23] net/ice: enable CVL DCF device reset API Kevin Liu
2022-04-19 15:46 ` [PATCH v4 11/23] net/ice: support IPv6 NVGRE tunnel Kevin Liu
2022-04-19 15:46 ` [PATCH v4 12/23] net/ice: support new pattern of IPv4 Kevin Liu
2022-04-19 15:46 ` [PATCH v4 13/23] net/ice: treat unknown package as OS default package Kevin Liu
2022-04-19 15:46 ` [PATCH v4 14/23] net/ice: handle virtchnl event message without interrupt Kevin Liu
2022-04-19 15:46 ` [PATCH v4 15/23] net/ice: add DCF request queues function Kevin Liu
2022-04-19 15:46 ` [PATCH v4 16/23] net/ice: negotiate large VF and request more queues Kevin Liu
2022-04-19 15:46 ` [PATCH v4 17/23] net/ice: enable multiple queues configurations for large VF Kevin Liu
2022-04-19 15:46 ` [PATCH v4 18/23] net/ice: enable IRQ mapping configuration " Kevin Liu
2022-04-19 15:46 ` [PATCH v4 19/23] net/ice: add enable/disable queues for DCF " Kevin Liu
2022-04-19 15:46 ` [PATCH v4 20/23] net/ice: add extended stats Kevin Liu
2022-04-19 15:46 ` [PATCH v4 21/23] net/ice: support queue information getting Kevin Liu
2022-04-19 15:46 ` [PATCH v4 22/23] net/ice: implement power management Kevin Liu
2022-04-19 15:46 ` [PATCH v4 23/23] doc: update for ice DCF datapath configuration Kevin Liu
2022-04-21 11:13 ` [PATCH v5 00/12] complete common VF features for DCF Kevin Liu
2022-04-21 11:13 ` [PATCH v5 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-21 11:13 ` [PATCH v5 02/12] net/ice: enable RSS HASH " Kevin Liu
2022-04-21 11:13 ` [PATCH v5 03/12] net/ice: cleanup Tx buffers Kevin Liu
2022-04-21 11:13 ` [PATCH v5 04/12] net/ice: add ops MTU-SET to dcf Kevin Liu
2022-04-21 11:13 ` [PATCH v5 05/12] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
2022-04-21 11:13 ` [PATCH v5 06/12] net/ice: support dcf promisc configuration Kevin Liu
2022-04-21 11:13 ` [PATCH v5 07/12] net/ice: support dcf MAC configuration Kevin Liu
2022-04-21 11:13 ` [PATCH v5 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
2022-04-21 11:14 ` [PATCH v5 09/12] net/ice: add extended stats Kevin Liu
2022-04-21 11:14 ` [PATCH v5 10/12] net/ice: support queue information getting Kevin Liu
2022-04-21 11:14 ` [PATCH v5 11/12] net/ice: implement power management Kevin Liu
2022-04-21 11:14 ` [PATCH v5 12/12] doc: update for ice DCF datapath configuration Kevin Liu
2022-04-27 18:12 ` [PATCH v6 00/12] complete common VF features for DCF Kevin Liu
2022-04-27 18:12 ` [PATCH v6 01/12] net/ice: enable RSS RETA ops for DCF hardware Kevin Liu
2022-04-27 10:38 ` Zhang, Qi Z
2022-04-27 18:12 ` [PATCH v6 02/12] net/ice: enable RSS HASH " Kevin Liu
2022-04-27 18:12 ` [PATCH v6 03/12] net/ice: cleanup Tx buffers Kevin Liu
2022-04-27 10:41 ` Zhang, Qi Z
2022-04-27 18:12 ` [PATCH v6 04/12] net/ice: add ops MTU-SET to dcf Kevin Liu
2022-04-27 18:12 ` [PATCH v6 05/12] net/ice: add ops dev-supported-ptypes-get " Kevin Liu
2022-04-27 10:44 ` Zhang, Qi Z
2022-04-27 18:12 ` [PATCH v6 06/12] net/ice: support dcf promisc configuration Kevin Liu
2022-04-27 18:12 ` [PATCH v6 07/12] net/ice: support dcf MAC configuration Kevin Liu
2022-04-27 18:12 ` [PATCH v6 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
2022-04-27 18:12 ` [PATCH v6 09/12] net/ice: add extended stats Kevin Liu
2022-04-27 18:12 ` [PATCH v6 10/12] net/ice: support queue information getting Kevin Liu
2022-04-27 18:13 ` [PATCH v6 11/12] net/ice: implement power management Kevin Liu
2022-04-27 18:13 ` [PATCH v6 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
2022-04-27 10:46 ` Zhang, Qi Z
2022-04-29 9:19 ` [PATCH v7 00/12] complete common VF features for DCF Kevin Liu
2022-04-29 2:32 ` Zhang, Qi Z
2022-04-29 9:19 ` [PATCH v7 01/12] net/ice: support for RSS RETA configure in DCF mode Kevin Liu
2022-04-29 9:19 ` [PATCH v7 02/12] net/ice: support for RSS HASH " Kevin Liu
2022-04-29 9:19 ` [PATCH v7 03/12] net/ice: support cleanup Tx buffers " Kevin Liu
2022-04-29 9:19 ` [PATCH v7 04/12] net/ice: support for MTU configure " Kevin Liu
2022-04-29 9:19 ` [PATCH v7 05/12] net/ice: add ops dev-supported-ptypes-get to dcf Kevin Liu
2022-04-29 9:19 ` [PATCH v7 06/12] net/ice: support dcf promisc configuration Kevin Liu
2022-04-29 9:19 ` [PATCH v7 07/12] net/ice: support dcf MAC configuration Kevin Liu
2022-04-29 9:19 ` [PATCH v7 08/12] net/ice: support dcf VLAN filter and offload configuration Kevin Liu
2022-04-29 9:19 ` [PATCH v7 09/12] net/ice: add extended stats Kevin Liu
2022-04-29 9:19 ` [PATCH v7 10/12] net/ice: support queue information getting Kevin Liu
2022-04-29 9:19 ` [PATCH v7 11/12] net/ice: add implement power management Kevin Liu
2022-04-29 9:19 ` [PATCH v7 12/12] net/ice: support DCF new VLAN capabilities Kevin Liu
2022-05-11 0:06 ` Zhang, Qi Z
2022-04-19 16:01 ` [PATCH v4 0/2] fix DCF function defect Kevin Liu
2022-04-19 16:01 ` [PATCH v4 1/2] net/ice: fix DCF ACL flow engine Kevin Liu
2022-04-20 12:01 ` Zhang, Qi Z
2022-04-19 16:01 ` [PATCH v4 2/2] net/ice: fix DCF reset Kevin Liu