* [dpdk-dev] [PATCH v2 1/4] app/testpmd: add packet id for IP fragment
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
@ 2021-03-24 13:48 ` Jeff Guo
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 2/4] common/iavf: add proto header " Jeff Guo
` (6 subsequent siblings)
7 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-03-24 13:48 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: yuying.zhang, ting.xu, dev, jia.guo
Add new pattern items to support flow configuration for IP fragment
packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 49d9f9c043..331a08eec4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -166,6 +166,7 @@ enum index {
ITEM_VLAN_HAS_MORE_VLAN,
ITEM_IPV4,
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -236,6 +237,7 @@ enum index {
ITEM_IPV6_FRAG_EXT,
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_ICMP6,
ITEM_ICMP6_TYPE,
ITEM_ICMP6_CODE,
@@ -1026,6 +1028,7 @@ static const enum index item_vlan[] = {
static const enum index item_ipv4[] = {
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -1162,6 +1165,7 @@ static const enum index item_ipv6_ext[] = {
static const enum index item_ipv6_frag_ext[] = {
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_NEXT,
ZERO,
};
@@ -2462,6 +2466,13 @@ static const struct token token_list[] = {
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
hdr.type_of_service)),
},
+ [ITEM_IPV4_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+ hdr.packet_id)),
+ },
[ITEM_IPV4_FRAGMENT_OFFSET] = {
.name = "fragment_offset",
.help = "fragmentation flags and fragment offset",
@@ -2965,12 +2976,20 @@ static const struct token token_list[] = {
},
[ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
.name = "frag_data",
- .help = "Fragment flags and offset",
+ .help = "fragment flags and offset",
.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
hdr.frag_data)),
},
+ [ITEM_IPV6_FRAG_EXT_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
+ hdr.id)),
+ },
[ITEM_ICMP6] = {
.name = "icmp6",
.help = "match any ICMPv6 header",
--
2.20.1
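For illustration, the new items can be exercised from the testpmd flow CLI once this patch is applied; a hypothetical session (port 0 and queue 3 are assumptions, not part of the patch):

```
testpmd> flow create 0 ingress pattern eth / ipv4 packet_id is 47 / end actions queue index 3 / end
testpmd> flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47 / end actions queue index 3 / end
```

As with the existing `tos` and `frag_data` tokens, `packet_id` accepts the usual `is`/`spec`/`last`/`mask` parameters via item_param.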
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v2 2/4] common/iavf: add proto header for IP fragment
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
@ 2021-03-24 13:48 ` Jeff Guo
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 3/4] net/iavf: support RSS hash " Jeff Guo
` (5 subsequent siblings)
7 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-03-24 13:48 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: yuying.zhang, ting.xu, dev, jia.guo
Add new virtchnl protocol header types and fields for IP fragment packets
to support RSS hash and FDIR.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/common/iavf/virtchnl.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 6b99e170f0..1042ee7cae 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1415,7 +1415,9 @@ enum virtchnl_proto_hdr_type {
VIRTCHNL_PROTO_HDR_S_VLAN,
VIRTCHNL_PROTO_HDR_C_VLAN,
VIRTCHNL_PROTO_HDR_IPV4,
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG,
VIRTCHNL_PROTO_HDR_IPV6,
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
VIRTCHNL_PROTO_HDR_TCP,
VIRTCHNL_PROTO_HDR_UDP,
VIRTCHNL_PROTO_HDR_SCTP,
@@ -1452,6 +1454,7 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV4_DSCP,
VIRTCHNL_PROTO_HDR_IPV4_TTL,
VIRTCHNL_PROTO_HDR_IPV4_PROT,
+ VIRTCHNL_PROTO_HDR_IPV4_PKID,
/* IPV6 */
VIRTCHNL_PROTO_HDR_IPV6_SRC =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
@@ -1472,6 +1475,9 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+ /* IPv6 Extension Header Fragment */
+ VIRTCHNL_PROTO_HDR_IPV6_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
/* TCP */
VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
--
2.20.1
* [dpdk-dev] [PATCH v2 3/4] net/iavf: support RSS hash for IP fragment
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 2/4] common/iavf: add proto header " Jeff Guo
@ 2021-03-24 13:48 ` Jeff Guo
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
` (4 subsequent siblings)
7 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-03-24 13:48 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: yuying.zhang, ting.xu, dev, jia.guo
New pattern and RSS hash flow parsing are added to handle fragmented
IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_generic_flow.c | 24 ++++++++++++++++++++
drivers/net/iavf/iavf_generic_flow.h | 3 +++
drivers/net/iavf/iavf_hash.c | 33 +++++++++++++++++++++++-----
3 files changed, 54 insertions(+), 6 deletions(-)
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 8635ff83ca..242bb4abc5 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -219,6 +219,30 @@ enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[] = {
RTE_FLOW_ITEM_TYPE_END,
};
+enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV6,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 0ccf5901b4..ce3d12bcd9 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -172,6 +172,9 @@ extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv4_icmp[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[];
+extern enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_udp[];
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index d8d22f8009..d46529c61e 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -112,6 +112,10 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_DST), {BUFF_NOUSED} }
+#define proto_hdr_ipv6_frag { \
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG, \
+ FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_PKID), {BUFF_NOUSED} }
+
#define proto_hdr_ipv6_with_prot { \
VIRTCHNL_PROTO_HDR_IPV6, \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
@@ -190,6 +194,12 @@ struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
};
+struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
+ TUNNEL_LEVEL_OUTER, 5,
+ {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+ proto_hdr_ipv6, proto_hdr_ipv6_frag}
+};
+
struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
TUNNEL_LEVEL_OUTER, 5,
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
@@ -303,7 +313,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
ETH_RSS_NONFRAG_IPV4_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
@@ -312,6 +323,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 outer */
#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
+ ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_NONFRAG_IPV6_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
@@ -330,6 +343,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
+ ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
@@ -415,10 +430,12 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
/* IPv6 */
{iavf_pattern_eth_ipv6, IAVF_RSS_TYPE_OUTER_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_ipv6_udp, IAVF_RSS_TYPE_OUTER_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv6_tcp, IAVF_RSS_TYPE_OUTER_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_ipv6_sctp, IAVF_RSS_TYPE_OUTER_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
{iavf_pattern_eth_vlan_ipv6, IAVF_RSS_TYPE_VLAN_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_vlan_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
@@ -647,11 +664,13 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 |
+ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
ETH_RSS_NONFRAG_IPV4_UDP |
ETH_RSS_NONFRAG_IPV4_TCP |
ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & ETH_RSS_FRAG_IPV4) {
+ REFINE_PROTO_FLD(ADD, IPV4_PKID);
+ } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
@@ -667,7 +686,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 |
+ (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
ETH_RSS_NONFRAG_IPV6_UDP |
ETH_RSS_NONFRAG_IPV6_TCP |
ETH_RSS_NONFRAG_IPV6_SCTP)) {
@@ -885,8 +904,10 @@ struct rss_attr_type {
ETH_RSS_NONFRAG_IPV6_TCP | \
ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | VALID_RSS_IPV6_L4)
+#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+ VALID_RSS_IPV4_L4)
+#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+ VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
--
2.20.1
* [dpdk-dev] [PATCH v2 4/4] net/iavf: support FDIR for IP fragment packet
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
` (2 preceding siblings ...)
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 3/4] net/iavf: support RSS hash " Jeff Guo
@ 2021-03-24 13:48 ` Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
` (3 subsequent siblings)
7 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-03-24 13:48 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: yuying.zhang, ting.xu, dev, jia.guo
New FDIR parsing is added to handle fragmented IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_fdir.c | 296 ++++++++++++++++++---------
drivers/net/iavf/iavf_generic_flow.h | 5 +
2 files changed, 209 insertions(+), 92 deletions(-)
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 459c09f6fb..df8d1d431e 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -34,7 +34,7 @@
#define IAVF_FDIR_INSET_ETH_IPV4 (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \
- IAVF_INSET_IPV4_TTL)
+ IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
#define IAVF_FDIR_INSET_ETH_IPV4_UDP (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
@@ -56,6 +56,9 @@
IAVF_INSET_IPV6_NEXT_HDR | IAVF_INSET_IPV6_TC | \
IAVF_INSET_IPV6_HOP_LIMIT)
+#define IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT (\
+ IAVF_INSET_IPV6_ID)
+
#define IAVF_FDIR_INSET_ETH_IPV6_UDP (\
IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \
IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \
@@ -113,10 +116,12 @@
static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
{iavf_pattern_ethertype, IAVF_FDIR_INSET_ETH, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv4, IAVF_FDIR_INSET_ETH_IPV4, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv4_udp, IAVF_FDIR_INSET_ETH_IPV4_UDP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv4_tcp, IAVF_FDIR_INSET_ETH_IPV4_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv4_sctp, IAVF_FDIR_INSET_ETH_IPV4_SCTP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6, IAVF_FDIR_INSET_ETH_IPV6, IAVF_INSET_NONE},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_udp, IAVF_FDIR_INSET_ETH_IPV6_UDP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_tcp, IAVF_FDIR_INSET_ETH_IPV6_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_sctp, IAVF_FDIR_INSET_ETH_IPV6_SCTP, IAVF_INSET_NONE},
@@ -497,12 +502,13 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
{
struct virtchnl_proto_hdrs *hdrs =
&filter->add_fltr.rule_cfg.proto_hdrs;
- const struct rte_flow_item *item = pattern;
- enum rte_flow_item_type item_type;
enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
const struct rte_flow_item_eth *eth_spec, *eth_mask;
- const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+ const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_spec;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_last;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_mask;
const struct rte_flow_item_udp *udp_spec, *udp_mask;
const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
@@ -513,15 +519,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
const struct rte_flow_item_ah *ah_spec, *ah_mask;
const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
const struct rte_flow_item_ecpri *ecpri_spec, *ecpri_mask;
+ const struct rte_flow_item *item = pattern;
+ struct virtchnl_proto_hdr *hdr, *hdr1 = NULL;
struct rte_ecpri_common_hdr ecpri_common;
uint64_t input_set = IAVF_INSET_NONE;
- uint8_t proto_id;
-
+ enum rte_flow_item_type item_type;
enum rte_flow_item_type next_type;
+ bool spec_all_pid = false;
uint16_t ether_type;
-
+ uint8_t proto_id;
int layer = 0;
- struct virtchnl_proto_hdr *hdr;
uint8_t ipv6_addr_mask[16] = {
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
@@ -529,26 +536,28 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
};
for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
+ item_type = item->type;
+
+ if (item->last && item_type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item_type !=
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT) {
rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Not support range");
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Not support range");
}
- item_type = item->type;
-
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
eth_spec = item->spec;
eth_mask = item->mask;
next_type = (item + 1)->type;
- hdr = &hdrs->proto_hdr[layer];
+ hdr1 = &hdrs->proto_hdr[layer];
- VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ETH);
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, ETH);
if (next_type == RTE_FLOW_ITEM_TYPE_END &&
- (!eth_spec || !eth_mask)) {
+ (!eth_spec || !eth_mask)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item, "NULL eth spec/mask.");
@@ -584,10 +593,11 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
}
input_set |= IAVF_INSET_ETHERTYPE;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
- rte_memcpy(hdr->buffer,
- eth_spec, sizeof(struct rte_ether_hdr));
+ rte_memcpy(hdr1->buffer, eth_spec,
+ sizeof(struct rte_ether_hdr));
}
hdrs->count = ++layer;
@@ -596,51 +606,102 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
case RTE_FLOW_ITEM_TYPE_IPV4:
l3 = RTE_FLOW_ITEM_TYPE_IPV4;
ipv4_spec = item->spec;
+ ipv4_last = item->last;
ipv4_mask = item->mask;
+ next_type = (item + 1)->type;
hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4);
- if (ipv4_spec && ipv4_mask) {
- if (ipv4_mask->hdr.version_ihl ||
- ipv4_mask->hdr.total_length ||
- ipv4_mask->hdr.packet_id ||
- ipv4_mask->hdr.fragment_offset ||
- ipv4_mask->hdr.hdr_checksum) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv4 mask.");
- return -rte_errno;
- }
+ if (!(ipv4_spec && ipv4_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if (ipv4_mask->hdr.type_of_service ==
- UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TOS;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DSCP);
- }
- if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_PROTO;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
- }
- if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TTL;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, TTL);
- }
- if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
- }
- if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
- }
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 mask.");
+ return -rte_errno;
+ }
- rte_memcpy(hdr->buffer,
- &ipv4_spec->hdr,
- sizeof(ipv4_spec->hdr));
+ if (ipv4_last &&
+ (ipv4_last->hdr.version_ihl ||
+ ipv4_last->hdr.type_of_service ||
+ ipv4_last->hdr.time_to_live ||
+ ipv4_last->hdr.total_length ||
+ ipv4_last->hdr.next_proto_id ||
+ ipv4_last->hdr.hdr_checksum ||
+ ipv4_last->hdr.src_addr ||
+ ipv4_last->hdr.dst_addr)) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 last.");
+ return -rte_errno;
+ }
+
+ if (ipv4_mask->hdr.type_of_service ==
+ UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TOS;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DSCP);
+ }
+
+ if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_PROTO;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ PROT);
+ }
+
+ if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TTL;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ TTL);
+ }
+
+ if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ SRC);
}
+ if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DST);
+ }
+
+ if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
+ if (ipv4_last &&
+ ipv4_spec->hdr.packet_id == 0 &&
+ ipv4_last->hdr.packet_id == 0xffff)
+ spec_all_pid = true;
+
+ /* All IPv4 fragment packets have the
+ * same ethertype; if the spec matches
+ * every packet id, set ethertype into
+ * the input set instead.
+ */
+ input_set |= spec_all_pid ?
+ IAVF_INSET_ETHERTYPE :
+ IAVF_INSET_IPV4_ID;
+
+ if (spec_all_pid)
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
+ ETH, ETHERTYPE);
+ else
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
+ IPV4, PKID);
+ }
+
+ if (ipv4_mask->hdr.fragment_offset == UINT16_MAX)
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4_FRAG);
+
+ rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
+ sizeof(ipv4_spec->hdr));
+
hdrs->count = ++layer;
break;
@@ -653,46 +714,92 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
- if (ipv6_spec && ipv6_mask) {
- if (ipv6_mask->hdr.payload_len) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv6 mask");
- return -rte_errno;
- }
+ if (!(ipv6_spec && ipv6_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if ((ipv6_mask->hdr.vtc_flow &
- rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
- == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
- input_set |= IAVF_INSET_IPV6_TC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
- }
- if (ipv6_mask->hdr.proto == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_NEXT_HDR;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
- }
- if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
- }
- if (!memcmp(ipv6_mask->hdr.src_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.src_addr))) {
- input_set |= IAVF_INSET_IPV6_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
- }
- if (!memcmp(ipv6_mask->hdr.dst_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.dst_addr))) {
- input_set |= IAVF_INSET_IPV6_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
- }
+ if (ipv6_mask->hdr.payload_len) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv6 mask");
+ return -rte_errno;
+ }
- rte_memcpy(hdr->buffer,
- &ipv6_spec->hdr,
- sizeof(ipv6_spec->hdr));
+ if ((ipv6_mask->hdr.vtc_flow &
+ rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
+ == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
+ input_set |= IAVF_INSET_IPV6_TC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ TC);
}
+ if (ipv6_mask->hdr.proto == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_NEXT_HDR;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ PROT);
+ }
+
+ if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ HOP_LIMIT);
+ }
+
+ if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.src_addr))) {
+ input_set |= IAVF_INSET_IPV6_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ SRC);
+ }
+ if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.dst_addr))) {
+ input_set |= IAVF_INSET_IPV6_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ DST);
+ }
+
+ rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
+ sizeof(ipv6_spec->hdr));
+
+ hdrs->count = ++layer;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+ ipv6_frag_spec = item->spec;
+ ipv6_frag_last = item->last;
+ ipv6_frag_mask = item->mask;
+ next_type = (item + 1)->type;
+
+ hdr = &hdrs->proto_hdr[layer];
+
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6_EH_FRAG);
+
+ if (!(ipv6_frag_spec && ipv6_frag_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
+
+ if (ipv6_frag_last && ipv6_frag_spec->hdr.id == 0 &&
+ ipv6_frag_last->hdr.id == 0xffffffff)
+ spec_all_pid = true;
+
+ /* All IPv6 fragment packets have the same
+ * ethertype; if the spec matches every packet
+ * id, set ethertype into the input set instead.
+ */
+ input_set |= spec_all_pid ? IAVF_INSET_ETHERTYPE :
+ IAVF_INSET_IPV6_ID;
+ if (spec_all_pid)
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
+ else
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ PKID);
+
+ rte_memcpy(hdr->buffer, &ipv6_frag_spec->hdr,
+ sizeof(ipv6_frag_spec->hdr));
+
hdrs->count = ++layer;
break;
@@ -1010,8 +1117,13 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
return -rte_errno;
}
- if (input_set & ~input_set_mask)
- return -EINVAL;
+ if (spec_all_pid) {
+ if (input_set & ~(input_set_mask | IAVF_INSET_ETHERTYPE))
+ return -EINVAL;
+ } else {
+ if (input_set & ~input_set_mask)
+ return -EINVAL;
+ }
filter->input_set = input_set;
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index ce3d12bcd9..b7b9bd2495 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -61,6 +61,7 @@
#define IAVF_PFCP_S_FIELD (1ULL << 44)
#define IAVF_PFCP_SEID (1ULL << 43)
#define IAVF_ECPRI_PC_RTC_ID (1ULL << 42)
+#define IAVF_IP_PKID (1ULL << 41)
/* input set */
@@ -84,6 +85,8 @@
(IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
#define IAVF_INSET_IPV4_TTL \
(IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
+#define IAVF_INSET_IPV4_ID \
+ (IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_IPV6_SRC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
#define IAVF_INSET_IPV6_DST \
@@ -94,6 +97,8 @@
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
#define IAVF_INSET_IPV6_TC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
+#define IAVF_INSET_IPV6_ID \
+ (IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_TCP_SRC_PORT \
(IAVF_PROT_TCP_OUTER | IAVF_SPORT)
--
2.20.1
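The "spec matches every packet id" shortcut above is driven by a spec/last range on the packet id. A hypothetical testpmd rule that takes that path (port and queue numbers are assumptions):

```
testpmd> flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff / end actions queue index 1 / end
```

Because such a rule covers every fragment regardless of id, the parser sets the ETHERTYPE bit instead of the PKID bit, and the input-set check tolerates the extra IAVF_INSET_ETHERTYPE bit only in this case.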
* [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
` (3 preceding siblings ...)
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
@ 2021-04-11 6:01 ` Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 2/4] common/iavf: add proto header " Jeff Guo
` (2 more replies)
2021-04-11 6:07 ` [dpdk-dev] [PATCH v3 0/3] support flow for IP fragment in ICE Jeff Guo
` (2 subsequent siblings)
7 siblings, 3 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:01 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
Add new pattern items to support flow configuration for IP fragment
packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..46ae342b85 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -166,6 +166,7 @@ enum index {
ITEM_VLAN_HAS_MORE_VLAN,
ITEM_IPV4,
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -236,6 +237,7 @@ enum index {
ITEM_IPV6_FRAG_EXT,
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_ICMP6,
ITEM_ICMP6_TYPE,
ITEM_ICMP6_CODE,
@@ -1028,6 +1030,7 @@ static const enum index item_vlan[] = {
static const enum index item_ipv4[] = {
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -1164,6 +1167,7 @@ static const enum index item_ipv6_ext[] = {
static const enum index item_ipv6_frag_ext[] = {
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_NEXT,
ZERO,
};
@@ -2466,6 +2470,13 @@ static const struct token token_list[] = {
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
hdr.type_of_service)),
},
+ [ITEM_IPV4_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+ hdr.packet_id)),
+ },
[ITEM_IPV4_FRAGMENT_OFFSET] = {
.name = "fragment_offset",
.help = "fragmentation flags and fragment offset",
@@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
},
[ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
.name = "frag_data",
- .help = "Fragment flags and offset",
+ .help = "fragment flags and offset",
.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
hdr.frag_data)),
},
+ [ITEM_IPV6_FRAG_EXT_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
+ hdr.id)),
+ },
[ITEM_ICMP6] = {
.name = "icmp6",
.help = "match any ICMPv6 header",
--
2.20.1
* [dpdk-dev] [PATCH v3 2/4] common/iavf: add proto header for IP fragment
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
@ 2021-04-11 6:01 ` Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 3/4] net/iavf: support RSS hash " Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
2 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:01 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
Add new virtchnl protocol header types and fields for IP fragment packets
to support RSS hash and FDIR.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/common/iavf/virtchnl.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 6b99e170f0..e3eb767d66 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1415,7 +1415,9 @@ enum virtchnl_proto_hdr_type {
VIRTCHNL_PROTO_HDR_S_VLAN,
VIRTCHNL_PROTO_HDR_C_VLAN,
VIRTCHNL_PROTO_HDR_IPV4,
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG,
VIRTCHNL_PROTO_HDR_IPV6,
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
VIRTCHNL_PROTO_HDR_TCP,
VIRTCHNL_PROTO_HDR_UDP,
VIRTCHNL_PROTO_HDR_SCTP,
@@ -1452,6 +1454,8 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV4_DSCP,
VIRTCHNL_PROTO_HDR_IPV4_TTL,
VIRTCHNL_PROTO_HDR_IPV4_PROT,
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4_FRAG),
/* IPV6 */
VIRTCHNL_PROTO_HDR_IPV6_SRC =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
@@ -1472,6 +1476,9 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+ /* IPv6 Extension Header Fragment */
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
/* TCP */
VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
--
2.20.1
* [dpdk-dev] [PATCH v3 3/4] net/iavf: support RSS hash for IP fragment
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 2/4] common/iavf: add proto header " Jeff Guo
@ 2021-04-11 6:01 ` Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
2 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:01 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
New pattern and RSS hash flow parsing are added to handle fragmented
IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_generic_flow.c | 24 ++++++++
drivers/net/iavf/iavf_generic_flow.h | 3 +
drivers/net/iavf/iavf_hash.c | 83 ++++++++++++++++++++++++----
3 files changed, 100 insertions(+), 10 deletions(-)
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 8635ff83ca..242bb4abc5 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -219,6 +219,30 @@ enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[] = {
RTE_FLOW_ITEM_TYPE_END,
};
+enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV6,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 005eeb3553..32932557ca 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -203,6 +203,9 @@ extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv4_icmp[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[];
+extern enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_udp[];
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index d8d22f8009..5d3d62839b 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -112,6 +112,10 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_DST), {BUFF_NOUSED} }
+#define proto_hdr_ipv6_frag { \
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG, \
+ FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID), {BUFF_NOUSED} }
+
#define proto_hdr_ipv6_with_prot { \
VIRTCHNL_PROTO_HDR_IPV6, \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
@@ -190,6 +194,12 @@ struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
};
+struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
+ TUNNEL_LEVEL_OUTER, 5,
+ {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+ proto_hdr_ipv6, proto_hdr_ipv6_frag}
+};
+
struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
TUNNEL_LEVEL_OUTER, 5,
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
@@ -303,7 +313,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
ETH_RSS_NONFRAG_IPV4_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
@@ -312,6 +323,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 outer */
#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
+ ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_NONFRAG_IPV6_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
@@ -330,6 +343,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
+ ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
@@ -415,10 +430,12 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
/* IPv6 */
{iavf_pattern_eth_ipv6, IAVF_RSS_TYPE_OUTER_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_ipv6_udp, IAVF_RSS_TYPE_OUTER_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv6_tcp, IAVF_RSS_TYPE_OUTER_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_ipv6_sctp, IAVF_RSS_TYPE_OUTER_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
{iavf_pattern_eth_vlan_ipv6, IAVF_RSS_TYPE_VLAN_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_vlan_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
@@ -626,6 +643,29 @@ do { \
REFINE_PROTO_FLD(ADD, fld_2); \
} while (0)
+static void
+iavf_hash_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+ struct virtchnl_proto_hdr *hdr1;
+ struct virtchnl_proto_hdr *hdr2;
+ int i;
+
+ if (layer < 0 || layer > hdrs->count)
+ return;
+
+ /* shift headers layer */
+ for (i = hdrs->count; i >= layer; i--) {
+ hdr1 = &hdrs->proto_hdr[i];
+ hdr2 = &hdrs->proto_hdr[i - 1];
+ *hdr1 = *hdr2;
+ }
+
+ /* adding dummy fragment header */
+ hdr1 = &hdrs->proto_hdr[layer];
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+ hdrs->count = ++layer;
+}
+
/* refine proto hdrs base on l2, l3, l4 rss type */
static void
iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
@@ -647,17 +687,19 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 |
+ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
ETH_RSS_NONFRAG_IPV4_UDP |
ETH_RSS_NONFRAG_IPV4_TCP |
ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & ETH_RSS_FRAG_IPV4) {
+ iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
+ } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (ETH_RSS_L4_SRC_ONLY |
+ ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -665,9 +707,21 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr->field_selector = 0;
}
break;
+ case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
+ if (rss_type &
+ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
+ ETH_RSS_NONFRAG_IPV4_UDP |
+ ETH_RSS_NONFRAG_IPV4_TCP |
+ ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & ETH_RSS_FRAG_IPV4)
+ REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
+ } else {
+ hdr->field_selector = 0;
+ }
+ break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 |
+ (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
ETH_RSS_NONFRAG_IPV6_UDP |
ETH_RSS_NONFRAG_IPV6_TCP |
ETH_RSS_NONFRAG_IPV6_SCTP)) {
@@ -676,8 +730,8 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (ETH_RSS_L4_SRC_ONLY |
+ ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -692,6 +746,13 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
REPALCE_PROTO_FLD(IPV6_DST,
IPV6_PREFIX64_DST);
}
+ break;
+ case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
+ if (rss_type & ETH_RSS_FRAG_IPV6)
+ REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
+ else
+ hdr->field_selector = 0;
+
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
@@ -885,8 +946,10 @@ struct rss_attr_type {
ETH_RSS_NONFRAG_IPV6_TCP | \
ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | VALID_RSS_IPV6_L4)
+#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+ VALID_RSS_IPV4_L4)
+#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+ VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
--
2.20.1
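[Editor's note] The iavf_hash_add_fragment_hdr() helper in the patch above makes room for a dummy fragment header by shifting the existing proto headers up one slot. A simplified, self-contained model of that insertion is sketched below; the struct and enum are illustrative stand-ins (the real code operates on struct virtchnl_proto_hdrs and sets count via `++layer` rather than a plain increment).

```c
#include <assert.h>

/* Illustrative stand-ins for the virtchnl types used by the patch */
enum hdr_type { HDR_NONE, HDR_ETH, HDR_IPV4, HDR_IPV4_FRAG, HDR_UDP };

#define MAX_HDRS 8
struct proto_hdrs {
	int count;
	enum hdr_type hdr[MAX_HDRS];
};

/* Open a slot at 'layer' by shifting later headers up one position,
 * then drop a dummy fragment header into the gap. */
void add_fragment_hdr(struct proto_hdrs *hdrs, int layer)
{
	int i;

	if (layer < 0 || layer > hdrs->count || hdrs->count >= MAX_HDRS)
		return;

	/* shift headers at and above 'layer' one slot up */
	for (i = hdrs->count; i > layer; i--)
		hdrs->hdr[i] = hdrs->hdr[i - 1];

	hdrs->hdr[layer] = HDR_IPV4_FRAG;
	hdrs->count++;
}
```

For a template of {ETH, IPV4} this appends the fragment header at layer 2, matching the `i + 1` call site in iavf_refine_proto_hdrs_l234().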
* [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 2/4] common/iavf: add proto header " Jeff Guo
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 3/4] net/iavf: support RSS hash " Jeff Guo
@ 2021-04-11 6:01 ` Jeff Guo
2021-04-12 8:45 ` Xu, Ting
2 siblings, 1 reply; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:01 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
New FDIR parsing is added to handle fragmented IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_fdir.c | 376 ++++++++++++++++++---------
drivers/net/iavf/iavf_generic_flow.h | 5 +
2 files changed, 257 insertions(+), 124 deletions(-)
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 62f032985a..64c169f8c4 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -34,7 +34,7 @@
#define IAVF_FDIR_INSET_ETH_IPV4 (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \
- IAVF_INSET_IPV4_TTL)
+ IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
#define IAVF_FDIR_INSET_ETH_IPV4_UDP (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
@@ -56,6 +56,9 @@
IAVF_INSET_IPV6_NEXT_HDR | IAVF_INSET_IPV6_TC | \
IAVF_INSET_IPV6_HOP_LIMIT)
+#define IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT (\
+ IAVF_INSET_IPV6_ID)
+
#define IAVF_FDIR_INSET_ETH_IPV6_UDP (\
IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \
IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \
@@ -143,6 +146,7 @@ static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
{iavf_pattern_eth_ipv4_tcp, IAVF_FDIR_INSET_ETH_IPV4_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv4_sctp, IAVF_FDIR_INSET_ETH_IPV4_SCTP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6, IAVF_FDIR_INSET_ETH_IPV6, IAVF_INSET_NONE},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_udp, IAVF_FDIR_INSET_ETH_IPV6_UDP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_tcp, IAVF_FDIR_INSET_ETH_IPV6_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_sctp, IAVF_FDIR_INSET_ETH_IPV6_SCTP, IAVF_INSET_NONE},
@@ -543,6 +547,29 @@ iavf_fdir_refine_input_set(const uint64_t input_set,
}
}
+static void
+iavf_fdir_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+ struct virtchnl_proto_hdr *hdr1;
+ struct virtchnl_proto_hdr *hdr2;
+ int i;
+
+ if (layer < 0 || layer > hdrs->count)
+ return;
+
+ /* shift headers layer */
+ for (i = hdrs->count; i >= layer; i--) {
+ hdr1 = &hdrs->proto_hdr[i];
+ hdr2 = &hdrs->proto_hdr[i - 1];
+ *hdr1 = *hdr2;
+ }
+
+ /* adding dummy fragment header */
+ hdr1 = &hdrs->proto_hdr[layer];
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+ hdrs->count = ++layer;
+}
+
static int
iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
const struct rte_flow_item pattern[],
@@ -550,12 +577,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
struct rte_flow_error *error,
struct iavf_fdir_conf *filter)
{
- const struct rte_flow_item *item = pattern;
- enum rte_flow_item_type item_type;
+ struct virtchnl_proto_hdrs *hdrs =
+ &filter->add_fltr.rule_cfg.proto_hdrs;
enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
const struct rte_flow_item_eth *eth_spec, *eth_mask;
- const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+ const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_spec;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_last;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_mask;
const struct rte_flow_item_udp *udp_spec, *udp_mask;
const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
@@ -566,15 +596,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
const struct rte_flow_item_ah *ah_spec, *ah_mask;
const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
const struct rte_flow_item_ecpri *ecpri_spec, *ecpri_mask;
+ const struct rte_flow_item *item = pattern;
+ struct virtchnl_proto_hdr *hdr, *hdr1 = NULL;
struct rte_ecpri_common_hdr ecpri_common;
uint64_t input_set = IAVF_INSET_NONE;
-
+ enum rte_flow_item_type item_type;
enum rte_flow_item_type next_type;
+ uint8_t tun_inner = 0;
uint16_t ether_type;
-
- u8 tun_inner = 0;
int layer = 0;
- struct virtchnl_proto_hdr *hdr;
uint8_t ipv6_addr_mask[16] = {
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
@@ -582,26 +612,28 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
};
for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
+ item_type = item->type;
+
+ if (item->last && item_type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item_type !=
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT) {
rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Not support range");
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Not support range");
}
- item_type = item->type;
-
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
eth_spec = item->spec;
eth_mask = item->mask;
next_type = (item + 1)->type;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr1 = &hdrs->proto_hdr[layer];
- VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ETH);
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, ETH);
if (next_type == RTE_FLOW_ITEM_TYPE_END &&
- (!eth_spec || !eth_mask)) {
+ (!eth_spec || !eth_mask)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item, "NULL eth spec/mask.");
@@ -637,69 +669,117 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
}
input_set |= IAVF_INSET_ETHERTYPE;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
- rte_memcpy(hdr->buffer,
- eth_spec, sizeof(struct rte_ether_hdr));
+ rte_memcpy(hdr1->buffer, eth_spec,
+ sizeof(struct rte_ether_hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
l3 = RTE_FLOW_ITEM_TYPE_IPV4;
ipv4_spec = item->spec;
+ ipv4_last = item->last;
ipv4_mask = item->mask;
+ next_type = (item + 1)->type;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4);
- if (ipv4_spec && ipv4_mask) {
- if (ipv4_mask->hdr.version_ihl ||
- ipv4_mask->hdr.total_length ||
- ipv4_mask->hdr.packet_id ||
- ipv4_mask->hdr.fragment_offset ||
- ipv4_mask->hdr.hdr_checksum) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv4 mask.");
- return -rte_errno;
- }
+ if (!(ipv4_spec && ipv4_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if (ipv4_mask->hdr.type_of_service ==
- UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TOS;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DSCP);
- }
- if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_PROTO;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
- }
- if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TTL;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, TTL);
- }
- if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
- }
- if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
- }
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 mask.");
+ return -rte_errno;
+ }
- if (tun_inner) {
- input_set &= ~IAVF_PROT_IPV4_OUTER;
- input_set |= IAVF_PROT_IPV4_INNER;
- }
+ if (ipv4_last &&
+ (ipv4_last->hdr.version_ihl ||
+ ipv4_last->hdr.type_of_service ||
+ ipv4_last->hdr.time_to_live ||
+ ipv4_last->hdr.total_length ||
+ ipv4_last->hdr.next_proto_id ||
+ ipv4_last->hdr.hdr_checksum ||
+ ipv4_last->hdr.src_addr ||
+ ipv4_last->hdr.dst_addr)) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 last.");
+ return -rte_errno;
+ }
- rte_memcpy(hdr->buffer,
- &ipv4_spec->hdr,
- sizeof(ipv4_spec->hdr));
+ if (ipv4_mask->hdr.type_of_service ==
+ UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TOS;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DSCP);
+ }
+
+ if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_PROTO;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ PROT);
+ }
+
+ if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TTL;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ TTL);
+ }
+
+ if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ SRC);
+ }
+
+ if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DST);
+ }
+
+ rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
+ sizeof(ipv4_spec->hdr));
+
+ hdrs->count = ++layer;
+
+ /* fragmented IPv4 rules only support matching any
+ * packet_id:
+ * spec is 0, last is 0xffff, mask is 0xffff
+ */
+ if (ipv4_last && ipv4_spec->hdr.packet_id == 0 &&
+ ipv4_last->hdr.packet_id == UINT16_MAX &&
+ ipv4_mask->hdr.packet_id == UINT16_MAX &&
+ ipv4_mask->hdr.fragment_offset == UINT16_MAX) {
+ /* all IPv4 fragment packets have the same
+ * ethertype: if the spec matches any valid
+ * packet id, set ethertype into input set.
+ */
+ input_set |= IAVF_INSET_ETHERTYPE;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
+
+ /* add dummy header for IPv4 Fragment */
+ iavf_fdir_add_fragment_hdr(hdrs, layer);
+ } else if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 mask.");
+ return -rte_errno;
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
@@ -707,63 +787,109 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
ipv6_spec = item->spec;
ipv6_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
- if (ipv6_spec && ipv6_mask) {
- if (ipv6_mask->hdr.payload_len) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv6 mask");
- return -rte_errno;
- }
+ if (!(ipv6_spec && ipv6_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if ((ipv6_mask->hdr.vtc_flow &
- rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
- == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
- input_set |= IAVF_INSET_IPV6_TC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
- }
- if (ipv6_mask->hdr.proto == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_NEXT_HDR;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
- }
- if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
- }
- if (!memcmp(ipv6_mask->hdr.src_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.src_addr))) {
- input_set |= IAVF_INSET_IPV6_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
- }
- if (!memcmp(ipv6_mask->hdr.dst_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.dst_addr))) {
- input_set |= IAVF_INSET_IPV6_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
- }
+ if (ipv6_mask->hdr.payload_len) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv6 mask");
+ return -rte_errno;
+ }
- if (tun_inner) {
- input_set &= ~IAVF_PROT_IPV6_OUTER;
- input_set |= IAVF_PROT_IPV6_INNER;
- }
+ if ((ipv6_mask->hdr.vtc_flow &
+ rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
+ == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
+ input_set |= IAVF_INSET_IPV6_TC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ TC);
+ }
- rte_memcpy(hdr->buffer,
- &ipv6_spec->hdr,
- sizeof(ipv6_spec->hdr));
+ if (ipv6_mask->hdr.proto == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_NEXT_HDR;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ PROT);
+ }
+
+ if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ HOP_LIMIT);
+ }
+
+ if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.src_addr))) {
+ input_set |= IAVF_INSET_IPV6_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ SRC);
+ }
+ if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.dst_addr))) {
+ input_set |= IAVF_INSET_IPV6_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ DST);
+ }
+
+ rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
+ sizeof(ipv6_spec->hdr));
+
+ hdrs->count = ++layer;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+ ipv6_frag_spec = item->spec;
+ ipv6_frag_last = item->last;
+ ipv6_frag_mask = item->mask;
+ next_type = (item + 1)->type;
+
+ hdr = &hdrs->proto_hdr[layer];
+
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6_EH_FRAG);
+
+ if (!(ipv6_frag_spec && ipv6_frag_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
+
+ /* fragmented IPv6 rules only support matching any
+ * packet id:
+ * spec is 0, last is 0xffffffff, mask is 0xffffffff
+ */
+ if (ipv6_frag_last && ipv6_frag_spec->hdr.id == 0 &&
+ ipv6_frag_last->hdr.id == UINT32_MAX &&
+ ipv6_frag_mask->hdr.id == UINT32_MAX &&
+ ipv6_frag_mask->hdr.frag_data == UINT16_MAX) {
+ /* all IPv6 fragment packets have the same
+ * ethertype: if the spec matches any valid
+ * packet id, set ethertype into input set.
+ */
+ input_set |= IAVF_INSET_ETHERTYPE;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
+
+ rte_memcpy(hdr->buffer, &ipv6_frag_spec->hdr,
+ sizeof(ipv6_frag_spec->hdr));
+ } else if (ipv6_frag_mask->hdr.id == UINT32_MAX) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv6 mask.");
+ return -rte_errno;
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_UDP:
udp_spec = item->spec;
udp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, UDP);
@@ -800,14 +926,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(udp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
tcp_spec = item->spec;
tcp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, TCP);
@@ -849,14 +975,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(tcp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_SCTP:
sctp_spec = item->spec;
sctp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, SCTP);
@@ -887,14 +1013,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(sctp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_GTPU:
gtp_spec = item->spec;
gtp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
@@ -919,14 +1045,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
tun_inner = 1;
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_GTP_PSC:
gtp_psc_spec = item->spec;
gtp_psc_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
if (!gtp_psc_spec)
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_EH);
@@ -947,14 +1073,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*gtp_psc_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_L2TPV3OIP:
l2tpv3oip_spec = item->spec;
l2tpv3oip_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, L2TPV3);
@@ -968,14 +1094,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*l2tpv3oip_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_ESP:
esp_spec = item->spec;
esp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ESP);
@@ -989,14 +1115,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(esp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_AH:
ah_spec = item->spec;
ah_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, AH);
@@ -1010,14 +1136,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*ah_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_PFCP:
pfcp_spec = item->spec;
pfcp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, PFCP);
@@ -1031,7 +1157,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*pfcp_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_ECPRI:
@@ -1040,7 +1166,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
ecpri_common.u32 = rte_be_to_cpu_32(ecpri_spec->hdr.common.u32);
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ECPRI);
@@ -1056,7 +1182,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*ecpri_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_VOID:
@@ -1077,7 +1203,9 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
return -rte_errno;
}
- if (!iavf_fdir_refine_input_set(input_set, input_set_mask, filter)) {
+ if (!iavf_fdir_refine_input_set(input_set,
+ input_set_mask | IAVF_INSET_ETHERTYPE,
+ filter)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM_SPEC, pattern,
"Invalid input set");
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 32932557ca..e19da15518 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -61,6 +61,7 @@
#define IAVF_PFCP_S_FIELD (1ULL << 44)
#define IAVF_PFCP_SEID (1ULL << 43)
#define IAVF_ECPRI_PC_RTC_ID (1ULL << 42)
+#define IAVF_IP_PKID (1ULL << 41)
/* input set */
@@ -84,6 +85,8 @@
(IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
#define IAVF_INSET_IPV4_TTL \
(IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
+#define IAVF_INSET_IPV4_ID \
+ (IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_IPV6_SRC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
#define IAVF_INSET_IPV6_DST \
@@ -94,6 +97,8 @@
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
#define IAVF_INSET_IPV6_TC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
+#define IAVF_INSET_IPV6_ID \
+ (IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_TUN_IPV4_SRC \
(IAVF_PROT_IPV4_INNER | IAVF_IP_SRC)
--
2.20.1
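[Editor's note] The "any packet id" convention this FDIR patch accepts for fragmented IPv4 (spec 0, last 0xffff, mask 0xffff on packet_id, with fragment_offset fully masked) can be condensed into a small predicate. The sketch below is illustrative only: the struct is a stub standing in for rte_ipv4_hdr, and the function name is invented.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stub for the two rte_ipv4_hdr fields the check looks at */
struct ipv4_hdr_stub {
	uint16_t packet_id;
	uint16_t fragment_offset;
};

/* True when the rule asks to match any packet id of a fragmented
 * IPv4 packet, per the spec/last/mask convention in the patch. */
bool match_any_frag_pkid(const struct ipv4_hdr_stub *spec,
			 const struct ipv4_hdr_stub *last,
			 const struct ipv4_hdr_stub *mask)
{
	return last && spec->packet_id == 0 &&
	       last->packet_id == UINT16_MAX &&
	       mask->packet_id == UINT16_MAX &&
	       mask->fragment_offset == UINT16_MAX;
}
```

When the predicate holds, the patch adds the ethertype to the input set and inserts the dummy fragment header; a fully-masked packet_id without a matching `last` range is rejected as an invalid mask.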
* Re: [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
@ 2021-04-12 8:45 ` Xu, Ting
2021-04-13 1:57 ` Guo, Jia
0 siblings, 1 reply; 36+ messages in thread
From: Xu, Ting @ 2021-04-12 8:45 UTC (permalink / raw)
To: Guo, Jia, orika, Zhang, Qi Z, Xing, Beilei, Li, Xiaoyun, Wu,
Jingjing, Guo, Junfeng
Cc: dev
Hi, Jeff
Best Regards,
Xu Ting
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Sunday, April 11, 2021 2:02 PM
> To: orika@nvidia.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Jia <jia.guo@intel.com>
> Subject: [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
>
> New FDIR parsing are added to handle the fragmented IPv4/IPv6 packet.
>
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> ---
> drivers/net/iavf/iavf_fdir.c | 376 ++++++++++++++++++---------
> drivers/net/iavf/iavf_generic_flow.h | 5 +
> 2 files changed, 257 insertions(+), 124 deletions(-)
>
> diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c index
> 62f032985a..64c169f8c4 100644
> --- a/drivers/net/iavf/iavf_fdir.c
> +++ b/drivers/net/iavf/iavf_fdir.c
> @@ -34,7 +34,7 @@
> #define IAVF_FDIR_INSET_ETH_IPV4 (\
> IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
> IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \
> - IAVF_INSET_IPV4_TTL)
> + IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
>
Skip...
> + if (ipv4_mask->hdr.version_ihl ||
> + ipv4_mask->hdr.total_length ||
> + ipv4_mask->hdr.hdr_checksum) {
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM,
> + item, "Invalid IPv4 mask.");
> + return -rte_errno;
> + }
>
> - if (tun_inner) {
> - input_set &= ~IAVF_PROT_IPV4_OUTER;
> - input_set |= IAVF_PROT_IPV4_INNER;
> - }
This "tun_inner" part was newly added and is needed for GTPU inner matching; it cannot be deleted.
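[Editor's note] For context on the reviewer's objection, the deleted block reclassifies IPv4 match fields from outer to inner once a tunnel header (e.g. GTPU) has been parsed. A sketch with made-up bit values follows; the real IAVF_PROT_IPV4_* masks live in iavf_generic_flow.h.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit values only; not the real iavf input-set masks */
#define PROT_IPV4_OUTER (1ULL << 0)
#define PROT_IPV4_INNER (1ULL << 1)
#define IP_SRC          (1ULL << 2)

/* After a tunnel header is seen (tun_inner set), IPv4 bits collected
 * so far must describe the inner header, not the outer one. */
uint64_t reclassify_inner(uint64_t input_set, int tun_inner)
{
	if (tun_inner) {
		input_set &= ~PROT_IPV4_OUTER;
		input_set |= PROT_IPV4_INNER;
	}
	return input_set;
}
```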
> + if (ipv4_last &&
> + (ipv4_last->hdr.version_ihl ||
> + ipv4_last->hdr.type_of_service ||
> + ipv4_last->hdr.time_to_live ||
> + ipv4_last->hdr.total_length |
> + ipv4_last->hdr.next_proto_id ||
> + ipv4_last->hdr.hdr_checksum ||
> + ipv4_last->hdr.src_addr ||
> + ipv4_last->hdr.dst_addr)) {
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM,
> + item, "Invalid IPv4 last.");
> + return -rte_errno;
> + }
>
> - rte_memcpy(hdr->buffer,
> - &ipv4_spec->hdr,
> - sizeof(ipv4_spec->hdr));
> + if (ipv4_mask->hdr.type_of_service ==
> + UINT8_MAX) {
> + input_set |= IAVF_INSET_IPV4_TOS;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV4,
> + DSCP);
> + }
> +
> + if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
> + input_set |= IAVF_INSET_IPV4_PROTO;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV4,
> + PROT);
> + }
> +
> + if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
> + input_set |= IAVF_INSET_IPV4_TTL;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV4,
> + TTL);
> + }
> +
> + if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
> + input_set |= IAVF_INSET_IPV4_SRC;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV4,
> + SRC);
> + }
> +
> + if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
> + input_set |= IAVF_INSET_IPV4_DST;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV4,
> + DST);
> + }
> +
> + rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
> + sizeof(ipv4_spec->hdr));
> +
> + hdrs->count = ++layer;
> +
> + /* only support matching any packet id for fragmented
> + * IPv4; "any packet_id" means:
> + * spec is 0, last is 0xffff, mask is 0xffff
> + */
> + if (ipv4_last && ipv4_spec->hdr.packet_id == 0 &&
> + ipv4_last->hdr.packet_id == UINT16_MAX &&
> + ipv4_mask->hdr.packet_id == UINT16_MAX &&
> + ipv4_mask->hdr.fragment_offset == UINT16_MAX) {
> + /* all IPv4 fragment packets have the same
> + * ethertype; if the spec matches any valid
> + * packet id, set ethertype into the input set.
> + */
> + input_set |= IAVF_INSET_ETHERTYPE;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
> ETH,
> + ETHERTYPE);
> +
> + /* add dummy header for IPv4 Fragment */
> + iavf_fdir_add_fragment_hdr(hdrs, layer);
> + } else if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
> + rte_flow_error_set(error, EINVAL,
> +
> RTE_FLOW_ERROR_TYPE_ITEM,
> + item, "Invalid IPv4 mask.");
> + return -rte_errno;
> }
>
> - filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
> break;
>
> case RTE_FLOW_ITEM_TYPE_IPV6:
> @@ -707,63 +787,109 @@ iavf_fdir_parse_pattern(__rte_unused struct
> iavf_adapter *ad,
> ipv6_spec = item->spec;
> ipv6_mask = item->mask;
>
> - hdr = &filter-
> >add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
> + hdr = &hdrs->proto_hdr[layer];
>
> VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
>
> - if (ipv6_spec && ipv6_mask) {
> - if (ipv6_mask->hdr.payload_len) {
> - rte_flow_error_set(error, EINVAL,
> -
> RTE_FLOW_ERROR_TYPE_ITEM,
> - item, "Invalid IPv6 mask");
> - return -rte_errno;
> - }
> + if (!(ipv6_spec && ipv6_mask)) {
> + hdrs->count = ++layer;
> + break;
> + }
>
> - if ((ipv6_mask->hdr.vtc_flow &
> -
> rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
> - ==
> rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
> - input_set |= IAVF_INSET_IPV6_TC;
> -
> VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
> - }
> - if (ipv6_mask->hdr.proto == UINT8_MAX) {
> - input_set |=
> IAVF_INSET_IPV6_NEXT_HDR;
> -
> VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
> - }
> - if (ipv6_mask->hdr.hop_limits == UINT8_MAX)
> {
> - input_set |=
> IAVF_INSET_IPV6_HOP_LIMIT;
> -
> VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
> - }
> - if (!memcmp(ipv6_mask->hdr.src_addr,
> - ipv6_addr_mask,
> - RTE_DIM(ipv6_mask->hdr.src_addr))) {
> - input_set |= IAVF_INSET_IPV6_SRC;
> -
> VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
> - }
> - if (!memcmp(ipv6_mask->hdr.dst_addr,
> - ipv6_addr_mask,
> - RTE_DIM(ipv6_mask->hdr.dst_addr)))
> {
> - input_set |= IAVF_INSET_IPV6_DST;
> -
> VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
> - }
> + if (ipv6_mask->hdr.payload_len) {
> + rte_flow_error_set(error, EINVAL,
> +
> RTE_FLOW_ERROR_TYPE_ITEM,
> + item, "Invalid IPv6 mask");
> + return -rte_errno;
> + }
>
> - if (tun_inner) {
> - input_set &=
> ~IAVF_PROT_IPV6_OUTER;
> - input_set |= IAVF_PROT_IPV6_INNER;
> - }
The same applies as for IPv4: the "tun_inner" block cannot be deleted.
> + if ((ipv6_mask->hdr.vtc_flow &
> + rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
> + == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
> + input_set |= IAVF_INSET_IPV6_TC;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV6,
> + TC);
> + }
>
> - rte_memcpy(hdr->buffer,
> - &ipv6_spec->hdr,
> - sizeof(ipv6_spec->hdr));
> + if (ipv6_mask->hdr.proto == UINT8_MAX) {
> + input_set |= IAVF_INSET_IPV6_NEXT_HDR;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV6,
> + PROT);
> + }
> +
> + if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
> + input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV6,
> + HOP_LIMIT);
> + }
> +
> + if (!memcmp(ipv6_mask->hdr.src_addr,
> ipv6_addr_mask,
> + RTE_DIM(ipv6_mask->hdr.src_addr))) {
> + input_set |= IAVF_INSET_IPV6_SRC;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV6,
> + SRC);
> + }
> + if (!memcmp(ipv6_mask->hdr.dst_addr,
> ipv6_addr_mask,
> + RTE_DIM(ipv6_mask->hdr.dst_addr))) {
> + input_set |= IAVF_INSET_IPV6_DST;
> + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> IPV6,
> + DST);
> + }
> +
> + rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
> + sizeof(ipv6_spec->hdr));
> +
> + hdrs->count = ++layer;
> + break;
> +
Skip...
> @@ -84,6 +85,8 @@
> (IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
> #define IAVF_INSET_IPV4_TTL \
> (IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
> +#define IAVF_INSET_IPV4_ID \
> + (IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
> #define IAVF_INSET_IPV6_SRC \
> (IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
> #define IAVF_INSET_IPV6_DST \
> @@ -94,6 +97,8 @@
> (IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
> #define IAVF_INSET_IPV6_TC \
> (IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
> +#define IAVF_INSET_IPV6_ID \
> + (IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
>
> #define IAVF_INSET_TUN_IPV4_SRC \
> (IAVF_PROT_IPV4_INNER | IAVF_IP_SRC)
> --
> 2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
2021-04-12 8:45 ` Xu, Ting
@ 2021-04-13 1:57 ` Guo, Jia
0 siblings, 0 replies; 36+ messages in thread
From: Guo, Jia @ 2021-04-13 1:57 UTC (permalink / raw)
To: Xu, Ting, orika, Zhang, Qi Z, Xing, Beilei, Li, Xiaoyun, Wu,
Jingjing, Guo, Junfeng
Cc: dev
Hi, Ting
> -----Original Message-----
> From: Xu, Ting <ting.xu@intel.com>
> Sent: Monday, April 12, 2021 4:45 PM
> To: Guo, Jia <jia.guo@intel.com>; orika@nvidia.com; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Guo, Junfeng
> <junfeng.guo@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
>
> Hi, Jeff
>
> Best Regards,
> Xu Ting
>
> > -----Original Message-----
> > From: Guo, Jia <jia.guo@intel.com>
> > Sent: Sunday, April 11, 2021 2:02 PM
> > To: orika@nvidia.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> > <beilei.xing@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > Jingjing <jingjing.wu@intel.com>
> > Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Jia
> > <jia.guo@intel.com>
> > Subject: [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
> >
> > New FDIR parsing is added to handle fragmented IPv4/IPv6 packets.
> >
> > Signed-off-by: Ting Xu <ting.xu@intel.com>
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > ---
> > drivers/net/iavf/iavf_fdir.c | 376 ++++++++++++++++++---------
> > drivers/net/iavf/iavf_generic_flow.h | 5 +
> > 2 files changed, 257 insertions(+), 124 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf_fdir.c
> > b/drivers/net/iavf/iavf_fdir.c index
> > 62f032985a..64c169f8c4 100644
> > --- a/drivers/net/iavf/iavf_fdir.c
> > +++ b/drivers/net/iavf/iavf_fdir.c
> > @@ -34,7 +34,7 @@
> > #define IAVF_FDIR_INSET_ETH_IPV4 (\
> > IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
> IAVF_INSET_IPV4_PROTO
> > | IAVF_INSET_IPV4_TOS | \
> > -IAVF_INSET_IPV4_TTL)
> > +IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
> >
>
> Skip...
>
> > +if (ipv4_mask->hdr.version_ihl ||
> > + ipv4_mask->hdr.total_length ||
> > + ipv4_mask->hdr.hdr_checksum) {
> > +rte_flow_error_set(error, EINVAL,
> > +
> > RTE_FLOW_ERROR_TYPE_ITEM,
> > + item, "Invalid IPv4 mask.");
> > +return -rte_errno;
> > +}
> >
> > -if (tun_inner) {
> > -input_set &=
> > ~IAVF_PROT_IPV4_OUTER;
> > -input_set |= IAVF_PROT_IPV4_INNER;
> > -}
>
> This "tun_inner" part is newly added and needed for GTPU inner packets; it
> cannot be deleted.
>
Oh, absolutely, it should not be deleted; I will correct it in the coming version. Thanks.
> > +if (ipv4_last &&
> > + (ipv4_last->hdr.version_ihl ||
> > + ipv4_last->hdr.type_of_service ||
> > + ipv4_last->hdr.time_to_live ||
> > + ipv4_last->hdr.total_length ||
> > + ipv4_last->hdr.next_proto_id ||
> > + ipv4_last->hdr.hdr_checksum ||
> > + ipv4_last->hdr.src_addr ||
> > + ipv4_last->hdr.dst_addr)) {
> > +rte_flow_error_set(error, EINVAL,
> > +
> > RTE_FLOW_ERROR_TYPE_ITEM,
> > + item, "Invalid IPv4 last.");
> > +return -rte_errno;
> > +}
> >
> > -rte_memcpy(hdr->buffer,
> > -&ipv4_spec->hdr,
> > -sizeof(ipv4_spec->hdr));
> > + if (ipv4_mask->hdr.type_of_service == UINT8_MAX) {
> > + input_set |= IAVF_INSET_IPV4_TOS;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DSCP);
> > + }
> > +
> > + if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
> > + input_set |= IAVF_INSET_IPV4_PROTO;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
> > + }
> > +
> > + if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
> > + input_set |= IAVF_INSET_IPV4_TTL;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, TTL);
> > + }
> > +
> > + if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
> > + input_set |= IAVF_INSET_IPV4_SRC;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
> > + }
> > +
> > + if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
> > + input_set |= IAVF_INSET_IPV4_DST;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
> > + }
> > +
> > +rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
> > + sizeof(ipv4_spec->hdr));
> > +
> > +hdrs->count = ++layer;
> > +
> > + /* only support matching any packet id for fragmented
> > + * IPv4; "any packet_id" means:
> > + * spec is 0, last is 0xffff, mask is 0xffff
> > + */
> > + if (ipv4_last && ipv4_spec->hdr.packet_id == 0 &&
> > + ipv4_last->hdr.packet_id == UINT16_MAX &&
> > + ipv4_mask->hdr.packet_id == UINT16_MAX &&
> > + ipv4_mask->hdr.fragment_offset == UINT16_MAX) {
> > + /* all IPv4 fragment packets have the same
> > + * ethertype; if the spec matches any valid
> > + * packet id, set ethertype into the input set.
> > + */
> > + input_set |= IAVF_INSET_ETHERTYPE;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH, ETHERTYPE);
> > +
> > + /* add dummy header for IPv4 fragment */
> > + iavf_fdir_add_fragment_hdr(hdrs, layer);
> > + } else if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
> > + rte_flow_error_set(error, EINVAL,
> > + RTE_FLOW_ERROR_TYPE_ITEM,
> > + item, "Invalid IPv4 mask.");
> > + return -rte_errno;
> > }
> >
> > -filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
> > break;
> >
> > case RTE_FLOW_ITEM_TYPE_IPV6:
> > @@ -707,63 +787,109 @@ iavf_fdir_parse_pattern(__rte_unused struct
> > iavf_adapter *ad,
> > ipv6_spec = item->spec;
> > ipv6_mask = item->mask;
> >
> > -hdr = &filter-
> > >add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
> > +hdr = &hdrs->proto_hdr[layer];
> >
> > VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
> >
> > -if (ipv6_spec && ipv6_mask) {
> > -if (ipv6_mask->hdr.payload_len) {
> > -rte_flow_error_set(error, EINVAL,
> > -
> > RTE_FLOW_ERROR_TYPE_ITEM,
> > -item, "Invalid IPv6 mask");
> > -return -rte_errno;
> > -}
> > +if (!(ipv6_spec && ipv6_mask)) {
> > +hdrs->count = ++layer;
> > +break;
> > +}
> >
> > - if ((ipv6_mask->hdr.vtc_flow &
> > - rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
> > - == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
> > - input_set |= IAVF_INSET_IPV6_TC;
> > - VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
> > - }
> > - if (ipv6_mask->hdr.proto == UINT8_MAX) {
> > - input_set |= IAVF_INSET_IPV6_NEXT_HDR;
> > - VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
> > - }
> > - if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
> > - input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
> > - VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
> > - }
> > - if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
> > - RTE_DIM(ipv6_mask->hdr.src_addr))) {
> > - input_set |= IAVF_INSET_IPV6_SRC;
> > - VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
> > - }
> > - if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
> > - RTE_DIM(ipv6_mask->hdr.dst_addr))) {
> > - input_set |= IAVF_INSET_IPV6_DST;
> > - VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
> > - }
> > +if (ipv6_mask->hdr.payload_len) {
> > +rte_flow_error_set(error, EINVAL,
> > +
> > RTE_FLOW_ERROR_TYPE_ITEM,
> > + item, "Invalid IPv6 mask");
> > +return -rte_errno;
> > +}
> >
> > -if (tun_inner) {
> > -input_set &=
> > ~IAVF_PROT_IPV6_OUTER;
> > -input_set |= IAVF_PROT_IPV6_INNER;
> > -}
>
> The same applies as for IPv4: the "tun_inner" block cannot be deleted.
>
> > + if ((ipv6_mask->hdr.vtc_flow &
> > + rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
> > + == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
> > + input_set |= IAVF_INSET_IPV6_TC;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
> > + }
> >
> > - rte_memcpy(hdr->buffer,
> > - &ipv6_spec->hdr,
> > - sizeof(ipv6_spec->hdr));
> > + if (ipv6_mask->hdr.proto == UINT8_MAX) {
> > + input_set |= IAVF_INSET_IPV6_NEXT_HDR;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
> > + }
> > +
> > + if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
> > + input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
> > + }
> > +
> > + if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
> > + RTE_DIM(ipv6_mask->hdr.src_addr))) {
> > + input_set |= IAVF_INSET_IPV6_SRC;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
> > + }
> > + if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
> > + RTE_DIM(ipv6_mask->hdr.dst_addr))) {
> > + input_set |= IAVF_INSET_IPV6_DST;
> > + VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
> > + }
> > +
> > +rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
> > + sizeof(ipv6_spec->hdr));
> > +
> > +hdrs->count = ++layer;
> > +break;
> > +
>
> Skip...
>
> > @@ -84,6 +85,8 @@
> > (IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
> > #define IAVF_INSET_IPV4_TTL \
> > (IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
> > +#define IAVF_INSET_IPV4_ID \
> > +(IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
> > #define IAVF_INSET_IPV6_SRC \
> > (IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
> > #define IAVF_INSET_IPV6_DST \
> > @@ -94,6 +97,8 @@
> > (IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
> > #define IAVF_INSET_IPV6_TC \
> > (IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
> > +#define IAVF_INSET_IPV6_ID \
> > +(IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
> >
> > #define IAVF_INSET_TUN_IPV4_SRC \
> > (IAVF_PROT_IPV4_INNER | IAVF_IP_SRC)
> > --
> > 2.20.1
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] support flow for IP fragment in ICE
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
` (4 preceding siblings ...)
2021-04-11 6:01 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
@ 2021-04-11 6:07 ` Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 0/4] support flow for IP fragment in IAVF Jeff Guo
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Jeff Guo
7 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:07 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
Support RSS hash and FDIR for IP fragment packets in ICE PMD.
v3:
rebase code and fix some parsing issues.
v2:
add some input check
Jeff Guo (3):
net/ice/base: support IP fragment RSS and FDIR
net/ice: support RSS hash for IP fragment
net/ice: support FDIR for IP fragment packet
drivers/net/ice/base/ice_fdir.c | 50 ++++++++++++-
drivers/net/ice/base/ice_fdir.h | 22 +++++-
drivers/net/ice/base/ice_flow.c | 50 ++++++++++++-
drivers/net/ice/base/ice_flow.h | 5 +-
drivers/net/ice/base/ice_type.h | 1 +
drivers/net/ice/ice_fdir_filter.c | 116 ++++++++++++++++++++++++++---
drivers/net/ice/ice_generic_flow.c | 22 ++++++
drivers/net/ice/ice_generic_flow.h | 6 ++
drivers/net/ice/ice_hash.c | 48 ++++++++++--
9 files changed, 293 insertions(+), 27 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v3 0/4] support flow for IP fragment in IAVF
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
` (5 preceding siblings ...)
2021-04-11 6:07 ` [dpdk-dev] [PATCH v3 0/3] support flow for IP fragment in ICE Jeff Guo
@ 2021-04-11 6:59 ` Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
` (3 more replies)
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Jeff Guo
7 siblings, 4 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:59 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
support flow for IP fragment in IAVF
v3:
rebase code and fix some parsing issues
v2:
refine some input check
Jeff Guo (4):
app/testpmd: add packet id for IP fragment
common/iavf: add proto header for IP fragment
net/iavf: support RSS hash for IP fragment
net/iavf: support FDIR for IP fragment packet
app/test-pmd/cmdline_flow.c | 21 +-
drivers/common/iavf/virtchnl.h | 7 +
drivers/net/iavf/iavf_fdir.c | 376 ++++++++++++++++++---------
drivers/net/iavf/iavf_generic_flow.c | 24 ++
drivers/net/iavf/iavf_generic_flow.h | 8 +
drivers/net/iavf/iavf_hash.c | 83 +++++-
6 files changed, 384 insertions(+), 135 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 0/4] support flow for IP fragment in IAVF Jeff Guo
@ 2021-04-11 6:59 ` Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 2/4] common/iavf: add proto header " Jeff Guo
` (2 subsequent siblings)
3 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:59 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
Add the new items to support the flow configuration for IP fragment
packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..46ae342b85 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -166,6 +166,7 @@ enum index {
ITEM_VLAN_HAS_MORE_VLAN,
ITEM_IPV4,
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -236,6 +237,7 @@ enum index {
ITEM_IPV6_FRAG_EXT,
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_ICMP6,
ITEM_ICMP6_TYPE,
ITEM_ICMP6_CODE,
@@ -1028,6 +1030,7 @@ static const enum index item_vlan[] = {
static const enum index item_ipv4[] = {
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -1164,6 +1167,7 @@ static const enum index item_ipv6_ext[] = {
static const enum index item_ipv6_frag_ext[] = {
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_NEXT,
ZERO,
};
@@ -2466,6 +2470,13 @@ static const struct token token_list[] = {
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
hdr.type_of_service)),
},
+ [ITEM_IPV4_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+ hdr.packet_id)),
+ },
[ITEM_IPV4_FRAGMENT_OFFSET] = {
.name = "fragment_offset",
.help = "fragmentation flags and fragment offset",
@@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
},
[ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
.name = "frag_data",
- .help = "Fragment flags and offset",
+ .help = "fragment flags and offset",
.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
hdr.frag_data)),
},
+ [ITEM_IPV6_FRAG_EXT_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
+ hdr.id)),
+ },
[ITEM_ICMP6] = {
.name = "icmp6",
.help = "match any ICMPv6 header",
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
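With the packet_id tokens added by this patch, a testpmd rule matching any fragmented IPv4 packet might look as follows (illustrative only; the port, queue, and fragment_offset values are arbitrary choices for the example):

```
testpmd> flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x8 fragment_offset last 0x2000 fragment_offset mask 0xffff / end actions queue index 1 / end
```

Here "packet_id spec 0 / last 0xffff / mask 0xffff" expresses "any packet id", the spec/last/mask combination that the iavf FDIR parser in patch 4/4 treats as the fragment rule.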
* [dpdk-dev] [PATCH v3 2/4] common/iavf: add proto header for IP fragment
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 0/4] support flow for IP fragment in IAVF Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
@ 2021-04-11 6:59 ` Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 3/4] net/iavf: support RSS hash " Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
3 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:59 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
Add new virtchnl protocol header type and fields for IP fragment packets
to support RSS hash and FDIR.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/common/iavf/virtchnl.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 6b99e170f0..e3eb767d66 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1415,7 +1415,9 @@ enum virtchnl_proto_hdr_type {
VIRTCHNL_PROTO_HDR_S_VLAN,
VIRTCHNL_PROTO_HDR_C_VLAN,
VIRTCHNL_PROTO_HDR_IPV4,
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG,
VIRTCHNL_PROTO_HDR_IPV6,
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
VIRTCHNL_PROTO_HDR_TCP,
VIRTCHNL_PROTO_HDR_UDP,
VIRTCHNL_PROTO_HDR_SCTP,
@@ -1452,6 +1454,8 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV4_DSCP,
VIRTCHNL_PROTO_HDR_IPV4_TTL,
VIRTCHNL_PROTO_HDR_IPV4_PROT,
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4_FRAG),
/* IPV6 */
VIRTCHNL_PROTO_HDR_IPV6_SRC =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
@@ -1472,6 +1476,9 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+ /* IPv6 Extension Header Fragment */
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
/* TCP */
VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v3 3/4] net/iavf: support RSS hash for IP fragment
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 0/4] support flow for IP fragment in IAVF Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 2/4] common/iavf: add proto header " Jeff Guo
@ 2021-04-11 6:59 ` Jeff Guo
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
3 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:59 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
New pattern and RSS hash flow parsing are added to handle fragmented
IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_generic_flow.c | 24 ++++++++
drivers/net/iavf/iavf_generic_flow.h | 3 +
drivers/net/iavf/iavf_hash.c | 83 ++++++++++++++++++++++++----
3 files changed, 100 insertions(+), 10 deletions(-)
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 8635ff83ca..242bb4abc5 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -219,6 +219,30 @@ enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[] = {
RTE_FLOW_ITEM_TYPE_END,
};
+enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV6,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 005eeb3553..32932557ca 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -203,6 +203,9 @@ extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv4_icmp[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[];
+extern enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_udp[];
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index d8d22f8009..5d3d62839b 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -112,6 +112,10 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_DST), {BUFF_NOUSED} }
+#define proto_hdr_ipv6_frag { \
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG, \
+ FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID), {BUFF_NOUSED} }
+
#define proto_hdr_ipv6_with_prot { \
VIRTCHNL_PROTO_HDR_IPV6, \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
@@ -190,6 +194,12 @@ struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
};
+struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
+ TUNNEL_LEVEL_OUTER, 5,
+ {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+ proto_hdr_ipv6, proto_hdr_ipv6_frag}
+};
+
struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
TUNNEL_LEVEL_OUTER, 5,
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
@@ -303,7 +313,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
ETH_RSS_NONFRAG_IPV4_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
@@ -312,6 +323,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 outer */
#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
+ ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_NONFRAG_IPV6_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
@@ -330,6 +343,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
+ ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
@@ -415,10 +430,12 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
/* IPv6 */
{iavf_pattern_eth_ipv6, IAVF_RSS_TYPE_OUTER_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_ipv6_udp, IAVF_RSS_TYPE_OUTER_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv6_tcp, IAVF_RSS_TYPE_OUTER_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_ipv6_sctp, IAVF_RSS_TYPE_OUTER_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
{iavf_pattern_eth_vlan_ipv6, IAVF_RSS_TYPE_VLAN_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_vlan_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
@@ -626,6 +643,29 @@ do { \
REFINE_PROTO_FLD(ADD, fld_2); \
} while (0)
+static void
+iavf_hash_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+ struct virtchnl_proto_hdr *hdr1;
+ struct virtchnl_proto_hdr *hdr2;
+ int i;
+
+ if (layer < 0 || layer > hdrs->count)
+ return;
+
+ /* shift headers layer */
+ for (i = hdrs->count; i >= layer; i--) {
+ hdr1 = &hdrs->proto_hdr[i];
+ hdr2 = &hdrs->proto_hdr[i - 1];
+ *hdr1 = *hdr2;
+ }
+
+ /* add the dummy fragment header */
+ hdr1 = &hdrs->proto_hdr[layer];
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+ hdrs->count = ++layer;
+}
+
/* refine proto hdrs base on l2, l3, l4 rss type */
static void
iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
@@ -647,17 +687,19 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 |
+ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
ETH_RSS_NONFRAG_IPV4_UDP |
ETH_RSS_NONFRAG_IPV4_TCP |
ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & ETH_RSS_FRAG_IPV4) {
+ iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
+ } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (ETH_RSS_L4_SRC_ONLY |
+ ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -665,9 +707,21 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr->field_selector = 0;
}
break;
+ case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
+ if (rss_type &
+ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
+ ETH_RSS_NONFRAG_IPV4_UDP |
+ ETH_RSS_NONFRAG_IPV4_TCP |
+ ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & ETH_RSS_FRAG_IPV4)
+ REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
+ } else {
+ hdr->field_selector = 0;
+ }
+ break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 |
+ (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
ETH_RSS_NONFRAG_IPV6_UDP |
ETH_RSS_NONFRAG_IPV6_TCP |
ETH_RSS_NONFRAG_IPV6_SCTP)) {
@@ -676,8 +730,8 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (ETH_RSS_L4_SRC_ONLY |
+ ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -692,6 +746,13 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
REPALCE_PROTO_FLD(IPV6_DST,
IPV6_PREFIX64_DST);
}
+ break;
+ case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
+ if (rss_type & ETH_RSS_FRAG_IPV6)
+ REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
+ else
+ hdr->field_selector = 0;
+
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
@@ -885,8 +946,10 @@ struct rss_attr_type {
ETH_RSS_NONFRAG_IPV6_TCP | \
ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | VALID_RSS_IPV6_L4)
+#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+ VALID_RSS_IPV4_L4)
+#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+ VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
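For reference, the VIRTCHNL_PROTO_HDR_IPV4_FRAG refinement added above can be sketched standalone. The types and flag values below are simplified stand-ins for the virtchnl structures and the rte_ethdev RSS bits (the real REFINE_PROTO_FLD macro also records the field in the header buffer); only the control flow mirrors the patch:

```c
#include <stdint.h>

/* Illustrative stand-ins; the real values come from rte_ethdev.h and
 * virtchnl.h. */
#define RSS_IPV4             (1ULL << 0)
#define RSS_FRAG_IPV4        (1ULL << 1)
#define RSS_NONFRAG_IPV4_UDP (1ULL << 2)
#define FLD_IPV4_FRAG_PKID   (1U << 0)

struct proto_hdr {
	uint32_t field_selector; /* bitmap of fields used for hashing */
};

/* Mirrors the VIRTCHNL_PROTO_HDR_IPV4_FRAG case of
 * iavf_refine_proto_hdrs_l234(): hash on the fragment packet id only when
 * ETH_RSS_FRAG_IPV4 is requested; clear the selector entirely when the
 * RSS type does not cover IPv4 at all. */
static void refine_ipv4_frag_hdr(struct proto_hdr *hdr, uint64_t rss_type)
{
	if (rss_type & (RSS_IPV4 | RSS_FRAG_IPV4 | RSS_NONFRAG_IPV4_UDP)) {
		if (rss_type & RSS_FRAG_IPV4)
			hdr->field_selector |= FLD_IPV4_FRAG_PKID;
	} else {
		hdr->field_selector = 0;
	}
}
```

The IPv6 case in the patch follows the same shape with ETH_RSS_FRAG_IPV6 and the IPV6_EH_FRAG_PKID field.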
* [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 0/4] support flow for IP fragment in IAVF Jeff Guo
` (2 preceding siblings ...)
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 3/4] net/iavf: support RSS hash " Jeff Guo
@ 2021-04-11 6:59 ` Jeff Guo
3 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-11 6:59 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
New FDIR parsing is added to handle fragmented IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_fdir.c | 376 ++++++++++++++++++---------
drivers/net/iavf/iavf_generic_flow.h | 5 +
2 files changed, 257 insertions(+), 124 deletions(-)
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 62f032985a..f6db4f5ac8 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -34,7 +34,7 @@
#define IAVF_FDIR_INSET_ETH_IPV4 (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \
- IAVF_INSET_IPV4_TTL)
+ IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
#define IAVF_FDIR_INSET_ETH_IPV4_UDP (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
@@ -56,6 +56,9 @@
IAVF_INSET_IPV6_NEXT_HDR | IAVF_INSET_IPV6_TC | \
IAVF_INSET_IPV6_HOP_LIMIT)
+#define IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT (\
+ IAVF_INSET_IPV6_ID)
+
#define IAVF_FDIR_INSET_ETH_IPV6_UDP (\
IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \
IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \
@@ -143,6 +146,7 @@ static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
{iavf_pattern_eth_ipv4_tcp, IAVF_FDIR_INSET_ETH_IPV4_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv4_sctp, IAVF_FDIR_INSET_ETH_IPV4_SCTP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6, IAVF_FDIR_INSET_ETH_IPV6, IAVF_INSET_NONE},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_udp, IAVF_FDIR_INSET_ETH_IPV6_UDP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_tcp, IAVF_FDIR_INSET_ETH_IPV6_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_sctp, IAVF_FDIR_INSET_ETH_IPV6_SCTP, IAVF_INSET_NONE},
@@ -543,6 +547,29 @@ iavf_fdir_refine_input_set(const uint64_t input_set,
}
}
+static void
+iavf_fdir_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+ struct virtchnl_proto_hdr *hdr1;
+ struct virtchnl_proto_hdr *hdr2;
+ int i;
+
+ if (layer < 0 || layer > hdrs->count)
+ return;
+
+ /* shift headers layer */
+ for (i = hdrs->count; i >= layer; i--) {
+ hdr1 = &hdrs->proto_hdr[i];
+ hdr2 = &hdrs->proto_hdr[i - 1];
+ *hdr1 = *hdr2;
+ }
+
+ /* add a dummy fragment header */
+ hdr1 = &hdrs->proto_hdr[layer];
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+ hdrs->count = ++layer;
+}
+
static int
iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
const struct rte_flow_item pattern[],
@@ -550,12 +577,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
struct rte_flow_error *error,
struct iavf_fdir_conf *filter)
{
- const struct rte_flow_item *item = pattern;
- enum rte_flow_item_type item_type;
+ struct virtchnl_proto_hdrs *hdrs =
+ &filter->add_fltr.rule_cfg.proto_hdrs;
enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
const struct rte_flow_item_eth *eth_spec, *eth_mask;
- const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+ const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_spec;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_last;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_mask;
const struct rte_flow_item_udp *udp_spec, *udp_mask;
const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
@@ -566,15 +596,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
const struct rte_flow_item_ah *ah_spec, *ah_mask;
const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
const struct rte_flow_item_ecpri *ecpri_spec, *ecpri_mask;
+ const struct rte_flow_item *item = pattern;
+ struct virtchnl_proto_hdr *hdr, *hdr1 = NULL;
struct rte_ecpri_common_hdr ecpri_common;
uint64_t input_set = IAVF_INSET_NONE;
-
+ enum rte_flow_item_type item_type;
enum rte_flow_item_type next_type;
+ uint8_t tun_inner = 0;
uint16_t ether_type;
-
- u8 tun_inner = 0;
int layer = 0;
- struct virtchnl_proto_hdr *hdr;
uint8_t ipv6_addr_mask[16] = {
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
@@ -582,26 +612,28 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
};
for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
+ item_type = item->type;
+
+ if (item->last && !(item_type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+ item_type ==
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT)) {
rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Not support range");
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Not support range");
}
- item_type = item->type;
-
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
eth_spec = item->spec;
eth_mask = item->mask;
next_type = (item + 1)->type;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr1 = &hdrs->proto_hdr[layer];
- VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ETH);
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, ETH);
if (next_type == RTE_FLOW_ITEM_TYPE_END &&
- (!eth_spec || !eth_mask)) {
+ (!eth_spec || !eth_mask)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item, "NULL eth spec/mask.");
@@ -637,69 +669,117 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
}
input_set |= IAVF_INSET_ETHERTYPE;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
- rte_memcpy(hdr->buffer,
- eth_spec, sizeof(struct rte_ether_hdr));
+ rte_memcpy(hdr1->buffer, eth_spec,
+ sizeof(struct rte_ether_hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
l3 = RTE_FLOW_ITEM_TYPE_IPV4;
ipv4_spec = item->spec;
+ ipv4_last = item->last;
ipv4_mask = item->mask;
+ next_type = (item + 1)->type;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4);
- if (ipv4_spec && ipv4_mask) {
- if (ipv4_mask->hdr.version_ihl ||
- ipv4_mask->hdr.total_length ||
- ipv4_mask->hdr.packet_id ||
- ipv4_mask->hdr.fragment_offset ||
- ipv4_mask->hdr.hdr_checksum) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv4 mask.");
- return -rte_errno;
- }
+ if (!(ipv4_spec && ipv4_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if (ipv4_mask->hdr.type_of_service ==
- UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TOS;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DSCP);
- }
- if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_PROTO;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
- }
- if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TTL;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, TTL);
- }
- if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
- }
- if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
- }
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 mask.");
+ return -rte_errno;
+ }
- if (tun_inner) {
- input_set &= ~IAVF_PROT_IPV4_OUTER;
- input_set |= IAVF_PROT_IPV4_INNER;
- }
+ if (ipv4_last &&
+ (ipv4_last->hdr.version_ihl ||
+ ipv4_last->hdr.type_of_service ||
+ ipv4_last->hdr.time_to_live ||
+ ipv4_last->hdr.total_length ||
+ ipv4_last->hdr.next_proto_id ||
+ ipv4_last->hdr.hdr_checksum ||
+ ipv4_last->hdr.src_addr ||
+ ipv4_last->hdr.dst_addr)) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 last.");
+ return -rte_errno;
+ }
- rte_memcpy(hdr->buffer,
- &ipv4_spec->hdr,
- sizeof(ipv4_spec->hdr));
+ if (ipv4_mask->hdr.type_of_service ==
+ UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TOS;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DSCP);
+ }
+
+ if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_PROTO;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ PROT);
+ }
+
+ if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TTL;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ TTL);
+ }
+
+ if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ SRC);
+ }
+
+ if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DST);
+ }
+
+ rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
+ sizeof(ipv4_spec->hdr));
+
+ hdrs->count = ++layer;
+
+ /* only support "any" packet id for fragmented IPv4;
+ * "any" packet_id means:
+ * spec is 0, last is 0xffff, mask is 0xffff
+ */
+ if (ipv4_last && ipv4_spec->hdr.packet_id == 0 &&
+ ipv4_last->hdr.packet_id == UINT16_MAX &&
+ ipv4_mask->hdr.packet_id == UINT16_MAX &&
+ ipv4_mask->hdr.fragment_offset == UINT16_MAX) {
+ /* all IPv4 fragment packets have the same
+ * ethertype; if the spec matches every valid
+ * packet id, set the ethertype into the input set.
+ */
+ input_set |= IAVF_INSET_ETHERTYPE;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
+
+ /* add dummy header for IPv4 Fragment */
+ iavf_fdir_add_fragment_hdr(hdrs, layer);
+ } else if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 mask.");
+ return -rte_errno;
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
@@ -707,63 +787,109 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
ipv6_spec = item->spec;
ipv6_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
- if (ipv6_spec && ipv6_mask) {
- if (ipv6_mask->hdr.payload_len) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv6 mask");
- return -rte_errno;
- }
+ if (!(ipv6_spec && ipv6_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if ((ipv6_mask->hdr.vtc_flow &
- rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
- == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
- input_set |= IAVF_INSET_IPV6_TC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
- }
- if (ipv6_mask->hdr.proto == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_NEXT_HDR;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
- }
- if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
- }
- if (!memcmp(ipv6_mask->hdr.src_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.src_addr))) {
- input_set |= IAVF_INSET_IPV6_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
- }
- if (!memcmp(ipv6_mask->hdr.dst_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.dst_addr))) {
- input_set |= IAVF_INSET_IPV6_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
- }
+ if (ipv6_mask->hdr.payload_len) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv6 mask");
+ return -rte_errno;
+ }
- if (tun_inner) {
- input_set &= ~IAVF_PROT_IPV6_OUTER;
- input_set |= IAVF_PROT_IPV6_INNER;
- }
+ if ((ipv6_mask->hdr.vtc_flow &
+ rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
+ == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
+ input_set |= IAVF_INSET_IPV6_TC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ TC);
+ }
- rte_memcpy(hdr->buffer,
- &ipv6_spec->hdr,
- sizeof(ipv6_spec->hdr));
+ if (ipv6_mask->hdr.proto == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_NEXT_HDR;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ PROT);
+ }
+
+ if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ HOP_LIMIT);
+ }
+
+ if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.src_addr))) {
+ input_set |= IAVF_INSET_IPV6_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ SRC);
+ }
+ if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.dst_addr))) {
+ input_set |= IAVF_INSET_IPV6_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ DST);
+ }
+
+ rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
+ sizeof(ipv6_spec->hdr));
+
+ hdrs->count = ++layer;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+ ipv6_frag_spec = item->spec;
+ ipv6_frag_last = item->last;
+ ipv6_frag_mask = item->mask;
+ next_type = (item + 1)->type;
+
+ hdr = &hdrs->proto_hdr[layer];
+
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6_EH_FRAG);
+
+ if (!(ipv6_frag_spec && ipv6_frag_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
+
+ /* only support "any" packet id for fragmented IPv6;
+ * "any" packet_id means:
+ * spec is 0, last is 0xffffffff, mask is 0xffffffff
+ */
+ if (ipv6_frag_last && ipv6_frag_spec->hdr.id == 0 &&
+ ipv6_frag_last->hdr.id == UINT32_MAX &&
+ ipv6_frag_mask->hdr.id == UINT32_MAX &&
+ ipv6_frag_mask->hdr.frag_data == UINT16_MAX) {
+ /* all IPv6 fragment packets have the same
+ * ethertype; if the spec matches every valid
+ * packet id, set the ethertype into the input set.
+ */
+ input_set |= IAVF_INSET_ETHERTYPE;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
+
+ rte_memcpy(hdr->buffer, &ipv6_frag_spec->hdr,
+ sizeof(ipv6_frag_spec->hdr));
+ } else if (ipv6_frag_mask->hdr.id == UINT32_MAX) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv6 mask.");
+ return -rte_errno;
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_UDP:
udp_spec = item->spec;
udp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, UDP);
@@ -800,14 +926,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(udp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
tcp_spec = item->spec;
tcp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, TCP);
@@ -849,14 +975,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(tcp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_SCTP:
sctp_spec = item->spec;
sctp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, SCTP);
@@ -887,14 +1013,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(sctp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_GTPU:
gtp_spec = item->spec;
gtp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
@@ -919,14 +1045,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
tun_inner = 1;
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_GTP_PSC:
gtp_psc_spec = item->spec;
gtp_psc_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
if (!gtp_psc_spec)
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_EH);
@@ -947,14 +1073,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*gtp_psc_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_L2TPV3OIP:
l2tpv3oip_spec = item->spec;
l2tpv3oip_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, L2TPV3);
@@ -968,14 +1094,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*l2tpv3oip_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_ESP:
esp_spec = item->spec;
esp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ESP);
@@ -989,14 +1115,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(esp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_AH:
ah_spec = item->spec;
ah_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, AH);
@@ -1010,14 +1136,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*ah_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_PFCP:
pfcp_spec = item->spec;
pfcp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, PFCP);
@@ -1031,7 +1157,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*pfcp_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_ECPRI:
@@ -1040,7 +1166,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
ecpri_common.u32 = rte_be_to_cpu_32(ecpri_spec->hdr.common.u32);
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ECPRI);
@@ -1056,7 +1182,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*ecpri_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_VOID:
@@ -1077,7 +1203,9 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
return -rte_errno;
}
- if (!iavf_fdir_refine_input_set(input_set, input_set_mask, filter)) {
+ if (!iavf_fdir_refine_input_set(input_set,
+ input_set_mask | IAVF_INSET_ETHERTYPE,
+ filter)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM_SPEC, pattern,
"Invalid input set");
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 32932557ca..e19da15518 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -61,6 +61,7 @@
#define IAVF_PFCP_S_FIELD (1ULL << 44)
#define IAVF_PFCP_SEID (1ULL << 43)
#define IAVF_ECPRI_PC_RTC_ID (1ULL << 42)
+#define IAVF_IP_PKID (1ULL << 41)
/* input set */
@@ -84,6 +85,8 @@
(IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
#define IAVF_INSET_IPV4_TTL \
(IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
+#define IAVF_INSET_IPV4_ID \
+ (IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_IPV6_SRC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
#define IAVF_INSET_IPV6_DST \
@@ -94,6 +97,8 @@
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
#define IAVF_INSET_IPV6_TC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
+#define IAVF_INSET_IPV6_ID \
+ (IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_TUN_IPV4_SRC \
(IAVF_PROT_IPV4_INNER | IAVF_IP_SRC)
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
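The two pieces of new parsing logic in this patch can be sketched in isolation: the spec/last/mask convention that encodes "match any packet id", and the helper that shifts the protocol header list to make room for a dummy fragment header. The types below are simplified stand-ins for the virtchnl structures, and the sketch assumes, as the driver does, that the insertion point is normally the end of the list:

```c
#include <stdint.h>

#define MAX_PROTO_HDRS 8

enum hdr_type { HDR_NONE, HDR_ETH, HDR_IPV4, HDR_IPV4_FRAG, HDR_UDP };

struct proto_hdrs {
	int count;
	enum hdr_type proto_hdr[MAX_PROTO_HDRS];
};

/* "Any packet id" convention from the IPv4 case: spec 0, last 0xffff,
 * mask 0xffff (the IPv6 case is identical with 32-bit values). */
static int ipv4_pkid_matches_any(uint16_t spec, uint16_t last, uint16_t mask)
{
	return spec == 0 && last == UINT16_MAX && mask == UINT16_MAX;
}

/* Mirrors iavf_fdir_add_fragment_hdr(): shift headers at or after 'layer'
 * down one slot, then drop a dummy IPv4 fragment header into the gap. */
static void add_fragment_hdr(struct proto_hdrs *hdrs, int layer)
{
	int i;

	if (layer < 0 || layer > hdrs->count || hdrs->count >= MAX_PROTO_HDRS)
		return;

	for (i = hdrs->count; i > layer; i--)
		hdrs->proto_hdr[i] = hdrs->proto_hdr[i - 1];

	hdrs->proto_hdr[layer] = HDR_IPV4_FRAG;
	hdrs->count++;
}
```

For example, inserting at layer 2 of an ETH/IPV4/UDP list yields ETH/IPV4/IPV4_FRAG/UDP.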
* [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF
2021-03-24 13:48 ` [dpdk-dev] [PATCH v2 0/4] support flow for IP fragment in IAVF Jeff Guo
` (6 preceding siblings ...)
2021-04-11 6:59 ` [dpdk-dev] [PATCH v3 0/4] support flow for IP fragment in IAVF Jeff Guo
@ 2021-04-13 8:10 ` Jeff Guo
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
` (4 more replies)
7 siblings, 5 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-13 8:10 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
support flow for IP fragment in IAVF
v4:
restore some parts that should not have been deleted
v3:
rebase code and fix some parsing issues
v2:
refine some input checks
Jeff Guo (4):
app/testpmd: add packet id for IP fragment
common/iavf: add proto header for IP fragment
net/iavf: support RSS hash for IP fragment
net/iavf: support FDIR for IP fragment packet
app/test-pmd/cmdline_flow.c | 21 +-
drivers/common/iavf/virtchnl.h | 7 +
drivers/net/iavf/iavf_fdir.c | 386 ++++++++++++++++++---------
drivers/net/iavf/iavf_generic_flow.c | 24 ++
drivers/net/iavf/iavf_generic_flow.h | 8 +
drivers/net/iavf/iavf_hash.c | 83 +++++-
6 files changed, 394 insertions(+), 135 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Jeff Guo
@ 2021-04-13 8:10 ` Jeff Guo
2021-04-19 7:43 ` Jack Min
` (2 more replies)
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 2/4] common/iavf: add proto header " Jeff Guo
` (3 subsequent siblings)
4 siblings, 3 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-13 8:10 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
Add the new items to support the flow configuration for IP fragment
packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..46ae342b85 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -166,6 +166,7 @@ enum index {
ITEM_VLAN_HAS_MORE_VLAN,
ITEM_IPV4,
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -236,6 +237,7 @@ enum index {
ITEM_IPV6_FRAG_EXT,
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_ICMP6,
ITEM_ICMP6_TYPE,
ITEM_ICMP6_CODE,
@@ -1028,6 +1030,7 @@ static const enum index item_vlan[] = {
static const enum index item_ipv4[] = {
ITEM_IPV4_TOS,
+ ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
ITEM_IPV4_PROTO,
@@ -1164,6 +1167,7 @@ static const enum index item_ipv6_ext[] = {
static const enum index item_ipv6_frag_ext[] = {
ITEM_IPV6_FRAG_EXT_NEXT_HDR,
ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+ ITEM_IPV6_FRAG_EXT_ID,
ITEM_NEXT,
ZERO,
};
@@ -2466,6 +2470,13 @@ static const struct token token_list[] = {
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
hdr.type_of_service)),
},
+ [ITEM_IPV4_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+ hdr.packet_id)),
+ },
[ITEM_IPV4_FRAGMENT_OFFSET] = {
.name = "fragment_offset",
.help = "fragmentation flags and fragment offset",
@@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
},
[ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
.name = "frag_data",
- .help = "Fragment flags and offset",
+ .help = "fragment flags and offset",
.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
hdr.frag_data)),
},
+ [ITEM_IPV6_FRAG_EXT_ID] = {
+ .name = "packet_id",
+ .help = "fragment packet id",
+ .next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
+ hdr.id)),
+ },
[ITEM_ICMP6] = {
.name = "icmp6",
.help = "match any ICMPv6 header",
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
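Both new tokens reuse the generic UNSIGNED parser; ARGS_ENTRY_HTON records the value typed on the command line in network byte order at the field's offset inside the flow item. A minimal, endian-independent sketch of that conversion (the struct below is a simplified stand-in for the real rte_ipv4_hdr, not its actual definition):

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in holding only the packet_id field of rte_ipv4_hdr. */
struct ipv4_hdr_sketch {
	uint16_t packet_id; /* stored in network (big-endian) byte order */
};

/* What ARGS_ENTRY_HTON arranges for a 16-bit field: byte-swap the parsed
 * host-order value to network order before writing it into the item. */
static void store_packet_id(struct ipv4_hdr_sketch *h, uint16_t host_val)
{
	uint8_t be[2] = { (uint8_t)(host_val >> 8), (uint8_t)host_val };

	memcpy(&h->packet_id, be, sizeof(be));
}
```

So a rule entered as `packet_id is 0x1234` ends up with the bytes 0x12 0x34 in the item, whatever the host endianness.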
* Re: [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
@ 2021-04-19 7:43 ` Jack Min
2021-04-19 15:40 ` Ferruh Yigit
2021-04-19 15:37 ` Ferruh Yigit
2021-04-19 17:45 ` Ori Kam
2 siblings, 1 reply; 36+ messages in thread
From: Jack Min @ 2021-04-19 7:43 UTC (permalink / raw)
To: Jeff Guo
Cc: Ori Kam, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu, dev, ting.xu
On Tue, 21-04-13, 16:10, Jeff Guo wrote:
> Add the new items to support the flow configuration for IP fragment
> packets.
>
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> ---
> app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
> 1 file changed, 20 insertions(+), 1 deletion(-)
[...snip...]
> + [ITEM_IPV4_ID] = {
> + .name = "packet_id",
> + .help = "fragment packet id",
> + .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
> + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
> + hdr.packet_id)),
> + },
> [ITEM_IPV4_FRAGMENT_OFFSET] = {
> .name = "fragment_offset",
> .help = "fragmentation flags and fragment offset",
> @@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
> },
> [ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
> .name = "frag_data",
> - .help = "Fragment flags and offset",
> + .help = "fragment flags and offset",
Would it be better to have a separate fix patch for this?
-Jack
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
2021-04-19 7:43 ` Jack Min
@ 2021-04-19 15:40 ` Ferruh Yigit
2021-04-20 2:21 ` Jack Min
0 siblings, 1 reply; 36+ messages in thread
From: Ferruh Yigit @ 2021-04-19 15:40 UTC (permalink / raw)
To: Jack Min, Jeff Guo
Cc: Ori Kam, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu, dev, ting.xu
On 4/19/2021 8:43 AM, Jack Min wrote:
> On Tue, 21-04-13, 16:10, Jeff Guo wrote:
>> Add the new items to support the flow configuration for IP fragment
>> packets.
>>
>> Signed-off-by: Ting Xu <ting.xu@intel.com>
>> Signed-off-by: Jeff Guo <jia.guo@intel.com>
>> ---
>> app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
>> 1 file changed, 20 insertions(+), 1 deletion(-)
>
> [...snip...]
>
>> + [ITEM_IPV4_ID] = {
>> + .name = "packet_id",
>> + .help = "fragment packet id",
>> + .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
>> + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
>> + hdr.packet_id)),
>> + },
>> [ITEM_IPV4_FRAGMENT_OFFSET] = {
>> .name = "fragment_offset",
>> .help = "fragmentation flags and fragment offset",
>> @@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
>> },
>> [ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
>> .name = "frag_data",
>> - .help = "Fragment flags and offset",
>> + .help = "fragment flags and offset",
> Would it be better to have a separate fix patch for this?
>
You mean the case update of the help string, 'F' -> 'f', right?
If so, it is such a small, cosmetic update that I think it does not need its
own patch; since this patch is touching the related area, it is acceptable to fix it here.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
2021-04-19 15:40 ` Ferruh Yigit
@ 2021-04-20 2:21 ` Jack Min
0 siblings, 0 replies; 36+ messages in thread
From: Jack Min @ 2021-04-20 2:21 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jeff Guo, Ori Kam, qi.z.zhang, beilei.xing, xiaoyun.li,
jingjing.wu, dev, ting.xu
On Mon, 21-04-19, 16:40, Ferruh Yigit wrote:
> On 4/19/2021 8:43 AM, Jack Min wrote:
> > On Tue, 21-04-13, 16:10, Jeff Guo wrote:
> > > Add the new items to support the flow configuration for IP fragment
> > > packets.
> > >
> > > Signed-off-by: Ting Xu <ting.xu@intel.com>
> > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > ---
> > > app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
> > > 1 file changed, 20 insertions(+), 1 deletion(-)
> >
> > [...snip...]
> >
> > > + [ITEM_IPV4_ID] = {
> > > + .name = "packet_id",
> > > + .help = "fragment packet id",
> > > + .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
> > > + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
> > > + hdr.packet_id)),
> > > + },
> > > [ITEM_IPV4_FRAGMENT_OFFSET] = {
> > > .name = "fragment_offset",
> > > .help = "fragmentation flags and fragment offset",
> > > @@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
> > > },
> > > [ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
> > > .name = "frag_data",
> > > - .help = "Fragment flags and offset",
> > > + .help = "fragment flags and offset",
> > Would it be better to have a separate fix patch for this?
> >
>
> You mean the case update of the help string, 'F' -> 'f', right?
Right.
> If so, it is such a small, cosmetic update that I think it does not need
> its own patch; since this patch is touching the related area, it is
> acceptable to fix it here.
Yes, it's so small that I won't insist on my point. :)
Reviewed-by: Xiaoyu Min <jackmin@nvidia.com>
-Jack
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
2021-04-19 7:43 ` Jack Min
@ 2021-04-19 15:37 ` Ferruh Yigit
2021-04-19 17:45 ` Ori Kam
2 siblings, 0 replies; 36+ messages in thread
From: Ferruh Yigit @ 2021-04-19 15:37 UTC (permalink / raw)
To: Jeff Guo, orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu
On 4/13/2021 9:10 AM, Jeff Guo wrote:
> Add the new items to support the flow configuration for IP fragment
> packets.
>
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
Hi Ori,
Can you please check this patch?
If you don't have any objection, I am planning to get it for rc1.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
2021-04-19 7:43 ` Jack Min
2021-04-19 15:37 ` Ferruh Yigit
@ 2021-04-19 17:45 ` Ori Kam
2021-04-19 23:01 ` Ferruh Yigit
2 siblings, 1 reply; 36+ messages in thread
From: Ori Kam @ 2021-04-19 17:45 UTC (permalink / raw)
To: Jeff Guo, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu; +Cc: dev, ting.xu
Hi Jeff and Ting,
> -----Original Message-----
> From: Jeff Guo <jia.guo@intel.com>
> Sent: Tuesday, April 13, 2021 11:10 AM
> Subject: [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
>
> Add the new items to support the flow configuration for IP fragment
> packets.
>
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> ---
> app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
> 1 file changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index fb7a3a8bd3..46ae342b85 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -166,6 +166,7 @@ enum index {
> ITEM_VLAN_HAS_MORE_VLAN,
> ITEM_IPV4,
> ITEM_IPV4_TOS,
> + ITEM_IPV4_ID,
> ITEM_IPV4_FRAGMENT_OFFSET,
> ITEM_IPV4_TTL,
> ITEM_IPV4_PROTO,
> @@ -236,6 +237,7 @@ enum index {
> ITEM_IPV6_FRAG_EXT,
> ITEM_IPV6_FRAG_EXT_NEXT_HDR,
> ITEM_IPV6_FRAG_EXT_FRAG_DATA,
> + ITEM_IPV6_FRAG_EXT_ID,
> ITEM_ICMP6,
> ITEM_ICMP6_TYPE,
> ITEM_ICMP6_CODE,
> @@ -1028,6 +1030,7 @@ static const enum index item_vlan[] = {
>
> static const enum index item_ipv4[] = {
> ITEM_IPV4_TOS,
> + ITEM_IPV4_ID,
> ITEM_IPV4_FRAGMENT_OFFSET,
> ITEM_IPV4_TTL,
> ITEM_IPV4_PROTO,
> @@ -1164,6 +1167,7 @@ static const enum index item_ipv6_ext[] = {
> static const enum index item_ipv6_frag_ext[] = {
> ITEM_IPV6_FRAG_EXT_NEXT_HDR,
> ITEM_IPV6_FRAG_EXT_FRAG_DATA,
> + ITEM_IPV6_FRAG_EXT_ID,
> ITEM_NEXT,
> ZERO,
> };
> @@ -2466,6 +2470,13 @@ static const struct token token_list[] = {
> .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
> hdr.type_of_service)),
> },
> + [ITEM_IPV4_ID] = {
> + .name = "packet_id",
> + .help = "fragment packet id",
> + .next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED),
> item_param),
> + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
> + hdr.packet_id)),
> + },
> [ITEM_IPV4_FRAGMENT_OFFSET] = {
> .name = "fragment_offset",
> .help = "fragmentation flags and fragment offset",
> @@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
> },
> [ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
> .name = "frag_data",
> - .help = "Fragment flags and offset",
> + .help = "fragment flags and offset",
> .next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
> item_param),
> .args = ARGS(ARGS_ENTRY_HTON(struct
> rte_flow_item_ipv6_frag_ext,
> hdr.frag_data)),
> },
> + [ITEM_IPV6_FRAG_EXT_ID] = {
> + .name = "packet_id",
> + .help = "fragment packet id",
> + .next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
> + item_param),
> + .args = ARGS(ARGS_ENTRY_HTON(struct
> rte_flow_item_ipv6_frag_ext,
> + hdr.id)),
> + },
> [ITEM_ICMP6] = {
> .name = "icmp6",
> .help = "match any ICMPv6 header",
> --
> 2.20.1
Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori
^ permalink raw reply [flat|nested] 36+ messages in thread
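For readers following the series, the new tokens plug into testpmd's existing `flow` grammar. A hypothetical rule exercising the IPv4 `packet_id` range form might look like the following (port id and action are placeholders; the spec/last/mask shape mirrors what the iavf FDIR patch later in this series accepts):

```
testpmd> flow create 0 ingress pattern eth / ipv4 packet_id spec 0 \
         packet_id last 0xffff packet_id mask 0xffff \
         fragment_offset spec 0x2000 fragment_offset last 0x3fff \
         fragment_offset mask 0xffff / end actions queue index 1 / end
```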
* Re: [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
2021-04-19 17:45 ` Ori Kam
@ 2021-04-19 23:01 ` Ferruh Yigit
0 siblings, 0 replies; 36+ messages in thread
From: Ferruh Yigit @ 2021-04-19 23:01 UTC (permalink / raw)
To: Ori Kam, Jeff Guo, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu
On 4/19/2021 6:45 PM, Ori Kam wrote:
> Hi Jeff and Ting,
>
>> -----Original Message-----
>> From: Jeff Guo <jia.guo@intel.com>
>> Sent: Tuesday, April 13, 2021 11:10 AM
>> Subject: [PATCH v4 1/4] app/testpmd: add packet id for IP fragment
>>
>> Add the new items to support the flow configuration for IP fragment
>> packets.
>>
>> Signed-off-by: Ting Xu <ting.xu@intel.com>
>> Signed-off-by: Jeff Guo <jia.guo@intel.com>
>
> Acked-by: Ori Kam <orika@nvidia.com>
>
Applied to dpdk-next-net/main, thanks.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v4 2/4] common/iavf: add proto header for IP fragment
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Jeff Guo
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
@ 2021-04-13 8:10 ` Jeff Guo
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 3/4] net/iavf: support RSS hash " Jeff Guo
` (2 subsequent siblings)
4 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-13 8:10 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
Add new virtchnl protocol header types and fields for IP fragment packets
to support RSS hash and FDIR.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/common/iavf/virtchnl.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 6b99e170f0..e3eb767d66 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1415,7 +1415,9 @@ enum virtchnl_proto_hdr_type {
VIRTCHNL_PROTO_HDR_S_VLAN,
VIRTCHNL_PROTO_HDR_C_VLAN,
VIRTCHNL_PROTO_HDR_IPV4,
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG,
VIRTCHNL_PROTO_HDR_IPV6,
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
VIRTCHNL_PROTO_HDR_TCP,
VIRTCHNL_PROTO_HDR_UDP,
VIRTCHNL_PROTO_HDR_SCTP,
@@ -1452,6 +1454,8 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV4_DSCP,
VIRTCHNL_PROTO_HDR_IPV4_TTL,
VIRTCHNL_PROTO_HDR_IPV4_PROT,
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4_FRAG),
/* IPV6 */
VIRTCHNL_PROTO_HDR_IPV6_SRC =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
@@ -1472,6 +1476,9 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+ /* IPv6 Extension Header Fragment */
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
/* TCP */
VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
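Since virtchnl derives each header's field identifiers from the header type's position in the enum, adding the two fragment types also reserves a fresh field-id block for each, which is where the new `*_PKID` fields live. A minimal sketch of that convention follows; the shift value 5 matches the `PROTO_HDR_SHIFT` convention in virtchnl.h, but the enum values here are illustrative, not the real wire values:

```c
#include <assert.h>

/* Illustrative header types; real virtchnl values differ. */
enum proto_hdr_type {
	HDR_ETH = 1,
	HDR_IPV4 = 4,
	HDR_IPV4_FRAG,		/* new: inserted right after IPV4 */
	HDR_IPV6,
	HDR_IPV6_EH_FRAG,	/* new: inserted right after IPV6 */
};

/* Each header type owns a disjoint block of 2^PROTO_HDR_SHIFT field ids. */
#define PROTO_HDR_SHIFT		5
#define PROTO_HDR_FIELD_START(t)	((t) << PROTO_HDR_SHIFT)

enum proto_hdr_field {
	HDR_IPV4_FRAG_PKID = PROTO_HDR_FIELD_START(HDR_IPV4_FRAG),
	HDR_IPV6_EH_FRAG_PKID = PROTO_HDR_FIELD_START(HDR_IPV6_EH_FRAG),
};
```

`VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID` and `VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID` in the patch follow this same pattern, so a field id can always be mapped back to its owning header type.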
* [dpdk-dev] [PATCH v4 3/4] net/iavf: support RSS hash for IP fragment
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Jeff Guo
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment Jeff Guo
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 2/4] common/iavf: add proto header " Jeff Guo
@ 2021-04-13 8:10 ` Jeff Guo
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
2021-04-13 9:30 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Zhang, Qi Z
4 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-13 8:10 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
New pattern and RSS hash flow parsing are added to handle fragmented
IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_generic_flow.c | 24 ++++++++
drivers/net/iavf/iavf_generic_flow.h | 3 +
drivers/net/iavf/iavf_hash.c | 83 ++++++++++++++++++++++++----
3 files changed, 100 insertions(+), 10 deletions(-)
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 8635ff83ca..242bb4abc5 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -219,6 +219,30 @@ enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[] = {
RTE_FLOW_ITEM_TYPE_END,
};
+enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[] = {
+ RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_VLAN,
+ RTE_FLOW_ITEM_TYPE_IPV6,
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+ RTE_FLOW_ITEM_TYPE_END,
+};
+
enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV6,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 005eeb3553..32932557ca 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -203,6 +203,9 @@ extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv4_icmp[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[];
+extern enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[];
extern enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_udp[];
extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_udp[];
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index d8d22f8009..5d3d62839b 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -112,6 +112,10 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_DST), {BUFF_NOUSED} }
+#define proto_hdr_ipv6_frag { \
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG, \
+ FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID), {BUFF_NOUSED} }
+
#define proto_hdr_ipv6_with_prot { \
VIRTCHNL_PROTO_HDR_IPV6, \
FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
@@ -190,6 +194,12 @@ struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
};
+struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
+ TUNNEL_LEVEL_OUTER, 5,
+ {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+ proto_hdr_ipv6, proto_hdr_ipv6_frag}
+};
+
struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
TUNNEL_LEVEL_OUTER, 5,
{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
@@ -303,7 +313,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
ETH_RSS_NONFRAG_IPV4_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
@@ -312,6 +323,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 outer */
#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
+ ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_NONFRAG_IPV6_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
@@ -330,6 +343,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
+ ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
@@ -415,10 +430,12 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
/* IPv6 */
{iavf_pattern_eth_ipv6, IAVF_RSS_TYPE_OUTER_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_ipv6_udp, IAVF_RSS_TYPE_OUTER_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv6_tcp, IAVF_RSS_TYPE_OUTER_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_ipv6_sctp, IAVF_RSS_TYPE_OUTER_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
{iavf_pattern_eth_vlan_ipv6, IAVF_RSS_TYPE_VLAN_IPV6, &outer_ipv6_tmplt},
+ {iavf_pattern_eth_vlan_ipv6_frag_ext, IAVF_RSS_TYPE_OUTER_IPV6_FRAG, &outer_ipv6_frag_tmplt},
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
@@ -626,6 +643,29 @@ do { \
REFINE_PROTO_FLD(ADD, fld_2); \
} while (0)
+static void
+iavf_hash_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+ struct virtchnl_proto_hdr *hdr1;
+ struct virtchnl_proto_hdr *hdr2;
+ int i;
+
+ if (layer < 0 || layer > hdrs->count)
+ return;
+
+ /* shift headers layer */
+ for (i = hdrs->count; i >= layer; i--) {
+ hdr1 = &hdrs->proto_hdr[i];
+ hdr2 = &hdrs->proto_hdr[i - 1];
+ *hdr1 = *hdr2;
+ }
+
+ /* adding dummy fragment header */
+ hdr1 = &hdrs->proto_hdr[layer];
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+ hdrs->count = ++layer;
+}
+
/* refine proto hdrs base on l2, l3, l4 rss type */
static void
iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
@@ -647,17 +687,19 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 |
+ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
ETH_RSS_NONFRAG_IPV4_UDP |
ETH_RSS_NONFRAG_IPV4_TCP |
ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & ETH_RSS_FRAG_IPV4) {
+ iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
+ } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (ETH_RSS_L4_SRC_ONLY |
+ ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -665,9 +707,21 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr->field_selector = 0;
}
break;
+ case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
+ if (rss_type &
+ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
+ ETH_RSS_NONFRAG_IPV4_UDP |
+ ETH_RSS_NONFRAG_IPV4_TCP |
+ ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & ETH_RSS_FRAG_IPV4)
+ REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
+ } else {
+ hdr->field_selector = 0;
+ }
+ break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 |
+ (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
ETH_RSS_NONFRAG_IPV6_UDP |
ETH_RSS_NONFRAG_IPV6_TCP |
ETH_RSS_NONFRAG_IPV6_SCTP)) {
@@ -676,8 +730,8 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (ETH_RSS_L4_SRC_ONLY |
+ ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -692,6 +746,13 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
REPALCE_PROTO_FLD(IPV6_DST,
IPV6_PREFIX64_DST);
}
+ break;
+ case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
+ if (rss_type & ETH_RSS_FRAG_IPV6)
+ REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
+ else
+ hdr->field_selector = 0;
+
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
@@ -885,8 +946,10 @@ struct rss_attr_type {
ETH_RSS_NONFRAG_IPV6_TCP | \
ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | VALID_RSS_IPV6_L4)
+#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+ VALID_RSS_IPV4_L4)
+#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+ VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
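The helper added here (`iavf_hash_add_fragment_hdr`, mirrored by `iavf_fdir_add_fragment_hdr` in the next patch) makes room for a dummy fragment header by shifting the existing layers up one slot. A standalone sketch of that shift-and-insert is below, with simplified types; the struct layout, the `MAX_HDRS` bound, and the integer header markers are assumptions for illustration, not the real virtchnl ABI:

```c
#include <assert.h>

/* Simplified model of virtchnl_proto_hdrs: a count plus fixed slots.
 * Names and sizes here are illustrative only. */
#define MAX_HDRS 8
struct hdrs {
	int count;
	int proto_hdr[MAX_HDRS];
};

/* Shift entries in [layer, count) up by one and drop a fragment-header
 * marker into the freed slot, as the patch's helper does. */
static void add_fragment_hdr(struct hdrs *h, int layer, int frag_type)
{
	int i;

	if (layer < 0 || layer > h->count || h->count >= MAX_HDRS)
		return;

	for (i = h->count; i > layer; i--)
		h->proto_hdr[i] = h->proto_hdr[i - 1];

	h->proto_hdr[layer] = frag_type;
	h->count++;
}
```

The patch calls this with `i + 1` as the layer, so the dummy fragment header always lands immediately after the IPv4 header that triggered it.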
* [dpdk-dev] [PATCH v4 4/4] net/iavf: support FDIR for IP fragment packet
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Jeff Guo
` (2 preceding siblings ...)
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 3/4] net/iavf: support RSS hash " Jeff Guo
@ 2021-04-13 8:10 ` Jeff Guo
2021-04-13 9:30 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Zhang, Qi Z
4 siblings, 0 replies; 36+ messages in thread
From: Jeff Guo @ 2021-04-13 8:10 UTC (permalink / raw)
To: orika, qi.z.zhang, beilei.xing, xiaoyun.li, jingjing.wu
Cc: dev, ting.xu, jia.guo
New FDIR flow parsing is added to handle fragmented IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
drivers/net/iavf/iavf_fdir.c | 386 ++++++++++++++++++---------
drivers/net/iavf/iavf_generic_flow.h | 5 +
2 files changed, 267 insertions(+), 124 deletions(-)
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 62f032985a..f238a83c84 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -34,7 +34,7 @@
#define IAVF_FDIR_INSET_ETH_IPV4 (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \
- IAVF_INSET_IPV4_TTL)
+ IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
#define IAVF_FDIR_INSET_ETH_IPV4_UDP (\
IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
@@ -56,6 +56,9 @@
IAVF_INSET_IPV6_NEXT_HDR | IAVF_INSET_IPV6_TC | \
IAVF_INSET_IPV6_HOP_LIMIT)
+#define IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT (\
+ IAVF_INSET_IPV6_ID)
+
#define IAVF_FDIR_INSET_ETH_IPV6_UDP (\
IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \
IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \
@@ -143,6 +146,7 @@ static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
{iavf_pattern_eth_ipv4_tcp, IAVF_FDIR_INSET_ETH_IPV4_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv4_sctp, IAVF_FDIR_INSET_ETH_IPV4_SCTP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6, IAVF_FDIR_INSET_ETH_IPV6, IAVF_INSET_NONE},
+ {iavf_pattern_eth_ipv6_frag_ext, IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_udp, IAVF_FDIR_INSET_ETH_IPV6_UDP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_tcp, IAVF_FDIR_INSET_ETH_IPV6_TCP, IAVF_INSET_NONE},
{iavf_pattern_eth_ipv6_sctp, IAVF_FDIR_INSET_ETH_IPV6_SCTP, IAVF_INSET_NONE},
@@ -543,6 +547,29 @@ iavf_fdir_refine_input_set(const uint64_t input_set,
}
}
+static void
+iavf_fdir_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+ struct virtchnl_proto_hdr *hdr1;
+ struct virtchnl_proto_hdr *hdr2;
+ int i;
+
+ if (layer < 0 || layer > hdrs->count)
+ return;
+
+ /* shift headers layer */
+ for (i = hdrs->count; i >= layer; i--) {
+ hdr1 = &hdrs->proto_hdr[i];
+ hdr2 = &hdrs->proto_hdr[i - 1];
+ *hdr1 = *hdr2;
+ }
+
+ /* adding dummy fragment header */
+ hdr1 = &hdrs->proto_hdr[layer];
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+ hdrs->count = ++layer;
+}
+
static int
iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
const struct rte_flow_item pattern[],
@@ -550,12 +577,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
struct rte_flow_error *error,
struct iavf_fdir_conf *filter)
{
- const struct rte_flow_item *item = pattern;
- enum rte_flow_item_type item_type;
+ struct virtchnl_proto_hdrs *hdrs =
+ &filter->add_fltr.rule_cfg.proto_hdrs;
enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
const struct rte_flow_item_eth *eth_spec, *eth_mask;
- const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+ const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_spec;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_last;
+ const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_mask;
const struct rte_flow_item_udp *udp_spec, *udp_mask;
const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
@@ -566,15 +596,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
const struct rte_flow_item_ah *ah_spec, *ah_mask;
const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
const struct rte_flow_item_ecpri *ecpri_spec, *ecpri_mask;
+ const struct rte_flow_item *item = pattern;
+ struct virtchnl_proto_hdr *hdr, *hdr1 = NULL;
struct rte_ecpri_common_hdr ecpri_common;
uint64_t input_set = IAVF_INSET_NONE;
-
+ enum rte_flow_item_type item_type;
enum rte_flow_item_type next_type;
+ uint8_t tun_inner = 0;
uint16_t ether_type;
-
- u8 tun_inner = 0;
int layer = 0;
- struct virtchnl_proto_hdr *hdr;
uint8_t ipv6_addr_mask[16] = {
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
@@ -582,26 +612,28 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
};
for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
+ item_type = item->type;
+
+ if (item->last && !(item_type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+ item_type ==
+ RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT)) {
rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Not support range");
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Not support range");
}
- item_type = item->type;
-
switch (item_type) {
case RTE_FLOW_ITEM_TYPE_ETH:
eth_spec = item->spec;
eth_mask = item->mask;
next_type = (item + 1)->type;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr1 = &hdrs->proto_hdr[layer];
- VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ETH);
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, ETH);
if (next_type == RTE_FLOW_ITEM_TYPE_END &&
- (!eth_spec || !eth_mask)) {
+ (!eth_spec || !eth_mask)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item, "NULL eth spec/mask.");
@@ -637,69 +669,122 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
}
input_set |= IAVF_INSET_ETHERTYPE;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
- rte_memcpy(hdr->buffer,
- eth_spec, sizeof(struct rte_ether_hdr));
+ rte_memcpy(hdr1->buffer, eth_spec,
+ sizeof(struct rte_ether_hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
l3 = RTE_FLOW_ITEM_TYPE_IPV4;
ipv4_spec = item->spec;
+ ipv4_last = item->last;
ipv4_mask = item->mask;
+ next_type = (item + 1)->type;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4);
- if (ipv4_spec && ipv4_mask) {
- if (ipv4_mask->hdr.version_ihl ||
- ipv4_mask->hdr.total_length ||
- ipv4_mask->hdr.packet_id ||
- ipv4_mask->hdr.fragment_offset ||
- ipv4_mask->hdr.hdr_checksum) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv4 mask.");
- return -rte_errno;
- }
+ if (!(ipv4_spec && ipv4_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if (ipv4_mask->hdr.type_of_service ==
- UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TOS;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DSCP);
- }
- if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_PROTO;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
- }
- if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV4_TTL;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, TTL);
- }
- if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
- }
- if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
- input_set |= IAVF_INSET_IPV4_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
- }
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 mask.");
+ return -rte_errno;
+ }
- if (tun_inner) {
- input_set &= ~IAVF_PROT_IPV4_OUTER;
- input_set |= IAVF_PROT_IPV4_INNER;
- }
+ if (ipv4_last &&
+ (ipv4_last->hdr.version_ihl ||
+ ipv4_last->hdr.type_of_service ||
+ ipv4_last->hdr.time_to_live ||
> + ipv4_last->hdr.total_length ||
+ ipv4_last->hdr.next_proto_id ||
+ ipv4_last->hdr.hdr_checksum ||
+ ipv4_last->hdr.src_addr ||
+ ipv4_last->hdr.dst_addr)) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 last.");
+ return -rte_errno;
+ }
- rte_memcpy(hdr->buffer,
- &ipv4_spec->hdr,
- sizeof(ipv4_spec->hdr));
+ if (ipv4_mask->hdr.type_of_service ==
+ UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TOS;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DSCP);
+ }
+
+ if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_PROTO;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ PROT);
+ }
+
+ if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV4_TTL;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ TTL);
+ }
+
+ if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ SRC);
+ }
+
+ if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
+ input_set |= IAVF_INSET_IPV4_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+ DST);
+ }
+
+ if (tun_inner) {
+ input_set &= ~IAVF_PROT_IPV4_OUTER;
+ input_set |= IAVF_PROT_IPV4_INNER;
+ }
+
+ rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
+ sizeof(ipv4_spec->hdr));
+
+ hdrs->count = ++layer;
+
+ /* only support any packet id for fragment IPv4
+ * any packet_id:
+ * spec is 0, last is 0xffff, mask is 0xffff
+ */
+ if (ipv4_last && ipv4_spec->hdr.packet_id == 0 &&
+ ipv4_last->hdr.packet_id == UINT16_MAX &&
+ ipv4_mask->hdr.packet_id == UINT16_MAX &&
+ ipv4_mask->hdr.fragment_offset == UINT16_MAX) {
> + /* all IPv4 fragment packets have the same
> + * ethertype; if the spec is for any valid
> + * packet id, set ethertype into the input set.
> + */
+ input_set |= IAVF_INSET_ETHERTYPE;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
+
+ /* add dummy header for IPv4 Fragment */
+ iavf_fdir_add_fragment_hdr(hdrs, layer);
+ } else if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv4 mask.");
+ return -rte_errno;
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
@@ -707,63 +792,114 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
ipv6_spec = item->spec;
ipv6_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
- if (ipv6_spec && ipv6_mask) {
- if (ipv6_mask->hdr.payload_len) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid IPv6 mask");
- return -rte_errno;
- }
+ if (!(ipv6_spec && ipv6_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
- if ((ipv6_mask->hdr.vtc_flow &
- rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
- == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
- input_set |= IAVF_INSET_IPV6_TC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
- }
- if (ipv6_mask->hdr.proto == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_NEXT_HDR;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
- }
- if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
- input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
- }
- if (!memcmp(ipv6_mask->hdr.src_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.src_addr))) {
- input_set |= IAVF_INSET_IPV6_SRC;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
- }
- if (!memcmp(ipv6_mask->hdr.dst_addr,
- ipv6_addr_mask,
- RTE_DIM(ipv6_mask->hdr.dst_addr))) {
- input_set |= IAVF_INSET_IPV6_DST;
- VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
- }
+ if (ipv6_mask->hdr.payload_len) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv6 mask");
+ return -rte_errno;
+ }
- if (tun_inner) {
- input_set &= ~IAVF_PROT_IPV6_OUTER;
- input_set |= IAVF_PROT_IPV6_INNER;
- }
+ if ((ipv6_mask->hdr.vtc_flow &
+ rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
+ == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
+ input_set |= IAVF_INSET_IPV6_TC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ TC);
+ }
- rte_memcpy(hdr->buffer,
- &ipv6_spec->hdr,
- sizeof(ipv6_spec->hdr));
+ if (ipv6_mask->hdr.proto == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_NEXT_HDR;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ PROT);
+ }
+
+ if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
+ input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ HOP_LIMIT);
+ }
+
+ if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.src_addr))) {
+ input_set |= IAVF_INSET_IPV6_SRC;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ SRC);
+ }
+ if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
+ RTE_DIM(ipv6_mask->hdr.dst_addr))) {
+ input_set |= IAVF_INSET_IPV6_DST;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+ DST);
+ }
+
+ if (tun_inner) {
+ input_set &= ~IAVF_PROT_IPV6_OUTER;
+ input_set |= IAVF_PROT_IPV6_INNER;
+ }
+
+ rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
+ sizeof(ipv6_spec->hdr));
+
+ hdrs->count = ++layer;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+ ipv6_frag_spec = item->spec;
+ ipv6_frag_last = item->last;
+ ipv6_frag_mask = item->mask;
+ next_type = (item + 1)->type;
+
+ hdr = &hdrs->proto_hdr[layer];
+
+ VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6_EH_FRAG);
+
+ if (!(ipv6_frag_spec && ipv6_frag_mask)) {
+ hdrs->count = ++layer;
+ break;
+ }
+
+ /* only support any packet id for fragment IPv6
+ * any packet_id:
+ * spec is 0, last is 0xffffffff, mask is 0xffffffff
+ */
+ if (ipv6_frag_last && ipv6_frag_spec->hdr.id == 0 &&
+ ipv6_frag_last->hdr.id == UINT32_MAX &&
+ ipv6_frag_mask->hdr.id == UINT32_MAX &&
+ ipv6_frag_mask->hdr.frag_data == UINT16_MAX) {
> + /* all IPv6 fragment packets have the same
> + * ethertype; if the spec is for any valid
> + * packet id, set ethertype into the input set.
> + */
+ input_set |= IAVF_INSET_ETHERTYPE;
+ VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+ ETHERTYPE);
+
+ rte_memcpy(hdr->buffer, &ipv6_frag_spec->hdr,
+ sizeof(ipv6_frag_spec->hdr));
+ } else if (ipv6_frag_mask->hdr.id == UINT32_MAX) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Invalid IPv6 mask.");
+ return -rte_errno;
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_UDP:
udp_spec = item->spec;
udp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, UDP);
@@ -800,14 +936,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(udp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
tcp_spec = item->spec;
tcp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, TCP);
@@ -849,14 +985,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(tcp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_SCTP:
sctp_spec = item->spec;
sctp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, SCTP);
@@ -887,14 +1023,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(sctp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_GTPU:
gtp_spec = item->spec;
gtp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
@@ -919,14 +1055,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
tun_inner = 1;
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_GTP_PSC:
gtp_psc_spec = item->spec;
gtp_psc_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
if (!gtp_psc_spec)
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_EH);
@@ -947,14 +1083,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*gtp_psc_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_L2TPV3OIP:
l2tpv3oip_spec = item->spec;
l2tpv3oip_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, L2TPV3);
@@ -968,14 +1104,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*l2tpv3oip_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_ESP:
esp_spec = item->spec;
esp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ESP);
@@ -989,14 +1125,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(esp_spec->hdr));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_AH:
ah_spec = item->spec;
ah_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, AH);
@@ -1010,14 +1146,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*ah_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_PFCP:
pfcp_spec = item->spec;
pfcp_mask = item->mask;
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, PFCP);
@@ -1031,7 +1167,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*pfcp_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_ECPRI:
@@ -1040,7 +1176,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
ecpri_common.u32 = rte_be_to_cpu_32(ecpri_spec->hdr.common.u32);
- hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+ hdr = &hdrs->proto_hdr[layer];
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ECPRI);
@@ -1056,7 +1192,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
sizeof(*ecpri_spec));
}
- filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+ hdrs->count = ++layer;
break;
case RTE_FLOW_ITEM_TYPE_VOID:
@@ -1077,7 +1213,9 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
return -rte_errno;
}
- if (!iavf_fdir_refine_input_set(input_set, input_set_mask, filter)) {
+ if (!iavf_fdir_refine_input_set(input_set,
+ input_set_mask | IAVF_INSET_ETHERTYPE,
+ filter)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM_SPEC, pattern,
"Invalid input set");
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 32932557ca..e19da15518 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -61,6 +61,7 @@
#define IAVF_PFCP_S_FIELD (1ULL << 44)
#define IAVF_PFCP_SEID (1ULL << 43)
#define IAVF_ECPRI_PC_RTC_ID (1ULL << 42)
+#define IAVF_IP_PKID (1ULL << 41)
/* input set */
@@ -84,6 +85,8 @@
(IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
#define IAVF_INSET_IPV4_TTL \
(IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
+#define IAVF_INSET_IPV4_ID \
+ (IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_IPV6_SRC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
#define IAVF_INSET_IPV6_DST \
@@ -94,6 +97,8 @@
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
#define IAVF_INSET_IPV6_TC \
(IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
+#define IAVF_INSET_IPV6_ID \
+ (IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
#define IAVF_INSET_TUN_IPV4_SRC \
(IAVF_PROT_IPV4_INNER | IAVF_IP_SRC)
--
2.20.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 0/4] support flow for IP fragment in IAVF Jeff Guo
` (3 preceding siblings ...)
2021-04-13 8:10 ` [dpdk-dev] [PATCH v4 4/4] net/iavf: support FDIR for IP fragment packet Jeff Guo
@ 2021-04-13 9:30 ` Zhang, Qi Z
4 siblings, 0 replies; 36+ messages in thread
From: Zhang, Qi Z @ 2021-04-13 9:30 UTC (permalink / raw)
To: Guo, Jia, orika, Xing, Beilei, Li, Xiaoyun, Wu, Jingjing; +Cc: dev, Xu, Ting
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Tuesday, April 13, 2021 4:10 PM
> To: orika@nvidia.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Jia <jia.guo@intel.com>
> Subject: [PATCH v4 0/4] support flow for IP fragment in IAVF
>
> support flow for IP fragment in IAVF
>
> v4:
> restore code that should not have been deleted
> v3:
> rebase code and fix some parsing issues
> v2:
> refine some input checks
>
> Jeff Guo (4):
> app/testpmd: add packet id for IP fragment
> common/iavf: add proto header for IP fragment
> net/iavf: support RSS hash for IP fragment
> net/iavf: support FDIR for IP fragment packet
>
> app/test-pmd/cmdline_flow.c | 21 +-
> drivers/common/iavf/virtchnl.h | 7 +
> drivers/net/iavf/iavf_fdir.c | 386 ++++++++++++++++++---------
> drivers/net/iavf/iavf_generic_flow.c | 24 ++
> drivers/net/iavf/iavf_generic_flow.h | 8 +
> drivers/net/iavf/iavf_hash.c | 83 +++++-
> 6 files changed, 394 insertions(+), 135 deletions(-)
>
> --
> 2.20.1
For patches 2, 3 and 4:
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Applied to dpdk-next-net-intel.
Thanks
Qi
^ permalink raw reply [flat|nested] 36+ messages in thread