DPDK patches and discussions
* [PATCH 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF
@ 2022-04-07  6:27 Junfeng Guo
  2022-04-07  6:27 ` [PATCH 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
                   ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-07  6:27 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch set enables Protocol Agnostic Flow (raw flow) Offloading
for FDIR in AVF.

[PATCH 1/3] common/iavf: support raw packet in protocol header
[PATCH 2/3] net/iavf: align with proto hdr struct change
[PATCH 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR


Junfeng Guo (3):
  common/iavf: support raw packet in protocol header
  net/iavf: align with proto hdr struct change
  net/iavf: enable Protocol Agnostic Flow Offloading FDIR

 drivers/common/iavf/virtchnl.h       |  20 ++-
 drivers/net/iavf/iavf_fdir.c         |  66 ++++++++++
 drivers/net/iavf/iavf_generic_flow.c |   6 +
 drivers/net/iavf/iavf_generic_flow.h |   3 +
 drivers/net/iavf/iavf_hash.c         | 180 ++++++++++++++-------------
 5 files changed, 183 insertions(+), 92 deletions(-)

-- 
2.25.1



* [PATCH 1/3] common/iavf: support raw packet in protocol header
  2022-04-07  6:27 [PATCH 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
@ 2022-04-07  6:27 ` Junfeng Guo
  2022-04-07  6:27 ` [PATCH 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
  2022-04-07  6:27 ` [PATCH 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-07  6:27 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch extends the existing virtchnl_proto_hdrs structure to allow
a VF to pass a pair of buffers, packet data and mask, that describe the
match pattern of a filter rule. The kernel PF driver is then requested
to parse the pair of buffers and figure out the low-level hardware
metadata (ptype, profile, field vector, etc.) needed to program the
expected FDIR or RSS rules.
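
As a rough sketch (not part of this patch; the helper name and includes
are hypothetical), a VF could fill the new union as follows. Note that
the raw member (2 + 1024 + 1024 = 2050 bytes) is smaller than the
existing proto_hdr array (32 * 72 = 2304 bytes), so the structure size
verified by VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs) does
not change.

#include <string.h>	/* memset, memcpy */
#include "virtchnl.h"	/* struct virtchnl_proto_hdrs, u8, u16 */

/* Sketch only: build a raw packet match request to send to the PF. */
static int
fill_raw_proto_hdrs(struct virtchnl_proto_hdrs *hdrs,
		    const u8 *pkt, const u8 *msk, u16 len)
{
	if (len > VIRTCHNL_MAX_SIZE_RAW_PACKET)
		return -1;
	memset(hdrs, 0, sizeof(*hdrs));
	hdrs->tunnel_level = 0;	/* must be 0 for a raw packet request */
	hdrs->count = 0;	/* must be 0 for a raw packet request */
	hdrs->raw.pkt_len = len;	/* length of the template packet */
	memcpy(hdrs->raw.spec, pkt, len);	/* packet data buffer */
	memcpy(hdrs->raw.mask, msk, len);	/* per-byte match mask */
	return 0;
}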

INTERNAL ONLY:

This is a requirement from DPDK to support Protocol Agnostic Flow
Offloading [1]. We previously integrated the Parser Library [2] into
DPDK and enabled raw-packet-based FDIR and RSS support in the DPDK PF
driver [3][4]. To enable the same feature for the AVF driver, the
Virtual Channel needs to support passing raw packet filter rules.

[1] https://wiki.ith.intel.com/display/NPGCVL/Protocol+Agnostic+Flow+Offloading
[2] http://patchwork.dpdk.org/project/dpdk/list/?series=19057&archive=both&state=*
[3] http://patchwork.dpdk.org/project/dpdk/list/?series=20254&state=%2A&archive=both
[4] http://patchwork.dpdk.org/project/dpdk/list/?series=20291&state=%2A&archive=both

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/common/iavf/virtchnl.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 3e44eca7d8..3975229545 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1482,6 +1482,7 @@ enum virtchnl_vfr_states {
 };
 
 #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
+#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
 #define PROTO_HDR_SHIFT			5
 #define PROTO_HDR_FIELD_START(proto_hdr_type) \
 					(proto_hdr_type << PROTO_HDR_SHIFT)
@@ -1676,14 +1677,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
 struct virtchnl_proto_hdrs {
 	u8 tunnel_level;
 	/**
-	 * specify where protocol header start from.
+	 * specify where protocol header start from. must be 0 when sending a raw packet request.
 	 * 0 - from the outer layer
 	 * 1 - from the first inner layer
 	 * 2 - from the second inner layer
 	 * ....
-	 **/
-	int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
-	struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+	 */
+	int count;
+	/**
+	 * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS
+	 * must be 0 for a raw packet request.
+	 */
+	union {
+		struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+		struct {
+			u16 pkt_len;
+			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+		} raw;
+	};
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
-- 
2.25.1



* [PATCH 2/3] net/iavf: align with proto hdr struct change
  2022-04-07  6:27 [PATCH 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2022-04-07  6:27 ` [PATCH 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-04-07  6:27 ` Junfeng Guo
  2022-04-07  6:27 ` [PATCH 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-07  6:27 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

The virtchnl_proto_hdrs structure is extended with a union holding the
proto_hdr table and a new raw struct. Update the proto_hdrs template
initializers accordingly to align with the virtchnl changes.
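
For clarity, the mechanical change to each initializer looks like this
(a minimal before/after sketch of one declaration, not lines taken from
the diff; the template name is made up):

/* Before: proto_hdr[] was initialized directly as the third member. */
struct virtchnl_proto_hdrs example_tmplt = {
	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_udp}
};

/* After: proto_hdr[] is now the first member of an anonymous union, so
 * the same declaration needs one extra level of braces around the array.
 */
struct virtchnl_proto_hdrs example_tmplt = {
	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_udp}}
};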

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 180 ++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 88 deletions(-)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index f35a07653b..278e75117d 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 /* proto_hdrs template */
 struct virtchnl_proto_hdrs outer_ipv4_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6, proto_hdr_ipv6_frag}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = {
-	2, 1, {proto_hdr_ipv4}
+	2, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = {
-	2, 1, {proto_hdr_ipv6}
+	2, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv6_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs eth_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
+
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 /* rss type super set */
-- 
2.25.1



* [PATCH 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-04-07  6:27 [PATCH 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2022-04-07  6:27 ` [PATCH 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
  2022-04-07  6:27 ` [PATCH 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
@ 2022-04-07  6:27 ` Junfeng Guo
  2022-04-08  8:02   ` [PATCH v2 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-04-07  6:27 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch enables Protocol Agnostic Flow Offloading for FDIR in AVF.
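
As a usage sketch (placeholder values, not taken from this patch): a
rule is expressed with a single RTE_FLOW_ITEM_TYPE_RAW item whose spec
and mask patterns are equal-length ASCII hex strings. The parser below
decodes every two hex characters into one byte, so the pkt_len sent to
the PF is length / 2; a real rule would carry a complete template
packet in the spec so the PF parser can derive the matching profile.

#include <stdint.h>
#include <rte_flow.h>

/* Sketch only: equal-length (and even-length) hex-text buffers. */
static const uint8_t spec_hex[] = "0800450000280000";	/* placeholder */
static const uint8_t mask_hex[] = "ffffff0000000000";	/* placeholder */

static const struct rte_flow_item_raw raw_spec = {
	.pattern = spec_hex,
	.length = sizeof(spec_hex) - 1,	/* exclude trailing '\0' */
};
static const struct rte_flow_item_raw raw_mask = {
	.pattern = mask_hex,
	.length = sizeof(mask_hex) - 1,
};

/* RAW must be the first and only item before END (see the item_num
 * check in iavf_fdir_parse_pattern).
 */
static const struct rte_flow_item raw_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_RAW,
	  .spec = &raw_spec, .mask = &raw_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};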

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/iavf/iavf_fdir.c         | 66 ++++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c |  6 +++
 drivers/net/iavf/iavf_generic_flow.h |  3 ++
 3 files changed, 75 insertions(+)

diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..bd0ae544da 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			 IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			 IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		 IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 			&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t  ipv6_addr_mask[16] = {
@@ -763,8 +766,71 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   RTE_FLOW_ERROR_TYPE_ITEM, item,
 					   "Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];
 
-- 
2.25.1



* [PATCH v2 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF
  2022-04-07  6:27 ` [PATCH 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
@ 2022-04-08  8:02   ` Junfeng Guo
  2022-04-08  8:02     ` [PATCH v2 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
                       ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  8:02 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch set enables Protocol Agnostic Flow (raw flow) Offloading
for FDIR in AVF.

[PATCH v2 1/3] common/iavf: support raw packet in protocol header
[PATCH v2 2/3] net/iavf: align with proto hdr struct change
[PATCH v2 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR

v2:
add release notes and documentation update.

Junfeng Guo (3):
  common/iavf: support raw packet in protocol header
  net/iavf: align with proto hdr struct change
  net/iavf: enable Protocol Agnostic Flow Offloading FDIR

 doc/guides/rel_notes/release_22_07.rst |   4 +
 drivers/common/iavf/virtchnl.h         |  20 ++-
 drivers/net/iavf/iavf_fdir.c           |  66 +++++++++
 drivers/net/iavf/iavf_generic_flow.c   |   6 +
 drivers/net/iavf/iavf_generic_flow.h   |   3 +
 drivers/net/iavf/iavf_hash.c           | 180 +++++++++++++------------
 6 files changed, 187 insertions(+), 92 deletions(-)

-- 
2.25.1



* [PATCH v2 1/3] common/iavf: support raw packet in protocol header
  2022-04-08  8:02   ` [PATCH v2 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
@ 2022-04-08  8:02     ` Junfeng Guo
  2022-04-08  8:02     ` [PATCH v2 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
  2022-04-08  8:02     ` [PATCH v2 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  8:02 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch extends the existing virtchnl_proto_hdrs structure to allow
a VF to pass a pair of buffers, packet data and mask, that describe the
match pattern of a filter rule. The kernel PF driver is then requested
to parse the pair of buffers and figure out the low-level hardware
metadata (ptype, profile, field vector, etc.) needed to program the
expected FDIR or RSS rules.

INTERNAL ONLY:

This is a requirement from DPDK to support Protocol Agnostic Flow
Offloading [1]. We previously integrated the Parser Library [2] into
DPDK and enabled raw-packet-based FDIR and RSS support in the DPDK PF
driver [3][4]. To enable the same feature for the AVF driver, the
Virtual Channel needs to support passing raw packet filter rules.

[1] https://wiki.ith.intel.com/display/NPGCVL/Protocol+Agnostic+Flow+Offloading
[2] http://patchwork.dpdk.org/project/dpdk/list/?series=19057&archive=both&state=*
[3] http://patchwork.dpdk.org/project/dpdk/list/?series=20254&state=%2A&archive=both
[4] http://patchwork.dpdk.org/project/dpdk/list/?series=20291&state=%2A&archive=both

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/common/iavf/virtchnl.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 3e44eca7d8..3975229545 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1482,6 +1482,7 @@ enum virtchnl_vfr_states {
 };
 
 #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
+#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
 #define PROTO_HDR_SHIFT			5
 #define PROTO_HDR_FIELD_START(proto_hdr_type) \
 					(proto_hdr_type << PROTO_HDR_SHIFT)
@@ -1676,14 +1677,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
 struct virtchnl_proto_hdrs {
 	u8 tunnel_level;
 	/**
-	 * specify where protocol header start from.
+	 * specify where protocol header start from. must be 0 when sending a raw packet request.
 	 * 0 - from the outer layer
 	 * 1 - from the first inner layer
 	 * 2 - from the second inner layer
 	 * ....
-	 **/
-	int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
-	struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+	 */
+	int count;
+	/**
+	 * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS
+	 * must be 0 for a raw packet request.
+	 */
+	union {
+		struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+		struct {
+			u16 pkt_len;
+			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+		} raw;
+	};
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
-- 
2.25.1



* [PATCH v2 2/3] net/iavf: align with proto hdr struct change
  2022-04-08  8:02   ` [PATCH v2 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2022-04-08  8:02     ` [PATCH v2 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-04-08  8:02     ` Junfeng Guo
  2022-04-08  8:02     ` [PATCH v2 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  8:02 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

The virtchnl_proto_hdrs structure is extended with a union holding the
proto_hdr table and a new raw struct. Update the proto_hdrs template
initializers accordingly to align with the virtchnl changes.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 180 ++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 88 deletions(-)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index f35a07653b..278e75117d 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 /* proto_hdrs template */
 struct virtchnl_proto_hdrs outer_ipv4_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6, proto_hdr_ipv6_frag}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = {
-	2, 1, {proto_hdr_ipv4}
+	2, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = {
-	2, 1, {proto_hdr_ipv6}
+	2, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv6_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs eth_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
+
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 /* rss type super set */
-- 
2.25.1



* [PATCH v2 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-04-08  8:02   ` [PATCH v2 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2022-04-08  8:02     ` [PATCH v2 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
  2022-04-08  8:02     ` [PATCH v2 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
@ 2022-04-08  8:02     ` Junfeng Guo
  2022-04-08  9:12       ` [PATCH v3 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  8:02 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch enables Protocol Agnostic Flow Offloading for FDIR in AVF.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/rel_notes/release_22_07.rst |  4 ++
 drivers/net/iavf/iavf_fdir.c           | 66 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c   |  6 +++
 drivers/net/iavf/iavf_generic_flow.h   |  3 ++
 4 files changed, 79 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..43eab0b6d5 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Intel iavf driver.**
+
+  * Added Protocol Agnostic Flow Offloading support in AVF Flow Director.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..bd0ae544da 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			 IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			 IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		 IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 			&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t  ipv6_addr_mask[16] = {
@@ -763,8 +766,71 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   RTE_FLOW_ERROR_TYPE_ITEM, item,
 					   "Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];
 
-- 
2.25.1



* [PATCH v3 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF
  2022-04-08  8:02     ` [PATCH v2 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
@ 2022-04-08  9:12       ` Junfeng Guo
  2022-04-08  9:12         ` [PATCH v3 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
                           ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  9:12 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch set enables Protocol Agnostic Flow (raw flow) Offloading
for FDIR in AVF.

[PATCH v3 1/3] common/iavf: support raw packet in protocol header
[PATCH v3 2/3] net/iavf: align with proto hdr struct change
[PATCH v3 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR

v3:
fix CI build issue.

v2:
add release notes and documentation update.

Junfeng Guo (3):
  common/iavf: support raw packet in protocol header
  net/iavf: align with proto hdr struct change
  net/iavf: enable Protocol Agnostic Flow Offloading FDIR

 doc/guides/rel_notes/release_22_07.rst |   4 +
 drivers/common/iavf/virtchnl.h         |  20 ++-
 drivers/net/iavf/iavf_fdir.c           |  67 +++++++++
 drivers/net/iavf/iavf_generic_flow.c   |   6 +
 drivers/net/iavf/iavf_generic_flow.h   |   3 +
 drivers/net/iavf/iavf_hash.c           | 180 +++++++++++++------------
 6 files changed, 188 insertions(+), 92 deletions(-)

-- 
2.25.1



* [PATCH v3 1/3] common/iavf: support raw packet in protocol header
  2022-04-08  9:12       ` [PATCH v3 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
@ 2022-04-08  9:12         ` Junfeng Guo
  2022-04-08  9:12         ` [PATCH v3 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
  2022-04-08  9:12         ` [PATCH v3 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  9:12 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch extends the existing virtchnl_proto_hdrs structure to allow
a VF to pass a pair of buffers, packet data and mask, that describe the
match pattern of a filter rule. The kernel PF driver is then requested
to parse the pair of buffers and figure out the low-level hardware
metadata (ptype, profile, field vector, etc.) needed to program the
expected FDIR or RSS rules.

INTERNAL ONLY:

This is a requirement from DPDK to support Protocol Agnostic Flow
Offloading [1]. We previously integrated the Parser Library [2] into
DPDK and enabled raw-packet-based FDIR and RSS support in the DPDK PF
driver [3][4]. To enable the same feature for the AVF driver, the
Virtual Channel needs to support passing raw packet filter rules.

[1] https://wiki.ith.intel.com/display/NPGCVL/Protocol+Agnostic+Flow+Offloading
[2] http://patchwork.dpdk.org/project/dpdk/list/?series=19057&archive=both&state=*
[3] http://patchwork.dpdk.org/project/dpdk/list/?series=20254&state=%2A&archive=both
[4] http://patchwork.dpdk.org/project/dpdk/list/?series=20291&state=%2A&archive=both

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/common/iavf/virtchnl.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 3e44eca7d8..3975229545 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1482,6 +1482,7 @@ enum virtchnl_vfr_states {
 };
 
 #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
+#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
 #define PROTO_HDR_SHIFT			5
 #define PROTO_HDR_FIELD_START(proto_hdr_type) \
 					(proto_hdr_type << PROTO_HDR_SHIFT)
@@ -1676,14 +1677,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
 struct virtchnl_proto_hdrs {
 	u8 tunnel_level;
 	/**
-	 * specify where protocol header start from.
+	 * specify where protocol header start from. must be 0 when sending a raw packet request.
 	 * 0 - from the outer layer
 	 * 1 - from the first inner layer
 	 * 2 - from the second inner layer
 	 * ....
-	 **/
-	int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
-	struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+	 */
+	int count;
+	/**
+	 * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS
+	 * must be 0 for a raw packet request.
+	 */
+	union {
+		struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+		struct {
+			u16 pkt_len;
+			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+		} raw;
+	};
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
-- 
2.25.1



* [PATCH v3 2/3] net/iavf: align with proto hdr struct change
  2022-04-08  9:12       ` [PATCH v3 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2022-04-08  9:12         ` [PATCH v3 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-04-08  9:12         ` Junfeng Guo
  2022-04-08  9:12         ` [PATCH v3 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  9:12 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

The virtchnl_proto_hdrs structure is extended with a union holding the
proto_hdr table and a new raw struct. Update the proto_hdrs template
initializers accordingly to align with the virtchnl changes.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 180 ++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 88 deletions(-)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index f35a07653b..278e75117d 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 /* proto_hdrs template */
 struct virtchnl_proto_hdrs outer_ipv4_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6, proto_hdr_ipv6_frag}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = {
-	2, 1, {proto_hdr_ipv4}
+	2, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = {
-	2, 1, {proto_hdr_ipv6}
+	2, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv6_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs eth_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
+
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 /* rss type super set */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v3 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-04-08  9:12       ` [PATCH v3 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
  2022-04-08  9:12         ` [PATCH v3 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
  2022-04-08  9:12         ` [PATCH v3 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
@ 2022-04-08  9:12         ` Junfeng Guo
  2022-04-21  3:28           ` [PATCH v4 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-04-08  9:12 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo

This patch enables Protocol Agnostic Flow Offloading for FDIR in AVF.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/rel_notes/release_22_07.rst |  4 ++
 drivers/net/iavf/iavf_fdir.c           | 67 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c   |  6 +++
 drivers/net/iavf/iavf_generic_flow.h   |  3 ++
 4 files changed, 80 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..43eab0b6d5 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Intel iavf driver.**
+
+  * Added Protocol Agnostic Flow Offloading support in AVF Flow Director.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..f236260502 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			 IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			 IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		 IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 			&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t  ipv6_addr_mask[16] = {
@@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   RTE_FLOW_ERROR_TYPE_ITEM, item,
 					   "Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW: {
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+		}
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v4 0/4] Enable Protocol Agnostic Flow Offloading in AVF
  2022-04-08  9:12         ` [PATCH v3 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
@ 2022-04-21  3:28           ` Junfeng Guo
  2022-04-21  3:28             ` [PATCH v4 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
                               ` (3 more replies)
  0 siblings, 4 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-21  3:28 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

This patch set enables Protocol Agnostic Flow (raw flow) Offloading
for FDIR and RSS in AVF.

[PATCH v4 1/4] common/iavf: support raw packet in protocol header
[PATCH v4 2/4] net/iavf: align with proto hdr struct change
[PATCH v4 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
[PATCH v4 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS

v4:
add raw flow support for RSS in AVF.

v3:
fix CI build issue.

v2:
add release notes and document update.

Junfeng Guo (3):
  common/iavf: support raw packet in protocol header
  net/iavf: align with proto hdr struct change
  net/iavf: enable Protocol Agnostic Flow Offloading FDIR

Ting Xu (1):
  net/iavf: support Protocol Agnostic Flow Offloading VF RSS

 doc/guides/rel_notes/release_22_07.rst |   1 +
 drivers/common/iavf/virtchnl.h         |  20 +-
 drivers/net/iavf/iavf_fdir.c           |  67 ++++++
 drivers/net/iavf/iavf_generic_flow.c   |   6 +
 drivers/net/iavf/iavf_generic_flow.h   |   3 +
 drivers/net/iavf/iavf_hash.c           | 276 +++++++++++++++++--------
 6 files changed, 281 insertions(+), 92 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v4 1/4] common/iavf: support raw packet in protocol header
  2022-04-21  3:28           ` [PATCH v4 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
@ 2022-04-21  3:28             ` Junfeng Guo
  2022-05-21  1:34               ` Zhang, Qi Z
  2022-04-21  3:28             ` [PATCH v4 2/4] net/iavf: align with proto hdr struct change Junfeng Guo
                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-04-21  3:28 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

The patch extends the existing virtchnl_proto_hdrs structure to allow
the VF to pass a pair of buffers as packet data and mask that describe
a match pattern of a filter rule. The kernel PF driver is then
requested to parse the pair of buffers and figure out the low-level
hardware metadata (ptype, profile, field vector, etc.) to program the
expected FDIR or RSS rules.
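
As a rough illustration (not part of this patch), the sketch below shows
how a raw packet request could be composed with the extended structure.
The helper name is hypothetical; u8/u16 are the kernel-style typedefs
already used by the iavf common code, and memset/memcpy come from
<string.h>.

    /* Minimal sketch: fill the new raw member of virtchnl_proto_hdrs. */
    static void
    fill_raw_proto_hdrs(struct virtchnl_proto_hdrs *hdrs,
                        const u8 *pkt, const u8 *msk, u16 len)
    {
        if (len > VIRTCHNL_MAX_SIZE_RAW_PACKET)
            return;

        memset(hdrs, 0, sizeof(*hdrs));
        /* tunnel_level and count must stay 0 for a raw packet request */
        hdrs->raw.pkt_len = len;
        memcpy(hdrs->raw.spec, pkt, len);
        memcpy(hdrs->raw.mask, msk, len);
    }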

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/common/iavf/virtchnl.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 249ae6ed23..c9f6cab55b 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1484,6 +1484,7 @@ enum virtchnl_vfr_states {
 };
 
 #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
+#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
 #define PROTO_HDR_SHIFT			5
 #define PROTO_HDR_FIELD_START(proto_hdr_type) \
 					(proto_hdr_type << PROTO_HDR_SHIFT)
@@ -1678,14 +1679,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
 struct virtchnl_proto_hdrs {
 	u8 tunnel_level;
 	/**
-	 * specify where protocol header start from.
+	 * specify where protocol header start from. must be 0 when sending a raw packet request.
 	 * 0 - from the outer layer
 	 * 1 - from the first inner layer
 	 * 2 - from the second inner layer
 	 * ....
-	 **/
-	int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
-	struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+	 */
+	int count;
+	/**
+	 * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS
+	 * must be 0 for a raw packet request.
+	 */
+	union {
+		struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+		struct {
+			u16 pkt_len;
+			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+		} raw;
+	};
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v4 2/4] net/iavf: align with proto hdr struct change
  2022-04-21  3:28           ` [PATCH v4 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2022-04-21  3:28             ` [PATCH v4 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-04-21  3:28             ` Junfeng Guo
  2022-04-21  3:28             ` [PATCH v4 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2022-04-21  3:28             ` [PATCH v4 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
  3 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-21  3:28 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

Structure virtchnl_proto_hdrs is extended with a union that holds the
proto_hdr table and the new raw struct. Thus, update the proto_hdrs
template initializers to align with the virtchnl change.
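
For illustration only (not part of this patch), the sketch below shows
one initializer before and after the change; the template name is made
up, while TUNNEL_LEVEL_OUTER and the proto_hdr_* macros are the ones
already defined in iavf_hash.c. The extra braces are needed because
proto_hdr[] now sits inside an unnamed union within virtchnl_proto_hdrs.

    /* before: {proto_hdr_ipv4_with_prot, proto_hdr_udp} */
    struct virtchnl_proto_hdrs example_udp_tmplt = {
        TUNNEL_LEVEL_OUTER, 2,
        {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
    };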

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 180 ++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 88 deletions(-)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index f35a07653b..278e75117d 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 /* proto_hdrs template */
 struct virtchnl_proto_hdrs outer_ipv4_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6, proto_hdr_ipv6_frag}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = {
-	2, 1, {proto_hdr_ipv4}
+	2, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = {
-	2, 1, {proto_hdr_ipv6}
+	2, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv6_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs eth_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
+
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 /* rss type super set */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v4 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-04-21  3:28           ` [PATCH v4 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2022-04-21  3:28             ` [PATCH v4 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
  2022-04-21  3:28             ` [PATCH v4 2/4] net/iavf: align with proto hdr struct change Junfeng Guo
@ 2022-04-21  3:28             ` Junfeng Guo
  2022-04-21  3:28             ` [PATCH v4 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
  3 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-04-21  3:28 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

This patch enables Protocol Agnostic Flow Offloading for FDIR in AVF.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/rel_notes/release_22_07.rst |  1 +
 drivers/net/iavf/iavf_fdir.c           | 67 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c   |  6 +++
 drivers/net/iavf/iavf_generic_flow.h   |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index f1b4057d70..5091dde171 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -59,6 +59,7 @@ New Features
 
   * Added Tx QoS queue rate limitation support.
   * Added quanta size configuration support.
+  * Added Protocol Agnostic Flow Offloading support in AVF FDIR and RSS.
 
 Removed Items
 -------------
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..f236260502 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			 IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			 IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		 IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 			&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t  ipv6_addr_mask[16] = {
@@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   RTE_FLOW_ERROR_TYPE_ITEM, item,
 					   "Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW: {
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+		}
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v4 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS
  2022-04-21  3:28           ` [PATCH v4 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
                               ` (2 preceding siblings ...)
  2022-04-21  3:28             ` [PATCH v4 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
@ 2022-04-21  3:28             ` Junfeng Guo
  2022-05-20  9:16               ` [PATCH v5 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  3 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-04-21  3:28 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

From: Ting Xu <ting.xu@intel.com>

Enable Protocol Agnostic Flow Offloading for RSS hash in VF.

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 96 ++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 278e75117d..42df7c4e48 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -37,6 +37,8 @@
 /* L2TPv2 */
 #define IAVF_PHINT_L2TPV2			BIT_ULL(9)
 #define IAVF_PHINT_L2TPV2_LEN			BIT_ULL(10)
+/* Raw */
+#define IAVF_PHINT_RAW				BIT_ULL(11)
 
 #define IAVF_PHINT_GTPU_MSK	(IAVF_PHINT_GTPU	| \
 				 IAVF_PHINT_GTPU_EH	| \
@@ -58,6 +60,7 @@ struct iavf_hash_match_type {
 struct iavf_rss_meta {
 	struct virtchnl_proto_hdrs proto_hdrs;
 	enum virtchnl_rss_algorithm rss_algorithm;
+	bool raw_ena;
 };
 
 struct iavf_hash_flow_cfg {
@@ -532,6 +535,7 @@ struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
  */
 static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	/* IPv4 */
+	{iavf_pattern_raw,				IAVF_INSET_NONE,		NULL},
 	{iavf_pattern_eth_ipv4,				IAVF_RSS_TYPE_OUTER_IPV4,	&outer_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_udp,			IAVF_RSS_TYPE_OUTER_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_tcp,			IAVF_RSS_TYPE_OUTER_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
@@ -804,6 +808,9 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 		}
 
 		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			*phint |= IAVF_PHINT_RAW;
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			if (!(*phint & IAVF_PHINT_GTPU_MSK) &&
 			    !(*phint & IAVF_PHINT_GRE) &&
@@ -873,6 +880,80 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 	return 0;
 }
 
+static int
+iavf_hash_parse_raw_pattern(const struct rte_flow_item *item,
+			struct iavf_rss_meta *meta)
+{
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
+	uint8_t *pkt_buf, *msk_buf;
+	uint8_t spec_len, pkt_len;
+	uint8_t tmp_val = 0;
+	uint8_t tmp_c = 0;
+	int i, j;
+
+	raw_spec = item->spec;
+	raw_mask = item->mask;
+
+	spec_len = strlen((char *)(uintptr_t)raw_spec->pattern);
+	if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
+		spec_len)
+		return -rte_errno;
+
+	pkt_len = spec_len / 2;
+
+	pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!pkt_buf)
+		return -ENOMEM;
+
+	msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!msk_buf)
+		return -ENOMEM;
+
+	/* convert string to int array */
+	for (i = 0, j = 0; i < spec_len; i += 2, j++) {
+		tmp_c = raw_spec->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_spec->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 0x57;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 0x37;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			msk_buf[j] = tmp_val * 16 + tmp_c - '0';
+	}
+
+	rte_memcpy(meta->proto_hdrs.raw.spec, pkt_buf, pkt_len);
+	rte_memcpy(meta->proto_hdrs.raw.mask, msk_buf, pkt_len);
+	meta->proto_hdrs.raw.pkt_len = pkt_len;
+
+	rte_free(pkt_buf);
+	rte_free(msk_buf);
+
+	return 0;
+}
+
 #define REFINE_PROTO_FLD(op, fld) \
 	VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
 #define REPALCE_PROTO_FLD(fld_1, fld_2) \
@@ -1387,6 +1468,10 @@ iavf_hash_parse_action(struct iavf_pattern_match_item *match_item,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"a non-NULL RSS queue is not supported");
 
+			/* If pattern type is raw, no need to refine rss type */
+			if (pattern_hint == IAVF_PHINT_RAW)
+				break;
+
 			/**
 			 * Check simultaneous use of SRC_ONLY and DST_ONLY
 			 * of the same level.
@@ -1453,6 +1538,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad,
 	if (ret)
 		goto error;
 
+	if (phint == IAVF_PHINT_RAW) {
+		rss_meta_ptr->raw_ena = true;
+		ret = iavf_hash_parse_raw_pattern(pattern, rss_meta_ptr);
+		if (ret) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Parse raw pattern failed");
+			goto error;
+		}
+	}
+
 	ret = iavf_hash_parse_action(pattern_match_item, actions, phint,
 				     rss_meta_ptr, error);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v5 0/4] Enable Protocol Agnostic Flow Offloading in AVF
  2022-04-21  3:28             ` [PATCH v4 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
@ 2022-05-20  9:16               ` Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
                                   ` (3 more replies)
  0 siblings, 4 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-05-20  9:16 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

This patch set enables Protocol Agnostic Flow (raw flow) Offloading
for FDIR and RSS in AVF, based on the Parser Library feature and the
existing rte_flow `raw` API.

[PATCH v5 1/4] common/iavf: support raw packet in protocol header
[PATCH v5 2/4] net/iavf: align with proto hdr struct change
[PATCH v5 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
[PATCH v5 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS

v5:
code rebase and update commit messages.

v4:
add raw flow support for RSS in AVF.

v3:
fix CI build issue.

v2:
add release notes and document update.

Junfeng Guo (3):
  common/iavf: support raw packet in protocol header
  net/iavf: align with proto hdr struct change
  net/iavf: enable Protocol Agnostic Flow Offloading FDIR

Ting Xu (1):
  net/iavf: support Protocol Agnostic Flow Offloading VF RSS

 doc/guides/rel_notes/release_22_07.rst |   1 +
 drivers/common/iavf/virtchnl.h         |  20 +-
 drivers/net/iavf/iavf_fdir.c           |  67 ++++++
 drivers/net/iavf/iavf_generic_flow.c   |   6 +
 drivers/net/iavf/iavf_generic_flow.h   |   3 +
 drivers/net/iavf/iavf_hash.c           | 276 +++++++++++++++++--------
 6 files changed, 281 insertions(+), 92 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v5 1/4] common/iavf: support raw packet in protocol header
  2022-05-20  9:16               ` [PATCH v5 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
@ 2022-05-20  9:16                 ` Junfeng Guo
  2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 2/4] net/iavf: align with proto hdr struct change Junfeng Guo
                                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-05-20  9:16 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

The patch extends the existing virtchnl_proto_hdrs structure to allow
the VF to pass a pair of buffers as packet data and mask that describe
a match pattern of a filter rule. The kernel PF driver is then
requested to parse the pair of buffers and figure out the low-level
hardware metadata (ptype, profile, field vector, etc.) to program the
expected FDIR or RSS rules.
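
As a quick size sanity check (assuming the usual 4-byte alignment of
the int count member after the u8 tunnel_level), the union keeps the
structure size unchanged: the proto_hdr table is 32 * 72 = 2304 bytes,
so 1 + 3 (pad) + 4 + 2304 = 2312 bytes still matches the existing
VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs), while the raw
member needs only 2 + 1024 + 1024 = 2050 bytes and therefore fits
inside the same union.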

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/common/iavf/virtchnl.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 2d49f95f84..f123daec8e 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1503,6 +1503,7 @@ enum virtchnl_vfr_states {
 };
 
 #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
+#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
 #define PROTO_HDR_SHIFT			5
 #define PROTO_HDR_FIELD_START(proto_hdr_type) \
 					(proto_hdr_type << PROTO_HDR_SHIFT)
@@ -1697,14 +1698,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
 struct virtchnl_proto_hdrs {
 	u8 tunnel_level;
 	/**
-	 * specify where protocol header start from.
+	 * specify where protocol header start from. must be 0 when sending a raw packet request.
 	 * 0 - from the outer layer
 	 * 1 - from the first inner layer
 	 * 2 - from the second inner layer
 	 * ....
-	 **/
-	int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
-	struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+	 */
+	int count;
+	/**
+	 * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS
+	 * must be 0 for a raw packet request.
+	 */
+	union {
+		struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+		struct {
+			u16 pkt_len;
+			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+		} raw;
+	};
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v5 2/4] net/iavf: align with proto hdr struct change
  2022-05-20  9:16               ` [PATCH v5 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-05-20  9:16                 ` Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
  3 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-05-20  9:16 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

Structure virtchnl_proto_hdrs is extended with a union that holds the
proto_hdr table and the new raw struct. Thus, update the proto_hdrs
template initializers to align with the virtchnl change.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 180 ++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 88 deletions(-)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index f35a07653b..278e75117d 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 /* proto_hdrs template */
 struct virtchnl_proto_hdrs outer_ipv4_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6, proto_hdr_ipv6_frag}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = {
-	2, 1, {proto_hdr_ipv4}
+	2, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = {
-	2, 1, {proto_hdr_ipv6}
+	2, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv6_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs eth_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
+
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 /* rss type super set */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v5 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-05-20  9:16               ` [PATCH v5 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 2/4] net/iavf: align with proto hdr struct change Junfeng Guo
@ 2022-05-20  9:16                 ` Junfeng Guo
  2022-05-20  9:16                 ` [PATCH v5 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
  3 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-05-20  9:16 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

This patch enables Protocol Agnostic Flow (raw flow) Offloading for
Flow Director (FDIR) in AVF, based on the Parser Library feature and
the existing rte_flow `raw` API.

The input spec and mask of the raw pattern are first parsed via the
Parser Library and then passed to the kernel driver to create the
flow rule.

As with PF FDIR, each raw flow rule requires:
1. A byte string of raw target packet bits.
2. A byte string that contains the mask of the target packet.

Here is an example:
an FDIR rule matching IPv4 destination address 1.2.3.4 and redirecting
to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 to indicate the IPv4
ethertype) is optional in these cases. To avoid redundancy, the mask
for 0x0800 (i.e., 0xFFFF) is omitted from the mask byte string example.
The '0x' prefix for the spec and mask byte (hex) strings is also
omitted here.
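
For reference, here is a minimal sketch (not part of this patch) of the
equivalent rte_flow call from an application. The function name, port
id and error handling are illustrative; the hex character strings are
passed as the raw item's pattern bytes, which is how this patch's
parser consumes them.

    #include <stdint.h>
    #include <rte_flow.h>

    static struct rte_flow *
    create_raw_fdir_rule(uint16_t port_id, const uint8_t *spec_hex,
                         const uint8_t *mask_hex, uint16_t len)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_raw raw_spec = { .length = len, .pattern = spec_hex };
        struct rte_flow_item_raw raw_mask = { .length = len, .pattern = mask_hex };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_RAW, .spec = &raw_spec, .mask = &raw_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 3 };
        struct rte_flow_action_mark mark = { .id = 3 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        /* spec_hex/mask_hex hold the hex character strings shown above;
         * the PMD turns each pair of hex characters into one packet byte. */
        return rte_flow_create(port_id, &attr, pattern, actions, &err);
    }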

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/rel_notes/release_22_07.rst |  1 +
 drivers/net/iavf/iavf_fdir.c           | 67 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c   |  6 +++
 drivers/net/iavf/iavf_generic_flow.h   |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a0eb6ab61b..829fa6047e 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,7 @@ New Features
   * Added Tx QoS queue rate limitation support.
   * Added quanta size configuration support.
   * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
+  * Added Protocol Agnostic Flow Offloading support in AVF FDIR and RSS.
 
 * **Updated Intel ice driver.**
 
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..f236260502 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			 IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			 IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		 IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 			&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t  ipv6_addr_mask[16] = {
@@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   RTE_FLOW_ERROR_TYPE_ITEM, item,
 					   "Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW: {
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+		}
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v5 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS
  2022-05-20  9:16               ` [PATCH v5 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
                                   ` (2 preceding siblings ...)
  2022-05-20  9:16                 ` [PATCH v5 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
@ 2022-05-20  9:16                 ` Junfeng Guo
  3 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-05-20  9:16 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

From: Ting Xu <ting.xu@intel.com>

Enable Protocol Agnostic Flow Offloading for RSS hash in VF. This supports
raw pattern flow rule creation in VF based on the Parser Library feature.
The VF parses the spec and mask input of the raw pattern and passes them to
the kernel driver to create the flow rule. The existing rte_flow raw API is used.

command example:
RSS hash for ipv4-src-dst:
flow create 0 ingress pattern raw pattern spec
00000000000000000000000008004500001400004000401000000000000000000000
pattern mask
0000000000000000000000000000000000000000000000000000ffffffffffffffff /
end actions rss queues end / end
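
For reference, the same rule can also be created programmatically through
the rte_flow API. The sketch below is illustrative only; the helper name,
port_id and the omitted error handling are placeholders, not part of this
patch:

#include <string.h>
#include <rte_flow.h>

/* Illustrative: create the raw RSS rule shown above on port_id. */
static struct rte_flow *
create_raw_rss_rule(uint16_t port_id, struct rte_flow_error *err)
{
	/* Hex strings exactly as passed on the testpmd command line. */
	static const uint8_t spec[] =
		"00000000000000000000000008004500001400004000401000000000000000000000";
	static const uint8_t mask[] =
		"0000000000000000000000000000000000000000000000000000ffffffffffffffff";

	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item_raw raw_spec = {
		.pattern = spec,
		.length = (uint16_t)strlen((const char *)spec),
	};
	const struct rte_flow_item_raw raw_mask = {
		.pattern = mask,
		.length = (uint16_t)strlen((const char *)mask),
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_RAW,
		  .spec = &raw_spec, .mask = &raw_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	/* "rss queues end" corresponds to an RSS action with no queue list. */
	const struct rte_flow_action_rss rss = { .queue_num = 0 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}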

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 96 ++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 278e75117d..42df7c4e48 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -37,6 +37,8 @@
 /* L2TPv2 */
 #define IAVF_PHINT_L2TPV2			BIT_ULL(9)
 #define IAVF_PHINT_L2TPV2_LEN			BIT_ULL(10)
+/* Raw */
+#define IAVF_PHINT_RAW				BIT_ULL(11)
 
 #define IAVF_PHINT_GTPU_MSK	(IAVF_PHINT_GTPU	| \
 				 IAVF_PHINT_GTPU_EH	| \
@@ -58,6 +60,7 @@ struct iavf_hash_match_type {
 struct iavf_rss_meta {
 	struct virtchnl_proto_hdrs proto_hdrs;
 	enum virtchnl_rss_algorithm rss_algorithm;
+	bool raw_ena;
 };
 
 struct iavf_hash_flow_cfg {
@@ -532,6 +535,7 @@ struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
  */
 static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	/* IPv4 */
+	{iavf_pattern_raw,				IAVF_INSET_NONE,		NULL},
 	{iavf_pattern_eth_ipv4,				IAVF_RSS_TYPE_OUTER_IPV4,	&outer_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_udp,			IAVF_RSS_TYPE_OUTER_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_tcp,			IAVF_RSS_TYPE_OUTER_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
@@ -804,6 +808,9 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 		}
 
 		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			*phint |= IAVF_PHINT_RAW;
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			if (!(*phint & IAVF_PHINT_GTPU_MSK) &&
 			    !(*phint & IAVF_PHINT_GRE) &&
@@ -873,6 +880,80 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 	return 0;
 }
 
+static int
+iavf_hash_parse_raw_pattern(const struct rte_flow_item *item,
+			struct iavf_rss_meta *meta)
+{
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
+	uint8_t *pkt_buf, *msk_buf;
+	uint8_t spec_len, pkt_len;
+	uint8_t tmp_val = 0;
+	uint8_t tmp_c = 0;
+	int i, j;
+
+	raw_spec = item->spec;
+	raw_mask = item->mask;
+
+	spec_len = strlen((char *)(uintptr_t)raw_spec->pattern);
+	if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
+		spec_len)
+		return -rte_errno;
+
+	pkt_len = spec_len / 2;
+
+	pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!pkt_buf)
+		return -ENOMEM;
+
+	msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!msk_buf)
+		return -ENOMEM;
+
+	/* convert string to int array */
+	for (i = 0, j = 0; i < spec_len; i += 2, j++) {
+		tmp_c = raw_spec->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_spec->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 0x57;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 0x37;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			msk_buf[j] = tmp_val * 16 + tmp_c - '0';
+	}
+
+	rte_memcpy(meta->proto_hdrs.raw.spec, pkt_buf, pkt_len);
+	rte_memcpy(meta->proto_hdrs.raw.mask, msk_buf, pkt_len);
+	meta->proto_hdrs.raw.pkt_len = pkt_len;
+
+	rte_free(pkt_buf);
+	rte_free(msk_buf);
+
+	return 0;
+}
+
 #define REFINE_PROTO_FLD(op, fld) \
 	VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
 #define REPALCE_PROTO_FLD(fld_1, fld_2) \
@@ -1387,6 +1468,10 @@ iavf_hash_parse_action(struct iavf_pattern_match_item *match_item,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"a non-NULL RSS queue is not supported");
 
+			/* If pattern type is raw, no need to refine rss type */
+			if (pattern_hint == IAVF_PHINT_RAW)
+				break;
+
 			/**
 			 * Check simultaneous use of SRC_ONLY and DST_ONLY
 			 * of the same level.
@@ -1453,6 +1538,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad,
 	if (ret)
 		goto error;
 
+	if (phint == IAVF_PHINT_RAW) {
+		rss_meta_ptr->raw_ena = true;
+		ret = iavf_hash_parse_raw_pattern(pattern, rss_meta_ptr);
+		if (ret) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Parse raw pattern failed");
+			goto error;
+		}
+	}
+
 	ret = iavf_hash_parse_action(pattern_match_item, actions, phint,
 				     rss_meta_ptr, error);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 1/4] common/iavf: support raw packet in protocol header
  2022-04-21  3:28             ` [PATCH v4 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-05-21  1:34               ` Zhang, Qi Z
  0 siblings, 0 replies; 35+ messages in thread
From: Zhang, Qi Z @ 2022-05-21  1:34 UTC (permalink / raw)
  To: Guo, Junfeng, Wu, Jingjing, Xing, Beilei; +Cc: dev, Xu, Ting



> -----Original Message-----
> From: Guo, Junfeng <junfeng.guo@intel.com>
> Sent: Thursday, April 21, 2022 11:29 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Junfeng
> <junfeng.guo@intel.com>
> Subject: [PATCH v4 1/4] common/iavf: support raw packet in protocol header
> 
> The patch extends existing virtchnl_proto_hdrs structure to allow VF to pass a
> pair of buffers as packet data and mask that describe a match pattern of a
> filter rule. Then the kernel PF driver is requested to parse the pair of buffer and
> figure out low level hardware metadata (ptype, profile, field vector.. ) to
> program the expected FDIR or RSS rules.
> 
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>

This patch broke the compilation; please make sure every individual patch is compilable.

> ---
>  drivers/common/iavf/virtchnl.h | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
> index 249ae6ed23..c9f6cab55b 100644
> --- a/drivers/common/iavf/virtchnl.h
> +++ b/drivers/common/iavf/virtchnl.h
> @@ -1484,6 +1484,7 @@ enum virtchnl_vfr_states {  };
> 
>  #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
> +#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
>  #define PROTO_HDR_SHIFT			5
>  #define PROTO_HDR_FIELD_START(proto_hdr_type) \
>  					(proto_hdr_type <<
> PROTO_HDR_SHIFT) @@ -1678,14 +1679,25 @@
> VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);  struct
> virtchnl_proto_hdrs {
>  	u8 tunnel_level;
>  	/**
> -	 * specify where protocol header start from.
> +	 * specify where protocol header start from. must be 0 when sending a
> raw packet request.
>  	 * 0 - from the outer layer
>  	 * 1 - from the first inner layer
>  	 * 2 - from the second inner layer
>  	 * ....
> -	 **/
> -	int count; /* the proto layers must <
> VIRTCHNL_MAX_NUM_PROTO_HDRS */
> -	struct virtchnl_proto_hdr
> proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
> +	 */
> +	int count;
> +	/**
> +	 * number of proto layers, must <
> VIRTCHNL_MAX_NUM_PROTO_HDRS
> +	 * must be 0 for a raw packet request.
> +	 */
> +	union {
> +		struct virtchnl_proto_hdr
> proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
> +		struct {
> +			u16 pkt_len;
> +			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
> +			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
> +		} raw;
> +	};
>  };
> 
>  VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF
  2022-05-20  9:16                 ` [PATCH v5 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-05-23  2:31                   ` Junfeng Guo
  2022-05-23  2:31                     ` [PATCH v6 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
                                       ` (5 more replies)
  0 siblings, 6 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-05-23  2:31 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

This patch set enables Protocol Agnostic Flow (raw flow) Offloading
for FDIR and RSS in AVF, based on the Parser Library feature and the
existing rte_flow `raw` API.

[PATCH v6 1/3] common/iavf: support raw packet in protocol header
[PATCH v6 2/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
[PATCH v6 3/3] net/iavf: support Protocol Agnostic Flow Offloading VF RSS

v6:
merge two commits into one so that each commit compiles on its own.

v5:
code rebase and update commit messages.

v4:
add raw flow support for RSS in AVF.

v3:
fix CI build issue.

v2:
add release notes and documentation updates.


Junfeng Guo (2):
  common/iavf: support raw packet in protocol header
  net/iavf: enable Protocol Agnostic Flow Offloading FDIR

Ting Xu (1):
  net/iavf: support Protocol Agnostic Flow Offloading VF RSS

 doc/guides/rel_notes/release_22_07.rst |   1 +
 drivers/common/iavf/virtchnl.h         |  20 +-
 drivers/net/iavf/iavf_fdir.c           |  67 ++++++
 drivers/net/iavf/iavf_generic_flow.c   |   6 +
 drivers/net/iavf/iavf_generic_flow.h   |   3 +
 drivers/net/iavf/iavf_hash.c           | 276 +++++++++++++++++--------
 6 files changed, 281 insertions(+), 92 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v6 1/3] common/iavf: support raw packet in protocol header
  2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
@ 2022-05-23  2:31                     ` Junfeng Guo
  2022-05-23  2:31                     ` [PATCH 1/2] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
                                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-05-23  2:31 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

The patch extends the existing virtchnl_proto_hdrs structure to allow a VF
to pass a pair of buffers as packet data and mask that describe
a match pattern of a filter rule. The kernel PF driver is then requested
to parse the pair of buffers and figure out the low-level hardware metadata
(ptype, profile, field vector, etc.) to program the expected FDIR or RSS
rules.

Also update the proto_hdrs template initialization to align with the virtchnl changes.
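
A raw packet request is expected to populate the extended structure roughly
as in the sketch below (illustrative only, derived from the structure
comments; the helper name and its callers are hypothetical):

#include <stdint.h>
#include <string.h>
/* Assumes struct virtchnl_proto_hdrs and VIRTCHNL_MAX_SIZE_RAW_PACKET
 * from virtchnl.h are in scope. */

static void
fill_raw_proto_hdrs(struct virtchnl_proto_hdrs *hdrs,
		    const uint8_t *pkt_spec, const uint8_t *pkt_mask,
		    uint16_t pkt_len)
{
	/* Both must be 0 when sending a raw packet request. */
	hdrs->tunnel_level = 0;
	hdrs->count = 0;

	if (pkt_len > VIRTCHNL_MAX_SIZE_RAW_PACKET)
		pkt_len = VIRTCHNL_MAX_SIZE_RAW_PACKET;

	hdrs->raw.pkt_len = pkt_len;
	memcpy(hdrs->raw.spec, pkt_spec, pkt_len);
	memcpy(hdrs->raw.mask, pkt_mask, pkt_len);
}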

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/common/iavf/virtchnl.h |  20 +++-
 drivers/net/iavf/iavf_hash.c   | 180 +++++++++++++++++----------------
 2 files changed, 108 insertions(+), 92 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 2d49f95f84..f123daec8e 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1503,6 +1503,7 @@ enum virtchnl_vfr_states {
 };
 
 #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
+#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
 #define PROTO_HDR_SHIFT			5
 #define PROTO_HDR_FIELD_START(proto_hdr_type) \
 					(proto_hdr_type << PROTO_HDR_SHIFT)
@@ -1697,14 +1698,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
 struct virtchnl_proto_hdrs {
 	u8 tunnel_level;
 	/**
-	 * specify where protocol header start from.
+	 * specify where protocol header start from. must be 0 when sending a raw packet request.
 	 * 0 - from the outer layer
 	 * 1 - from the first inner layer
 	 * 2 - from the second inner layer
 	 * ....
-	 **/
-	int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
-	struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+	 */
+	int count;
+	/**
+	 * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS
+	 * must be 0 for a raw packet request.
+	 */
+	union {
+		struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+		struct {
+			u16 pkt_len;
+			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+		} raw;
+	};
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index f35a07653b..278e75117d 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 /* proto_hdrs template */
 struct virtchnl_proto_hdrs outer_ipv4_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6, proto_hdr_ipv6_frag}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = {
-	2, 1, {proto_hdr_ipv4}
+	2, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = {
-	2, 1, {proto_hdr_ipv6}
+	2, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv6_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs eth_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
+
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 /* rss type super set */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH 1/2] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2022-05-23  2:31                     ` [PATCH v6 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
@ 2022-05-23  2:31                     ` Junfeng Guo
  2022-05-23  2:44                       ` Guo, Junfeng
  2022-05-23  2:31                     ` [PATCH v6 2/3] " Junfeng Guo
                                       ` (3 subsequent siblings)
  5 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-05-23  2:31 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

This patch enables Protocol Agnostic Flow (raw flow) Offloading for Flow
Director (FDIR) in AVF, based on the Parser Library feature and the
existing rte_flow `raw` API.

The input spec and mask of the raw pattern are first parsed via the
Parser Library and then passed to the kernel driver to create the
flow rule.

Similar to PF FDIR, each raw flow requires:
1. A byte string of the raw target packet bits.
2. A byte string containing the mask of the target packet.

Here is an example:
FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 for the IPv4 ethertype)
is optional in our cases. To avoid redundancy, we simply omit the mask
of 0x0800 (which would be 0xFFFF) in the mask byte string example. The
'0x' prefix for the spec and mask byte (hex) strings is also omitted here.
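
The testpmd command above maps onto the rte_flow API roughly as follows;
this is a minimal sketch under the stated assumptions (the helper name,
port_id and the omitted error handling are placeholders, not part of this
patch):

#include <string.h>
#include <rte_flow.h>

/* Illustrative: install the rule from the example above on port_id
 * (match IPv4 dst 1.2.3.4, send to queue 3, mark id 3). */
static struct rte_flow *
create_raw_fdir_rule(uint16_t port_id, struct rte_flow_error *err)
{
	static const uint8_t spec[] =
		"00000000000000000000000008004500001400004000401000000000000001020304";
	static const uint8_t mask[] =
		"000000000000000000000000000000000000000000000000000000000000ffffffff";

	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item_raw raw_spec = {
		.pattern = spec,
		.length = (uint16_t)strlen((const char *)spec),
	};
	const struct rte_flow_item_raw raw_mask = {
		.pattern = mask,
		.length = (uint16_t)strlen((const char *)mask),
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_RAW,
		  .spec = &raw_spec, .mask = &raw_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	const struct rte_flow_action_queue queue = { .index = 3 };
	const struct rte_flow_action_mark mark = { .id = 3 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}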

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/rel_notes/release_22_07.rst |  1 +
 drivers/net/iavf/iavf_fdir.c           | 67 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c   |  6 +++
 drivers/net/iavf/iavf_generic_flow.h   |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a0eb6ab61b..829fa6047e 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,7 @@ New Features
   * Added Tx QoS queue rate limitation support.
   * Added quanta size configuration support.
   * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
+  * Added Protocol Agnostic Flow Offloading support in AVF FDIR and RSS.
 
 * **Updated Intel ice driver.**
 
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..f236260502 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			 IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			 IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		 IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 			&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t  ipv6_addr_mask[16] = {
@@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   RTE_FLOW_ERROR_TYPE_ITEM, item,
 					   "Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW: {
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+		}
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v6 2/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
  2022-05-23  2:31                     ` [PATCH v6 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
  2022-05-23  2:31                     ` [PATCH 1/2] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
@ 2022-05-23  2:31                     ` Junfeng Guo
  2022-05-23  5:10                       ` Zhang, Qi Z
  2022-05-23  2:31                     ` [PATCH 2/2] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
                                       ` (2 subsequent siblings)
  5 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-05-23  2:31 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

This patch enables Protocol Agnostic Flow (raw flow) Offloading for Flow
Director (FDIR) in AVF, based on the Parser Library feature and the
existing rte_flow `raw` API.

The input spec and mask of the raw pattern are first parsed via the
Parser Library and then passed to the kernel driver to create the
flow rule.

Similar to PF FDIR, each raw flow requires:
1. A byte string of the raw target packet bits.
2. A byte string containing the mask of the target packet.

Here is an example:
FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 for the IPv4 ethertype)
is optional in our cases. To avoid redundancy, we simply omit the mask
of 0x0800 (which would be 0xFFFF) in the mask byte string example. The
'0x' prefix for the spec and mask byte (hex) strings is also omitted here.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/rel_notes/release_22_07.rst |  1 +
 drivers/net/iavf/iavf_fdir.c           | 67 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c   |  6 +++
 drivers/net/iavf/iavf_generic_flow.h   |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a0eb6ab61b..829fa6047e 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,7 @@ New Features
   * Added Tx QoS queue rate limitation support.
   * Added quanta size configuration support.
   * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
+  * Added Protocol Agnostic Flow Offloading support in AVF FDIR and RSS.
 
 * **Updated Intel ice driver.**
 
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..f236260502 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			 IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			 IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		 IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 			&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t  ipv6_addr_mask[16] = {
@@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   RTE_FLOW_ERROR_TYPE_ITEM, item,
 					   "Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW: {
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+		}
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH 2/2] net/iavf: support Protocol Agnostic Flow Offloading VF RSS
  2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
                                       ` (2 preceding siblings ...)
  2022-05-23  2:31                     ` [PATCH v6 2/3] " Junfeng Guo
@ 2022-05-23  2:31                     ` Junfeng Guo
  2022-05-23  2:45                       ` Guo, Junfeng
  2022-05-23  2:31                     ` [PATCH v6 3/3] " Junfeng Guo
  2022-05-23  5:09                     ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Zhang, Qi Z
  5 siblings, 1 reply; 35+ messages in thread
From: Junfeng Guo @ 2022-05-23  2:31 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

From: Ting Xu <ting.xu@intel.com>

Enable Protocol Agnostic Flow Offloading for RSS hash in VF. This supports
raw pattern flow rule creation in VF based on the Parser Library feature.
The VF parses the spec and mask input of the raw pattern and passes them to
the kernel driver to create the flow rule. The existing rte_flow raw API is used.

command example:
RSS hash for ipv4-src-dst:
flow create 0 ingress pattern raw pattern spec
00000000000000000000000008004500001400004000401000000000000000000000
pattern mask
0000000000000000000000000000000000000000000000000000ffffffffffffffff /
end actions rss queues end / end

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 96 ++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 278e75117d..42df7c4e48 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -37,6 +37,8 @@
 /* L2TPv2 */
 #define IAVF_PHINT_L2TPV2			BIT_ULL(9)
 #define IAVF_PHINT_L2TPV2_LEN			BIT_ULL(10)
+/* Raw */
+#define IAVF_PHINT_RAW				BIT_ULL(11)
 
 #define IAVF_PHINT_GTPU_MSK	(IAVF_PHINT_GTPU	| \
 				 IAVF_PHINT_GTPU_EH	| \
@@ -58,6 +60,7 @@ struct iavf_hash_match_type {
 struct iavf_rss_meta {
 	struct virtchnl_proto_hdrs proto_hdrs;
 	enum virtchnl_rss_algorithm rss_algorithm;
+	bool raw_ena;
 };
 
 struct iavf_hash_flow_cfg {
@@ -532,6 +535,7 @@ struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
  */
 static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	/* IPv4 */
+	{iavf_pattern_raw,				IAVF_INSET_NONE,		NULL},
 	{iavf_pattern_eth_ipv4,				IAVF_RSS_TYPE_OUTER_IPV4,	&outer_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_udp,			IAVF_RSS_TYPE_OUTER_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_tcp,			IAVF_RSS_TYPE_OUTER_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
@@ -804,6 +808,9 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 		}
 
 		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			*phint |= IAVF_PHINT_RAW;
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			if (!(*phint & IAVF_PHINT_GTPU_MSK) &&
 			    !(*phint & IAVF_PHINT_GRE) &&
@@ -873,6 +880,80 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 	return 0;
 }
 
+static int
+iavf_hash_parse_raw_pattern(const struct rte_flow_item *item,
+			struct iavf_rss_meta *meta)
+{
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
+	uint8_t *pkt_buf, *msk_buf;
+	uint8_t spec_len, pkt_len;
+	uint8_t tmp_val = 0;
+	uint8_t tmp_c = 0;
+	int i, j;
+
+	raw_spec = item->spec;
+	raw_mask = item->mask;
+
+	spec_len = strlen((char *)(uintptr_t)raw_spec->pattern);
+	if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
+		spec_len)
+		return -rte_errno;
+
+	pkt_len = spec_len / 2;
+
+	pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!pkt_buf)
+		return -ENOMEM;
+
+	msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!msk_buf)
+		return -ENOMEM;
+
+	/* convert string to int array */
+	for (i = 0, j = 0; i < spec_len; i += 2, j++) {
+		tmp_c = raw_spec->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_spec->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 0x57;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 0x37;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			msk_buf[j] = tmp_val * 16 + tmp_c - '0';
+	}
+
+	rte_memcpy(meta->proto_hdrs.raw.spec, pkt_buf, pkt_len);
+	rte_memcpy(meta->proto_hdrs.raw.mask, msk_buf, pkt_len);
+	meta->proto_hdrs.raw.pkt_len = pkt_len;
+
+	rte_free(pkt_buf);
+	rte_free(msk_buf);
+
+	return 0;
+}
+
 #define REFINE_PROTO_FLD(op, fld) \
 	VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
 #define REPALCE_PROTO_FLD(fld_1, fld_2) \
@@ -1387,6 +1468,10 @@ iavf_hash_parse_action(struct iavf_pattern_match_item *match_item,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"a non-NULL RSS queue is not supported");
 
+			/* If pattern type is raw, no need to refine rss type */
+			if (pattern_hint == IAVF_PHINT_RAW)
+				break;
+
 			/**
 			 * Check simultaneous use of SRC_ONLY and DST_ONLY
 			 * of the same level.
@@ -1453,6 +1538,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad,
 	if (ret)
 		goto error;
 
+	if (phint == IAVF_PHINT_RAW) {
+		rss_meta_ptr->raw_ena = true;
+		ret = iavf_hash_parse_raw_pattern(pattern, rss_meta_ptr);
+		if (ret) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Parse raw pattern failed");
+			goto error;
+		}
+	}
+
 	ret = iavf_hash_parse_action(pattern_match_item, actions, phint,
 				     rss_meta_ptr, error);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v6 3/3] net/iavf: support Protocol Agnostic Flow Offloading VF RSS
  2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
                                       ` (3 preceding siblings ...)
  2022-05-23  2:31                     ` [PATCH 2/2] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
@ 2022-05-23  2:31                     ` Junfeng Guo
  2022-05-23  5:09                     ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Zhang, Qi Z
  5 siblings, 0 replies; 35+ messages in thread
From: Junfeng Guo @ 2022-05-23  2:31 UTC (permalink / raw)
  To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, ting.xu, junfeng.guo

From: Ting Xu <ting.xu@intel.com>

Enable Protocol Agnostic Flow Offloading for RSS hash in VF. This supports
raw pattern flow rule creation in VF based on the Parser Library feature.
The VF parses the spec and mask input of the raw pattern and passes them to
the kernel driver to create the flow rule. The existing rte_flow raw API is used.

command example:
RSS hash for ipv4-src-dst:
flow create 0 ingress pattern raw pattern spec
00000000000000000000000008004500001400004000401000000000000000000000
pattern mask
0000000000000000000000000000000000000000000000000000ffffffffffffffff /
end actions rss queues end / end

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/iavf/iavf_hash.c | 96 ++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 278e75117d..42df7c4e48 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -37,6 +37,8 @@
 /* L2TPv2 */
 #define IAVF_PHINT_L2TPV2			BIT_ULL(9)
 #define IAVF_PHINT_L2TPV2_LEN			BIT_ULL(10)
+/* Raw */
+#define IAVF_PHINT_RAW				BIT_ULL(11)
 
 #define IAVF_PHINT_GTPU_MSK	(IAVF_PHINT_GTPU	| \
 				 IAVF_PHINT_GTPU_EH	| \
@@ -58,6 +60,7 @@ struct iavf_hash_match_type {
 struct iavf_rss_meta {
 	struct virtchnl_proto_hdrs proto_hdrs;
 	enum virtchnl_rss_algorithm rss_algorithm;
+	bool raw_ena;
 };
 
 struct iavf_hash_flow_cfg {
@@ -532,6 +535,7 @@ struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
  */
 static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	/* IPv4 */
+	{iavf_pattern_raw,				IAVF_INSET_NONE,		NULL},
 	{iavf_pattern_eth_ipv4,				IAVF_RSS_TYPE_OUTER_IPV4,	&outer_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_udp,			IAVF_RSS_TYPE_OUTER_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_tcp,			IAVF_RSS_TYPE_OUTER_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
@@ -804,6 +808,9 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 		}
 
 		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			*phint |= IAVF_PHINT_RAW;
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			if (!(*phint & IAVF_PHINT_GTPU_MSK) &&
 			    !(*phint & IAVF_PHINT_GRE) &&
@@ -873,6 +880,80 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 	return 0;
 }
 
+static int
+iavf_hash_parse_raw_pattern(const struct rte_flow_item *item,
+			struct iavf_rss_meta *meta)
+{
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
+	uint8_t *pkt_buf, *msk_buf;
+	uint8_t spec_len, pkt_len;
+	uint8_t tmp_val = 0;
+	uint8_t tmp_c = 0;
+	int i, j;
+
+	raw_spec = item->spec;
+	raw_mask = item->mask;
+
+	spec_len = strlen((char *)(uintptr_t)raw_spec->pattern);
+	if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
+		spec_len)
+		return -rte_errno;
+
+	pkt_len = spec_len / 2;
+
+	pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!pkt_buf)
+		return -ENOMEM;
+
+	msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!msk_buf)
+		return -ENOMEM;
+
+	/* convert string to int array */
+	for (i = 0, j = 0; i < spec_len; i += 2, j++) {
+		tmp_c = raw_spec->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_spec->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 0x57;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 0x37;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			msk_buf[j] = tmp_val * 16 + tmp_c - '0';
+	}
+
+	rte_memcpy(meta->proto_hdrs.raw.spec, pkt_buf, pkt_len);
+	rte_memcpy(meta->proto_hdrs.raw.mask, msk_buf, pkt_len);
+	meta->proto_hdrs.raw.pkt_len = pkt_len;
+
+	rte_free(pkt_buf);
+	rte_free(msk_buf);
+
+	return 0;
+}
+
 #define REFINE_PROTO_FLD(op, fld) \
 	VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
 #define REPALCE_PROTO_FLD(fld_1, fld_2) \
@@ -1387,6 +1468,10 @@ iavf_hash_parse_action(struct iavf_pattern_match_item *match_item,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"a non-NULL RSS queue is not supported");
 
+			/* If pattern type is raw, no need to refine rss type */
+			if (pattern_hint == IAVF_PHINT_RAW)
+				break;
+
 			/**
 			 * Check simultaneous use of SRC_ONLY and DST_ONLY
 			 * of the same level.
@@ -1453,6 +1538,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad,
 	if (ret)
 		goto error;
 
+	if (phint == IAVF_PHINT_RAW) {
+		rss_meta_ptr->raw_ena = true;
+		ret = iavf_hash_parse_raw_pattern(pattern, rss_meta_ptr);
+		if (ret) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Parse raw pattern failed");
+			goto error;
+		}
+	}
+
 	ret = iavf_hash_parse_action(pattern_match_item, actions, phint,
 				     rss_meta_ptr, error);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 1/2] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-05-23  2:31                     ` [PATCH 1/2] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
@ 2022-05-23  2:44                       ` Guo, Junfeng
  0 siblings, 0 replies; 35+ messages in thread
From: Guo, Junfeng @ 2022-05-23  2:44 UTC (permalink / raw)
  To: Zhang, Qi Z, Wu, Jingjing, Xing, Beilei; +Cc: dev, Xu, Ting

Sorry, this patch was sent by mistake. Please drop this one. Thanks!

> -----Original Message-----
> From: Guo, Junfeng <junfeng.guo@intel.com>
> Sent: Monday, May 23, 2022 10:32
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Junfeng
> <junfeng.guo@intel.com>
> Subject: [PATCH 1/2] net/iavf: enable Protocol Agnostic Flow Offloading
> FDIR
> 
> This patch enabled Protocol Agnostic Flow (raw flow) Offloading Flow
> Director (FDIR) in AVF, based on the Parser Library feature and the
> existing rte_flow `raw` API.
> 
> The input spec and mask of raw pattern are first parsed via the
> Parser Library, and then passed to the kernel driver to create the
> flow rule.
> 
> Similar as PF FDIR, each raw flow requires:
> 1. A byte string of raw target packet bits.
> 2. A byte string contains mask of target packet.
> 
> Here is an example:
> FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3:
> 
> flow create 0 ingress pattern raw \
> pattern spec \
> 000000000000000000000000080045000014000040004010000000000000
> 01020304 \
> pattern mask \
> 000000000000000000000000000000000000000000000000000000000000
> ffffffff \
> / end actions queue index 3 / mark id 3 / end
> 
> Note that mask of some key bits (e.g., 0x0800 to indicate ipv4 proto)
> is optional in our cases. To avoid redundancy, we just omit the mask
> of 0x0800 (with 0xFFFF) in the mask byte string example. The prefix
> '0x' for the spec and mask byte (hex) strings are also omitted here.
> 
> Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
> ---
>  doc/guides/rel_notes/release_22_07.rst |  1 +
>  drivers/net/iavf/iavf_fdir.c           | 67 ++++++++++++++++++++++++++
>  drivers/net/iavf/iavf_generic_flow.c   |  6 +++
>  drivers/net/iavf/iavf_generic_flow.h   |  3 ++
>  4 files changed, 77 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_22_07.rst
> b/doc/guides/rel_notes/release_22_07.rst
> index a0eb6ab61b..829fa6047e 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -65,6 +65,7 @@ New Features
>    * Added Tx QoS queue rate limitation support.
>    * Added quanta size configuration support.
>    * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
> +  * Added Protocol Agnostic Flow Offloading support in AVF FDIR and RSS.
> 
>  * **Updated Intel ice driver.**
> 
> diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
> index e9a3566c0d..f236260502 100644
> --- a/drivers/net/iavf/iavf_fdir.c
> +++ b/drivers/net/iavf/iavf_fdir.c
> @@ -194,6 +194,7 @@
>  	IAVF_INSET_TUN_TCP_DST_PORT)
> 
>  static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
> +	{iavf_pattern_raw,			 IAVF_INSET_NONE,
> 	IAVF_INSET_NONE},
>  	{iavf_pattern_ethertype,		 IAVF_FDIR_INSET_ETH,
> 	IAVF_INSET_NONE},
>  	{iavf_pattern_eth_ipv4,
> IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
>  	{iavf_pattern_eth_ipv4_udp,
> IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
> @@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct
> iavf_adapter *ad,
>  	struct virtchnl_proto_hdrs *hdrs =
>  			&filter->add_fltr.rule_cfg.proto_hdrs;
>  	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
> +	const struct rte_flow_item_raw *raw_spec, *raw_mask;
>  	const struct rte_flow_item_eth *eth_spec, *eth_mask;
>  	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last,
> *ipv4_mask;
>  	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
> @@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct
> iavf_adapter *ad,
>  	enum rte_flow_item_type next_type;
>  	uint8_t tun_inner = 0;
>  	uint16_t ether_type, flags_version;
> +	uint8_t item_num = 0;
>  	int layer = 0;
> 
>  	uint8_t  ipv6_addr_mask[16] = {
> @@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct
> iavf_adapter *ad,
> 
> RTE_FLOW_ERROR_TYPE_ITEM, item,
>  					   "Not support range");
>  		}
> +		item_num++;
> 
>  		switch (item_type) {
> +		case RTE_FLOW_ITEM_TYPE_RAW: {
> +			raw_spec = item->spec;
> +			raw_mask = item->mask;
> +
> +			if (item_num != 1)
> +				return -rte_errno;
> +
> +			if (raw_spec->length != raw_mask->length)
> +				return -rte_errno;
> +
> +			uint16_t pkt_len = 0;
> +			uint16_t tmp_val = 0;
> +			uint8_t tmp = 0;
> +			int i, j;
> +
> +			pkt_len = raw_spec->length;
> +
> +			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
> +				tmp = raw_spec->pattern[i];
> +				if (tmp >= 'a' && tmp <= 'f')
> +					tmp_val = tmp - 'a' + 10;
> +				if (tmp >= 'A' && tmp <= 'F')
> +					tmp_val = tmp - 'A' + 10;
> +				if (tmp >= '0' && tmp <= '9')
> +					tmp_val = tmp - '0';
> +
> +				tmp_val *= 16;
> +				tmp = raw_spec->pattern[i + 1];
> +				if (tmp >= 'a' && tmp <= 'f')
> +					tmp_val += (tmp - 'a' + 10);
> +				if (tmp >= 'A' && tmp <= 'F')
> +					tmp_val += (tmp - 'A' + 10);
> +				if (tmp >= '0' && tmp <= '9')
> +					tmp_val += (tmp - '0');
> +
> +				hdrs->raw.spec[j] = tmp_val;
> +
> +				tmp = raw_mask->pattern[i];
> +				if (tmp >= 'a' && tmp <= 'f')
> +					tmp_val = tmp - 'a' + 10;
> +				if (tmp >= 'A' && tmp <= 'F')
> +					tmp_val = tmp - 'A' + 10;
> +				if (tmp >= '0' && tmp <= '9')
> +					tmp_val = tmp - '0';
> +
> +				tmp_val *= 16;
> +				tmp = raw_mask->pattern[i + 1];
> +				if (tmp >= 'a' && tmp <= 'f')
> +					tmp_val += (tmp - 'a' + 10);
> +				if (tmp >= 'A' && tmp <= 'F')
> +					tmp_val += (tmp - 'A' + 10);
> +				if (tmp >= '0' && tmp <= '9')
> +					tmp_val += (tmp - '0');
> +
> +				hdrs->raw.mask[j] = tmp_val;
> +			}
> +
> +			hdrs->raw.pkt_len = pkt_len / 2;
> +			hdrs->tunnel_level = 0;
> +			hdrs->count = 0;
> +			return 0;
> +		}
> +
>  		case RTE_FLOW_ITEM_TYPE_ETH:
>  			eth_spec = item->spec;
>  			eth_mask = item->mask;
> diff --git a/drivers/net/iavf/iavf_generic_flow.c
> b/drivers/net/iavf/iavf_generic_flow.c
> index ddc1fdd22b..e1a611e319 100644
> --- a/drivers/net/iavf/iavf_generic_flow.c
> +++ b/drivers/net/iavf/iavf_generic_flow.c
> @@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
>  	.query = iavf_flow_query,
>  };
> 
> +/* raw */
> +enum rte_flow_item_type iavf_pattern_raw[] = {
> +	RTE_FLOW_ITEM_TYPE_RAW,
> +	RTE_FLOW_ITEM_TYPE_END,
> +};
> +
>  /* empty */
>  enum rte_flow_item_type iavf_pattern_empty[] = {
>  	RTE_FLOW_ITEM_TYPE_END,
> diff --git a/drivers/net/iavf/iavf_generic_flow.h
> b/drivers/net/iavf/iavf_generic_flow.h
> index f6af176073..52eb1caf29 100644
> --- a/drivers/net/iavf/iavf_generic_flow.h
> +++ b/drivers/net/iavf/iavf_generic_flow.h
> @@ -180,6 +180,9 @@
>  #define IAVF_INSET_L2TPV2 \
>  	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
> 
> +/* raw pattern */
> +extern enum rte_flow_item_type iavf_pattern_raw[];
> +
>  /* empty pattern */
>  extern enum rte_flow_item_type iavf_pattern_empty[];
> 
> --
> 2.25.1
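
For readers following the RAW case above: the pattern and mask carried by the
rte_flow raw item are ASCII hex strings, and the loop converts each pair of hex
characters into one byte of hdrs->raw.spec / hdrs->raw.mask before handing the
buffers to the PF. Below is a minimal standalone sketch of that conversion;
hex_nibble() and hex_str_to_bytes() are illustrative names only, not functions
from the patch or from DPDK.

#include <stdint.h>
#include <stddef.h>

/* Value of one ASCII hex digit, or -1 if the character is not hex. */
static int hex_nibble(uint8_t c)
{
	if (c >= '0' && c <= '9')
		return c - '0';
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 10;
	if (c >= 'A' && c <= 'F')
		return c - 'A' + 10;
	return -1;
}

/* Convert 2 * len hex characters in 'str' into 'len' bytes in 'out'. */
static int hex_str_to_bytes(const uint8_t *str, uint8_t *out, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++) {
		int hi = hex_nibble(str[2 * i]);
		int lo = hex_nibble(str[2 * i + 1]);

		if (hi < 0 || lo < 0)
			return -1;
		out[i] = (uint8_t)(hi * 16 + lo);
	}
	return 0;
}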


^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 2/2] net/iavf: support Protocol Agnostic Flow Offloading VF RSS
  2022-05-23  2:31                     ` [PATCH 2/2] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
@ 2022-05-23  2:45                       ` Guo, Junfeng
  0 siblings, 0 replies; 35+ messages in thread
From: Guo, Junfeng @ 2022-05-23  2:45 UTC (permalink / raw)
  To: Zhang, Qi Z, Wu, Jingjing, Xing, Beilei; +Cc: dev, Xu, Ting

Sorry, this commit was sent by mistake. Please drop it. Thanks!

> -----Original Message-----
> From: Guo, Junfeng <junfeng.guo@intel.com>
> Sent: Monday, May 23, 2022 10:32
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Junfeng
> <junfeng.guo@intel.com>
> Subject: [PATCH 2/2] net/iavf: support Protocol Agnostic Flow Offloading
> VF RSS
> 
> From: Ting Xu <ting.xu@intel.com>
> 
> Enable Protocol Agnostic Flow Offloading for RSS hash in VF. It supports
> raw pattern flow rule creation in VF based on the Parser Library feature.
> The VF parses the spec and mask input of the raw pattern and passes them to
> the kernel driver to create the flow rule. The existing rte_flow raw API is
> utilized.
> 
> command example:
> RSS hash for ipv4-src-dst:
> flow create 0 ingress pattern raw pattern spec
> 00000000000000000000000008004500001400004000401000000000000000000000
> pattern mask
> 0000000000000000000000000000000000000000000000000000ffffffffffffffff /
> end actions rss queues end / end
> 
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> ---
>  drivers/net/iavf/iavf_hash.c | 96 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 96 insertions(+)
> 
> diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
> index 278e75117d..42df7c4e48 100644
> --- a/drivers/net/iavf/iavf_hash.c
> +++ b/drivers/net/iavf/iavf_hash.c
> @@ -37,6 +37,8 @@
>  /* L2TPv2 */
>  #define IAVF_PHINT_L2TPV2			BIT_ULL(9)
>  #define IAVF_PHINT_L2TPV2_LEN			BIT_ULL(10)
> +/* Raw */
> +#define IAVF_PHINT_RAW				BIT_ULL(11)
> 
>  #define IAVF_PHINT_GTPU_MSK	(IAVF_PHINT_GTPU	| \
>  				 IAVF_PHINT_GTPU_EH	| \
> @@ -58,6 +60,7 @@ struct iavf_hash_match_type {
>  struct iavf_rss_meta {
>  	struct virtchnl_proto_hdrs proto_hdrs;
>  	enum virtchnl_rss_algorithm rss_algorithm;
> +	bool raw_ena;
>  };
> 
>  struct iavf_hash_flow_cfg {
> @@ -532,6 +535,7 @@ struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
>   */
>  static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
>  	/* IPv4 */
> +	{iavf_pattern_raw,			IAVF_INSET_NONE,		NULL},
>  	{iavf_pattern_eth_ipv4,			IAVF_RSS_TYPE_OUTER_IPV4,	&outer_ipv4_tmplt},
>  	{iavf_pattern_eth_ipv4_udp,		IAVF_RSS_TYPE_OUTER_IPV4_UDP,	&outer_ipv4_udp_tmplt},
>  	{iavf_pattern_eth_ipv4_tcp,		IAVF_RSS_TYPE_OUTER_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
> @@ -804,6 +808,9 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
>  		}
> 
>  		switch (item->type) {
> +		case RTE_FLOW_ITEM_TYPE_RAW:
> +			*phint |= IAVF_PHINT_RAW;
> +			break;
>  		case RTE_FLOW_ITEM_TYPE_IPV4:
>  			if (!(*phint & IAVF_PHINT_GTPU_MSK) &&
>  			    !(*phint & IAVF_PHINT_GRE) &&
> @@ -873,6 +880,80 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
>  	return 0;
>  }
> 
> +static int
> +iavf_hash_parse_raw_pattern(const struct rte_flow_item *item,
> +			struct iavf_rss_meta *meta)
> +{
> +	const struct rte_flow_item_raw *raw_spec, *raw_mask;
> +	uint8_t *pkt_buf, *msk_buf;
> +	uint8_t spec_len, pkt_len;
> +	uint8_t tmp_val = 0;
> +	uint8_t tmp_c = 0;
> +	int i, j;
> +
> +	raw_spec = item->spec;
> +	raw_mask = item->mask;
> +
> +	spec_len = strlen((char *)(uintptr_t)raw_spec->pattern);
> +	if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
> +		spec_len)
> +		return -rte_errno;
> +
> +	pkt_len = spec_len / 2;
> +
> +	pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
> +	if (!pkt_buf)
> +		return -ENOMEM;
> +
> +	msk_buf = rte_zmalloc(NULL, pkt_len, 0);
> +	if (!msk_buf)
> +		return -ENOMEM;
> +
> +	/* convert string to int array */
> +	for (i = 0, j = 0; i < spec_len; i += 2, j++) {
> +		tmp_c = raw_spec->pattern[i];
> +		if (tmp_c >= 'a' && tmp_c <= 'f')
> +			tmp_val = tmp_c - 'a' + 10;
> +		if (tmp_c >= 'A' && tmp_c <= 'F')
> +			tmp_val = tmp_c - 'A' + 10;
> +		if (tmp_c >= '0' && tmp_c <= '9')
> +			tmp_val = tmp_c - '0';
> +
> +		tmp_c = raw_spec->pattern[i + 1];
> +		if (tmp_c >= 'a' && tmp_c <= 'f')
> +			pkt_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
> +		if (tmp_c >= 'A' && tmp_c <= 'F')
> +			pkt_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
> +		if (tmp_c >= '0' && tmp_c <= '9')
> +			pkt_buf[j] = tmp_val * 16 + tmp_c - '0';
> +
> +		tmp_c = raw_mask->pattern[i];
> +		if (tmp_c >= 'a' && tmp_c <= 'f')
> +			tmp_val = tmp_c - 0x57;
> +		if (tmp_c >= 'A' && tmp_c <= 'F')
> +			tmp_val = tmp_c - 0x37;
> +		if (tmp_c >= '0' && tmp_c <= '9')
> +			tmp_val = tmp_c - '0';
> +
> +		tmp_c = raw_mask->pattern[i + 1];
> +		if (tmp_c >= 'a' && tmp_c <= 'f')
> +			msk_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
> +		if (tmp_c >= 'A' && tmp_c <= 'F')
> +			msk_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
> +		if (tmp_c >= '0' && tmp_c <= '9')
> +			msk_buf[j] = tmp_val * 16 + tmp_c - '0';
> +	}
> +
> +	rte_memcpy(meta->proto_hdrs.raw.spec, pkt_buf, pkt_len);
> +	rte_memcpy(meta->proto_hdrs.raw.mask, msk_buf, pkt_len);
> +	meta->proto_hdrs.raw.pkt_len = pkt_len;
> +
> +	rte_free(pkt_buf);
> +	rte_free(msk_buf);
> +
> +	return 0;
> +}
> +
>  #define REFINE_PROTO_FLD(op, fld) \
>  	VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
>  #define REPALCE_PROTO_FLD(fld_1, fld_2) \
> @@ -1387,6 +1468,10 @@ iavf_hash_parse_action(struct iavf_pattern_match_item *match_item,
>  					RTE_FLOW_ERROR_TYPE_ACTION, action,
>  					"a non-NULL RSS queue is not supported");
> 
> +			/* If pattern type is raw, no need to refine rss type */
> +			if (pattern_hint == IAVF_PHINT_RAW)
> +				break;
> +
>  			/**
>  			 * Check simultaneous use of SRC_ONLY and DST_ONLY
>  			 * of the same level.
> @@ -1453,6 +1538,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad,
>  	if (ret)
>  		goto error;
> 
> +	if (phint == IAVF_PHINT_RAW) {
> +		rss_meta_ptr->raw_ena = true;
> +		ret = iavf_hash_parse_raw_pattern(pattern, rss_meta_ptr);
> +		if (ret) {
> +			rte_flow_error_set(error, EINVAL,
> +					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
> +					   "Parse raw pattern failed");
> +			goto error;
> +		}
> +	}
> +
>  	ret = iavf_hash_parse_action(pattern_match_item, actions, phint,
>  				     rss_meta_ptr, error);
> 
> --
> 2.25.1
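
The testpmd command in the commit message above maps onto the rte_flow C API in
a fairly direct way. The following is a minimal sketch of what an equivalent
call from an application could look like, under the assumption that the same
ASCII hex spec/mask strings are passed through a raw item together with an RSS
action carrying an empty queue list; create_raw_rss_rule() is a hypothetical
helper and error handling is kept to a minimum, so this is an illustration of
the raw item usage rather than code from the patch set.

#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical helper: create a raw-pattern RSS rule on 'port_id'. */
static struct rte_flow *
create_raw_rss_rule(uint16_t port_id, struct rte_flow_error *err)
{
	/* Same ASCII hex strings as the testpmd example above. */
	static const uint8_t spec_str[] =
		"000000000000000000000000080045000014"
		"00004000401000000000000000000000";
	static const uint8_t mask_str[] =
		"000000000000000000000000000000000000"
		"0000000000000000ffffffffffffffff";

	struct rte_flow_attr attr = { .ingress = 1 };

	struct rte_flow_item_raw raw_spec = {
		.pattern = spec_str,
		.length = sizeof(spec_str) - 1, /* number of hex characters */
	};
	struct rte_flow_item_raw raw_mask = {
		.pattern = mask_str,
		.length = sizeof(mask_str) - 1,
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_RAW,
		  .spec = &raw_spec, .mask = &raw_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	/* "rss queues end": RSS action with an empty queue list. */
	struct rte_flow_action_rss rss = { .queue_num = 0, .queue = NULL };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}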


^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF
  2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
                                       ` (4 preceding siblings ...)
  2022-05-23  2:31                     ` [PATCH v6 3/3] " Junfeng Guo
@ 2022-05-23  5:09                     ` Zhang, Qi Z
  2022-06-05 17:43                       ` Thomas Monjalon
  5 siblings, 1 reply; 35+ messages in thread
From: Zhang, Qi Z @ 2022-05-23  5:09 UTC (permalink / raw)
  To: Guo, Junfeng, Wu, Jingjing, Xing, Beilei; +Cc: dev, Xu, Ting



> -----Original Message-----
> From: Guo, Junfeng <junfeng.guo@intel.com>
> Sent: Monday, May 23, 2022 10:32 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Junfeng
> <junfeng.guo@intel.com>
> Subject: [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF
> 
> This patch set enables Protocol Agnostic Flow (raw flow) Offloading for FDIR
> and RSS in AVF, based on the Parser Library feature and the existing rte_flow
> `raw` API.
> 
> [PATCH v6 1/3] common/iavf: support raw packet in protocol header
> [PATCH v6 2/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
> [PATCH v6 3/3] net/iavf: support Protocol Agnostic Flow Offloading VF RSS
> 
> v6:
> merge two commits into one so that each commit compiles on its own.
> 
> v5:
> code rebase and update commit messages.
> 
> v4:
> add raw flow support for RSS in AVF.
> 
> v3:
> fix CI build issue.
> 
> v2:
> add release notes and documentation update.
> 
> 
> Junfeng Guo (2):
>   common/iavf: support raw packet in protocol header
>   net/iavf: enable Protocol Agnostic Flow Offloading FDIR
> 
> Ting Xu (1):
>   net/iavf: support Protocol Agnostic Flow Offloading VF RSS
> 
>  doc/guides/rel_notes/release_22_07.rst |   1 +
>  drivers/common/iavf/virtchnl.h         |  20 +-
>  drivers/net/iavf/iavf_fdir.c           |  67 ++++++
>  drivers/net/iavf/iavf_generic_flow.c   |   6 +
>  drivers/net/iavf/iavf_generic_flow.h   |   3 +
>  drivers/net/iavf/iavf_hash.c           | 276 +++++++++++++++++--------
>  6 files changed, 281 insertions(+), 92 deletions(-)
> 
> --
> 2.25.1

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi


^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v6 2/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
  2022-05-23  2:31                     ` [PATCH v6 2/3] " Junfeng Guo
@ 2022-05-23  5:10                       ` Zhang, Qi Z
  0 siblings, 0 replies; 35+ messages in thread
From: Zhang, Qi Z @ 2022-05-23  5:10 UTC (permalink / raw)
  To: Guo, Junfeng, Wu, Jingjing, Xing, Beilei; +Cc: dev, Xu, Ting



> -----Original Message-----
> From: Guo, Junfeng <junfeng.guo@intel.com>
> Sent: Monday, May 23, 2022 10:32 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Xu, Ting <ting.xu@intel.com>; Guo, Junfeng
> <junfeng.guo@intel.com>
> Subject: [PATCH v6 2/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
> 
> This patch enables Protocol Agnostic Flow (raw flow) Offloading for Flow Director
> (FDIR) in AVF, based on the Parser Library feature and the existing rte_flow
> `raw` API.
> 
> The input spec and mask of raw pattern are first parsed via the Parser Library,
> and then passed to the kernel driver to create the flow rule.
> 
> Similar as PF FDIR, 

Re-worded as below during merging:

 "Similar to ice PMD's implementation"

> each raw flow requires:

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF
  2022-05-23  5:09                     ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Zhang, Qi Z
@ 2022-06-05 17:43                       ` Thomas Monjalon
  2022-06-05 23:10                         ` Zhang, Qi Z
  0 siblings, 1 reply; 35+ messages in thread
From: Thomas Monjalon @ 2022-06-05 17:43 UTC (permalink / raw)
  To: Zhang, Qi Z
  Cc: Guo, Junfeng, Wu, Jingjing, Xing, Beilei, dev, Xu, Ting,
	ferruh.yigit, john.mcnamara, david.marchand

23/05/2022 07:09, Zhang, Qi Z:
> > Junfeng Guo (2):
> >   common/iavf: support raw packet in protocol header
> >   net/iavf: enable Protocol Agnostic Flow Offloading FDIR
> > 
> > Ting Xu (1):
> >   net/iavf: support Protocol Agnostic Flow Offloading VF RSS
> > 
> >  doc/guides/rel_notes/release_22_07.rst |   1 +
> >  drivers/common/iavf/virtchnl.h         |  20 +-
> >  drivers/net/iavf/iavf_fdir.c           |  67 ++++++
> >  drivers/net/iavf/iavf_generic_flow.c   |   6 +
> >  drivers/net/iavf/iavf_generic_flow.h   |   3 +
> >  drivers/net/iavf/iavf_hash.c           | 276 +++++++++++++++++--------
> >  6 files changed, 281 insertions(+), 92 deletions(-)
> > 
> > --
> > 2.25.1
> 
> Acked-by: Qi Zhang <qi.z.zhang@intel.com>
> 
> Applied to dpdk-next-net-intel.

You should not have merged this, it is triggering an error
with devtools/check-doc-vs-code.sh

rte_flow doc out of sync for iavf
        item raw

Once again, I will fix it while pulling the tree.
There are a lot of mistakes in this tree,
I hope you understand why I don't have time to pull it
as frequently as you would like.



^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF
  2022-06-05 17:43                       ` Thomas Monjalon
@ 2022-06-05 23:10                         ` Zhang, Qi Z
  0 siblings, 0 replies; 35+ messages in thread
From: Zhang, Qi Z @ 2022-06-05 23:10 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Guo, Junfeng, Wu, Jingjing, Xing, Beilei, dev, Xu, Ting,
	ferruh.yigit, Mcnamara, John, david.marchand



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, June 6, 2022 1:43 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: Guo, Junfeng <junfeng.guo@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; dev@dpdk.org;
> Xu, Ting <ting.xu@intel.com>; ferruh.yigit@amd.com; Mcnamara, John
> <john.mcnamara@intel.com>; david.marchand@redhat.com
> Subject: Re: [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF
> 
> 23/05/2022 07:09, Zhang, Qi Z:
> > > Junfeng Guo (2):
> > >   common/iavf: support raw packet in protocol header
> > >   net/iavf: enable Protocol Agnostic Flow Offloading FDIR
> > >
> > > Ting Xu (1):
> > >   net/iavf: support Protocol Agnostic Flow Offloading VF RSS
> > >
> > >  doc/guides/rel_notes/release_22_07.rst |   1 +
> > >  drivers/common/iavf/virtchnl.h         |  20 +-
> > >  drivers/net/iavf/iavf_fdir.c           |  67 ++++++
> > >  drivers/net/iavf/iavf_generic_flow.c   |   6 +
> > >  drivers/net/iavf/iavf_generic_flow.h   |   3 +
> > >  drivers/net/iavf/iavf_hash.c           | 276 +++++++++++++++++--------
> > >  6 files changed, 281 insertions(+), 92 deletions(-)
> > >
> > > --
> > > 2.25.1
> >
> > Acked-by: Qi Zhang <qi.z.zhang@intel.com>
> >
> > Applied to dpdk-next-net-intel.
> 
> You should not have merged this, it is triggering an error with
> devtools/check-doc-vs-code.sh

My bad, I was reminded of this by Ferruh previously.
I should add check-doc-vs-code.sh to my routine checks.

> 
> rte_flow doc out of sync for iavf
>         item raw
> 
> Once again, I will fix it while pulling the tree.

Thanks

> There are a lot of mistakes in this tree, I hope you understand why I don't have
> time to pull it as frequently as you would like.

I understand your concern. I can't guarantee there will be no mistakes, but I will try to avoid the same mistake happening again.

> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2022-06-05 23:10 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-07  6:27 [PATCH 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
2022-04-07  6:27 ` [PATCH 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
2022-04-07  6:27 ` [PATCH 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
2022-04-07  6:27 ` [PATCH 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
2022-04-08  8:02   ` [PATCH v2 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
2022-04-08  8:02     ` [PATCH v2 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
2022-04-08  8:02     ` [PATCH v2 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
2022-04-08  8:02     ` [PATCH v2 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
2022-04-08  9:12       ` [PATCH v3 0/3] Enable Protocol Agnostic Flow Offloading FDIR in AVF Junfeng Guo
2022-04-08  9:12         ` [PATCH v3 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
2022-04-08  9:12         ` [PATCH v3 2/3] net/iavf: align with proto hdr struct change Junfeng Guo
2022-04-08  9:12         ` [PATCH v3 3/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
2022-04-21  3:28           ` [PATCH v4 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
2022-04-21  3:28             ` [PATCH v4 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
2022-05-21  1:34               ` Zhang, Qi Z
2022-04-21  3:28             ` [PATCH v4 2/4] net/iavf: align with proto hdr struct change Junfeng Guo
2022-04-21  3:28             ` [PATCH v4 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
2022-04-21  3:28             ` [PATCH v4 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
2022-05-20  9:16               ` [PATCH v5 0/4] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
2022-05-20  9:16                 ` [PATCH v5 1/4] common/iavf: support raw packet in protocol header Junfeng Guo
2022-05-23  2:31                   ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Junfeng Guo
2022-05-23  2:31                     ` [PATCH v6 1/3] common/iavf: support raw packet in protocol header Junfeng Guo
2022-05-23  2:31                     ` [PATCH 1/2] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
2022-05-23  2:44                       ` Guo, Junfeng
2022-05-23  2:31                     ` [PATCH v6 2/3] " Junfeng Guo
2022-05-23  5:10                       ` Zhang, Qi Z
2022-05-23  2:31                     ` [PATCH 2/2] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo
2022-05-23  2:45                       ` Guo, Junfeng
2022-05-23  2:31                     ` [PATCH v6 3/3] " Junfeng Guo
2022-05-23  5:09                     ` [PATCH v6 0/3] Enable Protocol Agnostic Flow Offloading in AVF Zhang, Qi Z
2022-06-05 17:43                       ` Thomas Monjalon
2022-06-05 23:10                         ` Zhang, Qi Z
2022-05-20  9:16                 ` [PATCH v5 2/4] net/iavf: align with proto hdr struct change Junfeng Guo
2022-05-20  9:16                 ` [PATCH v5 3/4] net/iavf: enable Protocol Agnostic Flow Offloading FDIR Junfeng Guo
2022-05-20  9:16                 ` [PATCH v5 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Junfeng Guo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).