From mboxrd@z Thu Jan 1 00:00:00 1970
From: Junfeng Guo <junfeng.guo@intel.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, junfeng.guo@intel.com
Date: Thu, 14 Oct 2021 15:37:04 +0000
Message-Id: <20211014153705.3301663-5-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211014153705.3301663-1-junfeng.guo@intel.com>
References: <20210928101821.147053-4-junfeng.guo@intel.com>
 <20211014153705.3301663-1-junfeng.guo@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/5] net/ice: enable protocol agnostic flow
 offloading in FDIR
List-Id: DPDK patches and discussions

This patch enables protocol agnostic flow offloading in Flow Director,
based on the Parser Library, using the existing rte_flow raw API.
Note that the raw flow requires:
1. byte string of raw target packet bits.
2. byte string of mask of target packet.

Here is an example:
FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 to indicate the ipv4
protocol) is optional in our cases. To avoid redundancy, we simply omit
the mask of 0x0800 (with 0xFFFF) in the mask byte string example. The
prefix '0x' for the spec and mask byte (hex) strings is also omitted
here.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/ice/ice_ethdev.h       |   5 +
 drivers/net/ice/ice_fdir_filter.c  | 172 +++++++++++++++++++++++++++++
 drivers/net/ice/ice_generic_flow.c |   7 ++
 drivers/net/ice/ice_generic_flow.h |   3 +
 4 files changed, 187 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 5845f44c86..e21d2349bc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -317,6 +317,11 @@ struct ice_fdir_filter_conf {
 	uint64_t input_set_o; /* used for non-tunnel or tunnel outer fields */
 	uint64_t input_set_i; /* only for tunnel inner fields */
 	uint32_t mark_flag;
+
+	struct ice_parser_profile *prof;
+	const u8 *pkt_buf;
+	bool parser_ena;
+	u8 pkt_len;
 };
 
 #define ICE_MAX_FDIR_FILTER_NUM	(1024 * 16)
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index bd627e3aa8..4af6f371f4 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -107,6 +107,7 @@
 	ICE_INSET_NAT_T_ESP_SPI)
 
 static struct ice_pattern_match_item ice_fdir_pattern_list[] = {
+	{pattern_raw,			ICE_INSET_NONE,			ICE_INSET_NONE,	ICE_INSET_NONE},
 	{pattern_ethertype,		ICE_FDIR_INSET_ETH,		ICE_INSET_NONE,	ICE_INSET_NONE},
 	{pattern_eth_ipv4,		ICE_FDIR_INSET_ETH_IPV4,	ICE_INSET_NONE,	ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp,		ICE_FDIR_INSET_ETH_IPV4_UDP,	ICE_INSET_NONE,	ICE_INSET_NONE},
@@ -1188,6 +1189,24 @@ ice_fdir_is_tunnel_profile(enum ice_fdir_tunnel_type tunnel_type)
 	return 0;
 }
 
+static int
+ice_fdir_add_del_raw(struct ice_pf *pf,
+		     struct ice_fdir_filter_conf *filter,
+		     bool add)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+
+	unsigned char *pkt = (unsigned char *)pf->fdir.prg_pkt;
+	rte_memcpy(pkt, filter->pkt_buf, filter->pkt_len);
+
+	struct ice_fltr_desc desc;
+	memset(&desc, 0, sizeof(desc));
+	filter->input.comp_report = ICE_FXD_FLTR_QW0_COMP_REPORT_SW;
+	ice_fdir_get_prgm_desc(hw, &filter->input, &desc, add);
+
+	return ice_fdir_programming(pf, &desc);
+}
+
 static int
 ice_fdir_add_del_filter(struct ice_pf *pf,
 			struct ice_fdir_filter_conf *filter,
@@ -1304,6 +1323,45 @@ ice_fdir_create_filter(struct ice_adapter *ad,
 	bool is_tun;
 	int ret;
 
+	if (filter->parser_ena) {
+		struct ice_hw *hw = ICE_PF_TO_HW(pf);
+
+		u16 ctrl_vsi = pf->fdir.fdir_vsi->idx;
+		u16 main_vsi = pf->main_vsi->idx;
+
+		ret = ice_flow_set_hw_prof(hw, main_vsi, ctrl_vsi,
+					   filter->prof, ICE_BLK_FD);
+		if (ret)
+			return -rte_errno;
+
+		ret = ice_fdir_add_del_raw(pf, filter, true);
+		if (ret)
+			return -rte_errno;
+
+		if (filter->mark_flag == 1)
+			ice_fdir_rx_parsing_enable(ad, 1);
+
+		entry = rte_zmalloc("fdir_entry", sizeof(*entry), 0);
+		if (!entry)
+			return -rte_errno;
+
+		entry->pkt_buf = (u8 *)ice_malloc(hw, filter->pkt_len);
+		if (!entry->pkt_buf)
+			return -ENOMEM;
+
+		u8 *pkt_buf = (u8 *)ice_malloc(hw, filter->pkt_len);
+		if (!pkt_buf)
+			return -ENOMEM;
+
+		rte_memcpy(entry, filter, sizeof(*filter));
+		rte_memcpy(pkt_buf, filter->pkt_buf, filter->pkt_len);
+		entry->pkt_buf = pkt_buf;
+
+		flow->rule = entry;
+
+		return 0;
+	}
+
 	ice_fdir_extract_fltr_key(&key, filter);
 	node = ice_fdir_entry_lookup(fdir_info, &key);
 	if (node) {
@@ -1397,6 +1455,19 @@ ice_fdir_destroy_filter(struct ice_adapter *ad,
 
 	filter = (struct ice_fdir_filter_conf *)flow->rule;
 
+	if (filter->parser_ena) {
+		ret = ice_fdir_add_del_raw(pf, filter, false);
+		if (ret)
+			return -rte_errno;
+
+		filter->pkt_buf = NULL;
+		flow->rule = NULL;
+
+		rte_free(filter);
+
+		return 0;
+	}
+
 	is_tun = ice_fdir_is_tunnel_profile(filter->tunnel_type);
 
 	if (filter->counter) {
@@ -1675,6 +1746,7 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
 	enum rte_flow_item_type l4 = RTE_FLOW_ITEM_TYPE_END;
 	enum ice_fdir_tunnel_type tunnel_type = ICE_FDIR_TUNNEL_TYPE_NONE;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -1702,6 +1774,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 	struct ice_fdir_extra *p_ext_data;
 	struct ice_fdir_v4 *p_v4 = NULL;
 	struct ice_fdir_v6 *p_v6 = NULL;
+	struct ice_parser_result rslt;
+	struct ice_parser *psr;
+	uint8_t item_num = 0;
 
 	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
@@ -1713,6 +1788,7 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 		    item->type == RTE_FLOW_ITEM_TYPE_GTP_PSC) {
 			is_outer = false;
 		}
+		item_num++;
 	}
 
 	/* This loop parse flow pattern and distinguish Non-tunnel and tunnel
@@ -1733,6 +1809,95 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			&input_set_i : &input_set_o;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				break;
+
+			/* convert raw spec & mask from byte string to int */
+			unsigned char *tmp_spec =
+				(uint8_t *)(uintptr_t)raw_spec->pattern;
+			unsigned char *tmp_mask =
+				(uint8_t *)(uintptr_t)raw_mask->pattern;
+			uint16_t udp_port = 0;
+			uint16_t tmp_val = 0;
+			uint8_t pkt_len = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = strlen((char *)(uintptr_t)raw_spec->pattern);
+			if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
+			    pkt_len)
+				return -rte_errno;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = tmp_spec[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = tmp_spec[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_spec[j] = tmp_val + tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_spec[j] = tmp_val + tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_spec[j] = tmp_val + tmp - '0';
+
+				tmp = tmp_mask[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = tmp_mask[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_mask[j] = tmp_val + tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_mask[j] = tmp_val + tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_mask[j] = tmp_val + tmp - '0';
+			}
+
+			pkt_len /= 2;
+
+			if (ice_parser_create(&ad->hw, &psr))
+				return -rte_errno;
+			if (ice_get_open_tunnel_port(&ad->hw, TNL_VXLAN,
+						     &udp_port))
+				ice_parser_vxlan_tunnel_set(psr, udp_port,
+							    true);
+			if (ice_parser_run(psr, tmp_spec, pkt_len, &rslt))
+				return -rte_errno;
+			ice_parser_destroy(psr);
+
+			if (!tmp_mask)
+				return -rte_errno;
+
+			filter->prof = (struct ice_parser_profile *)
+				ice_malloc(&ad->hw, sizeof(filter->prof));
+			if (!filter->prof)
+				return -ENOMEM;
+			if (ice_parser_profile_init(&rslt, tmp_spec, tmp_mask,
+					pkt_len, ICE_BLK_FD, true, filter->prof))
+				return -rte_errno;
+
+			filter->pkt_buf = tmp_spec;
+			filter->pkt_len = pkt_len;
+
+			filter->parser_ena = true;
+
+			break;
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			flow_type = ICE_FLTR_PTYPE_NON_IP_L2;
 			eth_spec = item->spec;
@@ -2198,6 +2363,7 @@ ice_fdir_parse(struct ice_adapter *ad,
 	struct ice_fdir_filter_conf *filter = &pf->fdir.conf;
 	struct ice_pattern_match_item *item = NULL;
 	uint64_t input_set;
+	bool raw = false;
 	int ret;
 
 	memset(filter, 0, sizeof(*filter));
@@ -2213,7 +2379,13 @@ ice_fdir_parse(struct ice_adapter *ad,
 	ret = ice_fdir_parse_pattern(ad, pattern, error, filter);
 	if (ret)
 		goto error;
+
+	if (item->pattern_list[0] == RTE_FLOW_ITEM_TYPE_RAW)
+		raw = true;
+
 	input_set = filter->input_set_o | filter->input_set_i;
+	input_set = raw ? ~input_set : input_set;
+
 	if (!input_set ||
 	    filter->input_set_o & ~(item->input_set_mask_o | ICE_INSET_ETHERTYPE) ||
 	    filter->input_set_i & ~item->input_set_mask_i) {
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 02f854666a..d3391c86c0 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -65,6 +65,12 @@ enum rte_flow_item_type pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
+/* raw */
+enum rte_flow_item_type pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* L2 */
 enum rte_flow_item_type pattern_ethertype[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
@@ -2081,6 +2087,7 @@ struct ice_ptype_match {
 };
 
 static struct ice_ptype_match ice_ptype_map[] = {
+	{pattern_raw,			ICE_PTYPE_IPV4_PAY},
 	{pattern_eth_ipv4,		ICE_PTYPE_IPV4_PAY},
 	{pattern_eth_ipv4_udp,		ICE_PTYPE_IPV4_UDP_PAY},
 	{pattern_eth_ipv4_tcp,		ICE_PTYPE_IPV4_TCP_PAY},
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index 8845a3e156..1b030c0466 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -124,6 +124,9 @@
 /* empty pattern */
 extern enum rte_flow_item_type pattern_empty[];
 
+/* raw pattern */
+extern enum rte_flow_item_type pattern_raw[];
+
 /* L2 */
 extern enum rte_flow_item_type pattern_ethertype[];
 extern enum rte_flow_item_type pattern_ethertype_vlan[];
-- 
2.25.1