From: "Wang, Ying A"
To: "Ye, Xiaolong"
CC: "Zhang, Qi Z", "Yang, Qiming", "dev@dpdk.org", "Zhao1, Wei"
Subject: Re: [dpdk-dev] [PATCH 2/4] net/ice: rework for generic flow enabling
Date: Thu, 5 Sep 2019 12:59:14 +0000
Message-ID: <44DE8E8A53B4014CA1985CEE86C07F2A0B988E93@SHSMSX101.ccr.corp.intel.com>
References: <20190903221522.151382-1-ying.a.wang@intel.com>
 <20190903221522.151382-3-ying.a.wang@intel.com>
 <20190904144435.GC54897@intel.com>
In-Reply-To: <20190904144435.GC54897@intel.com>

Hi, Xiaolong

> -----Original Message-----
> From: Ye, Xiaolong
> Sent: Wednesday, September 4, 2019 10:45 PM
> To: Wang, Ying A
> Cc: Zhang, Qi Z; Yang, Qiming; dev@dpdk.org; Zhao1, Wei
> Subject: Re: [PATCH 2/4] net/ice: rework for generic flow enabling
>
> On 09/04, Ying Wang wrote:
> >The patch reworks the generic flow API (rte_flow) implementation.
> >It introduces an abstract layer which provides a unified interface
> >for low-level filter engines (switch, fdir, hash) to register
> >supported patterns and actions and to implement flow
> >validate/create/destroy/flush/query activities.
> >
> >The patch also removes the existing switch filter implementation to
> >avoid compile errors. Switch filter implementation for the new
> >framework will be added in the following patch.
> >
> >Signed-off-by: Ying Wang
> >---
> > drivers/net/ice/ice_ethdev.c        |  22 +-
> > drivers/net/ice/ice_ethdev.h        |  15 +-
> > drivers/net/ice/ice_generic_flow.c  | 768 +++++++++++++++--------------------
> > drivers/net/ice/ice_generic_flow.h  | 782 ++++++++----------------------------
> > drivers/net/ice/ice_switch_filter.c | 511 -----------------------
> > drivers/net/ice/ice_switch_filter.h |  18 -
> > 6 files changed, 525 insertions(+), 1591 deletions(-)
>
> Please add an update to the documentation and release notes.

OK, will add it in v2.

>
> >
> >diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> >index 4e0645db1..647aca3ed 100644
> >--- a/drivers/net/ice/ice_ethdev.c
> >+++ b/drivers/net/ice/ice_ethdev.c
> [snip]
> >+int
> >+ice_flow_init(struct ice_adapter *ad)
> >+{
> >+	int ret = 0;
> >+	struct ice_pf *pf = &ad->pf;
> >+	void *temp;
> >+	struct ice_flow_engine *engine = NULL;
> >+
> >+	TAILQ_INIT(&pf->flow_list);
> >+	TAILQ_INIT(&pf->rss_parser_list);
> >+	TAILQ_INIT(&pf->perm_parser_list);
> >+	TAILQ_INIT(&pf->dist_parser_list);
> >+
> >+	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
> >+		if (engine->init == NULL)
> >+			return -EINVAL;
>
> I think ENOTSUP is more preferred here.

OK, will fix it in v2.

>
> >+
> >+		ret = engine->init(ad);
> >+		if (ret)
> >+			return ret;
> >+	}
> >+	return 0;
> >+}
> >+
> >+void
> >+ice_flow_uninit(struct ice_adapter *ad)
> >+{
> >+	struct ice_pf *pf = &ad->pf;
> >+	struct ice_flow_engine *engine;
> >+	struct rte_flow *p_flow;
> >+	struct ice_flow_parser *p_parser;
> >+	void *temp;
> >+
> >+	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
> >+		if (engine->uninit)
> >+			engine->uninit(ad);
> >+	}
> >+
> >+	/* Remove all flows */
> >+	while ((p_flow = TAILQ_FIRST(&pf->flow_list))) {
> >+		TAILQ_REMOVE(&pf->flow_list, p_flow, node);
> >+		if (p_flow->engine->free)
> >+			p_flow->engine->free(p_flow);
> >+		rte_free(p_flow);
> >+	}
> >+
> >+	/* Cleanup parser list */
> >+	while ((p_parser = TAILQ_FIRST(&pf->rss_parser_list)))
> >+		TAILQ_REMOVE(&pf->rss_parser_list, p_parser, node);
> >+
> >+	while ((p_parser = TAILQ_FIRST(&pf->perm_parser_list)))
> >+		TAILQ_REMOVE(&pf->perm_parser_list, p_parser, node);
> >+
> >+	while ((p_parser = TAILQ_FIRST(&pf->dist_parser_list)))
> >+		TAILQ_REMOVE(&pf->dist_parser_list, p_parser, node);
> >+}
> >+
> >+int
> >+ice_register_parser(struct ice_flow_parser *parser,
> >+		struct ice_adapter *ad)
> >+{
> >+	struct ice_parser_list *list = NULL;
> >+	struct ice_pf *pf = &ad->pf;
> >+
> >+	switch (parser->stage) {
> >+	case ICE_FLOW_STAGE_RSS:
> >+		list = &pf->rss_parser_list;
> >+		break;
> >+	case ICE_FLOW_STAGE_PERMISSION:
> >+		list = &pf->perm_parser_list;
> >+		break;
> >+	case ICE_FLOW_STAGE_DISTRIBUTOR:
> >+		list = &pf->dist_parser_list;
> >+		break;
> >+	default:
> >+		return -EINVAL;
> >+	}
> >+
> >+	if (ad->devargs.pipeline_mode_support)
> >+		TAILQ_INSERT_TAIL(list, parser, node);
> >+	else {
> >+		if (parser->engine->type == ICE_FLOW_ENGINE_SWITCH
> >+			|| parser->engine->type == ICE_FLOW_ENGINE_HASH)
> >+			TAILQ_INSERT_TAIL(list, parser, node);
> >+		else if (parser->engine->type == ICE_FLOW_ENGINE_FDIR)
> >+			TAILQ_INSERT_HEAD(list, parser, node);
> >+		else
> >+			return -EINVAL;
> >+	}
> >+	return 0;
> >+}
> >+
> >+void
> >+ice_unregister_parser(struct ice_flow_parser *parser,
> >+		struct ice_adapter *ad)
> >+{
> >+	struct ice_pf *pf = &ad->pf;
> >+	struct ice_parser_list *list;
> >+	struct ice_flow_parser *p_parser;
> >+	void *temp;
> >+
> >+	switch (parser->stage) {
> >+	case ICE_FLOW_STAGE_RSS:
> >+		list = &pf->rss_parser_list;
> >+		break;
> >+	case ICE_FLOW_STAGE_PERMISSION:
> >+		list = &pf->perm_parser_list;
> >+		break;
> >+	case ICE_FLOW_STAGE_DISTRIBUTOR:
> >+		list = &pf->dist_parser_list;
> >+		break;
> >+	default:
> >+		return;
> >+	}
> >+
> >+	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
> >+		if (p_parser->engine->type == parser->engine->type)
> >+			TAILQ_REMOVE(list, p_parser, node);
> >+	}
> >+
> >+}
> >+
> > static int
> >-ice_flow_valid_attr(const struct rte_flow_attr *attr,
> >-		struct rte_flow_error *error)
> >+ice_flow_valid_attr(struct ice_adapter *ad,
> >+		const struct rte_flow_attr *attr,
> >+		struct rte_flow_error *error)
> > {
> > 	/* Must be input direction */
> > 	if (!attr->ingress) {
> >@@ -61,15 +212,25 @@ ice_flow_valid_attr(const struct rte_flow_attr *attr,
> > 				attr, "Not support egress.");
> > 		return -rte_errno;
> > 	}
> >-
> >-	/* Not supported */
> >-	if (attr->priority) {
> >-		rte_flow_error_set(error, EINVAL,
> >-				RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
> >-				attr, "Not support priority.");
> >-		return -rte_errno;
> >+	/* Check pipeline mode support to set classification stage */
> >+	if (ad->devargs.pipeline_mode_support) {
> >+		if (0 == attr->priority)
> >+			ice_pipeline_stage =
> >+				ICE_FLOW_CLASSIFY_STAGE_PERMISSION;
> >+		else
> >+			ice_pipeline_stage =
> >+				ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR;
> >+	} else {
> >+		ice_pipeline_stage =
> >+			ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY;
> >+		/* Not supported */
> >+		if (attr->priority) {
> >+			rte_flow_error_set(error, EINVAL,
> >+					RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
> >+					attr, "Not support priority.");
> >+			return -rte_errno;
> >+		}
> > 	}
> >-
> > 	/* Not supported */
> > 	if (attr->group) {
> > 		rte_flow_error_set(error, EINVAL,
> >@@ -102,7 +263,7 @@ ice_find_first_item(const struct rte_flow_item *item, bool is_void)
> > /* Skip all VOID items of the pattern */
> > static void
> > ice_pattern_skip_void_item(struct rte_flow_item *items,
> >-			const struct rte_flow_item *pattern)
> >+		const struct rte_flow_item *pattern)
>
> Unnecessary change here, only indentation changes.

Since the previous indentation is not tab-aligned, I will add a separate
code cleanup patch for these changes.

>
> > {
> > 	uint32_t cpy_count = 0;
> > 	const struct rte_flow_item *pb = pattern, *pe = pattern;
> >@@ -124,7 +285,6 @@ ice_pattern_skip_void_item(struct rte_flow_item *items,
> > 	items += cpy_count;
> >
> > 	if (pe->type == RTE_FLOW_ITEM_TYPE_END) {
> >-		pb = pe;
>
> seems this is some code cleanup, prefer a separate patch, not a strong
> opinion though.

OK, will add a separate patch for code cleanup.
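BTW, to make the registration model above concrete, a minimal engine
would plug in roughly like this (an illustrative sketch only, not part
of the patch: the dummy_* names are placeholders, and only the callback
names, the engine type enum and the registration helpers come from this
patch; the remaining ice_flow_engine fields are omitted):

static int
dummy_engine_init(struct ice_adapter *ad)
{
	/* a real engine would also build its ice_pattern_match_item
	 * array here and register a parser via ice_register_parser() */
	RTE_SET_USED(ad);
	return 0;
}

static struct ice_flow_engine dummy_engine = {
	.init = dummy_engine_init,
	.type = ICE_FLOW_ENGINE_SWITCH,
};

/* engines hook themselves into engine_list at load time, so that
 * ice_flow_init() can walk the list and call every ->init() */
RTE_INIT(dummy_engine_register)
{
	ice_register_flow_engine(&dummy_engine);
}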
>
> > 		break;
> > 	}
> >
> >@@ -151,11 +311,15 @@ ice_match_pattern(enum rte_flow_item_type *item_array,
> > 		item->type == RTE_FLOW_ITEM_TYPE_END);
> > }
> >
> >-static uint64_t ice_flow_valid_pattern(const struct rte_flow_item pattern[],
> >+struct ice_pattern_match_item *
> >+ice_search_pattern_match_item(const struct rte_flow_item pattern[],
> >+		struct ice_pattern_match_item *array,
> >+		uint32_t array_len,
> > 		struct rte_flow_error *error)
> > {
> > 	uint16_t i = 0;
> >-	uint64_t inset;
> >+	struct ice_pattern_match_item *pattern_match_item;
> >+	/* need free by each filter */
> > 	struct rte_flow_item *items; /* used for pattern without VOID items */
> > 	uint32_t item_num = 0; /* non-void item number */
> >
> >@@ -172,451 +336,149 @@ static uint64_t ice_flow_valid_pattern(const struct rte_flow_item pattern[],
> > 	if (!items) {
> > 		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> > 				NULL, "No memory for PMD internal items.");
> >-		return -ENOMEM;
> >+		return NULL;
> >+	}
> >+	pattern_match_item = rte_zmalloc("ice_pattern_match_item",
> >+			sizeof(struct ice_pattern_match_item), 0);
> >+	if (!pattern_match_item) {
> >+		PMD_DRV_LOG(ERR, "Failed to allocate memory.");
> >+		return NULL;
> > 	}
> >-
> > 	ice_pattern_skip_void_item(items, pattern);
> >
> >-	for (i = 0; i < RTE_DIM(ice_supported_patterns); i++)
> [snip]
> >
> >+static int
> >+ice_flow_validate(struct rte_eth_dev *dev,
> >+		const struct rte_flow_attr *attr,
> >+		const struct rte_flow_item pattern[],
> >+		const struct rte_flow_action actions[],
> >+		struct rte_flow_error *error)
> >+{
> >+	int ret = ICE_ERR_NOT_SUPPORTED;
>
> Unnecessary initialization.

OK, will fix it in v2.

>
> >+	void *meta = NULL;
> >+	struct ice_flow_engine *engine = NULL;
> >+
> >+	ret = ice_flow_validate_filter(dev, attr, pattern, actions,
> >+			&engine, &meta, error);
> >+	return ret;
> >+}
> >+
> > static struct rte_flow *
> > ice_flow_create(struct rte_eth_dev *dev,
> >-		const struct rte_flow_attr *attr,
> >-		const struct rte_flow_item pattern[],
> >-		const struct rte_flow_action actions[],
> >-		struct rte_flow_error *error)
> >+		const struct rte_flow_attr *attr,
> >+		const struct rte_flow_item pattern[],
> >+		const struct rte_flow_action actions[],
> >+		struct rte_flow_error *error)
> > {
> > 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > 	struct rte_flow *flow = NULL;
> >-	int ret;
> >+	int ret = 0;
>
> Unnecessary initialization.

OK, will fix it in v2.
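BTW, for context: the validate/create pair above is driven from the
application through the public rte_flow API. A minimal sketch (the queue
index, the IPv4 address and the port_id variable are made-up values):

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 1, 1)),
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = RTE_BE32(UINT32_MAX),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	struct rte_flow *flow = NULL;

	/* both entry points end up in ice_flow_validate_filter(),
	 * which selects the engine/meta pair used by engine->create() */
	if (rte_flow_validate(port_id, &attr, pattern, actions, &err) == 0)
		flow = rte_flow_create(port_id, &attr, pattern, actions, &err);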
>
> >+	struct ice_adapter *ad =
> >+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> >+	struct ice_flow_engine *engine = NULL;
> >+	void *meta = NULL;
> >
> > 	flow = rte_zmalloc("ice_flow", sizeof(struct rte_flow), 0);
> > 	if (!flow) {
> >@@ -626,65 +488,105 @@ ice_flow_create(struct rte_eth_dev *dev,
> > 		return flow;
> > 	}
> >
> >-	ret = ice_flow_validate(dev, attr, pattern, actions, error);
> >+	ret = ice_flow_validate_filter(dev, attr, pattern, actions,
> >+			&engine, &meta, error);
> > 	if (ret < 0)
> > 		goto free_flow;
> >
> >-	ret = ice_create_switch_filter(pf, pattern, actions, flow, error);
> >+	if (engine->create == NULL)
> >+		goto free_flow;
> >+
> >+	ret = engine->create(ad, flow, meta, error);
> > 	if (ret)
> > 		goto free_flow;
> >
> >+	flow->engine = engine;
> > 	TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
> > 	return flow;
> >
> > free_flow:
> >-	rte_flow_error_set(error, -ret,
> >-			RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >-			"Failed to create flow.");
> >+	PMD_DRV_LOG(ERR, "Failed to create flow");
> > 	rte_free(flow);
> > 	return NULL;
> > }
> >
> > static int
> > ice_flow_destroy(struct rte_eth_dev *dev,
> >-		struct rte_flow *flow,
> >-		struct rte_flow_error *error)
> >+		struct rte_flow *flow,
> >+		struct rte_flow_error *error)
> > {
> > 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> >+	struct ice_adapter *ad =
> >+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> > 	int ret = 0;
> >
> >-	ret = ice_destroy_switch_filter(pf, flow, error);
> >-
> >+	if (!flow || !flow->engine->destroy) {
> >+		rte_flow_error_set(error, EINVAL,
> >+				RTE_FLOW_ERROR_TYPE_HANDLE,
> >+				NULL, "NULL flow or NULL destroy");
> >+		return -rte_errno;
> >+	}
> >+	ret = flow->engine->destroy(ad, flow, error);
> > 	if (!ret) {
> > 		TAILQ_REMOVE(&pf->flow_list, flow, node);
> > 		rte_free(flow);
> >-	} else {
> >-		rte_flow_error_set(error, -ret,
> >-				RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >-				"Failed to destroy flow.");
> >-	}
> >+	} else
> >+		PMD_DRV_LOG(ERR, "Failed to destroy flow");
> >
> > 	return ret;
> > }
> >
> > static int
> > ice_flow_flush(struct rte_eth_dev *dev,
> >-		struct rte_flow_error *error)
> >+		struct rte_flow_error *error)
>
> Unnecessary change.

Will add a separate code cleanup patch for this change, since the
previous indentation is not tab-aligned.

>
> > {
> > 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> >-	struct rte_flow *p_flow;
> >+	struct rte_flow *p_flow = NULL;
>
> Unnecessary initialization.

OK, will fix it in v2.
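Similarly for teardown, the generic entry points dispatch into the
engine callbacks, e.g. (sketch, same assumptions as the snippet above):

	/* destroy one flow, or drop every flow on the port at once;
	 * on the PMD side this ends up in flow->engine->destroy() */
	if (flow)
		rte_flow_destroy(port_id, flow, &err);
	rte_flow_flush(port_id, &err);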
>
> > 	void *temp;
> > 	int ret = 0;
> >
> > 	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
> > 		ret = ice_flow_destroy(dev, p_flow, error);
> > 		if (ret) {
> >-			rte_flow_error_set(error, -ret,
> >-					RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >-					"Failed to flush SW flows.");
> >-			return -rte_errno;
> >+			PMD_DRV_LOG(ERR, "Failed to flush flows");
> >+			return -EINVAL;
> > 		}
> > 	}
> >
> > 	return ret;
> > }
> >+
> >+static int
> >+ice_flow_query_count(struct rte_eth_dev *dev,
> >+		struct rte_flow *flow,
> >+		const struct rte_flow_action *actions,
> >+		void *data,
> >+		struct rte_flow_error *error)
> >+{
> >+	int ret = -EINVAL;
> >+	struct ice_adapter *ad =
> >+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> >+
> >+	if (!flow || !flow->engine->query) {
> >+		rte_flow_error_set(error, EINVAL,
> >+				RTE_FLOW_ERROR_TYPE_HANDLE,
> >+				NULL, "NULL flow or NULL query");
> >+		return -rte_errno;
> >+	}
> >+
> >+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> >+		switch (actions->type) {
> >+		case RTE_FLOW_ACTION_TYPE_VOID:
> >+			break;
> >+		case RTE_FLOW_ACTION_TYPE_COUNT:
> >+			ret = flow->engine->query(ad, flow, data, error);
> >+			break;
> >+		default:
> >+			return rte_flow_error_set(error, ENOTSUP,
> >+					RTE_FLOW_ERROR_TYPE_ACTION,
> >+					actions,
> >+					"action not supported");
> >+		}
> >+	}
> >+	return ret;
> >+}
> [snip]
> >+TAILQ_HEAD(ice_engine_list, ice_flow_engine);
> >+
> >+/* Struct to store flow created. */
> >+struct rte_flow {
> >+TAILQ_ENTRY(rte_flow) node;
>
> Indentation is needed here.

OK, will fix it in v2.

>
> >+	struct ice_flow_engine *engine;
> >+	void *rule;
> >+};
> >+
> >+/* Struct to store parser created. */
> >+struct ice_flow_parser {
> >+	TAILQ_ENTRY(ice_flow_parser) node;
> >+	struct ice_flow_engine *engine;
> >+	struct ice_pattern_match_item *array;
> >+	uint32_t array_len;
> >+	parse_pattern_action_t parse_pattern_action;
> >+	enum ice_flow_classification_stage stage;
> >+};
> >+
> >+void ice_register_flow_engine(struct ice_flow_engine *engine);
> >+int ice_flow_init(struct ice_adapter *ad);
> >+void ice_flow_uninit(struct ice_adapter *ad);
> >+int ice_register_parser(struct ice_flow_parser *parser,
> >+		struct ice_adapter *ad);
> >+void ice_unregister_parser(struct ice_flow_parser *parser,
> >+		struct ice_adapter *ad);
> >+struct ice_pattern_match_item *
> >+ice_search_pattern_match_item(
> >+		const struct rte_flow_item pattern[],
> >+		struct ice_pattern_match_item *array,
> >+		uint32_t array_len,
> >+		struct rte_flow_error *error);
> >
> > #endif
> >diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
> >index b88b4f59a..6b72bf252 100644
> >--- a/drivers/net/ice/ice_switch_filter.c
> >+++ b/drivers/net/ice/ice_switch_filter.c
> >@@ -2,515 +2,4 @@
> >  * Copyright(c) 2019 Intel Corporation
> >  */
> >
> >-#include
> >-#include
> >-#include
> >-#include
> >-#include
> >-#include
> >-#include
> >-
> >-#include
> >-#include
> >-#include
> >-#include
> >-#include
> >-#include
> >-#include
> >-#include
> >-
> >-#include "ice_logs.h"
> >-#include "base/ice_type.h"
> >-#include "ice_switch_filter.h"
> >-
> >-static int
> >-ice_parse_switch_filter(const struct rte_flow_item pattern[],
> >-			const struct rte_flow_action actions[],
> >-			struct rte_flow_error *error,
> >-			struct ice_adv_lkup_elem *list,
> >-			uint16_t *lkups_num,
> >-			enum ice_sw_tunnel_type tun_type)
> >-{
> >-	const struct rte_flow_item *item = pattern;
> >-	enum rte_flow_item_type item_type;
> >-	const struct rte_flow_item_eth *eth_spec, *eth_mask;
> >-	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
> >-	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
> >-	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
> >-	const struct rte_flow_item_udp *udp_spec, *udp_mask;
> >-	const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
> >-	const struct rte_flow_item_nvgre *nvgre_spec, *nvgre_mask;
> >-	const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask;
> >-	uint16_t j, t = 0;
> >-	uint16_t tunnel_valid = 0;
> >-
> >-	for (item = pattern; item->type !=
> >-			RTE_FLOW_ITEM_TYPE_END; item++) {
> >-		item_type = item->type;
> >-
> >-		switch (item_type) {
> >-		case RTE_FLOW_ITEM_TYPE_ETH:
> >-			eth_spec = item->spec;
> >-			eth_mask = item->mask;
> >-			if (eth_spec && eth_mask) {
> >-				list[t].type = (tun_type == ICE_NON_TUN) ?
> >-					ICE_MAC_OFOS : ICE_MAC_IL;
> >-				struct ice_ether_hdr *h;
> >-				struct ice_ether_hdr *m;
> >-				uint16_t i = 0;
> >-				h = &list[t].h_u.eth_hdr;
> >-				m = &list[t].m_u.eth_hdr;
> >-				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> >-					if (eth_mask->src.addr_bytes[j] ==
> >-								UINT8_MAX) {
> >-						h->src_addr[j] =
> >-						eth_spec->src.addr_bytes[j];
> >-						m->src_addr[j] =
> >-						eth_mask->src.addr_bytes[j];
> >-						i = 1;
> >-					}
> >-					if (eth_mask->dst.addr_bytes[j] ==
> >-								UINT8_MAX) {
> >-						h->dst_addr[j] =
> >-						eth_spec->dst.addr_bytes[j];
> >-						m->dst_addr[j] =
> >-						eth_mask->dst.addr_bytes[j];
> >-						i = 1;
> >-					}
> >-				}
> >-				if (i)
> >-					t++;
> >-				if (eth_mask->type == UINT16_MAX) {
> >-					list[t].type = ICE_ETYPE_OL;
> >-					list[t].h_u.ethertype.ethtype_id =
> >-						eth_spec->type;
> >-					list[t].m_u.ethertype.ethtype_id =
> >-						UINT16_MAX;
> >-					t++;
> >-				}
> >-			} else if (!eth_spec && !eth_mask) {
> >-				list[t].type = (tun_type == ICE_NON_TUN) ?
> >-					ICE_MAC_OFOS : ICE_MAC_IL;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_IPV4:
> >-			ipv4_spec = item->spec;
> >-			ipv4_mask = item->mask;
> >-			if (ipv4_spec && ipv4_mask) {
> >-				list[t].type = (tun_type == ICE_NON_TUN) ?
> >-					ICE_IPV4_OFOS : ICE_IPV4_IL;
> >-				if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
> >-					list[t].h_u.ipv4_hdr.src_addr =
> >-						ipv4_spec->hdr.src_addr;
> >-					list[t].m_u.ipv4_hdr.src_addr =
> >-						UINT32_MAX;
> >-				}
> >-				if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
> >-					list[t].h_u.ipv4_hdr.dst_addr =
> >-						ipv4_spec->hdr.dst_addr;
> >-					list[t].m_u.ipv4_hdr.dst_addr =
> >-						UINT32_MAX;
> >-				}
> >-				if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
> >-					list[t].h_u.ipv4_hdr.time_to_live =
> >-						ipv4_spec->hdr.time_to_live;
> >-					list[t].m_u.ipv4_hdr.time_to_live =
> >-						UINT8_MAX;
> >-				}
> >-				if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
> >-					list[t].h_u.ipv4_hdr.protocol =
> >-						ipv4_spec->hdr.next_proto_id;
> >-					list[t].m_u.ipv4_hdr.protocol =
> >-						UINT8_MAX;
> >-				}
> >-				if (ipv4_mask->hdr.type_of_service ==
> >-						UINT8_MAX) {
> >-					list[t].h_u.ipv4_hdr.tos =
> >-						ipv4_spec->hdr.type_of_service;
> >-					list[t].m_u.ipv4_hdr.tos = UINT8_MAX;
> >-				}
> >-				t++;
> >-			} else if (!ipv4_spec && !ipv4_mask) {
> >-				list[t].type = (tun_type == ICE_NON_TUN) ?
> >-					ICE_IPV4_OFOS : ICE_IPV4_IL;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_IPV6:
> >-			ipv6_spec = item->spec;
> >-			ipv6_mask = item->mask;
> >-			if (ipv6_spec && ipv6_mask) {
> >-				list[t].type = (tun_type == ICE_NON_TUN) ?
> >-					ICE_IPV6_OFOS : ICE_IPV6_IL;
> >-				struct ice_ipv6_hdr *f;
> >-				struct ice_ipv6_hdr *s;
> >-				f = &list[t].h_u.ipv6_hdr;
> >-				s = &list[t].m_u.ipv6_hdr;
> >-				for (j = 0; j < ICE_IPV6_ADDR_LENGTH; j++) {
> >-					if (ipv6_mask->hdr.src_addr[j] ==
> >-						UINT8_MAX) {
> >-						f->src_addr[j] =
> >-						ipv6_spec->hdr.src_addr[j];
> >-						s->src_addr[j] =
> >-						ipv6_mask->hdr.src_addr[j];
> >-					}
> >-					if (ipv6_mask->hdr.dst_addr[j] ==
> >-						UINT8_MAX) {
> >-						f->dst_addr[j] =
> >-						ipv6_spec->hdr.dst_addr[j];
> >-						s->dst_addr[j] =
> >-						ipv6_mask->hdr.dst_addr[j];
> >-					}
> >-				}
> >-				if (ipv6_mask->hdr.proto == UINT8_MAX) {
> >-					f->next_hdr =
> >-						ipv6_spec->hdr.proto;
> >-					s->next_hdr = UINT8_MAX;
> >-				}
> >-				if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
> >-					f->hop_limit =
> >-						ipv6_spec->hdr.hop_limits;
> >-					s->hop_limit = UINT8_MAX;
> >-				}
> >-				t++;
> >-			} else if (!ipv6_spec && !ipv6_mask) {
> >-				list[t].type = (tun_type == ICE_NON_TUN) ?
> >-					ICE_IPV4_OFOS : ICE_IPV4_IL;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_UDP:
> >-			udp_spec = item->spec;
> >-			udp_mask = item->mask;
> >-			if (udp_spec && udp_mask) {
> >-				if (tun_type == ICE_SW_TUN_VXLAN &&
> >-						tunnel_valid == 0)
> >-					list[t].type = ICE_UDP_OF;
> >-				else
> >-					list[t].type = ICE_UDP_ILOS;
> >-				if (udp_mask->hdr.src_port == UINT16_MAX) {
> >-					list[t].h_u.l4_hdr.src_port =
> >-						udp_spec->hdr.src_port;
> >-					list[t].m_u.l4_hdr.src_port =
> >-						udp_mask->hdr.src_port;
> >-				}
> >-				if (udp_mask->hdr.dst_port == UINT16_MAX) {
> >-					list[t].h_u.l4_hdr.dst_port =
> >-						udp_spec->hdr.dst_port;
> >-					list[t].m_u.l4_hdr.dst_port =
> >-						udp_mask->hdr.dst_port;
> >-				}
> >-				t++;
> >-			} else if (!udp_spec && !udp_mask) {
> >-				list[t].type = ICE_UDP_ILOS;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_TCP:
> >-			tcp_spec = item->spec;
> >-			tcp_mask = item->mask;
> >-			if (tcp_spec && tcp_mask) {
> >-				list[t].type = ICE_TCP_IL;
> >-				if (tcp_mask->hdr.src_port == UINT16_MAX) {
> >-					list[t].h_u.l4_hdr.src_port =
> >-						tcp_spec->hdr.src_port;
> >-					list[t].m_u.l4_hdr.src_port =
> >-						tcp_mask->hdr.src_port;
> >-				}
> >-				if (tcp_mask->hdr.dst_port == UINT16_MAX) {
> >-					list[t].h_u.l4_hdr.dst_port =
> >-						tcp_spec->hdr.dst_port;
> >-					list[t].m_u.l4_hdr.dst_port =
> >-						tcp_mask->hdr.dst_port;
> >-				}
> >-				t++;
> >-			} else if (!tcp_spec && !tcp_mask) {
> >-				list[t].type = ICE_TCP_IL;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_SCTP:
> >-			sctp_spec = item->spec;
> >-			sctp_mask = item->mask;
> >-			if (sctp_spec && sctp_mask) {
> >-				list[t].type = ICE_SCTP_IL;
> >-				if (sctp_mask->hdr.src_port == UINT16_MAX) {
> >-					list[t].h_u.sctp_hdr.src_port =
> >-						sctp_spec->hdr.src_port;
> >-					list[t].m_u.sctp_hdr.src_port =
> >-						sctp_mask->hdr.src_port;
> >-				}
> >-				if (sctp_mask->hdr.dst_port == UINT16_MAX) {
> >-					list[t].h_u.sctp_hdr.dst_port =
> >-						sctp_spec->hdr.dst_port;
> >-					list[t].m_u.sctp_hdr.dst_port =
> >-						sctp_mask->hdr.dst_port;
> >-				}
> >-				t++;
> >-			} else if (!sctp_spec && !sctp_mask) {
> >-				list[t].type = ICE_SCTP_IL;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_VXLAN:
> >-			vxlan_spec = item->spec;
> >-			vxlan_mask = item->mask;
> >-			tunnel_valid = 1;
> >-			if (vxlan_spec && vxlan_mask) {
> >-				list[t].type = ICE_VXLAN;
> >-				if (vxlan_mask->vni[0] == UINT8_MAX &&
> >-					vxlan_mask->vni[1] == UINT8_MAX &&
> >-					vxlan_mask->vni[2] == UINT8_MAX) {
> >-					list[t].h_u.tnl_hdr.vni =
> >-						(vxlan_spec->vni[2] << 16) |
> >-						(vxlan_spec->vni[1] << 8) |
> >-						vxlan_spec->vni[0];
> >-					list[t].m_u.tnl_hdr.vni =
> >-						UINT32_MAX;
> >-				}
> >-				t++;
> >-			} else if (!vxlan_spec && !vxlan_mask) {
> >-				list[t].type = ICE_VXLAN;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_NVGRE:
> >-			nvgre_spec = item->spec;
> >-			nvgre_mask = item->mask;
> >-			tunnel_valid = 1;
> >-			if (nvgre_spec && nvgre_mask) {
> >-				list[t].type = ICE_NVGRE;
> >-				if (nvgre_mask->tni[0] == UINT8_MAX &&
> >-					nvgre_mask->tni[1] == UINT8_MAX &&
> >-					nvgre_mask->tni[2] == UINT8_MAX) {
> >-					list[t].h_u.nvgre_hdr.tni_flow =
> >-						(nvgre_spec->tni[2] << 16) |
> >-						(nvgre_spec->tni[1] << 8) |
> >-						nvgre_spec->tni[0];
> >-					list[t].m_u.nvgre_hdr.tni_flow =
> >-						UINT32_MAX;
> >-				}
> >-				t++;
> >-			} else if (!nvgre_spec && !nvgre_mask) {
> >-				list[t].type = ICE_NVGRE;
> >-			}
> >-			break;
> >-
> >-		case RTE_FLOW_ITEM_TYPE_VOID:
> >-		case RTE_FLOW_ITEM_TYPE_END:
> >-			break;
> >-
> >-		default:
> >-			rte_flow_error_set(error, EINVAL,
> >-					RTE_FLOW_ERROR_TYPE_ITEM, actions,
> >-					"Invalid pattern item.");
> >-			goto out;
> >-		}
> >-	}
> >-
> >-	*lkups_num = t;
> >-
> >-	return 0;
> >-out:
> >-	return -rte_errno;
> >-}
> >-
> >-/* By now ice switch filter action code implement only
> >- * supports QUEUE or DROP.
> >- */
> >-static int
> >-ice_parse_switch_action(struct ice_pf *pf,
> >-			const struct rte_flow_action *actions,
> >-			struct rte_flow_error *error,
> >-			struct ice_adv_rule_info *rule_info)
> >-{
> >-	struct ice_vsi *vsi = pf->main_vsi;
> >-	const struct rte_flow_action_queue *act_q;
> >-	uint16_t base_queue;
> >-	const struct rte_flow_action *action;
> >-	enum rte_flow_action_type action_type;
> >-
> >-	base_queue = pf->base_queue;
> >-	for (action = actions; action->type !=
> >-			RTE_FLOW_ACTION_TYPE_END; action++) {
> >-		action_type = action->type;
> >-		switch (action_type) {
> >-		case RTE_FLOW_ACTION_TYPE_QUEUE:
> >-			act_q = action->conf;
> >-			rule_info->sw_act.fltr_act =
> >-				ICE_FWD_TO_Q;
> >-			rule_info->sw_act.fwd_id.q_id =
> >-				base_queue + act_q->index;
> >-			break;
> >-
> >-		case RTE_FLOW_ACTION_TYPE_DROP:
> >-			rule_info->sw_act.fltr_act =
> >-				ICE_DROP_PACKET;
> >-			break;
> >-
> >-		case RTE_FLOW_ACTION_TYPE_VOID:
> >-			break;
> >-
> >-		default:
> >-			rte_flow_error_set(error,
> >-					EINVAL,
> >-					RTE_FLOW_ERROR_TYPE_ITEM,
> >-					actions,
> >-					"Invalid action type");
> >-			return -rte_errno;
> >-		}
> >-	}
> >-
> >-	rule_info->sw_act.vsi_handle = vsi->idx;
> >-	rule_info->rx = 1;
> >-	rule_info->sw_act.src = vsi->idx;
> >-	rule_info->priority = 5;
> >-
> >-	return 0;
> >-}
> >-
> >-static int
> >-ice_switch_rule_set(struct ice_pf *pf,
> >-			struct ice_adv_lkup_elem *list,
> >-			uint16_t lkups_cnt,
> >-			struct ice_adv_rule_info *rule_info,
> >-			struct rte_flow *flow,
> >-			struct rte_flow_error *error)
> >-{
> >-	struct ice_hw *hw = ICE_PF_TO_HW(pf);
> >-	int ret;
> >-	struct ice_rule_query_data rule_added = {0};
> >-	struct ice_rule_query_data *filter_ptr;
> >-
> >-	if (lkups_cnt > ICE_MAX_CHAIN_WORDS) {
> >-		rte_flow_error_set(error, EINVAL,
> >-			RTE_FLOW_ERROR_TYPE_ITEM, NULL,
> >-			"item number too large for rule");
> >-		return -rte_errno;
> >-	}
> >-	if (!list) {
> >-		rte_flow_error_set(error, EINVAL,
> >-			RTE_FLOW_ERROR_TYPE_ITEM, NULL,
> >-			"lookup list should not be NULL");
> >-		return -rte_errno;
> >-	}
> >-
> >-	ret = ice_add_adv_rule(hw, list, lkups_cnt, rule_info, &rule_added);
> >-
> >-	if (!ret) {
> >-		filter_ptr = rte_zmalloc("ice_switch_filter",
> >-			sizeof(struct ice_rule_query_data),
> >-			0);
> >-		if (!filter_ptr) {
> >-			PMD_DRV_LOG(ERR, "failed to allocate memory");
> >-			return -EINVAL;
> >-		}
> >-		flow->rule = filter_ptr;
> >-		rte_memcpy(filter_ptr,
> >-			&rule_added,
> >-			sizeof(struct ice_rule_query_data));
> >-	}
> >-
> >-	return ret;
> >-}
> >-
> >-int
> >-ice_create_switch_filter(struct ice_pf *pf,
> >-			const struct rte_flow_item pattern[],
> >-			const struct rte_flow_action actions[],
> >-			struct rte_flow *flow,
> >-			struct rte_flow_error *error)
> >-{
> >-	int ret = 0;
> >-	struct ice_adv_rule_info rule_info = {0};
> >-	struct ice_adv_lkup_elem *list = NULL;
> >-	uint16_t lkups_num = 0;
> >-	const struct rte_flow_item *item = pattern;
> >-	uint16_t item_num = 0;
> >-	enum ice_sw_tunnel_type tun_type = ICE_NON_TUN;
> >-
> >-	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> >-		item_num++;
> >-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
> >-			tun_type = ICE_SW_TUN_VXLAN;
> >-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> >-			tun_type = ICE_SW_TUN_NVGRE;
> >-		/* reserve one more memory slot for ETH which may
> >-		 * consume 2 lookup items.
> >-		 */
> >-		if (item->type == RTE_FLOW_ITEM_TYPE_ETH)
> >-			item_num++;
> >-	}
> >-	rule_info.tun_type = tun_type;
> >-
> >-	list = rte_zmalloc(NULL, item_num * sizeof(*list), 0);
> >-	if (!list) {
> >-		rte_flow_error_set(error, EINVAL,
> >-			RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >-			"No memory for PMD internal items");
> >-		return -rte_errno;
> >-	}
> >-
> >-	ret = ice_parse_switch_filter(pattern, actions, error,
> >-			list, &lkups_num, tun_type);
> >-	if (ret)
> >-		goto error;
> >-
> >-	ret = ice_parse_switch_action(pf, actions, error, &rule_info);
> >-	if (ret)
> >-		goto error;
> >-
> >-	ret = ice_switch_rule_set(pf, list, lkups_num, &rule_info, flow, error);
> >-	if (ret)
> >-		goto error;
> >-
> >-	rte_free(list);
> >-	return 0;
> >-
> >-error:
> >-	rte_free(list);
> >-
> >-	return -rte_errno;
> >-}
> >-
> >-int
> >-ice_destroy_switch_filter(struct ice_pf *pf,
> >-			struct rte_flow *flow,
> >-			struct rte_flow_error *error)
> >-{
> >-	struct ice_hw *hw = ICE_PF_TO_HW(pf);
> >-	int ret;
> >-	struct ice_rule_query_data *filter_ptr;
> >-
> >-	filter_ptr = (struct ice_rule_query_data *)
> >-			flow->rule;
> >-
> >-	if (!filter_ptr) {
> >-		rte_flow_error_set(error, EINVAL,
> >-			RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >-			"no such flow"
> >-			" create by switch filter");
> >-		return -rte_errno;
> >-	}
> >-
> >-	ret = ice_rem_adv_rule_by_id(hw, filter_ptr);
> >-	if (ret) {
> >-		rte_flow_error_set(error, EINVAL,
> >-			RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >-			"fail to destroy switch filter rule");
> >-		return -rte_errno;
> >-	}
> >-
> >-	rte_free(filter_ptr);
> >-	return ret;
> >-}
> >-
> >-void
> >-ice_free_switch_filter_rule(void *rule)
> >-{
> >-	struct ice_rule_query_data *filter_ptr;
> >-
> >-	filter_ptr = (struct ice_rule_query_data *)rule;
> >-
> >-	rte_free(filter_ptr);
> >-}
> >diff --git a/drivers/net/ice/ice_switch_filter.h b/drivers/net/ice/ice_switch_filter.h
> >index cea47990e..5afcddeaf 100644
> >--- a/drivers/net/ice/ice_switch_filter.h
> >+++ b/drivers/net/ice/ice_switch_filter.h
> >@@ -2,23 +2,5 @@
> >  * Copyright(c) 2019 Intel Corporation
> >  */
> >
> >-#ifndef _ICE_SWITCH_FILTER_H_
> >-#define _ICE_SWITCH_FILTER_H_
> >
> >-#include "base/ice_switch.h"
> >-#include "base/ice_type.h"
> >-#include "ice_ethdev.h"
> >
> >-int
> >-ice_create_switch_filter(struct ice_pf *pf,
> >-			const struct rte_flow_item pattern[],
> >-			const struct rte_flow_action actions[],
> >-			struct rte_flow *flow,
> >-			struct rte_flow_error *error);
> >-int
> >-ice_destroy_switch_filter(struct ice_pf *pf,
> >-			struct rte_flow *flow,
> >-			struct rte_flow_error *error);
> >-void
> >-ice_free_switch_filter_rule(void *rule);
> >-#endif /* _ICE_SWITCH_FILTER_H_ */
> >--
> >2.15.1
> >
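P.S. For completeness, the new ice_flow_query_count() path is reached
through rte_flow_query() with a COUNT action, e.g. (a sketch only; it
reuses the port_id/flow/err variables assumed in the earlier snippets):

	struct rte_flow_query_count stats = { .reset = 1 };
	struct rte_flow_action count_action[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* the PMD walks the action list and calls
	 * flow->engine->query(ad, flow, data, error) for COUNT */
	if (rte_flow_query(port_id, flow, count_action, &stats, &err) == 0)
		printf("hits: %" PRIu64 " bytes: %" PRIu64 "\n",
				stats.hits, stats.bytes);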