From: "Wang, Ying A"
To: "Ye, Xiaolong"
CC: "Zhang, Qi Z", "Yang, Qiming", "dev@dpdk.org", "Zhao1, Wei"
Subject: Re: [dpdk-dev] [PATCH 2/4] net/ice: rework for generic flow enabling
Date: Mon, 9 Sep 2019 02:12:16 +0000
Message-ID: <44DE8E8A53B4014CA1985CEE86C07F2A0B989E2C@SHSMSX101.ccr.corp.intel.com>
References: <20190903221522.151382-1-ying.a.wang@intel.com> <20190903221522.151382-3-ying.a.wang@intel.com> <20190908155541.GH110251@intel.com>
In-Reply-To: <20190908155541.GH110251@intel.com>

Hi, Xiaolong

> -----Original Message-----
> From: Ye, Xiaolong
> Sent: Sunday, September 8, 2019 11:56 PM
> To: Wang,
Ying A > Cc: Zhang, Qi Z ; Yang, Qiming > ; dev@dpdk.org; Zhao1, Wei > Subject: Re: [PATCH 2/4] net/ice: rework for generic flow enabling >=20 > On 09/04, Ying Wang wrote: > >The patch reworks the generic flow API (rte_flow) implementation. > >It introduces an abstract layer which provides a unified interface for > >low-level filter engine (switch, fdir, hash) to register supported > >patterns and actions and implement flow validate/create/destroy/flush/ > >query activities. > > > >The patch also removes the existing switch filter implementation to > >avoid compile error. Switch filter implementation for the new framework > >will be added in the following patch. > > > >Signed-off-by: Ying Wang > >--- > > drivers/net/ice/ice_ethdev.c | 22 +- > > drivers/net/ice/ice_ethdev.h | 15 +- > > drivers/net/ice/ice_generic_flow.c | 768 > >+++++++++++++++-------------------- > > drivers/net/ice/ice_generic_flow.h | 782 > >++++++++---------------------------- > > drivers/net/ice/ice_switch_filter.c | 511 ----------------------- > >drivers/net/ice/ice_switch_filter.h | 18 - > > 6 files changed, 525 insertions(+), 1591 deletions(-) > > > >diff --git a/drivers/net/ice/ice_ethdev.c > >b/drivers/net/ice/ice_ethdev.c index 4e0645db1..647aca3ed 100644 > >--- a/drivers/net/ice/ice_ethdev.c > >+++ b/drivers/net/ice/ice_ethdev.c > >@@ -15,7 +15,7 @@ > > #include "base/ice_dcb.h" > > #include "ice_ethdev.h" > > #include "ice_rxtx.h" > >-#include "ice_switch_filter.h" > >+#include "ice_generic_flow.h" > > > > /* devargs */ > > #define ICE_SAFE_MODE_SUPPORT_ARG "safe-mode-support" > >@@ -1677,7 +1677,11 @@ ice_dev_init(struct rte_eth_dev *dev) > > /* get base queue pairs index in the device */ > > ice_base_queue_get(pf); > > > >- TAILQ_INIT(&pf->flow_list); > >+ ret =3D ice_flow_init(ad); > >+ if (ret) { > >+ PMD_INIT_LOG(ERR, "Failed to initialize flow"); > >+ return ret; > >+ } > > > > return 0; > > > >@@ -1796,6 +1800,8 @@ ice_dev_close(struct rte_eth_dev *dev) { > > struct ice_pf *pf =3D ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > struct ice_hw *hw =3D ICE_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > >+ struct ice_adapter *ad =3D > >+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > > > > /* Since stop will make link down, then the link event will be > > * triggered, disable the irq firstly to avoid the port_infoe etc @@ > >-1806,6 +1812,8 @@ ice_dev_close(struct rte_eth_dev *dev) > > > > ice_dev_stop(dev); > > > >+ ice_flow_uninit(ad); > >+ > > /* release all queue resource */ > > ice_free_queues(dev); > > > >@@ -1822,8 +1830,6 @@ ice_dev_uninit(struct rte_eth_dev *dev) { > > struct rte_pci_device *pci_dev =3D RTE_ETH_DEV_TO_PCI(dev); > > struct rte_intr_handle *intr_handle =3D &pci_dev->intr_handle; > >- struct ice_pf *pf =3D ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > >- struct rte_flow *p_flow; > > > > ice_dev_close(dev); > > > >@@ -1840,14 +1846,6 @@ ice_dev_uninit(struct rte_eth_dev *dev) > > /* unregister callback func from eal lib */ > > rte_intr_callback_unregister(intr_handle, > > ice_interrupt_handler, dev); > >- > >- /* Remove all flows */ > >- while ((p_flow =3D TAILQ_FIRST(&pf->flow_list))) { > >- TAILQ_REMOVE(&pf->flow_list, p_flow, node); > >- ice_free_switch_filter_rule(p_flow->rule); > >- rte_free(p_flow); > >- } > >- > > return 0; > > } > > > >diff --git a/drivers/net/ice/ice_ethdev.h > >b/drivers/net/ice/ice_ethdev.h index 9bf5de08d..d1d07641d 100644 > >--- a/drivers/net/ice/ice_ethdev.h > >+++ b/drivers/net/ice/ice_ethdev.h > >@@ -241,16 +241,14 @@ struct ice_vsi { > > 
bool offset_loaded; > > }; > > > >-extern const struct rte_flow_ops ice_flow_ops; > >- > >-/* Struct to store flow created. */ > >-struct rte_flow { > >- TAILQ_ENTRY(rte_flow) node; > >- void *rule; > >-}; > > > >+struct rte_flow; > > TAILQ_HEAD(ice_flow_list, rte_flow); > > > >+ > >+struct ice_flow_parser; > >+TAILQ_HEAD(ice_parser_list, ice_flow_parser); > >+ > > struct ice_pf { > > struct ice_adapter *adapter; /* The adapter this PF associate to */ > > struct ice_vsi *main_vsi; /* pointer to main VSI structure */ @@ > >-278,6 +276,9 @@ struct ice_pf { > > bool offset_loaded; > > bool adapter_stopped; > > struct ice_flow_list flow_list; > >+ struct ice_parser_list rss_parser_list; > >+ struct ice_parser_list perm_parser_list; > >+ struct ice_parser_list dist_parser_list; > > }; > > > > /** > >diff --git a/drivers/net/ice/ice_generic_flow.c > >b/drivers/net/ice/ice_generic_flow.c > >index 1c0adc779..aa11d6170 100644 > >--- a/drivers/net/ice/ice_generic_flow.c > >+++ b/drivers/net/ice/ice_generic_flow.c > >@@ -17,7 +17,22 @@ > > > > #include "ice_ethdev.h" > > #include "ice_generic_flow.h" > >-#include "ice_switch_filter.h" > >+ > >+/** > >+ * Non-pipeline mode, fdir and swith both used as distributor, > >+ * fdir used first, switch used as fdir's backup. > >+ */ > >+#define ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY 0 /*Pipeline mode, > >+switch used at permission stage*/ #define > >+ICE_FLOW_CLASSIFY_STAGE_PERMISSION 1 /*Pipeline mode, fdir used at > >+distributor stage*/ #define ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR 2 > >+ > >+static int ice_pipeline_stage =3D > >+ ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY; > >+ > >+static struct ice_engine_list engine_list =3D > >+ TAILQ_HEAD_INITIALIZER(engine_list); > > > > static int ice_flow_validate(struct rte_eth_dev *dev, > > const struct rte_flow_attr *attr, > >@@ -34,17 +49,153 @@ static int ice_flow_destroy(struct rte_eth_dev *dev= , > > struct rte_flow_error *error); > > static int ice_flow_flush(struct rte_eth_dev *dev, > > struct rte_flow_error *error); > >+static int ice_flow_query_count(struct rte_eth_dev *dev, > >+ struct rte_flow *flow, > >+ const struct rte_flow_action *actions, > >+ void *data, > >+ struct rte_flow_error *error); > > > > const struct rte_flow_ops ice_flow_ops =3D { > > .validate =3D ice_flow_validate, > > .create =3D ice_flow_create, > > .destroy =3D ice_flow_destroy, > > .flush =3D ice_flow_flush, > >+ .query =3D ice_flow_query_count, > > }; > > > >+ > >+void > >+ice_register_flow_engine(struct ice_flow_engine *engine) { > >+ TAILQ_INSERT_TAIL(&engine_list, engine, node); } > >+ > >+int > >+ice_flow_init(struct ice_adapter *ad) > >+{ > >+ int ret =3D 0; > >+ struct ice_pf *pf =3D &ad->pf; > >+ void *temp; > >+ struct ice_flow_engine *engine =3D NULL; > >+ > >+ TAILQ_INIT(&pf->flow_list); > >+ TAILQ_INIT(&pf->rss_parser_list); > >+ TAILQ_INIT(&pf->perm_parser_list); > >+ TAILQ_INIT(&pf->dist_parser_list); > >+ > >+ TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) { > >+ if (engine->init =3D=3D NULL) >=20 > What about provide some debug log info here? Adding one engine name > member to struct ice_flow_engine may help. It's a good suggestion. struct ice_flow_engine has engine_type already. I w= ill add debug log info in v2. 
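
For reference, a rough sketch of what the v2 loop in ice_flow_init() might look like, reusing the engine->type field already defined in this patch; the log wording and return code below are only illustrative, not the final v2 code:

int
ice_flow_init(struct ice_adapter *ad)
{
	int ret;
	void *temp;
	struct ice_pf *pf = &ad->pf;
	struct ice_flow_engine *engine = NULL;

	TAILQ_INIT(&pf->flow_list);
	TAILQ_INIT(&pf->rss_parser_list);
	TAILQ_INIT(&pf->perm_parser_list);
	TAILQ_INIT(&pf->dist_parser_list);

	/* Walk all registered engines and report which one is broken
	 * or fails to initialize, using the existing engine->type field.
	 */
	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
		if (engine->init == NULL) {
			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
				     engine->type);
			return -EINVAL;
		}

		ret = engine->init(ad);
		if (ret) {
			PMD_INIT_LOG(ERR, "Failed to initialize engine %d",
				     engine->type);
			return ret;
		}
	}
	return 0;
}
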
>=20 > >+ return -EINVAL; > >+ > >+ ret =3D engine->init(ad); > >+ if (ret) > >+ return ret; > >+ } > >+ return 0; > >+} > >+ > >+void > >+ice_flow_uninit(struct ice_adapter *ad) { > >+ struct ice_pf *pf =3D &ad->pf; > >+ struct ice_flow_engine *engine; > >+ struct rte_flow *p_flow; > >+ struct ice_flow_parser *p_parser; > >+ void *temp; > >+ > >+ TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) { > >+ if (engine->uninit) > >+ engine->uninit(ad); > >+ } > >+ > >+ /* Remove all flows */ > >+ while ((p_flow =3D TAILQ_FIRST(&pf->flow_list))) { > >+ TAILQ_REMOVE(&pf->flow_list, p_flow, node); > >+ if (p_flow->engine->free) > >+ p_flow->engine->free(p_flow); > >+ rte_free(p_flow); > >+ } > >+ > >+ /* Cleanup parser list */ > >+ while ((p_parser =3D TAILQ_FIRST(&pf->rss_parser_list))) > >+ TAILQ_REMOVE(&pf->rss_parser_list, p_parser, node); > >+ > >+ while ((p_parser =3D TAILQ_FIRST(&pf->perm_parser_list))) > >+ TAILQ_REMOVE(&pf->perm_parser_list, p_parser, node); > >+ > >+ while ((p_parser =3D TAILQ_FIRST(&pf->dist_parser_list))) > >+ TAILQ_REMOVE(&pf->dist_parser_list, p_parser, node); } > >+ > >+int > >+ice_register_parser(struct ice_flow_parser *parser, > >+ struct ice_adapter *ad) > >+{ > >+ struct ice_parser_list *list =3D NULL; > >+ struct ice_pf *pf =3D &ad->pf; > >+ > >+ switch (parser->stage) { > >+ case ICE_FLOW_STAGE_RSS: > >+ list =3D &pf->rss_parser_list; > >+ break; > >+ case ICE_FLOW_STAGE_PERMISSION: > >+ list =3D &pf->perm_parser_list; > >+ break; > >+ case ICE_FLOW_STAGE_DISTRIBUTOR: > >+ list =3D &pf->dist_parser_list; > >+ break; > >+ default: > >+ return -EINVAL; > >+ } > >+ > >+ if (ad->devargs.pipeline_mode_support) > >+ TAILQ_INSERT_TAIL(list, parser, node); > >+ else { > >+ if (parser->engine->type =3D=3D ICE_FLOW_ENGINE_SWITCH > >+ || parser->engine->type =3D=3D ICE_FLOW_ENGINE_HASH) > >+ TAILQ_INSERT_TAIL(list, parser, node); > >+ else if (parser->engine->type =3D=3D ICE_FLOW_ENGINE_FDIR) > >+ TAILQ_INSERT_HEAD(list, parser, node); > >+ else > >+ return -EINVAL; > >+ } > >+ return 0; > >+} > >+ > >+void > >+ice_unregister_parser(struct ice_flow_parser *parser, > >+ struct ice_adapter *ad) > >+{ > >+ struct ice_pf *pf =3D &ad->pf; > >+ struct ice_parser_list *list; > >+ struct ice_flow_parser *p_parser; > >+ void *temp; > >+ > >+ switch (parser->stage) { > >+ case ICE_FLOW_STAGE_RSS: > >+ list =3D &pf->rss_parser_list; > >+ break; > >+ case ICE_FLOW_STAGE_PERMISSION: > >+ list =3D &pf->perm_parser_list; > >+ break; > >+ case ICE_FLOW_STAGE_DISTRIBUTOR: > >+ list =3D &pf->dist_parser_list; > >+ break; > >+ default: > >+ return; > >+ } > >+ > >+ TAILQ_FOREACH_SAFE(p_parser, list, node, temp) { > >+ if (p_parser->engine->type =3D=3D parser->engine->type) > >+ TAILQ_REMOVE(list, p_parser, node); > >+ } > >+ > >+} > >+ > > static int > >-ice_flow_valid_attr(const struct rte_flow_attr *attr, > >- struct rte_flow_error *error) > >+ice_flow_valid_attr(struct ice_adapter *ad, > >+ const struct rte_flow_attr *attr, > >+ struct rte_flow_error *error) > > { > > /* Must be input direction */ > > if (!attr->ingress) { > >@@ -61,15 +212,25 @@ ice_flow_valid_attr(const struct rte_flow_attr *att= r, > > attr, "Not support egress."); > > return -rte_errno; > > } > >- > >- /* Not supported */ > >- if (attr->priority) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, > >- attr, "Not support priority."); > >- return -rte_errno; > >+ /* Check pipeline mode support to set classification stage */ > >+ if (ad->devargs.pipeline_mode_support) { > >+ if (0 
=3D=3D attr->priority) > >+ ice_pipeline_stage =3D > >+ ICE_FLOW_CLASSIFY_STAGE_PERMISSION; > >+ else > >+ ice_pipeline_stage =3D > >+ ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR; > >+ } else { > >+ ice_pipeline_stage =3D > >+ ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY; >=20 > Do we really this assignment? Yes. We use devargs.pipeline_mode_support as a hint to decide which mode to= use, 1 for pipeline mode, 0 for non-pipeline mode. By default, non-pipeline mode is used and both switch/fdir used as distribu= tor, switch is fdir's backup.=20 In pipeline mode, attr->priority is enabled, 0 for permission stage and 1 = for distributor stage. >=20 > >+ /* Not supported */ > >+ if (attr->priority) { > >+ rte_flow_error_set(error, EINVAL, > >+ > RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, > >+ attr, "Not support priority."); > >+ return -rte_errno; > >+ } > > } > >- >=20 > Unrelated change. >=20 > > /* Not supported */ > > if (attr->group) { > > rte_flow_error_set(error, EINVAL, > >@@ -102,7 +263,7 @@ ice_find_first_item(const struct rte_flow_item > >*item, bool is_void) > > /* Skip all VOID items of the pattern */ static void > >ice_pattern_skip_void_item(struct rte_flow_item *items, > >- const struct rte_flow_item *pattern) > >+ const struct rte_flow_item *pattern) > > { > > uint32_t cpy_count =3D 0; > > const struct rte_flow_item *pb =3D pattern, *pe =3D pattern; @@ -124,7 > >+285,6 @@ ice_pattern_skip_void_item(struct rte_flow_item *items, > > items +=3D cpy_count; > > > > if (pe->type =3D=3D RTE_FLOW_ITEM_TYPE_END) { > >- pb =3D pe; > > break; > > } > > > >@@ -151,11 +311,15 @@ ice_match_pattern(enum rte_flow_item_type > *item_array, > > item->type =3D=3D RTE_FLOW_ITEM_TYPE_END); } > > > >-static uint64_t ice_flow_valid_pattern(const struct rte_flow_item > >pattern[], > >+struct ice_pattern_match_item * > >+ice_search_pattern_match_item(const struct rte_flow_item pattern[], > >+ struct ice_pattern_match_item *array, > >+ uint32_t array_len, > > struct rte_flow_error *error) > > { > > uint16_t i =3D 0; > >- uint64_t inset; > >+ struct ice_pattern_match_item *pattern_match_item; > >+ /* need free by each filter */ > > struct rte_flow_item *items; /* used for pattern without VOID items */ > > uint32_t item_num =3D 0; /* non-void item number */ > > > >@@ -172,451 +336,149 @@ static uint64_t ice_flow_valid_pattern(const > struct rte_flow_item pattern[], > > if (!items) { > > rte_flow_error_set(error, ENOMEM, > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > NULL, "No memory for PMD internal items."); > >- return -ENOMEM; > >+ return NULL; > >+ } > >+ pattern_match_item =3D rte_zmalloc("ice_pattern_match_item", > >+ sizeof(struct ice_pattern_match_item), 0); > >+ if (!pattern_match_item) { > >+ PMD_DRV_LOG(ERR, "Failed to allocate memory."); >=20 > Use rte_flow_error_set to align with others. OK, will fix it in v2. 
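
For example, the v2 code could become something like this (the error type and message below are only placeholders):

	pattern_match_item = rte_zmalloc("ice_pattern_match_item",
			sizeof(struct ice_pattern_match_item), 0);
	if (!pattern_match_item) {
		rte_flow_error_set(error, ENOMEM,
				RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
				"No memory for pattern match item.");
		return NULL;
	}
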
>=20 > >+ return NULL; > > } > >- > > ice_pattern_skip_void_item(items, pattern); > > > >- for (i =3D 0; i < RTE_DIM(ice_supported_patterns); i++) > >- if (ice_match_pattern(ice_supported_patterns[i].items, > >+ for (i =3D 0; i < array_len; i++) > >+ if (ice_match_pattern(array[i].pattern_list, > > items)) { > >- inset =3D ice_supported_patterns[i].sw_fields; > >+ pattern_match_item->input_set_mask =3D > >+ array[i].input_set_mask; > >+ pattern_match_item->pattern_list =3D > >+ array[i].pattern_list; > >+ pattern_match_item->meta =3D array[i].meta; > > rte_free(items); > >- return inset; > >+ return pattern_match_item; > > } > > rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, > > pattern, "Unsupported pattern"); > > > > rte_free(items); > >- return 0; > >-} > >- > >-static uint64_t ice_get_flow_field(const struct rte_flow_item pattern[]= , > >- struct rte_flow_error *error) > >-{ > >- const struct rte_flow_item *item =3D pattern; > >- const struct rte_flow_item_eth *eth_spec, *eth_mask; > >- const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask; > >- const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask; > >- const struct rte_flow_item_tcp *tcp_spec, *tcp_mask; > >- const struct rte_flow_item_udp *udp_spec, *udp_mask; > >- const struct rte_flow_item_sctp *sctp_spec, *sctp_mask; > >- const struct rte_flow_item_icmp *icmp_mask; > >- const struct rte_flow_item_icmp6 *icmp6_mask; > >- const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask; > >- const struct rte_flow_item_nvgre *nvgre_spec, *nvgre_mask; > >- enum rte_flow_item_type item_type; > >- uint8_t ipv6_addr_mask[16] =3D { > >- 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, > >- 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }; > >- uint64_t input_set =3D ICE_INSET_NONE; > >- bool is_tunnel =3D false; > >- > >- for (; item->type !=3D RTE_FLOW_ITEM_TYPE_END; item++) { > >- if (item->last) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Not support range"); > >- return 0; > >- } > >- item_type =3D item->type; > >- switch (item_type) { > >- case RTE_FLOW_ITEM_TYPE_ETH: > >- eth_spec =3D item->spec; > >- eth_mask =3D item->mask; > >- > >- if (eth_spec && eth_mask) { > >- if (rte_is_broadcast_ether_addr(ð_mask- > >src)) > >- input_set |=3D ICE_INSET_SMAC; > >- if (rte_is_broadcast_ether_addr(ð_mask- > >dst)) > >- input_set |=3D ICE_INSET_DMAC; > >- if (eth_mask->type =3D=3D RTE_BE16(0xffff)) > >- input_set |=3D ICE_INSET_ETHERTYPE; > >- } > >- break; > >- case RTE_FLOW_ITEM_TYPE_IPV4: > >- ipv4_spec =3D item->spec; > >- ipv4_mask =3D item->mask; > >- > >- if (!(ipv4_spec && ipv4_mask)) > >- break; > >- > >- /* Check IPv4 mask and update input set */ > >- if (ipv4_mask->hdr.version_ihl || > >- ipv4_mask->hdr.total_length || > >- ipv4_mask->hdr.packet_id || > >- ipv4_mask->hdr.hdr_checksum) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid IPv4 mask."); > >- return 0; > >- } > >- > >- if (is_tunnel) { > >- if (ipv4_mask->hdr.src_addr =3D=3D UINT32_MAX) > >- input_set |=3D > ICE_INSET_TUN_IPV4_SRC; > >- if (ipv4_mask->hdr.dst_addr =3D=3D UINT32_MAX) > >- input_set |=3D > ICE_INSET_TUN_IPV4_DST; > >- if (ipv4_mask->hdr.time_to_live =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_TUN_IPV4_TTL; > >- if (ipv4_mask->hdr.next_proto_id =3D=3D > UINT8_MAX) > >- input_set |=3D > ICE_INSET_TUN_IPV4_PROTO; > >- } else { > >- if (ipv4_mask->hdr.src_addr =3D=3D UINT32_MAX) > >- input_set |=3D ICE_INSET_IPV4_SRC; > >- if (ipv4_mask->hdr.dst_addr =3D=3D 
UINT32_MAX) > >- input_set |=3D ICE_INSET_IPV4_DST; > >- if (ipv4_mask->hdr.time_to_live =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_IPV4_TTL; > >- if (ipv4_mask->hdr.next_proto_id =3D=3D > UINT8_MAX) > >- input_set |=3D ICE_INSET_IPV4_PROTO; > >- if (ipv4_mask->hdr.type_of_service =3D=3D > UINT8_MAX) > >- input_set |=3D ICE_INSET_IPV4_TOS; > >- } > >- break; > >- case RTE_FLOW_ITEM_TYPE_IPV6: > >- ipv6_spec =3D item->spec; > >- ipv6_mask =3D item->mask; > >- > >- if (!(ipv6_spec && ipv6_mask)) > >- break; > >- > >- if (ipv6_mask->hdr.payload_len) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid IPv6 mask"); > >- return 0; > >- } > >- > >- if (is_tunnel) { > >- if (!memcmp(ipv6_mask->hdr.src_addr, > >- ipv6_addr_mask, > >- RTE_DIM(ipv6_mask->hdr.src_addr))) > >- input_set |=3D > ICE_INSET_TUN_IPV6_SRC; > >- if (!memcmp(ipv6_mask->hdr.dst_addr, > >- ipv6_addr_mask, > >- RTE_DIM(ipv6_mask->hdr.dst_addr))) > >- input_set |=3D > ICE_INSET_TUN_IPV6_DST; > >- if (ipv6_mask->hdr.proto =3D=3D UINT8_MAX) > >- input_set |=3D > ICE_INSET_TUN_IPV6_PROTO; > >- if (ipv6_mask->hdr.hop_limits =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_TUN_IPV6_TTL; > >- } else { > >- if (!memcmp(ipv6_mask->hdr.src_addr, > >- ipv6_addr_mask, > >- RTE_DIM(ipv6_mask->hdr.src_addr))) > >- input_set |=3D ICE_INSET_IPV6_SRC; > >- if (!memcmp(ipv6_mask->hdr.dst_addr, > >- ipv6_addr_mask, > >- RTE_DIM(ipv6_mask->hdr.dst_addr))) > >- input_set |=3D ICE_INSET_IPV6_DST; > >- if (ipv6_mask->hdr.proto =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_IPV6_PROTO; > >- if (ipv6_mask->hdr.hop_limits =3D=3D UINT8_MAX) > >- input_set |=3D > ICE_INSET_IPV6_HOP_LIMIT; > >- if ((ipv6_mask->hdr.vtc_flow & > >- > rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK)) > >- =3D=3D rte_cpu_to_be_32 > >- (RTE_IPV6_HDR_TC_MASK)) > >- input_set |=3D ICE_INSET_IPV6_TOS; > >- } > >- > >- break; > >- case RTE_FLOW_ITEM_TYPE_UDP: > >- udp_spec =3D item->spec; > >- udp_mask =3D item->mask; > >- > >- if (!(udp_spec && udp_mask)) > >- break; > >- > >- /* Check UDP mask and update input set*/ > >- if (udp_mask->hdr.dgram_len || > >- udp_mask->hdr.dgram_cksum) { > >- rte_flow_error_set(error, EINVAL, > >- > RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid UDP mask"); > >- return 0; > >- } > >- > >- if (is_tunnel) { > >- if (udp_mask->hdr.src_port =3D=3D UINT16_MAX) > >- input_set |=3D > ICE_INSET_TUN_SRC_PORT; > >- if (udp_mask->hdr.dst_port =3D=3D UINT16_MAX) > >- input_set |=3D > ICE_INSET_TUN_DST_PORT; > >- } else { > >- if (udp_mask->hdr.src_port =3D=3D UINT16_MAX) > >- input_set |=3D ICE_INSET_SRC_PORT; > >- if (udp_mask->hdr.dst_port =3D=3D UINT16_MAX) > >- input_set |=3D ICE_INSET_DST_PORT; > >- } > >- > >- break; > >- case RTE_FLOW_ITEM_TYPE_TCP: > >- tcp_spec =3D item->spec; > >- tcp_mask =3D item->mask; > >- > >- if (!(tcp_spec && tcp_mask)) > >- break; > >- > >- /* Check TCP mask and update input set */ > >- if (tcp_mask->hdr.sent_seq || > >- tcp_mask->hdr.recv_ack || > >- tcp_mask->hdr.data_off || > >- tcp_mask->hdr.tcp_flags || > >- tcp_mask->hdr.rx_win || > >- tcp_mask->hdr.cksum || > >- tcp_mask->hdr.tcp_urp) { > >- rte_flow_error_set(error, EINVAL, > >- > RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid TCP mask"); > >- return 0; > >- } > >- > >- if (is_tunnel) { > >- if (tcp_mask->hdr.src_port =3D=3D UINT16_MAX) > >- input_set |=3D > ICE_INSET_TUN_SRC_PORT; > >- if (tcp_mask->hdr.dst_port =3D=3D UINT16_MAX) > >- input_set |=3D > ICE_INSET_TUN_DST_PORT; > >- } else { > >- if (tcp_mask->hdr.src_port 
=3D=3D UINT16_MAX) > >- input_set |=3D ICE_INSET_SRC_PORT; > >- if (tcp_mask->hdr.dst_port =3D=3D UINT16_MAX) > >- input_set |=3D ICE_INSET_DST_PORT; > >- } > >- > >- break; > >- case RTE_FLOW_ITEM_TYPE_SCTP: > >- sctp_spec =3D item->spec; > >- sctp_mask =3D item->mask; > >- > >- if (!(sctp_spec && sctp_mask)) > >- break; > >- > >- /* Check SCTP mask and update input set */ > >- if (sctp_mask->hdr.cksum) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid SCTP mask"); > >- return 0; > >- } > >- > >- if (is_tunnel) { > >- if (sctp_mask->hdr.src_port =3D=3D UINT16_MAX) > >- input_set |=3D > ICE_INSET_TUN_SRC_PORT; > >- if (sctp_mask->hdr.dst_port =3D=3D UINT16_MAX) > >- input_set |=3D > ICE_INSET_TUN_DST_PORT; > >- } else { > >- if (sctp_mask->hdr.src_port =3D=3D UINT16_MAX) > >- input_set |=3D ICE_INSET_SRC_PORT; > >- if (sctp_mask->hdr.dst_port =3D=3D UINT16_MAX) > >- input_set |=3D ICE_INSET_DST_PORT; > >- } > >- > >- break; > >- case RTE_FLOW_ITEM_TYPE_ICMP: > >- icmp_mask =3D item->mask; > >- if (icmp_mask->hdr.icmp_code || > >- icmp_mask->hdr.icmp_cksum || > >- icmp_mask->hdr.icmp_ident || > >- icmp_mask->hdr.icmp_seq_nb) { > >- rte_flow_error_set(error, EINVAL, > >- > RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid ICMP mask"); > >- return 0; > >- } > >- > >- if (icmp_mask->hdr.icmp_type =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_ICMP; > >- break; > >- case RTE_FLOW_ITEM_TYPE_ICMP6: > >- icmp6_mask =3D item->mask; > >- if (icmp6_mask->code || > >- icmp6_mask->checksum) { > >- rte_flow_error_set(error, EINVAL, > >- > RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid ICMP6 mask"); > >- return 0; > >- } > >- > >- if (icmp6_mask->type =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_ICMP6; > >- break; > >- case RTE_FLOW_ITEM_TYPE_VXLAN: > >- vxlan_spec =3D item->spec; > >- vxlan_mask =3D item->mask; > >- /* Check if VXLAN item is used to describe protocol. > >- * If yes, both spec and mask should be NULL. > >- * If no, both spec and mask shouldn't be NULL. > >- */ > >- if ((!vxlan_spec && vxlan_mask) || > >- (vxlan_spec && !vxlan_mask)) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid VXLAN item"); > >- return 0; > >- } > >- if (vxlan_mask && vxlan_mask->vni[0] =3D=3D UINT8_MAX > && > >- vxlan_mask->vni[1] =3D=3D UINT8_MAX && > >- vxlan_mask->vni[2] =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_TUN_ID; > >- is_tunnel =3D 1; > >- > >- break; > >- case RTE_FLOW_ITEM_TYPE_NVGRE: > >- nvgre_spec =3D item->spec; > >- nvgre_mask =3D item->mask; > >- /* Check if NVGRE item is used to describe protocol. > >- * If yes, both spec and mask should be NULL. > >- * If no, both spec and mask shouldn't be NULL. 
> >- */ > >- if ((!nvgre_spec && nvgre_mask) || > >- (nvgre_spec && !nvgre_mask)) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid NVGRE item"); > >- return 0; > >- } > >- if (nvgre_mask && nvgre_mask->tni[0] =3D=3D UINT8_MAX > && > >- nvgre_mask->tni[1] =3D=3D UINT8_MAX && > >- nvgre_mask->tni[2] =3D=3D UINT8_MAX) > >- input_set |=3D ICE_INSET_TUN_ID; > >- is_tunnel =3D 1; > >- > >- break; > >- case RTE_FLOW_ITEM_TYPE_VOID: > >- break; > >- default: > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM, > >- item, > >- "Invalid pattern"); > >- break; > >- } > >- } > >- return input_set; > >-} > >- > >-static int ice_flow_valid_inset(const struct rte_flow_item pattern[], > >- uint64_t inset, struct rte_flow_error *error) > >-{ > >- uint64_t fields; > >- > >- /* get valid field */ > >- fields =3D ice_get_flow_field(pattern, error); > >- if (!fields || fields & (~inset)) { > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, > >- pattern, > >- "Invalid input set"); > >- return -rte_errno; > >- } > >- > >- return 0; > >+ rte_free(pattern_match_item); > >+ return NULL; > > } > > > >-static int ice_flow_valid_action(struct rte_eth_dev *dev, > >- const struct rte_flow_action *actions, > >- struct rte_flow_error *error) > >+static struct ice_flow_engine * > >+ice_parse_engine(struct ice_adapter *ad, > >+ struct ice_parser_list *parser_list, > >+ const struct rte_flow_item pattern[], > >+ const struct rte_flow_action actions[], > >+ void **meta, > >+ struct rte_flow_error *error) > > { > >- const struct rte_flow_action_queue *act_q; > >- uint16_t queue; > >- const struct rte_flow_action *action; > >- for (action =3D actions; action->type !=3D > >- RTE_FLOW_ACTION_TYPE_END; action++) { > >- switch (action->type) { > >- case RTE_FLOW_ACTION_TYPE_QUEUE: > >- act_q =3D action->conf; > >- queue =3D act_q->index; > >- if (queue >=3D dev->data->nb_rx_queues) { > >- rte_flow_error_set(error, EINVAL, > >- > RTE_FLOW_ERROR_TYPE_ACTION, > >- actions, "Invalid queue ID for" > >- " switch filter."); > >- return -rte_errno; > >- } > >- break; > >- case RTE_FLOW_ACTION_TYPE_DROP: > >- case RTE_FLOW_ACTION_TYPE_VOID: > >- break; > >- default: > >- rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ACTION, > actions, > >- "Invalid action."); > >- return -rte_errno; > >- } > >+ struct ice_flow_engine *engine =3D NULL; > >+ struct ice_flow_parser *parser =3D NULL; > >+ void *temp; > >+ TAILQ_FOREACH_SAFE(parser, parser_list, node, temp) { > >+ if (parser->parse_pattern_action(ad, parser->array, > >+ parser->array_len, pattern, actions, > >+ meta, error) < 0) > >+ continue; > >+ engine =3D parser->engine; > >+ break; > > } > >- return 0; > >+ return engine; > > } > > > > static int > >-ice_flow_validate(struct rte_eth_dev *dev, > >- const struct rte_flow_attr *attr, > >- const struct rte_flow_item pattern[], > >- const struct rte_flow_action actions[], > >- struct rte_flow_error *error) > >+ice_flow_validate_filter(struct rte_eth_dev *dev, > >+ const struct rte_flow_attr *attr, > >+ const struct rte_flow_item pattern[], > >+ const struct rte_flow_action actions[], > >+ struct ice_flow_engine **engine, > >+ void **meta, > >+ struct rte_flow_error *error) > > { > >- uint64_t inset =3D 0; > > int ret =3D ICE_ERR_NOT_SUPPORTED; > >+ struct ice_adapter *ad =3D > >+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > >+ struct ice_pf *pf =3D ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > > > if (!pattern) { > > 
rte_flow_error_set(error, EINVAL, > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > >- NULL, "NULL pattern."); > >+ NULL, "NULL pattern."); > > return -rte_errno; > > } > > > > if (!actions) { > > rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ACTION_NUM, > >- NULL, "NULL action."); > >+ RTE_FLOW_ERROR_TYPE_ACTION_NUM, > >+ NULL, "NULL action."); > > return -rte_errno; > > } > >- > > if (!attr) { > > rte_flow_error_set(error, EINVAL, > >- RTE_FLOW_ERROR_TYPE_ATTR, > >- NULL, "NULL attribute."); > >+ RTE_FLOW_ERROR_TYPE_ATTR, > >+ NULL, "NULL attribute."); > > return -rte_errno; > > } > > > >- ret =3D ice_flow_valid_attr(attr, error); > >+ ret =3D ice_flow_valid_attr(ad, attr, error); > > if (ret) > > return ret; > > > >- inset =3D ice_flow_valid_pattern(pattern, error); > >- if (!inset) > >- return -rte_errno; > >- > >- ret =3D ice_flow_valid_inset(pattern, inset, error); > >- if (ret) > >- return ret; > >+ *engine =3D ice_parse_engine(ad, &pf->rss_parser_list, pattern, action= s, > >+ meta, error); > >+ if (*engine !=3D NULL) > >+ return 0; > >+ > >+ switch (ice_pipeline_stage) { > >+ case ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY: > >+ case ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR: > >+ *engine =3D ice_parse_engine(ad, &pf->dist_parser_list, pattern, > >+ actions, meta, error); > >+ break; > >+ case ICE_FLOW_CLASSIFY_STAGE_PERMISSION: > >+ *engine =3D ice_parse_engine(ad, &pf->perm_parser_list, pattern, > >+ actions, meta, error); > >+ break; > >+ default: > >+ return -EINVAL; > >+ } > > > >- ret =3D ice_flow_valid_action(dev, actions, error); > >- if (ret) > >- return ret; > >+ if (*engine =3D=3D NULL) > >+ return -EINVAL; > > > > return 0; > > } > > > >+static int > >+ice_flow_validate(struct rte_eth_dev *dev, > >+ const struct rte_flow_attr *attr, > >+ const struct rte_flow_item pattern[], > >+ const struct rte_flow_action actions[], > >+ struct rte_flow_error *error) > >+{ > >+ int ret =3D ICE_ERR_NOT_SUPPORTED; > >+ void *meta =3D NULL; > >+ struct ice_flow_engine *engine =3D NULL; > >+ > >+ ret =3D ice_flow_validate_filter(dev, attr, pattern, actions, > >+ &engine, &meta, error); > >+ return ret; > >+} > >+ > > static struct rte_flow * > > ice_flow_create(struct rte_eth_dev *dev, > >- const struct rte_flow_attr *attr, > >- const struct rte_flow_item pattern[], > >- const struct rte_flow_action actions[], > >- struct rte_flow_error *error) > >+ const struct rte_flow_attr *attr, > >+ const struct rte_flow_item pattern[], > >+ const struct rte_flow_action actions[], > >+ struct rte_flow_error *error) > > { > > struct ice_pf *pf =3D ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > struct rte_flow *flow =3D NULL; > >- int ret; > >+ int ret =3D 0; > >+ struct ice_adapter *ad =3D > >+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > >+ struct ice_flow_engine *engine =3D NULL; > >+ void *meta =3D NULL; > > > > flow =3D rte_zmalloc("ice_flow", sizeof(struct rte_flow), 0); > > if (!flow) { > >@@ -626,65 +488,105 @@ ice_flow_create(struct rte_eth_dev *dev, > > return flow; > > } > > > >- ret =3D ice_flow_validate(dev, attr, pattern, actions, error); > >+ ret =3D ice_flow_validate_filter(dev, attr, pattern, actions, > >+ &engine, &meta, error); > > if (ret < 0) > > goto free_flow; > > > >- ret =3D ice_create_switch_filter(pf, pattern, actions, flow, error); > >+ if (engine->create =3D=3D NULL) > >+ goto free_flow; > >+ > >+ ret =3D engine->create(ad, flow, meta, error); > > if (ret) > > goto free_flow; > > > >+ flow->engine =3D engine; > > TAILQ_INSERT_TAIL(&pf->flow_list, flow, node); > > return 
flow; > > > > free_flow: > >- rte_flow_error_set(error, -ret, > >- RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > >- "Failed to create flow."); > >+ PMD_DRV_LOG(ERR, "Failed to create flow"); >=20 > Why is this change? For framework has passed the "error" to each filter, rte_flow_error_set() w= ill be used within each filter (switch/fdir/rss). If used rte_flow_error_set() here, it will cover the error set value by eac= h filter, so PMD_DRV_LOG is used here. >=20 > > rte_free(flow); > > return NULL; > > } > > > > static int > > ice_flow_destroy(struct rte_eth_dev *dev, > >- struct rte_flow *flow, > >- struct rte_flow_error *error) > >+ struct rte_flow *flow, > >+ struct rte_flow_error *error) > > { > > struct ice_pf *pf =3D ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > >+ struct ice_adapter *ad =3D > >+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > > int ret =3D 0; > > > >- ret =3D ice_destroy_switch_filter(pf, flow, error); > >- > >+ if (!flow || !flow->engine->destroy) { > >+ rte_flow_error_set(error, EINVAL, > >+ RTE_FLOW_ERROR_TYPE_HANDLE, > >+ NULL, "NULL flow or NULL destroy"); > >+ return -rte_errno; > >+ } > >+ ret =3D flow->engine->destroy(ad, flow, error); > > if (!ret) { > > TAILQ_REMOVE(&pf->flow_list, flow, node); > > rte_free(flow); > >- } else { > >- rte_flow_error_set(error, -ret, > >- RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > >- "Failed to destroy flow."); > >- } > >+ } else > >+ PMD_DRV_LOG(ERR, "Failed to destroy flow"); >=20 > Ditto. Ditto. >=20 > > > > return ret; > > } > > > > static int > > ice_flow_flush(struct rte_eth_dev *dev, > >- struct rte_flow_error *error) > >+ struct rte_flow_error *error) > > { > > struct ice_pf *pf =3D ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > >- struct rte_flow *p_flow; > >+ struct rte_flow *p_flow =3D NULL; > > void *temp; > > int ret =3D 0; > > > > TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) { > > ret =3D ice_flow_destroy(dev, p_flow, error); > > if (ret) { > >- rte_flow_error_set(error, -ret, > >- RTE_FLOW_ERROR_TYPE_HANDLE, > NULL, > >- "Failed to flush SW flows."); > >- return -rte_errno; > >+ PMD_DRV_LOG(ERR, "Failed to flush flows"); >=20 > Ditto. Ditto. >=20 >=20 > Thanks, > Xiaolong Thanks Ying
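
To make the error-reporting split concrete, the pattern used in ice_flow_create() in this patch is roughly the following (schematic fragment only):

	ret = engine->create(ad, flow, meta, error);	/* engine fills *error on failure */
	if (ret)
		goto free_flow;

	flow->engine = engine;
	TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
	return flow;

free_flow:
	/* Only log here; calling rte_flow_error_set() at this level would
	 * overwrite the more precise error already set by the filter engine.
	 */
	PMD_DRV_LOG(ERR, "Failed to create flow");
	rte_free(flow);
	return NULL;
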