From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Cao, Yahui"
To: "Su, Simei", "Ye, Xiaolong", "Zhang, Qi Z"
CC: "dev@dpdk.org", "Wu, Jingjing"
Subject: Re: [dpdk-dev] [PATCH 1/5] net/iavf: add support for FDIR basic rule
Date: Tue, 31 Mar 2020 05:20:28 +0000
References: <1584510121-377747-1-git-send-email-simei.su@intel.com> <1584510121-377747-2-git-send-email-simei.su@intel.com>
In-Reply-To: <1584510121-377747-2-git-send-email-simei.su@intel.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

> -----Original Message-----
> From: Su, Simei
> Sent: Wednesday, March 18, 2020 1:42 PM
> To: Ye, Xiaolong; Zhang, Qi Z
> Cc: dev@dpdk.org; Cao, Yahui; Wu, Jingjing; Su, Simei
> Subject: [PATCH 1/5] net/iavf: add support for FDIR basic rule
>
> This patch adds FDIR create/destroy/validate functions in AVF.
> Common pattern and queue/qgroup/passthru/drop actions are supported.
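Among the actions listed above, the queue-group action spreads matched flows over a region of contiguous RX queues; later in the patch, the qregion parser rejects regions that are empty, non-contiguous, not power-of-two sized, or out of range. A standalone model of those checks (illustrative helper, not the driver code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the queue-region sanity checks performed by the FDIR
 * action parser: the region must hold more than one queue, the
 * queues must be contiguous, the size must be a power of two, and
 * the last queue must exist on the device. */
static bool qregion_is_valid(const uint16_t *queues, uint32_t num,
			     uint16_t nb_rx_queues)
{
	uint32_t i;

	if (num <= 1)			/* size 0 or 1 is rejected */
		return false;
	if (num & (num - 1))		/* must be a power of two */
		return false;
	for (i = 0; i + 1 < num; i++)	/* must be contiguous */
		if (queues[i + 1] != queues[i] + 1)
			return false;
	return queues[num - 1] < nb_rx_queues;	/* must fit the VSI */
}
```

The driver performs the same sequence of tests with `rte_flow_error_set()` reporting instead of a boolean result.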
>=20 > Signed-off-by: Simei Su > --- > drivers/net/iavf/Makefile | 1 + > drivers/net/iavf/iavf.h | 16 + > drivers/net/iavf/iavf_fdir.c | 762 > ++++++++++++++++++++++++++++++++++++++++++ > drivers/net/iavf/iavf_vchnl.c | 128 ++++++- > drivers/net/iavf/meson.build | 1 + > 5 files changed, 907 insertions(+), 1 deletion(-) create mode 100644 > drivers/net/iavf/iavf_fdir.c >=20 > diff --git a/drivers/net/iavf/Makefile b/drivers/net/iavf/Makefile index > 1bf0f26..193bc55 100644 > --- a/drivers/net/iavf/Makefile > +++ b/drivers/net/iavf/Makefile > @@ -24,6 +24,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) +=3D iavf_ethdev.c > SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) +=3D iavf_vchnl.c > SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) +=3D iavf_rxtx.c > SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) +=3D iavf_generic_flow.c > +SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) +=3D iavf_fdir.c > ifeq ($(CONFIG_RTE_ARCH_X86), y) > SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) +=3D iavf_rxtx_vec_sse.c endif diff = --git > a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h index 48b9509..62a3eb= 8 > 100644 > --- a/drivers/net/iavf/iavf.h > +++ b/drivers/net/iavf/iavf.h > @@ -99,6 +99,16 @@ struct iavf_vsi { > struct iavf_flow_parser_node; > TAILQ_HEAD(iavf_parser_list, iavf_flow_parser_node); >=20 > +struct iavf_fdir_conf { > + struct virtchnl_fdir_fltr input; > + uint64_t input_set; > + uint32_t flow_id; > +}; > + > +struct iavf_fdir_info { > + struct iavf_fdir_conf conf; > +}; > + > /* TODO: is that correct to assume the max number to be 16 ?*/ > #define IAVF_MAX_MSIX_VECTORS 16 >=20 > @@ -138,6 +148,8 @@ struct iavf_info { > struct iavf_flow_list flow_list; > struct iavf_parser_list rss_parser_list; > struct iavf_parser_list dist_parser_list; > + > + struct iavf_fdir_info fdir; /* flow director info */ > }; >=20 > #define IAVF_MAX_PKT_TYPE 1024 > @@ -260,4 +272,8 @@ int iavf_config_promisc(struct iavf_adapter *adapter, > bool enable_unicast, int iavf_add_del_eth_addr(struct iavf_adapter *adap= ter, > struct rte_ether_addr *addr, bool 
add); int > iavf_add_del_vlan(struct iavf_adapter *adapter, uint16_t vlanid, bool add= ); > +int iavf_fdir_add(struct iavf_adapter *adapter, struct iavf_fdir_conf > +*filter); int iavf_fdir_del(struct iavf_adapter *adapter, struct > +iavf_fdir_conf *filter); int iavf_fdir_check(struct iavf_adapter *adapte= r, > + struct iavf_fdir_conf *filter); > #endif /* _IAVF_ETHDEV_H_ */ > diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c = new file > mode 100644 index 0000000..dd321ba > --- /dev/null > +++ b/drivers/net/iavf/iavf_fdir.c > @@ -0,0 +1,762 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2019 Intel Corporation > + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include > +#include > +#include > +#include > + > +#include "iavf.h" > +#include "iavf_generic_flow.h" > +#include "virtchnl.h" > + > +#define IAVF_FDIR_MAX_QREGION_SIZE 128 > + > +#define IAVF_FDIR_IPV6_TC_OFFSET 20 > +#define IAVF_IPV6_TC_MASK (0xFF << IAVF_FDIR_IPV6_TC_OFFSET) > + > +#define IAVF_FDIR_INSET_ETH (\ > + IAVF_INSET_ETHERTYPE) > + > +#define IAVF_FDIR_INSET_ETH_IPV4 (\ > + IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \ > + IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \ > + IAVF_INSET_IPV4_TTL) > + > +#define IAVF_FDIR_INSET_ETH_IPV4_UDP (\ > + IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \ > + IAVF_INSET_IPV4_TOS | IAVF_INSET_IPV4_TTL | \ > + IAVF_INSET_UDP_SRC_PORT | IAVF_INSET_UDP_DST_PORT) > + > +#define IAVF_FDIR_INSET_ETH_IPV4_TCP (\ > + IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \ > + IAVF_INSET_IPV4_TOS | IAVF_INSET_IPV4_TTL | \ > + IAVF_INSET_TCP_SRC_PORT | IAVF_INSET_TCP_DST_PORT) > + > +#define IAVF_FDIR_INSET_ETH_IPV4_SCTP (\ > + IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \ > + IAVF_INSET_IPV4_TOS | IAVF_INSET_IPV4_TTL | \ > + IAVF_INSET_SCTP_SRC_PORT | IAVF_INSET_SCTP_DST_PORT) > + > +#define IAVF_FDIR_INSET_ETH_IPV6 (\ > + IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \ > + 
IAVF_INSET_IPV6_NEXT_HDR | IAVF_INSET_IPV6_TC | \ > + IAVF_INSET_IPV6_HOP_LIMIT) > + > +#define IAVF_FDIR_INSET_ETH_IPV6_UDP (\ > + IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \ > + IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \ > + IAVF_INSET_UDP_SRC_PORT | IAVF_INSET_UDP_DST_PORT) > + > +#define IAVF_FDIR_INSET_ETH_IPV6_TCP (\ > + IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \ > + IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \ > + IAVF_INSET_TCP_SRC_PORT | IAVF_INSET_TCP_DST_PORT) > + > +#define IAVF_FDIR_INSET_ETH_IPV6_SCTP (\ > + IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \ > + IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \ > + IAVF_INSET_SCTP_SRC_PORT | IAVF_INSET_SCTP_DST_PORT) > + > +static struct iavf_pattern_match_item iavf_fdir_pattern[] =3D { > + {iavf_pattern_ethertype, IAVF_FDIR_INSET_ETH, > IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv4, IAVF_FDIR_INSET_ETH_IPV4, > IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv4_udp, > IAVF_FDIR_INSET_ETH_IPV4_UDP, IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv4_tcp, > IAVF_FDIR_INSET_ETH_IPV4_TCP, IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv4_sctp, > IAVF_FDIR_INSET_ETH_IPV4_SCTP, IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv6, IAVF_FDIR_INSET_ETH_IPV6, > IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv6_udp, > IAVF_FDIR_INSET_ETH_IPV6_UDP, IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv6_tcp, > IAVF_FDIR_INSET_ETH_IPV6_TCP, IAVF_INSET_NONE}, > + {iavf_pattern_eth_ipv6_sctp, > IAVF_FDIR_INSET_ETH_IPV6_SCTP, IAVF_INSET_NONE}, > +}; > + > +static struct iavf_flow_parser iavf_fdir_parser; > + > +static int > +iavf_fdir_init(struct iavf_adapter *ad) { > + struct iavf_info *vf =3D IAVF_DEV_PRIVATE_TO_VF(ad); > + struct iavf_flow_parser *parser; > + > + if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_FDIR_PF) > + parser =3D &iavf_fdir_parser; > + else > + return -ENOTSUP; > + > + return iavf_register_parser(parser, ad); } > + > +static void > +iavf_fdir_uninit(struct iavf_adapter *ad) { > + struct iavf_flow_parser *parser; > + 
> + parser =3D &iavf_fdir_parser; > + > + iavf_unregister_parser(parser, ad); > +} > + > +static int > +iavf_fdir_create(struct iavf_adapter *ad, > + struct rte_flow *flow, > + void *meta, > + struct rte_flow_error *error) > +{ > + struct iavf_fdir_conf *filter =3D meta; > + struct iavf_fdir_conf *rule; > + int ret; > + > + rule =3D rte_zmalloc("fdir_entry", sizeof(*rule), 0); > + if (!rule) { > + rte_flow_error_set(error, ENOMEM, > + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > + "Failed to allocate memory"); > + return -rte_errno; > + } > + > + ret =3D iavf_fdir_add(ad, filter); > + if (ret) { > + rte_flow_error_set(error, -ret, > + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > + "Add filter rule failed."); > + goto free_entry; > + } > + > + rte_memcpy(rule, filter, sizeof(*rule)); > + flow->rule =3D rule; > + > + return 0; > + > +free_entry: > + rte_free(rule); > + return -rte_errno; > +} > + > +static int > +iavf_fdir_destroy(struct iavf_adapter *ad, > + struct rte_flow *flow, > + struct rte_flow_error *error) > +{ > + struct iavf_fdir_conf *filter; > + int ret; > + > + filter =3D (struct iavf_fdir_conf *)flow->rule; > + > + ret =3D iavf_fdir_del(ad, filter); > + if (ret) { > + rte_flow_error_set(error, -ret, > + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > + "Del filter rule failed."); > + return -rte_errno; > + } > + > + flow->rule =3D NULL; > + rte_free(filter); > + > + return 0; > +} > + > +static int > +iavf_fdir_validation(struct iavf_adapter *ad, > + __rte_unused struct rte_flow *flow, > + void *meta, > + struct rte_flow_error *error) > +{ > + struct iavf_fdir_conf *filter =3D meta; > + int ret; > + > + ret =3D iavf_fdir_check(ad, filter); > + if (ret) { > + rte_flow_error_set(error, -ret, > + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > + "Validate filter rule failed."); > + return -rte_errno; > + } > + > + return 0; > +}; > + > +static struct iavf_flow_engine iavf_fdir_engine =3D { > + .init =3D iavf_fdir_init, > + .uninit =3D iavf_fdir_uninit, > + .create =3D iavf_fdir_create, > + 
.destroy =3D iavf_fdir_destroy, > + .validation =3D iavf_fdir_validation, > + .type =3D IAVF_FLOW_ENGINE_FDIR, > +}; > + > +static int > +iavf_fdir_parse_action_qregion(struct iavf_adapter *ad, > + struct rte_flow_error *error, > + const struct rte_flow_action *act, > + struct virtchnl_filter_action *filter_action) { > + const struct rte_flow_action_rss *rss =3D act->conf; > + uint32_t i; > + > + if (act->type !=3D RTE_FLOW_ACTION_TYPE_RSS) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, act, > + "Invalid action."); > + return -rte_errno; > + } > + > + if (rss->queue_num <=3D 1) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, act, > + "Queue region size can't be 0 or 1."); > + return -rte_errno; > + } > + > + /* check if queue index for queue region is continuous */ > + for (i =3D 0; i < rss->queue_num - 1; i++) { > + if (rss->queue[i + 1] !=3D rss->queue[i] + 1) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, act, > + "Discontinuous queue region"); > + return -rte_errno; > + } > + } > + > + if (rss->queue[rss->queue_num - 1] >=3D ad->eth_dev->data- > >nb_rx_queues) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, act, > + "Invalid queue region indexes."); > + return -rte_errno; > + } > + > + if (!(rte_is_power_of_2(rss->queue_num) && > + (rss->queue_num <=3D IAVF_FDIR_MAX_QREGION_SIZE))) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, act, > + "The region size should be any of the following > values:" > + "1, 2, 4, 8, 16, 32, 64, 128 as long as the total > number " > + "of queues do not exceed the VSI allocation."); > + return -rte_errno; > + } > + > + filter_action->q_index =3D rss->queue[0]; > + filter_action->q_region =3D rte_fls_u32(rss->queue_num) - 1; > + > + return 0; > +} > + > +static int > +iavf_fdir_parse_action(struct iavf_adapter *ad, > + const struct rte_flow_action actions[], > + struct rte_flow_error *error, > + struct 
iavf_fdir_conf *filter)
> +{
> +	const struct rte_flow_action_queue *act_q;
> +	uint32_t dest_num = 0;
> +	int ret;
> +
> +	int number = 0;
> +	struct virtchnl_filter_action *filter_action;
> +
> +	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> +		switch (actions->type) {
> +		case RTE_FLOW_ACTION_TYPE_VOID:
> +			break;
> +
> +		case RTE_FLOW_ACTION_TYPE_PASSTHRU:
> +			dest_num++;
> +
> +			filter_action = &filter->input.rule_cfg.
> +					action_set.actions[number];
> +
> +			filter_action->type = VIRTCHNL_FDIR_ACT_PASSTHRU;
> +
> +			filter->input.rule_cfg.action_set.count = ++number;
> +			break;
> +
> +		case RTE_FLOW_ACTION_TYPE_DROP:
> +			dest_num++;
> +
> +			filter_action = &filter->input.rule_cfg.
> +					action_set.actions[number];
> +
> +			filter_action->type = VIRTCHNL_FDIR_ACT_DROP;
> +
> +			filter->input.rule_cfg.action_set.count = ++number;

[Cao, Yahui]
It seems there is no count/number upper-bound check, so there may be an out-of-bound index access.
This also applies to all the count/number statements below.

> +			break;
> +
> +		case RTE_FLOW_ACTION_TYPE_QUEUE:
> +			dest_num++;
> +
> +			act_q = actions->conf;
> +			filter_action = &filter->input.rule_cfg.
> +					action_set.actions[number];
> +
> +			filter_action->type = VIRTCHNL_FDIR_ACT_QUEUE;
> +			filter_action->q_index = act_q->index;
> +
> +			if (filter_action->q_index >=
> +				ad->eth_dev->data->nb_rx_queues) {
> +				rte_flow_error_set(error, EINVAL,
> +					RTE_FLOW_ERROR_TYPE_ACTION,
> +					actions, "Invalid queue for FDIR.");
> +				return -rte_errno;
> +			}
> +
> +			filter->input.rule_cfg.action_set.count = ++number;
> +			break;
> +
> +		case RTE_FLOW_ACTION_TYPE_RSS:
> +			dest_num++;
> +
> +			filter_action = &filter->input.rule_cfg.
> + action_set.actions[number]; > + > + filter_action->type =3D VIRTCHNL_FDIR_ACT_Q_REGION; > + > + ret =3D iavf_fdir_parse_action_qregion(ad, > + error, actions, filter_action); > + if (ret) > + return ret; > + > + filter->input.rule_cfg.action_set.count =3D ++number; > + break; > + > + default: > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, > actions, > + "Invalid action."); > + return -rte_errno; > + } > + } > + > + if (dest_num =3D=3D 0 || dest_num >=3D 2) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, actions, > + "Unsupported action combination"); > + return -rte_errno; > + } > + > + return 0; > +} > + > +static int > +iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, > + const struct rte_flow_item pattern[], > + struct rte_flow_error *error, > + struct iavf_fdir_conf *filter) > +{ > + const struct rte_flow_item *item =3D pattern; > + enum rte_flow_item_type item_type; > + enum rte_flow_item_type l3 =3D RTE_FLOW_ITEM_TYPE_END; > + const struct rte_flow_item_eth *eth_spec, *eth_mask; > + const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask; > + const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask; > + const struct rte_flow_item_udp *udp_spec, *udp_mask; > + const struct rte_flow_item_tcp *tcp_spec, *tcp_mask; > + const struct rte_flow_item_sctp *sctp_spec, *sctp_mask; > + uint64_t input_set =3D IAVF_INSET_NONE; > + > + enum rte_flow_item_type next_type; > + uint16_t ether_type; > + > + int layer =3D 0; > + struct virtchnl_proto_hdr *hdr; > + > + uint8_t ipv6_addr_mask[16] =3D { > + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, > + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF > + }; > + > + for (item =3D pattern; item->type !=3D RTE_FLOW_ITEM_TYPE_END; item++) > { > + if (item->last) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, item, > + "Not support range"); > + } > + > + item_type =3D item->type; > + > + switch (item_type) { > + case RTE_FLOW_ITEM_TYPE_ETH: > + 
eth_spec = item->spec;
> +			eth_mask = item->mask;
> +			next_type = (item + 1)->type;
> +
> +			hdr = &filter->input.rule_cfg.proto_stack.
> +				proto_hdr[layer];
> +
> +			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ETH);
> +
> +			if (next_type == RTE_FLOW_ITEM_TYPE_END &&
> +				(!eth_spec || !eth_mask)) {
> +				rte_flow_error_set(error, EINVAL,
> +					RTE_FLOW_ERROR_TYPE_ITEM,
> +					item, "NULL eth spec/mask.");
> +				return -rte_errno;
> +			}
> +
> +			if (eth_spec && eth_mask) {
> +				if (!rte_is_zero_ether_addr(&eth_mask->src) ||
> +				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
> +					rte_flow_error_set(error, EINVAL,
> +						RTE_FLOW_ERROR_TYPE_ITEM, item,
> +						"Invalid MAC_addr mask.");
> +					return -rte_errno;
> +				}
> +			}
> +
> +			if (eth_spec && eth_mask && eth_mask->type) {
> +				if (eth_mask->type != RTE_BE16(0xffff)) {
> +					rte_flow_error_set(error, EINVAL,
> +						RTE_FLOW_ERROR_TYPE_ITEM,
> +						item, "Invalid type mask.");
> +					return -rte_errno;
> +				}
> +
> +				ether_type = rte_be_to_cpu_16(eth_spec->type);
> +				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
> +					ether_type == RTE_ETHER_TYPE_IPV6) {
> +					rte_flow_error_set(error, EINVAL,
> +						RTE_FLOW_ERROR_TYPE_ITEM,
> +						item,
> +						"Unsupported ether_type.");
> +					return -rte_errno;
> +				}
> +
> +				input_set |= IAVF_INSET_ETHERTYPE;
> +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> +					ETH, ETHERTYPE);
> +
> +				rte_memcpy(hdr->buffer,
> +					eth_spec, sizeof(*eth_spec));
> +			}
> +
> +			filter->input.rule_cfg.proto_stack.count = ++layer;

[Cao, Yahui]
It seems there is no count/layer upper-bound check, so there may be an out-of-bound index access.
This also applies to all the count/layer statements below.

> +			break;
> +
> +		case RTE_FLOW_ITEM_TYPE_IPV4:
> +			l3 = RTE_FLOW_ITEM_TYPE_IPV4;
> +			ipv4_spec = item->spec;
> +			ipv4_mask = item->mask;
> +
> +			hdr = &filter->input.rule_cfg.proto_stack.
> + proto_hdr[layer]; > + > + VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4); > + > + if (ipv4_spec && ipv4_mask) { > + if (ipv4_mask->hdr.version_ihl || > + ipv4_mask->hdr.total_length || > + ipv4_mask->hdr.packet_id || > + ipv4_mask->hdr.fragment_offset || > + ipv4_mask->hdr.hdr_checksum) { > + rte_flow_error_set(error, EINVAL, > + > RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Invalid IPv4 mask."); > + return -rte_errno; > + } > + > + if (ipv4_mask->hdr.type_of_service =3D=3D > + UINT8_MAX) { > + input_set |=3D IAVF_INSET_IPV4_TOS; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV4, DSCP); > + } > + if (ipv4_mask->hdr.next_proto_id =3D=3D > UINT8_MAX) { > + input_set |=3D IAVF_INSET_IPV4_PROTO; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV4, PROT); > + } > + if (ipv4_mask->hdr.time_to_live =3D=3D UINT8_MAX) > { > + input_set |=3D IAVF_INSET_IPV4_TTL; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV4, TTL); > + } > + if (ipv4_mask->hdr.src_addr =3D=3D UINT32_MAX) { > + input_set |=3D IAVF_INSET_IPV4_SRC; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV4, SRC); > + } > + if (ipv4_mask->hdr.dst_addr =3D=3D UINT32_MAX) { > + input_set |=3D IAVF_INSET_IPV4_DST; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV4, DST); > + } > + > + rte_memcpy(hdr->buffer, > + &ipv4_spec->hdr, > + sizeof(ipv4_spec->hdr)); > + } > + > + filter->input.rule_cfg.proto_stack.count =3D ++layer; > + break; > + > + case RTE_FLOW_ITEM_TYPE_IPV6: > + l3 =3D RTE_FLOW_ITEM_TYPE_IPV6; > + ipv6_spec =3D item->spec; > + ipv6_mask =3D item->mask; > + > + hdr =3D &filter->input.rule_cfg.proto_stack. 
> + proto_hdr[layer]; > + > + VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6); > + > + if (ipv6_spec && ipv6_mask) { > + if (ipv6_mask->hdr.payload_len) { > + rte_flow_error_set(error, EINVAL, > + > RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Invalid IPv6 mask"); > + return -rte_errno; > + } > + > + if ((ipv6_mask->hdr.vtc_flow & > + > rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) > + =3D=3D rte_cpu_to_be_32( > + IAVF_IPV6_TC_MASK)) > { > + input_set |=3D IAVF_INSET_IPV6_TC; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV6, TC); > + } > + if (ipv6_mask->hdr.proto =3D=3D UINT8_MAX) { > + input_set |=3D > IAVF_INSET_IPV6_NEXT_HDR; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV6, PROT); > + } > + if (ipv6_mask->hdr.hop_limits =3D=3D UINT8_MAX) { > + input_set |=3D > IAVF_INSET_IPV6_HOP_LIMIT; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV6, HOP_LIMIT); > + } > + if (!memcmp(ipv6_mask->hdr.src_addr, > + ipv6_addr_mask, > + RTE_DIM(ipv6_mask->hdr.src_addr))) { > + input_set |=3D IAVF_INSET_IPV6_SRC; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV6, SRC); > + } > + if (!memcmp(ipv6_mask->hdr.dst_addr, > + ipv6_addr_mask, > + RTE_DIM(ipv6_mask->hdr.dst_addr))) { > + input_set |=3D IAVF_INSET_IPV6_DST; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, IPV6, DST); > + } > + > + rte_memcpy(hdr->buffer, > + &ipv6_spec->hdr, > + sizeof(ipv6_spec->hdr)); > + } > + > + filter->input.rule_cfg.proto_stack.count =3D ++layer; > + break; > + > + case RTE_FLOW_ITEM_TYPE_UDP: > + udp_spec =3D item->spec; > + udp_mask =3D item->mask; > + > + hdr =3D &filter->input.rule_cfg.proto_stack. 
> + proto_hdr[layer]; > + > + VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, UDP); > + > + if (udp_spec && udp_mask) { > + if (udp_mask->hdr.dgram_len || > + udp_mask->hdr.dgram_cksum) { > + rte_flow_error_set(error, EINVAL, > + > RTE_FLOW_ERROR_TYPE_ITEM, item, > + "Invalid UDP mask"); > + return -rte_errno; > + } > + > + if (udp_mask->hdr.src_port =3D=3D UINT16_MAX) { > + input_set |=3D > IAVF_INSET_UDP_SRC_PORT; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, UDP, SRC_PORT); > + } > + if (udp_mask->hdr.dst_port =3D=3D UINT16_MAX) { > + input_set |=3D > IAVF_INSET_UDP_DST_PORT; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, UDP, DST_PORT); > + } > + > + if (l3 =3D=3D RTE_FLOW_ITEM_TYPE_IPV4) > + rte_memcpy(hdr->buffer, > + &udp_spec->hdr, > + sizeof(udp_spec->hdr)); > + else if (l3 =3D=3D RTE_FLOW_ITEM_TYPE_IPV6) > + rte_memcpy(hdr->buffer, > + &udp_spec->hdr, > + sizeof(udp_spec->hdr)); > + } > + > + filter->input.rule_cfg.proto_stack.count =3D ++layer; > + break; > + > + case RTE_FLOW_ITEM_TYPE_TCP: > + tcp_spec =3D item->spec; > + tcp_mask =3D item->mask; > + > + hdr =3D &filter->input.rule_cfg.proto_stack. 
> + proto_hdr[layer]; > + > + VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, TCP); > + > + if (tcp_spec && tcp_mask) { > + if (tcp_mask->hdr.sent_seq || > + tcp_mask->hdr.recv_ack || > + tcp_mask->hdr.data_off || > + tcp_mask->hdr.tcp_flags || > + tcp_mask->hdr.rx_win || > + tcp_mask->hdr.cksum || > + tcp_mask->hdr.tcp_urp) { > + rte_flow_error_set(error, EINVAL, > + > RTE_FLOW_ERROR_TYPE_ITEM, item, > + "Invalid TCP mask"); > + return -rte_errno; > + } > + > + if (tcp_mask->hdr.src_port =3D=3D UINT16_MAX) { > + input_set |=3D > IAVF_INSET_TCP_SRC_PORT; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, TCP, SRC_PORT); > + } > + if (tcp_mask->hdr.dst_port =3D=3D UINT16_MAX) { > + input_set |=3D > IAVF_INSET_TCP_DST_PORT; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, TCP, DST_PORT); > + } > + > + if (l3 =3D=3D RTE_FLOW_ITEM_TYPE_IPV4) > + rte_memcpy(hdr->buffer, > + &tcp_spec->hdr, > + sizeof(tcp_spec->hdr)); > + else if (l3 =3D=3D RTE_FLOW_ITEM_TYPE_IPV6) > + rte_memcpy(hdr->buffer, > + &tcp_spec->hdr, > + sizeof(tcp_spec->hdr)); > + } > + > + filter->input.rule_cfg.proto_stack.count =3D ++layer; > + break; > + > + case RTE_FLOW_ITEM_TYPE_SCTP: > + sctp_spec =3D item->spec; > + sctp_mask =3D item->mask; > + > + hdr =3D &filter->input.rule_cfg.proto_stack. 
> + proto_hdr[layer]; > + > + VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, SCTP); > + > + if (sctp_spec && sctp_mask) { > + if (sctp_mask->hdr.cksum) { > + rte_flow_error_set(error, EINVAL, > + > RTE_FLOW_ERROR_TYPE_ITEM, item, > + "Invalid UDP mask"); > + return -rte_errno; > + } > + > + if (sctp_mask->hdr.src_port =3D=3D UINT16_MAX) { > + input_set |=3D > IAVF_INSET_SCTP_SRC_PORT; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, SCTP, SRC_PORT); > + } > + if (sctp_mask->hdr.dst_port =3D=3D UINT16_MAX) { > + input_set |=3D > IAVF_INSET_SCTP_DST_PORT; > + > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT( > + hdr, SCTP, DST_PORT); > + } > + > + if (l3 =3D=3D RTE_FLOW_ITEM_TYPE_IPV4) > + rte_memcpy(hdr->buffer, > + &sctp_spec->hdr, > + sizeof(sctp_spec->hdr)); > + else if (l3 =3D=3D RTE_FLOW_ITEM_TYPE_IPV6) > + rte_memcpy(hdr->buffer, > + &sctp_spec->hdr, > + sizeof(sctp_spec->hdr)); > + } > + > + filter->input.rule_cfg.proto_stack.count =3D ++layer; > + break; > + > + case RTE_FLOW_ITEM_TYPE_VOID: > + break; > + > + default: > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, item, > + "Invalid pattern item."); > + return -rte_errno; > + } > + } > + > + filter->input_set =3D input_set; > + > + return 0; > +} > + > +static int > +iavf_fdir_parse(struct iavf_adapter *ad, > + struct iavf_pattern_match_item *array, > + uint32_t array_len, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + void **meta, > + struct rte_flow_error *error) > +{ > + struct iavf_info *vf =3D IAVF_DEV_PRIVATE_TO_VF(ad); > + struct iavf_fdir_conf *filter =3D &vf->fdir.conf; > + struct iavf_pattern_match_item *item =3D NULL; > + uint64_t input_set; > + int ret; > + > + memset(filter, 0, sizeof(*filter)); > + > + item =3D iavf_search_pattern_match_item(pattern, array, array_len, > error); > + if (!item) > + return -rte_errno; > + > + ret =3D iavf_fdir_parse_pattern(ad, pattern, error, filter); > + if (ret) > + goto error; > + > + input_set =3D filter->input_set; > + if 
(!input_set || input_set & ~item->input_set_mask) {
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM_SPEC, pattern,
> +			"Invalid input set");
> +		ret = -rte_errno;
> +		goto error;
> +	}
> +
> +	ret = iavf_fdir_parse_action(ad, actions, error, filter);
> +	if (ret)
> +		goto error;
> +
> +	if (meta)
> +		*meta = filter;
> +
> +error:
> +	rte_free(item);
> +	return ret;
> +}
> +
> +static struct iavf_flow_parser iavf_fdir_parser = {
> +	.engine = &iavf_fdir_engine,
> +	.array = iavf_fdir_pattern,
> +	.array_len = RTE_DIM(iavf_fdir_pattern),
> +	.parse_pattern_action = iavf_fdir_parse,
> +	.stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
> +};
> +
> +RTE_INIT(iavf_fdir_engine_register)
> +{
> +	iavf_register_flow_engine(&iavf_fdir_engine);
> +}
> diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
> index 11c70f5..77bfd1b 100644
> --- a/drivers/net/iavf/iavf_vchnl.c
> +++ b/drivers/net/iavf/iavf_vchnl.c
> @@ -342,7 +342,8 @@
>
>  	caps = IAVF_BASIC_OFFLOAD_CAPS | VIRTCHNL_VF_CAP_ADV_LINK_SPEED |
>  		VIRTCHNL_VF_OFFLOAD_QUERY_DDP |
> -		VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
> +		VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
> +		VIRTCHNL_VF_OFFLOAD_FDIR_PF;
>
>  	args.in_args = (uint8_t *)&caps;
>  	args.in_args_size = sizeof(caps);
> @@ -867,3 +868,128 @@
>
>  	return err;
>  }
> +
> +int
> +iavf_fdir_add(struct iavf_adapter *adapter,
> +	struct iavf_fdir_conf *filter)
> +{
> +	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
> +	struct virtchnl_fdir_status *fdir_status;
> +
> +	struct iavf_cmd_info args;
> +	int err;
> +
> +	filter->input.vsi_id = vf->vsi_res->vsi_id;
> +	filter->input.validate_only = 0;
> +
> +	args.ops = VIRTCHNL_OP_ADD_FDIR_FILTER;
> +	args.in_args = (uint8_t *)(&filter->input);
> +	args.in_args_size = sizeof(filter->input);
> +	args.out_buffer = vf->aq_resp;
> +	args.out_size = IAVF_AQ_BUF_SZ;
> +
> +	err = iavf_execute_vf_cmd(adapter, &args);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "fail
to execute command > OP_ADD_FDIR_FILTER"); > + return err; > + } > + > + fdir_status =3D (struct virtchnl_fdir_status *)args.out_buffer; > + filter->flow_id =3D fdir_status->flow_id; > + > + if (fdir_status->status =3D=3D VIRTCHNL_FDIR_SUCCESS) > + PMD_DRV_LOG(INFO, > + "add rule request is successfully done by PF"); > + else if (fdir_status->status =3D=3D > VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE) > + PMD_DRV_LOG(INFO, > + "add rule request is failed due to no hw resource"); > + else if (fdir_status->status =3D=3D > VIRTCHNL_FDIR_FAILURE_RULE_CONFLICT) > + PMD_DRV_LOG(INFO, > + "add rule request is failed due to the rule is already > existed"); > + else if (fdir_status->status =3D=3D VIRTCHNL_FDIR_FAILURE_RULE_INVALID) > + PMD_DRV_LOG(INFO, > + "add rule request is failed due to the hw doesn't > support"); > + else if (fdir_status->status =3D=3D > VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT) > + PMD_DRV_LOG(INFO, > + "add rule request is failed due to time out for > programming"); > + > + return 0; > +}; > + > +int > +iavf_fdir_del(struct iavf_adapter *adapter, > + struct iavf_fdir_conf *filter) > +{ > + struct iavf_info *vf =3D IAVF_DEV_PRIVATE_TO_VF(adapter); > + struct virtchnl_fdir_status *fdir_status; > + > + struct iavf_cmd_info args; > + int err; > + > + filter->input.vsi_id =3D vf->vsi_res->vsi_id; > + filter->input.flow_id =3D filter->flow_id; > + > + args.ops =3D VIRTCHNL_OP_DEL_FDIR_FILTER; > + args.in_args =3D (uint8_t *)(&filter->input); > + args.in_args_size =3D sizeof(filter->input); > + args.out_buffer =3D vf->aq_resp; > + args.out_size =3D IAVF_AQ_BUF_SZ; > + > + err =3D iavf_execute_vf_cmd(adapter, &args); > + if (err) { > + PMD_DRV_LOG(ERR, "fail to execute command > OP_DEL_FDIR_FILTER"); > + return err; > + } > + > + fdir_status =3D (struct virtchnl_fdir_status *)args.out_buffer; > + > + if (fdir_status->status =3D=3D VIRTCHNL_FDIR_SUCCESS) > + PMD_DRV_LOG(INFO, > + "delete rule request is successfully done by PF"); > + else if (fdir_status->status =3D=3D 
> VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST) > + PMD_DRV_LOG(INFO, > + "delete rule request is failed due to this rule doesn't > exist"); > + else if (fdir_status->status =3D=3D > VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT) > + PMD_DRV_LOG(INFO, > + "delete rule request is failed due to time out for > programming"); > + > + return 0; > +}; > + > +int > +iavf_fdir_check(struct iavf_adapter *adapter, > + struct iavf_fdir_conf *filter) > +{ > + struct iavf_info *vf =3D IAVF_DEV_PRIVATE_TO_VF(adapter); > + struct virtchnl_fdir_status *fdir_status; > + > + struct iavf_cmd_info args; > + int err; > + > + filter->input.vsi_id =3D vf->vsi_res->vsi_id; > + filter->input.validate_only =3D 1; > + > + args.ops =3D VIRTCHNL_OP_ADD_FDIR_FILTER; > + args.in_args =3D (uint8_t *)(&filter->input); > + args.in_args_size =3D sizeof(*(&filter->input)); > + args.out_buffer =3D vf->aq_resp; > + args.out_size =3D IAVF_AQ_BUF_SZ; > + > + err =3D iavf_execute_vf_cmd(adapter, &args); > + if (err) { > + PMD_DRV_LOG(ERR, "fail to check flow direcotor rule"); > + return err; > + } > + > + fdir_status =3D (struct virtchnl_fdir_status *)args.out_buffer; > + > + if (fdir_status->status =3D=3D VIRTCHNL_FDIR_SUCCESS) > + PMD_DRV_LOG(INFO, > + "check rule request is successfully done by PF"); > + else if (fdir_status->status =3D=3D VIRTCHNL_FDIR_FAILURE_RULE_INVALID) > + PMD_DRV_LOG(INFO, > + "check rule request is failed due to parameters > validation" > + " or HW doesn't support"); > + > + return 0; > +} > diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build = index > 32eabca..ce71054 100644 > --- a/drivers/net/iavf/meson.build > +++ b/drivers/net/iavf/meson.build > @@ -13,6 +13,7 @@ sources =3D files( > 'iavf_rxtx.c', > 'iavf_vchnl.c', > 'iavf_generic_flow.c', > + 'iavf_fdir.c', > ) >=20 > if arch_subdir =3D=3D 'x86' > -- > 1.8.3.1
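[Cao, Yahui]
The review point about the unchecked count/number and count/layer indexes can be made concrete with a small model: each `++number` (or `++layer`) writes into a fixed-size array inside the virtchnl message, so the parser needs an upper-bound test before the write. A sketch of the guarded pattern (the array size here is a stand-in, not the real virtchnl bound):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_ACTIONS 8	/* stand-in for the virtchnl action array size */

struct action_set {
	int count;
	int actions[MAX_ACTIONS];
};

/* Append an action only if the fixed array still has room; this is
 * the upper-bound check the review asks for before
 * actions[number] is written. */
static bool action_set_add(struct action_set *set, int type)
{
	if (set->count >= MAX_ACTIONS)
		return false;	/* would index out of bounds */
	set->actions[set->count++] = type;
	return true;
}
```

In the parser, a `false` return would map to `rte_flow_error_set(error, EINVAL, ...)` so an over-long action list is rejected instead of corrupting adjacent memory.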