From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Zhou, JunX W"
To: "Di, ChenxuX", "dev@dpdk.org"
Cc: "Xing, Beilei", "Guo, Jia", "Wang, Haiyue", "Di, ChenxuX", "stable@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH v2] net/i40e: fix incorrect FDIR flex configuration
Date: Mon, 9 Nov 2020 03:23:07 +0000
References: <20201104082959.63800-1-chenxux.di@intel.com> <20201106064715.72714-1-chenxux.di@intel.com>
In-Reply-To: <20201106064715.72714-1-chenxux.di@intel.com>

Tested-by: Zhou, Jun

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chenxu Di
Sent: Friday, November 6, 2020 2:47 PM
To: dev@dpdk.org
Cc: Xing, Beilei; Guo, Jia; Wang, Haiyue; Di, ChenxuX; stable@dpdk.org
Subject: [dpdk-dev] [PATCH v2] net/i40e: fix incorrect FDIR flex configuration

The FDIR flex mask and flex pit configuration should not be programmed
during flow validate. It should be programmed at flow create.

Fixes: 6ced3dd72f5f ("net/i40e: support flexible payload parsing for FDIR")
Cc: stable@dpdk.org

Signed-off-by: Chenxu Di
---
v2:
- Merge two patches into one patch.
---
 drivers/net/i40e/i40e_ethdev.h |  22 ++-
 drivers/net/i40e/i40e_fdir.c   | 194 ++++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_flow.c   | 195 ++------------------------------
 3 files changed, 216 insertions(+), 195 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1466998aa..e00133c88 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -599,11 +599,22 @@ enum i40e_fdir_ip_type {
 	I40E_FDIR_IPTYPE_IPV6,
 };
 
+/*
+ * Structure to store flex pit for flow director.
+ */
+struct i40e_fdir_flex_pit {
+	uint8_t src_offset; /* offset in words from the beginning of payload */
+	uint8_t size;       /* size in words */
+	uint8_t dst_offset; /* offset in words of flexible payload */
+};
+
 /* A structure used to contain extend input of flow */
 struct i40e_fdir_flow_ext {
 	uint16_t vlan_tci;
 	uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
 	/* It is filled by the flexible payload to match. */
+	uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
+	uint8_t raw_id;
 	uint8_t is_vf;   /* 1 for VF, 0 for port dev */
 	uint16_t dst_id; /* VF ID, available when is_vf is 1*/
 	bool inner_ip;   /* If there is inner ip */
@@ -612,6 +623,8 @@ struct i40e_fdir_flow_ext {
 	bool customized_pctype; /* If customized pctype is used */
 	bool pkt_template; /* If raw packet template is used */
 	bool is_udp; /* ipv4|ipv6 udp flow */
+	enum i40e_flxpld_layer_idx layer_idx;
+	struct i40e_fdir_flex_pit flex_pit[I40E_MAX_FLXPLD_LAYER *
+					   I40E_MAX_FLXPLD_FIED];
 };
 
 /* A structure used to define the input for a flow director filter entry */
@@ -663,15 +676,6 @@ struct i40e_fdir_filter_conf {
 	struct i40e_fdir_action action; /* Action taken when match */
 };
 
-/*
- * Structure to store flex pit for flow diretor.
- */
-struct i40e_fdir_flex_pit {
-	uint8_t src_offset; /* offset in words from the beginning of payload */
-	uint8_t size;       /* size in words */
-	uint8_t dst_offset; /* offset in words of flexible payload */
-};
-
 struct i40e_fdir_flex_mask {
 	uint8_t word_mask; /**< Bit i enables word i of flexible payload */
 	uint8_t nb_bitmask;
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index aa8e72949..e64cb2fd0 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -1765,6 +1765,153 @@ i40e_add_del_fdir_filter(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int
+i40e_flow_store_flex_pit(struct i40e_pf *pf,
+			 struct i40e_fdir_flex_pit *flex_pit,
+			 enum i40e_flxpld_layer_idx layer_idx,
+			 uint8_t raw_id)
+{
+	uint8_t field_idx;
+
+	field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + raw_id;
+	/* Check if the configuration is conflicted */
+	if (pf->fdir.flex_pit_flag[layer_idx] &&
+	    (pf->fdir.flex_set[field_idx].src_offset != flex_pit->src_offset ||
+	     pf->fdir.flex_set[field_idx].size != flex_pit->size ||
+	     pf->fdir.flex_set[field_idx].dst_offset != flex_pit->dst_offset))
+		return -1;
+
+	/* Check if the configuration exists. */
+	if (pf->fdir.flex_pit_flag[layer_idx] &&
+	    (pf->fdir.flex_set[field_idx].src_offset == flex_pit->src_offset &&
+	     pf->fdir.flex_set[field_idx].size == flex_pit->size &&
+	     pf->fdir.flex_set[field_idx].dst_offset == flex_pit->dst_offset))
+		return 1;
+
+	pf->fdir.flex_set[field_idx].src_offset =
+		flex_pit->src_offset;
+	pf->fdir.flex_set[field_idx].size =
+		flex_pit->size;
+	pf->fdir.flex_set[field_idx].dst_offset =
+		flex_pit->dst_offset;
+
+	return 0;
+}
+
+static void
+i40e_flow_set_fdir_flex_pit(struct i40e_pf *pf,
+			    enum i40e_flxpld_layer_idx layer_idx,
+			    uint8_t raw_id)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	uint32_t flx_pit, flx_ort;
+	uint8_t field_idx;
+	uint16_t min_next_off = 0; /* in words */
+	uint8_t i;
+
+	if (raw_id) {
+		flx_ort = (1 << I40E_GLQF_ORT_FLX_PAYLOAD_SHIFT) |
+			  (raw_id << I40E_GLQF_ORT_FIELD_CNT_SHIFT) |
+			  (layer_idx * I40E_MAX_FLXPLD_FIED);
+		I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(33 + layer_idx), flx_ort);
+	}
+
+	/* Set flex pit */
+	for (i = 0; i < raw_id; i++) {
+		field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + i;
+		flx_pit = MK_FLX_PIT(pf->fdir.flex_set[field_idx].src_offset,
+				     pf->fdir.flex_set[field_idx].size,
+				     pf->fdir.flex_set[field_idx].dst_offset);
+
+		I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(field_idx), flx_pit);
+		min_next_off = pf->fdir.flex_set[field_idx].src_offset +
+			       pf->fdir.flex_set[field_idx].size;
+	}
+
+	for (; i < I40E_MAX_FLXPLD_FIED; i++) {
+		/* set the non-used register obeying register's constrain */
+		field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + i;
+		flx_pit = MK_FLX_PIT(min_next_off, NONUSE_FLX_PIT_FSIZE,
+				     NONUSE_FLX_PIT_DEST_OFF);
+		I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(field_idx), flx_pit);
+		min_next_off++;
+	}
+
+	pf->fdir.flex_pit_flag[layer_idx] = 1;
+}
+
+static int
+i40e_flow_store_flex_mask(struct i40e_pf *pf,
+			  enum i40e_filter_pctype pctype,
+			  uint8_t *mask)
+{
+	struct i40e_fdir_flex_mask flex_mask;
+	uint8_t nb_bitmask = 0;
+	uint16_t mask_tmp;
+	uint8_t i;
+
+	memset(&flex_mask, 0, sizeof(struct i40e_fdir_flex_mask));
+	for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i += sizeof(uint16_t)) {
+		mask_tmp = I40E_WORD(mask[i], mask[i + 1]);
+		if (mask_tmp) {
+			flex_mask.word_mask |=
+				I40E_FLEX_WORD_MASK(i / sizeof(uint16_t));
+			if (mask_tmp != UINT16_MAX) {
+				flex_mask.bitmask[nb_bitmask].mask = ~mask_tmp;
+				flex_mask.bitmask[nb_bitmask].offset =
+					i / sizeof(uint16_t);
+				nb_bitmask++;
+				if (nb_bitmask > I40E_FDIR_BITMASK_NUM_WORD)
+					return -1;
+			}
+		}
+	}
+	flex_mask.nb_bitmask = nb_bitmask;
+
+	if (pf->fdir.flex_mask_flag[pctype] &&
+	    (memcmp(&flex_mask, &pf->fdir.flex_mask[pctype],
+		    sizeof(struct i40e_fdir_flex_mask))))
+		return -2;
+	else if (pf->fdir.flex_mask_flag[pctype] &&
+		 !(memcmp(&flex_mask, &pf->fdir.flex_mask[pctype],
+			  sizeof(struct i40e_fdir_flex_mask))))
+		return 1;
+
+	memcpy(&pf->fdir.flex_mask[pctype], &flex_mask,
+	       sizeof(struct i40e_fdir_flex_mask));
+	return 0;
+}
+
+static void
+i40e_flow_set_fdir_flex_msk(struct i40e_pf *pf,
+			    enum i40e_filter_pctype pctype)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_fdir_flex_mask *flex_mask;
+	uint32_t flxinset, fd_mask;
+	uint8_t i;
+
+	/* Set flex mask */
+	flex_mask = &pf->fdir.flex_mask[pctype];
+	flxinset = (flex_mask->word_mask <<
+		    I40E_PRTQF_FD_FLXINSET_INSET_SHIFT) &
+		   I40E_PRTQF_FD_FLXINSET_INSET_MASK;
+	i40e_write_rx_ctl(hw, I40E_PRTQF_FD_FLXINSET(pctype), flxinset);
+
+	for (i = 0; i < flex_mask->nb_bitmask; i++) {
+		fd_mask = (flex_mask->bitmask[i].mask <<
+			   I40E_PRTQF_FD_MSK_MASK_SHIFT) &
+			  I40E_PRTQF_FD_MSK_MASK_MASK;
+		fd_mask |= ((flex_mask->bitmask[i].offset +
+			     I40E_FLX_OFFSET_IN_FIELD_VECTOR) <<
+			    I40E_PRTQF_FD_MSK_OFFSET_SHIFT) &
+			   I40E_PRTQF_FD_MSK_OFFSET_MASK;
+		i40e_write_rx_ctl(hw, I40E_PRTQF_FD_MSK(pctype, i), fd_mask);
+	}
+
+	pf->fdir.flex_mask_flag[pctype] = 1;
+}
+
 static inline unsigned char *
 i40e_find_available_buffer(struct rte_eth_dev *dev)
 {
@@ -1817,13 +1964,19 @@ i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
 {
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	enum i40e_flxpld_layer_idx layer_idx = I40E_FLXPLD_L2_IDX;
 	unsigned char *pkt = NULL;
 	enum i40e_filter_pctype pctype;
 	struct i40e_fdir_info *fdir_info = &pf->fdir;
+	uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
 	struct i40e_fdir_filter *node;
 	struct i40e_fdir_filter check_filter; /* Check if the filter exists */
+	struct i40e_fdir_flex_pit flex_pit;
+	bool cfg_flex_pit = true;
 	bool wait_status = true;
+	uint8_t field_idx;
 	int ret = 0;
+	int i;
 
 	if (pf->fdir.fdir_vsi == NULL) {
 		PMD_DRV_LOG(ERR, "FDIR is not enabled");
@@ -1856,6 +2009,47 @@ i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
 	i40e_fdir_filter_convert(filter, &check_filter);
 
 	if (add) {
+		if (!filter->input.flow_ext.customized_pctype) {
+			for (i = 0; i < filter->input.flow_ext.raw_id; i++) {
+				layer_idx = filter->input.flow_ext.layer_idx;
+				field_idx = layer_idx *
+					    I40E_MAX_FLXPLD_FIED + i;
+				flex_pit = filter->input.flow_ext.flex_pit[field_idx];
+
+				/* Store flex pit to SW */
+				ret = i40e_flow_store_flex_pit(pf, &flex_pit,
+							       layer_idx, i);
+				if (ret < 0) {
+					PMD_DRV_LOG(ERR, "Conflict with the"
+						    " first flexible rule.");
+					return -EINVAL;
+				} else if (ret > 0) {
+					cfg_flex_pit = false;
+				}
+			}
+
+			if (cfg_flex_pit)
+				i40e_flow_set_fdir_flex_pit(pf, layer_idx,
+						filter->input.flow_ext.raw_id);
+
+			/* Store flex mask to SW */
+			for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i++)
+				flex_mask[i] =
+					filter->input.flow_ext.flex_mask[i];
+
+			ret = i40e_flow_store_flex_mask(pf, pctype, flex_mask);
+			if (ret == -1) {
+				PMD_DRV_LOG(ERR, "Exceed maximal"
+					    " number of bitmasks");
+				return -EINVAL;
+			} else if (ret == -2) {
+				PMD_DRV_LOG(ERR, "Conflict with the"
+					    " first flexible rule");
+				return -EINVAL;
+			} else if (ret == 0) {
+				i40e_flow_set_fdir_flex_msk(pf, pctype);
+			}
+		}
+
 		ret = i40e_sw_fdir_filter_insert(pf, &check_filter);
 		if (ret < 0) {
 			PMD_DRV_LOG(ERR,
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index adc5da1c5..098ae13ab 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2240,152 +2240,6 @@ i40e_flow_check_raw_item(const struct rte_flow_item *item,
 	return 0;
 }
 
-static int
-i40e_flow_store_flex_pit(struct i40e_pf *pf,
-			 struct i40e_fdir_flex_pit *flex_pit,
-			 enum i40e_flxpld_layer_idx layer_idx,
-			 uint8_t raw_id)
-{
-	uint8_t field_idx;
-
-	field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + raw_id;
-	/* Check if the configuration is conflicted */
-	if (pf->fdir.flex_pit_flag[layer_idx] &&
-	    (pf->fdir.flex_set[field_idx].src_offset != flex_pit->src_offset ||
-	     pf->fdir.flex_set[field_idx].size != flex_pit->size ||
-	     pf->fdir.flex_set[field_idx].dst_offset != flex_pit->dst_offset))
-		return -1;
-
-	/* Check if the configuration exists. */
-	if (pf->fdir.flex_pit_flag[layer_idx] &&
-	    (pf->fdir.flex_set[field_idx].src_offset == flex_pit->src_offset &&
-	     pf->fdir.flex_set[field_idx].size == flex_pit->size &&
-	     pf->fdir.flex_set[field_idx].dst_offset == flex_pit->dst_offset))
-		return 1;
-
-	pf->fdir.flex_set[field_idx].src_offset =
-		flex_pit->src_offset;
-	pf->fdir.flex_set[field_idx].size =
-		flex_pit->size;
-	pf->fdir.flex_set[field_idx].dst_offset =
-		flex_pit->dst_offset;
-
-	return 0;
-}
-
-static int
-i40e_flow_store_flex_mask(struct i40e_pf *pf,
-			  enum i40e_filter_pctype pctype,
-			  uint8_t *mask)
-{
-	struct i40e_fdir_flex_mask flex_mask;
-	uint16_t mask_tmp;
-	uint8_t i, nb_bitmask = 0;
-
-	memset(&flex_mask, 0, sizeof(struct i40e_fdir_flex_mask));
-	for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i += sizeof(uint16_t)) {
-		mask_tmp = I40E_WORD(mask[i], mask[i + 1]);
-		if (mask_tmp) {
-			flex_mask.word_mask |=
-				I40E_FLEX_WORD_MASK(i / sizeof(uint16_t));
-			if (mask_tmp != UINT16_MAX) {
-				flex_mask.bitmask[nb_bitmask].mask = ~mask_tmp;
-				flex_mask.bitmask[nb_bitmask].offset =
-					i / sizeof(uint16_t);
-				nb_bitmask++;
-				if (nb_bitmask > I40E_FDIR_BITMASK_NUM_WORD)
-					return -1;
-			}
-		}
-	}
-	flex_mask.nb_bitmask = nb_bitmask;
-
-	if (pf->fdir.flex_mask_flag[pctype] &&
-	    (memcmp(&flex_mask, &pf->fdir.flex_mask[pctype],
-		    sizeof(struct i40e_fdir_flex_mask))))
-		return -2;
-	else if (pf->fdir.flex_mask_flag[pctype] &&
-		 !(memcmp(&flex_mask, &pf->fdir.flex_mask[pctype],
-			  sizeof(struct i40e_fdir_flex_mask))))
-		return 1;
-
-	memcpy(&pf->fdir.flex_mask[pctype], &flex_mask,
-	       sizeof(struct i40e_fdir_flex_mask));
-	return 0;
-}
-
-static void
-i40e_flow_set_fdir_flex_pit(struct i40e_pf *pf,
-			    enum i40e_flxpld_layer_idx layer_idx,
-			    uint8_t raw_id)
-{
-	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
-	uint32_t flx_pit, flx_ort;
-	uint8_t field_idx;
-	uint16_t min_next_off = 0; /* in words */
-	uint8_t i;
-
-	if (raw_id) {
-		flx_ort = (1 << I40E_GLQF_ORT_FLX_PAYLOAD_SHIFT) |
-			  (raw_id << I40E_GLQF_ORT_FIELD_CNT_SHIFT) |
-			  (layer_idx * I40E_MAX_FLXPLD_FIED);
-		I40E_WRITE_GLB_REG(hw, I40E_GLQF_ORT(33 + layer_idx), flx_ort);
-	}
-
-	/* Set flex pit */
-	for (i = 0; i < raw_id; i++) {
-		field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + i;
-		flx_pit = MK_FLX_PIT(pf->fdir.flex_set[field_idx].src_offset,
-				     pf->fdir.flex_set[field_idx].size,
-				     pf->fdir.flex_set[field_idx].dst_offset);
-
-		I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(field_idx), flx_pit);
-		min_next_off = pf->fdir.flex_set[field_idx].src_offset +
-			       pf->fdir.flex_set[field_idx].size;
-	}
-
-	for (; i < I40E_MAX_FLXPLD_FIED; i++) {
-		/* set the non-used register obeying register's constrain */
-		field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + i;
-		flx_pit = MK_FLX_PIT(min_next_off, NONUSE_FLX_PIT_FSIZE,
-				     NONUSE_FLX_PIT_DEST_OFF);
-		I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(field_idx), flx_pit);
-		min_next_off++;
-	}
-
-	pf->fdir.flex_pit_flag[layer_idx] = 1;
-}
-
-static void
-i40e_flow_set_fdir_flex_msk(struct i40e_pf *pf,
-			    enum i40e_filter_pctype pctype)
-{
-	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
-	struct i40e_fdir_flex_mask *flex_mask;
-	uint32_t flxinset, fd_mask;
-	uint8_t i;
-
-	/* Set flex mask */
-	flex_mask = &pf->fdir.flex_mask[pctype];
-	flxinset = (flex_mask->word_mask <<
-		    I40E_PRTQF_FD_FLXINSET_INSET_SHIFT) &
-		   I40E_PRTQF_FD_FLXINSET_INSET_MASK;
-	i40e_write_rx_ctl(hw, I40E_PRTQF_FD_FLXINSET(pctype), flxinset);
-
-	for (i = 0; i < flex_mask->nb_bitmask; i++) {
-		fd_mask = (flex_mask->bitmask[i].mask <<
-			   I40E_PRTQF_FD_MSK_MASK_SHIFT) &
-			  I40E_PRTQF_FD_MSK_MASK_MASK;
-		fd_mask |= ((flex_mask->bitmask[i].offset +
-			     I40E_FLX_OFFSET_IN_FIELD_VECTOR) <<
-			    I40E_PRTQF_FD_MSK_OFFSET_SHIFT) &
-			   I40E_PRTQF_FD_MSK_OFFSET_MASK;
-		i40e_write_rx_ctl(hw, I40E_PRTQF_FD_MSK(pctype, i), fd_mask);
-	}
-
-	pf->fdir.flex_mask_flag[pctype] = 1;
-}
-
 static int
 i40e_flow_set_fdir_inset(struct i40e_pf *pf,
 			 enum i40e_filter_pctype pctype,
@@ -2604,18 +2458,15 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 	uint16_t len_arr[I40E_MAX_FLXPLD_FIED];
 	struct i40e_fdir_flex_pit flex_pit;
 	uint8_t next_dst_off = 0;
-	uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
 	uint16_t flex_size;
-	bool cfg_flex_pit = true;
-	bool cfg_flex_msk = true;
 	uint16_t ether_type;
 	uint32_t vtc_flow_cpu;
 	bool outer_ip = true;
+	uint8_t field_idx;
 	int ret;
 
 	memset(off_arr, 0, sizeof(off_arr));
 	memset(len_arr, 0, sizeof(len_arr));
-	memset(flex_mask, 0, I40E_FDIR_MAX_FLEX_LEN);
 	filter->input.flow_ext.customized_pctype = false;
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		if (item->last) {
@@ -3163,6 +3014,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			flex_size = 0;
 			memset(&flex_pit, 0, sizeof(struct i40e_fdir_flex_pit));
+			field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + raw_id;
 			flex_pit.size =
 				raw_spec->length / sizeof(uint16_t);
 			flex_pit.dst_offset =
@@ -3189,27 +3041,21 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			/* Store flex pit to SW */
-			ret = i40e_flow_store_flex_pit(pf, &flex_pit,
-						       layer_idx, raw_id);
-			if (ret < 0) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Conflict with the first flexible rule.");
-				return -rte_errno;
-			} else if (ret > 0)
-				cfg_flex_pit = false;
-
 			for (i = 0; i < raw_spec->length; i++) {
 				j = i + next_dst_off;
 				filter->input.flow_ext.flexbytes[j] =
 					raw_spec->pattern[i];
-				flex_mask[j] = raw_mask->pattern[i];
+				filter->input.flow_ext.flex_mask[j] =
+					raw_mask->pattern[i];
 			}
 
 			next_dst_off += raw_spec->length;
 			raw_id++;
+
+			memcpy(&filter->input.flow_ext.flex_pit[field_idx],
+			       &flex_pit, sizeof(struct i40e_fdir_flex_pit));
+			filter->input.flow_ext.layer_idx = layer_idx;
+			filter->input.flow_ext.raw_id = raw_id;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VF:
 			vf_spec = item->spec;
@@ -3295,29 +3141,6 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					   "Invalid pattern mask.");
 			return -rte_errno;
 		}
-
-		/* Store flex mask to SW */
-		ret = i40e_flow_store_flex_mask(pf, pctype, flex_mask);
-		if (ret == -1) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Exceed maximal number of bitmasks");
-			return -rte_errno;
-		} else if (ret == -2) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Conflict with the first flexible rule");
-			return -rte_errno;
-		} else if (ret > 0)
-			cfg_flex_msk = false;
-
-		if (cfg_flex_pit)
-			i40e_flow_set_fdir_flex_pit(pf, layer_idx, raw_id);
-
-		if (cfg_flex_msk)
-			i40e_flow_set_fdir_flex_msk(pf, pctype);
 	}
 
 	filter->input.pctype = pctype;
-- 
2.17.1