From: Jeff Guo
To: Simei Su, qi.z.zhang@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] net/ice: fix GTPU down/uplink and extension conflict
Date: Fri, 24 Jul 2020 15:13:09 +0800
In-Reply-To: <1595556601-87800-1-git-send-email-simei.su@intel.com>
References: <1595556601-87800-1-git-send-email-simei.su@intel.com>

Hi, Simei

On 7/24/2020 10:10 AM, Simei Su wrote:
> When adding a RSS rule with GTPU_DWN/UP, it will search profile
> table from the top index. If a RSS rule with GTPU_EH already exists,
> then GTPU_DWN/UP packet will match GTPU_EH profile. This patch solves
> this issue by removing existed GTPU_EH rule before creating a new
> GTPU_DWN/UP rule.

Suggest explaining the relationship between GTPU_EH_UPLINK/DWNLINK and
GTPU_EH in the commit log, to help readers understand the reason for the
change.

> Fixes: 2e2810fc1868 ("net/ice: fix GTPU RSS")
>
> Signed-off-by: Simei Su
> ---
>  drivers/net/ice/ice_ethdev.c |  47 +++++++++++++++
>  drivers/net/ice/ice_ethdev.h |  15 +++++
>  drivers/net/ice/ice_hash.c   | 139 +++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 201 insertions(+)
>
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index a4a0390..8839146 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -2538,6 +2538,11 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
> if (ret)
> PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4 rss flow fail %d",
> __func__, ret);

A blank line is needed here.
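Something like this, just to show where the blank line would go (the quoted
code is otherwise unchanged):

    if (ret)
        PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4 rss flow fail %d",
                __func__, ret);

    /* Store hash field and header for gtpu_eh ipv4 */
    pf->gtpu_eh.ipv4.hash_fld = ICE_FLOW_HASH_IPV4;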
> + /* Store hash field and header for gtpu_eh ipv4 */
> + pf->gtpu_eh.ipv4.hash_fld = ICE_FLOW_HASH_IPV4;
> + pf->gtpu_eh.ipv4.pkt_hdr = ICE_FLOW_SEG_HDR_GTPU_EH |
> + ICE_FLOW_SEG_HDR_IPV4 |
> + ICE_FLOW_SEG_HDR_IPV_OTHER;
>
> ret = ice_add_rss_cfg(hw, vsi->idx, ICE_FLOW_HASH_IPV4,
> ICE_FLOW_SEG_HDR_PPPOE |
> @@ -2564,6 +2569,11 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
> if (ret)
> PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV6 rss flow fail %d",
> __func__, ret);
> + /* Store hash field and header for gtpu_eh ipv6 */
> + pf->gtpu_eh.ipv6.hash_fld = ICE_FLOW_HASH_IPV6;
> + pf->gtpu_eh.ipv6.pkt_hdr = ICE_FLOW_SEG_HDR_GTPU_EH |
> + ICE_FLOW_SEG_HDR_IPV6 |
> + ICE_FLOW_SEG_HDR_IPV_OTHER;
>
> ret = ice_add_rss_cfg(hw, vsi->idx, ICE_FLOW_HASH_IPV6,
> ICE_FLOW_SEG_HDR_PPPOE |
> @@ -2586,6 +2596,9 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
> if (ret)
> PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4_UDP rss flow fail %d",
> __func__, ret);
> + /* Store hash field and header for gtpu_eh ipv4_udp */
> + pf->gtpu_eh.ipv4_udp.hash_fld = ICE_HASH_UDP_IPV4;
> + pf->gtpu_eh.ipv4_udp.pkt_hdr = ICE_FLOW_SEG_HDR_GTPU_EH;
>
> ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_UDP_IPV4,
> ICE_FLOW_SEG_HDR_PPPOE, 0);
> @@ -2606,6 +2619,9 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
> if (ret)
> PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV6_UDP rss flow fail %d",
> __func__, ret);
> + /* Store hash field and header for gtpu_eh ipv6_udp */
> + pf->gtpu_eh.ipv6_udp.hash_fld = ICE_HASH_UDP_IPV6;
> + pf->gtpu_eh.ipv6_udp.pkt_hdr = ICE_FLOW_SEG_HDR_GTPU_EH;
>
> ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_UDP_IPV6,
> ICE_FLOW_SEG_HDR_PPPOE, 0);
> @@ -2626,6 +2642,9 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
> if (ret)
> PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4_TCP rss flow fail %d",
> __func__, ret);
> + /* Store hash field and header for gtpu_eh ipv4_tcp */
> + pf->gtpu_eh.ipv4_tcp.hash_fld = ICE_HASH_TCP_IPV4;
> + pf->gtpu_eh.ipv4_tcp.pkt_hdr = ICE_FLOW_SEG_HDR_GTPU_EH;
>
> ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_TCP_IPV4,
> ICE_FLOW_SEG_HDR_PPPOE, 0);
> @@ -2646,6 +2665,9 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
> if (ret)
> PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV6_TCP rss flow fail %d",
> __func__, ret);
> + /* Store hash field and header for gtpu_eh ipv6_tcp */
> + pf->gtpu_eh.ipv6_tcp.hash_fld = ICE_HASH_TCP_IPV6;
> + pf->gtpu_eh.ipv6_tcp.pkt_hdr = ICE_FLOW_SEG_HDR_GTPU_EH;
>
> ret = ice_add_rss_cfg(hw, vsi->idx, ICE_HASH_TCP_IPV6,
> ICE_FLOW_SEG_HDR_PPPOE, 0);
> @@ -2695,6 +2717,28 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
> }
> }
>
> +static void
> +ice_rss_ctx_init(struct ice_pf *pf)
> +{
> + pf->gtpu_eh.ipv4.hash_fld = 0;
> + pf->gtpu_eh.ipv4.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv6.hash_fld = 0;
> + pf->gtpu_eh.ipv6.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv4_udp.hash_fld = 0;
> + pf->gtpu_eh.ipv4_udp.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv6_udp.hash_fld = 0;
> + pf->gtpu_eh.ipv6_udp.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv4_tcp.hash_fld = 0;
> + pf->gtpu_eh.ipv4_tcp.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv6_tcp.hash_fld = 0;
> + pf->gtpu_eh.ipv6_tcp.pkt_hdr = 0;
> +}
> +
> static int ice_init_rss(struct ice_pf *pf)
> {
> struct ice_hw *hw = ICE_PF_TO_HW(pf);
> @@ -2755,6 +2799,9 @@ static int ice_init_rss(struct ice_pf *pf)
> (1 << VSIQF_HASH_CTL_HASH_SCHEME_S);
> ICE_WRITE_REG(hw, VSIQF_HASH_CTL(vsi->vsi_id), reg);
>
> + /* Initialize RSS context for gtpu_eh */
> + ice_rss_ctx_init(pf);
> +
> /* RSS hash configuration */
> ice_rss_hash_set(pf, rss_conf->rss_hf);
>
> diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
> index 87984ef..1baf0b4 100644
> --- a/drivers/net/ice/ice_ethdev.h
> +++ b/drivers/net/ice/ice_ethdev.h
> @@ -358,6 +358,20 @@ struct ice_fdir_info {
> struct ice_fdir_counter_pool_container counter;
> };
>
> +struct ice_gtpu_eh {
> + uint32_t pkt_hdr;
> + uint64_t hash_fld;
> +};
> +

The name "ice_gtpu_eh" is not clear; how about "ice_hash_gtpu_eh_ctx"?

> +struct ice_hash_gtpu_eh {
> + struct ice_gtpu_eh ipv4;
> + struct ice_gtpu_eh ipv6;
> + struct ice_gtpu_eh ipv4_udp;
> + struct ice_gtpu_eh ipv6_udp;
> + struct ice_gtpu_eh ipv4_tcp;
> + struct ice_gtpu_eh ipv6_tcp;
> +};
> +

I think you don't need to define a struct for each pattern and set/unset
the values for all of them. What you care about is just three items: the
header, the hash field and the pattern type, so you could simply define it
as below:

    struct ice_hash_gtpu_eh_ctx {
        uint32_t pkt_hdr;
        uint64_t hash_fld;
        /* for example: hdr = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_UDP */
        uint64_t hdr_hint;
    };

and then only check this hdr_hint when setting uplink/dwnlink:

    if (hdr & pf->gtpu_eh_ctx->hdr_hint)
        ice_rem_rss_cfg(...);

> struct ice_pf {
> struct ice_adapter *adapter; /* The adapter this PF associate to */
> struct ice_vsi *main_vsi; /* pointer to main VSI structure */
> @@ -381,6 +395,7 @@ struct ice_pf {
> uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
> uint16_t fdir_qp_offset;
> struct ice_fdir_info fdir; /* flow director info */
> + struct ice_hash_gtpu_eh gtpu_eh;
> uint16_t hw_prof_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
> uint16_t fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
> struct ice_hw_port_stats stats_offset;
> diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
> index e535e4b..dd70353 100644
> --- a/drivers/net/ice/ice_hash.c
> +++ b/drivers/net/ice/ice_hash.c
> @@ -1232,6 +1232,117 @@ struct ice_hash_match_type ice_hash_type_list[] = {
> }
>
> static int
> +ice_add_rss_cfg_pre(struct ice_pf *pf, uint32_t hdr, uint64_t fld)
> +{
> + struct ice_hw *hw = ICE_PF_TO_HW(pf);
> + struct ice_vsi *vsi = pf->main_vsi;
> + int ret;
> +
> + /**
> + * If header field contains GTPU_EH, store gtpu_eh context.
> + * If header field contains GTPU_DWN/UP, remove existed gtpu_eh.
> + */
> + if ((hdr & ICE_FLOW_SEG_HDR_GTPU_EH) &&
> + (hdr & (ICE_FLOW_SEG_HDR_GTPU_DWN |
> + ICE_FLOW_SEG_HDR_GTPU_UP)) == 0) {

No need to check for DWN/UP here: EH and DWN/UP are mutually exclusive,
and that is already handled when the pattern is parsed.

> + if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
> + (hdr & ICE_FLOW_SEG_HDR_UDP)) {

Alignment should match the open parenthesis. The same applies below.
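For example, the quoted condition would look like this (spacing only, no
functional change):

        if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
            (hdr & ICE_FLOW_SEG_HDR_UDP)) {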
> + pf->gtpu_eh.ipv4_udp.pkt_hdr = hdr;
> + pf->gtpu_eh.ipv4_udp.hash_fld = fld;
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV6) &&
> + (hdr & ICE_FLOW_SEG_HDR_UDP)) {
> + pf->gtpu_eh.ipv6_udp.pkt_hdr = hdr;
> + pf->gtpu_eh.ipv6_udp.hash_fld = fld;
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
> + (hdr & ICE_FLOW_SEG_HDR_TCP)) {
> + pf->gtpu_eh.ipv4_tcp.pkt_hdr = hdr;
> + pf->gtpu_eh.ipv4_tcp.hash_fld = fld;
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV6) &&
> + (hdr & ICE_FLOW_SEG_HDR_TCP)) {
> + pf->gtpu_eh.ipv6_tcp.pkt_hdr = hdr;
> + pf->gtpu_eh.ipv6_tcp.hash_fld = fld;
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
> + (hdr & (ICE_FLOW_SEG_HDR_UDP |
> + ICE_FLOW_SEG_HDR_TCP)) == 0) {
> + pf->gtpu_eh.ipv4.pkt_hdr = hdr;
> + pf->gtpu_eh.ipv4.hash_fld = fld;
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV6) &&
> + (hdr & (ICE_FLOW_SEG_HDR_UDP |
> + ICE_FLOW_SEG_HDR_TCP)) == 0) {
> + pf->gtpu_eh.ipv6.pkt_hdr = hdr;
> + pf->gtpu_eh.ipv6.hash_fld = fld;
> + }
> + } else if (hdr & (ICE_FLOW_SEG_HDR_GTPU_DWN |
> + ICE_FLOW_SEG_HDR_GTPU_UP)) {
> + if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
> + (hdr & ICE_FLOW_SEG_HDR_UDP)) {
> + if (pf->gtpu_eh.ipv4_udp.hash_fld &&
> + pf->gtpu_eh.ipv4_udp.pkt_hdr) {
> + ret = ice_rem_rss_cfg(hw, vsi->idx,
> + pf->gtpu_eh.ipv4_udp.hash_fld,
> + pf->gtpu_eh.ipv4_udp.pkt_hdr);
> + if (ret)
> + return -rte_errno;
> + }

Would it be better to use local variables and call ice_rem_rss_cfg only
once after the if-else chain? (A rough sketch is at the end of this mail.)

> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV6) &&
> + (hdr & ICE_FLOW_SEG_HDR_UDP)) {
> + if (pf->gtpu_eh.ipv6_udp.hash_fld &&
> + pf->gtpu_eh.ipv6_udp.pkt_hdr) {
> + ret = ice_rem_rss_cfg(hw, vsi->idx,
> + pf->gtpu_eh.ipv6_udp.hash_fld,
> + pf->gtpu_eh.ipv6_udp.pkt_hdr);
> + if (ret)
> + return -rte_errno;
> + }
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
> + (hdr & ICE_FLOW_SEG_HDR_TCP)) {
> + if (pf->gtpu_eh.ipv4_tcp.hash_fld &&
> + pf->gtpu_eh.ipv4_tcp.pkt_hdr) {
> + ret = ice_rem_rss_cfg(hw, vsi->idx,
> + pf->gtpu_eh.ipv4_tcp.hash_fld,
> + pf->gtpu_eh.ipv4_tcp.pkt_hdr);
> + if (ret)
> + return -rte_errno;
> + }
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV6) &&
> + (hdr & ICE_FLOW_SEG_HDR_TCP)) {
> + if (pf->gtpu_eh.ipv6_tcp.hash_fld &&
> + pf->gtpu_eh.ipv6_tcp.pkt_hdr) {
> + ret = ice_rem_rss_cfg(hw, vsi->idx,
> + pf->gtpu_eh.ipv6_tcp.hash_fld,
> + pf->gtpu_eh.ipv6_tcp.pkt_hdr);
> + if (ret)
> + return -rte_errno;
> + }
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
> + (hdr & (ICE_FLOW_SEG_HDR_UDP |
> + ICE_FLOW_SEG_HDR_TCP)) == 0) {
> + if (pf->gtpu_eh.ipv4.hash_fld &&
> + pf->gtpu_eh.ipv4.pkt_hdr) {
> + ret = ice_rem_rss_cfg(hw, vsi->idx,
> + pf->gtpu_eh.ipv4.hash_fld,
> + pf->gtpu_eh.ipv4.pkt_hdr);
> + if (ret)
> + return -rte_errno;
> + }
> + } else if ((hdr & ICE_FLOW_SEG_HDR_IPV6) &&
> + (hdr & (ICE_FLOW_SEG_HDR_UDP |
> + ICE_FLOW_SEG_HDR_TCP)) == 0) {
> + if (pf->gtpu_eh.ipv6.hash_fld &&
> + pf->gtpu_eh.ipv6.pkt_hdr) {
> + ret = ice_rem_rss_cfg(hw, vsi->idx,
> + pf->gtpu_eh.ipv6.hash_fld,
> + pf->gtpu_eh.ipv6.pkt_hdr);
> + if (ret)
> + return -rte_errno;
> + }
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int
> ice_hash_create(struct ice_adapter *ad,
> struct rte_flow *flow,
> void *meta,
> @@ -1248,6 +1359,10 @@ struct ice_hash_match_type ice_hash_type_list[] = {
> uint64_t hash_field = ((struct rss_meta *)meta)->hash_flds;
> uint8_t hash_function = ((struct rss_meta *)meta)->hash_function;
>
> + ret = ice_add_rss_cfg_pre(pf, headermask, hash_field);
> + if (ret)
> + return -rte_errno;
> +
> filter_ptr = rte_zmalloc("ice_rss_filter",
> sizeof(struct ice_hash_flow_cfg), 0);
> if (!filter_ptr) {
> @@ -1297,6 +1412,28 @@ struct ice_hash_match_type ice_hash_type_list[] = {
> return -rte_errno;
> }
>
> +static void
> +ice_rem_rss_cfg_post(struct ice_pf *pf)
> +{
> + pf->gtpu_eh.ipv4.hash_fld = 0;
> + pf->gtpu_eh.ipv4.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv6.hash_fld = 0;
> + pf->gtpu_eh.ipv6.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv4_udp.hash_fld = 0;
> + pf->gtpu_eh.ipv4_udp.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv6_udp.hash_fld = 0;
> + pf->gtpu_eh.ipv6_udp.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv4_tcp.hash_fld = 0;
> + pf->gtpu_eh.ipv4_tcp.pkt_hdr = 0;
> +
> + pf->gtpu_eh.ipv6_tcp.hash_fld = 0;
> + pf->gtpu_eh.ipv6_tcp.pkt_hdr = 0;
> +}
> +
> static int
> ice_hash_destroy(struct ice_adapter *ad,
> struct rte_flow *flow,
> @@ -1334,6 +1471,8 @@ struct ice_hash_match_type ice_hash_type_list[] = {
> }
> }
>
> + ice_rem_rss_cfg_post(pf);
> +
> rte_free(filter_ptr);
> return 0;
>
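Regarding the local-variable comment above, here is a rough, untested
sketch of what I mean for the GTPU_DWN/UP branch of ice_add_rss_cfg_pre(),
keeping your current struct layout (the rem_hdr/rem_fld names are only
placeholders):

    uint32_t rem_hdr = 0;
    uint64_t rem_fld = 0;

    if ((hdr & ICE_FLOW_SEG_HDR_IPV4) &&
        (hdr & ICE_FLOW_SEG_HDR_UDP)) {
        rem_hdr = pf->gtpu_eh.ipv4_udp.pkt_hdr;
        rem_fld = pf->gtpu_eh.ipv4_udp.hash_fld;
    } else if ((hdr & ICE_FLOW_SEG_HDR_IPV6) &&
               (hdr & ICE_FLOW_SEG_HDR_UDP)) {
        rem_hdr = pf->gtpu_eh.ipv6_udp.pkt_hdr;
        rem_fld = pf->gtpu_eh.ipv6_udp.hash_fld;
    }
    /* ... same for the tcp and ip-only cases ... */

    if (rem_hdr && rem_fld) {
        ret = ice_rem_rss_cfg(hw, vsi->idx, rem_fld, rem_hdr);
        if (ret)
            return -rte_errno;
    }

That way the stored config is removed in one place instead of duplicating
the ice_rem_rss_cfg() call in every branch.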