DPDK patches and discussions
From: "Yang, Qiming" <qiming.yang@intel.com>
To: "Jiang, JunyuX" <junyux.jiang@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Sun, GuinanX" <guinanx.sun@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 3/5] net/ice: support flow mark in AVX path
Date: Tue, 8 Sep 2020 07:54:01 +0000
Message-ID: <BN6PR11MB001735E8B971C4A79C1DE756E5290@BN6PR11MB0017.namprd11.prod.outlook.com>
In-Reply-To: <20200907091711.5980-4-junyux.jiang@intel.com>



> -----Original Message-----
> From: Jiang, JunyuX <junyux.jiang@intel.com>
> Sent: Monday, September 7, 2020 17:17
> To: dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Sun, GuinanX <guinanx.sun@intel.com>
> Subject: [PATCH v2 3/5] net/ice: support flow mark in AVX path
> 
> From: Guinan Sun <guinanx.sun@intel.com>
> 
> Support Flow Director mark ID parsing from Flex Rx descriptor in AVX path.
Same comments as on the previous patch in this series.
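
For context, here is a scalar sketch of what the new AVX2 helper in this
patch computes per descriptor (illustrative only; the flag names and the
0xFFFFFFFF sentinel come from the patch, the helper name below is
hypothetical):

	/* Needs rte_mbuf.h for PKT_RX_FDIR and PKT_RX_FDIR_ID.
	 * A flow_id of 0xFFFFFFFF means the packet did not hit an FDIR
	 * rule; any other value sets both flags in ol_flags.
	 */
	static inline uint32_t
	fdir_flags_scalar(uint32_t flow_id)
	{
		return (flow_id == 0xFFFFFFFF) ?
			0 : (PKT_RX_FDIR | PKT_RX_FDIR_ID);
	}

The vector version reaches the same result for eight descriptors at once:
the compare-equal against 0xFFFFFFFF builds an all-ones mask for misses,
the XOR with 0xFFFFFFFF inverts that mask, and the AND with the flag
constant leaves the flags set only for real matches.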

> 
> Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
> ---
>  drivers/net/ice/ice_rxtx_vec_avx2.c | 64 ++++++++++++++++++++++++++++-
>  1 file changed, 63 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
> index 07d129e3f..70e4b76db 100644
> --- a/drivers/net/ice/ice_rxtx_vec_avx2.c
> +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
> @@ -132,6 +132,25 @@ ice_rxq_rearm(struct ice_rx_queue *rxq)
>  	ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
>  }
> 
> +static inline __m256i
> +ice_flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
> +{
> +#define FDID_MIS_MAGIC 0xFFFFFFFF
> +	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
> +	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
> +	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
> +			PKT_RX_FDIR_ID);
> +	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
> +	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
> +	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
> +			fdir_mis_mask);
> +	/* this XOR op inverts the fdir_mask */
> +	fdir_mask = _mm256_xor_si256(fdir_mask, fdir_mis_mask);
> +	const __m256i fdir_flags = _mm256_and_si256(fdir_mask, pkt_fdir_bit);
> +
> +	return fdir_flags;
> +}
> +
>  static inline uint16_t
>  _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  			    uint16_t nb_pkts, uint8_t *split_packet)
> @@ -459,9 +478,51 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  					    rss_vlan_flag_bits);
> 
>  		/* merge flags */
> -		const __m256i mbuf_flags = _mm256_or_si256(l3_l4_flags,
> +		__m256i mbuf_flags = _mm256_or_si256(l3_l4_flags,
>  				rss_vlan_flags);
> 
> +		if (rxq->fdir_enabled) {
> +			const __m256i fdir_id4_7 =
> +				_mm256_unpackhi_epi32(raw_desc6_7, raw_desc4_5);
> +
> +			const __m256i fdir_id0_3 =
> +				_mm256_unpackhi_epi32(raw_desc2_3, raw_desc0_1);
> +
> +			const __m256i fdir_id0_7 =
> +				_mm256_unpackhi_epi64(fdir_id4_7, fdir_id0_3);
> +
> +			const __m256i fdir_flags =
> +				ice_flex_rxd_to_fdir_flags_vec_avx2(fdir_id0_7);
> +
> +			/* merge with fdir_flags */
> +			mbuf_flags = _mm256_or_si256(mbuf_flags, fdir_flags);
> +
> +			/* write to mbuf: have to use scalar store here */
> +			rx_pkts[i + 0]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 3);
> +
> +			rx_pkts[i + 1]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 7);
> +
> +			rx_pkts[i + 2]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 2);
> +
> +			rx_pkts[i + 3]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 6);
> +
> +			rx_pkts[i + 4]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 1);
> +
> +			rx_pkts[i + 5]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 5);
> +
> +			rx_pkts[i + 6]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 0);
> +
> +			rx_pkts[i + 7]->hash.fdir.hi =
> +				_mm256_extract_epi32(fdir_id0_7, 4);
> +		} /* if() on fdir_enabled */
> +
>  #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
>  		/**
>  		 * needs to load 2nd 16B of each desc for RSS hash parsing,
> @@ -551,6 +612,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  			mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
>  		} /* if() on RSS hash parsing */
>  #endif
> +
>  		/**
>  		 * At this point, we have the 8 sets of flags in the low 16-bits
>  		 * of each 32-bit value in vlan0.
> --
> 2.17.1
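
For completeness, a minimal sketch of how an application would consume the
mark this Rx path delivers, assuming an rte_flow rule with a MARK action has
already been installed (the helper name is hypothetical; the mbuf flags and
the hash.fdir.hi field are the ones the patch above fills in):

	#include <stdio.h>
	#include <rte_mbuf.h>

	/* Print the flow mark for packets that hit an FDIR/rte_flow rule. */
	static void
	print_flow_mark(struct rte_mbuf *m)
	{
		if ((m->ol_flags & PKT_RX_FDIR) &&
		    (m->ol_flags & PKT_RX_FDIR_ID))
			printf("flow mark: %u\n", m->hash.fdir.hi);
	}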


  reply	other threads:[~2020-09-08  7:54 UTC|newest]

Thread overview: 27+ messages
2020-08-26  7:54 [dpdk-dev] [PATCH 0/7] support RXDID22 and FDID22 Guinan Sun
2020-08-26  7:54 ` [dpdk-dev] [PATCH 1/7] net/ice: change RSS hash parsing in AVX path Guinan Sun
2020-08-26  7:54 ` [dpdk-dev] [PATCH 2/7] net/ice: change RSS hash parsing in SSE path Guinan Sun
2020-08-26  7:54 ` [dpdk-dev] [PATCH 3/7] net/ice: support flexible descriptor RxDID #22 Guinan Sun
2020-08-26  7:54 ` [dpdk-dev] [PATCH 4/7] net/ice: remove devargs flow-mark-support Guinan Sun
2020-08-26  7:54 ` [dpdk-dev] [PATCH 5/7] net/ice: add flow director enabled switch value Guinan Sun
2020-08-26  7:55 ` [dpdk-dev] [PATCH 6/7] net/ice: support Flex Rx desc and flow mark in AVX path Guinan Sun
2020-08-26  7:55 ` [dpdk-dev] [PATCH 7/7] net/ice: support Flex Rx desc and flow mark in SSE path Guinan Sun
2020-09-07  5:43 ` [dpdk-dev] [PATCH 0/7] support RXDID22 and FDID22 Zhang, Qi Z
2020-09-07  5:55   ` Jiang, JunyuX
2020-09-07  9:17 ` [dpdk-dev] [PATCH v2 0/5] supports RxDID #22 and FDID Junyu Jiang
2020-09-07  9:17   ` [dpdk-dev] [PATCH v2 1/5] net/ice: support flex Rx descriptor RxDID #22 Junyu Jiang
2020-09-07  9:17   ` [dpdk-dev] [PATCH v2 2/5] net/ice: add flow director enabled switch value Junyu Jiang
2020-09-08  7:52     ` Yang, Qiming
2020-09-07  9:17   ` [dpdk-dev] [PATCH v2 3/5] net/ice: support flow mark in AVX path Junyu Jiang
2020-09-08  7:54     ` Yang, Qiming [this message]
2020-09-07  9:17   ` [dpdk-dev] [PATCH v2 4/5] net/ice: support flow mark in SSE path Junyu Jiang
2020-09-07  9:17   ` [dpdk-dev] [PATCH v2 5/5] net/ice: remove devargs flow-mark-support Junyu Jiang
2020-09-08  7:55     ` Yang, Qiming
2020-09-16  3:09 ` [dpdk-dev] [PATCH v3 0/5] supports RxDID #22 and FDID Junyu Jiang
2020-09-16  3:09   ` [dpdk-dev] [PATCH v3 1/5] net/ice: support flex Rx descriptor RxDID #22 Junyu Jiang
2020-09-16  3:09   ` [dpdk-dev] [PATCH v3 2/5] net/ice: add flow director enabled switch value Junyu Jiang
2020-09-16  3:10   ` [dpdk-dev] [PATCH v3 3/5] net/ice: support flow mark in AVX path Junyu Jiang
2020-09-16  3:10   ` [dpdk-dev] [PATCH v3 4/5] net/ice: support flow mark in SSE path Junyu Jiang
2020-09-16  3:10   ` [dpdk-dev] [PATCH v3 5/5] net/ice: remove devargs flow-mark-support Junyu Jiang
2020-09-16  6:30   ` [dpdk-dev] [PATCH v3 0/5] supports RxDID #22 and FDID Rong, Leyi
2020-09-16  6:42     ` Zhang, Qi Z
