From: Leyi Rong <leyi.rong@intel.com>
To: jingjing.wu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com, xiaolong.ye@intel.com
Cc: dev@dpdk.org, Leyi Rong <leyi.rong@intel.com>
Date: Mon, 20 Apr 2020 14:16:22 +0800
Message-Id: <20200420061625.21424-9-leyi.rong@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200420061625.21424-1-leyi.rong@intel.com>
References: <20200316074603.10998-1-leyi.rong@intel.com> <20200420061625.21424-1-leyi.rong@intel.com>
Subject: [dpdk-dev] [PATCH v6 08/11] net/iavf: support flow mark in AVX path

Support Flow Director mark ID parsing from the Flex Rx descriptor in the
AVX path.
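For reference, the per-packet decision that the new flex_rxd_to_fdir_flags_vec_avx2()
helper vectorizes is equivalent to the scalar sketch below. This is an illustration
only, not part of the patch: the helper name flex_rxd_to_fdir_flags_scalar() is made
up here, and it assumes the PKT_RX_FDIR / PKT_RX_FDIR_ID ol_flags definitions from
rte_mbuf_core.h.

    /* Illustration only: a hypothetical scalar equivalent of
     * flex_rxd_to_fdir_flags_vec_avx2(), handling one descriptor at a time.
     */
    #include <stdint.h>
    #include <rte_mbuf_core.h>	/* PKT_RX_FDIR, PKT_RX_FDIR_ID */

    static inline uint64_t
    flex_rxd_to_fdir_flags_scalar(uint32_t flow_id)
    {
    	/* flow_id == 0xFFFFFFFF in the Flex Rx descriptor means no FDIR
    	 * rule matched this packet, so no flag is set.
    	 */
    	if (flow_id == 0xFFFFFFFF)
    		return 0;
    	/* matched: the caller also stores flow_id in mbuf->hash.fdir.hi */
    	return PKT_RX_FDIR | PKT_RX_FDIR_ID;
    }

The AVX2 helper makes the same decision for eight descriptors at once:
_mm256_cmpeq_epi32() produces all-ones lanes for mismatches, the XOR against
0xFFFFFFFF inverts that mask, and the AND with the flag constants leaves
PKT_RX_FDIR | PKT_RX_FDIR_ID only in the matching lanes. The mark IDs themselves
are still written back with scalar stores, since hash.fdir.hi is not contiguous
across mbufs.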
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/iavf/iavf_rxtx_vec_avx2.c | 72 +++++++++++++++++++++++++--
 1 file changed, 67 insertions(+), 5 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 04603e7d4..9f467d4fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -614,6 +614,25 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	return received;
 }
 
+static inline __m256i
+flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
+{
+#define FDID_MIS_MAGIC 0xFFFFFFFF
+	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
+			PKT_RX_FDIR_ID);
+	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
+	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
+	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
+			fdir_mis_mask);
+	/* this XOR op results to bit-reverse the fdir_mask */
+	fdir_mask = _mm256_xor_si256(fdir_mask, fdir_mis_mask);
+	const __m256i fdir_flags = _mm256_and_si256(fdir_mask, pkt_fdir_bit);
+
+	return fdir_flags;
+}
+
 static inline uint16_t
 _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 				      struct rte_mbuf **rx_pkts,
@@ -675,8 +694,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	const __m256i shuf_msk =
 		_mm256_set_epi8
 			(/* first descriptor */
-			 15, 14,
-			 13, 12,	/* octet 12~15, 32 bits rss */
+			 0xFF, 0xFF,
+			 0xFF, 0xFF,	/* rss not supported */
 			 11, 10,	/* octet 10~11, 16 bits vlan_macip */
 			 5, 4,		/* octet 4~5, 16 bits data_len */
 			 0xFF, 0xFF,	/* skip hi 16 bits pkt_len, zero out */
@@ -684,8 +703,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 			 0xFF, 0xFF,	/* pkt_type set as unknown */
 			 0xFF, 0xFF,	/*pkt_type set as unknown */
 			 /* second descriptor */
-			 15, 14,
-			 13, 12,	/* octet 12~15, 32 bits rss */
+			 0xFF, 0xFF,
+			 0xFF, 0xFF,	/* rss not supported */
 			 11, 10,	/* octet 10~11, 16 bits vlan_macip */
 			 5, 4,		/* octet 4~5, 16 bits data_len */
 			 0xFF, 0xFF,	/* skip hi 16 bits pkt_len, zero out */
@@ -927,8 +946,51 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					    rss_vlan_flag_bits);
 
 		/* merge flags */
-		const __m256i mbuf_flags = _mm256_or_si256(l3_l4_flags,
+		__m256i mbuf_flags = _mm256_or_si256(l3_l4_flags,
 				rss_vlan_flags);
+
+		if (rxq->fdir_enabled) {
+			const __m256i fdir_id4_7 =
+				_mm256_unpackhi_epi32(raw_desc6_7, raw_desc4_5);
+
+			const __m256i fdir_id0_3 =
+				_mm256_unpackhi_epi32(raw_desc2_3, raw_desc0_1);
+
+			const __m256i fdir_id0_7 =
+				_mm256_unpackhi_epi64(fdir_id4_7, fdir_id0_3);
+
+			const __m256i fdir_flags =
+				flex_rxd_to_fdir_flags_vec_avx2(fdir_id0_7);
+
+			/* merge with fdir_flags */
+			mbuf_flags = _mm256_or_si256(mbuf_flags, fdir_flags);
+
+			/* write to mbuf: have to use scalar store here */
+			rx_pkts[i + 0]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 3);
+
+			rx_pkts[i + 1]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 7);
+
+			rx_pkts[i + 2]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 2);
+
+			rx_pkts[i + 3]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 6);
+
+			rx_pkts[i + 4]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 1);
+
+			rx_pkts[i + 5]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 5);
+
+			rx_pkts[i + 6]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 0);
+
+			rx_pkts[i + 7]->hash.fdir.hi =
+				_mm256_extract_epi32(fdir_id0_7, 4);
+		} /* if() on fdir_enabled */
+
 		/**
 		 * At this point, we have the 8 sets of flags in the low 16-bits
 		 * of each 32-bit value in vlan0.
-- 
2.17.1