From: Jeff Guo
To: ktraynor@redhat.com
Cc: stable@dpdk.org, jia.guo@intel.com, Morten Brørup, Wei Ling, Qi Zhang
Date: Thu, 3 Dec 2020 17:47:02 +0800
Message-Id: <20201203094702.25734-3-jia.guo@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203094702.25734-1-jia.guo@intel.com>
References: <20201203094702.25734-1-jia.guo@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] [PATCH 18.11 3/3] net/avf: fix vector Rx

[ upstream commit 851b22ff688e759a961ed969ea620372b20581d9 ]

The limitation on burst size in vector Rx has been removed, since the
routine should retrieve as many received packets as possible. The
scattered receive path also uses a wrapper function to maximize the
burst size.
Bugzilla ID: 516
Fixes: 319c421f3890 ("net/avf: enable SSE Rx Tx")

Signed-off-by: Jeff Guo
Acked-by: Morten Brørup
Tested-by: Wei Ling
Acked-by: Qi Zhang
---
 drivers/net/avf/avf_rxtx_vec_sse.c | 49 ++++++++++++++++++++++--------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/drivers/net/avf/avf_rxtx_vec_sse.c b/drivers/net/avf/avf_rxtx_vec_sse.c
index 13e94cebc0..4aa209f702 100644
--- a/drivers/net/avf/avf_rxtx_vec_sse.c
+++ b/drivers/net/avf/avf_rxtx_vec_sse.c
@@ -227,10 +227,12 @@ desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	rx_pkts[3]->packet_type = type_table[_mm_extract_epi8(ptype1, 8)];
 }
 
-/* Notice:
+/**
+ * vPMD raw receive routine, only accept(nb_pkts >= AVF_VPMD_DESCS_PER_LOOP)
+ *
+ * Notice:
  * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
+ * - floor align nb_pkts to a AVF_VPMD_DESCS_PER_LOOP power-of-two
  */
 static inline uint16_t
 _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -260,9 +262,6 @@ _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
 	__m128i dd_check, eop_check;
 
-	/* nb_pkts shall be less equal than AVF_VPMD_RX_MAX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST);
-
 	/* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP);
 
@@ -486,15 +485,15 @@ avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
-/* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
+ *
  * Notice:
  * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
  */
-uint16_t
-avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			    uint16_t nb_pkts)
+static uint16_t
+avf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
 {
 	struct avf_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
@@ -527,6 +526,32 @@ avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > AVF_VPMD_RX_MAX_BURST) {
+		uint16_t burst;
+
+		burst = avf_recv_scattered_burst_vec(rx_queue,
+						     rx_pkts + retval,
+						     AVF_VPMD_RX_MAX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < AVF_VPMD_RX_MAX_BURST)
+			return retval;
+	}
+
+	return retval + avf_recv_scattered_burst_vec(rx_queue,
+						     rx_pkts + retval,
+						     nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
 {
-- 
2.20.1
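
[Editor's note] For readers who want to see the control flow outside the driver, the change boils down to two ideas: the raw vector routine no longer clamps nb_pkts to AVF_VPMD_RX_MAX_BURST (it only floor-aligns it to AVF_VPMD_DESCS_PER_LOOP), and the scattered path gains a wrapper that keeps calling a fixed-size burst routine until the request is satisfied or the ring runs dry. Below is a minimal, standalone C sketch of that wrapper pattern. All names here (demo_pkt, demo_recv_burst, demo_recv_pkts, MAX_BURST, DESCS_PER_LOOP) are hypothetical stand-ins, not the driver's API, and the hardware ring is simulated with a simple counter.

/*
 * Standalone sketch of the burst-splitting wrapper pattern used by the
 * patch. All identifiers are hypothetical; the real driver works on
 * descriptor rings and rte_mbuf pointers.
 */
#include <stdint.h>
#include <stdio.h>

#define DESCS_PER_LOOP 4   /* vector loop handles 4 descriptors at a time */
#define MAX_BURST      32  /* cap of one raw burst, like AVF_VPMD_RX_MAX_BURST */

/* Stand-in for one received packet. */
struct demo_pkt {
	uint16_t id;
};

/*
 * Stand-in for the raw burst routine: it serves at most MAX_BURST packets
 * per call and floor-aligns the request to DESCS_PER_LOOP, loosely
 * mirroring the shape of _recv_raw_pkts_vec().
 */
static uint16_t
demo_recv_burst(struct demo_pkt **rx_pkts, uint16_t nb_pkts,
		uint16_t *available)
{
	uint16_t i, n;

	/* floor-align so only whole vector iterations are executed */
	nb_pkts = (uint16_t)(nb_pkts - (nb_pkts % DESCS_PER_LOOP));

	n = nb_pkts < *available ? nb_pkts : *available;
	for (i = 0; i < n; i++)
		rx_pkts[i] = NULL; /* a real driver would fill mbufs here */

	*available -= n;
	return n;
}

/*
 * Wrapper: a caller asking for more than MAX_BURST packets is no longer
 * silently truncated; the loop stops early when a short burst signals
 * that the ring ran dry.
 */
static uint16_t
demo_recv_pkts(struct demo_pkt **rx_pkts, uint16_t nb_pkts,
	       uint16_t *available)
{
	uint16_t retval = 0;

	while (nb_pkts > MAX_BURST) {
		uint16_t burst;

		burst = demo_recv_burst(rx_pkts + retval, MAX_BURST, available);
		retval += burst;
		nb_pkts -= burst;
		if (burst < MAX_BURST)
			return retval; /* ring empty, stop early */
	}

	return retval + demo_recv_burst(rx_pkts + retval, nb_pkts, available);
}

int
main(void)
{
	struct demo_pkt *pkts[128];
	uint16_t available = 100; /* pretend 100 packets are waiting */
	uint16_t got = demo_recv_pkts(pkts, 128, &available);

	printf("received %u packets\n", (unsigned int)got);
	return 0;
}

Compiling and running this prints "received 100 packets": three full bursts of 32 plus a final short burst of 4, so the whole backlog is drained in one call. If the backlog ran out in the middle of the loop instead, the burst < MAX_BURST check would stop further calls, which is the same early-exit the new avf_recv_scattered_pkts_vec() relies on.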