From: Feifei Wang <feifei.wang2@arm.com>
To: Ruifeng Wang, Beilei Xing
Cc: dev@dpdk.org, nd@arm.com, Feifei Wang, Joyce Kong
Date: Fri, 23 Jul 2021 11:10:48 +0800
Message-Id: <20210723031049.2201665-4-feifei.wang2@arm.com>
In-Reply-To: <20210723031049.2201665-1-feifei.wang2@arm.com>
References: <20210723031049.2201665-1-feifei.wang2@arm.com>
Subject: [dpdk-dev] [PATCH v1 3/4] net/i40e: reorder Rx NEON code for better readability

Rearrange the code of the Rx NEON path in logical order to improve
readability and ease maintenance. No performance change is observed
with this patch on Arm platforms.
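
The essence of the reorder, condensed: each step of the per-iteration Rx
sequence is now grouped, with all four descriptor loads (A.1) done first,
highest index first, and the mbuf pointer loads and stores (B.1/B.2)
following as one block. Below is a minimal, self-contained sketch of that
grouped pattern, using plain dummy arrays rather than the driver's real
descriptor ring and mbuf ring types; all names in it are illustrative only,
not the driver's.

#include <arm_neon.h>
#include <stdint.h>

/* Dummy stand-ins: the real code walks volatile descriptor memory
 * and an array of struct rte_mbuf pointers. */
static uint64_t desc_ring[8];
static uint64_t sw_ring[4];
static uint64_t rx_pkts[4];

void grouped_rx_step(void)
{
	uint64x2_t descs[4];

	/* A.1: load desc[3-0] together, highest index first, keeping
	 * the backwards read order of the original code */
	descs[3] = vld1q_u64(&desc_ring[6]);
	descs[2] = vld1q_u64(&desc_ring[4]);
	descs[1] = vld1q_u64(&desc_ring[2]);
	descs[0] = vld1q_u64(&desc_ring[0]);

	/* B.1/B.2: then move all four mbuf pointers in one block,
	 * as two 128-bit load/store pairs */
	uint64x2_t mbp1 = vld1q_u64(&sw_ring[0]);
	uint64x2_t mbp2 = vld1q_u64(&sw_ring[2]);
	vst1q_u64(&rx_pkts[0], mbp1);
	vst1q_u64(&rx_pkts[2], mbp2);

	(void)descs; /* descriptor parsing elided */
}
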
Suggested-by: Joyce Kong
Signed-off-by: Feifei Wang
Reviewed-by: Ruifeng Wang
---
 drivers/net/i40e/i40e_rxtx_vec_neon.c | 99 ++++++++++++---------------
 1 file changed, 44 insertions(+), 55 deletions(-)

diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index fb624a4882..8f3188e910 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -280,24 +280,18 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,

 		int32x4_t len_shl = {0, 0, 0, PKTLEN_SHIFT};

-		/* B.1 load 2 mbuf point */
-		mbp1 = vld1q_u64((uint64_t *)&sw_ring[pos]);
-		/* Read desc statuses backwards to avoid race condition */
-		/* A.1 load desc[3] */
+		/* A.1 load desc[3-0] */
 		descs[3] = vld1q_u64((uint64_t *)(rxdp + 3));
-
-		/* B.2 copy 2 mbuf point into rx_pkts */
-		vst1q_u64((uint64_t *)&rx_pkts[pos], mbp1);
-
-		/* B.1 load 2 mbuf point */
-		mbp2 = vld1q_u64((uint64_t *)&sw_ring[pos + 2]);
-
-		/* A.1 load desc[2-0] */
 		descs[2] = vld1q_u64((uint64_t *)(rxdp + 2));
 		descs[1] = vld1q_u64((uint64_t *)(rxdp + 1));
 		descs[0] = vld1q_u64((uint64_t *)(rxdp));

-		/* B.2 copy 2 mbuf point into rx_pkts */
+		/* B.1 load 4 mbuf point */
+		mbp1 = vld1q_u64((uint64_t *)&sw_ring[pos]);
+		mbp2 = vld1q_u64((uint64_t *)&sw_ring[pos + 2]);
+
+		/* B.2 copy 4 mbuf point into rx_pkts */
+		vst1q_u64((uint64_t *)&rx_pkts[pos], mbp1);
 		vst1q_u64((uint64_t *)&rx_pkts[pos + 2], mbp2);

 		if (split_packet) {
@@ -307,28 +301,9 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
 			rte_mbuf_prefetch_part2(rx_pkts[pos + 3]);
 		}

-		/* pkt 3,4 shift the pktlen field to be 16-bit aligned*/
-		uint32x4_t len3 = vshlq_u32(vreinterpretq_u32_u64(descs[3]),
-					    len_shl);
-		descs[3] = vreinterpretq_u64_u16(vsetq_lane_u16
-				(vgetq_lane_u16(vreinterpretq_u16_u32(len3), 7),
-				 vreinterpretq_u16_u64(descs[3]),
-				 7));
-		uint32x4_t len2 = vshlq_u32(vreinterpretq_u32_u64(descs[2]),
-					    len_shl);
-		descs[2] = vreinterpretq_u64_u16(vsetq_lane_u16
-				(vgetq_lane_u16(vreinterpretq_u16_u32(len2), 7),
-				 vreinterpretq_u16_u64(descs[2]),
-				 7));
-
-		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
-		pkt_mb4 = vqtbl1q_u8(vreinterpretq_u8_u64(descs[3]), shuf_msk);
-		pkt_mb3 = vqtbl1q_u8(vreinterpretq_u8_u64(descs[2]), shuf_msk);
-
 		/* C.1 4=>2 filter staterr info only */
 		sterr_tmp2 = vzipq_u16(vreinterpretq_u16_u64(descs[1]),
 				       vreinterpretq_u16_u64(descs[3]));
-		/* C.1 4=>2 filter staterr info only */
 		sterr_tmp1 = vzipq_u16(vreinterpretq_u16_u64(descs[0]),
 				       vreinterpretq_u16_u64(descs[2]));

@@ -338,13 +313,19 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,

 		desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);

-		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
-		tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb4), crc_adjust);
-		pkt_mb4 = vreinterpretq_u8_u16(tmp);
-		tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb3), crc_adjust);
-		pkt_mb3 = vreinterpretq_u8_u16(tmp);
-
-		/* pkt 1,2 shift the pktlen field to be 16-bit aligned*/
+		/* pkts shift the pktlen field to be 16-bit aligned*/
+		uint32x4_t len3 = vshlq_u32(vreinterpretq_u32_u64(descs[3]),
+					    len_shl);
+		descs[3] = vreinterpretq_u64_u16(vsetq_lane_u16
+				(vgetq_lane_u16(vreinterpretq_u16_u32(len3), 7),
+				 vreinterpretq_u16_u64(descs[3]),
+				 7));
+		uint32x4_t len2 = vshlq_u32(vreinterpretq_u32_u64(descs[2]),
+					    len_shl);
+		descs[2] = vreinterpretq_u64_u16(vsetq_lane_u16
+				(vgetq_lane_u16(vreinterpretq_u16_u32(len2), 7),
+				 vreinterpretq_u16_u64(descs[2]),
+				 7));
 		uint32x4_t len1 = vshlq_u32(vreinterpretq_u32_u64(descs[1]),
 					    len_shl);
 		descs[1] = vreinterpretq_u64_u16(vsetq_lane_u16
@@ -358,22 +339,38 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
 				 vreinterpretq_u16_u64(descs[0]),
 				 7));

-		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
+		/* D.1 pkts convert format from desc to pktmbuf */
+		pkt_mb4 = vqtbl1q_u8(vreinterpretq_u8_u64(descs[3]), shuf_msk);
+		pkt_mb3 = vqtbl1q_u8(vreinterpretq_u8_u64(descs[2]), shuf_msk);
 		pkt_mb2 = vqtbl1q_u8(vreinterpretq_u8_u64(descs[1]), shuf_msk);
 		pkt_mb1 = vqtbl1q_u8(vreinterpretq_u8_u64(descs[0]), shuf_msk);

-		/* D.3 copy final 3,4 data to rx_pkts */
-		vst1q_u8((void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
-			 pkt_mb4);
-		vst1q_u8((void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
-			 pkt_mb3);
-
-		/* D.2 pkt 1,2 set in_port/nb_seg and remove crc */
+		/* D.2 pkts set in_port/nb_seg and remove crc */
+		tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb4), crc_adjust);
+		pkt_mb4 = vreinterpretq_u8_u16(tmp);
+		tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb3), crc_adjust);
+		pkt_mb3 = vreinterpretq_u8_u16(tmp);
 		tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb2), crc_adjust);
 		pkt_mb2 = vreinterpretq_u8_u16(tmp);
 		tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb1), crc_adjust);
 		pkt_mb1 = vreinterpretq_u8_u16(tmp);

+		/* D.3 copy final data to rx_pkts */
+		vst1q_u8((void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			 pkt_mb4);
+		vst1q_u8((void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			 pkt_mb3);
+		vst1q_u8((void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			 pkt_mb2);
+		vst1q_u8((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+			 pkt_mb1);
+
+		desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
+
+		if (likely(pos + RTE_I40E_DESCS_PER_LOOP < nb_pkts)) {
+			rte_prefetch_non_temporal(rxdp + RTE_I40E_DESCS_PER_LOOP);
+		}
+
 		/* C* extract and record EOP bit */
 		if (split_packet) {
 			uint8x16_t eop_shuf_mask = {
@@ -411,14 +408,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
 					 I40E_UINT16_BIT - 1));
 		stat = ~vgetq_lane_u64(vreinterpretq_u64_u16(staterr), 0);

-		rte_prefetch_non_temporal(rxdp + RTE_I40E_DESCS_PER_LOOP);
-
-		/* D.3 copy final 1,2 data to rx_pkts */
-		vst1q_u8((void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
-			 pkt_mb2);
-		vst1q_u8((void *)&rx_pkts[pos]->rx_descriptor_fields1,
-			 pkt_mb1);
-		desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
 		/* C.4 calc avaialbe number of desc */
 		if (unlikely(stat == 0)) {
 			nb_pkts_recd += RTE_I40E_DESCS_PER_LOOP;
-- 
2.25.1
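
A note on the pktlen-shift idiom that the patch moves but leaves
unchanged: len_shl = {0, 0, 0, PKTLEN_SHIFT} shifts only the highest
32-bit word of a descriptor, so the length field becomes 16-bit aligned,
and the vgetq_lane_u16/vsetq_lane_u16 pair splices the aligned length
(u16 lane 7) back into the descriptor. A standalone sketch on a dummy
descriptor value, assuming PKTLEN_SHIFT matches the driver's definition
of 10:

#include <arm_neon.h>
#include <stdint.h>
#include <stdio.h>

#define PKTLEN_SHIFT 10 /* assumed: the i40e driver's value */

int main(void)
{
	/* dummy descriptor: qword1 (lane 1) carries the length field
	 * in its upper bits */
	uint64x2_t desc = vcombine_u64(vdup_n_u64(0),
				       vdup_n_u64(0x0123456789abcdefULL));
	const int32x4_t len_shl = {0, 0, 0, PKTLEN_SHIFT};

	/* shift only the highest 32-bit word left by PKTLEN_SHIFT;
	 * the other three words keep a shift count of 0 */
	uint32x4_t len = vshlq_u32(vreinterpretq_u32_u64(desc), len_shl);

	/* splice the now 16-bit-aligned length (u16 lane 7) back in */
	desc = vreinterpretq_u64_u16(vsetq_lane_u16(
			vgetq_lane_u16(vreinterpretq_u16_u32(len), 7),
			vreinterpretq_u16_u64(desc), 7));

	printf("qword1 after shift: 0x%016llx\n",
	       (unsigned long long)vgetq_lane_u64(desc, 1));
	return 0;
}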