From: "Ananyev, Konstantin"
To: "Chen, Jing D", "dev@dpdk.org"
Date: Tue, 29 Sep 2015 13:14:26 +0000
Subject: Re: [dpdk-dev] [PATCH 06/14] fm10k: add Vector RX function
Message-ID: <2601191342CEEE43887BDE71AB97725836AA15D5@irsmsx105.ger.corp.intel.com>
In-Reply-To: <1443531824-22767-7-git-send-email-jing.d.chen@intel.com>
References: <1443531824-22767-1-git-send-email-jing.d.chen@intel.com>
 <1443531824-22767-7-git-send-email-jing.d.chen@intel.com>

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chen Jing D(Mark)
> Sent: Tuesday, September 29, 2015 2:04 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 06/14] fm10k: add Vector RX function
>
> From: "Chen Jing D(Mark)"
>
> Add function fm10k_recv_raw_pkts_vec to parse raw packets, which may
> include chained packets.
> Add function fm10k_recv_pkts_vec to receive single-mbuf packets.
>
> Signed-off-by: Chen Jing D(Mark)
> ---
>  drivers/net/fm10k/fm10k.h          |    1 +
>  drivers/net/fm10k/fm10k_rxtx_vec.c |  213 ++++++++++++++++++++++++++++++++++
>  2 files changed, 214 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
> index d924cae..285254f 100644
> --- a/drivers/net/fm10k/fm10k.h
> +++ b/drivers/net/fm10k/fm10k.h
> @@ -327,4 +327,5 @@ uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  	uint16_t nb_pkts);
>
>  int fm10k_rxq_vec_setup(struct fm10k_rx_queue *rxq);
> +uint16_t fm10k_recv_pkts_vec(void *, struct rte_mbuf **, uint16_t);
>  #endif
> diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
> index 581a309..63b34b5 100644
> --- a/drivers/net/fm10k/fm10k_rxtx_vec.c
> +++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
> @@ -281,3 +281,216 @@ fm10k_rxq_rearm(struct fm10k_rx_queue *rxq)
>  	/* Update the tail pointer on the NIC */
>  	FM10K_PCI_REG_WRITE(rxq->tail_ptr, rx_id);
>  }
> +
> +/*
> + * vPMD receive routine, now only accept (nb_pkts == RTE_FM10K_MAX_RX_BURST)
> + * in one loop
> + *
> + * Notice:
> + * - nb_pkts < RTE_FM10K_MAX_RX_BURST, just return no packet
> + * - nb_pkts > RTE_FM10K_MAX_RX_BURST, only scan RTE_FM10K_MAX_RX_BURST
> + *   numbers of DD bit
> + * - don't support ol_flags for rss and csum err
> + */
> +static inline uint16_t
> +fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts, uint8_t *split_packet)
> +{
> +	volatile union fm10k_rx_desc *rxdp;
> +	struct rte_mbuf **mbufp;
> +	uint16_t nb_pkts_recd;
> +	int pos;
> +	struct fm10k_rx_queue *rxq = rx_queue;
> +	uint64_t var;
> +	__m128i shuf_msk;
> +	__m128i dd_check, eop_check;
> +	uint16_t next_dd;
> +
> +	next_dd = rxq->next_dd;
> +
> +	if (unlikely(nb_pkts < RTE_FM10K_MAX_RX_BURST))
> +		return 0;
> +
> +	/* Just the act of getting into the function from the application is
> +	 * going to cost about 7 cycles
> +	 */
> +	rxdp = rxq->hw_ring + next_dd;
> +
> +	_mm_prefetch((const void *)rxdp, _MM_HINT_T0);
> +
> +	/* See if we need to rearm the RX queue - gives the prefetch a bit
> +	 * of time to act
> +	 */
> +	if (rxq->rxrearm_nb > RTE_FM10K_RXQ_REARM_THRESH)
> +		fm10k_rxq_rearm(rxq);
> +
> +	/* Before we start moving massive data around, check to see if
> +	 * there is actually a packet available
> +	 */
> +	if (!(rxdp->d.staterr & FM10K_RXD_STATUS_DD))
> +		return 0;
> +
> +	/* 4 packets DD mask */
> +	dd_check = _mm_set_epi64x(0x0000000100000001LL, 0x0000000100000001LL);
> +
> +	/* 4 packets EOP mask */
> +	eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
> +
> +	/* mask to shuffle from desc. to mbuf */
> +	shuf_msk = _mm_set_epi8(
> +		7, 6, 5, 4,	/* octet 4~7, 32bits rss */
> +		15, 14,		/* octet 14~15, low 16 bits vlan_macip */
> +		13, 12,		/* octet 12~13, 16 bits data_len */
> +		0xFF, 0xFF,	/* skip high 16 bits pkt_len, zero out */
> +		13, 12,		/* octet 12~13, low 16 bits pkt_len */
> +		0xFF, 0xFF,	/* skip high 16 bits pkt_type */
> +		0xFF, 0xFF	/* Skip pkt_type field in shuffle operation */
> +		);
> +
> +	/* Cache is empty -> need to scan the buffer rings, but first move
> +	 * the next 'n' mbufs into the cache
> +	 */
> +	mbufp = &rxq->sw_ring[next_dd];
> +
> +	/* A. load 4 packet in one loop
> +	 * [A*. mask out 4 unused dirty field in desc]
> +	 * B. copy 4 mbuf point from swring to rx_pkts
> +	 * C. calc the number of DD bits among the 4 packets
> +	 * [C*. extract the end-of-packet bit, if requested]
> +	 * D. fill info. from desc to mbuf
> +	 */
> +	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
> +			pos += RTE_FM10K_DESCS_PER_LOOP,
> +			rxdp += RTE_FM10K_DESCS_PER_LOOP) {
> +		__m128i descs0[RTE_FM10K_DESCS_PER_LOOP];
> +		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
> +		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
> +		__m128i mbp1, mbp2; /* two mbuf pointer in one XMM reg. */
> +
> +		if (split_packet) {
> +			rte_prefetch0(&rx_pkts[pos]->cacheline1);
> +			rte_prefetch0(&rx_pkts[pos + 1]->cacheline1);
> +			rte_prefetch0(&rx_pkts[pos + 2]->cacheline1);
> +			rte_prefetch0(&rx_pkts[pos + 3]->cacheline1);
> +		}

Same thing as with i40e vPMD:
You are prefetching junk addresses here.
Check out Zoltan's patch:
http://dpdk.org/dev/patchwork/patch/7190/
and related conversation:
http://dpdk.org/ml/archives/dev/2015-September/023715.html
I think there is the same issue here.
Konstantin

> +
> +		/* B.1 load 1 mbuf point */
> +		mbp1 = _mm_loadu_si128((__m128i *)&mbufp[pos]);
> +
> +		/* Read desc statuses backwards to avoid race condition */
> +		/* A.1 load 4 pkts desc */
> +		descs0[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
> +
> +		/* B.2 copy 2 mbuf point into rx_pkts */
> +		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
> +
> +		/* B.1 load 1 mbuf point */
> +		mbp2 = _mm_loadu_si128((__m128i *)&mbufp[pos+2]);
> +
> +		descs0[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
> +		/* B.1 load 2 mbuf point */
> +		descs0[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
> +		descs0[0] = _mm_loadu_si128((__m128i *)(rxdp));
> +
> +		/* B.2 copy 2 mbuf point into rx_pkts */
> +		_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
> +
> +		/* avoid compiler reorder optimization */
> +		rte_compiler_barrier();
> +
> +		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
> +		pkt_mb4 = _mm_shuffle_epi8(descs0[3], shuf_msk);
> +		pkt_mb3 = _mm_shuffle_epi8(descs0[2], shuf_msk);
> +
> +		/* C.1 4=>2 filter staterr info only */
> +		sterr_tmp2 = _mm_unpackhi_epi32(descs0[3], descs0[2]);
> +		/* C.1 4=>2 filter staterr info only */
> +		sterr_tmp1 = _mm_unpackhi_epi32(descs0[1], descs0[0]);
> +
> +		/* set ol_flags with vlan packet type */
> +		fm10k_desc_to_olflags_v(descs0, &rx_pkts[pos]);
> +
> +		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
> +		pkt_mb2 = _mm_shuffle_epi8(descs0[1], shuf_msk);
> +		pkt_mb1 = _mm_shuffle_epi8(descs0[0], shuf_msk);
> +
> +		/* C.2 get 4 pkts staterr value */
> +		zero = _mm_xor_si128(dd_check, dd_check);
> +		staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
> +
> +		/* D.3 copy final 3,4 data to rx_pkts */
> +		_mm_storeu_si128((void *)&rx_pkts[pos+3]->rx_descriptor_fields1,
> +				pkt_mb4);
> +		_mm_storeu_si128((void *)&rx_pkts[pos+2]->rx_descriptor_fields1,
> +				pkt_mb3);
> +
> +		/* C* extract and record EOP bit */
> +		if (split_packet) {
> +			__m128i eop_shuf_mask = _mm_set_epi8(
> +					0xFF, 0xFF, 0xFF, 0xFF,
> +					0xFF, 0xFF, 0xFF, 0xFF,
> +					0xFF, 0xFF, 0xFF, 0xFF,
> +					0x04, 0x0C, 0x00, 0x08
> +					);
> +
> +			/* and with mask to extract bits, flipping 1-0 */
> +			__m128i eop_bits = _mm_andnot_si128(staterr, eop_check);
> +			/* the staterr values are not in order, as the count
> +			 * of dd bits doesn't care. However, for end of
> +			 * packet tracking, we do care, so shuffle.
> +			 * This also compresses the 32-bit values to 8-bit
> +			 */
> +			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
> +			/* store the resulting 32-bit value */
> +			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
> +			split_packet += RTE_FM10K_DESCS_PER_LOOP;
> +
> +			/* zero-out next pointers */
> +			rx_pkts[pos]->next = NULL;
> +			rx_pkts[pos + 1]->next = NULL;
> +			rx_pkts[pos + 2]->next = NULL;
> +			rx_pkts[pos + 3]->next = NULL;
> +		}
> +
> +		/* C.3 calc available number of desc */
> +		staterr = _mm_and_si128(staterr, dd_check);
> +		staterr = _mm_packs_epi32(staterr, zero);
> +
> +		/* D.3 copy final 1,2 data to rx_pkts */
> +		_mm_storeu_si128((void *)&rx_pkts[pos+1]->rx_descriptor_fields1,
> +				pkt_mb2);
> +		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
> +				pkt_mb1);
> +
> +		fm10k_desc_to_pktype_v(descs0, &rx_pkts[pos]);
> +
> +		/* C.4 calc available number of desc */
> +		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
> +		nb_pkts_recd += var;
> +		if (likely(var != RTE_FM10K_DESCS_PER_LOOP))
> +			break;
> +	}
> +
> +	/* Update our internal tail pointer */
> +	rxq->next_dd = (uint16_t)(rxq->next_dd + nb_pkts_recd);
> +	rxq->next_dd = (uint16_t)(rxq->next_dd & (rxq->nb_desc - 1));
> +	rxq->rxrearm_nb = (uint16_t)(rxq->rxrearm_nb + nb_pkts_recd);
> +
> +	return nb_pkts_recd;
> +}
> +
> +/* vPMD receive routine, only accept (nb_pkts >= RTE_FM10K_DESCS_PER_LOOP)
> + *
> + * Notice:
> + * - nb_pkts < RTE_FM10K_DESCS_PER_LOOP, just return no packet
> + * - nb_pkts > RTE_FM10K_MAX_RX_BURST, only scan RTE_FM10K_MAX_RX_BURST
> + *   numbers of DD bit
> + * - floor align nb_pkts to a RTE_FM10K_DESCS_PER_LOOP power-of-two
> + * - don't support ol_flags for rss and csum err
> + */
> +uint16_t
> +fm10k_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts)
> +{
> +	return fm10k_recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
> +}
> --
> 1.7.7.6
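
For reference, a minimal sketch of the reordering the inline comment above is asking
for, along the lines of the i40e fix linked there: copy the mbuf pointers from the
software ring into rx_pkts first, and only then prefetch their second cache line, so
the prefetch targets the mbufs that will actually be returned rather than whatever
stale pointers the caller left in rx_pkts. The helper name and its standalone form
are illustrative only, not code from this patch or from Zoltan's; in the real loop
the pointer loads and stores stay interleaved with the descriptor reads, the point
is only that the prefetches move after the stores into rx_pkts.

#include <emmintrin.h>
#include <rte_mbuf.h>
#include <rte_prefetch.h>

/* Illustrative helper (hypothetical name): move four mbuf pointers from the
 * RX software ring into the caller's rx_pkts array, then prefetch the second
 * cache line of those mbufs for the split-packet path. Assumes 64-bit
 * pointers, i.e. two mbuf pointers per XMM register, as in the vPMD code
 * above.
 */
static inline void
sw_ring_copy_then_prefetch(struct rte_mbuf **sw_ring, struct rte_mbuf **rx_pkts,
		int pos, int split_packet)
{
	/* B.1/B.2: load two pairs of mbuf pointers from the software ring and
	 * store them into rx_pkts.
	 */
	__m128i mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
	__m128i mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);

	_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
	_mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);

	/* Only now do rx_pkts[pos..pos+3] hold valid mbufs, so prefetching
	 * their second cache line (written later through ->next in the
	 * split-packet path) is useful instead of hitting a junk address.
	 */
	if (split_packet) {
		rte_prefetch0(&rx_pkts[pos]->cacheline1);
		rte_prefetch0(&rx_pkts[pos + 1]->cacheline1);
		rte_prefetch0(&rx_pkts[pos + 2]->cacheline1);
		rte_prefetch0(&rx_pkts[pos + 3]->cacheline1);
	}
}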