To: Joyce Kong <joyce.kong@arm.com>, chenbo.xia@intel.com, jerinj@marvell.com,
 ruifeng.wang@arm.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, nd@arm.com
References: <20200911120906.45995-1-joyce.kong@arm.com>
 <20201117100635.27690-1-joyce.kong@arm.com>
 <20201117100635.27690-4-joyce.kong@arm.com>
From: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Tue, 5 Jan 2021 15:33:59 +0100
In-Reply-To: <20201117100635.27690-4-joyce.kong@arm.com>
Subject: Re: [dpdk-dev] [PATCH v1 3/4] net/virtio: add vectorized packed ring
 Tx NEON path

On 11/17/20 11:06 AM, Joyce Kong wrote:
> Optimize packed ring Tx batch path with NEON instructions.
>
> Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  drivers/net/virtio/virtio_rxtx_packed.h      |   6 +-
>  drivers/net/virtio/virtio_rxtx_packed_neon.h | 143 +++++++++++++++++++
>  2 files changed, 148 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx_packed.h b/drivers/net/virtio/virtio_rxtx_packed.h
> index 8f5198ad7..016b6fb24 100644
> --- a/drivers/net/virtio/virtio_rxtx_packed.h
> +++ b/drivers/net/virtio/virtio_rxtx_packed.h
> @@ -28,6 +28,8 @@
>  /* flag bits offset in packed ring desc from ID */
>  #define FLAGS_BITS_OFFSET ((offsetof(struct vring_packed_desc, flags) - \
>  		offsetof(struct vring_packed_desc, id)) * BYTE_SIZE)
> +#define FLAGS_LEN_BITS_OFFSET ((offsetof(struct vring_packed_desc, flags) - \
> +		offsetof(struct vring_packed_desc, len)) * BYTE_SIZE)
>  #endif
>
>  #define PACKED_FLAGS_MASK ((0ULL | VRING_PACKED_DESC_F_AVAIL_USED) << \
> @@ -36,13 +38,15 @@
>  /* reference count offset in mbuf rearm data */
>  #define REFCNT_BITS_OFFSET ((offsetof(struct rte_mbuf, refcnt) - \
>  		offsetof(struct rte_mbuf, rearm_data)) * BYTE_SIZE)
> +
> +#ifdef CC_AVX512_SUPPORT
>  /* segment number offset in mbuf rearm data */
>  #define SEG_NUM_BITS_OFFSET ((offsetof(struct rte_mbuf, nb_segs) - \
>  		offsetof(struct rte_mbuf, rearm_data)) * BYTE_SIZE)
> -
>  /* default rearm data */
>  #define DEFAULT_REARM_DATA (1ULL << SEG_NUM_BITS_OFFSET | \
>  		1ULL << REFCNT_BITS_OFFSET)
> +#endif
>
>  /* id bits offset in packed ring desc higher 64bits */
>  #define ID_BITS_OFFSET ((offsetof(struct vring_packed_desc, id) - \
> diff --git a/drivers/net/virtio/virtio_rxtx_packed_neon.h b/drivers/net/virtio/virtio_rxtx_packed_neon.h
> index fb1e49909..041f771ea 100644
> --- a/drivers/net/virtio/virtio_rxtx_packed_neon.h
> +++ b/drivers/net/virtio/virtio_rxtx_packed_neon.h
> @@ -16,6 +16,149 @@
>  #include "virtio_rxtx_packed.h"
>  #include "virtqueue.h"
>
> +static inline int
> +virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq,
> +				   struct rte_mbuf **tx_pkts)
> +{
> +	struct virtqueue *vq = txvq->vq;
> +	uint16_t head_size = vq->hw->vtnet_hdr_size;
> +	uint16_t idx = vq->vq_avail_idx;
> +	struct virtio_net_hdr *hdr;
> +	struct vq_desc_extra *dxp;
> +	struct vring_packed_desc *p_desc;
> +	uint16_t i;
> +
> +	if (idx & PACKED_BATCH_MASK)
> +		return -1;
> +
> +	if (unlikely((idx + PACKED_BATCH_SIZE) > vq->vq_nentries))
> +		return -1;
> +
> +	/* Map four refcnt and nb_segs from mbufs to one NEON register. */
> +	uint8x16_t ref_seg_msk = {
> +		2, 3, 4, 5,
> +		10, 11, 12, 13,
> +		18, 19, 20, 21,
> +		26, 27, 28, 29
> +	};
> +
> +	/* Map four data_off from mbufs to one NEON register. */
> +	uint8x8_t data_msk = {
> +		0, 1,
> +		8, 9,
> +		16, 17,
> +		24, 25
> +	};
> +
> +	uint16x8_t net_hdr_msk = {
> +		0xFFFF, 0xFFFF,
> +		0, 0, 0, 0
> +	};
> +
> +	uint16x4_t pkts[PACKED_BATCH_SIZE];
> +	uint8x16x2_t mbuf;
> +	/* Load four mbufs rearm data. */
> +	RTE_BUILD_BUG_ON(REFCNT_BITS_OFFSET >= 64);
> +	pkts[0] = vld1_u16((uint16_t *)&tx_pkts[0]->rearm_data);
> +	pkts[1] = vld1_u16((uint16_t *)&tx_pkts[1]->rearm_data);
> +	pkts[2] = vld1_u16((uint16_t *)&tx_pkts[2]->rearm_data);
> +	pkts[3] = vld1_u16((uint16_t *)&tx_pkts[3]->rearm_data);
> +
> +	mbuf.val[0] = vreinterpretq_u8_u16(vcombine_u16(pkts[0], pkts[1]));
> +	mbuf.val[1] = vreinterpretq_u8_u16(vcombine_u16(pkts[2], pkts[3]));
> +
> +	/* refcnt = 1 and nb_segs = 1 */
> +	uint32x4_t def_ref_seg = vdupq_n_u32(0x10001);
> +	/* Check refcnt and nb_segs. */
> +	uint32x4_t ref_seg = vreinterpretq_u32_u8(vqtbl2q_u8(mbuf, ref_seg_msk));
> +	poly128_t cmp1 = vreinterpretq_p128_u32(~vceqq_u32(ref_seg, def_ref_seg));
> +	if (unlikely(cmp1))
> +		return -1;
> +
> +	/* Check headroom is enough. */
> +	uint16x4_t head_rooms = vdup_n_u16(head_size);
> +	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
> +		offsetof(struct rte_mbuf, rearm_data));
> +	uint16x4_t data_offset = vreinterpret_u16_u8(vqtbl2_u8(mbuf, data_msk));
> +	uint64x1_t cmp2 = vreinterpret_u64_u16(vclt_u16(data_offset, head_rooms));
> +	if (unlikely(vget_lane_u64(cmp2, 0)))
> +		return -1;
> +
> +	virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> +		dxp = &vq->vq_descx[idx + i];
> +		dxp->ndescs = 1;
> +		dxp->cookie = tx_pkts[i];
> +	}
> +
> +	virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> +		tx_pkts[i]->data_off -= head_size;
> +		tx_pkts[i]->data_len += head_size;
> +	}
> +
> +	uint64x2x2_t desc[PACKED_BATCH_SIZE / 2];
> +	uint64x2_t base_addr0 = {
> +		VIRTIO_MBUF_ADDR(tx_pkts[0], vq) + tx_pkts[0]->data_off,
> +		VIRTIO_MBUF_ADDR(tx_pkts[1], vq) + tx_pkts[1]->data_off
> +	};
> +	uint64x2_t base_addr1 = {
> +		VIRTIO_MBUF_ADDR(tx_pkts[2], vq) + tx_pkts[2]->data_off,
> +		VIRTIO_MBUF_ADDR(tx_pkts[3], vq) + tx_pkts[3]->data_off
> +	};
> +
> +	desc[0].val[0] = base_addr0;
> +	desc[1].val[0] = base_addr1;
> +
> +	uint64_t flags = (uint64_t)vq->vq_packed.cached_flags << FLAGS_LEN_BITS_OFFSET;
> +	uint64x2_t tx_desc0 = {
> +		flags | (uint64_t)idx << ID_BITS_OFFSET | tx_pkts[0]->data_len,
> +		flags | (uint64_t)(idx + 1) << ID_BITS_OFFSET | tx_pkts[1]->data_len
> +	};
> +
> +	uint64x2_t tx_desc1 = {
> +		flags | (uint64_t)(idx + 2) << ID_BITS_OFFSET | tx_pkts[2]->data_len,
> +		flags | (uint64_t)(idx + 3) << ID_BITS_OFFSET | tx_pkts[3]->data_len
> +	};
> +
> +	desc[0].val[1] = tx_desc0;
> +	desc[1].val[1] = tx_desc1;
> +
> +	if (!vq->hw->has_tx_offload) {
> +		virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> +			hdr = rte_pktmbuf_mtod_offset(tx_pkts[i],
> +					struct virtio_net_hdr *, -head_size);
> +			/* Clear net hdr. */
> +			uint16x8_t v_hdr = vld1q_u16((void *)hdr);
> +			vst1q_u16((void *)hdr, vandq_u16(v_hdr, net_hdr_msk));
> +		}
> +	} else {
> +		virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> +			hdr = rte_pktmbuf_mtod_offset(tx_pkts[i],
> +					struct virtio_net_hdr *, -head_size);
> +			virtqueue_xmit_offload(hdr, tx_pkts[i], true);
> +		}
> +	}
> +
> +	/* Enqueue packet buffers. */
> +	p_desc = &vq->vq_packed.ring.desc[idx];
> +	vst2q_u64((uint64_t *)p_desc, desc[0]);
> +	vst2q_u64((uint64_t *)(p_desc + 2), desc[1]);
> +
> +	virtio_update_batch_stats(&txvq->stats, tx_pkts[0]->pkt_len,
> +			tx_pkts[1]->pkt_len, tx_pkts[2]->pkt_len,
> +			tx_pkts[3]->pkt_len);
> +
> +	vq->vq_avail_idx += PACKED_BATCH_SIZE;
> +	vq->vq_free_cnt -= PACKED_BATCH_SIZE;
> +
> +	if (vq->vq_avail_idx >= vq->vq_nentries) {
> +		vq->vq_avail_idx -= vq->vq_nentries;
> +		vq->vq_packed.cached_flags ^=
> +			VRING_PACKED_DESC_F_AVAIL_USED;
> +	}
> +
> +	return 0;
> +}
> +
>  static inline uint16_t
>  virtqueue_dequeue_batch_packed_vec(struct virtnet_rx *rxvq,
>  				   struct rte_mbuf **rx_pkts)
>

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime