From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Marvin Liu, xiaolong.ye@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org
References: <20200313174230.74661-1-yong.liu@intel.com>
 <20200426021943.43158-1-yong.liu@intel.com>
 <20200426021943.43158-7-yong.liu@intel.com>
Message-ID: <672a584a-46d1-c78b-7b21-9ed7bc060814@redhat.com>
In-Reply-To: <20200426021943.43158-7-yong.liu@intel.com>
Date: Mon, 27 Apr 2020 13:20:53 +0200
Subject: Re: [dpdk-dev] [PATCH v10 6/9] net/virtio: add vectorized packed ring Rx path

On 4/26/20 4:19 AM, Marvin Liu wrote:
> Optimize the packed ring Rx path with SIMD instructions. The approach
> is similar to the one taken in vhost: the path is split into batch and
> single functions, and the batch function is further optimized with
> AVX512 instructions. The desc extra structure is also padded to 16-byte
> alignment, so that four elements can be handled in one batch.
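
(For readers following the series: the four-element batch size falls
straight out of cache-line arithmetic. A minimal sketch, assuming the
usual 64-byte RTE_CACHE_LINE_SIZE and the standard 16-byte packed
descriptor layout -- both asserts simply restate what the patch relies
on, they are not part of it:)

	/* addr(8) + len(4) + id(2) + flags(2) = 16 bytes per descriptor,
	 * so one 64-byte cache line -- and one 512-bit AVX512 register --
	 * holds exactly four descriptors.
	 */
	RTE_BUILD_BUG_ON(sizeof(struct vring_packed_desc) != 16);
	RTE_BUILD_BUG_ON(RTE_CACHE_LINE_SIZE /
			 sizeof(struct vring_packed_desc) != 4);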
>
> Signed-off-by: Marvin Liu
>
> diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
> index c9edb84ee..102b1deab 100644
> --- a/drivers/net/virtio/Makefile
> +++ b/drivers/net/virtio/Makefile
> @@ -36,6 +36,41 @@ else ifneq ($(filter y,$(CONFIG_RTE_ARCH_ARM) $(CONFIG_RTE_ARCH_ARM64)),)
>  SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple_neon.c
>  endif
>
> +ifneq ($(FORCE_DISABLE_AVX512), y)
> +	CC_AVX512_SUPPORT=\
> +	$(shell $(CC) -march=native -dM -E - </dev/null 2>&1 | \
> +	sed '/./{H;$$!d} ; x ; /AVX512F/!d; /AVX512BW/!d; /AVX512VL/!d' | \
> +	grep -q AVX512 && echo 1)
> +endif
> +
> +ifeq ($(CC_AVX512_SUPPORT), 1)
> +CFLAGS += -DCC_AVX512_SUPPORT
> +SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_packed_avx.c
> +
> +ifeq ($(RTE_TOOLCHAIN), gcc)
> +ifeq ($(shell test $(GCC_VERSION) -ge 83 && echo 1), 1)
> +CFLAGS += -DVIRTIO_GCC_UNROLL_PRAGMA
> +endif
> +endif
> +
> +ifeq ($(RTE_TOOLCHAIN), clang)
> +ifeq ($(shell test $(CLANG_MAJOR_VERSION)$(CLANG_MINOR_VERSION) -ge 37 && echo 1), 1)
> +CFLAGS += -DVIRTIO_CLANG_UNROLL_PRAGMA
> +endif
> +endif
> +
> +ifeq ($(RTE_TOOLCHAIN), icc)
> +ifeq ($(shell test $(ICC_MAJOR_VERSION) -ge 16 && echo 1), 1)
> +CFLAGS += -DVIRTIO_ICC_UNROLL_PRAGMA
> +endif
> +endif
> +
> +CFLAGS_virtio_rxtx_packed_avx.o += -mavx512f -mavx512bw -mavx512vl
> +ifeq ($(shell test $(GCC_VERSION) -ge 100 && echo 1), 1)
> +CFLAGS_virtio_rxtx_packed_avx.o += -Wno-zero-length-bounds
> +endif
> +endif
> +
>  ifeq ($(CONFIG_RTE_VIRTIO_USER),y)
>  SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_user/vhost_user.c
>  SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_user/vhost_kernel.c
> diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
> index 15150eea1..8e68c3039 100644
> --- a/drivers/net/virtio/meson.build
> +++ b/drivers/net/virtio/meson.build
> @@ -9,6 +9,20 @@ sources += files('virtio_ethdev.c',
>  deps += ['kvargs', 'bus_pci']
>
>  if arch_subdir == 'x86'
> +	if '-mno-avx512f' not in machine_args
> +		if cc.has_argument('-mavx512f') and cc.has_argument('-mavx512vl') and cc.has_argument('-mavx512bw')
> +			cflags += ['-mavx512f', '-mavx512bw', '-mavx512vl']
> +			cflags += ['-DCC_AVX512_SUPPORT']
> +			if (toolchain == 'gcc' and cc.version().version_compare('>=8.3.0'))
> +				cflags += '-DVIRTIO_GCC_UNROLL_PRAGMA'
> +			elif (toolchain == 'clang' and cc.version().version_compare('>=3.7.0'))
> +				cflags += '-DVIRTIO_CLANG_UNROLL_PRAGMA'
> +			elif (toolchain == 'icc' and cc.version().version_compare('>=16.0.0'))
> +				cflags += '-DVIRTIO_ICC_UNROLL_PRAGMA'
> +			endif
> +			sources += files('virtio_rxtx_packed_avx.c')
> +		endif
> +	endif
>  	sources += files('virtio_rxtx_simple_sse.c')
> elif arch_subdir == 'ppc'
>  	sources += files('virtio_rxtx_simple_altivec.c')
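
(Note that the build checks above only prove the *compiler* can emit
AVX512; the running CPU still needs a runtime probe before the
vectorized path is selected. A minimal sketch of such a guard, using the
existing rte_cpuflags API -- the helper name is hypothetical and the
actual rx_pkt_burst selection is not part of this hunk:)

	#include <rte_cpuflags.h>

	/* Hypothetical helper: true only if the host CPU supports every
	 * AVX512 subset the batch routines rely on.
	 */
	static inline int
	virtio_avx512_usable(void)
	{
		return rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) &&
		       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) &&
		       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512VL);
	}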
> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> index febaf17a8..5c112cac7 100644
> --- a/drivers/net/virtio/virtio_ethdev.h
> +++ b/drivers/net/virtio/virtio_ethdev.h
> @@ -105,6 +105,9 @@ uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
>  uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		uint16_t nb_pkts);
>
> +uint16_t virtio_recv_pkts_packed_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts);
> +
>  int eth_virtio_dev_init(struct rte_eth_dev *eth_dev);
>
>  void virtio_interrupt_handler(void *param);
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index a549991aa..534562cca 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -2030,3 +2030,11 @@ virtio_xmit_pkts_inorder(void *tx_queue,
>
>  	return nb_tx;
>  }
> +
> +__rte_weak uint16_t
> +virtio_recv_pkts_packed_vec(void *rx_queue __rte_unused,
> +			    struct rte_mbuf **rx_pkts __rte_unused,
> +			    uint16_t nb_pkts __rte_unused)
> +{
> +	return 0;
> +}
> diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.c b/drivers/net/virtio/virtio_rxtx_packed_avx.c
> new file mode 100644
> index 000000000..8a7b459eb
> --- /dev/null
> +++ b/drivers/net/virtio/virtio_rxtx_packed_avx.c
> @@ -0,0 +1,374 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2020 Intel Corporation
> + */
> +
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <errno.h>
> +
> +#include <rte_net.h>
> +
> +#include "virtio_logs.h"
> +#include "virtio_ethdev.h"
> +#include "virtio_pci.h"
> +#include "virtqueue.h"
> +
> +#define BYTE_SIZE 8
> +/* flag bits offset in packed ring desc higher 64 bits */
> +#define FLAGS_BITS_OFFSET ((offsetof(struct vring_packed_desc, flags) - \
> +	offsetof(struct vring_packed_desc, len)) * BYTE_SIZE)
> +
> +#define PACKED_FLAGS_MASK ((0ULL | VRING_PACKED_DESC_F_AVAIL_USED) << \
> +	FLAGS_BITS_OFFSET)
> +
> +#define PACKED_BATCH_SIZE (RTE_CACHE_LINE_SIZE / \
> +	sizeof(struct vring_packed_desc))
> +#define PACKED_BATCH_MASK (PACKED_BATCH_SIZE - 1)
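
(A reading aid for the two masks above, assuming the standard
vring_packed_desc layout -- addr at offset 0, len at 8, id at 12, flags
at 14:)

	/* Viewed as two 64-bit lanes per descriptor, the second lane
	 * packs len|id|flags, so
	 *     FLAGS_BITS_OFFSET = (14 - 8) * 8 = 48,
	 * i.e. the flags sit in bits 48..63 of that lane, and
	 * PACKED_FLAGS_MASK keeps only the AVAIL/USED bits there.
	 */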
> +
> +#ifdef VIRTIO_GCC_UNROLL_PRAGMA
> +#define virtio_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll 4") \
> +	for (iter = val; iter < size; iter++)
> +#endif
> +
> +#ifdef VIRTIO_CLANG_UNROLL_PRAGMA
> +#define virtio_for_each_try_unroll(iter, val, size) _Pragma("unroll 4") \
> +	for (iter = val; iter < size; iter++)
> +#endif
> +
> +#ifdef VIRTIO_ICC_UNROLL_PRAGMA
> +#define virtio_for_each_try_unroll(iter, val, size) _Pragma("unroll (4)") \
> +	for (iter = val; iter < size; iter++)
> +#endif
> +
> +#ifndef virtio_for_each_try_unroll
> +#define virtio_for_each_try_unroll(iter, val, num) \
> +	for (iter = val; iter < num; iter++)
> +#endif
> +
> +static inline void
> +virtio_update_batch_stats(struct virtnet_stats *stats,
> +			  uint16_t pkt_len1,
> +			  uint16_t pkt_len2,
> +			  uint16_t pkt_len3,
> +			  uint16_t pkt_len4)
> +{
> +	stats->bytes += pkt_len1;
> +	stats->bytes += pkt_len2;
> +	stats->bytes += pkt_len3;
> +	stats->bytes += pkt_len4;
> +}
> +
> +/* Optionally fill offload information in structure */
> +static inline int
> +virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
> +{
> +	struct rte_net_hdr_lens hdr_lens;
> +	uint32_t hdrlen, ptype;
> +	int l4_supported = 0;
> +
> +	/* nothing to do */
> +	if (hdr->flags == 0)
> +		return 0;
> +
> +	/* GSO not supported in vec path, skip check */
> +	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
> +
> +	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
> +	m->packet_type = ptype;
> +	if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP ||
> +	    (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP ||
> +	    (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_SCTP)
> +		l4_supported = 1;
> +
> +	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
> +		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
> +		if (hdr->csum_start <= hdrlen && l4_supported) {
> +			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
> +		} else {
> +			/* Unknown proto or tunnel, do sw cksum. We can assume
> +			 * the cksum field is in the first segment since the
> +			 * buffers we provided to the host are large enough.
> +			 * In case of SCTP, this will be wrong since it's a CRC
> +			 * but there's nothing we can do.
> +			 */
> +			uint16_t csum = 0, off;
> +
> +			rte_raw_cksum_mbuf(m, hdr->csum_start,
> +				rte_pktmbuf_pkt_len(m) - hdr->csum_start,
> +				&csum);
> +			if (likely(csum != 0xffff))
> +				csum = ~csum;
> +			off = hdr->csum_offset + hdr->csum_start;
> +			if (rte_pktmbuf_data_len(m) >= off + 1)
> +				*rte_pktmbuf_mtod_offset(m, uint16_t *,
> +					off) = csum;
> +		}
> +	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
> +		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
> +	}
> +
> +	return 0;
> +}
> +
> +static inline uint16_t
> +virtqueue_dequeue_batch_packed_vec(struct virtnet_rx *rxvq,
> +				   struct rte_mbuf **rx_pkts)
> +{
> +	struct virtqueue *vq = rxvq->vq;
> +	struct virtio_hw *hw = vq->hw;
> +	uint16_t hdr_size = hw->vtnet_hdr_size;
> +	uint64_t addrs[PACKED_BATCH_SIZE];
> +	uint16_t id = vq->vq_used_cons_idx;
> +	uint8_t desc_stats;
> +	uint16_t i;
> +	void *desc_addr;
> +
> +	if (id & PACKED_BATCH_MASK)
> +		return -1;
> +
> +	if (unlikely((id + PACKED_BATCH_SIZE) > vq->vq_nentries))
> +		return -1;
> +
> +	/* only care about the avail/used bits */
> +	__m512i v_mask = _mm512_maskz_set1_epi64(0xaa, PACKED_FLAGS_MASK);
> +	desc_addr = &vq->vq_packed.ring.desc[id];
> +
> +	__m512i v_desc = _mm512_loadu_si512(desc_addr);
> +	__m512i v_flag = _mm512_and_epi64(v_desc, v_mask);
> +
> +	__m512i v_used_flag = _mm512_setzero_si512();
> +	if (vq->vq_packed.used_wrap_counter)
> +		v_used_flag = _mm512_maskz_set1_epi64(0xaa, PACKED_FLAGS_MASK);
> +
> +	/* Check that all descs are used */
> +	desc_stats = _mm512_cmpneq_epu64_mask(v_flag, v_used_flag);
> +	if (desc_stats)
> +		return -1;
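
(The 0xaa constant above deserves a note. A sketch of the lane layout it
assumes, matching the 512-bit load just before it:)

	/* desc[id..id+3] load as eight 64-bit lanes:
	 *   lane 0: desc0.addr    lane 1: desc0.{len,id,flags}
	 *   lane 2: desc1.addr    lane 3: desc1.{len,id,flags}
	 *   ...
	 * 0xaa = 0b10101010 keeps lanes 1, 3, 5 and 7 -- the halves that
	 * carry the flags -- so a single _mm512_cmpneq_epu64_mask()
	 * checks the AVAIL/USED bits of all four descriptors at once.
	 */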
>
> +	virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> +		rx_pkts[i] = (struct rte_mbuf *)vq->vq_descx[id + i].cookie;
> +		rte_packet_prefetch(rte_pktmbuf_mtod(rx_pkts[i], void *));
> +
> +		addrs[i] = (uint64_t)rx_pkts[i]->rx_descriptor_fields1;
> +	}
> +
> +	/*
> +	 * load len from desc, store into mbuf pkt_len and data_len
> +	 * len limited by 16-bit buf_len, pkt_len[16:31] can be ignored
> +	 */
> +	const __mmask16 mask = 0x6 | 0x6 << 4 | 0x6 << 8 | 0x6 << 12;
> +	__m512i values = _mm512_maskz_shuffle_epi32(mask, v_desc, 0xAA);
> +
> +	/* reduce hdr_len from pkt_len and data_len */
> +	__m512i mbuf_len_offset = _mm512_maskz_set1_epi32(mask,
> +			(uint32_t)-hdr_size);
> +
> +	__m512i v_value = _mm512_add_epi32(values, mbuf_len_offset);
> +
> +	/* assert offset of data_len */
> +	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
> +		offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
> +
> +	__m512i v_index = _mm512_set_epi64(addrs[3] + 8, addrs[3],
> +					   addrs[2] + 8, addrs[2],
> +					   addrs[1] + 8, addrs[1],
> +					   addrs[0] + 8, addrs[0]);
> +	/* batch store into mbufs */
> +	_mm512_i64scatter_epi64(0, v_index, v_value, 1);
> +
> +	if (hw->has_rx_offload) {
> +		virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> +			char *addr = (char *)rx_pkts[i]->buf_addr +
> +				RTE_PKTMBUF_HEADROOM - hdr_size;
> +			virtio_vec_rx_offload(rx_pkts[i],
> +					(struct virtio_net_hdr *)addr);
> +		}
> +	}
> +
> +	virtio_update_batch_stats(&rxvq->stats, rx_pkts[0]->pkt_len,
> +			rx_pkts[1]->pkt_len, rx_pkts[2]->pkt_len,
> +			rx_pkts[3]->pkt_len);
> +
> +	vq->vq_free_cnt += PACKED_BATCH_SIZE;
> +
> +	vq->vq_used_cons_idx += PACKED_BATCH_SIZE;
> +	if (vq->vq_used_cons_idx >= vq->vq_nentries) {
> +		vq->vq_used_cons_idx -= vq->vq_nentries;
> +		vq->vq_packed.used_wrap_counter ^= 1;
> +	}
> +
> +	return 0;
> +}
> +
> +static uint16_t
> +virtqueue_dequeue_single_packed_vec(struct virtnet_rx *rxvq,
> +				    struct rte_mbuf **rx_pkts)
> +{
> +	uint16_t used_idx, id;
> +	uint32_t len;
> +	struct virtqueue *vq = rxvq->vq;
> +	struct virtio_hw *hw = vq->hw;
> +	uint32_t hdr_size = hw->vtnet_hdr_size;
> +	struct virtio_net_hdr *hdr;
> +	struct vring_packed_desc *desc;
> +	struct rte_mbuf *cookie;
> +
> +	desc = vq->vq_packed.ring.desc;
> +	used_idx = vq->vq_used_cons_idx;
> +	if (!desc_is_used(&desc[used_idx], vq))
> +		return -1;
> +
> +	len = desc[used_idx].len;
> +	id = desc[used_idx].id;
> +	cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie;
> +	if (unlikely(cookie == NULL)) {
> +		PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u",
> +				vq->vq_used_cons_idx);
> +		return -1;
> +	}
> +	rte_prefetch0(cookie);
> +	rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
> +
> +	cookie->data_off = RTE_PKTMBUF_HEADROOM;
> +	cookie->ol_flags = 0;
> +	cookie->pkt_len = (uint32_t)(len - hdr_size);
> +	cookie->data_len = (uint32_t)(len - hdr_size);
> +
> +	hdr = (struct virtio_net_hdr *)((char *)cookie->buf_addr +
> +					RTE_PKTMBUF_HEADROOM - hdr_size);
> +	if (hw->has_rx_offload)
> +		virtio_vec_rx_offload(cookie, hdr);
> +
> +	*rx_pkts = cookie;
> +
> +	rxvq->stats.bytes += cookie->pkt_len;
> +
> +	vq->vq_free_cnt++;
> +	vq->vq_used_cons_idx++;
> +	if (vq->vq_used_cons_idx >= vq->vq_nentries) {
> +		vq->vq_used_cons_idx -= vq->vq_nentries;
> +		vq->vq_packed.used_wrap_counter ^= 1;
> +	}
> +
> +	return 0;
> +}
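
(The wrap handling in both dequeue paths follows the usual packed ring
convention; a tiny worked example with a hypothetical ring size:)

	/* vq_nentries = 256, vq_used_cons_idx = 255: consuming one more
	 * descriptor gives 256 >= 256, so the index wraps back to 0 and
	 * used_wrap_counter flips, inverting the AVAIL/USED polarity the
	 * driver expects on its next lap around the ring.
	 */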
>
> +static inline void
> +virtio_recv_refill_packed_vec(struct virtnet_rx *rxvq,
> +			      struct rte_mbuf **cookie,
> +			      uint16_t num)
> +{
> +	struct virtqueue *vq = rxvq->vq;
> +	struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc;
> +	uint16_t flags = vq->vq_packed.cached_flags;
> +	struct virtio_hw *hw = vq->hw;
> +	struct vq_desc_extra *dxp;
> +	uint16_t idx, i;
> +	uint16_t batch_num, total_num = 0;
> +	uint16_t head_idx = vq->vq_avail_idx;
> +	uint16_t head_flag = vq->vq_packed.cached_flags;
> +	uint64_t addr;
> +
> +	do {
> +		idx = vq->vq_avail_idx;
> +
> +		batch_num = PACKED_BATCH_SIZE;
> +		if (unlikely((idx + PACKED_BATCH_SIZE) > vq->vq_nentries))
> +			batch_num = vq->vq_nentries - idx;
> +		if (unlikely((total_num + batch_num) > num))
> +			batch_num = num - total_num;
> +
> +		virtio_for_each_try_unroll(i, 0, batch_num) {
> +			dxp = &vq->vq_descx[idx + i];
> +			dxp->cookie = (void *)cookie[total_num + i];
> +
> +			addr = VIRTIO_MBUF_ADDR(cookie[total_num + i], vq) +
> +				RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> +			start_dp[idx + i].addr = addr;
> +			start_dp[idx + i].len = cookie[total_num + i]->buf_len
> +				- RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> +			if (total_num || i) {
> +				virtqueue_store_flags_packed(&start_dp[idx + i],
> +						flags, hw->weak_barriers);
> +			}
> +		}
> +
> +		vq->vq_avail_idx += batch_num;
> +		if (vq->vq_avail_idx >= vq->vq_nentries) {
> +			vq->vq_avail_idx -= vq->vq_nentries;
> +			vq->vq_packed.cached_flags ^=
> +				VRING_PACKED_DESC_F_AVAIL_USED;
> +			flags = vq->vq_packed.cached_flags;
> +		}
> +		total_num += batch_num;
> +	} while (total_num < num);
> +
> +	virtqueue_store_flags_packed(&start_dp[head_idx], head_flag,
> +				hw->weak_barriers);
> +	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
> +}
> +
> +uint16_t
> +virtio_recv_pkts_packed_vec(void *rx_queue,
> +			    struct rte_mbuf **rx_pkts,
> +			    uint16_t nb_pkts)
> +{
> +	struct virtnet_rx *rxvq = rx_queue;
> +	struct virtqueue *vq = rxvq->vq;
> +	struct virtio_hw *hw = vq->hw;
> +	uint16_t num, nb_rx = 0;
> +	uint32_t nb_enqueued = 0;
> +	uint16_t free_cnt = vq->vq_free_thresh;
> +
> +	if (unlikely(hw->started == 0))
> +		return nb_rx;
> +
> +	num = RTE_MIN(VIRTIO_MBUF_BURST_SZ, nb_pkts);
> +	if (likely(num > PACKED_BATCH_SIZE))
> +		num = num - ((vq->vq_used_cons_idx + num) % PACKED_BATCH_SIZE);
> +
> +	while (num) {
> +		if (!virtqueue_dequeue_batch_packed_vec(rxvq,
> +					&rx_pkts[nb_rx])) {
> +			nb_rx += PACKED_BATCH_SIZE;
> +			num -= PACKED_BATCH_SIZE;
> +			continue;
> +		}
> +		if (!virtqueue_dequeue_single_packed_vec(rxvq,
> +					&rx_pkts[nb_rx])) {
> +			nb_rx++;
> +			num--;
> +			continue;
> +		}
> +		break;
> +	}
> +
> +	PMD_RX_LOG(DEBUG, "dequeue:%d", num);
> +
> +	rxvq->stats.packets += nb_rx;
> +
> +	if (likely(vq->vq_free_cnt >= free_cnt)) {
> +		struct rte_mbuf *new_pkts[free_cnt];
> +		if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts,
> +						free_cnt) == 0)) {
> +			virtio_recv_refill_packed_vec(rxvq, new_pkts,
> +					free_cnt);
> +			nb_enqueued += free_cnt;
> +		} else {
> +			struct rte_eth_dev *dev =
> +				&rte_eth_devices[rxvq->port_id];
> +			dev->data->rx_mbuf_alloc_failed += free_cnt;
> +		}
> +	}
> +
> +	if (likely(nb_enqueued)) {
> +		if (unlikely(virtqueue_kick_prepare_packed(vq))) {
> +			virtqueue_notify(vq);
> +			PMD_RX_LOG(DEBUG, "Notified");
> +		}
> +	}
> +
> +	return nb_rx;
> +}
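
(One more reading aid: the modulo adjustment at the top of
virtio_recv_pkts_packed_vec keeps the consumer index batch-aligned. A
worked example with hypothetical values:)

	/* vq_used_cons_idx = 2, nb_pkts = 32, PACKED_BATCH_SIZE = 4:
	 *   num = 32 - ((2 + 32) % 4) = 32 - 2 = 30,
	 * so the burst ends at index 2 + 30 = 32, a multiple of four,
	 * letting the next call start on a full batch again.
	 */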
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index 40ad786cc..c54698ad1 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -528,6 +528,7 @@ virtio_user_eth_dev_alloc(struct rte_vdev_device *vdev)
>  	hw->use_msix = 1;
>  	hw->modern = 0;
>  	hw->use_vec_rx = 0;
> +	hw->use_vec_tx = 0;
>  	hw->use_inorder_rx = 0;
>  	hw->use_inorder_tx = 0;
>  	hw->virtio_user_dev = dev;
> @@ -739,8 +740,19 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
>  		goto end;
>  	}
>
> -	if (vectorized)
> -		hw->use_vec_rx = 1;
> +	if (vectorized) {
> +		if (packed_vq) {
> +#if defined(CC_AVX512_SUPPORT)
> +			hw->use_vec_rx = 1;
> +			hw->use_vec_tx = 1;
> +#else
> +			PMD_INIT_LOG(INFO,
> +				"build environment does not support packed ring vectorized");
> +#endif
> +		} else {
> +			hw->use_vec_rx = 1;
> +		}
> +	}
>
>  	rte_eth_dev_probing_finish(eth_dev);
>  	ret = 0;
> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> index ca1c10499..ce0340743 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -239,7 +239,8 @@ struct vq_desc_extra {
>  	void *cookie;
>  	uint16_t ndescs;
>  	uint16_t next;
> -};
> +	uint8_t padding[4];
> +} __rte_packed __rte_aligned(16);

Can't this introduce a performance impact for the non-vectorized case?
I'm thinking of worse cache line utilization: for example, with a burst
of 32 descriptors and 32B cache lines, it would take 14 cache lines
before this change and 16 after, so each burst could face 2 extra cache
misses.

If you could run the non-vectorized benchmarks with and without this
patch, I would be grateful.

Reviewed-by: Maxime Coquelin

Thanks,
Maxime