From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Liu, Yong"
To: "Gavin Hu (Arm Technology China)",
	"maxime.coquelin@redhat.com",
	"Bie, Tiwei", "Wang, Zhihong",
	"stephen@networkplumber.org"
CC: "dev@dpdk.org", nd
Thread-Topic: [PATCH v6 04/13] vhost: add packed ring batch enqueue
Date: Tue, 15 Oct 2019 12:45:54 +0000
Message-ID: <86228AFD5BCD8E4EBFD2B90117B5E81E633D0B68@SHSMSX103.ccr.corp.intel.com>
References: <20191015143014.1656-1-yong.liu@intel.com>
	<20191015160739.51940-1-yong.liu@intel.com>
	<20191015160739.51940-5-yong.liu@intel.com>
Subject: Re: [dpdk-dev] [PATCH v6 04/13] vhost: add packed ring batch enqueue
List-Id: DPDK patches and discussions
Sender: "dev"

> -----Original Message-----
> From: Gavin Hu (Arm Technology China) [mailto:Gavin.Hu@arm.com]
> Sent: Tuesday, October 15, 2019 7:36 PM
> To: Liu, Yong; maxime.coquelin@redhat.com; Bie, Tiwei;
> Wang, Zhihong; stephen@networkplumber.org
> Cc: dev@dpdk.org; nd
> Subject: RE: [PATCH v6 04/13] vhost: add packed ring batch enqueue
>
> Hi Marvin,
>
> > -----Original Message-----
> > From: Marvin Liu
> > Sent: Wednesday, October 16, 2019 12:08 AM
> > To: maxime.coquelin@redhat.com; tiwei.bie@intel.com;
> > zhihong.wang@intel.com; stephen@networkplumber.org; Gavin Hu (Arm
> > Technology China)
> > Cc: dev@dpdk.org; Marvin Liu
> > Subject: [PATCH v6 04/13] vhost: add packed ring batch enqueue
> >
> > The batch enqueue function first checks whether the descriptors are
> > cache aligned, and checks the other prerequisites up front. It does
> > not support chained mbufs; those are handled by the single packet
> > enqueue function.
> >
> > Signed-off-by: Marvin Liu
> > Reviewed-by: Maxime Coquelin
> >
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 142c14e04..a8130dc06 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -881,6 +881,76 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  	return pkt_idx;
> >  }
> >
> > +static __rte_unused int
> > +virtio_dev_rx_batch_packed(struct virtio_net *dev,
> > +			   struct vhost_virtqueue *vq,
> > +			   struct rte_mbuf **pkts)
> > +{
> > +	bool wrap_counter = vq->avail_wrap_counter;
> > +	struct vring_packed_desc *descs = vq->desc_packed;
> > +	uint16_t avail_idx = vq->last_avail_idx;
> > +	uint64_t desc_addrs[PACKED_BATCH_SIZE];
> > +	struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE];
> > +	uint32_t buf_offset = dev->vhost_hlen;
> > +	uint64_t lens[PACKED_BATCH_SIZE];
> > +	uint16_t i;
> > +
> > +	if (unlikely(avail_idx & PACKED_BATCH_MASK))
> > +		return -1;
> > +
> > +	if (unlikely((avail_idx + PACKED_BATCH_SIZE) > vq->size))
> > +		return -1;
> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > +		if (unlikely(pkts[i]->next != NULL))
> > +			return -1;
> > +		if (unlikely(!desc_is_avail(&descs[avail_idx + i],
> > +					    wrap_counter)))
> > +			return -1;
> > +	}
> > +
> > +	rte_smp_rmb();
> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE)
> > +		lens[i] = descs[avail_idx + i].len;
> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > +		if (unlikely(pkts[i]->pkt_len > (lens[i] - buf_offset)))
> > +			return -1;
> > +	}
> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE)
> > +		desc_addrs[i] = vhost_iova_to_vva(dev, vq,
> > +						  descs[avail_idx + i].addr,
> > +						  &lens[i],
> > +						  VHOST_ACCESS_RW);
> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > +		if (unlikely(lens[i] != descs[avail_idx + i].len))
> > +			return -1;
> > +	}
> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > +		rte_prefetch0((void *)(uintptr_t)desc_addrs[i]);
> > +		hdrs[i] = (struct virtio_net_hdr_mrg_rxbuf *)
> > +			  (uintptr_t)desc_addrs[i];
> > +		lens[i] = pkts[i]->pkt_len + dev->vhost_hlen;
> > +	}
> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE)
> > +		virtio_enqueue_offload(pkts[i], &hdrs[i]->hdr);
> > +
> > +	vq_inc_last_avail_packed(vq, PACKED_BATCH_SIZE);
>
> Is last_avail_idx a shared variable? Why is it updated before the
> payload copy that follows? Won't this let the other side see
> earlier-than-arrival data?
> /Gavin

Hi Gavin,
last_avail_idx and last_used_idx are both vhost-local variables; the
guest never reads them. They track the next available and the next used
index of the virtqueue. last_avail_idx should advance once the
descriptors have been consumed, while last_used_idx should advance only
after the descriptor flags have been updated. Since the guest only sees
the data once the used flags are written, advancing last_avail_idx
before the payload copy cannot expose earlier-than-arrival data.
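
To make the ordering concrete, here is a minimal sketch of the
publication sequence. This is not the DPDK vhost code: the struct and
helper names (sketch_vq, publish_batch) are invented for illustration,
and ring wrap-around is ignored for brevity.

#include <stdint.h>
#include <string.h>

struct sketch_vq {
	uint16_t last_avail_idx;       /* host-private bookkeeping */
	uint16_t last_used_idx;        /* host-private bookkeeping */
	volatile uint16_t *desc_flags; /* guest-visible descriptor flags */
};

static void
publish_batch(struct sketch_vq *vq, void *dst[], const void *src[],
	      size_t len[], int batch)
{
	uint16_t idx = vq->last_used_idx;
	int i;

	/* Advancing the host-private avail index early is harmless:
	 * the guest never reads this field. */
	vq->last_avail_idx += batch;

	/* Copy the payloads into the guest buffers. */
	for (i = 0; i < batch; i++)
		memcpy(dst[i], src[i], len[i]);

	/* The guest polls the descriptor flags, so the buffers only
	 * become visible here; the release fence keeps the copies
	 * ordered before the flag writes. */
	__atomic_thread_fence(__ATOMIC_RELEASE);
	for (i = 0; i < batch; i++)
		vq->desc_flags[idx + i] |= 0x8080; /* e.g. AVAIL|USED bits */
	vq->last_used_idx += batch;
}
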
Thanks,
Marvin

> > +
> > +	for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > +		rte_memcpy((void *)(uintptr_t)(desc_addrs[i] + buf_offset),
> > +			   rte_pktmbuf_mtod_offset(pkts[i], void *, 0),
> > +			   pkts[i]->pkt_len);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  static __rte_unused int16_t
> >  virtio_dev_rx_single_packed(struct virtio_net *dev,
> >  			    struct vhost_virtqueue *vq,
> > --
> > 2.17.1
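
Both helpers are still marked __rte_unused at this point in the series;
presumably a later patch wires them into the burst enqueue path. A
rough sketch of how such a caller could combine them follows. This
wrapper is hypothetical, not part of the patch; it assumes both helpers
return 0 on success and nonzero on failure, as
virtio_dev_rx_batch_packed does above.

/* Hypothetical caller sketch: try the batch path first and fall back
 * to the single-packet path when the batch prerequisites (aligned
 * index, no chained mbufs, enough descriptors) are not met. */
static uint32_t
virtio_dev_rx_packed_sketch(struct virtio_net *dev,
			    struct vhost_virtqueue *vq,
			    struct rte_mbuf **pkts, uint32_t count)
{
	uint32_t pkt_idx = 0;

	while (pkt_idx < count) {
		/* Enough packets left for a full batch? */
		if (count - pkt_idx >= PACKED_BATCH_SIZE &&
		    !virtio_dev_rx_batch_packed(dev, vq, &pkts[pkt_idx])) {
			pkt_idx += PACKED_BATCH_SIZE;
			continue;
		}
		/* Batch prerequisites not met: fall back to single. */
		if (virtio_dev_rx_single_packed(dev, vq, pkts[pkt_idx]))
			break;
		pkt_idx++;
	}
	return pkt_idx; /* number of packets enqueued */
}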