From: "Wang, Zhihong" <zhihong.wang@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>, "dev@dpdk.org" <dev@dpdk.org>
CC: "yuanhan.liu@linux.intel.com" <yuanhan.liu@linux.intel.com>,
 "thomas.monjalon@6wind.com" <thomas.monjalon@6wind.com>
Thread-Topic: [PATCH v5 5/6] vhost: batch update used ring
Thread-Index: AQHSCoeQzsSjk8GNGUa+xR0CoUAoi6B1fkEAgAM0D/A=
Date: Wed, 14 Sep 2016 08:43:30 +0000
Message-ID: <8F6C2BD409508844A0EFC19955BE09414E70FB6A@SHSMSX103.ccr.corp.intel.com>
References: <1471319402-112998-1-git-send-email-zhihong.wang@intel.com>
 <1473392368-84903-1-git-send-email-zhihong.wang@intel.com>
 <1473392368-84903-6-git-send-email-zhihong.wang@intel.com>
 <473ef253-86bf-9a7a-d028-21c27690a421@redhat.com>
In-Reply-To: <473ef253-86bf-9a7a-d028-21c27690a421@redhat.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v5 5/6] vhost: batch update used ring



> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Monday, September 12, 2016 11:46 PM
> To: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
> Cc: yuanhan.liu@linux.intel.com; thomas.monjalon@6wind.com
> Subject: Re: [PATCH v5 5/6] vhost: batch update used ring
>
>
>
> On 09/09/2016 05:39 AM, Zhihong Wang wrote:
> > This patch enables batch update of the used ring for better efficiency.
> >
> > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
> > ---
> > Changes in v4:
> >
> >  1. Free shadow used ring in the right place.
> >
> >  2. Add failure check for shadow used ring malloc.
> >
> >  lib/librte_vhost/vhost.c      | 20 ++++++++++++--
> >  lib/librte_vhost/vhost.h      |  4 +++
> >  lib/librte_vhost/vhost_user.c | 31 +++++++++++++++++----
> > >  lib/librte_vhost/virtio_net.c | 64 +++++++++++++++++++++++++++++++++++--------
> >  4 files changed, 101 insertions(+), 18 deletions(-)
> >
> > diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> > index 46095c3..cb31cdd 100644
> > --- a/lib/librte_vhost/vhost.c
> > +++ b/lib/librte_vhost/vhost.c
> > > @@ -119,10 +119,26 @@ cleanup_device(struct virtio_net *dev, int destroy)
> >  static void
> >  free_device(struct virtio_net *dev)
> >  {
> > +	struct vhost_virtqueue *vq_0;
> > +	struct vhost_virtqueue *vq_1;
> >  	uint32_t i;
> >
> > > -	for (i = 0; i < dev->virt_qp_nb; i++)
> > > -		rte_free(dev->virtqueue[i * VIRTIO_QNUM]);
> > > +	for (i = 0; i < dev->virt_qp_nb; i++) {
> > > +		vq_0 = dev->virtqueue[i * VIRTIO_QNUM];
> > > +		if (vq_0->shadow_used_ring) {
> > > +			rte_free(vq_0->shadow_used_ring);
> > > +			vq_0->shadow_used_ring = NULL;
> > > +		}
> > > +
> > > +		vq_1 = dev->virtqueue[i * VIRTIO_QNUM + 1];
> > > +		if (vq_1->shadow_used_ring) {
> > > +			rte_free(vq_1->shadow_used_ring);
> > > +			vq_1->shadow_used_ring = NULL;
> > +		}
> > +
> > +		/* malloc together, free together */
> > +		rte_free(vq_0);
> > +	}
> >
> >  	rte_free(dev);
> >  }
> > diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> > index 9707dfc..381dc27 100644
> > --- a/lib/librte_vhost/vhost.h
> > +++ b/lib/librte_vhost/vhost.h
> > @@ -85,6 +85,10 @@ struct vhost_virtqueue {
> >
> >  	/* Physical address of used ring, for logging */
> >  	uint64_t		log_guest_addr;
> > +
> > +	/* Shadow used ring for performance */
> > +	struct vring_used_elem	*shadow_used_ring;
> > +	uint32_t		shadow_used_idx;
> >  } __rte_cache_aligned;
> >
> >  /* Old kernels have no such macro defined */
> > > diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> > index eee99e9..d7cf1ed 100644
> > --- a/lib/librte_vhost/vhost_user.c
> > +++ b/lib/librte_vhost/vhost_user.c
> > @@ -193,7 +193,21 @@ static int
> >  vhost_user_set_vring_num(struct virtio_net *dev,
> >  			 struct vhost_vring_state *state)
> >  {
> > > -	dev->virtqueue[state->index]->size = state->num;
> > > +	struct vhost_virtqueue *vq;
> > > +
> > > +	vq = dev->virtqueue[state->index];
> > > +	vq->size = state->num;
> > > +	if (!vq->shadow_used_ring) {
> > > +		vq->shadow_used_ring = rte_malloc(NULL,
> > +				vq->size * sizeof(struct vring_used_elem),
> > +				RTE_CACHE_LINE_SIZE);
> > +		if (!vq->shadow_used_ring) {
> > +			RTE_LOG(ERR, VHOST_CONFIG,
> > +				"Failed to allocate memory"
> > +				" for shadow used ring.\n");
> > +			return -1;
> > +		}
> > +	}
> >
> >  	return 0;
> >  }
> > @@ -611,14 +625,21 @@ static int
> >  vhost_user_get_vring_base(struct virtio_net *dev,
> >  			  struct vhost_vring_state *state)
> >  {
> > +	struct vhost_virtqueue *vq;
> > +
> >  	/* We have to stop the queue (virtio) if it is running. */
> >  	if (dev->flags & VIRTIO_DEV_RUNNING) {
> > >  		dev->flags &= ~VIRTIO_DEV_RUNNING;
> > >  		notify_ops->destroy_device(dev->vid);
> > >  	}
> > >
> > > +	vq = dev->virtqueue[state->index];
> > >  	/* Here we are safe to get the last used index */
> > > -	state->num = dev->virtqueue[state->index]->last_used_idx;
> > > +	state->num = vq->last_used_idx;
> > > +	if (vq->shadow_used_ring) {
> > > +		rte_free(vq->shadow_used_ring);
> > > +		vq->shadow_used_ring = NULL;
> > +	}
> >
> >  	RTE_LOG(INFO, VHOST_CONFIG,
> >  		"vring base idx:%d file:%d\n", state->index, state->num);
> > > @@ -627,10 +648,10 @@ vhost_user_get_vring_base(struct virtio_net *dev,
> > >  	 * sent and only sent in vhost_vring_stop.
> > >  	 * TODO: cleanup the vring, it isn't usable since here.
> > >  	 */
> > > -	if (dev->virtqueue[state->index]->kickfd >= 0)
> > > -		close(dev->virtqueue[state->index]->kickfd);
> > > +	if (vq->kickfd >= 0)
> > > +		close(vq->kickfd);
> > >
> > > -	dev->virtqueue[state->index]->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
> > > +	vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
> >
> >  	return 0;
> >  }
> > > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index b38f18f..e9f6353 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > > @@ -134,17 +134,52 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
> >  }
> >
> >  static inline void __attribute__((always_inline))
> > -update_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > -		uint32_t desc_chain_head, uint32_t desc_chain_len)
> > > +update_used_ring(struct vhost_virtqueue *vq, uint32_t desc_chain_head,
> > > +		uint32_t desc_chain_len)
> > >  {
> > > -	uint32_t used_idx = vq->last_used_idx & (vq->size - 1);
> > > -
> > > -	vq->used->ring[used_idx].id = desc_chain_head;
> > > -	vq->used->ring[used_idx].len = desc_chain_len;
> > > +	vq->shadow_used_ring[vq->shadow_used_idx].id  = desc_chain_head;
> > > +	vq->shadow_used_ring[vq->shadow_used_idx].len = desc_chain_len;
> > +	vq->shadow_used_idx++;
> >  	vq->last_used_idx++;
> > -	vhost_log_used_vring(dev, vq, offsetof(struct vring_used,
> > -				ring[used_idx]),
> > -			sizeof(vq->used->ring[used_idx]));
> > +}
> > +
> > +static inline void __attribute__((always_inline))
> > +flush_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > +		uint32_t used_idx_start)
> > +{
> > +	if (used_idx_start + vq->shadow_used_idx < vq->size) {
> > +		rte_memcpy(&vq->used->ring[used_idx_start],
> > +				&vq->shadow_used_ring[0],
> > +				vq->shadow_used_idx *
> > +				sizeof(struct vring_used_elem));
> > +		vhost_log_used_vring(dev, vq,
> > +				offsetof(struct vring_used,
> > +					ring[used_idx_start]),
> > +				vq->shadow_used_idx *
> > +				sizeof(struct vring_used_elem));
> > +	} else {
> > > +		uint32_t part_1 = vq->size - used_idx_start;
> > > +		uint32_t part_2 = vq->shadow_used_idx - part_1;
> > +
> > +		rte_memcpy(&vq->used->ring[used_idx_start],
> > +				&vq->shadow_used_ring[0],
> > +				part_1 *
> > +				sizeof(struct vring_used_elem));
> > +		vhost_log_used_vring(dev, vq,
> > +				offsetof(struct vring_used,
> > +					ring[used_idx_start]),
> > +				part_1 *
> > +				sizeof(struct vring_used_elem));
> > +		rte_memcpy(&vq->used->ring[0],
> > +				&vq->shadow_used_ring[part_1],
> > +				part_2 *
> > +				sizeof(struct vring_used_elem));
> > +		vhost_log_used_vring(dev, vq,
> > +				offsetof(struct vring_used,
> > +					ring[0]),
> > +				part_2 *
> > +				sizeof(struct vring_used_elem));
> > +	}
> >  }
> Is expanding the code done for performance purpose?

Hi Maxime,

Yes, in theory this form takes the fewest branches.
And I think the logic is simpler this way.
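
To make the split concrete, here is a minimal standalone sketch. It is not
the vhost code itself: the 8-entry ring, the dummy elem struct, plain memcpy
and main() are made up purely for illustration of the two-part copy.

/*
 * Illustration only -- hypothetical 8-entry ring, not the real vhost
 * structures. With used_idx_start = 6 and 5 shadow entries, the else
 * branch copies part_1 = 2 entries into slots 6-7 and part_2 = 3 entries
 * into slots 0-2, so any batch needs at most two copies and one branch.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 8	/* power of two, as required by virtio */

struct elem {
	uint32_t id;
	uint32_t len;
};

static void
flush(struct elem *used, const struct elem *shadow,
	uint32_t used_idx_start, uint32_t shadow_used_idx)
{
	if (used_idx_start + shadow_used_idx < RING_SIZE) {
		/* no wrap: a single contiguous copy */
		memcpy(&used[used_idx_start], &shadow[0],
			shadow_used_idx * sizeof(*used));
	} else {
		/* wrap: tail of the ring first, then the head */
		uint32_t part_1 = RING_SIZE - used_idx_start;
		uint32_t part_2 = shadow_used_idx - part_1;

		memcpy(&used[used_idx_start], &shadow[0],
			part_1 * sizeof(*used));
		memcpy(&used[0], &shadow[part_1],
			part_2 * sizeof(*used));
	}
}

int
main(void)
{
	struct elem used[RING_SIZE] = { {0, 0} };
	struct elem shadow[5] = {
		{10, 1}, {11, 1}, {12, 1}, {13, 1}, {14, 1}
	};
	uint32_t i;

	flush(used, shadow, 6, 5);	/* ids 10-11 land in slots 6-7, 12-14 in 0-2 */

	for (i = 0; i < RING_SIZE; i++)
		printf("slot %u: id %u len %u\n", i, used[i].id, used[i].len);

	return 0;
}

A loop-based flush produces the same layout; the difference is only how the
two segments are derived, either up front or per iteration.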

Thanks
Zhihong

> Or maybe we could have a loop to do that?
> Something like this (not compiled, not tested):
>
> static inline void __attribute__((always_inline))
> flush_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
> 		uint32_t used_idx_start)
> {
> 	uint32_t to = used_idx_start;
> 	uint32_t from = 0;
> 	uint32_t count;
>
> 	if (used_idx_start + vq->shadow_used_idx < vq->size)
> 		count = vq->shadow_used_idx;
> 	else
> 		count = vq->size - used_idx_start;
>
> 	do {
> 		rte_memcpy(&vq->used->ring[to],
> 				&vq->shadow_used_ring[from],
> 				count * sizeof(struct vring_used_elem));
> 		vhost_log_used_vring(dev, vq,
> 				offsetof(struct vring_used, ring[to]),
> 				count * sizeof(struct vring_used_elem));
>
> 		to = (to + count) & (vq->size - 1);
> 		from += count;
> 		count = vq->shadow_used_idx - from;
> 	} while (count);
> }
>
> Regards,
> Maxime