DPDK patches and discussions
From: "Fu, Patrick" <patrick.fu@intel.com>
To: "Liu, Yong" <yong.liu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
	"Xia, Chenbo" <chenbo.xia@intel.com>,
	"Wang, Zhihong" <zhihong.wang@intel.com>
Cc: "Wang, Yinan" <yinan.wang@intel.com>,
	"Jiang, Cheng1" <cheng1.jiang@intel.com>,
	"Liang, Cunming" <cunming.liang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 2/2] vhost: introduce async enqueue for split ring
Date: Thu, 2 Jul 2020 12:21:28 +0000
Message-ID: <BYAPR11MB37351910DF2454A1755F0E26846D0@BYAPR11MB3735.namprd11.prod.outlook.com>
In-Reply-To: <86228AFD5BCD8E4EBFD2B90117B5E81E63602CC3@SHSMSX103.ccr.corp.intel.com>

Thanks Marvin, my comments inline:

> -----Original Message-----
> From: Liu, Yong <yong.liu@intel.com>
> Sent: Wednesday, July 1, 2020 4:51 PM
> To: Fu, Patrick <patrick.fu@intel.com>; dev@dpdk.org;
> maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Wang,
> Zhihong <zhihong.wang@intel.com>
> Cc: Fu, Patrick <patrick.fu@intel.com>; Wang, Yinan
> <yinan.wang@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>; Liang,
> Cunming <cunming.liang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] vhost: introduce async enqueue for
> split ring
> 
> >
> > +#define VHOST_ASYNC_BATCH_THRESHOLD 8
> > +
> 
> It is not very clear why the batch number is 8. It would be better to save it in
> rte_vhost_async_features if the value comes from a hardware requirement.
> 
We are in the process of benchmarking how this value impacts the final performance,
and we will handle this macro in a more reasonable manner.
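
For illustration, a minimal standalone sketch of the batching pattern this macro
controls: copy descriptors are accumulated locally and flushed to the async copy
engine once the threshold is reached or the burst ends. The dma_submit_copies()
hook, batch_threshold parameter and struct async_desc below are hypothetical
stand-ins for the real vhost async channel ops; the point is only that the flush
trigger could become a runtime value (e.g. derived from rte_vhost_async_features)
instead of a compile-time macro.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_PKT_BURST 32

    /* Placeholder for rte_vhost_async_desc. */
    struct async_desc { uint32_t id; };

    /* Hypothetical submit hook standing in for the async channel's
     * transfer_data() callback. */
    static void
    dma_submit_copies(struct async_desc *descs, uint16_t n)
    {
    	printf("submitting %u copy jobs (first id %u) to the DMA engine\n",
    		n, descs[0].id);
    }

    static void
    enqueue_burst(uint32_t count, uint16_t batch_threshold)
    {
    	struct async_desc tdes[MAX_PKT_BURST];
    	uint16_t pkt_burst_idx = 0;
    	uint32_t pkt_idx;

    	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
    		/* ... mapping of the mbuf into src/dst iovecs omitted ... */
    		tdes[pkt_burst_idx].id = pkt_idx;
    		pkt_burst_idx++;

    		/* Flush once enough jobs are batched; the threshold could be
    		 * a runtime value rather than VHOST_ASYNC_BATCH_THRESHOLD. */
    		if (pkt_burst_idx >= batch_threshold) {
    			dma_submit_copies(tdes, pkt_burst_idx);
    			pkt_burst_idx = 0;
    		}
    	}

    	/* Submit whatever is left at the end of the burst. */
    	if (pkt_burst_idx)
    		dma_submit_copies(tdes, pkt_burst_idx);
    }

    int
    main(void)
    {
    	enqueue_burst(20, 8);	/* 8 mirrors the current macro value */
    	return 0;
    }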

> > +
> > +static __rte_noinline uint32_t
> > +virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> > +	struct vhost_virtqueue *vq, uint16_t queue_id,
> > +	struct rte_mbuf **pkts, uint32_t count)
> > +{
> > +	uint32_t pkt_idx = 0, pkt_burst_idx = 0;
> > +	uint16_t num_buffers;
> > +	struct buf_vector buf_vec[BUF_VECTOR_MAX];
> > +	uint16_t avail_head, last_idx, shadow_idx;
> > +
> > +	struct rte_vhost_iov_iter *it_pool = vq->it_pool;
> > +	struct iovec *vec_pool = vq->vec_pool;
> > +	struct rte_vhost_async_desc tdes[MAX_PKT_BURST];
> > +	struct iovec *src_iovec = vec_pool;
> > +	struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
> > +	struct rte_vhost_iov_iter *src_it = it_pool;
> > +	struct rte_vhost_iov_iter *dst_it = it_pool + 1;
> > +	uint16_t n_free_slot, slot_idx;
> > +	int n_pkts = 0;
> > +
> > +	avail_head = *((volatile uint16_t *)&vq->avail->idx);
> > +	last_idx = vq->last_avail_idx;
> > +	shadow_idx = vq->shadow_used_idx;
> > +
> > +	/*
> > +	 * The ordering between avail index and
> > +	 * desc reads needs to be enforced.
> > +	 */
> > +	rte_smp_rmb();
> > +
> > +	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
> > +
> > +	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
> > +		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
> > +		uint16_t nr_vec = 0;
> > +
> > +		if (unlikely(reserve_avail_buf_split(dev, vq,
> > +						pkt_len, buf_vec,
> > &num_buffers,
> > +						avail_head, &nr_vec) < 0)) {
> > +			VHOST_LOG_DATA(DEBUG,
> > +				"(%d) failed to get enough desc from vring\n",
> > +				dev->vid);
> > +			vq->shadow_used_idx -= num_buffers;
> > +			break;
> > +		}
> > +
> > +		VHOST_LOG_DATA(DEBUG, "(%d) current index %d | end index %d\n",
> > +			dev->vid, vq->last_avail_idx,
> > +			vq->last_avail_idx + num_buffers);
> > +
> > +		if (async_mbuf_to_desc(dev, vq, pkts[pkt_idx],
> > +				buf_vec, nr_vec, num_buffers,
> > +				src_iovec, dst_iovec, src_it, dst_it) < 0) {
> > +			vq->shadow_used_idx -= num_buffers;
> > +			break;
> > +		}
> > +
> > +		slot_idx = (vq->async_pkts_idx + pkt_idx) & (vq->size - 1);
> > +		if (src_it->count) {
> > +			async_fill_des(&tdes[pkt_burst_idx], src_it, dst_it);
> > +			pkt_burst_idx++;
> > +			vq->async_pending_info[slot_idx] =
> > +				num_buffers | (src_it->nr_segs << 16);
> > +			src_iovec += src_it->nr_segs;
> > +			dst_iovec += dst_it->nr_segs;
> > +			src_it += 2;
> > +			dst_it += 2;
> 
> Patrick,
> In my understanding, the nr_segs type definition can follow the nr_vec type
> definition (uint16_t). That would shrink the data saved in async_pkts_pending
> from 64 bits to 32 bits.
> Since this information is used in the datapath, the smaller size will give
> better performance.
> 
> It would be better to replace the integer 2 with a macro.
> 
Will update the code as you suggested.
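
For illustration only, a small sketch of what the two suggestions could look like:
nr_segs kept as a 16-bit field so each pending-info entry fits in 32 bits, plus a
named constant for the per-packet iov_iter stride instead of the bare 2. The macro
and helper names below are hypothetical, not necessarily what the next revision
will use.

    #include <stdint.h>
    #include <assert.h>

    /* Hypothetical names; the actual patch may choose different ones. */
    #define ASYNC_PENDING_INFO_N_SGS_SHIFT	16
    #define ASYNC_ITER_STRIDE		2	/* one src + one dst iov_iter per packet */

    /* Pack buffer count (low 16 bits) and segment count (high 16 bits)
     * into a single 32-bit pending-info entry. */
    static inline uint32_t
    async_pending_info_pack(uint16_t num_buffers, uint16_t nr_segs)
    {
    	return (uint32_t)num_buffers |
    	       ((uint32_t)nr_segs << ASYNC_PENDING_INFO_N_SGS_SHIFT);
    }

    static inline uint16_t
    async_pending_info_buffers(uint32_t info)
    {
    	return (uint16_t)(info & 0xffff);
    }

    static inline uint16_t
    async_pending_info_segs(uint32_t info)
    {
    	return (uint16_t)(info >> ASYNC_PENDING_INFO_N_SGS_SHIFT);
    }

    int
    main(void)
    {
    	uint32_t info = async_pending_info_pack(3, 5);

    	assert(async_pending_info_buffers(info) == 3);
    	assert(async_pending_info_segs(info) == 5);

    	/* The iterator pointers would then advance by the named stride:
    	 * src_it += ASYNC_ITER_STRIDE; dst_it += ASYNC_ITER_STRIDE; */
    	return 0;
    }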

Thanks,

Patrick


Thread overview: 5+ messages
2020-06-29 14:44 [dpdk-dev] [PATCH v2 0/2] introduce asynchronous data path for vhost patrick.fu
2020-06-29 14:44 ` [dpdk-dev] [PATCH v2 1/2] vhost: introduce async enqueue registration API patrick.fu
2020-06-29 14:44 ` [dpdk-dev] [PATCH v2 2/2] vhost: introduce async enqueue for split ring patrick.fu
2020-07-01  8:50   ` Liu, Yong
2020-07-02 12:21     ` Fu, Patrick [this message]
