DPDK patches and discussions
From: "Chen, Junjie J" <junjie.j.chen@intel.com>
To: "Yang, Zhiyong" <zhiyong.yang@intel.com>,
	"yliu@fridaylinux.org" <yliu@fridaylinux.org>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] vhost: do deep copy while reallocate vq
Date: Tue, 16 Jan 2018 07:38:41 +0000	[thread overview]
Message-ID: <AA85A5A5E706C44BACB0BEFD5AC08BF63132C4EF@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <E182254E98A5DA4EB1E657AC7CB9BD2A8B025827@BGSMSX101.gar.corp.intel.com>

Hi
> > > > @@ -227,6 +227,7 @@ vhost_user_set_vring_num(struct virtio_net *dev,
> > > >  				"zero copy is force disabled\n");
> > > >  			dev->dequeue_zero_copy = 0;
> > > >  		}
> > > > +		TAILQ_INIT(&vq->zmbuf_list);
> > > >  	}
> > > >
> > > >  	vq->shadow_used_ring = rte_malloc(NULL,
> > > > @@ -261,6 +262,9 @@ numa_realloc(struct virtio_net *dev, int index)
> > > >  	int oldnode, newnode;
> > > >  	struct virtio_net *old_dev;
> > > >  	struct vhost_virtqueue *old_vq, *vq;
> > > > +	struct zcopy_mbuf *new_zmbuf;
> > > > +	struct vring_used_elem *new_shadow_used_ring;
> > > > +	struct batch_copy_elem *new_batch_copy_elems;
> > > >  	int ret;
> > > >
> > > >  	old_dev = dev;
> > > > @@ -285,6 +289,33 @@ numa_realloc(struct virtio_net *dev, int index)
> > > >  			return dev;
> > > >
> > > >  		memcpy(vq, old_vq, sizeof(*vq));
> > > > +		TAILQ_INIT(&vq->zmbuf_list);
> > > > +
> > > > +		new_zmbuf = rte_malloc_socket(NULL, vq->zmbuf_size *
> > > > +			sizeof(struct zcopy_mbuf), 0, newnode);
> > > > +		if (new_zmbuf) {
> > > > +			rte_free(vq->zmbufs);
> > > > +			vq->zmbufs = new_zmbuf;
> > > > +		}
> > >
> > > You need to consider how to handle the case where rte_malloc_socket()
> > > returns NULL.
> >
> > If it fails to allocate new_zmbuf, it keeps using the old zmbufs, so as
> > to keep vhost alive.
> 
> That sounds reasonable. Another question: for the 3 blocks of memory being
> allocated, if some succeed and others fail, does that mean the code will
> access memory on different sockets? What's the perf impact if that happens?

The original code doesn't do a deep copy and thus keeps accessing memory on a different socket; this patch mitigates that situation. It still accesses remote memory when one of the above allocations fails.
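
To illustrate, here is a minimal sketch of that allocate-or-keep-old pattern
(not the patch itself; migrate_buf() is a hypothetical helper, and the 0
alignment argument is an assumption):

#include <string.h>
#include <rte_malloc.h>

/* Try to move a buffer to the target NUMA node. On allocation failure,
 * return the old (remote) buffer so vhost stays alive, at the cost of
 * continued cross-socket access. */
static void *
migrate_buf(void *old_buf, size_t size, size_t copy_size, int newnode)
{
	void *new_buf = rte_malloc_socket(NULL, size, 0, newnode);

	if (new_buf == NULL)
		return old_buf;	/* fall back to remote memory */

	/* Preserve any contents that are still in use. */
	if (copy_size)
		memcpy(new_buf, old_buf, copy_size);

	rte_free(old_buf);
	return new_buf;
}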

I saw some performance improvement (24.8 Gbit/s -> 26.1 Gbit/s) on my dev
machine when reallocating only the zmbufs, while I didn't see a significant
performance difference when also reallocating vring_used_elem and
batch_copy_elem.
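
For completeness, the same helper applied to the three buffers might look as
follows (field names as in the diff above; copying the shadow ring contents
while dropping the zmbuf and batch-copy contents is my reading of the patch,
not verified here):

	vq->zmbufs = migrate_buf(vq->zmbufs,
			vq->zmbuf_size * sizeof(struct zcopy_mbuf),
			0, newnode);

	/* The shadow used ring may hold entries not yet flushed to the
	 * used ring, so its contents must survive the move. */
	vq->shadow_used_ring = migrate_buf(vq->shadow_used_ring,
			vq->size * sizeof(struct vring_used_elem),
			vq->size * sizeof(struct vring_used_elem), newnode);

	/* Batch-copy elements are refilled on every burst; no copy needed. */
	vq->batch_copy_elems = migrate_buf(vq->batch_copy_elems,
			vq->size * sizeof(struct batch_copy_elem),
			0, newnode);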


Thread overview: 8+ messages
2018-01-15 11:32 Junjie Chen
2018-01-15  9:05 ` Yang, Zhiyong
2018-01-15  9:14   ` Chen, Junjie J
2018-01-16  0:57     ` Yang, Zhiyong
2018-01-16  7:38       ` Chen, Junjie J [this message]
2018-01-17  1:36         ` Yang, Zhiyong
2018-01-16  8:54 ` Maxime Coquelin
2018-01-17 14:46 ` Yuanhan Liu
