DPDK patches and discussions
From: "Xie, Huawei" <huawei.xie@intel.com>
To: "Ouyang, Changchun" <changchun.ouyang@intel.com>,
	Thomas Monjalon <thomas.monjalon@6wind.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] vhost: Fix the vhost broken issue
Date: Fri, 17 Oct 2014 03:19:45 +0000	[thread overview]
Message-ID: <C37D651A908B024F974696C65296B57B0F2C0119@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <F52918179C57134FAEC9EA62FA2F962511864157@shsmsx102.ccr.corp.intel.com>

Thomas:
Any thoughts about this? Could I send the example patch with this workaround?

> -----Original Message-----
> From: Ouyang, Changchun
> Sent: Thursday, October 16, 2014 8:15 PM
> To: Xie, Huawei; Thomas Monjalon; dev@dpdk.org
> Cc: Cao, Waterman; Richardson, Bruce; Ouyang, Changchun
> Subject: RE: [PATCH] vhost: Fix the vhost broken issue
> 
> No problem, I will investigate the root cause.
> 
> But before the root cause is figured out, I think we could add a check in your
> new sample for whether INC_VEC is enabled. If it is enabled, print an error
> message and hint the user to disable it in the config file when the mergeable
> feature is used in vhost. Then this issue should not block you from sending
> out your vhost app patch.
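
A minimal sketch of the kind of guard suggested above, assuming the build flag
behind "INC_VEC" is RTE_IXGBE_INC_VECTOR (set via CONFIG_RTE_IXGBE_INC_VECTOR)
and that the sample exposes a boolean mergeable option from its command-line
parsing; both names are assumptions for illustration, not taken from the
patches:

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Illustrative guard only: refuse to start when mergeable RX buffers are
     * requested while the vector scatter RX path is compiled in, and tell the
     * user which build option to flip.
     */
    static void
    check_mergeable_vs_vector_rx(int mergeable)
    {
    #ifdef RTE_IXGBE_INC_VECTOR
    	if (mergeable) {
    		fprintf(stderr, "--mergeable is not supported with the vector "
    			"scatter RX path; set CONFIG_RTE_IXGBE_INC_VECTOR=n "
    			"in the build config and rebuild.\n");
    		exit(EXIT_FAILURE);
    	}
    #else
    	(void)mergeable;
    #endif
    }

Such a check would be called once, right after option parsing in the sample's
main().
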
> 
> Your vhost app patch is blocking another patch of mine, for the multicast
> feature, since your vhost lib patch deletes the vhost sample app entirely.
> So I expect your vhost app patch to be sent out soon.
> 
> Thanks and regards,
> Changchun
> 
> 
> -----Original Message-----
> From: Xie, Huawei
> Sent: Thursday, October 16, 2014 3:20 AM
> To: Ouyang, Changchun; Thomas Monjalon; dev@dpdk.org
> Cc: Cao, Waterman; Richardson, Bruce
> Subject: RE: [PATCH] vhost: Fix the vhost broken issue
> 
> I generated the vhost example patch based on the vhost library, but found an
> issue with the --mergeable feature: only a few thousand packets can be sent.
> Then I tried the latest vhost example, i.e. the one just before my vhost lib
> patch, and found that not only does it have the issue fixed by the patch
> below, but its --mergeable feature doesn't work either.
> Haven't had the chance to dig into it yet.
> 
> Hints here:
> 1. mbuf allocation starts failing after a few thousand packets.
> 2. Disabling INC_VEC (vector scatter receive) in the build config solves this.
> 3. Sending the packets directly out after receiving them from the VMDq queue
>    works.
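
Hint 3 is a useful bisection: pull packets from the VMDq RX queue and send them
straight back out the physical port, bypassing the virtio enqueue path, to
confirm the mbuf pool and RX path are healthy on their own. A rough sketch of
such a loop (port and queue ids are placeholders; only the standard
rte_eth_rx_burst/rte_eth_tx_burst/rte_pktmbuf_free calls are used):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define LOOPBACK_BURST 32

    /*
     * Diagnostic loop only: receive from the VMDq RX queue and transmit the
     * same mbufs back out TX queue 0, skipping virtio_dev_rx() entirely.
     */
    static void
    loopback_vmdq_queue(uint8_t port_id, uint16_t rx_queue_id)
    {
    	struct rte_mbuf *bufs[LOOPBACK_BURST];
    	uint16_t nb_rx, nb_tx;

    	for (;;) {
    		nb_rx = rte_eth_rx_burst(port_id, rx_queue_id,
    				bufs, LOOPBACK_BURST);
    		if (nb_rx == 0)
    			continue;

    		nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

    		/* Free whatever TX did not take, otherwise the pool drains
    		 * and RX starts hitting mbuf allocation failures. */
    		while (nb_tx < nb_rx)
    			rte_pktmbuf_free(bufs[nb_tx++]);
    	}
    }
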
> 
> Could you root cause the issue, Changchun? You could work on the most recent
> example.
> 
> > -----Original Message-----
> > From: Ouyang, Changchun
> > Sent: Monday, October 13, 2014 12:48 AM
> > To: Thomas Monjalon; dev@dpdk.org
> > Cc: Xie, Huawei; Cao, Waterman; Ouyang, Changchun
> > Subject: RE: [PATCH] vhost: Fix the vhost broken issue
> >
> > Hi Thomas,
> >
> > If Huawei Xie's patch set for the vhost library and the new vhost sample
> > can be applied on dpdk.org very soon, then this patch can be superseded;
> > I think his patch set fixes this issue.
> > Otherwise, this patch should be high priority, as vhost is broken in the
> > tip code due to the recent commit changing the mbuf layout.
> >
> > Thanks and regards,
> > Changchun
> >
> > > -----Original Message-----
> > > From: Ouyang, Changchun
> > > Sent: Monday, October 13, 2014 3:40 PM
> > > To: dev@dpdk.org
> > > Cc: Cao, Waterman; Ouyang, Changchun
> > > Subject: [PATCH] vhost: Fix the vhost broken issue
> > >
> > > The vhost sample is broken by the following commit:
> > >   commit 08b563ffb19d8baf59dd84200f25bc85031d18a7
> > >   Author: Olivier Matz <olivier.matz@6wind.com>
> > >   Date:   Thu Sep 11 14:15:35 2014 +0100
> > >   mbuf: replace data pointer by an offset
> > >
> > > It leads to a segmentation fault in vhost when binding a virtio
> > > device's MAC address to its corresponding VMDq pool by executing the
> > > command 'start tx_first' in testpmd on the guest.
> > >
> > > This patch fixes that issue.
> > >
> > > Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
> > > ---
> > >  examples/vhost/main.c | 1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > > index 9cf8e20..a6db607 100644
> > > --- a/examples/vhost/main.c
> > > +++ b/examples/vhost/main.c
> > > @@ -1782,6 +1782,7 @@ virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)
> > >  		/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. */
> > >  		m.data_len = desc->len;
> > >  		m.pkt_len = desc->len;
> > > +		m.buf_addr = (void *)(uintptr_t)buff_addr;
> > >  		m.data_off = 0;
> > >
> > >  		PRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);
> > > --
> > > 1.8.4.2
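
For context on why the one-line fix is needed: commit 08b563ff replaces the
mbuf's data pointer with an offset (data_off) relative to buf_addr, so code
that reads packet data through an mbuf now resolves the address as
buf_addr + data_off (that is what rte_pktmbuf_mtod() expands to after the
change). The dummy mbuf that virtio_dev_tx() builds on the stack therefore has
to carry buf_addr pointing at the guest buffer, otherwise the later copy to a
real mbuf dereferences an uninitialized field. A condensed illustration, not
the actual sample code (the helper name is invented; the field names match
struct rte_mbuf of that series):

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Set up a stack-allocated dummy mbuf that merely describes a guest
     * buffer, so it can be copied to a real mbuf later. */
    static void
    setup_dummy_mbuf(struct rte_mbuf *m, uint64_t buff_addr, uint32_t len)
    {
    	m->data_len = len;
    	m->pkt_len = len;
    	m->buf_addr = (void *)(uintptr_t)buff_addr;	/* the missing assignment */
    	m->data_off = 0;

    	/* After the mbuf rework this expands to buf_addr + data_off; with
    	 * buf_addr left unset it reads uninitialized stack memory, which is
    	 * what crashed vhost on 'start tx_first'. */
    	void *pkt_data = rte_pktmbuf_mtod(m, void *);
    	(void)pkt_data;
    }
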

Thread overview: 6+ messages
2014-10-13  7:39 Ouyang Changchun
2014-10-13  7:48 ` Ouyang, Changchun
2014-10-15 19:20   ` Xie, Huawei
2014-10-17  3:15     ` Ouyang, Changchun
2014-10-17  3:19       ` Xie, Huawei [this message]
2014-10-17  7:28         ` Thomas Monjalon
