DPDK patches and discussions
From: "Tan, Jianfeng" <jianfeng.tan@intel.com>
To: Pavel Fedin <p.fedin@samsung.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [RFC 0/5] virtio support for container
Date: Thu, 31 Dec 2015 10:02:45 +0000	[thread overview]
Message-ID: <ED26CBA2FAD1BF48A8719AEF02201E36031B678C@shsmsx102.ccr.corp.intel.com> (raw)
In-Reply-To: <002401d143af$38a6fa60$a9f4ef20$@samsung.com>



> -----Original Message-----
> From: Pavel Fedin [mailto:p.fedin@samsung.com]
> Sent: Thursday, December 31, 2015 5:40 PM
> To: Tan, Jianfeng; dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC 0/5] virtio support for container
> 
>  Hello!
> 
> > First of all, when you say openvswitch, are you referring to ovs-dpdk?
> 
>  I am referring to mainline ovs, compiled with dpdk and using the userspace
> dataplane.
>  AFAIK ovs-dpdk is an early Intel fork, which is abandoned at the moment.
> 
> > And can you detail your test case? Like, how do you want ovs_on_host and
> ovs_in_container to
> > be connected?
> > Through two-direct-connected physical NICs, or one vhost port in
> ovs_on_host and one virtio
> > port in ovs_in_container?
> 
>  vhost port, i.e.
> 
>                              |
> LOCAL------dpdkvhostuser<----+---->cvio----->LOCAL
>       ovs                    |          ovs
>                              |
>                 host         |        container
> 
>  By this time I have advanced in my research. ovs not only crashes by itself, but
> also manages to crash the host side. It does this by performing the reconfiguration
> sequence without sending VHOST_USER_SET_MEM_TABLE, so the host-side ovs
> tries to dereference the old addresses and dies badly.

Yes, this case is exactly suited for this patchset.
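
To see why it dies so badly: the host-side backend can only translate guest
physical addresses through the region table it received in the last
VHOST_USER_SET_MEM_TABLE. Roughly like this (a minimal sketch following the
vhost-user protocol; the struct and function names are illustrative, not the
exact DPDK code):

    #include <stdint.h>
    #include <stddef.h>

    /* One entry of the table carried by VHOST_USER_SET_MEM_TABLE. */
    struct mem_region {
            uint64_t guest_phys_addr; /* start GPA announced by frontend */
            uint64_t size;            /* length of the region */
            uint64_t host_user_addr;  /* where the backend mmap()ed it */
    };

    /* Translate a guest physical address to a backend virtual address. */
    static void *
    gpa_to_vva(struct mem_region *regions, int nregions, uint64_t gpa)
    {
            int i;

            for (i = 0; i < nregions; i++) {
                    struct mem_region *r = &regions[i];

                    if (gpa >= r->guest_phys_addr &&
                        gpa < r->guest_phys_addr + r->size)
                            return (void *)(uintptr_t)(r->host_user_addr +
                                    (gpa - r->guest_phys_addr));
            }
            return NULL; /* GPA not covered by any known region */
    }

If the frontend remaps its memory during reconfiguration but never resends
VHOST_USER_SET_MEM_TABLE, every host_user_addr above still points at the old
mappings, so the first dereference of a "translated" ring address faults --
exactly the host-side crash you describe.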

Before you start another ovs_in_container, does the previous one get killed? If so, the vhost
information in ovs_on_host will be wiped when the unix socket is broken.
And by the way, ovs allows only one virtio device per vhost port, which is quite different from
the example app, vhost-switch.
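
Concretely, each dpdkvhostuser port on the host side owns exactly one unix
socket, and only one frontend can be attached to it at a time. For example
(standard ovs-vsctl usage; the bridge and port names are just placeholders):

    # host side: one vhost-user port == one socket == one virtio frontend
    ovs-vsctl add-port br0 vhostuser0 \
        -- set Interface vhostuser0 type=dpdkvhostuser

The container-side cvio port is then pointed at that port's socket (ovs
creates it under its run directory, named after the port). When the container
dies, the connection on that socket drops and the host side discards its
vhost state for the port, so a fresh container has to go through the full
setup sequence again, including VHOST_USER_SET_MEM_TABLE.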

Thanks,
Jianfeng  

>  Those messages about memory pool already being present are perhaps OK.
> 
> Kind regards,
> Pavel Fedin
> Expert Engineer
> Samsung Electronics Research center Russia
> 

Thread overview: 16+ messages
2015-12-30  9:46 Pavel Fedin
2015-12-31  9:19 ` Tan, Jianfeng
2015-12-31  9:40   ` Pavel Fedin
2015-12-31 10:02     ` Tan, Jianfeng [this message]
2015-12-31 10:38       ` Pavel Fedin
2015-12-31 11:58         ` Tan, Jianfeng
2015-12-31 12:44           ` Pavel Fedin
2015-12-31 12:54             ` Tan, Jianfeng
2015-12-31 13:07               ` Pavel Fedin
2015-12-31 13:47           ` Pavel Fedin
2015-12-31 15:39           ` Pavel Fedin
2016-01-06  5:47             ` Tan, Jianfeng
  -- strict thread matches above, loose matches on Subject: below --
2017-06-15  8:21 Avi Cohen (A)
2015-11-05 18:31 Jianfeng Tan
2015-11-24  3:53 ` Zhuangyanying
2015-11-24  6:19   ` Tan, Jianfeng
