DPDK usage discussions
From: "Avi Cohen (A)" <avi.cohen@huawei.com>
To: "Mooney, Sean K" <sean.k.mooney@intel.com>,
	"dpdk-ovs@lists.01.org" <dpdk-ovs@lists.01.org>,
	"users@dpdk.org" <users@dpdk.org>,
	"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Re: [dpdk-users] OVS-DPDK - Very poor performance when connected to namespace/container
Date: Thu, 15 Jun 2017 08:49:36 +0000	[thread overview]
Message-ID: <B84047ECBD981D4B93EAE5A6245AA361013BE2F8@FRAEML521-MBX.china.huawei.com> (raw)
In-Reply-To: <4B1BB321037C0849AAE171801564DFA6888840AA@IRSMSX107.ger.corp.intel.com>



> -----Original Message-----
> From: Mooney, Sean K [mailto:sean.k.mooney@intel.com]
> Sent: Thursday, 15 June, 2017 11:24 AM
> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org; ovs-
> discuss@openvswitch.org
> Cc: Mooney, Sean K
> Subject: RE: OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> 
> 
> > -----Original Message-----
> > From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On Behalf Of Avi
> > Cohen (A)
> > Sent: Thursday, June 15, 2017 8:14 AM
> > To: dpdk-ovs@lists.01.org; users@dpdk.org; ovs-discuss@openvswitch.org
> > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> > Hello All,
> > I have OVS-DPDK connected to a namespace via a veth pair device.
> >
> > I get very poor performance compared to normal OVS (i.e. no DPDK).
> > For example, TCP jumbo-packet throughput: normal OVS ~10 Gbps, OVS-DPDK
> > ~1.7 Gbps.
> >
> > This can be explained as follows:
> > veth is implemented in the kernel; in OVS-DPDK, data is transferred from
> > the veth to user space, while in normal OVS we save this transfer.
> [Mooney, Sean K] That is part of the reason. The other reason this is slow, and the
> main limiter to scaling when adding veth pairs or OVS internal ports to OVS with DPDK,
> is that these Linux kernel ports are not processed by the DPDK PMDs. They are
> served by the ovs-vswitchd main thread via a fallback to the non-DPDK-accelerated
> netdev implementation.
> >
> > Is there any other device to connect to the namespace? Something like
> > vhost-user? I understand that vhost-user cannot be used for a namespace.
> [Mooney, Sean K] I have been doing some experiments in this regard.
> You should be able to use the tap, pcap or af_packet PMD to add a vdev, which should
> improve performance. I have seen some strange issues with the tap PMD that
> cause packets to be dropped by the kernel on tx on some ports but not others, so
> there may be issues with that driver.
> 
> A previous experiment with libpcap seemed to work well with OVS 2.5, but I have
> not tried it with OVS 2.7/master since the introduction of generic vdev support
> at runtime. Previously, vdevs had to be allocated using the DPDK args.
> 
> I would try following the af_packet example here
> https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d6809
> 01a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> 
[Avi Cohen (A)] 
Thank you, Sean.
I already tried connecting the namespace with a tap device (see commands 1 and 2 below) and got the worst performance of all.
For some reason the packet is cut down to the default MTU inside OVS-DPDK when it transmits the packet to its peer, although the MTU on all interfaces was set to 9000 (see also the MTU sketch after the commands below).

 1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1 type=internal

 2. ip link set tap1 netns ns1    # attach it to the namespace
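
One thing I still want to double-check on my side is where the MTU gets
clamped. A rough sketch of what I plan to verify, assuming an OVS version
new enough to have the mtu_request column in the Interface table (tap1 is
the port from my setup above; dpdk0 is just a placeholder for a physical
DPDK port, if one is attached to the same bridge):

 ovs-vsctl set Interface tap1 mtu_request=9000     # internal port on the bridge
 ovs-vsctl set Interface dpdk0 mtu_request=9000    # matching MTU on the physical DPDK port, if any
 ip netns exec ns1 ip link set dev tap1 mtu 9000   # the end that was moved into the namespace

If any DPDK-side port is left at the default 1500, frames larger than that
may be dropped or truncated on that hop even though the kernel interfaces
all report MTU 9000.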

I'm looking at your link about creating a virtual PMD with vdev support. I see there how to create a virtual PMD device, but I'm not sure how it gets connected to the namespace. What device should I assign to the namespace?
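
Concretely, the wiring I have in mind (a rough sketch only, assuming OVS
2.7+ with the runtime vdev support from your link and DPDK's af_packet PMD;
br0, veth0, veth1 and afpkt0 are placeholder names on my side, not taken
from the howto):

 # veth pair: one end stays in the default namespace for the PMD to open,
 # the other end is moved into the namespace
 ip link add veth0 type veth peer name veth1
 ip link set veth1 netns ns1
 ip link set veth0 up
 ip netns exec ns1 ip link set veth1 up

 # af_packet vdev that opens veth0, so the port is polled in userspace
 ovs-vsctl add-port br0 afpkt0 -- set Interface afpkt0 type=dpdk \
     options:dpdk-devargs=eth_af_packet0,iface=veth0

Is that the intended wiring? And should the afpkt0 port then show up in
"ovs-appctl dpif-netdev/pmd-rxq-show" as serviced by a PMD thread, rather
than by the main ovs-vswitchd thread like the internal/veth ports above?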

Best Regards
avi

> If you happen to be investigating this for use with OpenStack routers, we are
> currently working on a way to remove the use of namespaces entirely for DVR
> when using the default Neutron agent; SDN controllers such as OVN already
> provide that functionality.
> >
> > Best Regards
> > avi
> > _______________________________________________
> > Dpdk-ovs mailing list
> > Dpdk-ovs@lists.01.org
> > https://lists.01.org/mailman/listinfo/dpdk-ovs

Thread overview (8+ messages):
2017-06-15  7:13 Avi Cohen (A)
     [not found] ` <4B1BB321037C0849AAE171801564DFA6888840AA@IRSMSX107.ger.corp.intel.com>
2017-06-15  8:49   ` Avi Cohen (A) [this message]
     [not found]     ` <4B1BB321037C0849AAE171801564DFA6888842FA@IRSMSX107.ger.corp.intel.com>
2017-06-15 12:13       ` Avi Cohen (A)
2017-06-16  8:56       ` Gray, Mark D
2017-06-16 16:53         ` [dpdk-users] [ovs-discuss] " Darrell Ball
2017-06-16 17:01           ` Mooney, Sean K
2017-06-16 17:25             ` Darrell Ball
2017-06-18  6:51               ` Avi Cohen (A)
