DPDK usage discussions
From: "Avi Cohen (A)" <avi.cohen@huawei.com>
To: "Mooney, Sean K" <sean.k.mooney@intel.com>,
	"dpdk-ovs@lists.01.org" <dpdk-ovs@lists.01.org>,
	"users@dpdk.org" <users@dpdk.org>,
	"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Re: [dpdk-users] OVS-DPDK - Very poor performance when connected to namespace/container
Date: Thu, 15 Jun 2017 12:13:45 +0000	[thread overview]
Message-ID: <B84047ECBD981D4B93EAE5A6245AA361013BE369@FRAEML521-MBX.china.huawei.com> (raw)
In-Reply-To: <4B1BB321037C0849AAE171801564DFA6888842FA@IRSMSX107.ger.corp.intel.com>



> -----Original Message-----
> From: Mooney, Sean K [mailto:sean.k.mooney@intel.com]
> Sent: Thursday, 15 June, 2017 2:33 PM
> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org; ovs-
> discuss@openvswitch.org
> Subject: RE: OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> 
> 
> > -----Original Message-----
> > From: Avi Cohen (A) [mailto:avi.cohen@huawei.com]
> > Sent: Thursday, June 15, 2017 9:50 AM
> > To: Mooney, Sean K <sean.k.mooney@intel.com>; dpdk-ovs@lists.01.org;
> > users@dpdk.org; ovs-discuss@openvswitch.org
> > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> >
> >
> > > -----Original Message-----
> > > From: Mooney, Sean K [mailto:sean.k.mooney@intel.com]
> > > Sent: Thursday, 15 June, 2017 11:24 AM
> > > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org; ovs-
> > > discuss@openvswitch.org
> > > Cc: Mooney, Sean K
> > > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > > namespace/container
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On Behalf Of
> > > > Avi Cohen (A)
> > > > Sent: Thursday, June 15, 2017 8:14 AM
> > > > To: dpdk-ovs@lists.01.org; users@dpdk.org;
> > > > ovs-discuss@openvswitch.org
> > > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when
> > > > connected to namespace/container
> > > >
> > > > Hello All,
> > > > I have OVS-DPDK connected to a namespace via a veth pair device.
> > > >
> > > > I get very poor performance compared to normal OVS (i.e. no DPDK).
> > > > For example, TCP jumbo-packet throughput: normal OVS ~10 Gbps,
> > > > OVS-DPDK ~1.7 Gbps.
> > > >
> > > > This can be explained as follows:
> > > > veth is implemented in the kernel - with OVS-DPDK, data is
> > > > transferred from the veth to user space, while with normal OVS we
> > > > avoid this transfer.
> > > [Mooney, Sean K] That is part of the reason. The other reason this is
> > > slow, and the main limiter to scaling when adding veth pairs or OVS
> > > internal ports to OVS with DPDK, is that these Linux kernel ports are
> > > not processed by the DPDK PMDs. They are served by the ovs-vswitchd
> > > main thread via a fallback to the non-DPDK-accelerated netdev
> > > implementation.
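> > > You can see this for yourself with something like the following
> > > (treat it as a sketch - the exact appctl command names depend on
> > > your OVS version):
> > >
> > >   # list the rx queues polled by each PMD thread; veth/internal
> > >   # ports should not show up here
> > >   ovs-appctl dpif-netdev/pmd-rxq-show
> > >   # per-PMD packet and cycle statistics
> > >   ovs-appctl dpif-netdev/pmd-stats-show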
> > > >
> > > > Is there any other device to connect to the namespace? Something
> > > > like vhost-user? I understand that vhost-user cannot be used for a
> > > > namespace.
> > > [Mooney, Sean K] I have been doing some experiments in this regard.
> > > You should be able to use the tap, pcap or af_packet PMD to add a
> > > vdev that will improve performance. I have seen some strange issues
> > > with the tap PMD that cause packets to be dropped by the kernel on TX
> > > on some ports but not others, so there may be issues with that driver.
> > >
> > > A previous experiment with libpcap seemed to work well with OVS 2.5,
> > > but I have not tried it with OVS 2.7/master since the introduction of
> > > generic vdev support at runtime. Previously vdevs had to be allocated
> > > using the dpdk args.
> > >
> > > I would try following the af_packet example here:
> > >
> > > https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d680901a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
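> > >
> > > For reference, the pcap variant would look roughly like the following
> > > with the 2.7-style devargs (untested by me on master; the port and
> > > interface names are just placeholders):
> > >
> > >   # attach an existing kernel interface through the pcap vdev
> > >   ovs-vsctl add-port br0 pcap0 -- set Interface pcap0 type=dpdk \
> > >       options:dpdk-devargs=eth_pcap0,iface=tap1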
> > >
> > [Avi Cohen (A)]
> > Thank you, Sean.
> > I already tried to connect the namespace with a tap device (see steps 1
> > and 2 below) - and got the worst performance. For some reason the packet
> > is cut down to the default MTU inside OVS-DPDK, which transmits the
> > packet to its peer - although the MTU of all interfaces was set to 9000
> > (commands after the two steps below).
> >
> >  1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1 type=internal
> >
> >  2. ip link set tap1 netns ns1   # attach it to the namespace
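> >
> > For reference, I set the 9000-byte MTU roughly like this (the names are
> > from my setup, so treat it as a sketch):
> >
> >  ovs-vsctl set Interface dpdk0 mtu_request=9000    # physical dpdk port
> >  ip netns exec ns1 ip link set dev tap1 mtu 9000   # port inside the namespace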
> [Mooney, Sean K] This is not using the DPDK tap PMD. Internal ports and
> veth ports, if added to OVS, will not be accelerated by DPDK unless you
> use a vdev to attach them.
> >
> > I'm looking at your link to create a virtual PMD with vdev support - I
> > see the creation of a virtual PMD device there, but I'm not sure how
> > this is connected to the namespace. What device should I assign to the
> > namespace?
> [Mooney, Sean K]
> You would use it as follows:
> 
> ip tuntap add dev tap1 mode tap
> 
> ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
> options:dpdk-devargs=eth_af_packet0,iface=tap1
[Avi Cohen (A)]
Thanks Sean - are you sure about the syntax? I get an error message [could not open network device tap1 - No such device] when I add-port.
The syntax in your link is different - note there are both myeth0 and eth0, while your command uses only tap1.
The command in the link is as follows:
" ovs-vsctl add-port br0 myeth0 -- set Interface myeth0 type=dpdk \
    options:dpdk-devargs=eth_af_packet0,iface=eth0"
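
Adapting it literally, I would expect something like the following, keeping the OVS port name distinct from the kernel tap name as in the docs (this is only my guess - please correct me if that distinction doesn't matter):

 ip tuntap add dev tap1 mode tap    # create the kernel tap device first
 ip link set tap1 up
 ovs-vsctl add-port br0 dpdktap1 -- set Interface dpdktap1 type=dpdk \
     options:dpdk-devargs=eth_af_packet0,iface=tap1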

> 
> ip link set tap1 netns ns1
> 
> ip netns exec ns1 ifconfig tap1 192.168.1.1/24 up
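>
> To check that jumbo frames actually survive the path, something like the
> following should work once the far end (192.168.1.2 here is just an
> example address) is reachable with a 9000-byte MTU on both sides:
>
> ip netns exec ns1 ip link set dev tap1 mtu 9000
> # 8972 = 9000 - 20 (IPv4 header) - 8 (ICMP header); -M do sets the DF bit
> ip netns exec ns1 ping -M do -s 8972 -c 3 192.168.1.2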
> 
> In general, though, if you are using OVS-DPDK you should avoid using
> network namespaces and the kernel where possible, but the above should
> improve your performance. One caveat: the number of vdev + physical
> interfaces is limited by how DPDK is compiled - by default to 32 devices,
> but it can be increased to 256 if required.
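>
> If you need to raise that limit, it is the RTE_MAX_ETHPORTS build-time
> option (location from memory for DPDK of this vintage - rebuild DPDK and
> then OVS against it):
>
> # in <dpdk>/config/common_base
> CONFIG_RTE_MAX_ETHPORTS=256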
> 
> >
> > Best Regards
> > avi
> >
> > > If you happen to be investigating this for use with OpenStack
> > > routers, we are currently working on a way to remove the use of
> > > namespaces entirely for DVR when using the default Neutron agent, and
> > > SDN controllers such as OVN already provide that functionality.
> > > >
> > > > Best Regards
> > > > avi
> > > > _______________________________________________
> > > > Dpdk-ovs mailing list
> > > > Dpdk-ovs@lists.01.org
> > > > https://lists.01.org/mailman/listinfo/dpdk-ovs


Thread overview: 8+ messages
2017-06-15  7:13 Avi Cohen (A)
     [not found] ` <4B1BB321037C0849AAE171801564DFA6888840AA@IRSMSX107.ger.corp.intel.com>
2017-06-15  8:49   ` Avi Cohen (A)
     [not found]     ` <4B1BB321037C0849AAE171801564DFA6888842FA@IRSMSX107.ger.corp.intel.com>
2017-06-15 12:13       ` Avi Cohen (A) [this message]
2017-06-16  8:56       ` Gray, Mark D
2017-06-16 16:53         ` [dpdk-users] [ovs-discuss] " Darrell Ball
2017-06-16 17:01           ` Mooney, Sean K
2017-06-16 17:25             ` Darrell Ball
2017-06-18  6:51               ` Avi Cohen (A)
