From: "Avi Cohen (A)" <avi.cohen@huawei.com>
To: "Mooney, Sean K" <sean.k.mooney@intel.com>, dpdk-ovs@lists.01.org, users@dpdk.org, ovs-discuss@openvswitch.org
Date: Thu, 15 Jun 2017 12:13:45 +0000
Subject: Re: [dpdk-users] OVS-DPDK - Very poor performance when connected to namespace/container
In-Reply-To: <4B1BB321037C0849AAE171801564DFA6888842FA@IRSMSX107.ger.corp.intel.com>
References: <4B1BB321037C0849AAE171801564DFA6888840AA@IRSMSX107.ger.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6888842FA@IRSMSX107.ger.corp.intel.com>
List-Id: DPDK usage discussions

> -----Original Message-----
> From: Mooney, Sean K [mailto:sean.k.mooney@intel.com]
> Sent: Thursday, 15 June, 2017 2:33 PM
> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org; ovs-discuss@openvswitch.org
> Subject: RE: OVS-DPDK - Very poor performance when connected to namespace/container
>
> > -----Original Message-----
> > From: Avi Cohen (A) [mailto:avi.cohen@huawei.com]
> > Sent: Thursday, June 15, 2017 9:50 AM
> > To: Mooney, Sean K; dpdk-ovs@lists.01.org; users@dpdk.org; ovs-discuss@openvswitch.org
> > Subject: RE: OVS-DPDK - Very poor performance when connected to namespace/container
> >
> > > -----Original Message-----
> > > From: Mooney, Sean K [mailto:sean.k.mooney@intel.com]
> > > Sent: Thursday, 15 June, 2017 11:24 AM
> > > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org; ovs-discuss@openvswitch.org
> > > Cc: Mooney, Sean K
> > > Subject: RE: OVS-DPDK - Very poor performance when connected to namespace/container
> > >
> > > > -----Original Message-----
> > > > From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On Behalf Of Avi Cohen (A)
> > > > Sent: Thursday, June 15, 2017 8:14 AM
> > > > To: dpdk-ovs@lists.01.org; users@dpdk.org; ovs-discuss@openvswitch.org
> > > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when connected to namespace/container
> > > >
> > > > Hello All,
> > > > I have OVS-DPDK connected to a namespace via a veth pair device.
> > > >
> > > > I get very poor performance compared to normal OVS (i.e. no DPDK).
> > > > For example - TCP jumbo-packet throughput: normal OVS ~10 Gbps,
> > > > OVS-DPDK 1.7 Gbps.
> > > >
> > > > This can be explained as follows:
> > > > veth is implemented in the kernel - in OVS-DPDK data is transferred
> > > > from veth to user space, while in normal OVS we save this transfer.
> > > [Mooney, Sean K] That is part of the reason. The other reason this is
> > > slow, and the main limiter to scaling when adding veth pairs or OVS
> > > internal ports to OVS with DPDK, is that these Linux kernel ports are
> > > not processed by the DPDK PMDs. They are served by the ovs-vswitchd
> > > main thread via a fallback to the non-DPDK-accelerated netdev
> > > implementation.
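
(Side note: a quick way to confirm this on a running switch - assuming a
reasonably recent OVS, 2.6 or newer, where these ovs-appctl commands are
available - is to compare what the PMD threads actually poll against the full
datapath port list; kernel-backed ports such as veth or internal ports will not
appear in the PMD rx-queue listing:)

  # rx queues served by the DPDK PMD threads - only dpdk/vhost ports show here
  ovs-appctl dpif-netdev/pmd-rxq-show

  # per-PMD statistics, to confirm the PMDs are not handling the veth traffic
  ovs-appctl dpif-netdev/pmd-stats-show

  # all datapath ports, including kernel-backed ones served by the main thread
  ovs-appctl dpctl/show
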
> > > >
> > > > Is there any other device to connect to the namespace? Something
> > > > like vhost-user? I understand that vhost-user cannot be used for a
> > > > namespace.
> > > [Mooney, Sean K] I have been doing some experiments in this regard.
> > > You should be able to use the tap, pcap or af_packet PMD to add a vdev
> > > that will improve performance. I have seen some strange issues with
> > > the tap PMD that cause packets to be dropped by the kernel on tx on
> > > some ports but not others, so there may be issues with that driver.
> > >
> > > A previous experiment with libpcap seemed to work well with OVS 2.5,
> > > but I have not tried it with OVS 2.7/master since the introduction of
> > > generic vdev support at runtime. Previously vdevs had to be allocated
> > > using the DPDK args.
> > >
> > > I would try following the af_packet example here:
> > > https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d680901a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> > >
> > [Avi Cohen (A)]
> > Thank you Mooney, Sean K
> > I already tried to connect the namespace with a tap device (see 1 & 2
> > below) and got the worst performance. For some reason the packet is cut
> > to the default MTU inside OVS-DPDK, which transmits the packet to its
> > peer - although the MTU of all interfaces was set to 9000.
> >
> > 1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1 type=internal
> >
> > 2. ip link set tap1 netns ns1   // attach it to namespace
> [Mooney, Sean K] This is not using the DPDK tap PMD. Internal ports and
> veth ports, if added to OVS, will not be accelerated by DPDK unless you
> use a vdev to attach them.
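
(Regarding the jumbo-frame observation above: with OVS-DPDK the interface MTU
is set through OVS itself rather than through ip link on the port. A minimal
sketch, assuming OVS 2.6 or newer where mtu_request is honoured for DPDK-backed
ports; the port and namespace names are illustrative:)

  # ask OVS to use a 9000-byte MTU on the DPDK-backed port
  ovs-vsctl set Interface myport mtu_request=9000

  # the kernel end inside the namespace still needs its own MTU raised
  ip netns exec ns1 ip link set dev tap1 mtu 9000
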
> >
> > I'm looking at your link to create a virtual PMD with vdev support - I
> > see there the creation of a virtual PMD device, but I'm not sure how
> > this is connected to the namespace. What device should I assign to the
> > namespace?
> [Mooney, Sean K]
> You would use it as follows:
>
> ip tuntap add dev tap1 mode tap
>
> ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
>     options:dpdk-devargs=eth_af_packet0,iface=tap1

[Avi Cohen (A)]
Thanks Sean - are you sure about the syntax? I get an error message [could not
open network device tap1 - No such device] when I add-port.
The syntax in your link is different - note there are myeth0 and eth0, while in
your command there is only tap1.
The command in the link is as follows:

"ovs-vsctl add-port br0 myeth0 -- set Interface myeth0 type=dpdk \
    options:dpdk-devargs=eth_af_packet0,iface=eth0"

>
> ip link set tap1 netns ns1
>
> ip netns exec ns1 ifconfig 192.168.1.1/24 up
>
> In general, though, if you are using OVS-DPDK you should avoid using
> network namespaces and the kernel where possible, but the above should
> improve your performance. One caveat: the number of vdev + physical
> interfaces is limited by how DPDK is compiled - by default to 32 devices -
> but it can be increased to 256 if required.
>
> > Best Regards
> > avi
>
> > > if you happen to be investigating this for use with openstack routers,
> > > we are currently working on a way to remove the use of namespaces
> > > entirely for DVR when using the default neutron agent, and SDN
> > > controllers such as OVN already provide that functionality.
> > > >
> > > > Best Regards
> > > > avi
> > > > _______________________________________________
> > > > Dpdk-ovs mailing list
> > > > Dpdk-ovs@lists.01.org
> > > > https://lists.01.org/mailman/listinfo/dpdk-ovs
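
(A closing note on the syntax question above: in the dpdk-devargs form the OVS
port name and the iface= argument are independent - iface= must name an
existing kernel netdev (the tap), while the port name is just what OVS calls
the port, which is why the linked example uses myeth0 and eth0. The "No such
device" error may simply mean the tap did not exist in the root namespace when
the port was added. A consolidated sketch of the flow Sean describes, assuming
OVS 2.7+ built with DPDK and the af_packet PMD; all names are illustrative:)

  # create the tap in the root namespace first and bring it up
  ip tuntap add dev tap1 mode tap
  ip link set dev tap1 mtu 9000 up

  # the OVS port name (myport) is arbitrary; iface= must match the tap name
  ovs-vsctl add-port br0 myport -- set Interface myport type=dpdk \
      options:dpdk-devargs=eth_af_packet0,iface=tap1
  ovs-vsctl set Interface myport mtu_request=9000

  # only then move the tap into the namespace and configure it there
  ip link set tap1 netns ns1
  ip netns exec ns1 ip link set dev tap1 mtu 9000 up
  ip netns exec ns1 ip addr add 192.168.1.1/24 dev tap1

(If more than 32 vdev + physical ports are needed, the build-time limit Sean
mentions is, as far as I know, the RTE_MAX_ETHPORTS option in the DPDK config.)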