From: Abhijeet Karve
Date: Fri, 5 Feb 2016 20:46:20 +0530
To: "Chandran, Sugesh", "Czesnowicz, Przemyslaw"
Cc: "dev@dpdk.org", "discuss@openvswitch.org"
Subject: Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup

Dear Sugesh, Przemek,

Thanks for your extended support. We have finished setting up Open vSwitch netdev-dpdk with vhost-user in our OpenStack environment. The following changes were made to our infrastructure:

1. VXLAN is not supported by our external bare-metal equipment (gateways), so we replaced the VXLAN network with a VLAN network type in our OpenStack setup.
2. Added ARP entries on the br-dpdk bridge to answer any ARP requests.
3. Adjusted the MTU settings on our external network equipment.

Thank you once again, everyone.

Thanks & Regards
Abhijeet Karve

From: "Chandran, Sugesh"
To: Abhijeet Karve, "Czesnowicz, Przemyslaw"
Cc: "dev@dpdk.org", "discuss@openvswitch.org"
Date: 02/04/2016 11:08 PM
Subject: RE: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup

Hi Abhijeet,

It looks to me that the ARP entries may not be populated correctly for the VxLAN ports in OVS. Can you please refer to the debug section in
http://openvswitch.org/support/config-cookbooks/userspace-tunneling/
to verify the ARP entries and insert the right ones in case they are missing?

Regards
_Sugesh

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Thursday, February 4, 2016 2:55 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with
> igb_uio in ovs+dpdk setup
>
> Hi All,
>
> Is the issue we are facing, as described in the previous threads, caused by
> setting up OVS+DPDK with the igb_uio driver instead of vfio-pci?
>
> We would appreciate any suggestions on this.
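For reference, one quick way to see which kernel driver each DPDK NIC is bound to, and to rebind a port to vfio-pci, is the binding script shipped in the DPDK source tree. This is only a sketch: the script path matches DPDK 2.x source trees and the PCI address is hypothetical.

    # show current driver bindings
    ./tools/dpdk_nic_bind.py --status
    # rebind one port from igb_uio to vfio-pci
    modprobe vfio-pci
    ./tools/dpdk_nic_bind.py --bind=vfio-pci 0000:02:00.0

Note that vfio-pci also needs IOMMU support enabled on the kernel command line (e.g. iommu=pt intel_iommu=on, as in the GRUB settings later in this thread).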
>
> Thanks & Regards
> Abhijeet Karve
>
>
> To: przemyslaw.czesnowicz@intel.com
> From: Abhijeet Karve
> Date: 01/30/2016 07:32PM
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Subject: Fw: RE: [dpdk-dev] DPDK vhostuser with vxlan
>
>
> Hi Przemek,
>
>
> We have set up a VXLAN tunnel between our two compute nodes and can see the
> traffic on the vxlan port of br-tun on the source instance's compute node.
>
> We are in the same situation described in the thread below. I looked through
> the dev mailing list archives for it, but it seems no one has responded.
>
> http://comments.gmane.org/gmane.linux.network.openvswitch.general/9878
>
> We would really appreciate any suggestions on it.
>
>
> Thanks & Regards
> Abhijeet Karve
>
> -----Forwarded by on 01/30/2016 07:24PM -----
>
> =======================
> To: "Czesnowicz, Przemyslaw"
> From: Abhijeet Karve/AHD/TCS@TCS
> Date: 01/27/2016 09:52PM
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
> =======================
> Hi Przemek,
>
> Thanks for the quick response. We are now able to get DHCP IPs for the two
> vhost-user instances, and they can ping each other. The issue was a bug in the
> cirros 0.3.0 image we were using in OpenStack; after switching to the 0.3.1
> image, as given in
> https://www.redhat.com/archives/rhos-list/2013-August/msg00032.html,
> the vhost-user VM instances get their IPs.
>
> As per our understanding, the packet flow across the DPDK datapath is:
> the vhost-user ports are connected to the br-int bridge, which is patched
> to the br-dpdk bridge, where our physical network (NIC) is connected via
> the dpdk0 port.
>
> So to test the flow, we have to connect that physical network (NIC) to an
> external packet generator (e.g. IXIA, iperf) and run the testpmd application
> in the vhost-user VM, right?
>
> Is it required to add any flows to the bridge configurations (either br-int
> or br-dpdk)?
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 01/27/2016 05:11 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
>
>
> Hi Abhijeet,
>
>
> It seems you are almost there!
> When booting the VMs, do you request hugepage memory for them
> (by setting hw:mem_page_size=large in the flavor extra_spec)?
> If not, then please do; if yes, then please look into the libvirt logfiles
> for the VMs (in /var/log/libvirt/qemu/instance-xxx), I think there could be
> a clue.
>
>
> Regards
> Przemek
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Monday, January 25, 2016 6:13 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
>
> Hi Przemek,
>
> Thank you for your response; it really gave us a breakthrough.
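For reference, the hugepage request Przemek describes above is set as a flavor extra spec from the CLI. A minimal sketch, with the flavor name m1.dpdk hypothetical:

    nova flavor-key m1.dpdk set hw:mem_page_size=large

Instances booted from that flavor are then backed by hugepage memory, which the vhost-user datapath requires.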
>
> After setting up DPDK on the compute node for stable/kilo, we are trying to
> set up an OpenStack stable/liberty all-in-one setup. At present we are not
> able to get IP allocation for the vhost-type instances through DHCP. We also
> tried assigning IPs to them manually, but inter-VM communication is not
> happening either.
>
> #neutron agent-list
> root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
> +--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
> | id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
> +--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
> | 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
> | 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
> | 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
> | b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
> | c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
> | f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
> +--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
>
>
> ovs-vsctl show output#
> --------------------------------------------------------
>     Bridge br-dpdk
>         Port br-dpdk
>             Interface br-dpdk
>                 type: internal
>         Port phy-br-dpdk
>             Interface phy-br-dpdk
>                 type: patch
>                 options: {peer=int-br-dpdk}
>     Bridge br-int
>         fail_mode: secure
>         Port "vhufa41e799-f2"
>             tag: 5
>             Interface "vhufa41e799-f2"
>                 type: dpdkvhostuser
>         Port int-br-dpdk
>             Interface int-br-dpdk
>                 type: patch
>                 options: {peer=phy-br-dpdk}
>         Port "tap4e19f8e1-59"
>             tag: 5
>             Interface "tap4e19f8e1-59"
>                 type: internal
>         Port "vhu05734c49-3b"
>             tag: 5
>             Interface "vhu05734c49-3b"
>                 type: dpdkvhostuser
>         Port "vhu10c06b4d-84"
>             tag: 5
>             Interface "vhu10c06b4d-84"
>                 type: dpdkvhostuser
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port "vhue169c581-ef"
>             tag: 5
>             Interface "vhue169c581-ef"
>                 type: dpdkvhostuser
>         Port br-int
>             Interface br-int
>                 type: internal
>     Bridge br-tun
>         fail_mode: secure
>         Port br-tun
>             Interface br-tun
>                 type: internal
>                 error: "could not open network device br-tun (Invalid argument)"
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>     ovs_version: "2.4.0"
> --------------------------------------------------------
>
>
> ovs-ofctl dump-flows br-int#
> --------------------------------------------------------
> root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
>  cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
>  cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
>  cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
> root@nfv-dpdk-devstack:/etc/neutron#
> --------------------------------------------------------
>
>
> Also attaching the neutron-server, nova-compute and nova-scheduler logs.
>
> It would be really great for us to get any hint to overcome this inter-VM
> and DHCP communication issue.
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 01/04/2016 07:54 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Getting memory backing issues with qemu parameter passing
>
>
> You should be able to clone networking-ovs-dpdk, switch to the kilo branch,
> and run "python setup.py install" in the root of networking-ovs-dpdk; that
> should install the agent and mech driver.
> Then you would need to enable the mech driver (ovsdpdk) on the controller in
> /etc/neutron/plugins/ml2/ml2_conf.ini
> and run the right agent on the computes (networking-ovs-dpdk-agent).
>
>
> There should be pip packages of networking-ovs-dpdk available shortly;
> I'll let you know when that happens.
>
> Przemek
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Thursday, December 24, 2015 6:42 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Getting memory backing issues with qemu parameter passing
>
> Hi Przemek,
>
> Thank you so much for your quick response.
>
> The guide (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst)
> which you suggested is for OpenStack vhost-user installations with devstack.
> Is there any reference for including the ovs-dpdk mechanism driver in the
> OpenStack Ubuntu distribution that we are following for our
> compute+controller node setup?
>
> We are facing the issues listed below with the current approach of setting
> up OpenStack Kilo interactively, replacing OVS with DPDK-enabled OVS,
> creating an instance in OpenStack, and
> passing that instance id to the QEMU command line, which in turn passes the
> vhost-user sockets to the instances to enable the DPDK libraries in them.
>
>
> 1. We created a flavor m1.hugepages which is backed by hugepage memory, but
> we are unable to spawn an instance with this flavor: we get an issue like
> "No matching hugetlbfs for the number of hugepages assigned to the flavor."
> 2. When passing socket info to the instances via qemu manually, the
> instances created are not persistent.
>
> Now, as you suggested, we are looking into enabling the ovsdpdk ml2
> mechanism driver and agent in our OpenStack Ubuntu distribution; a sketch of
> those steps follows below.
>
> We would really appreciate any help or reference with an explanation.
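For reference, Przemek's steps above correspond roughly to the following. This is a sketch only: it assumes the stable/kilo branch of networking-ovs-dpdk, and the agent invocation flags are an assumption based on the usual Neutron agent pattern.

    git clone https://github.com/openstack/networking-ovs-dpdk.git
    cd networking-ovs-dpdk
    git checkout stable/kilo
    python setup.py install

    # on the controller, enable the mech driver in
    # /etc/neutron/plugins/ml2/ml2_conf.ini:
    #   [ml2]
    #   mechanism_drivers = ovsdpdk

    # on each compute node, run the matching agent:
    networking-ovs-dpdk-agent --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini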
>
> We are using a compute + controller node setup with the following software
> platform on the compute node:
> _____________
> OpenStack: Kilo
> Distribution: Ubuntu 14.04
> OVS Version: 2.4.0
> DPDK: 2.0.0
> _____________
>
> Thanks,
> Abhijeet Karve
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 12/17/2015 06:32 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
>
> I haven't tried that approach; I'm not sure it would work, and it seems
> clunky.
>
> If you enable the ovsdpdk ml2 mechanism driver and agent, all of that
> (adding ports to OVS with the right type, passing the sockets to qemu) would
> be done by OpenStack.
>
> Przemek
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Thursday, December 17, 2015 12:41 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Hi Przemek,
>
> Thank you so much for sharing the reference guide.
>
> We would appreciate it if you could clear up one doubt.
>
> At present we are setting up OpenStack Kilo interactively and then replacing
> OVS with DPDK-enabled OVS.
> Once the above setup is done, we create an instance in OpenStack and pass
> that instance id to the QEMU command line, which in turn passes the
> vhost-user sockets to the instances, enabling the DPDK libraries in them.
>
> Isn't this the correct way of integrating ovs-dpdk with OpenStack?
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 12/17/2015 05:27 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
>
> Hi Abhijeet,
>
> For Kilo you need to use the ovsdpdk mechanism driver and a matching agent
> to integrate ovs-dpdk with OpenStack.
>
> The guide you are following only talks about running ovs-dpdk, not how it
> should be integrated with OpenStack.
>
> Please follow this guide:
> https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
>
> Best regards
> Przemek
>
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Wednesday, December 16, 2015 9:37 AM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Hi Przemek,
>
>
> We have configured the accelerated data path between a physical interface
> and the VM using openvswitch netdev-dpdk with vhost-user support. A VM
> created with this special data path and the vhost library is what I am
> calling a DPDK instance.
>
> If we assign IPs manually to the newly created cirros VM instances, we are
> able to make two VMs on the same compute node communicate. Otherwise, no IP
> is associated through DHCP, even though the DHCP server is on the same
> compute node.
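For reference, passing a vhost-user socket to QEMU by hand, as described above, looks roughly like the following. This is a sketch only: the socket path, memory size and MAC address are hypothetical, and it assumes guest memory is hugepage-backed and shared, which vhost-user requires.

    qemu-system-x86_64 ... \
      -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge_1G,share=on \
      -numa node,memdev=mem -mem-prealloc \
      -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhost-user-0 \
      -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
      -device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01

As Przemek notes, with the ovsdpdk mechanism driver and agent in place, OpenStack generates these arguments itself.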
>
> Yes, it's a compute + controller node setup, and we are using the following
> software platform on the compute node:
> _____________
> OpenStack: Kilo
> Distribution: Ubuntu 14.04
> OVS Version: 2.4.0
> DPDK: 2.0.0
> _____________
>
> We are following the Intel guide:
> https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
>
> Running "ovs-vsctl show" on the compute node gives the output below:
> _____________________________________________
> ovs-vsctl show
> c2ec29a5-992d-4875-8adc-1265c23e0304
>     Bridge br-ex
>         Port phy-br-ex
>             Interface phy-br-ex
>                 type: patch
>                 options: {peer=int-br-ex}
>         Port br-ex
>             Interface br-ex
>                 type: internal
>     Bridge br-tun
>         fail_mode: secure
>         Port br-tun
>             Interface br-tun
>                 type: internal
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>     Bridge br-int
>         fail_mode: secure
>         Port "qvo0ae19a43-b6"
>             tag: 2
>             Interface "qvo0ae19a43-b6"
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "qvo31c89856-a2"
>             tag: 1
>             Interface "qvo31c89856-a2"
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port int-br-ex
>             Interface int-br-ex
>                 type: patch
>                 options: {peer=phy-br-ex}
>         Port "qvo97fef28a-ec"
>             tag: 2
>             Interface "qvo97fef28a-ec"
>     Bridge br-dpdk
>         Port br-dpdk
>             Interface br-dpdk
>                 type: internal
>     Bridge "br0"
>         Port "br0"
>             Interface "br0"
>                 type: internal
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>         Port "vhost-user-2"
>             Interface "vhost-user-2"
>                 type: dpdkvhostuser
>         Port "vhost-user-0"
>             Interface "vhost-user-0"
>                 type: dpdkvhostuser
>         Port "vhost-user-1"
>             Interface "vhost-user-1"
>                 type: dpdkvhostuser
>     ovs_version: "2.4.0"
> root@dpdk:~#
> _____________________________________________
>
> The OpenFlow flows on the bridges on the compute node are as below:
> _____________________________________________
> root@dpdk:~# ovs-ofctl dump-flows br-tun
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
>  cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
>  cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
>  cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
>  cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
>  cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
>  cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
>  cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
> root@dpdk:~#
> root@dpdk:~# ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
>  cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
>  cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
> root@dpdk:~#
> _____________________________________________
>
>
> Further, we do not know what network changes (flow additions), if any, are
> required for IP addresses to be assigned through DHCP.
>
> We would really appreciate some clarity on how the DHCP flows are
> established.
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve, "Gray, Mark D"
> Cc: "dev@dpdk.org", "discuss@openvswitch.org"
> Date: 12/15/2015 09:13 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
>
> Hi Abhijeet,
>
> If you answer the questions below, it will help me understand your problem.
>
> What do you mean by DPDK instance?
> Are you able to communicate with other VMs on the same compute node?
> Can you check if the DHCP requests arrive on the controller node? (I'm
> assuming this is at least a compute + controller setup.)
>
> Best regards
> Przemek
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> > Sent: Tuesday, December 15, 2015 5:56 AM
> > To: Gray, Mark D
> > Cc: dev@dpdk.org; discuss@openvswitch.org
> > Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> > Successfully setup DPDK OVS with vhostuser
> >
> > Dear All,
> >
> > After setting up the system boot parameters as shown below, the issue is
> > resolved, and we are able to successfully set up openvswitch netdev-dpdk
> > with vhost-user support.
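For reference, vhost-user ports like the vhost-user-0/1/2 ports in the "ovs-vsctl show" output above are created by adding a port of type dpdkvhostuser; a minimal sketch:

    ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser

OVS then creates the socket under its run directory (e.g. /usr/local/var/run/openvswitch/vhost-user-0 for a source build) for QEMU to connect to.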
> >
> > ______________________________________________________
> > Set up 2 sets of huge pages with different sizes: one for vhost and
> > another for the guest VMs.
> > - Edit /etc/default/grub:
> >     GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G hugepages=10 hugepagesz=2M hugepages=4096"
> >     # update-grub
> > - Mount the huge pages into different directories:
> >     # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
> >     # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> > ______________________________________________________
> >
> > At present we are facing an issue in testing a DPDK application on this
> > setup. In our scenario, we have a DPDK instance launched on top of the
> > OpenStack Kilo compute node, and it is not able to get a DHCP IP from the
> > controller.
> >
> >
> > Thanks & Regards
> > Abhijeet Karve
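For reference, after rebooting with those kernel parameters, the two hugepage pools can be verified as below; a sketch, with the expected counts being what the GRUB line above requests:

    grep Huge /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages      # expect 4096
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages   # expect 10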