From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Chandran, Sugesh"
To: Abhijeet Karve, "Czesnowicz, Przemyslaw"
Cc: "dev@dpdk.org", "discuss@openvswitch.org"
Date: Thu, 4 Feb 2016 17:37:47 +0000
Message-ID: <2EF2F5C0CC56984AA024D0B180335FCB13D17763@IRSMSX102.ger.corp.intel.com>
Subject: Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup
List-Id: patches and discussions about DPDK

Hi Abhijeet,

It looks to me that the ARP entries may not be populated correctly for the
VxLAN ports in OVS. Can you please refer to the debug section in

http://openvswitch.org/support/config-cookbooks/userspace-tunneling/

to verify the entries and insert the right ARP entries in case they are
missing?
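On the compute node the cookbook's debug steps boil down to something like
the following (a rough sketch; the remote VTEP address 172.168.1.2 below is
just a placeholder for your setup):

    # show tunnel ports, cached routes and ARP entries of the userspace datapath
    ovs-appctl tnl/ports/show
    ovs-appctl ovs/route/show
    ovs-appctl tnl/arp/show

    # pinging the remote tunnel endpoint from the host normally populates the entry
    ping -c 4 172.168.1.2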
Regards
_Sugesh

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Thursday, February 4, 2016 2:55 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with
> igb_uio in ovs+dpdk setup
>
> Hi All,
>
> Could the issue we are facing, as described in the previous threads, be
> because we set up ovs+dpdk with the igb_uio driver instead of vfio_pci?
>
> We would appreciate any suggestions on this.
>
> Thanks & Regards
> Abhijeet Karve
>
>
> To: przemyslaw.czesnowicz@intel.com
> From: Abhijeet Karve
> Date: 01/30/2016 07:32PM
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Subject: Fw: RE: [dpdk-dev] DPDK vhostuser with vxlan
>
>
> Hi Przemek,
>
> We have set up a VxLAN tunnel between our two compute nodes and can see
> the traffic on the vxlan port of br-tun on the source instance's compute
> node.
>
> We are in the same situation as described in the thread below; I looked
> through the dev mailing list archives but it seems no one has responded
> to it:
>
> http://comments.gmane.org/gmane.linux.network.openvswitch.general/9878
>
> We would really appreciate any suggestions on it.
>
>
> Thanks & Regards
> Abhijeet Karve
>
> -----Forwarded by on 01/30/2016 07:24PM -----
>
> =======================
> To: "Czesnowicz, Przemyslaw"
> From: Abhijeet Karve/AHD/TCS@TCS
> Date: 01/27/2016 09:52PM
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
> =======================
> Hi Przemek,
>
> Thanks for the quick response. We are now able to get DHCP IPs for the
> two vhostuser instances, and they can ping each other. The issue was a
> bug in the cirros 0.3.0 image we were using in OpenStack; after switching
> to the 0.3.1 image, as suggested in
> https://www.redhat.com/archives/rhos-list/2013-August/msg00032.html,
> the vhostuser VM instances get their IPs.
>
> As per our understanding, the packet flow across the DPDK datapath is:
> the vhostuser ports are connected to the br-int bridge, which is patched
> to the br-dpdk bridge, where our physical network (NIC) is connected via
> the dpdk0 port.
>
> So to test the flow we have to connect that physical network (NIC) to an
> external packet generator (e.g. IXIA, iperf) and run the testpmd
> application in the vhostuser VM, right?
>
> Is it required to add any flows/configuration to the bridge setup (either
> br-int or br-dpdk)?
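>
> For reference, inside the guest we plan to run testpmd roughly as below,
> once its virtio interface is bound to a DPDK-compatible driver (the core
> mask and memory-channel count are placeholders for our VM sizing):
>
>     ./testpmd -c 0x3 -n 4 -- -i
>     testpmd> start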
>
> Thanks & Regards
> Abhijeet Karve
>
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 01/27/2016 05:11 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
>
>
> Hi Abhijeet,
>
> It seems you are almost there!
> When booting the VMs, do you request hugepage memory for them
> (by setting hw:mem_page_size=large in the flavor extra_specs)?
> If not, then please do; if yes, then please look into the libvirt log
> files for the VMs (in /var/log/libvirt/qemu/instance-xxx), I think there
> could be a clue.
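>
> For example, something like this (a sketch; "m1.dpdk" stands in for
> whatever flavor your VMs are booted with):
>
>     nova flavor-key m1.dpdk set hw:mem_page_size=large
>     grep -i huge /var/log/libvirt/qemu/instance-*.log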
>
> Regards
> Przemek
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Monday, January 25, 2016 6:13 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
>
> Hi Przemek,
>
> Thank you for your response, it really gave us a breakthrough.
>
> After setting up DPDK on the compute node for stable/kilo, we are trying
> to set up an OpenStack stable/liberty all-in-one setup. At present we are
> not able to get IP allocation for the vhost type instances through DHCP.
> We also tried assigning IPs to them manually, but the inter-VM
> communication is not happening either.
>
> #neutron agent-list
> root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
> +--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
> | id                                   | agent_type         | host               | alive | admin_state_up | binary                    |
> +--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
> | 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack  | :-)   | True           | neutron-openvswitch-agent |
> | 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack  | :-)   | True           | neutron-l3-agent          |
> | 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack  | :-)   | True           | neutron-dhcp-agent        |
> | b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack  | :-)   | True           | neutron-linuxbridge-agent |
> | c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack  | :-)   | True           | neutron-metadata-agent    |
> | f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack  | xxx   | True           | neutron-openvswitch-agent |
> +--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
>
>
> ovs-vsctl show output#
> --------------------------------------------------------
>     Bridge br-dpdk
>         Port br-dpdk
>             Interface br-dpdk
>                 type: internal
>         Port phy-br-dpdk
>             Interface phy-br-dpdk
>                 type: patch
>                 options: {peer=int-br-dpdk}
>     Bridge br-int
>         fail_mode: secure
>         Port "vhufa41e799-f2"
>             tag: 5
>             Interface "vhufa41e799-f2"
>                 type: dpdkvhostuser
>         Port int-br-dpdk
>             Interface int-br-dpdk
>                 type: patch
>                 options: {peer=phy-br-dpdk}
>         Port "tap4e19f8e1-59"
>             tag: 5
>             Interface "tap4e19f8e1-59"
>                 type: internal
>         Port "vhu05734c49-3b"
>             tag: 5
>             Interface "vhu05734c49-3b"
>                 type: dpdkvhostuser
>         Port "vhu10c06b4d-84"
>             tag: 5
>             Interface "vhu10c06b4d-84"
>                 type: dpdkvhostuser
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port "vhue169c581-ef"
>             tag: 5
>             Interface "vhue169c581-ef"
>                 type: dpdkvhostuser
>         Port br-int
>             Interface br-int
>                 type: internal
>     Bridge br-tun
>         fail_mode: secure
>         Port br-tun
>             Interface br-tun
>                 type: internal
>                 error: "could not open network device br-tun (Invalid argument)"
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>     ovs_version: "2.4.0"
> --------------------------------------------------------
>
>
> ovs-ofctl dump-flows br-int#
> --------------------------------------------------------
> root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
>  cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
>  cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
>  cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
> root@nfv-dpdk-devstack:/etc/neutron#
> --------------------------------------------------------
>
>
> Also attaching the Neutron-server, nova-compute & nova-scheduler logs.
>
> It would be really great for us to get any hint to overcome this inter-VM
> & DHCP communication issue.
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 01/04/2016 07:54 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Getting memory backing issues with qemu parameter passing
>
>
> You should be able to clone networking-ovs-dpdk, switch to the kilo
> branch, and run "python setup.py install" in the root of
> networking-ovs-dpdk; that should install the agent and the mech driver.
> Then you would need to enable the mech driver (ovsdpdk) on the controller
> in /etc/neutron/plugins/ml2/ml2_conf.ini
> and run the right agent on the computes (networking-ovs-dpdk-agent).
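>
> Roughly like this (a sketch; the repository is the one behind the guide
> linked further down in this thread):
>
>     git clone https://github.com/openstack/networking-ovs-dpdk.git
>     cd networking-ovs-dpdk
>     git checkout stable/kilo
>     python setup.py install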
>
> There should be pip packages of networking-ovs-dpdk available shortly;
> I'll let you know when that happens.
>
> Przemek
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Thursday, December 24, 2015 6:42 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Getting memory backing issues with qemu parameter passing
>
> Hi Przemek,
>
> Thank you so much for your quick response.
>
> The guide (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst)
> which you suggested is for OpenStack vhost-user installations with
> devstack. Is there any reference for including the ovs-dpdk mechanism
> driver in the OpenStack Ubuntu distribution which we are following for
> our compute+controller node setup?
>
> We are facing the issues listed below with the current approach of
> setting up OpenStack Kilo interactively, replacing ovs with ovs-dpdk,
> creating an instance in OpenStack, and passing that instance id to the
> QEMU command line, which further passes the vhost-user sockets to the
> instances to enable the DPDK libraries in them:
>
> 1. We created a flavor m1.hugepages which is backed by hugepage memory,
> but we are unable to spawn an instance with this flavor; we get an error
> like: "No matching hugetlbfs for the number of hugepages assigned to the
> flavor."
> 2. When passing socket info to instances via qemu manually, the instances
> created are not persistent.
>
> Now, as you suggested, we are looking into enabling the ovsdpdk ml2
> mechanism driver and agent, all of that in our OpenStack Ubuntu
> distribution.
>
> We would really appreciate any help or reference with an explanation.
>
> We are using a compute + controller node setup with the following
> software platform on the compute node:
> _____________
> Openstack: Kilo
> Distribution: Ubuntu 14.04
> OVS Version: 2.4.0
> DPDK 2.0.0
> _____________
>
> Thanks,
> Abhijeet Karve
>
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 12/17/2015 06:32 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
>
> I haven't tried that approach, and I am not sure it would work; it seems
> clunky.
>
> If you enable the ovsdpdk ml2 mechanism driver and agent, all of that
> (adding ports to ovs with the right type, passing the sockets to qemu)
> would be done by OpenStack.
>
> Przemek
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Thursday, December 17, 2015 12:41 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Hi Przemek,
>
> Thank you so much for sharing the reference guide.
>
> We would appreciate it if you could clear up one doubt.
>
> At present we are setting up OpenStack Kilo interactively and then
> replacing ovs with ovs-dpdk.
> Once the above setup is done, we create an instance in OpenStack and pass
> that instance id to the QEMU command line, which further passes the
> vhost-user sockets to the instance, enabling the DPDK libraries in it.
>
> Isn't this the correct way of integrating ovs-dpdk with OpenStack?
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve
> Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
> Date: 12/17/2015 05:27 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
>
> Hi Abhijeet,
>
> For Kilo you need to use the ovsdpdk mechanism driver and a matching
> agent to integrate ovs-dpdk with OpenStack.
>
> The guide you are following only talks about running ovs-dpdk, not how it
> should be integrated with OpenStack.
>
> Please follow this guide:
> https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
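>
> On the controller, the mech driver part boils down to something like this
> (a sketch; keep whatever other drivers your deployment already lists):
>
>     # /etc/neutron/plugins/ml2/ml2_conf.ini
>     [ml2]
>     mechanism_drivers = ovsdpdk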
>
> Best regards
> Przemek
>
>
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Wednesday, December 16, 2015 9:37 AM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Hi Przemek,
>
>
> We have configured the accelerated data path between a physical interface
> and the VM using openvswitch netdev-dpdk with vhost-user support. A VM
> created with this special data path and vhost library is what I am
> calling a DPDK instance.
>
> If we assign an IP manually to the newly created cirros VM instance, we
> are able to make two VMs communicate on the same compute node. Otherwise
> it is not getting any IP through DHCP, even though DHCP is on the compute
> node itself.
>
> Yes, it's a compute + controller node setup, and we are using the
> following software platform on the compute node:
> _____________
> Openstack: Kilo
> Distribution: Ubuntu 14.04
> OVS Version: 2.4.0
> DPDK 2.0.0
> _____________
>
> We are following the Intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
>
> When doing "ovs-vsctl show" on the compute node, it shows the output
> below:
> _____________________________________________
> ovs-vsctl show
> c2ec29a5-992d-4875-8adc-1265c23e0304
>     Bridge br-ex
>         Port phy-br-ex
>             Interface phy-br-ex
>                 type: patch
>                 options: {peer=int-br-ex}
>         Port br-ex
>             Interface br-ex
>                 type: internal
>     Bridge br-tun
>         fail_mode: secure
>         Port br-tun
>             Interface br-tun
>                 type: internal
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>     Bridge br-int
>         fail_mode: secure
>         Port "qvo0ae19a43-b6"
>             tag: 2
>             Interface "qvo0ae19a43-b6"
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "qvo31c89856-a2"
>             tag: 1
>             Interface "qvo31c89856-a2"
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port int-br-ex
>             Interface int-br-ex
>                 type: patch
>                 options: {peer=phy-br-ex}
>         Port "qvo97fef28a-ec"
>             tag: 2
>             Interface "qvo97fef28a-ec"
>     Bridge br-dpdk
>         Port br-dpdk
>             Interface br-dpdk
>                 type: internal
>     Bridge "br0"
>         Port "br0"
>             Interface "br0"
>                 type: internal
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>         Port "vhost-user-2"
>             Interface "vhost-user-2"
>                 type: dpdkvhostuser
>         Port "vhost-user-0"
>             Interface "vhost-user-0"
>                 type: dpdkvhostuser
>         Port "vhost-user-1"
>             Interface "vhost-user-1"
>                 type: dpdkvhostuser
>     ovs_version: "2.4.0"
> root@dpdk:~#
> _____________________________________________
>
> The OpenFlow flows in the bridges on the compute node are as below:
> _____________________________________________
> root@dpdk:~# ovs-ofctl dump-flows br-tun
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
>  cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
>  cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
>  cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
>  cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
>  cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
>  cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
>  cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
> root@dpdk:~#
> root@dpdk:~# ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
>  cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
>  cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
> root@dpdk:~#
> _____________________________________________
>
>
> Further, we don't know what network changes (packet flow additions), if
> any, are required for associating an IP address through DHCP.
>
> We would really appreciate clarity on the DHCP flow establishment.
>
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
>
> From: "Czesnowicz, Przemyslaw"
> To: Abhijeet Karve, "Gray, Mark D"
> Cc: "dev@dpdk.org", "discuss@openvswitch.org"
> Date: 12/15/2015 09:13 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
>
> Hi Abhijeet,
>
> If you answer the questions below, it will help me understand your
> problem:
>
> What do you mean by a DPDK instance?
> Are you able to communicate with other VMs on the same compute node?
> Can you check if the DHCP requests arrive on the controller node? (I'm
> assuming this is at least a compute + controller setup.)
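>
> For the last point, something as simple as the following on the
> controller should show whether the requests arrive (the interface name is
> a placeholder for your setup):
>
>     tcpdump -ni eth0 port 67 or port 68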
>
> Best regards
> Przemek
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> > Sent: Tuesday, December 15, 2015 5:56 AM
> > To: Gray, Mark D
> > Cc: dev@dpdk.org; discuss@openvswitch.org
> > Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> > Successfully setup DPDK OVS with vhostuser
> >
> > Dear All,
> >
> > After setting up the system boot parameters as shown below, the issue
> > is resolved, and we are able to successfully set up openvswitch
> > netdev-dpdk with vhostuser support.
> >
> > _________________________________________________________________
> > Set up 2 sets of huge pages with different sizes, one for vhost and
> > another for the guest VMs.
> >
> > - Edit /etc/default/grub:
> >   GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G hugepages=10 hugepagesz=2M hugepages=4096"
> >   # update-grub
> > - Mount the huge pages in different directories:
> >   # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
> >   # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> > _________________________________________________________________
> >
> > At present we are facing an issue in testing a DPDK application on this
> > setup. In our scenario, we have a DPDK instance launched on top of the
> > OpenStack Kilo compute node, and we are not able to assign a DHCP IP
> > from the controller.
> >
> >
> > Thanks & Regards
> > Abhijeet Karve
> >
> > =====-----=====-----=====
> > Notice: The information contained in this e-mail message and/or
> > attachments to it may contain confidential or privileged information.
> > If you are not the intended recipient, any dissemination, use, review,
> > distribution, printing or copying of the information contained in this
> > e-mail message and/or attachments to it are strictly prohibited. If you
> > have received this communication in error, please notify us by reply
> > e-mail or telephone and immediately and permanently delete the message
> > and any attachments. Thank you