From mboxrd@z Thu Jan  1 00:00:00 1970
From: Abhijeet Karve <abhijeet.karve@tcs.com>
To: "Czesnowicz, Przemyslaw"
Cc: "dev@dpdk.org", "discuss@openvswitch.org"
Date: Wed, 27 Jan 2016 00:44:58 +0530
Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
List-Id: patches and discussions about DPDK

Hi Przemek,

Thank you for your response; it really gave us a breakthrough.

After setting up DPDK on the compute node for stable/kilo, we are now trying to bring up an OpenStack stable/liberty all-in-one setup. At present we are not able to get IP allocation for the vhost-user instances through DHCP.
We also tried assigning IPs to them manually, but inter-VM communication does not work either.

#neutron agent-list
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+

ovs-vsctl show output#
--------------------------------------------------------
Bridge br-dpdk
    Port br-dpdk
        Interface br-dpdk
            type: internal
    Port phy-br-dpdk
        Interface phy-br-dpdk
            type: patch
            options: {peer=int-br-dpdk}
Bridge br-int
    fail_mode: secure
    Port "vhufa41e799-f2"
        tag: 5
        Interface "vhufa41e799-f2"
            type: dpdkvhostuser
    Port int-br-dpdk
        Interface int-br-dpdk
            type: patch
            options: {peer=phy-br-dpdk}
    Port "tap4e19f8e1-59"
        tag: 5
        Interface "tap4e19f8e1-59"
            type: internal
    Port "vhu05734c49-3b"
        tag: 5
        Interface "vhu05734c49-3b"
            type: dpdkvhostuser
    Port "vhu10c06b4d-84"
        tag: 5
        Interface "vhu10c06b4d-84"
            type: dpdkvhostuser
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "vhue169c581-ef"
        tag: 5
        Interface "vhue169c581-ef"
            type: dpdkvhostuser
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    fail_mode: secure
    Port br-tun
        Interface br-tun
            type: internal
            error: "could not open network device br-tun (Invalid argument)"
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
ovs_version: "2.4.0"
--------------------------------------------------------

ovs-ofctl dump-flows br-int#
--------------------------------------------------------
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron#
--------------------------------------------------------

Any hint that helps us overcome this inter-VM and DHCP communication issue would be greatly appreciated.

Thanks & Regards
Abhijeet Karve


"Czesnowicz, Przemyslaw" ---01/04/2016 07:54:52 PM---You should be able to clone networking-ovs-dpdk, switch to kilo branch, and run python setup.py ins

From: "Czesnowicz, Przemyslaw"
To: Abhijeet Karve
Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
Date: 01/04/2016 07:54 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing

You should be able to clone networking-ovs-dpdk, switch to the kilo branch, and run

    python setup.py install

in the root of networking-ovs-dpdk; that should install the agent and mech driver.

Then you would need to enable the mech driver (ovsdpdk) on the controller in /etc/neutron/plugins/ml2/ml2_conf.ini, and run the right agent on the computes (networking-ovs-dpdk-agent).

There should be pip packages of networking-ovs-dpdk available shortly; I'll let you know when that happens.
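For reference, enabling the mechanism driver amounts to an ml2_conf.ini entry along these lines (a sketch only; apart from the `ovsdpdk` driver name taken from this thread, the section keys and values are illustrative and depend on your deployment):

```ini
; /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative excerpt)
[ml2]
; the ovsdpdk mechanism driver is installed by networking-ovs-dpdk
mechanism_drivers = ovsdpdk
; type drivers and tenant network types shown here are placeholders
type_drivers = vlan,vxlan
tenant_network_types = vlan
```

After editing, neutron-server on the controller has to be restarted for the new mechanism driver to load.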
Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing

Hi Przemek,

Thank you so much for your quick response.

The guide you suggested (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) is for OpenStack vhost-user installations with devstack. Is there any reference for including the ovs-dpdk mechanism driver in the OpenStack Ubuntu distribution we are following for a compute+controller node setup?

With our current approach - setting up OpenStack Kilo interactively, replacing OVS with DPDK-enabled OVS, creating an instance in OpenStack, and passing that instance id to the QEMU command line, which in turn passes the vhost-user sockets to the instances to enable the DPDK libraries in them - we are facing the issues listed below:

1. Created a flavor m1.hugepages backed by hugepage memory; unable to spawn an instance with this flavor. We get an error like: "No matching hugetlbfs for the number of hugepages assigned to the flavor."
2. Passing socket info to instances via QEMU manually works, but the instances created are not persistent.

Now, as you suggested, we are looking into enabling the ovsdpdk ml2 mechanism driver and agent in our OpenStack Ubuntu distribution. Any help or reference with an explanation would be much appreciated.

We are using a compute + controller node setup with the following software platform on the compute node:
_____________
OpenStack: Kilo
Distribution: Ubuntu 14.04
OVS version: 2.4.0
DPDK: 2.0.0
_____________

Thanks,
Abhijeet Karve

From: "Czesnowicz, Przemyslaw"
To: Abhijeet Karve
Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
Date: 12/17/2015 06:32 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

I haven't tried that approach, so I'm not sure if it would work; it seems clunky.

If you enable the ovsdpdk ml2 mechanism driver and agent, all of that (adding ports to OVS with the right type, passing the sockets to QEMU) is done by OpenStack.

Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,

Thank you so much for sharing the reference guide. Could you clear up one doubt?

At present we are setting up OpenStack Kilo interactively and then replacing OVS with DPDK-enabled OVS. Once that setup is done, we create an instance in OpenStack and pass that instance id to the QEMU command line, which in turn passes the vhost-user sockets to the instance, enabling the DPDK libraries in it. Isn't this the correct way of integrating ovs-dpdk with OpenStack?

Thanks & Regards
Abhijeet Karve

From: "Czesnowicz, Przemyslaw"
To: Abhijeet Karve
Cc: "dev@dpdk.org", "discuss@openvswitch.org", "Gray, Mark D"
Date: 12/17/2015 05:27 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Abhijeet,

For Kilo you need to use the ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack. The guide you are following only covers running ovs-dpdk, not how it should be integrated with OpenStack. Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst

Best regards
Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,

We have configured the accelerated data path between a physical interface and the VM using Open vSwitch netdev-dpdk with vhost-user support. A VM created with this special data path and vhost library is what I am calling a DPDK instance.

If we assign an IP manually to a newly created CirrOS VM instance, we are able to make two VMs communicate on the same compute node.
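One way to narrow down where DHCP breaks on a setup like this is to capture traffic inside the DHCP agent's network namespace, where Neutron runs dnsmasq (a sketch only; the namespace and tap names below are placeholders, not values from this thread):

```sh
# List namespaces; the DHCP one is named qdhcp-<network-uuid>
ip netns list

# Capture DHCP traffic on the dnsmasq-side tap (names illustrative)
ip netns exec qdhcp-<network-uuid> tcpdump -eni tap<port-id> port 67 or port 68
```

If no DHCPDISCOVER packets show up here while the VM is booting, the requests are being lost before reaching the agent, which points at the vhost-user/bridge data path rather than at dnsmasq.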
Otherwise they do not get any IP through DHCP, even though DHCP runs on the same compute node.

Yes, it is a compute + controller node setup, and we are using the following software platform on the compute node:
_____________
OpenStack: Kilo
Distribution: Ubuntu 14.04
OVS version: 2.4.0
DPDK: 2.0.0
_____________

We are following the Intel guide: https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200

Running "ovs-vsctl show" on the compute node gives the output below:
____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
Bridge br-ex
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=int-br-ex}
    Port br-ex
        Interface br-ex
            type: internal
Bridge br-tun
    fail_mode: secure
    Port br-tun
        Interface br-tun
            type: internal
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
Bridge br-int
    fail_mode: secure
    Port "qvo0ae19a43-b6"
        tag: 2
        Interface "qvo0ae19a43-b6"
    Port br-int
        Interface br-int
            type: internal
    Port "qvo31c89856-a2"
        tag: 1
        Interface "qvo31c89856-a2"
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
    Port "qvo97fef28a-ec"
        tag: 2
        Interface "qvo97fef28a-ec"
Bridge br-dpdk
    Port br-dpdk
        Interface br-dpdk
            type: internal
Bridge "br0"
    Port "br0"
        Interface "br0"
            type: internal
    Port "dpdk0"
        Interface "dpdk0"
            type: dpdk
    Port "vhost-user-2"
        Interface "vhost-user-2"
            type: dpdkvhostuser
    Port "vhost-user-0"
        Interface "vhost-user-0"
            type: dpdkvhostuser
    Port "vhost-user-1"
        Interface "vhost-user-1"
            type: dpdkvhostuser
ovs_version: "2.4.0"
root@dpdk:~#
____________________________________________

OpenFlow output of the bridges on the compute node is as below:
____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
 cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
 cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
 cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
____________________________________________

Further, we don't know what network changes (packet-flow additions), if any, are required for associating an IP address through DHCP. Clarity on how the DHCP flows get established would be much appreciated.

Thanks & Regards
Abhijeet Karve

From: "Czesnowicz, Przemyslaw"
To: Abhijeet Karve, "Gray, Mark D"
Cc: "dev@dpdk.org", "discuss@openvswitch.org"
Date: 12/15/2015 09:13 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Abhijeet,

If you answer the questions below it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VMs on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least a compute + controller setup.)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After setting up the system boot parameters as shown below, the issue is
> resolved now and we are able to successfully set up Open vSwitch
> netdev-dpdk with vhost-user support.
>
> ________________________________________________________
> Set up two sets of huge pages with different sizes: one for vhost and
> another for the guest VM.
>       - Edit /etc/default/grub:
>           GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G
>           hugepages=10 hugepagesz=2M hugepages=4096"
>         # update-grub
>       - Mount the huge pages into different directories:
>         # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>         # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> ________________________________________________________
>
> At present we are facing an issue in testing a DPDK application on this
> setup. In our scenario, we have a DPDK instance launched on top of the
> OpenStack Kilo compute node, and we are not able to assign a DHCP IP from
> the controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
[attachment "nova-scheduler.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "nova-compute.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "neutron-server.log" removed by Abhijeet Karve/AHD/TCS]

From aconole@redhat.com Tue Jan 26 20:25:49 2016
From: Aaron Conole <aconole@redhat.com>
To: dev@dpdk.org
In-Reply-To: <1449850823-29017-1-git-send-email-aconole@redhat.com>
Date: Tue, 26 Jan 2016 14:25:48 -0500
Subject: Re: [dpdk-dev] [PATCH 2.3] tools/dpdk_nic_bind.py: Verbosely warn the user on bind
List-Id: patches and discussions about DPDK

Ping... This patch has been sitting^Hrotting for a bit over a month.

> DPDK ports are only detected during the EAL initialization. After that, any
> new DPDK ports which are bound will not be visible to the application.
>
> The dpdk_nic_bind.py can be a bit more helpful to let users know that DPDK
> enabled applications will not find rebound ports until after they have been
> restarted.
>
> Signed-off-by: Aaron Conole
>
> ---
>  tools/dpdk_nic_bind.py | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/tools/dpdk_nic_bind.py b/tools/dpdk_nic_bind.py
> index f02454e..ca39389 100755
> --- a/tools/dpdk_nic_bind.py
> +++ b/tools/dpdk_nic_bind.py
> @@ -344,8 +344,10 @@ def bind_one(dev_id, driver, force):
>          dev["Driver_str"] = "" # clear driver string
>
>      # if we are binding to one of DPDK drivers, add PCI id's to that driver
> +    bDpdkDriver = False
>      if driver in dpdk_drivers:
>          filename = "/sys/bus/pci/drivers/%s/new_id" % driver
> +        bDpdkDriver = True
>          try:
>              f = open(filename, "w")
>          except:
> @@ -371,12 +373,18 @@ def bind_one(dev_id, driver, force):
>      try:
>          f.write(dev_id)
>          f.close()
> +        if bDpdkDriver:
> +            print "Device rebound to dpdk driver."
> +            print "Remember to restart any application that will use this port."
>      except:
>          # for some reason, closing dev_id after adding a new PCI ID to new_id
>          # results in IOError. however, if the device was successfully bound,
>          # we don't care for any errors and can safely ignore IOError
>          tmp = get_pci_device_details(dev_id)
>          if "Driver_str" in tmp and tmp["Driver_str"] == driver:
> +            if bDpdkDriver:
> +                print "Device rebound to dpdk driver."
> +                print "Remember to restart any application that will use this port."
>              return
>          print "Error: bind failed for %s - Cannot bind to driver %s" % (dev_id, driver)
>      if saved_driver is not None: # restore any previous driver
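The behavior the patch adds can be summarized in a small standalone sketch (this is not the real dpdk_nic_bind.py; the function name and the driver list are illustrative): a successful rebind to a DPDK driver prints a reminder, while a rebind to an ordinary kernel driver prints nothing.

```python
# Sketch of the warning logic introduced by the patch above.
# DPDK_DRIVERS is illustrative; the real script keeps its own dpdk_drivers list.
DPDK_DRIVERS = ("igb_uio", "vfio-pci", "uio_pci_generic")

def bind_messages(driver):
    """Return the extra lines a successful bind to `driver` would print."""
    if driver in DPDK_DRIVERS:
        # ports bound after EAL init are invisible until the app restarts
        return ["Device rebound to dpdk driver.",
                "Remember to restart any application that will use this port."]
    # rebinding back to a kernel driver needs no reminder
    return []

print("\n".join(bind_messages("igb_uio")))
```

Note that because the reminder is printed in both the normal path and the ignored-IOError path, the user sees it whenever the bind actually succeeded.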