DPDK patches and discussions
* Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup
       [not found] <OF9062862D.67700483-ON65257F4A.004D1715-65257F4A.004D1738@LocalDomain>
@ 2016-02-04 14:54 ` Abhijeet Karve
  2016-02-04 17:37   ` Chandran, Sugesh
  0 siblings, 1 reply; 3+ messages in thread
From: Abhijeet Karve @ 2016-02-04 14:54 UTC (permalink / raw)
  To: przemyslaw.czesnowicz; +Cc: dev, discuss

Hi All,

Is the issue which we are facing, as described in the previous threads, because of setting up ovs+dpdk with the igb_uio driver instead of vfio_pci?

We would appreciate any suggestions on this.
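
For reference, a minimal sketch of what binding the NIC to vfio-pci instead of igb_uio would look like (the PCI address 0000:02:00.0 is an illustrative placeholder, and we assume the tools/dpdk_nic_bind.py script shipped with DPDK 2.x; vfio-pci also needs the IOMMU enabled, e.g. iommu=pt intel_iommu=on as in the grub settings further down this thread):

  modprobe vfio-pci
  ./tools/dpdk_nic_bind.py --status                      # show NICs and their current drivers
  ./tools/dpdk_nic_bind.py --bind=vfio-pci 0000:02:00.0  # rebind the DPDK NIC to vfio-pci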

Thanks & Regards
Abhijeet Karve


To: przemyslaw.czesnowicz@intel.com
From: Abhijeet Karve
Date: 01/30/2016 07:32PM
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Subject: Fw: RE: [dpdk-dev] DPDK vhostuser with vxlan


 Hi Przemek,


We have set up a vxlan tunnel between our two compute nodes and can see the traffic on the vxlan port of br-tun on the source instance's compute node.

We are in the same situation as described in the thread below; I looked through the dev mailing list archives for it, but it seems no one has responded to it.

 http://comments.gmane.org/gmane.linux.network.openvswitch.general/9878

We would really appreciate any suggestions on it.



Thanks & Regards
Abhijeet Karve

 -----Forwarded by on 01/30/2016 07:24PM -----

 =======================
 To: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
 From: Abhijeet Karve/AHD/TCS@TCS
 Date: 01/27/2016 09:52PM 
 Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
 Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
 =======================
   Hi Przemek,

Thanks for the quick response. We are now able to get DHCP IPs for the 2 vhostuser instances, and they are able to ping each other. The issue was a bug in the cirros 0.3.0 image which we were using in openstack; after switching to the 0.3.1 image as given in the URL (https://www.redhat.com/archives/rhos-list/2013-August/msg00032.html), we are able to get IPs in the vhostuser VM instances.

As per our understanding, the packet flow across the DPDK datapath is as follows: the vhostuser ports are connected to the br-int bridge, which is patched to the br-dpdk bridge, where our physical network (NIC) is attached as the dpdk0 port.
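
As an illustration, the patch ports between the two bridges in such a setup would be created with commands along these lines (a sketch only; the port names simply match the ovs-vsctl output later in this thread):

  ovs-vsctl add-port br-int int-br-dpdk -- set Interface int-br-dpdk type=patch options:peer=phy-br-dpdk
  ovs-vsctl add-port br-dpdk phy-br-dpdk -- set Interface phy-br-dpdk type=patch options:peer=int-br-dpdk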

So to test the flow, we have to connect that physical network (NIC) to an external packet generator (e.g. ixia, iperf) and run the testpmd application in the vhostuser VM, right?
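
For example, a rough sketch of what would be run inside the vhostuser VM (assuming the guest virtio NIC is bound to a DPDK-compatible driver such as uio_pci_generic, hugepages are mounted in the guest, and testpmd is built there; the PCI address and core mask are illustrative):

  modprobe uio_pci_generic
  ./tools/dpdk_nic_bind.py --bind=uio_pci_generic 0000:00:04.0
  ./testpmd -c 0x3 -n 4 -- -i
  testpmd> start
  testpmd> show port stats all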

Is it required to add any flows or other configuration to the bridges (either br-int or br-dpdk)?


Thanks & Regards
Abhijeet Karve




From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 01/27/2016 05:11 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue



Hi Abhijeet,
 
 
It seems you are almost there! 
When booting the VMs, do you request hugepage memory for them (by setting hw:mem_page_size=large in the flavor extra_spec)?
If not, then please do; if yes, then please look into the libvirt logfiles for the VMs (in /var/log/libvirt/qemu/instance-xxx), I think there could be a clue.
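
For example (the flavor name here is only illustrative), the extra spec can be set with:

  nova flavor-key m1.hugepages set hw:mem_page_size=large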
 
 
Regards
Przemek
 
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Monday, January 25, 2016 6:13 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
 
Hi Przemek, 

Thank you for your response; it really provided us a breakthrough. 

After setting up DPDK on the compute node for stable/kilo, we are trying to set up an Openstack stable/liberty all-in-one setup. At present we are not able to get IP allocation for the vhost type instances through DHCP. We also tried assigning IPs to them manually, but inter-VM communication is not happening either. 

#neutron agent-list 
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list 
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+ 
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    | 
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+ 
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent | 
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          | 
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        | 
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent | 
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    | 
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent | 
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+ 


ovs-vsctl show output# 
-------------------------------------------------------- 
Bridge br-dpdk 
        Port br-dpdk 
            Interface br-dpdk 
                type: internal 
        Port phy-br-dpdk 
            Interface phy-br-dpdk 
                type: patch 
                options: {peer=int-br-dpdk} 
    Bridge br-int 
        fail_mode: secure 
        Port "vhufa41e799-f2" 
            tag: 5 
            Interface "vhufa41e799-f2" 
                type: dpdkvhostuser 
        Port int-br-dpdk 
            Interface int-br-dpdk 
                type: patch 
                options: {peer=phy-br-dpdk} 
        Port "tap4e19f8e1-59" 
            tag: 5 
            Interface "tap4e19f8e1-59" 
                type: internal 
        Port "vhu05734c49-3b" 
            tag: 5 
            Interface "vhu05734c49-3b" 
                type: dpdkvhostuser 
        Port "vhu10c06b4d-84" 
            tag: 5 
            Interface "vhu10c06b4d-84" 
                type: dpdkvhostuser 
        Port patch-tun 
            Interface patch-tun 
                type: patch 
                options: {peer=patch-int} 
        Port "vhue169c581-ef" 
            tag: 5 
            Interface "vhue169c581-ef" 
                type: dpdkvhostuser 
        Port br-int 
            Interface br-int 
                type: internal 
    Bridge br-tun 
        fail_mode: secure 
        Port br-tun 
            Interface br-tun 
                type: internal 
                error: "could not open network device br-tun (Invalid argument)" 
        Port patch-int 
            Interface patch-int 
                type: patch 
                options: {peer=patch-tun} 
    ovs_version: "2.4.0" 
-------------------------------------------------------- 


ovs-ofctl dump-flows br-int# 
-------------------------------------------------------- 
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int 
NXST_FLOW reply (xid=0x4): 
 cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop 
 cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop 
 cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop 
root@nfv-dpdk-devstack:/etc/neutron# 
-------------------------------------------------------- 


                                              


Also attaching Neutron-server, nova-compute & nova-scheduler logs. 

It would be really great for us to get any hint to overcome this inter-VM & DHCP communication issue. 




Thanks & Regards
Abhijeet Karve



From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com> 
Date:        01/04/2016 07:54 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing 




You should be able to clone networking-ovs-dpdk, switch to the kilo branch, and run 
python setup.py install 
in the root of networking-ovs-dpdk; that should install the agent and mech driver. 
Then you would need to enable the mech driver (ovsdpdk) on the controller in /etc/neutron/plugins/ml2/ml2_conf.ini 
and run the right agent on the computes (networking-ovs-dpdk-agent). 
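
A rough sketch of those steps (the repository URL and branch match the guide referenced later in this thread; the exact mechanism_drivers value depends on what else your deployment enables):

  git clone https://github.com/openstack/networking-ovs-dpdk.git
  cd networking-ovs-dpdk
  git checkout stable/kilo
  python setup.py install

  # then on the controller, in /etc/neutron/plugins/ml2/ml2_conf.ini:
  #   [ml2]
  #   mechanism_drivers = ovsdpdk
  # restart neutron-server, and on each compute node run
  # networking-ovs-dpdk-agent in place of neutron-openvswitch-agent.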
  
  
There should be pip packages of networking-ovs-dpdk available shortly; I'll let you know when that happens. 
  
Przemek 
  
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing 
  
Hi Przemek, 

Thank you so much for your quick response. 

The guide (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) which you have suggested is for openstack vhost-user installations with devstack. 
Can we have any reference for including the ovs-dpdk mechanism driver in the openstack Ubuntu distribution which we are following for a 
compute+controller node setup? 

We are facing the issues listed below with the current approach of setting up openstack kilo interactively, replacing ovs with dpdk-enabled ovs, creating an instance in openstack, and 
passing that instance id to the QEMU command line, which in turn passes the vhost-user sockets to the instances to enable the DPDK libraries in them. 


1. Created a flavor m1.hugepages which is backed by hugepage memory; we are unable to spawn an instance with this flavor - getting an error like: No matching hugetlbfs for the number of hugepages assigned to the flavor. 
2. Passing socket info to instances via qemu manually; the instances created this way are not persistent (a minimal sketch of such an invocation is shown after this list). 
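
A minimal sketch of such a manual vhost-user qemu invocation (the socket path, memory size, hugepage mount point and disk image below are illustrative placeholders, not our exact command line; the socket path must match the dpdkvhostuser port created in ovs, and the guest memory must come from a shared hugepage backend):

  qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -object memory-backend-file,id=mem0,size=2048M,mem-path=/mnt/huge_1G,share=on \
    -numa node,memdev=mem0 -mem-prealloc \
    -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user-0 \
    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=net0 \
    /path/to/instance-disk.img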

Now, as you suggested, we are looking into enabling the ovsdpdk ml2 mechanism driver and agent in our openstack ubuntu distribution. 

We would really appreciate any help or reference with an explanation. 

We are using compute + controller node setup and we are using following software platform on compute node: 
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK 2.0.0 
_____________ 

Thanks, 
Abhijeet Karve 





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com> 
Date:        12/17/2015 06:32 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser 





I haven't tried that approach; I'm not sure if it would work, it seems clunky. 
 
If you enable the ovsdpdk ml2 mechanism driver and agent, all of that (adding ports to ovs with the right type, passing the sockets to qemu) would be done by OpenStack. 
 
Przemek 
 
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser 
 
Hi Przemek, 

Thank you so much for sharing the ref guide. 

We would appreciate it if you could clear up one doubt. 

At present we are setting up openstack kilo interactively and then replacing ovs with dpdk-enabled ovs. 
Once the above setup is done, we create an instance in openstack and pass that instance id to the QEMU command line, which in turn passes the vhost-user sockets to the instance, enabling the DPDK libraries in it. 

Isn't this the correct way of integrating ovs-dpdk with openstack? 


Thanks & Regards
Abhijeet Karve




From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com> 
Date:        12/17/2015 05:27 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser 






Hi Abhijeet, 

For Kilo you need to use ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack. 

The guide you are following only talks about running ovs-dpdk, not how it should be integrated with OpenStack. 

Please follow this guide: 
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst 

Best regards 
Przemek 


From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser 

Hi Przemek, 


We have configured the accelerated data path between a physical interface and the VM using openvswitch netdev-dpdk with vhost-user support. The VM created with this special data path and vhost library is what I am calling a DPDK instance. 

If we assign IPs manually to the newly created Cirros VM instances, we are able to make 2 VMs communicate on the same compute node. Otherwise they are not getting any IP through DHCP, even though the DHCP server is on the compute node itself. 

Yes, it's a compute + controller node setup, and we are using the following software platform on the compute node: 
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK 2.0.0 
_____________ 

We are following the intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200 
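
For reference, the kind of commands that produce the bridge and ports shown below look roughly like this (a sketch only; the guide above has the exact steps, and the names here simply match the ovs-vsctl output that follows):

  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
  ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser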

When doing "ovs-vsctl show" on the compute node, it shows the output below: 
_____________________________________________ 
ovs-vsctl show 
c2ec29a5-992d-4875-8adc-1265c23e0304 
 Bridge br-ex 
     Port phy-br-ex 
         Interface phy-br-ex 
             type: patch 
             options: {peer=int-br-ex} 
     Port br-ex 
         Interface br-ex 
             type: internal 
 Bridge br-tun 
     fail_mode: secure 
     Port br-tun 
         Interface br-tun 
             type: internal 
     Port patch-int 
         Interface patch-int 
             type: patch 
             options: {peer=patch-tun} 
 Bridge br-int 
     fail_mode: secure 
     Port "qvo0ae19a43-b6" 
         tag: 2 
         Interface "qvo0ae19a43-b6" 
     Port br-int 
         Interface br-int 
             type: internal 
     Port "qvo31c89856-a2" 
         tag: 1 
         Interface "qvo31c89856-a2" 
     Port patch-tun 
         Interface patch-tun 
             type: patch 
             options: {peer=patch-int} 
     Port int-br-ex 
         Interface int-br-ex 
             type: patch 
             options: {peer=phy-br-ex} 
     Port "qvo97fef28a-ec" 
         tag: 2 
         Interface "qvo97fef28a-ec" 
 Bridge br-dpdk 
     Port br-dpdk 
         Interface br-dpdk 
             type: internal 
 Bridge "br0" 
     Port "br0" 
         Interface "br0" 
             type: internal 
     Port "dpdk0" 
         Interface "dpdk0" 
             type: dpdk 
     Port "vhost-user-2" 
         Interface "vhost-user-2" 
             type: dpdkvhostuser 
     Port "vhost-user-0" 
         Interface "vhost-user-0" 
             type: dpdkvhostuser 
     Port "vhost-user-1" 
         Interface "vhost-user-1" 
             type: dpdkvhostuser 
 ovs_version: "2.4.0" 
root@dpdk:~# 
_____________________________________________ 

The OpenFlow flows on the bridges on the compute node are as below: 
_____________________________________________ 
root@dpdk:~# ovs-ofctl dump-flows br-tun 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2) 
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20) 
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22) 
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10) 
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10) 
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1 
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22) 
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# ovs-ofctl dump-flows br-int 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop 
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL 
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
_____________________________________________ 


Further, we don't know what network changes (packet flow additions), if any, are required for associating an IP address through DHCP. 

We would really appreciate some clarity on DHCP flow establishment. 
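
A hedged troubleshooting sketch for the DHCP side (the namespace and interface names below are illustrative placeholders; on the node running the DHCP agent, dnsmasq normally sits inside a qdhcp-<network-id> namespace):

  ip netns list                                   # find the qdhcp-<network-id> namespace
  ip netns exec qdhcp-<network-id> ip addr        # check the dnsmasq tap interface and its IP
  ip netns exec qdhcp-<network-id> tcpdump -eni <tap-interface> port 67 or port 68

If no DHCP requests show up there, the packets are being lost before reaching the DHCP port (e.g. in the br-int/br-tun flows); if requests arrive but no reply returns, look at dnsmasq and the security group rules.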



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <mark.d.gray@intel.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org> 
Date:        12/15/2015 09:13 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser 







Hi Abhijeet,

If you answer below questions it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least compute+ controller setup)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Dear All,
> 
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
> 
> __________________________________________________________
> _______________________________________________________
> Setup 2 sets of huge pages with different sizes. One for Vhost and another
> for Guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
> 
> At present we are facing an issue in testing a DPDK application on this setup. In our
> scenario, we have a DPDK instance launched on top of the Openstack Kilo
> compute node. We are not able to get a DHCP IP assigned from the controller.
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If you
> are not the intended recipient, any dissemination, use, review, distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received this
> communication in error, please notify us by reply e-mail or telephone and
> immediately and permanently delete the message and any attachments.
> Thank you
> 
    
From olivier.matz@6wind.com  Thu Feb  4 16:03:00 2016
From: Olivier MATZ <olivier.matz@6wind.com>
Date: Thu, 04 Feb 2016 16:02:45 +0100
To: David Hunt <david.hunt@intel.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 2/5] memool: add stack (lifo) based external
	mempool handler

Hi,

> [PATCH 2/5] memool: add stack (lifo) based external mempool handler

typo in the patch title: memool -> mempool


On 01/26/2016 06:25 PM, David Hunt wrote:
> adds a simple stack based mempool handler
>
> Signed-off-by: David Hunt <david.hunt@intel.com>

What is the purpose of this mempool handler?

Is it an example or is it something that could be useful for
dpdk applications?

If it's just an example, I think we could move this code
in app/test.

> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -52,7 +52,6 @@
>   #include <rte_lcore.h>
>   #include <rte_atomic.h>
>   #include <rte_branch_prediction.h>
> -#include <rte_ring.h>
>   #include <rte_mempool.h>
>   #include <rte_spinlock.h>
>   #include <rte_malloc.h>

Is this change related?



> +struct rte_mempool_common_stack {
> +	/* Spinlock to protect access */
> +	rte_spinlock_t sl;
> +
> +	uint32_t size;
> +	uint32_t len;
> +	void *objs[];
> +
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> +#endif

There is nothing inside the #ifdef


> +static void *
> +common_stack_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	struct rte_mempool_common_stack *s;
> +	char stack_name[RTE_RING_NAMESIZE];
> +
> +	int size = sizeof(*s) + (n+16)*sizeof(void *);
> +
> +	flags = flags;
> +
> +	/* Allocate our local memory structure */
> +	snprintf(stack_name, sizeof(stack_name), "%s-common-stack", name);
> +	s = rte_zmalloc_socket(stack_name,
> +					size, RTE_CACHE_LINE_SIZE, socket_id);
> +	if (s == NULL) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
> +		return NULL;
> +	}
> +
> +	/* And the spinlock we use to protect access */
> +	rte_spinlock_init(&s->sl);
> +
> +	s->size = n;
> +	mp->rt_pool = (void *) s;
> +	mp->handler_idx = rte_get_mempool_handler("stack");
> +
> +	return (void *) s;
> +}

The explicit casts could be removed I think.


> +
> +static int common_stack_put(void *p, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_common_stack *s =
> +				(struct rte_mempool_common_stack *)p;

indent issue (same in get() and count())

> +	void **cache_objs;
> +	unsigned index;
> +
> +	/* Acquire lock */
> +	rte_spinlock_lock(&s->sl);
> +	cache_objs = &s->objs[s->len];
> +
> +	/* Is there sufficient space in the stack ? */
> +	if ((s->len + n) > s->size) {
> +		rte_spinlock_unlock(&s->sl);
> +		return -ENOENT;
> +	}

I think this cannot happen as there is a check in the get().
I wonder if a rte_panic() wouldn't be better here.



Regards,
Olivier

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup
  2016-02-04 14:54 ` [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup Abhijeet Karve
@ 2016-02-04 17:37   ` Chandran, Sugesh
  2016-02-05 15:16     ` Abhijeet Karve
  0 siblings, 1 reply; 3+ messages in thread
From: Chandran, Sugesh @ 2016-02-04 17:37 UTC (permalink / raw)
  To: Abhijeet Karve, Czesnowicz, Przemyslaw; +Cc: dev, discuss

Hi Abhijeet,

It looks to me that the arp entries may not be populated correctly for the VxLAN ports in OVS.
Can you please refer to the debug section in
http://openvswitch.org/support/config-cookbooks/userspace-tunneling/
to verify and insert the right arp entries in case they are missing?
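
For example, a rough check along the lines of that cookbook (the addresses below are only illustrative, and the tnl/arp appctl command is assumed to be available in your userspace-datapath OVS build):

  # on each compute node, the local VTEP IP sits on the bridge holding the dpdk port
  ip addr add 172.16.1.1/24 dev br-dpdk
  ip link set br-dpdk up

  # ping the remote VTEP so the userspace datapath can resolve and learn its ARP entry
  ping -c 3 172.16.1.2

  # verify the tunnel ARP entries the datapath knows about
  ovs-appctl tnl/arp/show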


Regards
_Sugesh



^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup
  2016-02-04 17:37   ` Chandran, Sugesh
@ 2016-02-05 15:16     ` Abhijeet Karve
  0 siblings, 0 replies; 3+ messages in thread
From: Abhijeet Karve @ 2016-02-05 15:16 UTC (permalink / raw)
  To: Chandran, Sugesh, Czesnowicz, Przemyslaw; +Cc: dev, discuss

Dear Sugesh, Przemek,


Thanks for your extended support. We are done with setting up Open vSwitch 
netdev-dpdk with vhost-user in our openstack environment. The following are the 
changes we made in our infrastructure.

1. As VxLAN was not supported by our external bare metal 
equipment (gateways), we replaced the vxlan network type with the vlan network type 
in our openstack setup (a sketch of the corresponding configuration follows after this list).
2. Added arp entries on the br-dpdk bridge to answer any arp requests.
3. MTU size settings were done in our external network equipment.
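
For reference, a sketch of the vlan provider configuration corresponding to change 1 above (the physical network name, VLAN range and bridge mapping are illustrative, not our exact values):

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  type_drivers = flat,vlan
  tenant_network_types = vlan
  [ml2_type_vlan]
  network_vlan_ranges = physnet1:100:200

  # agent side: map the physical network onto the dpdk bridge
  [ovs]
  bridge_mappings = physnet1:br-dpdk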


Thank you once again everyone.


Thanks & Regards
Abhijeet Karve




From:   "Chandran, Sugesh" <sugesh.chandran@intel.com>
To:     Abhijeet Karve <abhijeet.karve@tcs.com>, "Czesnowicz, Przemyslaw" 
<przemyslaw.czesnowicz@intel.com>
Cc:     "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" 
<discuss@openvswitch.org>
Date:   02/04/2016 11:08 PM
Subject:        RE: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does 
issue with igb_uio in ovs+dpdk setup



Hi Abhijeet,

It looks to me that the ARP entries may not be populated correctly for the 
VxLAN ports in OVS.
Can you please refer to the debug section in 
http://openvswitch.org/support/config-cookbooks/userspace-tunneling/
to verify and insert the right ARP entries in case they are missing?
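
For reference, the cookbook's debug steps boil down to checking and, if 
needed, seeding the tunnel ARP cache, roughly as below (the IP/MAC values are 
placeholders and the appctl command names can differ between OVS releases):

   # ovs-appctl tnl/arp/show
   # ovs-appctl tnl/arp/set br-dpdk 172.168.1.1 aa:bb:cc:dd:ee:ff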


Regards
_Sugesh


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Thursday, February 4, 2016 2:55 PM
> To: Czesnowicz, Przemyslaw <przemyslaw.czesnowicz@intel.com>
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with
> igb_uio in ovs+dpdk setup
> 
> Hi All,
> 
> Is the issue which we are facing, as described in the previous threads,
> because we set up OVS+DPDK with the igb_uio driver instead of vfio-pci?
> 
> Would appreciate any suggestions on this.
> 
> Thanks & Regards
> Abhijeet Karve
> 
> 
> To: przemyslaw.czesnowicz@intel.com
> From: Abhijeet Karve
> Date: 01/30/2016 07:32PM
> Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
> <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
> Subject: Fw: RE: [dpdk-dev] DPDK vhostuser with vxlan
> 
> 
> Hi Przemek,
> 
> 
> We have set up a VXLAN tunnel between our two compute nodes and can see the
> traffic on the vxlan port of br-tun on the source instance's compute node.
> 
> We are in the same situation described in the thread below; I looked through
> the dev mailing list archives for it, but it seems no one has responded:
> 
> 
> http://comments.gmane.org/gmane.linux.network.openvswitch.general/9878
> 
> Would really appreciate any suggestions on it.
> 
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
>  -----Forwarded by on 01/30/2016 07:24PM -----
> 
>  =======================
>  To: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
>  From: Abhijeet Karve/AHD/TCS@TCS
>  Date: 01/27/2016 09:52PM
>  Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
> <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
>  Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
> =======================
>    Hi Przemek,
> 
> Thanks for the quick response. We are now able to get the DHCP IPs for 2
> vhost-user instances and they are able to ping each other. The issue was a
> bug in the CirrOS 0.3.0 image which we were using in OpenStack; after using
> the 0.3.1 image, as given in the URL
> (https://www.redhat.com/archives/rhos-list/2013-August/msg00032.html), we
> are able to get the IPs in the vhost-user VM instances.
> 
> As per our understanding, the packet flow across the DPDK datapath is:
> vhost-user ports are connected to the br-int bridge, which is patched to the
> br-dpdk bridge, where our physical network (NIC) is connected as the dpdk0
> port.
> 
> So for testing the flow we have to connect that physical network (NIC) to an
> external packet generator (e.g. Ixia, iperf) and run the testpmd application
> in the vhost-user VM, right?
> 
> Is it required to add any flows or other configuration to the bridges
> (either br-int or br-dpdk)?
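> 
> For the guest side, a minimal interactive testpmd run of the kind we have in
> mind would be roughly as below (core mask, memory channels and the commands
> are illustrative, not final values):
> 
>     # ./testpmd -c 0x3 -n 4 -- -i
>     testpmd> start
>     testpmd> show port stats all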
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> 
> 
> 
> From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
> To: Abhijeet Karve <abhijeet.karve@tcs.com>
> Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
> <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
> Date: 01/27/2016 05:11 PM
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
> 
> 
> 
> Hi Abhijeet,
> 
> 
> It seems you are almost there!
> When booting the VMs, do you request hugepage memory for them
> (by setting hw:mem_page_size=large in the flavor extra_spec)?
> If not then please do; if yes then please look into the libvirt logfiles for
> the VMs (in /var/log/libvirt/qemu/instance-xxx), I think there could be a
> clue.
> 
> 
> Regards
> Przemek
> 
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Monday, January 25, 2016 6:13 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Inter-VM communication & IP allocation through DHCP issue
> 
> Hi Przemek,
> 
> Thank you for your response. It really provided us a breakthrough.
> 
> After setting up DPDK on the compute node for stable/kilo, we are trying to
> set up an OpenStack stable/liberty all-in-one setup. At present we are not
> able to get IP allocation for the vhost-type instances through DHCP. We also
> tried assigning IPs to them manually, but inter-VM communication is not
> happening either.
> 
> #neutron agent-list
> root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
> +--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
> | id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
> +--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
> | 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
> | 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
> | 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
> | b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
> | c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
> | f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
> +--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
> 
> 
> ovs-vsctl show output#
> --------------------------------------------------------
> Bridge br-dpdk
>         Port br-dpdk
>             Interface br-dpdk
>                 type: internal
>         Port phy-br-dpdk
>             Interface phy-br-dpdk
>                 type: patch
>                 options: {peer=int-br-dpdk}
>     Bridge br-int
>         fail_mode: secure
>         Port "vhufa41e799-f2"
>             tag: 5
>             Interface "vhufa41e799-f2"
>                 type: dpdkvhostuser
>         Port int-br-dpdk
>             Interface int-br-dpdk
>                 type: patch
>                 options: {peer=phy-br-dpdk}
>         Port "tap4e19f8e1-59"
>             tag: 5
>             Interface "tap4e19f8e1-59"
>                 type: internal
>         Port "vhu05734c49-3b"
>             tag: 5
>             Interface "vhu05734c49-3b"
>                 type: dpdkvhostuser
>         Port "vhu10c06b4d-84"
>             tag: 5
>             Interface "vhu10c06b4d-84"
>                 type: dpdkvhostuser
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port "vhue169c581-ef"
>             tag: 5
>             Interface "vhue169c581-ef"
>                 type: dpdkvhostuser
>         Port br-int
>             Interface br-int
>                 type: internal
>     Bridge br-tun
>         fail_mode: secure
>         Port br-tun
>             Interface br-tun
>                 type: internal
>                 error: "could not open network device br-tun (Invalid argument)"
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>     ovs_version: "2.4.0"
> --------------------------------------------------------
> 
> 
> ovs-ofctl dump-flows br-int#
> --------------------------------------------------------
> root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
>  cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
>  cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
>  cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
>  cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
>  cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
> root@nfv-dpdk-devstack:/etc/neutron#
> --------------------------------------------------------
> 
> 
> 
> 
> 
> Also attaching neutron-server, nova-compute & nova-scheduler logs.
> 
> It would be really great for us to get any hint to overcome this inter-VM &
> DHCP communication issue.
> 
> 
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> 
> 
> From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
> To:        Abhijeet Karve <abhijeet.karve@tcs.com>
> Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
> <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
> Date:        01/04/2016 07:54 PM
> Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Getting memory backing issues with qemu parameter passing
> 
> 
> 
> 
> You should be able to clone networking-ovs-dpdk, switch to the kilo branch,
> and run python setup.py install in the root of networking-ovs-dpdk; that
> should install the agent and mech driver.
> Then you would need to enable the mech driver (ovsdpdk) on the controller in
> /etc/neutron/plugins/ml2/ml2_conf.ini
> and run the right agent on the computes (networking-ovs-dpdk-agent).
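> 
> Roughly, the steps above correspond to something like the following (a
> sketch only; branch names, config sections and the agent invocation may
> differ slightly in your environment):
> 
>     git clone https://github.com/openstack/networking-ovs-dpdk.git
>     cd networking-ovs-dpdk
>     git checkout stable/kilo
>     python setup.py install
> 
>     # /etc/neutron/plugins/ml2/ml2_conf.ini on the controller
>     [ml2]
>     mechanism_drivers = ovsdpdk
> 
>     # on each compute node, run the matching agent
>     networking-ovs-dpdk-agent --config-file /etc/neutron/neutron.conf \
>         --config-file /etc/neutron/plugins/ml2/ml2_conf.ini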
> 
> 
> There should be pip packages of networking-ovs-dpdk available shortly;
> I'll let you know when that happens.
> 
> Przemek
> 
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Thursday, December 24, 2015 6:42 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Getting memory backing issues with qemu parameter passing
> 
> Hi Przemek,
> 
> Thank you so much for your quick response.
> 
> The guide
> (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst)
> which you have suggested is for OpenStack vhost-user installations with
> devstack. Can't we have any reference for including the ovs-dpdk mechanism
> driver in the OpenStack Ubuntu distribution which we are following for our
> compute + controller node setup?
> 
> We are facing the issues listed below with the current approach of setting
> up OpenStack Kilo interactively, replacing OVS with ovs-dpdk, creating an
> instance in OpenStack, and passing that instance id to the QEMU command
> line, which in turn passes the vhost-user sockets to the instance to enable
> the DPDK libraries in it.
> 
> 
> 1. Created a flavor m1.hugepages which is backed by hugepage memory; unable
> to spawn an instance with this flavor - getting an issue like: "No matching
> hugetlbfs for the number of hugepages assigned to the flavor." (See the
> flavor sketch after this list.)
> 2. Passing socket info to instances via qemu manually, and instances created
> this way are not persistent.
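> 
> For reference, a hugepage-backed flavor of this kind is typically created
> along these lines (a sketch; the RAM/disk/vCPU values and the page-size
> setting are illustrative, not our exact ones):
> 
>     nova flavor-create m1.hugepages auto 2048 20 2
>     nova flavor-key m1.hugepages set hw:mem_page_size=large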
> 
> Now, as you suggested, we are looking into enabling the ovsdpdk ml2
> mechanism driver and agent in our OpenStack Ubuntu distribution.
> 
> Would really appreciate any help or reference with an explanation.
> 
> We are using a compute + controller node setup with the following software
> platform on the compute node:
> _____________
> Openstack: Kilo
> Distribution: Ubuntu 14.04
> OVS Version: 2.4.0
> DPDK 2.0.0
> _____________
> 
> Thanks,
> Abhijeet Karve
> 
> 
> 
> 
> 
> From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
> To:        Abhijeet Karve <abhijeet.karve@tcs.com>
> Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
> <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
> Date:        12/17/2015 06:32 PM
> Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> 
> 
> 
> 
> I haven't tried that approach; not sure if it would work, it seems clunky.
> 
> If you enable the ovsdpdk ml2 mechanism driver and agent, all of that
> (adding ports to OVS with the right type, passing the sockets to qemu)
> would be done by OpenStack.
> 
> Przemek
> 
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Thursday, December 17, 2015 12:41 PM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Hi Przemek,
> 
> Thank you so much for sharing the reference guide.
> 
> Would appreciate it if you could clear up one doubt.
> 
> At present we are setting up OpenStack Kilo interactively and then replacing
> OVS with ovs-dpdk.
> Once the above setup is done, we create an instance in OpenStack and pass
> that instance id to the QEMU command line, which in turn passes the
> vhost-user sockets to the instance, enabling the DPDK libraries in it.
> 
> Isn't this the correct way of integrating ovs-dpdk with OpenStack?
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> 
> 
> 
> From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
> To:        Abhijeet Karve <abhijeet.karve@tcs.com>
> Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
> <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
> Date:        12/17/2015 05:27 PM
> Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> 
> 
> 
> 
> 
> Hi Abhijeet,
> 
> For Kilo you need to use the ovsdpdk mechanism driver and a matching agent
> to integrate ovs-dpdk with OpenStack.
> 
> The guide you are following only talks about running ovs-dpdk, not how it
> should be integrated with OpenStack.
> 
> Please follow this guide:
> https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
> 
> Best regards
> Przemek
> 
> 
> From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
> Sent: Wednesday, December 16, 2015 9:37 AM
> To: Czesnowicz, Przemyslaw
> Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
> Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Hi Przemek,
> 
> 
> We have configured the accelerated data path between a physical interface
> and the VM using Open vSwitch netdev-dpdk with vhost-user support. A VM
> created with this special data path and vhost library is what I am calling
> a DPDK instance.
> 
> If we assign an IP manually to the newly created CirrOS VM instance, we are
> able to make 2 VMs communicate on the same compute node. Otherwise it does
> not get any IP through DHCP, even though the DHCP server is on the compute
> node itself.
> 
> Yes, it's a compute + controller node setup and we are using the following
> software platform on the compute node:
> _____________
> Openstack: Kilo
> Distribution: Ubuntu 14.04
> OVS Version: 2.4.0
> DPDK 2.0.0
> _____________
> 
> We are following the Intel guide:
> https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
> 
> When running "ovs-vsctl show" on the compute node, it shows the output below:
> _____________________________________________
> ovs-vsctl show
> c2ec29a5-992d-4875-8adc-1265c23e0304
>  Bridge br-ex
>      Port phy-br-ex
>          Interface phy-br-ex
>              type: patch
>              options: {peer=int-br-ex}
>      Port br-ex
>          Interface br-ex
>              type: internal
>  Bridge br-tun
>      fail_mode: secure
>      Port br-tun
>          Interface br-tun
>              type: internal
>      Port patch-int
>          Interface patch-int
>              type: patch
>              options: {peer=patch-tun}
>  Bridge br-int
>      fail_mode: secure
>      Port "qvo0ae19a43-b6"
>          tag: 2
>          Interface "qvo0ae19a43-b6"
>      Port br-int
>          Interface br-int
>              type: internal
>      Port "qvo31c89856-a2"
>          tag: 1
>          Interface "qvo31c89856-a2"
>      Port patch-tun
>          Interface patch-tun
>              type: patch
>              options: {peer=patch-int}
>      Port int-br-ex
>          Interface int-br-ex
>              type: patch
>              options: {peer=phy-br-ex}
>      Port "qvo97fef28a-ec"
>          tag: 2
>          Interface "qvo97fef28a-ec"
>  Bridge br-dpdk
>      Port br-dpdk
>          Interface br-dpdk
>              type: internal
>  Bridge "br0"
>      Port "br0"
>          Interface "br0"
>              type: internal
>      Port "dpdk0"
>          Interface "dpdk0"
>              type: dpdk
>      Port "vhost-user-2"
>          Interface "vhost-user-2"
>              type: dpdkvhostuser
>      Port "vhost-user-0"
>          Interface "vhost-user-0"
>              type: dpdkvhostuser
>      Port "vhost-user-1"
>          Interface "vhost-user-1"
>              type: dpdkvhostuser
>  ovs_version: "2.4.0"
> root@dpdk:~#
> _____________________________________________
> 
> The OpenFlow flows on the bridges on the compute node are as below:
> _____________________________________________
> root@dpdk:~# ovs-ofctl dump-flows br-tun
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
>  cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
>  cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
>  cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
>  cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
>  cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
>  cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
>  cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
>  cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
> root@dpdk:~#
> root@dpdk:~#
> root@dpdk:~#
> root@dpdk:~# ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
> cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0,
> idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
> cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912,
> idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
> cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0,
> idle_age=65534, hard_age=65534, priority=0 actions=drop
> root@dpdk:~#
> _____________________________________________
> 
> 
> Further, we don't know what network changes (packet flow additions), if any,
> are required for associating an IP address through DHCP.
> 
> Would really appreciate some clarity on DHCP flow establishment.
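> 
> A generic way to check where the requests stop (a sketch; <network-id> and
> the tap interface name are placeholders) is to watch for DHCP traffic inside
> the DHCP agent's namespace on the node running dnsmasq:
> 
>     # ip netns list
>     # ip netns exec qdhcp-<network-id> tcpdump -ni <tap-interface> port 67 or port 68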
> 
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> 
> 
> 
> 
> From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
> To:        Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D"
> <mark.d.gray@intel.com>
> Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
> <discuss@openvswitch.org>
> Date:        12/15/2015 09:13 PM
> Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> 
> 
> 
> 
> 
> 
> Hi Abhijeet,
> 
> If you answer the questions below it will help me understand your problem.
> 
> What do you mean by a DPDK instance?
> Are you able to communicate with other VMs on the same compute node?
> Can you check if the DHCP requests arrive on the controller node? (I'm
> assuming this is at least a compute + controller setup)
> 
> Best regards
> Przemek
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> > Sent: Tuesday, December 15, 2015 5:56 AM
> > To: Gray, Mark D
> > Cc: dev@dpdk.org; discuss@openvswitch.org
> > Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> > Successfully setup DPDK OVS with vhostuser
> >
> > Dear All,
> >
> > After setting up the system boot parameters as shown below, the issue is
> > resolved and we are able to successfully set up Open vSwitch netdev-dpdk
> > with vhost-user support.
> >
> >
> > __________________________________________________________________
> > Set up 2 sets of huge pages with different sizes: one for vhost and another
> > for the guest VMs.
> >          Edit /etc/default/grub:
> >             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G hugepages=10 hugepagesz=2M hugepages=4096"
> >          # update-grub
> >        - Mount the huge pages into different directories:
> >           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
> >           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> > __________________________________________________________________
> >
> > At present we are facing an issue in testing a DPDK application on this
> > setup. In our scenario, we have a DPDK instance launched on top of the
> > OpenStack Kilo compute node and are not able to assign a DHCP IP from the
> > controller.
> >
> >
> > Thanks & Regards
> > Abhijeet Karve
> >
> >
> 

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2016-02-05 16:20 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <OF9062862D.67700483-ON65257F4A.004D1715-65257F4A.004D1738@LocalDomain>
2016-02-04 14:54 ` [dpdk-dev] Fw: RE: DPDK vhostuser with vxlan# Does issue with igb_uio in ovs+dpdk setup Abhijeet Karve
2016-02-04 17:37   ` Chandran, Sugesh
2016-02-05 15:16     ` Abhijeet Karve

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).