* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
@ 2016-01-26 19:14 Abhijeet Karve
0 siblings, 0 replies; 3+ messages in thread
From: Abhijeet Karve @ 2016-01-26 19:14 UTC (permalink / raw)
To: Czesnowicz, Przemyslaw; +Cc: dev, discuss
Hi Przemek,
Thank you for your response; it really provided us a breakthrough.
After setting up DPDK on the compute node for stable/kilo, we are trying to set up an OpenStack stable/liberty all-in-one setup. At present we are not able to get IP allocation for the vhost-type instances through DHCP. We also tried assigning IPs to them manually, but inter-VM communication is not happening either.
#neutron agent-list
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
ovs-vsctl show output#
--------------------------------------------------------
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
        Port phy-br-dpdk
            Interface phy-br-dpdk
                type: patch
                options: {peer=int-br-dpdk}
    Bridge br-int
        fail_mode: secure
        Port "vhufa41e799-f2"
            tag: 5
            Interface "vhufa41e799-f2"
                type: dpdkvhostuser
        Port int-br-dpdk
            Interface int-br-dpdk
                type: patch
                options: {peer=phy-br-dpdk}
        Port "tap4e19f8e1-59"
            tag: 5
            Interface "tap4e19f8e1-59"
                type: internal
        Port "vhu05734c49-3b"
            tag: 5
            Interface "vhu05734c49-3b"
                type: dpdkvhostuser
        Port "vhu10c06b4d-84"
            tag: 5
            Interface "vhu10c06b4d-84"
                type: dpdkvhostuser
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhue169c581-ef"
            tag: 5
            Interface "vhue169c581-ef"
                type: dpdkvhostuser
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
                error: "could not open network device br-tun (Invalid argument)"
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.4.0"
--------------------------------------------------------
ovs-ofctl dump-flows br-int#
--------------------------------------------------------
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron#
--------------------------------------------------------
It would be really great for us if we could get any hint to overcome this inter-VM & DHCP communication issue.
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 01/04/2016 07:54 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing
You should be able to clone networking-ovs-dpdk, switch to the kilo branch, and run
python setup.py install
in the root of networking-ovs-dpdk, that should install agent and mech driver.
Then you would need to enable mech driver (ovsdpdk) on the controller in the /etc/neutron/plugins/ml2/ml2_conf.ini
And run the right agent on the computes (networking-ovs-dpdk-agent).
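For reference, the controller-side change would look something like this (a sketch only; the ovsdpdk mechanism driver name comes from networking-ovs-dpdk, and the second driver entry is an illustrative placeholder):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (controller) - sketch
[ml2]
# prepend the ovsdpdk mechanism driver installed by networking-ovs-dpdk
mechanism_drivers = ovsdpdk,openvswitch
```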
There should be pip packages of networking-ovs-dpdk available shortly, I'll let you know when that happens.

Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing
Hi Przemek,
Thank you so much for your quick response.
The guide you suggested (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) is for OpenStack vhost-user installations with devstack.
Is there any reference for including the ovs-dpdk mechanism driver in the OpenStack Ubuntu distribution we are following for a compute+controller node setup?
With our current approach (setting up OpenStack Kilo interactively, replacing OVS with a DPDK-enabled OVS, then creating an instance in OpenStack and passing its id to the QEMU command line, which passes the vhost-user sockets to the instance to enable the DPDK libraries in it), we are facing the issues listed below:
1. We created a flavor m1.hugepages which is backed by hugepage memory, but we are unable to spawn an instance with this flavor; we get an error like: "No matching hugetlbfs for the number of hugepages assigned to the flavor."
2. When we pass the socket info to instances via qemu manually, the instances created are not persistent.
Now, as you suggested, we are looking into enabling the ovsdpdk ml2 mechanism driver and agent in our OpenStack Ubuntu distribution.
We would really appreciate any help or reference with an explanation.
We are using compute + controller node setup and we are using following software platform on compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________
Thanks,
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 06:32 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
I haven't tried that approach; I'm not sure it would work, and it seems clunky.

If you enable the ovsdpdk ml2 mechanism driver and agent, all of that (adding ports to OVS with the right type, passing the sockets to qemu) would be done by OpenStack.

Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
Hi Przemek,
Thank you so much for sharing the reference guide.
I would appreciate it if you could clear up one doubt.
At present we are setting up OpenStack Kilo interactively and then replacing OVS with a DPDK-enabled OVS.
Once the above setup is done, we create an instance in OpenStack and pass that instance id to the QEMU command line, which passes the vhost-user sockets to the instance, enabling the DPDK libraries in it.
Isn't this the correct way of integrating ovs-dpdk with OpenStack?
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 05:27 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
Hi Abhijeet,
For Kilo you need to use the ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack.
The guide you are following only talks about running ovs-dpdk, not how it should be integrated with OpenStack.
Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
Best regards
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
Hi Przemek,
We have configured the accelerated data path between a physical interface and the VM using Open vSwitch netdev-dpdk with vhost-user support. A VM created with this special data path and vhost library is what I am calling a DPDK instance.
If we assign an IP manually to a newly created Cirros VM instance, we are able to make two VMs communicate on the same compute node. Otherwise no IP is associated through DHCP, even though DHCP runs on the compute node itself.
Yes, it is a compute + controller node setup, and we are using the following software platform on the compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________
We are following the Intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
When doing "ovs-vsctl show" on the compute node, it shows the below output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "qvo0ae19a43-b6"
            tag: 2
            Interface "qvo0ae19a43-b6"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo31c89856-a2"
            tag: 1
            Interface "qvo31c89856-a2"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo97fef28a-ec"
            tag: 2
            Interface "qvo97fef28a-ec"
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "vhost-user-2"
            Interface "vhost-user-2"
                type: dpdkvhostuser
        Port "vhost-user-0"
            Interface "vhost-user-0"
                type: dpdkvhostuser
        Port "vhost-user-1"
            Interface "vhost-user-1"
                type: dpdkvhostuser
    ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________
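For reference, the dpdk and dpdkvhostuser ports shown in the output above (dpdk0, vhost-user-0, etc.) are typically added along these lines (a sketch following the Intel guide's conventions; bridge and port names taken from the output above):

```shell
# br0 must use the userspace (netdev) datapath for DPDK
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# attach the physical NIC that was bound to a DPDK-compatible driver
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
# one vhost-user socket per VM; QEMU is pointed at the created socket
ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser
```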
Open flows output in the bridges on the compute node is as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________
Further, we don't know what network changes (packet-flow additions), if any, are required for associating an IP address through DHCP.
We would really appreciate clarity on the DHCP flow establishment.
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <mark.d.gray@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>
Date: 12/15/2015 09:13 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
Hi Abhijeet,
If you answer below questions it will help me understand your problem.
What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least compute+ controller setup)
Best regards
Przemek
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up Open vSwitch netdev-dpdk
> with vhostuser support.
>
> _________________________________________________________________________
> Set up 2 sets of huge pages with different sizes: one for vhost and another
> for the guest VM.
>         - Edit /etc/default/grub:
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G hugepages=10 hugepagesz=2M hugepages=4096"
>         # update-grub
>         - Mount the huge pages into different directories:
>             # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>             # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> _________________________________________________________________________
>
> At present we are facing an issue in testing a DPDK application on this setup.
> In our scenario, we have a DPDK instance launched on top of the OpenStack Kilo
> compute node, but it is not able to get a DHCP IP from the controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
[attachment "nova-scheduler.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "nova-compute.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "neutron-server.log" removed by Abhijeet Karve/AHD/TCS]
From aconole@redhat.com Tue Jan 26 20:25:49 2016
From: Aaron Conole <aconole@redhat.com>
To: dev@dpdk.org
In-Reply-To: <1449850823-29017-1-git-send-email-aconole@redhat.com>
Date: Tue, 26 Jan 2016 14:25:48 -0500
Message-ID: <f7ta8nsxhvn.fsf@redhat.com>
Subject: Re: [dpdk-dev] [PATCH 2.3] tools/dpdk_nic_bind.py: Verbosely warn
the user on bind
Ping... This patch has been sitting^Hrotting for a bit over a month.
> DPDK ports are only detected during the EAL initialization. After that, any
> new DPDK ports which are bound will not be visible to the application.
>
> The dpdk_nic_bind.py can be a bit more helpful to let users know that DPDK
> enabled applications will not find rebound ports until after they have been
> restarted.
>
> Signed-off-by: Aaron Conole <aconole@redhat.com>
>
> ---
> tools/dpdk_nic_bind.py | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/tools/dpdk_nic_bind.py b/tools/dpdk_nic_bind.py
> index f02454e..ca39389 100755
> --- a/tools/dpdk_nic_bind.py
> +++ b/tools/dpdk_nic_bind.py
> @@ -344,8 +344,10 @@ def bind_one(dev_id, driver, force):
> dev["Driver_str"] = "" # clear driver string
>
> # if we are binding to one of DPDK drivers, add PCI id's to that driver
> + bDpdkDriver = False
> if driver in dpdk_drivers:
> filename = "/sys/bus/pci/drivers/%s/new_id" % driver
> + bDpdkDriver = True
> try:
> f = open(filename, "w")
> except:
> @@ -371,12 +373,18 @@ def bind_one(dev_id, driver, force):
> try:
> f.write(dev_id)
> f.close()
> + if bDpdkDriver:
> + print "Device rebound to dpdk driver."
> + print "Remember to restart any application that will use this port."
> except:
> # for some reason, closing dev_id after adding a new PCI ID to new_id
> # results in IOError. however, if the device was successfully bound,
> # we don't care for any errors and can safely ignore IOError
> tmp = get_pci_device_details(dev_id)
> if "Driver_str" in tmp and tmp["Driver_str"] == driver:
> + if bDpdkDriver:
> + print "Device rebound to dpdk driver."
> + print "Remember to restart any application that will use this port."
> return
> print "Error: bind failed for %s - Cannot bind to driver %s" % (dev_id, driver)
> if saved_driver is not None: # restore any previous driver
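For context, the warning added by this patch fires when rebinding a NIC with the script, e.g. (usage sketch; the PCI address is an example placeholder):

```shell
# bind a NIC to a DPDK-compatible driver (PCI address is an example)
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:01:00.0
# check the current driver bindings
./tools/dpdk_nic_bind.py --status
```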
* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
2016-01-27 11:41 ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue Czesnowicz, Przemyslaw
@ 2016-01-27 16:22 ` Abhijeet Karve
0 siblings, 0 replies; 3+ messages in thread
From: Abhijeet Karve @ 2016-01-27 16:22 UTC (permalink / raw)
To: Czesnowicz, Przemyslaw; +Cc: dev, discuss
Hi Przemek,
Thanks for the quick response. We are now able to get DHCP IPs for 2
vhostuser instances, and they are able to ping each other. The issue was a
bug in the Cirros 0.3.0 image we were using in OpenStack; after using the
0.3.1 image as given in the URL
(https://www.redhat.com/archives/rhos-list/2013-August/msg00032.html), we
are able to get IPs in the vhostuser VM instances.
As per our understanding, the packet flow across the DPDK datapath is:
the vhostuser ports are connected to the br-int bridge, which is patched
to the br-dpdk bridge, where our physical network interface (NIC) is
attached as the dpdk0 port.
So for testing the flow, we have to connect that physical NIC to an
external packet generator (e.g. Ixia, iperf) and run the testpmd
application in the vhostuser VM, right?
Is it required to add any flows to the bridge configurations (either
br-int or br-dpdk)?
Thanks & Regards
Abhijeet Karve
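For the test described above, a minimal testpmd invocation inside the VM would look something like this (a sketch for the DPDK 2.0 command-line format; the core mask and channel count are example values that depend on the guest):

```shell
# run testpmd on 2 cores (mask 0x3), 4 memory channels, interactive mode
testpmd -c 0x3 -n 4 -- -i
# then at the testpmd> prompt:
#   start                  (begin forwarding)
#   show port stats all    (check rx/tx counters)
```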
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org"
<discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 01/27/2016 05:11 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Inter-VM communication & IP allocation through DHCP issue
Hi Abhijeet,
It seems you are almost there!
When booting the VMs, do you request hugepage memory for them (by setting
hw:mem_page_size=large in the flavor extra_spec)?
If not then please do; if yes then please look into the libvirt log files
for the VMs (in /var/log/libvirt/qemu/instance-xxx), I think there could
be a clue.
Regards
Przemek
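The flavor extra spec mentioned above can be set from the CLI, e.g. (a sketch; m1.hugepages is the flavor name from earlier in the thread):

```shell
# request hugepage-backed guest memory for instances using this flavor
nova flavor-key m1.hugepages set hw:mem_page_size=large
```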
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Monday, January 25, 2016 6:13 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Inter-VM communication & IP allocation through DHCP issue
Hi Przemek,
Thank you for your response, It really provided us breakthrough.
After setting up DPDK on compute node for stable/kilo, We are trying to
set up Openstack stable/liberty all-in-one setup, At present we are not
able to get the IP allocation for the vhost type instances through DHCP.
Also we tried assigning IP's manually to them but the inter-VM
communication also not happening,
#neutron agent-list
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host |
alive | admin_state_up | binary |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent |
nfv-dpdk-devstack | :-) | True | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent |
nfv-dpdk-devstack | :-) | True | neutron-l3-agent |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent |
nfv-dpdk-devstack | :-) | True | neutron-dhcp-agent |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent |
nfv-dpdk-devstack | :-) | True | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent |
nfv-dpdk-devstack | :-) | True | neutron-metadata-agent |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent |
nfv-dpdk-devstack | xxx | True | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
ovs-vsctl show output#
--------------------------------------------------------
Bridge br-dpdk
Port br-dpdk
Interface br-dpdk
type: internal
Port phy-br-dpdk
Interface phy-br-dpdk
type: patch
options: {peer=int-br-dpdk}
Bridge br-int
fail_mode: secure
Port "vhufa41e799-f2"
tag: 5
Interface "vhufa41e799-f2"
type: dpdkvhostuser
Port int-br-dpdk
Interface int-br-dpdk
type: patch
options: {peer=phy-br-dpdk}
Port "tap4e19f8e1-59"
tag: 5
Interface "tap4e19f8e1-59"
type: internal
Port "vhu05734c49-3b"
tag: 5
Interface "vhu05734c49-3b"
type: dpdkvhostuser
Port "vhu10c06b4d-84"
tag: 5
Interface "vhu10c06b4d-84"
type: dpdkvhostuser
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "vhue169c581-ef"
tag: 5
Interface "vhue169c581-ef"
type: dpdkvhostuser
Port br-int
Interface br-int
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
error: "could not open network device br-tun (Invalid
argument)"
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.4.0"
--------------------------------------------------------
ovs-ofctl dump-flows br-int#
--------------------------------------------------------
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0,
n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0,
n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0,
n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0,
n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron#
--------------------------------------------------------
Also attaching Neutron-server, nova-compute & nova-scheduler logs.
We would be really grateful for any hint that helps us overcome this
inter-VM & DHCP communication issue.
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 01/04/2016 07:54 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Getting memory backing issues with qemu parameter passing
You should be able to clone networking-ovs-dpdk, switch to kilo branch,
and run
python setup.py install
in the root of networking-ovs-dpdk, that should install agent and mech
driver.
Then you would need to enable mech driver (ovsdpdk) on the controller in
the /etc/neutron/plugins/ml2/ml2_conf.ini
And run the right agent on the computes (networking-ovs-dpdk-agent).
There should be pip packages of networking-ovs-dpdk available shortly,
I’ll let you know when that happens.
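A rough sketch of the steps described above (the repository URL comes from the guide linked later in the thread; the agent's config-file arguments are assumptions, so check them against your deployment):

```shell
# Install the kilo-branch agent and mechanism driver from source
git clone https://github.com/openstack/networking-ovs-dpdk.git
cd networking-ovs-dpdk
git checkout stable/kilo
python setup.py install

# On the controller, add the driver to /etc/neutron/plugins/ml2/ml2_conf.ini:
#   [ml2]
#   mechanism_drivers = ovsdpdk
# On each compute node, run the matching agent instead of the stock OVS agent:
networking-ovs-dpdk-agent --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
```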
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Getting memory backing issues with qemu parameter passing
Hi Przemek,
Thank you so much for your quick response.
The guide (
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
) which you suggested is for openstack vhost-user installations with
devstack.
Is there any reference for including the ovs-dpdk mechanism driver in the
openstack Ubuntu distribution which we are following for our
compute+controller node setup?
We are facing the issues listed below with our current approach: setting
up openstack kilo interactively, replacing ovs with ovs-dpdk, creating an
instance in openstack, and passing that instance id to the QEMU command
line, which in turn passes the vhost-user sockets to the instance to
enable the DPDK libraries in it.
1. We created a flavor m1.hugepages which is backed by hugepage memory,
but we are unable to spawn an instance with this flavor – we get an error
like: No matching hugetlbfs for the number of hugepages assigned to the
flavor.
2. When we pass the socket info to instances via qemu manually, the
instances created are not persistent.
Now, as you suggested, we are looking into enabling the ovsdpdk ml2
mechanism driver and agent in our openstack ubuntu distribution.
We would really appreciate any help or reference with an explanation.
We are using compute + controller node setup and we are using following
software platform on compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________
Thanks,
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 06:32 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
I haven’t tried that approach and am not sure it would work; it seems
clunky.
If you enable ovsdpdk ml2 mechanism driver and agent all of that (add
ports to ovs with the right type, pass the sockets to qemu) would be done
by OpenStack.
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
Hi Przemek,
Thank you so much for sharing the ref guide.
We would appreciate it if you could clear up one doubt.
At present we are setting up openstack kilo interactively and then
replacing ovs with ovs-dpdk.
Once the above setup is done, we create an instance in openstack and pass
that instance id to the QEMU command line, which in turn passes the
vhost-user sockets to the instance, enabling the DPDK libraries in it.
Isn't this the correct way of integrating ovs-dpdk with openstack?
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 05:27 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
HI Abhijeet,
For Kilo you need to use ovsdpdk mechanism driver and a matching agent to
integrate ovs-dpdk with OpenStack.
The guide you are following only talks about running ovs-dpdk not how it
should be integrated with OpenStack.
Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
Best regards
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
Hi Przemek,
We have configured the accelerated data path between a physical interface
and the VM using openvswitch netdev-dpdk with vhost-user support. A VM
created with this special data path and vhost library is what I am
calling a DPDK instance.
If we assign IPs manually to the newly created Cirros VM instances, we
are able to make two VMs on the same compute node communicate. Otherwise
no IP is associated through DHCP, even though the DHCP agent runs on the
same compute node.
Yes it's a compute + controller node setup and we are using following
software platform on compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________
We are following the intel guide
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
When doing "ovs-vsctl show" in compute node, it shows below output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-int
fail_mode: secure
Port "qvo0ae19a43-b6"
tag: 2
Interface "qvo0ae19a43-b6"
Port br-int
Interface br-int
type: internal
Port "qvo31c89856-a2"
tag: 1
Interface "qvo31c89856-a2"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "qvo97fef28a-ec"
tag: 2
Interface "qvo97fef28a-ec"
Bridge br-dpdk
Port br-dpdk
Interface br-dpdk
type: internal
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "dpdk0"
Interface "dpdk0"
type: dpdk
Port "vhost-user-2"
Interface "vhost-user-2"
type: dpdkvhostuser
Port "vhost-user-0"
Interface "vhost-user-0"
type: dpdkvhostuser
Port "vhost-user-1"
Interface "vhost-user-1"
type: dpdkvhostuser
ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________
Open flows output in bridge in compute node are as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~#
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________
Further, we don't know what network changes (packet-flow additions), if
any, are required for associating an IP address through DHCP.
We would really appreciate clarity on how the DHCP flows are established.
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <
mark.d.gray@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>
Date: 12/15/2015 09:13 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
Hi Abhijeet,
If you answer the questions below, it will help me understand your problem.
What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm
assuming this is at least compute+ controller setup)
Best regards
Przemek
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully setup openvswitch netdev-dpdk
> with vhostuser support.
>
> __________________________________________________________
> _______________________________________________________
> Setup 2 sets of huge pages with different sizes. One for Vhost and
another
> for Guest VM.
> Edit /etc/default/grub.
> GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
> # update-grub
> - Mount the huge pages into different directory.
> # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
> # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
>
> At present we are facing an issue in Testing DPDK application on setup.
In our
> scenario, We have DPDK instance launched on top of the Openstack Kilo
> compute node. Not able to assign DHCP IP from controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If
you
> are not the intended recipient, any dissemination, use, review,
distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received
this
> communication in error, please notify us by reply e-mail or telephone
and
> immediately and permanently delete the message and any attachments.
> Thank you
>
* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
[not found] ` <OF7B9ED0F7.5B3B2C67-ON65257F45.0055D550-65257F45.005E9455@tcs.com>
@ 2016-01-27 11:41 ` Czesnowicz, Przemyslaw
2016-01-27 16:22 ` Abhijeet Karve
0 siblings, 1 reply; 3+ messages in thread
From: Czesnowicz, Przemyslaw @ 2016-01-27 11:41 UTC (permalink / raw)
To: Abhijeet Karve; +Cc: dev, discuss
Hi Abhijeet,
It seems you are almost there!
When booting the VMs, do you request hugepage memory for them (by setting hw:mem_page_size=large in the flavor extra_spec)?
If not then please do, if yes then please look into libvirt logfiles for the VM’s (in /var/log/libvirt/qemu/instance-xxx), I think there could be a clue.
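A sketch of the flavor change suggested above, using the Kilo-era nova CLI (the flavor name, sizes, image, and VM name are illustrative, not from the thread):

```shell
# Create a flavor whose guest memory is backed by hugepages
# (hw:mem_page_size=large lets nova pick a 2M or 1G page size)
nova flavor-create m1.hugepages auto 2048 20 2
nova flavor-key m1.hugepages set hw:mem_page_size=large
# Boot a test VM with it; check /var/log/libvirt/qemu/ if it fails
nova boot --flavor m1.hugepages --image cirros-test vm-hugepage-test
```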
Regards
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Monday, January 25, 2016 6:13 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
Hi Przemek,
Thank you for your response; it really gave us a breakthrough.
After setting up DPDK on the compute node for stable/kilo, we are trying to set up an Openstack stable/liberty all-in-one setup. At present we are not able to get IP allocation for the vhost-type instances through DHCP. We also tried assigning IPs to them manually, but then inter-VM communication is not happening either.
#neutron agent-list
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent | nfv-dpdk-devstack | :-) | True | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent | nfv-dpdk-devstack | :-) | True | neutron-l3-agent |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent | nfv-dpdk-devstack | :-) | True | neutron-dhcp-agent |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-) | True | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent | nfv-dpdk-devstack | :-) | True | neutron-metadata-agent |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx | True | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
ovs-vsctl show output#
--------------------------------------------------------
Bridge br-dpdk
Port br-dpdk
Interface br-dpdk
type: internal
Port phy-br-dpdk
Interface phy-br-dpdk
type: patch
options: {peer=int-br-dpdk}
Bridge br-int
fail_mode: secure
Port "vhufa41e799-f2"
tag: 5
Interface "vhufa41e799-f2"
type: dpdkvhostuser
Port int-br-dpdk
Interface int-br-dpdk
type: patch
options: {peer=phy-br-dpdk}
Port "tap4e19f8e1-59"
tag: 5
Interface "tap4e19f8e1-59"
type: internal
Port "vhu05734c49-3b"
tag: 5
Interface "vhu05734c49-3b"
type: dpdkvhostuser
Port "vhu10c06b4d-84"
tag: 5
Interface "vhu10c06b4d-84"
type: dpdkvhostuser
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "vhue169c581-ef"
tag: 5
Interface "vhue169c581-ef"
type: dpdkvhostuser
Port br-int
Interface br-int
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
error: "could not open network device br-tun (Invalid argument)"
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.4.0"
--------------------------------------------------------
ovs-ofctl dump-flows br-int#
--------------------------------------------------------
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron#
--------------------------------------------------------
Also attaching Neutron-server, nova-compute & nova-scheduler logs.
We would be really grateful for any hint that helps us overcome this inter-VM & DHCP communication issue.
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 01/04/2016 07:54 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing
________________________________
You should be able to clone networking-ovs-dpdk, switch to kilo branch, and run
python setup.py install
in the root of networking-ovs-dpdk, that should install agent and mech driver.
Then you would need to enable mech driver (ovsdpdk) on the controller in the /etc/neutron/plugins/ml2/ml2_conf.ini
And run the right agent on the computes (networking-ovs-dpdk-agent).
There should be pip packages of networking-ovs-dpdk available shortly, I’ll let you know when that happens.
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing
Hi Przemek,
Thank you so much for your quick response.
The guide (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) which you suggested is for openstack vhost-user installations with devstack.
Is there any reference for including the ovs-dpdk mechanism driver in the openstack Ubuntu distribution which we are following for our compute+controller node setup?
We are facing the issues listed below with our current approach: setting up openstack kilo interactively, replacing ovs with ovs-dpdk, creating an instance in openstack, and passing that instance id to the QEMU command line, which in turn passes the vhost-user sockets to the instance to enable the DPDK libraries in it.
1. We created a flavor m1.hugepages which is backed by hugepage memory, but we are unable to spawn an instance with this flavor – we get an error like: No matching hugetlbfs for the number of hugepages assigned to the flavor.
2. When we pass the socket info to instances via qemu manually, the instances created are not persistent.
Now, as you suggested, we are looking into enabling the ovsdpdk ml2 mechanism driver and agent in our openstack ubuntu distribution.
We would really appreciate any help or reference with an explanation.
We are using compute + controller node setup and we are using following software platform on compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________
Thanks,
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 06:32 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________
I haven’t tried that approach and am not sure it would work; it seems clunky.
If you enable ovsdpdk ml2 mechanism driver and agent all of that (add ports to ovs with the right type, pass the sockets to qemu) would be done by OpenStack.
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
Hi Przemek,
Thank you so much for sharing the ref guide.
We would appreciate it if you could clear up one doubt.
At present we are setting up openstack kilo interactively and then replacing ovs with ovs-dpdk.
Once the above setup is done, we create an instance in openstack and pass that instance id to the QEMU command line, which in turn passes the vhost-user sockets to the instance, enabling the DPDK libraries in it.
Isn't this the correct way of integrating ovs-dpdk with openstack?
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 05:27 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________
HI Abhijeet,
For Kilo you need to use ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack.
The guide you are following only talks about running ovs-dpdk not how it should be integrated with OpenStack.
Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
Best regards
Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
Hi Przemek,
We have configured the accelerated data path between a physical interface and the VM using openvswitch netdev-dpdk with vhost-user support. A VM created with this special data path and vhost library is what I am calling a DPDK instance.
If we assign IPs manually to the newly created Cirros VM instances, we are able to make two VMs on the same compute node communicate. Otherwise no IP is associated through DHCP, even though the DHCP agent runs on the same compute node.
Yes it's a compute + controller node setup and we are using following software platform on compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________
We are following the intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
When doing "ovs-vsctl show" in compute node, it shows below output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-int
fail_mode: secure
Port "qvo0ae19a43-b6"
tag: 2
Interface "qvo0ae19a43-b6"
Port br-int
Interface br-int
type: internal
Port "qvo31c89856-a2"
tag: 1
Interface "qvo31c89856-a2"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "qvo97fef28a-ec"
tag: 2
Interface "qvo97fef28a-ec"
Bridge br-dpdk
Port br-dpdk
Interface br-dpdk
type: internal
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "dpdk0"
Interface "dpdk0"
type: dpdk
Port "vhost-user-2"
Interface "vhost-user-2"
type: dpdkvhostuser
Port "vhost-user-0"
Interface "vhost-user-0"
type: dpdkvhostuser
Port "vhost-user-1"
Interface "vhost-user-1"
type: dpdkvhostuser
ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________
Open flows output in bridge in compute node are as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~#
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________
Further, we don't know what network changes (packet-flow additions), if any, are required for associating an IP address through DHCP.
We would really appreciate clarity on how the DHCP flows are established.
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <mark.d.gray@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>
Date: 12/15/2015 09:13 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________
Hi Abhijeet,
If you answer the questions below, it will help me understand your problem.
What do you mean by a DPDK instance?
Are you able to communicate with other VMs on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least a compute + controller setup.)
Best regards
Przemek
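[One way to check the last point, sketched here with placeholder namespace and interface names — the actual qdhcp namespace and tap device come from `ip netns list` on the controller:]

```shell
# List network namespaces; Neutron's DHCP agent creates one per network,
# named qdhcp-<network-id>:
ip netns list

# Capture DHCP traffic (UDP ports 67/68) on the DHCP server's tap
# interface inside that namespace. Names below are placeholders.
ip netns exec qdhcp-<network-id> \
    tcpdump -nei tap<...> 'udp port 67 or udp port 68'
```

If no discovers appear here while the VM is booting, the requests are being dropped before they reach the controller.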
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
>
> _________________________________________________________________
> Set up two pools of huge pages with different sizes: one for vhost and
> another for the guest VM.
> Edit /etc/default/grub.
> GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
> # update-grub
> - Mount the huge pages into different directories.
> # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
> # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> _________________________________________________________________
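[After rebooting with the grub settings above, the reservation can be verified from /proc; this is a generic check, not specific to this setup:]

```shell
# Confirm the kernel actually reserved the hugepage pools requested
# on the boot command line:
grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo

# Confirm the hugetlbfs mounts (e.g. /mnt/huge_2M, /mnt/huge_1G) exist:
grep hugetlbfs /proc/mounts || echo "no hugetlbfs mounts yet"
```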
>
> At present we are facing an issue in testing a DPDK application on this
> setup. In our scenario, we have a DPDK instance launched on top of the
> OpenStack Kilo compute node, but we are not able to assign it a DHCP IP
> from the controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
^ permalink raw reply [flat|nested] 3+ messages in thread
end of thread, other threads:[~2016-01-27 16:22 UTC | newest]
Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-26 19:14 [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue Abhijeet Karve
-- strict thread matches above, loose matches on Subject: below --
2015-12-01 6:13 [dpdk-dev] DPDK OVS on Ubuntu 14.04 Abhijeet Karve
2015-12-01 14:46 ` Polehn, Mike A
2015-12-02 14:52 ` Gray, Mark D
2015-12-15 5:55 ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser Abhijeet Karve
2015-12-15 15:42 ` Czesnowicz, Przemyslaw
2015-12-16 9:36 ` Abhijeet Karve
2015-12-17 11:57 ` Czesnowicz, Przemyslaw
2015-12-17 12:40 ` Abhijeet Karve
2015-12-17 13:01 ` Czesnowicz, Przemyslaw
2015-12-24 17:41 ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing Abhijeet Karve
2016-01-04 14:24 ` Czesnowicz, Przemyslaw
[not found] ` <OF7B9ED0F7.5B3B2C67-ON65257F45.0055D550-65257F45.005E9455@tcs.com>
2016-01-27 11:41 ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue Czesnowicz, Przemyslaw
2016-01-27 16:22 ` Abhijeet Karve