DPDK patches and discussions
* [dpdk-dev] DPDK OVS on Ubuntu 14.04
@ 2015-12-01  6:13 Abhijeet Karve
  2015-12-01 14:46 ` Polehn, Mike A
  0 siblings, 1 reply; 14+ messages in thread
From: Abhijeet Karve @ 2015-12-01  6:13 UTC (permalink / raw)
  To: dev; +Cc: bhavya.addep

[-- Attachment #1: Type: text/plain, Size: 3575 bytes --]

Dear All,


We are trying to install DPDK-enabled OVS on top of OpenStack Juno on a single 
Ubuntu 14.04 server, following the steps described here:
 
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
 
During execution we are running into an issue with the ovs-vswitchd service: 
it hangs during startup.
_________________________________________________________________________

nfv-dpdk@nfv-dpdk:~$ tail -f /var/log/openvswitch/ovs-vswitchd.log
2015-11-24T10:54:34.036Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2015-11-24T10:54:34.036Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2015-11-24T10:54:34.064Z|00008|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.90
2015-11-24T11:03:42.957Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2015-11-24T11:03:42.958Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 0
2015-11-24T11:03:42.958Z|00004|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 1
2015-11-24T11:03:42.958Z|00005|ovs_numa|INFO|Discovered 2 NUMA nodes and 48 CPU cores
2015-11-24T11:03:42.958Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2015-11-24T11:03:42.958Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2015-11-24T11:03:42.961Z|00008|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.90
_________________________________________________________________________

Also attaching the output (Hugepage.txt) of:

./ovs-vswitchd --dpdk -c 0x0FF8 -n 4 --socket-mem 1024,0 -- --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/oppenvswitch/ovs-vswitchd.pid
 
- We tried setting up "echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", 
but it did not succeed.
Can anyone please point out anything we might be missing that could cause 
ovs-vswitchd to get stuck while starting?
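A quick way to check whether DPDK actually has hugepage memory available before starting ovs-vswitchd is to read the kernel's hugepage counters (a generic Linux sketch, independent of this particular setup):

```shell
# Summarize hugepage state; HugePages_Free must be non-zero for
# DPDK's --socket-mem reservation to succeed.
grep -i '^huge' /proc/meminfo

# Per-size pools (2 MB and, where supported, 1 GB) live under sysfs.
for f in /sys/kernel/mm/hugepages/*/nr_hugepages; do
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
```

If HugePages_Total is 0, a --socket-mem 1024,0 reservation cannot succeed, which would match the hang described above.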
 
Also, when we create a VM in OpenStack with DPDK OVS, dpdkvhost-user type 
interfaces are created automatically. If these interfaces get mapped to the 
regular br-int bridge rather than the DPDK bridge br0, does this mean that 
we have successfully enabled DPDK with the netdev datapath?

 

We would really appreciate any advice you can offer.

Thanks & Regards,
Abhijeet Karve

=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you



[-- Attachment #2: image002.png --]
[-- Type: image/png, Size: 108749 bytes --]

[-- Attachment #3: Hugepage issue.txt --]
[-- Type: text/plain, Size: 18156 bytes --]

root@nfv-dpdk:/opt/stack/ovs/vswitchd# ./ovs-vswitchd --dpdk -c 0x0FF8 -n 4 --socket-mem 1024,0 -- --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/oppenvswitch/ovs-vswitchd.pid
root@nfv-dpdk:/opt/stack/ovs/vswitchd# killall ovs-vswitchd
ovs-vswitchd: no process found
root@nfv-dpdk:/opt/stack/ovs/vswitchd# ./ovs-vswitchd --dpdk -c 0x0FF8 -n 4 --socket-mem 1024,0 -- --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/oppenvswitch/ovs-vswitchd.pid
2015-11-24T13:14:09Z|00001|dpdk|INFO|No -vhost_sock_dir provided - defaulting to /var/run/openvswitch
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 8 on socket 0
EAL: Detected lcore 7 as core 9 on socket 0
EAL: Detected lcore 8 as core 10 on socket 0
EAL: Detected lcore 9 as core 11 on socket 0
EAL: Detected lcore 10 as core 12 on socket 0
EAL: Detected lcore 11 as core 13 on socket 0
EAL: Detected lcore 12 as core 0 on socket 1
EAL: Detected lcore 13 as core 1 on socket 1
EAL: Detected lcore 14 as core 2 on socket 1
EAL: Detected lcore 15 as core 3 on socket 1
EAL: Detected lcore 16 as core 4 on socket 1
EAL: Detected lcore 17 as core 5 on socket 1
EAL: Detected lcore 18 as core 8 on socket 1
EAL: Detected lcore 19 as core 9 on socket 1
EAL: Detected lcore 20 as core 10 on socket 1
EAL: Detected lcore 21 as core 11 on socket 1
EAL: Detected lcore 22 as core 12 on socket 1
EAL: Detected lcore 23 as core 13 on socket 1
EAL: Detected lcore 24 as core 0 on socket 0
EAL: Detected lcore 25 as core 1 on socket 0
EAL: Detected lcore 26 as core 2 on socket 0
EAL: Detected lcore 27 as core 3 on socket 0
EAL: Detected lcore 28 as core 4 on socket 0
EAL: Detected lcore 29 as core 5 on socket 0
EAL: Detected lcore 30 as core 8 on socket 0
EAL: Detected lcore 31 as core 9 on socket 0
EAL: Detected lcore 32 as core 10 on socket 0
EAL: Detected lcore 33 as core 11 on socket 0
EAL: Detected lcore 34 as core 12 on socket 0
EAL: Detected lcore 35 as core 13 on socket 0
EAL: Detected lcore 36 as core 0 on socket 1
EAL: Detected lcore 37 as core 1 on socket 1
EAL: Detected lcore 38 as core 2 on socket 1
EAL: Detected lcore 39 as core 3 on socket 1
EAL: Detected lcore 40 as core 4 on socket 1
EAL: Detected lcore 41 as core 5 on socket 1
EAL: Detected lcore 42 as core 8 on socket 1
EAL: Detected lcore 43 as core 9 on socket 1
EAL: Detected lcore 44 as core 10 on socket 1
EAL: Detected lcore 45 as core 11 on socket 1
EAL: Detected lcore 46 as core 12 on socket 1
EAL: Detected lcore 47 as core 13 on socket 1
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 48 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Searching for IVSHMEM devices...
EAL: No IVSHMEM configuration found! 
EAL: Setting up memory...
EAL: Ask a virtual area of 0x8a00000 bytes
EAL: Virtual area found at 0x7f2cf6800000 (size = 0x8a00000)
EAL: Ask a virtual area of 0x4800000 bytes
EAL: Virtual area found at 0x7f2d1ba00000 (size = 0x4800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d1b600000 (size = 0x200000)
EAL: Ask a virtual area of 0x7800000 bytes
EAL: Virtual area found at 0x7f2d13c00000 (size = 0x7800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d13800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d13400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d13000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d12c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7f2cf5e00000 (size = 0x800000)
EAL: Ask a virtual area of 0x8800000 bytes
EAL: Virtual area found at 0x7f2ced400000 (size = 0x8800000)
EAL: Ask a virtual area of 0x6400000 bytes
EAL: Virtual area found at 0x7f2d0c600000 (size = 0x6400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0c200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0be00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0ba00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0b600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0b200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0ae00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0aa00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0a600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d0a200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d09e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d09a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2d09400000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d09000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d08c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d08800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d08400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d08000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d07c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d07800000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2d07200000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d06e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d06a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ced000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d06600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cecc00000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2cec600000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d06200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d05e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d05a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cec200000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2cebc00000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2d05400000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d05000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d04c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2ceb600000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d04800000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2ceb000000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d04400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d04000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d03c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ceac00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d03800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cea800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d03400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cea400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d03000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cea000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d02c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce9c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d02800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce9800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d02400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce9400000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2ce8e00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d02000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d01c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d01800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce8a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d01400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce8600000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2ce8000000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d01000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d00c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d00800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce7c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d00400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce7800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2d00000000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2ce7200000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cffc00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce6e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cff800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce6a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce6600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce6200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2cff400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce5e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce5a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce5600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2ce5200000 (size = 0x200000)
EAL: Ask a virtual area of 0x14800000 bytes
EAL: Virtual area found at 0x7f2cd0800000 (size = 0x14800000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7f2ccfe00000 (size = 0x800000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2ccf800000 (size = 0x400000)
EAL: Ask a virtual area of 0xc00000 bytes
EAL: Virtual area found at 0x7f2ccea00000 (size = 0xc00000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7f2cce000000 (size = 0x800000)
EAL: Ask a virtual area of 0x17a00000 bytes
EAL: Virtual area found at 0x7f2cb6400000 (size = 0x17a00000)
EAL: Ask a virtual area of 0x42200000 bytes
EAL: Virtual area found at 0x7f2c74000000 (size = 0x42200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2c73c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x11800000 bytes
EAL: Virtual area found at 0x7f2c62200000 (size = 0x11800000)
EAL: Ask a virtual area of 0x4800000 bytes
EAL: Virtual area found at 0x7f2c5d800000 (size = 0x4800000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2c5d200000 (size = 0x400000)
EAL: Ask a virtual area of 0x9200000 bytes
EAL: Virtual area found at 0x7f2c53e00000 (size = 0x9200000)
EAL: Ask a virtual area of 0x1800000 bytes
EAL: Virtual area found at 0x7f2c52400000 (size = 0x1800000)
EAL: Ask a virtual area of 0x1ee00000 bytes
EAL: Virtual area found at 0x7f2c33400000 (size = 0x1ee00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2c33000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f2c32c00000 (size = 0x200000)
EAL: Ask a virtual area of 0xa800000 bytes
EAL: Virtual area found at 0x7f2c28200000 (size = 0xa800000)
EAL: Ask a virtual area of 0x15200000 bytes
EAL: Virtual area found at 0x7f2c12e00000 (size = 0x15200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2c12800000 (size = 0x400000)
EAL: Requesting 512 pages of size 2MB from socket 0
EAL: TSC frequency is ~2596991 KHz
EAL: Master lcore 3 is ready (tid=21ffa700;cpuset=[3])
PMD: ENICPMD trace: rte_enic_pmd_init
EAL: lcore 5 is ready (tid=e49fe700;cpuset=[5])
EAL: lcore 4 is ready (tid=e51ff700;cpuset=[4])
EAL: lcore 9 is ready (tid=e29fa700;cpuset=[9])
EAL: lcore 6 is ready (tid=e41fd700;cpuset=[6])
EAL: lcore 10 is ready (tid=e21f9700;cpuset=[10])
EAL: lcore 7 is ready (tid=e39fc700;cpuset=[7])
EAL: lcore 11 is ready (tid=e19f8700;cpuset=[11])
EAL: lcore 8 is ready (tid=e31fb700;cpuset=[8])
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
Zone 0: name:<MALLOC_S0_HEAP_0>, phys:0x30400000, len:0xb00000, virt:0x7f2d1ba00000, socket_id:0, flags:0
Zone 1: name:<RG_MP_log_history>, phys:0x35400000, len:0x2080, virt:0x7f2d1b600000, socket_id:0, flags:0
Zone 2: name:<MP_log_history>, phys:0xe3ce00000, len:0x28a0c0, virt:0x7f2d09400000, socket_id:0, flags:0
2015-11-24T13:14:12Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2015-11-24T13:14:12Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 0
2015-11-24T13:14:12Z|00004|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 1
2015-11-24T13:14:12Z|00005|ovs_numa|INFO|Discovered 2 NUMA nodes and 48 CPU cores
2015-11-24T13:14:12Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2015-11-24T13:14:12Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2015-11-24T13:14:12Z|00008|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
2015-11-24T13:14:12Z|00009|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3
2015-11-24T13:14:12Z|00010|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids
2015-11-24T13:14:12Z|00011|dpif_netlink|ERR|Generic Netlink family 'ovs_datapath' does not exist. The Open vSwitch kernel module is probably not loaded.
2015-11-24T13:14:12Z|00012|dpif|WARN|failed to enumerate system datapaths: No such file or directory
2015-11-24T13:14:12Z|00013|dpif|WARN|failed to create datapath ovs-system: No such file or directory
2015-11-24T13:14:12Z|00014|ofproto_dpif|ERR|failed to open datapath of type system: No such file or directory
2015-11-24T13:14:12Z|00015|ofproto|ERR|failed to open datapath br-eth1: No such file or directory
2015-11-24T13:14:12Z|00016|bridge|ERR|failed to create bridge br-eth1: No such file or directory
2015-11-24T13:14:12Z|00017|bridge|INFO|bridge br-ex: added interface br-ex on port 65534
2015-11-24T13:14:12Z|00018|bridge|INFO|bridge br-int: added interface tapb6327a6f-e3 on port 1
2015-11-24T13:14:12Z|00019|bridge|INFO|bridge br-int: added interface br-int on port 65534
2015-11-24T13:14:12Z|00020|bridge|INFO|bridge br-ex: using datapath ID 0000be3137b62448
2015-11-24T13:14:12Z|00021|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt"
2015-11-24T13:14:12Z|00022|bridge|INFO|bridge br-int: using datapath ID 000016d2ab534949
2015-11-24T13:14:12Z|00023|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt"
2015-11-24T13:14:12Z|00024|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.90
2015-11-24T13:14:19Z|00025|memory|INFO|25200 kB peak resident set size after 10.3 seconds
2015-11-24T13:14:19Z|00026|memory|INFO|handlers:17 ports:3 revalidators:7 rules:9


* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04
  2015-12-01  6:13 [dpdk-dev] DPDK OVS on Ubuntu 14.04 Abhijeet Karve
@ 2015-12-01 14:46 ` Polehn, Mike A
  2015-12-02 14:52   ` Gray, Mark D
  0 siblings, 1 reply; 14+ messages in thread
From: Polehn, Mike A @ 2015-12-01 14:46 UTC (permalink / raw)
  To: Abhijeet Karve, dev; +Cc: bhavya.addep

May need to set up huge pages on the kernel boot line (this is an example; you may need to adjust): 

The huge page configuration can be added to the default configuration file /etc/default/grub by extending the GRUB_CMDLINE_LINUX (or GRUB_CMDLINE_LINUX_DEFAULT) line, after which the grub configuration is regenerated to get an updated configuration file for Linux boot. 
# vim /etc/default/grub            // edit file

. . .
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1GB hugepagesz=1GB hugepages=4 hugepagesz=2M hugepages=2048 ..."
. . .


This example sets up both 1 GB pages (4 pages, i.e. 4 GB of 1 GB hugepage memory) and 2 MB pages (2048 pages, i.e. 4 GB of 2 MB hugepage memory). After boot, the number of 1 GB pages cannot be changed, but the number of 2 MB pages can be changed.
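Because the 2 MB pool is runtime-adjustable, its size can also be changed after boot without touching grub (a sketch; needs root, and the kernel may allocate fewer pages than requested if memory is fragmented):

```shell
# Request 2048 x 2 MB pages (4 GB); writing a smaller value
# releases unused pages back to the kernel.
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Read back how many pages the kernel actually allocated.
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```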

After editing /etc/default/grub, the new grub.cfg boot file needs to be regenerated: 
# update-grub

Then reboot. After the reboot, the hugetlbfs mounts need to be set up:

If /dev/hugepages does not exist:    # mkdir /dev/hugepages

# mount -t hugetlbfs nodev   /dev/hugepages

# mkdir /dev/hugepages_2mb
# mount -t hugetlbfs nodev /dev/hugepages_2mb -o pagesize=2MB

Mike

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
Sent: Monday, November 30, 2015 10:14 PM
To: dev@dpdk.org
Cc: bhavya.addep@gmail.com
Subject: [dpdk-dev] DPDK OVS on Ubuntu 14.04

Dear All,


We are trying to install DPDK-enabled OVS on top of OpenStack Juno on a single
Ubuntu 14.04 server, following the steps described here:
 
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
 
During execution we are running into an issue with the ovs-vswitchd service: it hangs during startup.
_________________________________________________________________________

nfv-dpdk@nfv-dpdk:~$ tail -f /var/log/openvswitch/ovs-vswitchd.log
2015-11-24T10:54:34.036Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2015-11-24T10:54:34.036Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2015-11-24T10:54:34.064Z|00008|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.90
2015-11-24T11:03:42.957Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2015-11-24T11:03:42.958Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 0
2015-11-24T11:03:42.958Z|00004|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 1
2015-11-24T11:03:42.958Z|00005|ovs_numa|INFO|Discovered 2 NUMA nodes and 48 CPU cores
2015-11-24T11:03:42.958Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2015-11-24T11:03:42.958Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2015-11-24T11:03:42.961Z|00008|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.90
_________________________________________________________________________

Also attaching the output (Hugepage.txt) of:

./ovs-vswitchd --dpdk -c 0x0FF8 -n 4 --socket-mem 1024,0 -- --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/oppenvswitch/ovs-vswitchd.pid
 
- We tried setting up "echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", but it did not succeed.
Can anyone please point out anything we might be missing that could cause ovs-vswitchd to get stuck while starting?
 
Also, when we create a VM in OpenStack with DPDK OVS, dpdkvhost-user type interfaces are created automatically. If these interfaces get mapped to the regular br-int bridge rather than the DPDK bridge br0, does this mean that we have successfully enabled DPDK with the netdev datapath?

 

We would really appreciate any advice you can offer.

Thanks & Regards,
Abhijeet Karve





* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04
  2015-12-01 14:46 ` Polehn, Mike A
@ 2015-12-02 14:52   ` Gray, Mark D
  2015-12-15  5:55     ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser Abhijeet Karve
  0 siblings, 1 reply; 14+ messages in thread
From: Gray, Mark D @ 2015-12-02 14:52 UTC (permalink / raw)
  To: Polehn, Mike A, Abhijeet Karve, dev; +Cc: bhavya.addep, discuss

+ discuss@openvswitch.org

one comment below: 

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Polehn, Mike A
> Sent: Tuesday, December 1, 2015 2:46 PM
> To: Abhijeet Karve; dev@dpdk.org
> Cc: bhavya.addep@gmail.com
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04
> 
> May need to setup huge pages on kernel boot line (this is example, you may
> need to adjust):
> 
> The huge page configuration can be added to the default configuration file
> /etc/default/grub by adding to the GRUB_CMDLINE_LINUX and the grub
> configuration file regenerated to get an updated configuration file for Linux
> boot.
> # vim /etc/default/grub            // edit file
> 
> . . .
> GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1GB
> hugepagesz=1GB hugepages=4 hugepagesz=2M hugepages=2048 ..."
> . . .
> 
> 
> This example sets up huge pages for both 1 GB pages for 4 GB of 1 GB
> hugepage memory and 2 MB pages for 4 GB of 2 MB hugepage memory.
> After boot the number of 1 GB pages cannot be changed, but the number of
> 2 MB pages can be changed.
> 
> After editing configuration file /etc/default/grub , the new grub.cfg boot file
> needs to be regenerated:
> # update-grub
> 
> And reboot. After reboot memory managers need to be setup:
> 
> If /dev/hugepages does not exist:    # mkdir /dev/hugepages
> 
> # mount -t hugetlbfs nodev   /dev/hugepages
> 
> # mkdir /dev/hugepages_2mb
> # mount -t hugetlbfs nodev /dev/hugepages_2mb -o pagesize=2MB
> 
> Mike
> 
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Monday, November 30, 2015 10:14 PM
> To: dev@dpdk.org
> Cc: bhavya.addep@gmail.com
> Subject: [dpdk-dev] DPDK OVS on Ubuntu 14.04
> 
> Dear All,
> 
> 
> We are trying to install DPDK-enabled OVS on top of OpenStack Juno on a single
> Ubuntu 14.04 server, following the steps described here:
> 
> https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
> 
> During execution we are running into an issue with the ovs-vswitchd service:
> it hangs during startup.
> _________________________________________________________________________
> 
> nfv-dpdk@nfv-dpdk:~$ tail -f /var/log/openvswitch/ovs-vswitchd.log
> 2015-11-24T10:54:34.036Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
> 2015-11-24T10:54:34.036Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
> 2015-11-24T10:54:34.064Z|00008|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.90
> 2015-11-24T11:03:42.957Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
> 2015-11-24T11:03:42.958Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 0
> 2015-11-24T11:03:42.958Z|00004|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 1
> 2015-11-24T11:03:42.958Z|00005|ovs_numa|INFO|Discovered 2 NUMA nodes and 48 CPU cores
> 2015-11-24T11:03:42.958Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
> 2015-11-24T11:03:42.958Z|00007|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
> 2015-11-24T11:03:42.961Z|00008|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.4.90
> _________________________________________________________________________
> 
> Also attaching the output (Hugepage.txt) of:
> 
> ./ovs-vswitchd --dpdk -c 0x0FF8 -n 4 --socket-mem 1024,0 -- --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/oppenvswitch/ovs-vswitchd.pid
> 
> - We tried setting up "echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages",
> but it did not succeed.
> Can anyone please point out anything we might be missing that could cause
> ovs-vswitchd to get stuck while starting?
> 
> Also, when we create a VM in OpenStack with DPDK OVS, dpdkvhost-user type
> interfaces are created automatically. If these interfaces get mapped to the
> regular br-int bridge rather than the DPDK bridge br0, then does this

You can still have a bridge named br-int that is backed by a userspace datapath. You can't add
a dpdkvhostuser port to a kernel-space datapath. So in this case, I think you are OK and are
using DPDK.
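One way to confirm this on a live system is to query the bridge and port records directly (a sketch; assumes ovs-vsctl is on PATH and the bridge name matches your deployment):

```shell
# "netdev" here means the userspace (DPDK-capable) datapath;
# "system" would mean the kernel datapath.
ovs-vsctl get bridge br-int datapath_type

# List each interface's type; vhost-user ports show as dpdkvhostuser.
ovs-vsctl --columns=name,type list Interface
```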

> mean that we have successfully enabled DPDK with netdev datapath?
> 
> 
> 
> We would really appreciate any advice you can offer.
> 
> Thanks,
> Abhijeet
> Thanks & Regards
> Abhijeet Karve
> 
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If you
> are not the intended recipient, any dissemination, use, review, distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received this
> communication in error, please notify us by reply e-mail or telephone and
> immediately and permanently delete the message and any attachments.
> Thank you
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
  2015-12-02 14:52   ` Gray, Mark D
@ 2015-12-15  5:55     ` Abhijeet Karve
  2015-12-15 15:42       ` Czesnowicz, Przemyslaw
  0 siblings, 1 reply; 14+ messages in thread
From: Abhijeet Karve @ 2015-12-15  5:55 UTC (permalink / raw)
  To: Gray, Mark D; +Cc: dev, discuss

Dear All,

After setting up the system boot parameters as shown below, the issue is 
resolved now & we are able to successfully set up openvswitch netdev-dpdk 
with vhost-user support.

_________________________________________________________________________________________________________________
Set up 2 sets of huge pages with different sizes: one for vhost and another 
for the guest VM.
       - Edit /etc/default/grub:
            GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G 
hugepages=10 hugepagesz=2M hugepages=4096"
         # update-grub
       - Mount the huge pages into different directories:
          # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
          # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
_________________________________________________________________________________________________________________
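After rebooting with those kernel parameters, the reserved pools can be verified from /proc and sysfs. A short sketch (the 2M/1G pool sizes are the ones requested in the GRUB line above):

```shell
#!/bin/sh
# Verify that the kernel reserved the requested hugepage pools.
# Summary counters from /proc/meminfo:
grep -i '^huge' /proc/meminfo

# Per-size pools; a directory exists for each supported hugepage size.
for d in /sys/kernel/mm/hugepages/hugepages-*; do
    [ -d "$d" ] || continue
    echo "$d: $(cat "$d/nr_hugepages") pages reserved"
done

# Confirm the hugetlbfs mounts used by OVS and QEMU are in place.
mount | grep hugetlbfs || true
```

If nr_hugepages stays at 0 for the 1G pool, the CPU or kernel may not support 1G pages, or the pages could not be reserved at boot.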

At present we are facing an issue in testing a DPDK application on this 
setup. In our scenario, we have a DPDK instance launched on top of the 
OpenStack Kilo compute node, but it is not able to get a DHCP IP from the 
controller. 


Thanks & Regards
Abhijeet Karve


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
  2015-12-15  5:55     ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser Abhijeet Karve
@ 2015-12-15 15:42       ` Czesnowicz, Przemyslaw
  2015-12-16  9:36         ` Abhijeet Karve
  0 siblings, 1 reply; 14+ messages in thread
From: Czesnowicz, Przemyslaw @ 2015-12-15 15:42 UTC (permalink / raw)
  To: Abhijeet Karve, Gray, Mark D; +Cc: dev, discuss

Hi Abhijeet,

If you answer the questions below, it will help me understand your problem.

What do you mean by a DPDK instance?
Are you able to communicate with other VMs on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least a compute + controller setup.)
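That last check can be done by capturing DHCP traffic on the node. A minimal sketch, assuming tcpdump is available and that eth0 is the interface carrying tenant traffic (both assumptions; adjust to the actual topology):

```shell
#!/bin/sh
# Watch for DHCP traffic (BOOTP server/client ports 67/68).
# Guarded and time-limited so the sketch terminates even with no traffic.
IFACE=${IFACE:-eth0}   # illustrative default; set to your tenant-facing interface
if command -v tcpdump >/dev/null 2>&1; then
    timeout 5 tcpdump -ni "$IFACE" -c 5 'udp and (port 67 or port 68)' || true
else
    echo "tcpdump not installed"
fi
```

Seeing DHCPDISCOVER packets here but no replies points at the DHCP server side; seeing nothing points at the vswitch path from the VM.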

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Dear All,
> 
> After seting up system boot parameters as shown below, the issue is
> resolved now & we are able to successfully setup openvswitch netdev-dpdk
> with vhostuser support.
> 
> __________________________________________________________
> _______________________________________________________
> Setup 2 sets of huge pages with different sizes. One for Vhost and another
> for Guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
> 
> At present we are facing an issue in Testing DPDK application on setup. In our
> scenario, We have DPDK instance launched on top of the Openstack Kilo
> compute node. Not able to assign DHCP IP from controller.
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
  2015-12-15 15:42       ` Czesnowicz, Przemyslaw
@ 2015-12-16  9:36         ` Abhijeet Karve
  2015-12-17 11:57           ` Czesnowicz, Przemyslaw
  0 siblings, 1 reply; 14+ messages in thread
From: Abhijeet Karve @ 2015-12-16  9:36 UTC (permalink / raw)
  To: Czesnowicz, Przemyslaw; +Cc: dev, discuss

Hi Przemek,


We have configured the accelerated data path between a physical interface 
and the VM using openvswitch netdev-dpdk with vhost-user support. A VM 
created with this special data path and the vhost library is what I am 
calling a DPDK instance. 

If we assign an IP manually to the newly created Cirros VM instance, we 
are able to make 2 VMs on the same compute node communicate. Otherwise no 
IP is associated through DHCP, even though the DHCP server is on the same 
compute node.

Yes, it's a compute + controller node setup, and we are using the following 
software platform on the compute node:
_____________
OpenStack: Kilo
Distribution: Ubuntu 14.04
OVS version: 2.4.0
DPDK: 2.0.0
_____________

We are following the Intel guide: 
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200

Running "ovs-vsctl show" on the compute node gives the output below:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "qvo0ae19a43-b6"
            tag: 2
            Interface "qvo0ae19a43-b6"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo31c89856-a2"
            tag: 1
            Interface "qvo31c89856-a2"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo97fef28a-ec"
            tag: 2
            Interface "qvo97fef28a-ec"
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "vhost-user-2"
            Interface "vhost-user-2"
                type: dpdkvhostuser
        Port "vhost-user-0"
            Interface "vhost-user-0"
                type: dpdkvhostuser
        Port "vhost-user-1"
            Interface "vhost-user-1"
                type: dpdkvhostuser
    ovs_version: "2.4.0"
root@dpdk:~# 
_____________________________________________

The OpenFlow flow dumps for the bridges on the compute node are as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, 
priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,20)
 cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, 
priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,22)
 cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c 
actions=mod_vlan_vid:2,resubmit(,10)
 cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 
actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1 
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
 cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
 cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, 
idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~# 
_____________________________________________


Further, we don't know what network changes (packet flow additions) are 
required for associating an IP address through DHCP.

We would really appreciate some clarity on how the DHCP flows are 
established. 
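One way to narrow this down is to confirm the DHCP server side first. In a stock Neutron deployment, the dnsmasq instance serving a network lives in a qdhcp network namespace; a sketch for listing them (the qdhcp-&lt;network-id&gt; naming is Neutron's convention, and the command is guarded since it needs iproute2):

```shell
#!/bin/sh
# List Neutron DHCP namespaces on this node.
# An empty list means the DHCP agent is not hosting this network here.
if command -v ip >/dev/null 2>&1; then
    ip netns list | grep '^qdhcp-' || echo "no qdhcp namespaces found"
else
    echo "iproute2 not available"
fi
```

From inside the matching namespace (ip netns exec), one can then check that dnsmasq is listening and capture the VM's DHCP requests.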



Thanks & Regards
Abhijeet Karve





From:   "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To:     Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" 
<mark.d.gray@intel.com>
Cc:     "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" 
<discuss@openvswitch.org>
Date:   12/15/2015 09:13 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser



Hi Abhijeet,

If you answer below questions it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm 
assuming this is at least compute+ controller setup)

Best regards
Przemek


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
  2015-12-16  9:36         ` Abhijeet Karve
@ 2015-12-17 11:57           ` Czesnowicz, Przemyslaw
  2015-12-17 12:40             ` Abhijeet Karve
  0 siblings, 1 reply; 14+ messages in thread
From: Czesnowicz, Przemyslaw @ 2015-12-17 11:57 UTC (permalink / raw)
  To: Abhijeet Karve; +Cc: dev, discuss

Hi Abhijeet,

For Kilo you need to use the ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack.

The guide you are following only talks about running ovs-dpdk, not how it should be integrated with OpenStack.

Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
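For reference, with the networking-ovs-dpdk project the switch to that mechanism driver is an ML2 configuration change along these lines (a sketch based on the linked guide's approach; the file path and surrounding option values are illustrative and differ per deployment):

```ini
; /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative)
[ml2]
mechanism_drivers = ovsdpdk
```

The matching agent shipped with networking-ovs-dpdk must also run on the compute node in place of the standard OVS agent.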

Best regards
Przemek


From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,


We have configured the accelerated data path between a physical interface to the VM using openvswitch netdev-dpdk with vhost-user support. The VM created with this special data path and vhost library, I am calling as DPDK instance.

If assigning ip manually to the newly created Cirros VM instance, We are able to make 2 VM's to communicate on the same compute node. Else it's not associating any ip through DHCP though DHCP is in compute node only.

Yes it's a compute + controller node setup and we are using following software platform on compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________

We are following the intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200

When doing "ovs-vsctl show" in compute node, it shows below output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "qvo0ae19a43-b6"
            tag: 2
            Interface "qvo0ae19a43-b6"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo31c89856-a2"
            tag: 1
            Interface "qvo31c89856-a2"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo97fef28a-ec"
            tag: 2
            Interface "qvo97fef28a-ec"
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "vhost-user-2"
            Interface "vhost-user-2"
                type: dpdkvhostuser
        Port "vhost-user-0"
            Interface "vhost-user-0"
                type: dpdkvhostuser
        Port "vhost-user-1"
            Interface "vhost-user-1"
                type: dpdkvhostuser
    ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________

Open flows output in bridge in compute node are as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
 cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
 cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~#
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
 cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________


Further we don't know what all the network changes(Packet Flow addition) if required for associating IP address through the DHCP.

Would be really appreciate if have clarity on DHCP flow establishment.



Thanks & Regards
Abhijeet Karve






^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
  2015-12-17 11:57           ` Czesnowicz, Przemyslaw
@ 2015-12-17 12:40             ` Abhijeet Karve
  2015-12-17 13:01               ` Czesnowicz, Przemyslaw
  0 siblings, 1 reply; 14+ messages in thread
From: Abhijeet Karve @ 2015-12-17 12:40 UTC (permalink / raw)
  To: Czesnowicz, Przemyslaw; +Cc: dev, discuss

Hi Przemek,

Thank you so much for sharing the reference guide.

We would appreciate it if you could clear up one doubt. 

At present we are setting up OpenStack Kilo interactively and then 
replacing OVS with DPDK-enabled OVS. 
Once the above setup is done, we create an instance in OpenStack and pass 
that instance ID to the QEMU command line, which passes the vhost-user 
sockets to the instance, enabling the DPDK libraries in it.

Isn't this the correct way of integrating ovs-dpdk with OpenStack?
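For context, wiring a guest to an OVS vhost-user socket by hand typically uses QEMU arguments like the following sketch. The socket path matches the vhost-user-0 port from this thread and the hugepage path matches the 1G mount from earlier, but both are assumptions about the local setup; shared hugepage-backed guest memory (share=on) is required so the vhost-user backend can map the guest's RAM:

```shell
#!/bin/sh
# Build the vhost-user portion of a QEMU command line (illustrative paths).
VHOST_SOCK=/var/run/openvswitch/vhost-user-0
HUGE_PATH=/mnt/huge_1G

# virtio-net device backed by the vhost-user socket that OVS created.
QEMU_NET_ARGS="-chardev socket,id=char0,path=${VHOST_SOCK} \
-netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=net0"

# Guest RAM backed by shared hugepages so the vhost backend can map it.
QEMU_MEM_ARGS="-object memory-backend-file,id=mem,size=1G,mem-path=${HUGE_PATH},share=on \
-numa node,memdev=mem -mem-prealloc"

echo "qemu-system-x86_64 ... $QEMU_MEM_ARGS $QEMU_NET_ARGS"
```

When the ovsdpdk mechanism driver is used, Nova generates the equivalent libvirt configuration itself, so this manual step should not be needed.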


Thanks & Regards
Abhijeet Karve




From:   "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To:     Abhijeet Karve <abhijeet.karve@tcs.com>
Cc:     "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" 
<discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date:   12/17/2015 05:27 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser



Hi Abhijeet,
 
For Kilo you need to use ovsdpdk mechanism driver and a matching agent to 
integrate ovs-dpdk with OpenStack.
 
The guide you are following only talks about running ovs-dpdk not how it 
should be integrated with OpenStack.
 
Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
 
Best regards
Przemek
 
 
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser
 
Hi Przemek, 


We have configured the accelerated data path between a physical interface 
to the VM using openvswitch netdev-dpdk with vhost-user support. The VM 
created with this special data path and vhost library, I am calling as 
DPDK instance. 

If assigning ip manually to the newly created Cirros VM instance, We are 
able to make 2 VM's to communicate on the same compute node. Else it's not 
associating any ip through DHCP though DHCP is in compute node only. 

Yes it's a compute + controller node setup and we are using following 
software platform on compute node: 
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK 2.0.0 
_____________ 

We are following the intel guide 
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200 


When doing "ovs-vsctl show" in compute node, it shows below output: 
_____________________________________________ 
ovs-vsctl show 
c2ec29a5-992d-4875-8adc-1265c23e0304 
    Bridge br-ex 
        Port phy-br-ex 
            Interface phy-br-ex 
                type: patch 
                options: {peer=int-br-ex} 
        Port br-ex 
            Interface br-ex 
                type: internal 
    Bridge br-tun 
        fail_mode: secure 
        Port br-tun 
            Interface br-tun 
                type: internal 
        Port patch-int 
            Interface patch-int 
                type: patch 
                options: {peer=patch-tun} 
    Bridge br-int 
        fail_mode: secure 
        Port "qvo0ae19a43-b6" 
            tag: 2 
            Interface "qvo0ae19a43-b6" 
        Port br-int 
            Interface br-int 
                type: internal 
        Port "qvo31c89856-a2" 
            tag: 1 
            Interface "qvo31c89856-a2" 
        Port patch-tun 
            Interface patch-tun 
                type: patch 
                options: {peer=patch-int} 
        Port int-br-ex 
            Interface int-br-ex 
                type: patch 
                options: {peer=phy-br-ex} 
        Port "qvo97fef28a-ec" 
            tag: 2 
            Interface "qvo97fef28a-ec" 
    Bridge br-dpdk 
        Port br-dpdk 
            Interface br-dpdk 
                type: internal 
    Bridge "br0" 
        Port "br0" 
            Interface "br0" 
                type: internal 
        Port "dpdk0" 
            Interface "dpdk0" 
                type: dpdk 
        Port "vhost-user-2" 
            Interface "vhost-user-2" 
                type: dpdkvhostuser 
        Port "vhost-user-0" 
            Interface "vhost-user-0" 
                type: dpdkvhostuser 
        Port "vhost-user-1" 
            Interface "vhost-user-1" 
                type: dpdkvhostuser 
    ovs_version: "2.4.0" 
root@dpdk:~# 
_____________________________________________ 

Open flows output in bridge in compute node are as below: 
_____________________________________________ 
root@dpdk:~# ovs-ofctl dump-flows br-tun 
NXST_FLOW reply (xid=0x4): 
 cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2) 
 cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
 cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, 
priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,20) 
 cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, 
priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,22) 
 cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c 
actions=mod_vlan_vid:2,resubmit(,10) 
 cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 
actions=mod_vlan_vid:1,resubmit(,10) 
 cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
 cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
 cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1 
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1 

 cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22) 
 cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# ovs-ofctl dump-flows br-tun 
int NXST_FLOW reply (xid=0x4): 
 cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop 
 cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, 
idle_age=19981, hard_age=65534, priority=1 actions=NORMAL 
 cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
_____________________________________________ 


Further we don't know what all the network changes(Packet Flow addition) 
if required for associating IP address through the DHCP. 

Would be really appreciate if have clarity on DHCP flow establishment. 



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <
mark.d.gray@intel.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org> 
Date:        12/15/2015 09:13 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 




Hi Abhijeet,

If you answer the questions below, it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm 
assuming this is at least a compute + controller setup)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Dear All,
> 
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
> 
> __________________________________________________________
> _______________________________________________________
> Set up 2 sets of huge pages with different sizes: one for vhost and another
> for the guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
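After rebooting with those parameters, the reservations and mounts can be sanity-checked as follows (a hedged sketch, not from the original mail; the sysfs paths assume an x86_64 kernel with 2 MB and 1 GB page sizes):

```shell
# Both hugepage pools should show the counts requested on the kernel cmdline
grep -i huge /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages   # expected: 10
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages      # expected: 4096

# Both hugetlbfs mounts should be listed with the intended page sizes
mount -t hugetlbfs
```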
> 
> At present we are facing an issue in testing a DPDK application on this
> setup. In our scenario, we have a DPDK instance launched on top of the
> OpenStack Kilo compute node. It is not able to get a DHCP IP from the
> controller.
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If 
you
> are not the intended recipient, any dissemination, use, review, 
distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received 
this
> communication in error, please notify us by reply e-mail or telephone 
and
> immediately and permanently delete the message and any attachments.
> Thank you
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
  2015-12-17 12:40             ` Abhijeet Karve
@ 2015-12-17 13:01               ` Czesnowicz, Przemyslaw
  2015-12-24 17:41                 ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing Abhijeet Karve
  0 siblings, 1 reply; 14+ messages in thread
From: Czesnowicz, Przemyslaw @ 2015-12-17 13:01 UTC (permalink / raw)
  To: Abhijeet Karve; +Cc: dev, discuss

I haven't tried that approach and I'm not sure it would work; it seems clunky.

If you enable the ovsdpdk ml2 mechanism driver and agent, all of that (adding ports to OVS with the right type, passing the sockets to qemu) would be done by OpenStack.

Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,

Thank you so much for sharing the ref guide.

We would appreciate it if you could clear up one doubt.

At present we are setting up OpenStack Kilo interactively and then replacing OVS with an OVS-DPDK-enabled build.
Once that setup is done, we create an instance in OpenStack and pass its instance ID to the QEMU command line, which in turn passes the vhost-user sockets to the instance, enabling the DPDK libraries in it.

Isn't this the correct way of integrating OVS-DPDK with OpenStack?


Thanks & Regards
Abhijeet Karve




From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Date:        12/17/2015 05:27 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________



Hi Abhijeet,

For Kilo you need to use ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack.

The guide you are following only talks about running ovs-dpdk, not how it should be integrated with OpenStack.

Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst

Best regards
Przemek


From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,


We have configured the accelerated data path between a physical interface and the VM using Open vSwitch netdev-dpdk with vhost-user support. A VM created with this special data path and the vhost library is what I am calling a DPDK instance.

If we assign an IP manually to the newly created CirrOS VM instance, we are able to make two VMs communicate on the same compute node. Otherwise, no IP is associated through DHCP, even though DHCP runs on the compute node itself.

Yes, it's a compute + controller node setup, and we are using the following software platform on the compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________

We are following the intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200

Running "ovs-vsctl show" on the compute node gives the following output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
   Bridge br-ex
       Port phy-br-ex
           Interface phy-br-ex
               type: patch
               options: {peer=int-br-ex}
       Port br-ex
           Interface br-ex
               type: internal
   Bridge br-tun
       fail_mode: secure
       Port br-tun
           Interface br-tun
               type: internal
       Port patch-int
           Interface patch-int
               type: patch
               options: {peer=patch-tun}
   Bridge br-int
       fail_mode: secure
       Port "qvo0ae19a43-b6"
           tag: 2
           Interface "qvo0ae19a43-b6"
       Port br-int
           Interface br-int
               type: internal
       Port "qvo31c89856-a2"
           tag: 1
           Interface "qvo31c89856-a2"
       Port patch-tun
           Interface patch-tun
               type: patch
               options: {peer=patch-int}
       Port int-br-ex
           Interface int-br-ex
               type: patch
               options: {peer=phy-br-ex}
       Port "qvo97fef28a-ec"
           tag: 2
           Interface "qvo97fef28a-ec"
   Bridge br-dpdk
       Port br-dpdk
           Interface br-dpdk
               type: internal
   Bridge "br0"
       Port "br0"
           Interface "br0"
               type: internal
       Port "dpdk0"
           Interface "dpdk0"
               type: dpdk
       Port "vhost-user-2"
           Interface "vhost-user-2"
               type: dpdkvhostuser
       Port "vhost-user-0"
           Interface "vhost-user-0"
               type: dpdkvhostuser
       Port "vhost-user-1"
           Interface "vhost-user-1"
               type: dpdkvhostuser
   ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________
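For reference, the DPDK-related ports on "br0" above would typically have been created with commands along these lines (a reconstruction for OVS 2.4 built with DPDK support, not taken from the thread; in this release physical DPDK ports must be named dpdkN):

```shell
# Userspace (netdev) bridge backed by DPDK
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Physical DPDK port (must be named dpdk<N> in OVS 2.4)
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk

# vhost-user ports; OVS creates the sockets under /var/run/openvswitch/
ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
ovs-vsctl add-port br0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser
```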

The OpenFlow flows on the compute node bridges are shown below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~#
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________


Further, we do not know what network changes (packet-flow additions), if any, are required for associating an IP address through DHCP.

We would really appreciate any clarity on how the DHCP flow is established.



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>
Date:        12/15/2015 09:13 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________




Hi Abhijeet,

If you answer the questions below, it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least a compute + controller setup)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
>
> __________________________________________________________
> _______________________________________________________
> Set up 2 sets of huge pages with different sizes: one for vhost and another
> for the guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
>
> At present we are facing an issue in testing a DPDK application on this setup. In our
> scenario, we have a DPDK instance launched on top of the OpenStack Kilo
> compute node. It is not able to get a DHCP IP from the controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing
  2015-12-17 13:01               ` Czesnowicz, Przemyslaw
@ 2015-12-24 17:41                 ` Abhijeet Karve
  2016-01-04 14:24                   ` Czesnowicz, Przemyslaw
  0 siblings, 1 reply; 14+ messages in thread
From: Abhijeet Karve @ 2015-12-24 17:41 UTC (permalink / raw)
  To: Czesnowicz, Przemyslaw; +Cc: dev, discuss

Hi Przemek,

Thank you so much for your quick response. 

The guide you suggested (
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
) covers OpenStack vhost-user installation with DevStack. 
Is there a reference for enabling the ovs-dpdk mechanism driver on the 
OpenStack Ubuntu distribution we are following for a compute+controller 
node setup? 

We are facing the issues listed below with our current approach: setting 
up OpenStack Kilo interactively, replacing OVS with an OVS-DPDK-enabled 
build, and creating instances in OpenStack by passing the instance ID to 
the QEMU command line, which in turn passes the vhost-user sockets to the 
instance to enable the DPDK libraries in it.


1. We created a flavor m1.hugepages backed by hugepage memory, but we are 
unable to spawn an instance with this flavor. We get an error like: No 
matching hugetlbfs for the number of hugepages assigned to the flavor.
2. When we pass the socket info to instances via QEMU manually, the 
instances created are not persistent.
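On the second point: vhost-user needs the guest's memory to be shared with the switch, which in practice means hugepage-backed memory passed to QEMU. A manual launch needs roughly the following shape (a hedged sketch; the image path, sizes, and socket path are placeholders, and the socket name assumes the vhost-user-0 port created by OVS):

```shell
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
  -drive file=/path/to/instance.img \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user-0 \
  -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=net0 \
  -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge_1G,share=on \
  -numa node,memdev=mem -mem-prealloc
```

Without the `memory-backend-file ... share=on` backing on a mounted hugetlbfs, vhost-user networking cannot map the guest memory, which is one common source of the memory-backing errors mentioned in the subject.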

Now, as you suggested, we are looking into enabling the ovsdpdk ml2 
mechanism driver and agent in our OpenStack Ubuntu distribution.

We would really appreciate any help or a reference with an explanation.

We are using a compute + controller node setup with the following 
software platform on the compute node: 
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK 2.0.0 
_____________ 

Thanks,
Abhijeet Karve





From:   "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To:     Abhijeet Karve <abhijeet.karve@tcs.com>
Cc:     "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" 
<discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date:   12/17/2015 06:32 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser



I haven't tried that approach and I'm not sure it would work; it seems 
clunky.
 
If you enable the ovsdpdk ml2 mechanism driver and agent, all of that 
(adding ports to OVS with the right type, passing the sockets to qemu) 
would be done by OpenStack.
 
Przemek
 
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser
 
Hi Przemek, 

Thank you so much for sharing the ref guide. 

We would appreciate it if you could clear up one doubt. 

At present we are setting up OpenStack Kilo interactively and then 
replacing OVS with an OVS-DPDK-enabled build. 
Once that setup is done, we create an instance in OpenStack and pass its 
instance ID to the QEMU command line, which in turn passes the vhost-user 
sockets to the instance, enabling the DPDK libraries in it. 

Isn't this the correct way of integrating OVS-DPDK with OpenStack? 


Thanks & Regards
Abhijeet Karve




From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com> 
Date:        12/17/2015 05:27 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 




Hi Abhijeet, 
  
For Kilo you need to use ovsdpdk mechanism driver and a matching agent to 
integrate ovs-dpdk with OpenStack. 
  
The guide you are following only talks about running ovs-dpdk, not how it 
should be integrated with OpenStack. 
  
Please follow this guide: 
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst 

  
Best regards 
Przemek 
  
  
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 
  
Hi Przemek, 


We have configured the accelerated data path between a physical interface 
and the VM using Open vSwitch netdev-dpdk with vhost-user support. A VM 
created with this special data path and the vhost library is what I am 
calling a DPDK instance. 

If we assign an IP manually to the newly created CirrOS VM instance, we 
are able to make two VMs communicate on the same compute node. Otherwise, 
no IP is associated through DHCP, even though DHCP runs on the compute 
node itself. 

Yes, it's a compute + controller node setup, and we are using the 
following software platform on the compute node: 
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK 2.0.0 
_____________ 

We are following the intel guide 
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200 


Running "ovs-vsctl show" on the compute node gives the following output: 
_____________________________________________ 
ovs-vsctl show 
c2ec29a5-992d-4875-8adc-1265c23e0304 
   Bridge br-ex 
       Port phy-br-ex 
           Interface phy-br-ex 
               type: patch 
               options: {peer=int-br-ex} 
       Port br-ex 
           Interface br-ex 
               type: internal 
   Bridge br-tun 
       fail_mode: secure 
       Port br-tun 
           Interface br-tun 
               type: internal 
       Port patch-int 
           Interface patch-int 
               type: patch 
               options: {peer=patch-tun} 
   Bridge br-int 
       fail_mode: secure 
       Port "qvo0ae19a43-b6" 
           tag: 2 
           Interface "qvo0ae19a43-b6" 
       Port br-int 
           Interface br-int 
               type: internal 
       Port "qvo31c89856-a2" 
           tag: 1 
           Interface "qvo31c89856-a2" 
       Port patch-tun 
           Interface patch-tun 
               type: patch 
               options: {peer=patch-int} 
       Port int-br-ex 
           Interface int-br-ex 
               type: patch 
               options: {peer=phy-br-ex} 
       Port "qvo97fef28a-ec" 
           tag: 2 
           Interface "qvo97fef28a-ec" 
   Bridge br-dpdk 
       Port br-dpdk 
           Interface br-dpdk 
               type: internal 
   Bridge "br0" 
       Port "br0" 
           Interface "br0" 
               type: internal 
       Port "dpdk0" 
           Interface "dpdk0" 
               type: dpdk 
       Port "vhost-user-2" 
           Interface "vhost-user-2" 
               type: dpdkvhostuser 
       Port "vhost-user-0" 
           Interface "vhost-user-0" 
               type: dpdkvhostuser 
       Port "vhost-user-1" 
           Interface "vhost-user-1" 
               type: dpdkvhostuser 
   ovs_version: "2.4.0" 
root@dpdk:~# 
_____________________________________________ 

The OpenFlow flows on the compute node bridges are shown below: 
_____________________________________________ 
root@dpdk:~# ovs-ofctl dump-flows br-tun 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2) 
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20) 
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22) 
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10) 
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10) 
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1 
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22) 
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# ovs-ofctl dump-flows br-int 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop 
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL 
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
_____________________________________________ 


Further, we do not know what network changes (packet-flow additions), if 
any, are required for associating an IP address through DHCP. 

We would really appreciate any clarity on how the DHCP flow is established. 



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <
mark.d.gray@intel.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org> 
Date:        12/15/2015 09:13 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 





Hi Abhijeet,

If you answer the questions below, it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm 
assuming this is at least a compute + controller setup)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Dear All,
> 
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
> 
> __________________________________________________________
> _______________________________________________________
> Set up 2 sets of huge pages with different sizes: one for vhost and another
> for the guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
> 
> At present we are facing an issue in testing a DPDK application on this
> setup. In our scenario, we have a DPDK instance launched on top of the
> OpenStack Kilo compute node. It is not able to get a DHCP IP from the
> controller.
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing
  2015-12-24 17:41                 ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing Abhijeet Karve
@ 2016-01-04 14:24                   ` Czesnowicz, Przemyslaw
       [not found]                     ` <OF7B9ED0F7.5B3B2C67-ON65257F45.0055D550-65257F45.005E9455@tcs.com>
  0 siblings, 1 reply; 14+ messages in thread
From: Czesnowicz, Przemyslaw @ 2016-01-04 14:24 UTC (permalink / raw)
  To: Abhijeet Karve; +Cc: dev, discuss

You should be able to clone networking-ovs-dpdk, switch to the kilo branch, and run
python setup.py install
in the root of networking-ovs-dpdk; that should install the agent and mechanism driver.
Then you would need to enable the mechanism driver (ovsdpdk) on the controller in /etc/neutron/plugins/ml2/ml2_conf.ini
and run the right agent on the computes (networking-ovs-dpdk-agent).
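The steps above can be sketched as follows (a hedged outline; the file paths and config-file arguments assume the usual Neutron ML2 layout and are not quoted from this thread):

```shell
# On the controller and the computes: install the mechanism driver and agent
git clone -b stable/kilo https://github.com/openstack/networking-ovs-dpdk
cd networking-ovs-dpdk
python setup.py install

# On the controller: enable the driver in /etc/neutron/plugins/ml2/ml2_conf.ini
#   [ml2]
#   mechanism_drivers = ovsdpdk
# then restart neutron-server so the new driver is loaded

# On each compute node: run the matching agent
networking-ovs-dpdk-agent \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
```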


There should be pip packages of networking-ovs-dpdk available shortly; I'll let you know when that happens.

Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing

Hi Przemek,

Thank you so much for your quick response.

The guide you suggested (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) covers OpenStack vhost-user installation with DevStack.
Is there a reference for enabling the ovs-dpdk mechanism driver on the OpenStack Ubuntu distribution we are following for a
compute+controller node setup?

We are facing the issues listed below with our current approach: setting up OpenStack Kilo interactively, replacing OVS with an OVS-DPDK-enabled build, and creating instances in OpenStack by
passing the instance ID to the QEMU command line, which in turn passes the vhost-user sockets to the instance to enable the DPDK libraries in it.


1. We created a flavor m1.hugepages backed by hugepage memory, but we are unable to spawn an instance with this flavor. We get an error like: No matching hugetlbfs for the number of hugepages assigned to the flavor.
2. When we pass the socket info to instances via QEMU manually, the instances created are not persistent.

Now, as you suggested, we are looking into enabling the ovsdpdk ml2 mechanism driver and agent in our OpenStack Ubuntu distribution.

We would really appreciate any help or a reference with an explanation.

We are using a compute + controller node setup with the following software platform on the compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________

Thanks,
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Date:        12/17/2015 06:32 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________



I haven't tried that approach and I'm not sure it would work; it seems clunky.

If you enable the ovsdpdk ml2 mechanism driver and agent, all of that (adding ports to OVS with the right type, passing the sockets to qemu) would be done by OpenStack.

Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,

Thank you so much for sharing the ref guide.

We would appreciate it if you could clear up one doubt.

At present we are setting up OpenStack Kilo interactively and then replacing OVS with an OVS-DPDK-enabled build.
Once that setup is done, we create an instance in OpenStack and pass its instance ID to the QEMU command line, which in turn passes the vhost-user sockets to the instance, enabling the DPDK libraries in it.

Isn't this the correct way of integrating OVS-DPDK with OpenStack?


Thanks & Regards
Abhijeet Karve




From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Date:        12/17/2015 05:27 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________




Hi Abhijeet,

For Kilo you need to use ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack.

The guide you are following only covers running ovs-dpdk, not how it should be integrated with OpenStack.

Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst

Best regards
Przemek


From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,


We have configured an accelerated data path between a physical interface and the VM using openvswitch netdev-dpdk with vhost-user support. A VM created with this special data path and vhost library is what I am calling a DPDK instance.

If we assign an IP manually to the newly created CirrOS VM instance, we are able to make two VMs communicate on the same compute node. Otherwise no IP is associated through DHCP, even though the DHCP agent runs on the same compute node.

Yes, it's a compute + controller node setup, and we are using the following software platform on the compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________

We are following the intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200

Running "ovs-vsctl show" on the compute node gives the following output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
  Bridge br-ex
      Port phy-br-ex
          Interface phy-br-ex
              type: patch
              options: {peer=int-br-ex}
      Port br-ex
          Interface br-ex
              type: internal
  Bridge br-tun
      fail_mode: secure
      Port br-tun
          Interface br-tun
              type: internal
      Port patch-int
          Interface patch-int
              type: patch
              options: {peer=patch-tun}
  Bridge br-int
      fail_mode: secure
      Port "qvo0ae19a43-b6"
          tag: 2
          Interface "qvo0ae19a43-b6"
      Port br-int
          Interface br-int
              type: internal
      Port "qvo31c89856-a2"
          tag: 1
          Interface "qvo31c89856-a2"
      Port patch-tun
          Interface patch-tun
              type: patch
              options: {peer=patch-int}
      Port int-br-ex
          Interface int-br-ex
              type: patch
              options: {peer=phy-br-ex}
      Port "qvo97fef28a-ec"
          tag: 2
          Interface "qvo97fef28a-ec"
  Bridge br-dpdk
      Port br-dpdk
          Interface br-dpdk
              type: internal
  Bridge "br0"
      Port "br0"
          Interface "br0"
              type: internal
      Port "dpdk0"
          Interface "dpdk0"
              type: dpdk
      Port "vhost-user-2"
          Interface "vhost-user-2"
              type: dpdkvhostuser
      Port "vhost-user-0"
          Interface "vhost-user-0"
              type: dpdkvhostuser
      Port "vhost-user-1"
          Interface "vhost-user-1"
              type: dpdkvhostuser
  ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________

The OpenFlow flow dumps for the bridges on the compute node are as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~#
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________


Further, we don't know what network changes (flow additions) are required for associating an IP address through DHCP.

We would really appreciate clarity on how the DHCP flows are established.
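One way to narrow down where the DHCP requests are being lost is to watch for them at each hop. A hypothetical diagnostic sequence is sketched below; the namespace and bridge names are examples, so substitute the ones from your own deployment (these commands assume a live Neutron/OVS host and cannot be verified here):

```shell
# 1. Find the DHCP namespace created by the neutron-dhcp-agent.
ip netns list | grep qdhcp

# 2. Watch for DHCP traffic arriving at dnsmasq inside that namespace
#    (replace qdhcp-<network-id> with the name printed by step 1).
ip netns exec qdhcp-<network-id> tcpdump -n -i any port 67 or port 68

# 3. In parallel, check whether the requests traverse the integration
#    bridge: flows with a non-zero n_packets counter have matched traffic.
ovs-ofctl dump-flows br-int | grep -v n_packets=0
```

If the request shows up in step 3 but never in step 2, the break is between br-int and the DHCP port; if it never shows up in step 3, it is being dropped before reaching the bridge.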



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>
Date:        12/15/2015 09:13 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________





Hi Abhijeet,

If you answer below questions it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least compute+ controller setup)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After setting up the system boot parameters as shown below, the issue is
> resolved and we are able to successfully set up openvswitch netdev-dpdk
> with vhost-user support.
>
> __________________________________________________________
> _______________________________________________________
> Setup 2 sets of huge pages with different sizes. One for Vhost and another
> for Guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
>
> At present we are facing an issue in Testing DPDK application on setup. In our
> scenario, We have DPDK instance launched on top of the Openstack Kilo
> compute node. Not able to assign DHCP IP from controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If you
> are not the intended recipient, any dissemination, use, review, distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received this
> communication in error, please notify us by reply e-mail or telephone and
> immediately and permanently delete the message and any attachments.
> Thank you
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
       [not found]                     ` <OF7B9ED0F7.5B3B2C67-ON65257F45.0055D550-65257F45.005E9455@tcs.com>
@ 2016-01-27 11:41                       ` Czesnowicz, Przemyslaw
  2016-01-27 16:22                         ` Abhijeet Karve
  0 siblings, 1 reply; 14+ messages in thread
From: Czesnowicz, Przemyslaw @ 2016-01-27 11:41 UTC (permalink / raw)
  To: Abhijeet Karve; +Cc: dev, discuss

Hi Abhijeet,


It seems you are almost there!
When booting the VMs, do you request hugepage memory for them (by setting hw:mem_page_size=large in the flavor extra_specs)?
If not, please do; if yes, please look into the libvirt log files for the VMs (in /var/log/libvirt/qemu/instance-xxx). I think there could be a clue.
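The flavor change described above can be applied with the Kilo-era nova CLI; a sketch follows (the flavor name and sizing are examples, and the log path check is an assumption about where the hugepage backing shows up):

```shell
# Create an example flavor (4 GB RAM, 20 GB disk, 2 vCPUs) and request
# large (huge) pages for guests booted from it.
nova flavor-create m1.large.hp auto 4096 20 2
nova flavor-key m1.large.hp set hw:mem_page_size=large

# After booting an instance, the qemu command line recorded in the
# libvirt log should reference a hugetlbfs-backed memory object.
grep -i huge /var/log/libvirt/qemu/instance-*.log
```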


Regards
Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Monday, January 25, 2016 6:13 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue

Hi Przemek,

Thank you for your response; it really gave us a breakthrough.

After setting up DPDK on the compute node for stable/kilo, we are now trying an OpenStack stable/liberty all-in-one setup. At present we are not able to get IP allocation for the vhost-user instances through DHCP. We also tried assigning IPs to them manually, but inter-VM communication is not happening either.

root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+


ovs-vsctl show output#
--------------------------------------------------------
Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
        Port phy-br-dpdk
            Interface phy-br-dpdk
                type: patch
                options: {peer=int-br-dpdk}
    Bridge br-int
        fail_mode: secure
        Port "vhufa41e799-f2"
            tag: 5
            Interface "vhufa41e799-f2"
                type: dpdkvhostuser
        Port int-br-dpdk
            Interface int-br-dpdk
                type: patch
                options: {peer=phy-br-dpdk}
        Port "tap4e19f8e1-59"
            tag: 5
            Interface "tap4e19f8e1-59"
                type: internal
        Port "vhu05734c49-3b"
            tag: 5
            Interface "vhu05734c49-3b"
                type: dpdkvhostuser
        Port "vhu10c06b4d-84"
            tag: 5
            Interface "vhu10c06b4d-84"
                type: dpdkvhostuser
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhue169c581-ef"
            tag: 5
            Interface "vhue169c581-ef"
                type: dpdkvhostuser
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
                error: "could not open network device br-tun (Invalid argument)"
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.4.0"
--------------------------------------------------------


ovs-ofctl dump-flows br-int#
--------------------------------------------------------
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron#
--------------------------------------------------------





Also attaching Neutron-server, nova-compute & nova-scheduler logs.

We would really appreciate any hint that helps us overcome this inter-VM & DHCP communication issue.




Thanks & Regards
Abhijeet Karve



From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Date:        01/04/2016 07:54 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing
________________________________



You should be able to clone networking-ovs-dpdk, switch to the kilo branch, and run
python setup.py install
in the root of networking-ovs-dpdk; that should install the agent and mech driver.
Then you would need to enable the mech driver (ovsdpdk) on the controller in /etc/neutron/plugins/ml2/ml2_conf.ini
and run the right agent on the computes (networking-ovs-dpdk-agent).
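The steps above can be sketched as the following command sequence. The clone URL and branch match the guide referenced earlier in the thread, but the agent's config-file flags are an assumption about how it is launched, so verify them against your installed version:

```shell
# Install the Kilo branch of the ml2 driver and agent from source.
git clone https://github.com/openstack/networking-ovs-dpdk.git
cd networking-ovs-dpdk
git checkout stable/kilo
sudo python setup.py install

# On the controller: add ovsdpdk to mechanism_drivers in
# /etc/neutron/plugins/ml2/ml2_conf.ini, then restart neutron-server.

# On each compute node: run the DPDK-aware agent in place of the
# stock neutron-openvswitch-agent (flags assumed, not verified).
sudo networking-ovs-dpdk-agent \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
```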


There should be pip packages of networking-ovs-dpdk available shortly; I'll let you know when that happens.

Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing

Hi Przemek,

Thank you so much for your quick response.

The guide (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) you suggested covers OpenStack vhost-user installations with devstack.
Is there any reference for enabling the ovs-dpdk mechanism driver on the Ubuntu OpenStack distribution we are using for our
compute + controller node setup?

We are facing the issues listed below with our current approach: setting up OpenStack Kilo interactively, replacing OVS with a DPDK-enabled build, creating an instance in OpenStack, and
passing that instance id to the QEMU command line, which in turn passes the vhost-user sockets to the instance to enable the DPDK libraries in it.


1. We created a flavor m1.hugepages backed by hugepage memory, but we are unable to spawn an instance with this flavor. We get an error like: "No matching hugetlbfs for the number of hugepages assigned to the flavor."
2. When we pass the socket info to instances via qemu manually, the instances created are not persistent.

Now, as you suggested, we are looking into enabling the ovsdpdk ml2 mechanism driver and agent in our Ubuntu OpenStack distribution.

We would really appreciate any help or a reference with an explanation.

We are using a compute + controller node setup, and we are using the following software platform on the compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________

Thanks,
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Date:        12/17/2015 06:32 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________




I haven’t tried that approach not sure if that would work, it seems clunky.

If you enable ovsdpdk ml2 mechanism driver and agent all of that (add ports to ovs with the right type, pass the sockets to qemu) would be done by OpenStack.

Przemek

From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,

Thank you so much for sharing the ref guide.

Would be appreciate if clear one doubt.

At present we are setting up openstack kilo interactively and further replacing ovs with ovs-dpdk enabled.
Once the above setup done, We are creating instance in openstack and passing that instance id to QEMU command line which further passes the vhost-user sockets to instances, enabling the DPDK libraries in it.

Isn't this the correct way of integrating ovs-dpdk with openstack?


Thanks & Regards
Abhijeet Karve




From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Date:        12/17/2015 05:27 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________





HI Abhijeet,

For Kilo you need to use ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack.

The guide you are following only talks about running ovs-dpdk not how it should be integrated with OpenStack.

Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst

Best regards
Przemek


From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,


We have configured the accelerated data path between a physical interface to the VM using openvswitch netdev-dpdk with vhost-user support. The VM created with this special data path and vhost library, I am calling as DPDK instance.

If assigning ip manually to the newly created Cirros VM instance, We are able to make 2 VM's to communicate on the same compute node. Else it's not associating any ip through DHCP though DHCP is in compute node only.

Yes it's a compute + controller node setup and we are using following software platform on compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK 2.0.0
_____________

We are following the intel guide https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200

When doing "ovs-vsctl show" in compute node, it shows below output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
 Bridge br-ex
     Port phy-br-ex
         Interface phy-br-ex
             type: patch
             options: {peer=int-br-ex}
     Port br-ex
         Interface br-ex
             type: internal
 Bridge br-tun
     fail_mode: secure
     Port br-tun
         Interface br-tun
             type: internal
     Port patch-int
         Interface patch-int
             type: patch
             options: {peer=patch-tun}
 Bridge br-int
     fail_mode: secure
     Port "qvo0ae19a43-b6"
         tag: 2
         Interface "qvo0ae19a43-b6"
     Port br-int
         Interface br-int
             type: internal
     Port "qvo31c89856-a2"
         tag: 1
         Interface "qvo31c89856-a2"
     Port patch-tun
         Interface patch-tun
             type: patch
             options: {peer=patch-int}
     Port int-br-ex
         Interface int-br-ex
             type: patch
             options: {peer=phy-br-ex}
     Port "qvo97fef28a-ec"
         tag: 2
         Interface "qvo97fef28a-ec"
 Bridge br-dpdk
     Port br-dpdk
         Interface br-dpdk
             type: internal
 Bridge "br0"
     Port "br0"
         Interface "br0"
             type: internal
     Port "dpdk0"
         Interface "dpdk0"
             type: dpdk
     Port "vhost-user-2"
         Interface "vhost-user-2"
             type: dpdkvhostuser
     Port "vhost-user-0"
         Interface "vhost-user-0"
             type: dpdkvhostuser
     Port "vhost-user-1"
         Interface "vhost-user-1"
             type: dpdkvhostuser
 ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________

Open flows output in bridge in compute node are as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~#
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-tun
int NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________


Further we don't know what all the network changes(Packet Flow addition) if required for associating IP address through the DHCP.

Would be really appreciate if have clarity on DHCP flow establishment.



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com<mailto:przemyslaw.czesnowicz@intel.com>>
To:        Abhijeet Karve <abhijeet.karve@tcs.com<mailto:abhijeet.karve@tcs.com>>, "Gray, Mark D" <mark.d.gray@intel.com<mailto:mark.d.gray@intel.com>>
Cc:        "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "discuss@openvswitch.org<mailto:discuss@openvswitch.org>" <discuss@openvswitch.org<mailto:discuss@openvswitch.org>>
Date:        12/15/2015 09:13 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser
________________________________






Hi Abhijeet,

If you answer below questions it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least compute+ controller setup)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org<mailto:dev@dpdk.org>; discuss@openvswitch.org<mailto:discuss@openvswitch.org>
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After seting up system boot parameters as shown below, the issue is
> resolved now & we are able to successfully setup openvswitch netdev-dpdk
> with vhostuser support.
>
> __________________________________________________________
> _______________________________________________________
> Setup 2 sets of huge pages with different sizes. One for Vhost and another
> for Guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
>
> At present we are facing an issue in Testing DPDK application on setup. In our
> scenario, We have DPDK instance launched on top of the Openstack Kilo
> compute node. Not able to assign DHCP IP from controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If you
> are not the intended recipient, any dissemination, use, review, distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received this
> communication in error, please notify us by reply e-mail or telephone and
> immediately and permanently delete the message and any attachments.
> Thank you
>
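The hugepage reservation quoted above can be sanity-checked after the reboot. On a real host you would inspect /proc/meminfo directly (e.g. `grep -i huge /proc/meminfo`); the snippet below parses a hypothetical sample of that output so the commands are self-contained, with values assumed from the quoted GRUB line (hugepages=4096 of 2M):

```shell
# Hypothetical /proc/meminfo excerpt (assumed values matching the GRUB settings above)
cat > /tmp/meminfo.sample <<'EOF'
HugePages_Total:    4096
HugePages_Free:     4096
Hugepagesize:       2048 kB
EOF

# Extract the number of reserved default-size (2M) pages; on a real host
# this should match the hugepages=4096 count from the GRUB command line.
awk '/HugePages_Total/ {print $2}' /tmp/meminfo.sample
```

On the live system, the 1G pool is visible separately under /sys/kernel/mm/hugepages/, and `mount | grep hugetlbfs` should show both /mnt/huge_2M and /mnt/huge_1G.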

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
  2016-01-27 11:41                       ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue Czesnowicz, Przemyslaw
@ 2016-01-27 16:22                         ` Abhijeet Karve
  0 siblings, 0 replies; 14+ messages in thread
From: Abhijeet Karve @ 2016-01-27 16:22 UTC (permalink / raw)
  To: Czesnowicz, Przemyslaw; +Cc: dev, discuss

Hi Przemek,

Thanks for the quick response. We are now able to get DHCP IPs for the two 
vhostuser instances, and they can ping each other. The issue was a bug in 
the CirrOS 0.3.0 image we were using in OpenStack; after switching to the 
0.3.1 image as given in the URL 
(https://www.redhat.com/archives/rhos-list/2013-August/msg00032.html), the 
vhostuser VM instances get their IPs.

As we understand it, the packet flow across the DPDK datapath is: the 
vhostuser ports are attached to the br-int bridge, which is patched to the 
br-dpdk bridge, where the physical network (NIC) is attached as port 
dpdk0. 

So to test the flow, we should connect that physical NIC to an external 
packet generator (e.g. Ixia, iperf) and run the testpmd application in the 
vhostuser VM, right? 

Do we need to add any flows to the bridge configurations (either br-int or 
br-dpdk)? 
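For reference, checking and adding flows along those lines is usually done with ovs-ofctl. The sketch below uses the bridge and port names from the configuration above; treat it as an illustrative fragment for a live OVS host, not a verified recipe:

```shell
# Inspect what is already installed on each bridge; with a catch-all
# NORMAL action present, basic L2 forwarding needs no extra flows.
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-dpdk

# Watch the per-port counters to see whether generator traffic actually
# reaches the datapath via the physical port.
ovs-ofctl dump-ports br-dpdk dpdk0

# If br-dpdk has no flows at all, a lowest-priority NORMAL flow restores
# plain MAC-learning switch behaviour.
ovs-ofctl add-flow br-dpdk "priority=0,actions=NORMAL"
```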


Thanks & Regards
Abhijeet Karve




From:   "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To:     Abhijeet Karve <abhijeet.karve@tcs.com>
Cc:     "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" 
<discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date:   01/27/2016 05:11 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Inter-VM communication & IP allocation through DHCP issue



Hi Abhijeet,
 
 
It seems you are almost there! 
When booting the VMs, do you request hugepage memory for them (by setting 
hw:mem_page_size=large in the flavor extra_spec)?
If not, then please do; if yes, then please look into the libvirt log files 
for the VMs (in /var/log/libvirt/qemu/instance-xxx), I think there could be 
a clue there.
 
 
Regards
Przemek
 
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Monday, January 25, 2016 6:13 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Inter-VM communication & IP allocation through DHCP issue
 
Hi Przemek, 

Thank you for your response; it really gave us a breakthrough. 

After setting up DPDK on the compute node for stable/kilo, we are trying 
to set up an OpenStack stable/liberty all-in-one environment. At present 
we are not able to get IP allocation for the vhost-type instances through 
DHCP. We also tried assigning IPs to them manually, but inter-VM 
communication is not happening either. 

#neutron agent-list 
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list 
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+



ovs-vsctl show output# 
-------------------------------------------------------- 
Bridge br-dpdk 
        Port br-dpdk 
            Interface br-dpdk 
                type: internal 
        Port phy-br-dpdk 
            Interface phy-br-dpdk 
                type: patch 
                options: {peer=int-br-dpdk} 
    Bridge br-int 
        fail_mode: secure 
        Port "vhufa41e799-f2" 
            tag: 5 
            Interface "vhufa41e799-f2" 
                type: dpdkvhostuser 
        Port int-br-dpdk 
            Interface int-br-dpdk 
                type: patch 
                options: {peer=phy-br-dpdk} 
        Port "tap4e19f8e1-59" 
            tag: 5 
            Interface "tap4e19f8e1-59" 
                type: internal 
        Port "vhu05734c49-3b" 
            tag: 5 
            Interface "vhu05734c49-3b" 
                type: dpdkvhostuser 
        Port "vhu10c06b4d-84" 
            tag: 5 
            Interface "vhu10c06b4d-84" 
                type: dpdkvhostuser 
        Port patch-tun 
            Interface patch-tun 
                type: patch 
                options: {peer=patch-int} 
        Port "vhue169c581-ef" 
            tag: 5 
            Interface "vhue169c581-ef" 
                type: dpdkvhostuser 
        Port br-int 
            Interface br-int 
                type: internal 
    Bridge br-tun 
        fail_mode: secure 
        Port br-tun 
            Interface br-tun 
                type: internal 
                error: "could not open network device br-tun (Invalid 
argument)" 
        Port patch-int 
            Interface patch-int 
                type: patch 
                options: {peer=patch-tun} 
    ovs_version: "2.4.0" 
-------------------------------------------------------- 


ovs-ofctl dump-flows br-int# 
-------------------------------------------------------- 
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int 
NXST_FLOW reply (xid=0x4): 
 cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron# 
-------------------------------------------------------- 




We are also attaching the neutron-server, nova-compute, and nova-scheduler 
logs. 

It would really help us if we could get any hint to overcome this inter-VM 
& DHCP communication issue. 




Thanks & Regards
Abhijeet Karve



From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com> 
Date:        01/04/2016 07:54 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Getting memory backing issues with qemu parameter passing 




You should be able to clone networking-ovs-dpdk, switch to kilo branch, 
and run 
python setup.py install 
in the root of networking-ovs-dpdk, that should install agent and mech 
driver. 
Then you would need to enable mech driver (ovsdpdk) on the controller in 
the /etc/neutron/plugins/ml2/ml2_conf.ini 
And run the right agent on the computes (networking-ovs-dpdk-agent). 
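Sketched as commands, the steps above might look like the following (repo URL, branch, and config path as mentioned in this thread; verify against your own layout):

```shell
# Install the agent and the ML2 mechanism driver from the kilo branch
git clone https://github.com/openstack/networking-ovs-dpdk.git
cd networking-ovs-dpdk
git checkout stable/kilo
python setup.py install

# On the controller, enable the driver in /etc/neutron/plugins/ml2/ml2_conf.ini:
#   [ml2]
#   mechanism_drivers = ovsdpdk
#
# On each compute node, run networking-ovs-dpdk-agent in place of the
# standard neutron-openvswitch-agent.
```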
  
  
There should be pip packages of networking-ovs-dpdk available shortly, 
I’ll let you know when that happens. 
  
Przemek 
  
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Getting memory backing issues with qemu parameter passing 
  
Hi Przemek, 

Thank you so much for your quick response. 

The guide you suggested 
(https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
) covers OpenStack vhost-user installation with devstack. 
Is there any reference for enabling the ovs-dpdk mechanism driver on the 
Ubuntu OpenStack distribution which we are following for our 
compute+controller node setup? 

We are facing the issues listed below with the current approach: setting 
up OpenStack Kilo interactively, replacing OVS with DPDK-enabled OVS, 
creating an instance in OpenStack, and passing that instance ID to the 
QEMU command line, which in turn passes the vhost-user sockets to the 
instance to enable the DPDK libraries in it. 


1. Created a flavor m1.hugepages which is backed by hugepage memory; 
unable to spawn an instance with this flavor. Getting an error like: "No 
matching hugetlbfs for the number of hugepages assigned to the flavor." 
2. Passing socket info to instances via QEMU manually, the instances 
created are not persistent. 

As you suggested, we are now looking into enabling the ovsdpdk ML2 
mechanism driver and agent in our OpenStack Ubuntu distribution. 

We would really appreciate any help or reference with an explanation. 

We are using compute + controller node setup and we are using following 
software platform on compute node: 
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK 2.0.0 
_____________ 

Thanks, 
Abhijeet Karve 





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com> 
Date:        12/17/2015 06:32 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 





I haven’t tried that approach; I’m not sure it would work, and it seems 
clunky. 
 
If you enable ovsdpdk ml2 mechanism driver and agent all of that (add 
ports to ovs with the right type, pass the sockets to qemu) would be done 
by OpenStack. 
 
Przemek 
 
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 
 
Hi Przemek, 

Thank you so much for sharing the ref guide. 

We would appreciate it if you could clear up one doubt. 

At present we are setting up OpenStack Kilo interactively and then 
replacing OVS with DPDK-enabled OVS. 
Once that setup is done, we create an instance in OpenStack and pass its 
instance ID to the QEMU command line, which in turn passes the vhost-user 
sockets to the instance, enabling the DPDK libraries in it. 

Isn't this the correct way of integrating ovs-dpdk with openstack? 
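For comparison, the manual QEMU wiring described above generally follows the pattern below. This is only a sketch: the socket path, object/netdev names, and memory sizes are illustrative, and vhost-user additionally requires the guest RAM to be shared, hugepage-backed memory:

```shell
# Illustrative vhost-user guest launch; adjust paths, names, and sizes locally.
qemu-system-x86_64 -m 1024 -smp 2 -enable-kvm \
  -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge_1G,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user-0 \
  -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=net0 \
  guest-disk.img
```

With the ovsdpdk ML2 mechanism driver in place, Nova/libvirt generates equivalent arguments automatically, which also makes the instances persistent.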


Thanks & Regards
Abhijeet Karve




From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com> 
Date:        12/17/2015 05:27 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 






HI Abhijeet, 

For Kilo you need to use ovsdpdk mechanism driver and a matching agent to 
integrate ovs-dpdk with OpenStack. 

The guide you are following only talks about running ovs-dpdk not how it 
should be integrated with OpenStack. 

Please follow this guide: 
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst 


Best regards 
Przemek 


From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com] 
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 

Hi Przemek, 


We have configured an accelerated data path between a physical interface 
and the VM using openvswitch netdev-dpdk with vhost-user support. A VM 
created with this special data path and the vhost library is what I am 
calling a DPDK instance. 

If we assign IPs manually to the newly created Cirros VM instances, we are 
able to make two VMs on the same compute node communicate. Otherwise no IP 
is associated through DHCP, even though DHCP is on the compute node only. 

Yes it's a compute + controller node setup and we are using following 
software platform on compute node: 
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK 2.0.0 
_____________ 

We are following the Intel guide: 
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200 


Running "ovs-vsctl show" on the compute node gives the output below: 
_____________________________________________ 
ovs-vsctl show 
c2ec29a5-992d-4875-8adc-1265c23e0304 
 Bridge br-ex 
     Port phy-br-ex 
         Interface phy-br-ex 
             type: patch 
             options: {peer=int-br-ex} 
     Port br-ex 
         Interface br-ex 
             type: internal 
 Bridge br-tun 
     fail_mode: secure 
     Port br-tun 
         Interface br-tun 
             type: internal 
     Port patch-int 
         Interface patch-int 
             type: patch 
             options: {peer=patch-tun} 
 Bridge br-int 
     fail_mode: secure 
     Port "qvo0ae19a43-b6" 
         tag: 2 
         Interface "qvo0ae19a43-b6" 
     Port br-int 
         Interface br-int 
             type: internal 
     Port "qvo31c89856-a2" 
         tag: 1 
         Interface "qvo31c89856-a2" 
     Port patch-tun 
         Interface patch-tun 
             type: patch 
             options: {peer=patch-int} 
     Port int-br-ex 
         Interface int-br-ex 
             type: patch 
             options: {peer=phy-br-ex} 
     Port "qvo97fef28a-ec" 
         tag: 2 
         Interface "qvo97fef28a-ec" 
 Bridge br-dpdk 
     Port br-dpdk 
         Interface br-dpdk 
             type: internal 
 Bridge "br0" 
     Port "br0" 
         Interface "br0" 
             type: internal 
     Port "dpdk0" 
         Interface "dpdk0" 
             type: dpdk 
     Port "vhost-user-2" 
         Interface "vhost-user-2" 
             type: dpdkvhostuser 
     Port "vhost-user-0" 
         Interface "vhost-user-0" 
             type: dpdkvhostuser 
     Port "vhost-user-1" 
         Interface "vhost-user-1" 
             type: dpdkvhostuser 
 ovs_version: "2.4.0" 
root@dpdk:~# 
_____________________________________________ 

The OpenFlow flows on the bridges on the compute node are as below: 
_____________________________________________ 
root@dpdk:~# ovs-ofctl dump-flows br-tun 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2) 
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20) 
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22) 
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10) 
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10) 
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1 
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22) 
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# ovs-ofctl dump-flows br-int 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop 
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL 
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
_____________________________________________ 


Further, we don't know what network changes (packet flow additions), if 
any, are required for associating an IP address through DHCP. 

We would really appreciate clarity on how the DHCP flows get established. 



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com> 
To:        Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <
mark.d.gray@intel.com> 
Cc:        "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <
discuss@openvswitch.org> 
Date:        12/15/2015 09:13 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 







Hi Abhijeet,

If you answer below questions it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VM's on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm 
assuming this is at least compute+ controller setup)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Dear All,
> 
> After setting up system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
> 
> __________________________________________________________
> _______________________________________________________
> Setup 2 sets of huge pages with different sizes. One for Vhost and 
another
> for Guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
> 
> At present we are facing an issue in Testing DPDK application on setup. 
In our
> scenario, We have DPDK instance launched on top of the Openstack Kilo
> compute node. Not able to assign DHCP IP from controller.
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If 
you
> are not the intended recipient, any dissemination, use, review, 
distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received 
this
> communication in error, please notify us by reply e-mail or telephone 
and
> immediately and permanently delete the message and any attachments.
> Thank you
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue
@ 2016-01-26 19:14 Abhijeet Karve
  0 siblings, 0 replies; 14+ messages in thread
From: Abhijeet Karve @ 2016-01-26 19:14 UTC (permalink / raw)
  To: Czesnowicz, Przemyslaw; +Cc: dev, discuss

Hi Przemek,

Thank you for your response; it really gave us a breakthrough. 

After setting up DPDK on the compute node for stable/kilo, we are trying to set up an OpenStack stable/liberty all-in-one environment. At present we are not able to get IP allocation for the vhost-type instances through DHCP. We also tried assigning IPs to them manually, but inter-VM communication is not happening either.

#neutron agent-list
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+


ovs-vsctl show output#
--------------------------------------------------------
Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
        Port phy-br-dpdk
            Interface phy-br-dpdk
                type: patch
                options: {peer=int-br-dpdk}
    Bridge br-int
        fail_mode: secure
        Port "vhufa41e799-f2"
            tag: 5
            Interface "vhufa41e799-f2"
                type: dpdkvhostuser
        Port int-br-dpdk
            Interface int-br-dpdk
                type: patch
                options: {peer=phy-br-dpdk}
        Port "tap4e19f8e1-59"
            tag: 5
            Interface "tap4e19f8e1-59"
                type: internal
        Port "vhu05734c49-3b"
            tag: 5
            Interface "vhu05734c49-3b"
                type: dpdkvhostuser
        Port "vhu10c06b4d-84"
            tag: 5
            Interface "vhu10c06b4d-84"
                type: dpdkvhostuser
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhue169c581-ef"
            tag: 5
            Interface "vhue169c581-ef"
                type: dpdkvhostuser
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
                error: "could not open network device br-tun (Invalid argument)"
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.4.0"
--------------------------------------------------------


ovs-ofctl dump-flows br-int#
--------------------------------------------------------
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
 cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 actions=mod_vlan_vid:5,NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, n_bytes=0, idle_age=2416, priority=0 actions=drop
 cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, n_bytes=0, idle_age=2410, priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 actions=NORMAL
 cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron#
--------------------------------------------------------


It would really help us if we could get any hint to overcome this inter-VM & DHCP communication issue.




Thanks & Regards
Abhijeet Karve


From:	"Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To:	Abhijeet Karve <abhijeet.karve@tcs.com>
Cc:	"dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date:	01/04/2016 07:54 PM
Subject:	RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing



You should be able to clone networking-ovs-dpdk, switch to the kilo branch, and run
python setup.py install
in the root of networking-ovs-dpdk; that should install the agent and mech driver.
Then you would need to enable the mech driver (ovsdpdk) on the controller in /etc/neutron/plugins/ml2/ml2_conf.ini
and run the right agent on the computes (networking-ovs-dpdk-agent).
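The controller-side change mentioned above is a small edit to the ml2 configuration; a sketch only (verify the exact section and value against the kilo networking-ovs-dpdk documentation):

```
[ml2]
mechanism_drivers = ovsdpdk
```

After editing, neutron-server needs a restart to pick up the new driver.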

There should be pip packages of networking-ovs-dpdk available shortly; I'll let you know when that happens.

Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing

Hi Przemek,

Thank you so much for your quick response. 

The guide which you suggested (https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) is for OpenStack vhost-user installations with devstack.
Is there any reference for including the ovs-dpdk mechanism driver in the OpenStack Ubuntu distribution that we are following for the
compute+controller node setup?

We are facing the issues listed below with the current approach: setting up OpenStack Kilo interactively, replacing OVS with DPDK-enabled OVS, creating an instance in OpenStack, and
passing that instance's ID to the QEMU command line, which in turn passes the vhost-user sockets to the instance to enable the DPDK libraries in it.


1. We created a flavor m1.hugepages that is backed by hugepage memory, but we are unable to spawn an instance with this flavor; we get an error like: "No matching hugetlbfs for the number of hugepages assigned to the flavor."
2. When we pass the socket info to instances via QEMU manually, the instances created are not persistent.
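For what it's worth, the manual approach in point 2 corresponds to QEMU arguments of roughly this shape. This is a sketch only: the socket path, sizes, and image name are illustrative, and the key requirement for vhost-user is that guest memory be a shared, hugepage-backed memory-backend (which is also what the hugetlbfs error in point 1 is about):

```
qemu-system-x86_64 -enable-kvm -m 1024 \
    -object memory-backend-file,id=mem0,size=1024M,mem-path=/mnt/huge_1G,share=on \
    -numa node,memdev=mem0 -mem-prealloc \
    -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user-0 \
    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=net0 \
    guest-disk.img
```

Anything launched this way bypasses nova entirely, which is why such instances are not persistent; the ovsdpdk ml2 driver route avoids that.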

Now, as you suggested, we are looking into enabling the ovsdpdk ml2 mechanism driver and agent in our OpenStack Ubuntu distribution.

We would really appreciate any help or reference with an explanation.

We are using a compute + controller node setup, and we are using the following software platform on the compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK: 2.0.0
_____________

Thanks,
Abhijeet Karve





From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 06:32 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser



I haven't tried that approach; I'm not sure if it would work, and it seems clunky.

If you enable the ovsdpdk ml2 mechanism driver and agent, all of that (adding ports to OVS with the right type, passing the sockets to qemu) would be done by OpenStack.

Przemek
From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,

Thank you so much for sharing the reference guide.

We would appreciate it if you could clear up one doubt.

At present we are setting up OpenStack Kilo interactively and then replacing OVS with DPDK-enabled OVS.
Once the above setup is done, we create an instance in OpenStack and pass that instance's ID to the QEMU command line, which in turn passes the vhost-user sockets to the instance, enabling the DPDK libraries in it.

Isn't this the correct way of integrating ovs-dpdk with OpenStack?


Thanks & Regards
Abhijeet Karve




From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>, "Gray, Mark D" <mark.d.gray@intel.com>
Date: 12/17/2015 05:27 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser




Hi Abhijeet,

For Kilo you need to use the ovsdpdk mechanism driver and a matching agent to integrate ovs-dpdk with OpenStack.

The guide you are following only talks about running ovs-dpdk, not how it should be integrated with OpenStack.

Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst

Best regards
Przemek


From: Abhijeet Karve [mailto:abhijeet.karve@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser

Hi Przemek,


We have configured the accelerated data path between a physical interface and the VM using Open vSwitch netdev-dpdk with vhost-user support. A VM created with this special data path and vhost library is what I am calling a DPDK instance.

If we assign an IP manually to the newly created CirrOS VM instance, we are able to make two VMs on the same compute node communicate. Otherwise no IP is associated through DHCP, even though the DHCP server is on the compute node itself.

Yes, it's a compute + controller node setup, and we are using the following software platform on the compute node:
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK: 2.0.0
_____________

We are following the Intel guide: https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200

When doing "ovs-vsctl show" on the compute node, it shows the output below:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "qvo0ae19a43-b6"
            tag: 2
            Interface "qvo0ae19a43-b6"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo31c89856-a2"
            tag: 1
            Interface "qvo31c89856-a2"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo97fef28a-ec"
            tag: 2
            Interface "qvo97fef28a-ec"
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "vhost-user-2"
            Interface "vhost-user-2"
                type: dpdkvhostuser
        Port "vhost-user-0"
            Interface "vhost-user-0"
                type: dpdkvhostuser
        Port "vhost-user-1"
            Interface "vhost-user-1"
                type: dpdkvhostuser
    ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________
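For context, the "br0" bridge and its dpdk/dpdkvhostuser ports in the output above are typically created with commands of this form on OVS 2.4 (a sketch mirroring the names shown; exact flags may differ between OVS versions):

```
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser
```

The vhost-user socket then appears under /var/run/openvswitch/ with the port's name, and that path is what gets passed to QEMU.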

The OpenFlow flow output on the bridges on the compute node is as below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# 
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________


Further, we don't know what network changes (packet flow additions) are required for associating an IP address through DHCP.

We would really appreciate clarity on the DHCP flow establishment.
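One way to narrow down where DHCP packets stop is to compare n_packets counters between two dump-flows runs; the per-table totals can be summed with a small script. A minimal sketch (pure Python, parsing the ovs-ofctl dump format shown above; the sample lines are taken from that output):

```python
import re

def packet_counts(dump_text):
    """Sum n_packets per OpenFlow table from `ovs-ofctl dump-flows` output."""
    counts = {}
    for line in dump_text.splitlines():
        m = re.search(r"table=(\d+).*n_packets=(\d+)", line)
        if m:
            table = int(m.group(1))
            counts[table] = counts.get(table, 0) + int(m.group(2))
    return counts

sample = (
    "cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, "
    "idle_age=19982, priority=1,in_port=1 actions=resubmit(,2)\n"
    "cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, "
    "idle_age=19982, priority=0 actions=drop"
)
print(packet_counts(sample))  # {0: 519, 22: 519}
```

Running this before and after triggering a DHCP request on the guest shows which table (if any) the broadcast actually reaches.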



Thanks & Regards
Abhijeet Karve





From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz@intel.com>
To: Abhijeet Karve <abhijeet.karve@tcs.com>, "Gray, Mark D" <mark.d.gray@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "discuss@openvswitch.org" <discuss@openvswitch.org>
Date: 12/15/2015 09:13 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser





Hi Abhijeet,

If you answer below questions it will help me understand your problem.

What do you mean by a DPDK instance?
Are you able to communicate with other VMs on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm assuming this is at least a compute + controller setup.)

Best regards
Przemek
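A quick way to answer the last question is to watch for DHCP (BOOTP) traffic on the node; a sketch (the interface and namespace names are illustrative, following neutron's qdhcp-<network-id> convention):

```
# On the node running the DHCP agent:
sudo tcpdump -i any -n 'udp port 67 or udp port 68'
# Or inside the neutron DHCP namespace:
sudo ip netns exec qdhcp-<network-id> tcpdump -i any -n 'udp port 67 or udp port 68'
```

If DISCOVER packets never show up here, the drop is on the compute-side datapath rather than in the DHCP agent.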

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Dear All,
> 
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
> 
> __________________________________________________________
> _______________________________________________________
> Setup 2 sets of huge pages with different sizes. One for Vhost and another
> for Guest VM.
>         - Edit /etc/default/grub:
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>           # update-grub
>         - Mount the huge pages into different directories:
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
> 
> At present we are facing an issue in testing a DPDK application on this setup. In our
> scenario, we have a DPDK instance launched on top of an OpenStack Kilo
> compute node, and we are not able to assign a DHCP IP from the controller.
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 
> =====-----=====-----=====
> Notice: The information contained in this e-mail message and/or
> attachments to it may contain confidential or privileged information. If you
> are not the intended recipient, any dissemination, use, review, distribution,
> printing or copying of the information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If you have received this
> communication in error, please notify us by reply e-mail or telephone and
> immediately and permanently delete the message and any attachments.
> Thank you
>
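As a sanity check on the kernel parameters quoted above, the total memory reserved by that GRUB line can be computed:

```shell
# 10 x 1G pages plus 4096 x 2M pages, expressed in MB
total_mb=$(( 10 * 1024 + 4096 * 2 ))
echo "${total_mb} MB reserved as hugepages"
```

That is 18 GB in total, so the host needs comfortably more RAM than that for OVS-DPDK plus the guests.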


[attachment "nova-scheduler.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "nova-compute.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "neutron-server.log" removed by Abhijeet Karve/AHD/TCS]
From aconole@redhat.com  Tue Jan 26 20:25:49 2016
Return-Path: <aconole@redhat.com>
Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28])
 by dpdk.org (Postfix) with ESMTP id 84AB78E94
 for <dev@dpdk.org>; Tue, 26 Jan 2016 20:25:49 +0100 (CET)
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
 (int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
 by mx1.redhat.com (Postfix) with ESMTPS id 0412532D3CD
 for <dev@dpdk.org>; Tue, 26 Jan 2016 19:25:49 +0000 (UTC)
Received: from aconole-fed23 (dhcp-25-194.bos.redhat.com [10.18.25.194])
 by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id
 u0QJPmSR011385
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits%6 verify=NO)
 for <dev@dpdk.org>; Tue, 26 Jan 2016 14:25:48 -0500
From: Aaron Conole <aconole@redhat.com>
To: dev@dpdk.org
In-Reply-To: <1449850823-29017-1-git-send-email-aconole@redhat.com>
Date: Tue, 26 Jan 2016 14:25:48 -0500
Message-ID: <f7ta8nsxhvn.fsf@redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1.50 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Subject: Re: [dpdk-dev] [PATCH 2.3] tools/dpdk_nic_bind.py: Verbosely warn
	the user on bind
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Tue, 26 Jan 2016 19:25:49 -0000

Ping... This patch has been sitting^Hrotting for a bit over a month.

> DPDK ports are only detected during the EAL initialization. After that, any
> new DPDK ports which are bound will not be visible to the application.
>
> The dpdk_nic_bind.py can be a bit more helpful to let users know that DPDK
> enabled applications will not find rebound ports until after they have been
> restarted.
>
> Signed-off-by: Aaron Conole <aconole@redhat.com>
>
> ---
> tools/dpdk_nic_bind.py | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/tools/dpdk_nic_bind.py b/tools/dpdk_nic_bind.py
> index f02454e..ca39389 100755
> --- a/tools/dpdk_nic_bind.py
> +++ b/tools/dpdk_nic_bind.py
> @@ -344,8 +344,10 @@ def bind_one(dev_id, driver, force):
>              dev["Driver_str"] = "" # clear driver string
>
>      # if we are binding to one of DPDK drivers, add PCI id's to that driver
> +    bDpdkDriver = False
>      if driver in dpdk_drivers:
>          filename = "/sys/bus/pci/drivers/%s/new_id" % driver
> +        bDpdkDriver = True
>          try:
>              f = open(filename, "w")
>          except:
> @@ -371,12 +373,18 @@ def bind_one(dev_id, driver, force):
>      try:
>          f.write(dev_id)
>          f.close()
> +        if bDpdkDriver:
> +            print "Device rebound to dpdk driver."
> +            print "Remember to restart any application that will use this port."
>      except:
>          # for some reason, closing dev_id after adding a new PCI ID to new_id
>          # results in IOError. however, if the device was successfully bound,
>          # we don't care for any errors and can safely ignore IOError
>          tmp = get_pci_device_details(dev_id)
>          if "Driver_str" in tmp and tmp["Driver_str"] == driver:
> +            if bDpdkDriver:
> +                print "Device rebound to dpdk driver."
> +                print "Remember to restart any application that will use this port."
>              return
>          print "Error: bind failed for %s - Cannot bind to driver %s" % (dev_id, driver)
>          if saved_driver is not None: # restore any previous driver

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2016-01-27 16:22 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-01  6:13 [dpdk-dev] DPDK OVS on Ubuntu 14.04 Abhijeet Karve
2015-12-01 14:46 ` Polehn, Mike A
2015-12-02 14:52   ` Gray, Mark D
2015-12-15  5:55     ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Successfully setup DPDK OVS with vhostuser Abhijeet Karve
2015-12-15 15:42       ` Czesnowicz, Przemyslaw
2015-12-16  9:36         ` Abhijeet Karve
2015-12-17 11:57           ` Czesnowicz, Przemyslaw
2015-12-17 12:40             ` Abhijeet Karve
2015-12-17 13:01               ` Czesnowicz, Przemyslaw
2015-12-24 17:41                 ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting memory backing issues with qemu parameter passing Abhijeet Karve
2016-01-04 14:24                   ` Czesnowicz, Przemyslaw
     [not found]                     ` <OF7B9ED0F7.5B3B2C67-ON65257F45.0055D550-65257F45.005E9455@tcs.com>
2016-01-27 11:41                       ` [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue Czesnowicz, Przemyslaw
2016-01-27 16:22                         ` Abhijeet Karve
2016-01-26 19:14 Abhijeet Karve

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).