* [dpdk-users] Poor OVS-DPDK performance
@ 2017-12-08 5:48 abhishek jain
2017-12-08 9:33 ` abhishek jain
0 siblings, 1 reply; 6+ messages in thread
From: abhishek jain @ 2017-12-08 5:48 UTC (permalink / raw)
To: users, Mooney, Sean K
Hi Team
Currently I have an OVS-DPDK setup configured on Ubuntu 14.04.5 LTS, with
one VNF whose vhost interfaces are mapped to the OVS bridge br-int.
However, when I run throughput tests against this VNF, I get very low
throughput.
Please give me some pointers to improve the performance of the VNF with
this OVS-DPDK configuration.
Regards
Abhishek Jain
* Re: [dpdk-users] Poor OVS-DPDK performance
2017-12-08 5:48 [dpdk-users] Poor OVS-DPDK performance abhishek jain
@ 2017-12-08 9:33 ` abhishek jain
2017-12-08 12:41 ` Mooney, Sean K
0 siblings, 1 reply; 6+ messages in thread
From: abhishek jain @ 2017-12-08 9:33 UTC (permalink / raw)
To: users, Mooney, Sean K
Hi Team
Below is my OVS configuration:
root@node-3:~# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024",
pmd-cpu-mask="1800"}
root@node-3:~#
root@node-3:~#
root@node-3:~# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.13.0-137-generic root=/dev/mapper/os-root ro
console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset
root=UUID=2949f531-bedc-47a0-a2f2-6ebf8e1d1edb iommu=pt intel_iommu=on
isolcpus=11,12,13,14,15,16
root@node-3:~#
root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show
main thread:
emc hits:2
megaflow hits:0
miss:2
lost:0
polling cycles:99607459 (98.80%)
processing cycles:1207437 (1.20%)
avg cycles per packet: 25203724.00 (100814896/4)
avg processing cycles per packet: 301859.25 (1207437/4)
pmd thread numa_id 0 core_id 11:
emc hits:0
megaflow hits:0
miss:0
lost:0
polling cycles:272926895316 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 12:
emc hits:0
megaflow hits:0
miss:0
lost:0
polling cycles:240339950037 (100.00%)
processing cycles:0 (0.00%)
root@node-3:~#
root@node-3:~#
root@node-3:~# grep -r Huge /proc/meminfo
AnonHugePages: 59392 kB
HugePages_Total: 5126
HugePages_Free: 4870
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
root@node-3:~#
I'm using OVS version 2.4.1
Regards
Abhishek Jain
On Fri, Dec 8, 2017 at 11:18 AM, abhishek jain <ashujain9727@gmail.com>
wrote:
> Hi Team
>
>
> Currently I have an OVS-DPDK setup configured on Ubuntu 14.04.5 LTS, with
> one VNF whose vhost interfaces are mapped to the OVS bridge br-int.
> However, when I run throughput tests against this VNF, I get very low
> throughput.
>
> Please give me some pointers to improve the performance of the VNF with
> this OVS-DPDK configuration.
>
> Regards
> Abhishek Jain
>
* Re: [dpdk-users] Poor OVS-DPDK performance
2017-12-08 9:33 ` abhishek jain
@ 2017-12-08 12:41 ` Mooney, Sean K
2017-12-11 6:55 ` abhishek jain
0 siblings, 1 reply; 6+ messages in thread
From: Mooney, Sean K @ 2017-12-08 12:41 UTC (permalink / raw)
To: abhishek jain, users; +Cc: Mooney, Sean K
Hi, can you provide the QEMU command line you are using for the VM,
as well as the output of the following commands:
ovs-vsctl show
ovs-vsctl list bridge
ovs-vsctl list interface
ovs-vsctl list port
Basically, with the topology info above I want to confirm that:
- All bridges are interconnected by patch ports, not veth pairs
- All bridges are of datapath type netdev
- The VM is connected by vhost-user interfaces, not kernel vhost (kernel vhost still works with OVS-DPDK but is really slow)
Looking at your core masks, the vswitch is incorrectly tuned:
{dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024", pmd-cpu-mask="1800"}
root@node-3:~#
You have configured 10 lcores, which are not used for packet processing and of which only the first will actually be used by OVS-DPDK,
and only two cores (11 and 12) for the PMD threads.
You have not said whether you are on a multi-socket or single-socket system, but assuming it is single-socket, try this instead:
{dpdk-init="true", dpdk-lcore-mask="0x2", dpdk-socket-mem="1024", pmd-cpu-mask="0xC"}
Here I'm assuming core 0 is used by the OS, core 1 will be used for the lcore thread, and cores 2 and 3 will be used for the PMD threads, which do all the packet forwarding.
If you have hyperthreading turned on on your host, add the hyperthread siblings to the pmd-cpu-mask.
For example, on a 16-core CPU with 32 threads the pmd-cpu-mask should be "0xC000C" (cores 2, 3 and their siblings 18, 19).
On the kernel cmdline, change isolcpus to isolcpus=2,3, or isolcpus=2,3,18,19 with hyperthreading.
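As a minimal sketch of applying that tuning (assuming your build reads these keys from other_config, as your "ovs-vsctl get Open_vSwitch . other_config" output suggests, and taking the hyperthreaded variant with cores 2,3 and siblings 18,19; ovs-vswitchd may need a restart for the lcore/socket-mem settings to take effect):

# Build a pmd-cpu-mask from a list of logical CPU ids.
mask=0
for cpu in 2 3 18 19; do mask=$(( mask | (1 << cpu) )); done
printf 'pmd-cpu-mask=0x%X\n' "$mask"        # -> 0xC000C

# Apply the suggested settings.
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC000C

# Confirm which cores the PMD threads actually landed on.
ovs-appctl dpif-netdev/pmd-stats-show | grep core_id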
With the HT config you should be able to handle up to 30 Mpps on a 2.2 GHz CPU, assuming you compiled OVS and DPDK with "-fPIC -O2 -march=native" and linked statically.
If you have a faster CPU you should get more, but as a rough estimate, with OVS 2.4 and DPDK 2.0 you should
expect between 6.5-8 Mpps phy-to-phy per physical core, plus an additional 70-80% if hyperthreading is used.
Your phy-VM-phy numbers will be a little lower, as the vhost-user PMD takes more clock cycles to process packets than
the physical NIC drivers do, but that should help set your expectations.
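As a rough cross-check of those figures, assuming two physical PMD cores and a ~75% hyperthreading gain:

awk 'BEGIN { printf "%.1f Mpps\n", 2 * 8 * 1.75 }'   # ~28 Mpps, in the same ballpark as the ~30 Mpps figure above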
OVS 2.4 and DPDK 2.0 are quite old at this point, but they should still give a significant performance increase over kernel OVS.
From: abhishek jain [mailto:ashujain9727@gmail.com]
Sent: Friday, December 8, 2017 9:34 AM
To: users@dpdk.org; Mooney, Sean K <sean.k.mooney@intel.com>
Subject: Re: Poor OVS-DPDK performance
Hi Team
Below is my OVS configuration:
root@node-3:~# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024", pmd-cpu-mask="1800"}
root@node-3:~#
root@node-3:~#
root@node-3:~# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.13.0-137-generic root=/dev/mapper/os-root ro console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset root=UUID=2949f531-bedc-47a0-a2f2-6ebf8e1d1edb iommu=pt intel_iommu=on isolcpus=11,12,13,14,15,16
root@node-3:~#
root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show
main thread:
emc hits:2
megaflow hits:0
miss:2
lost:0
polling cycles:99607459 (98.80%)
processing cycles:1207437 (1.20%)
avg cycles per packet: 25203724.00 (100814896/4)
avg processing cycles per packet: 301859.25 (1207437/4)
pmd thread numa_id 0 core_id 11:
emc hits:0
megaflow hits:0
miss:0
lost:0
polling cycles:272926895316 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 12:
emc hits:0
megaflow hits:0
miss:0
lost:0
polling cycles:240339950037 (100.00%)
processing cycles:0 (0.00%)
root@node-3:~#
root@node-3:~#
root@node-3:~# grep -r Huge /proc/meminfo
AnonHugePages: 59392 kB
HugePages_Total: 5126
HugePages_Free: 4870
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
root@node-3:~#
I'm using OVS version 2.4.1
Regards
Abhishek Jain
On Fri, Dec 8, 2017 at 11:18 AM, abhishek jain <ashujain9727@gmail.com<mailto:ashujain9727@gmail.com>> wrote:
Hi Team
Currently I have an OVS-DPDK setup configured on Ubuntu 14.04.5 LTS, with one VNF whose vhost interfaces are mapped to the OVS bridge br-int.
However, when I run throughput tests against this VNF, I get very low throughput.
Please give me some pointers to improve the performance of the VNF with this OVS-DPDK configuration.
Regards
Abhishek Jain
* Re: [dpdk-users] Poor OVS-DPDK performance
2017-12-08 12:41 ` Mooney, Sean K
@ 2017-12-11 6:55 ` abhishek jain
2017-12-11 21:28 ` [dpdk-users] Fwd: " abhishek jain
2017-12-12 12:55 ` [dpdk-users] " Mooney, Sean K
0 siblings, 2 replies; 6+ messages in thread
From: abhishek jain @ 2017-12-11 6:55 UTC (permalink / raw)
To: Mooney, Sean K; +Cc: users
Hi Sean
Thanks for looking into it.
You got it right: I'm targeting phy-VM-phy numbers.
I have an 8-core host with HT turned on, giving 16 threads. Below is the
output describing it in detail:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 86
Stepping: 3
CPU MHz: 800.000
BogoMIPS: 4199.99
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-15
Please let me know the appropriate configuration for better performance
with this setup. Also find the attached notepad file covering all
the OVS-DPDK details for the queries in your mail above.
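For reference, a minimal sketch of how the earlier mask advice could map onto this 8-core/16-thread box, assuming the common Linux numbering where logical CPU N and N+8 share a physical core (verify the sibling pairs first):

# Show which logical CPUs are hyperthread siblings of cores 2 and 3.
cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list
cat /sys/devices/system/cpu/cpu3/topology/thread_siblings_list

# If they report 2,10 and 3,11, a pmd-cpu-mask covering cores 2,3 plus their
# siblings 10,11 would be 0xC0C, with isolcpus=2,3,10,11 on the kernel cmdline.
printf '0x%X\n' $(( (1<<2) | (1<<3) | (1<<10) | (1<<11) ))   # -> 0xC0C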
One more query: I have bound my physical interfaces to the DPDK-compatible
vfio-pci driver; below is the output:
dpdk_nic_bind --status
Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci
unused=
0000:03:00.1 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci
unused=
0000:05:00.1 'I350 Gigabit Network Connection' drv=vfio-pci unused=
Once an interface is bound to vfio-pci, I'm no longer able to receive any
packets on that physical interface.
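Note: that behaviour is expected, since a NIC bound to vfio-pci disappears from the kernel network stack and only passes traffic through a DPDK application such as OVS-DPDK. A minimal sketch, assuming OVS 2.4's convention that physical DPDK ports are named dpdkN after their DPDK port index and that br-prv is the bridge meant to carry the physical uplink:

# The bound NICs are no longer visible to the kernel:
ip link show

# Attach one to the bridge as a DPDK port and check that it comes up:
ovs-vsctl add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl --columns=name,type,admin_state list interface dpdk0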
Thanks again for your time.
Regards
Abhishek Jain
On Fri, Dec 8, 2017 at 6:11 PM, Mooney, Sean K <sean.k.mooney@intel.com>
wrote:
> Hi can you provide the qemu commandline you are using for the vm
>
> As well as the following commands
>
>
>
> Ovs-vsctl show
>
> Ovs-vsctl list bridge
>
> Ovs-vsctl list interface
>
> Ovs-vsctl list port
>
>
>
> Basically with the topology info above I want to confirm:
>
> - All bridges are interconnected by patch ports not veth,
>
> - All bridges are datapath type netdev
>
> - Vm is connected by vhost-user interfaces not kernel vhost(it
> still works with ovs-dpdk but is really slow)
>
>
>
> Looking at your core masks, the vswitch is incorrectly tuned:
>
>
>
> {dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024",
> pmd-cpu-mask="1800"}
> root@node-3:~#
>
>
>
> You have configured 10 lcores, which are not used for packet processing and
> of which only the first will actually be used by OVS-DPDK,
>
> and only two cores (11 and 12) for the PMD threads.
>
>
>
> You have not said whether you are on a multi-socket or single-socket system,
> but assuming it is single-socket, try this instead:
>
>
> {dpdk-init="true", dpdk-lcore-mask="0x2", dpdk-socket-mem="1024",
> pmd-cpu-mask="0xC"}
>
> Here I'm assuming core 0 is used by the OS, core 1 will be used for the lcore
> thread, and cores 2 and 3 will be used for the PMD threads, which do all the
> packet forwarding.
>
>
>
> If you have hyperthreading turned on on your host, add the hyperthread
> siblings to the pmd-cpu-mask.
>
> For example, on a 16-core CPU with 32 threads the pmd-cpu-mask
> should be "0xC000C" (cores 2, 3 and their siblings 18, 19).
>
>
>
> On the kernel cmdline change the isolcpus to isolcpus=2,3 or
> isolcpus=2,3,18,19 for hyper threading.
>
>
>
> With the HT config you should be able to handle up to 30 Mpps on a 2.2 GHz
> CPU, assuming you compiled OVS and DPDK with "-fPIC -O2 -march=native" and
> linked statically.
>
> If you have a faster CPU you should get more, but as a rough estimate, with
> OVS 2.4 and DPDK 2.0 you should expect between 6.5-8 Mpps phy-to-phy per
> physical core, plus an additional 70-80% if hyperthreading is used.
>
>
>
> Your phy-VM-phy numbers will be a little lower, as the vhost-user PMD takes
> more clock cycles to process packets than the physical NIC drivers do, but
> that should help set your expectations.
>
>
>
> Ovs 2.4 and dpdk 2.0 are quite old at this point but they still should
> give a significant performance increase over kernel ovs.
>
>
>
>
>
>
>
>
>
> *From:* abhishek jain [mailto:ashujain9727@gmail.com]
> *Sent:* Friday, December 8, 2017 9:34 AM
> *To:* users@dpdk.org; Mooney, Sean K <sean.k.mooney@intel.com>
> *Subject:* Re: Poor OVS-DPDK performance
>
>
>
> Hi Team
>
> Below is my OVS configuration..
>
> root@node-3:~# ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024",
> pmd-cpu-mask="1800"}
> root@node-3:~#
>
> root@node-3:~#
> root@node-3:~# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.13.0-137-generic root=/dev/mapper/os-root ro
> console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset
> root=UUID=2949f531-bedc-47a0-a2f2-6ebf8e1d1edb iommu=pt intel_iommu=on
> isolcpus=11,12,13,14,15,16
>
> root@node-3:~#
> root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show
> main thread:
> emc hits:2
> megaflow hits:0
> miss:2
> lost:0
> polling cycles:99607459 (98.80%)
> processing cycles:1207437 (1.20%)
> avg cycles per packet: 25203724.00 (100814896/4)
> avg processing cycles per packet: 301859.25 (1207437/4)
> pmd thread numa_id 0 core_id 11:
> emc hits:0
> megaflow hits:0
> miss:0
> lost:0
> polling cycles:272926895316 (100.00%)
> processing cycles:0 (0.00%)
> pmd thread numa_id 0 core_id 12:
> emc hits:0
> megaflow hits:0
> miss:0
> lost:0
> polling cycles:240339950037 (100.00%)
> processing cycles:0 (0.00%)
> root@node-3:~#
>
> root@node-3:~#
> root@node-3:~# grep -r Huge /proc/meminfo
> AnonHugePages: 59392 kB
> HugePages_Total: 5126
> HugePages_Free: 4870
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> root@node-3:~#
>
> I'm using OVS version 2.4.1
>
> Regards
>
> Abhishek Jain
>
>
>
> On Fri, Dec 8, 2017 at 11:18 AM, abhishek jain <ashujain9727@gmail.com>
> wrote:
>
> Hi Team
>
> Currently I have an OVS-DPDK setup configured on Ubuntu 14.04.5 LTS, with
> one VNF whose vhost interfaces are mapped to the OVS bridge br-int.
>
> However, when I run throughput tests against this VNF, I get very low
> throughput.
>
>
> Please give me some pointers to improve the performance of the VNF with
> this OVS-DPDK configuration.
>
> Regards
>
> Abhishek Jain
>
>
>
-------------- next part --------------
root@node-3:~# ovs-vsctl show
5aefe57f-04ca-4af2-8560-1530c1377979
Bridge br-prv
fail_mode: secure
Port "p_eeee51a2-0"
Interface "p_eeee51a2-0"
type: internal
Port br-prv
Interface br-prv
type: internal
Port phy-br-prv
Interface phy-br-prv
type: patch
options: {peer=int-br-prv}
Bridge br-int
fail_mode: secure
Port "vhu991ad32b-06"
tag: 4095
Interface "vhu991ad32b-06"
type: dpdkvhostuser
Port int-br-prv
Interface int-br-prv
type: patch
options: {peer=phy-br-prv}
Port "vhu92271be8-c4"
tag: 5
Interface "vhu92271be8-c4"
type: dpdkvhostuser
Port "vhu81f441b6-79"
tag: 4095
Interface "vhu81f441b6-79"
type: dpdkvhostuser
Port "vhu5e2186f6-11"
tag: 4095
Interface "vhu5e2186f6-11"
type: dpdkvhostuser
Port br-int
Interface br-int
type: internal
Port "vhu886d07e9-46"
Interface "vhu886d07e9-46"
type: dpdkvhostuser
Port "vhufee9e9cc-6b"
tag: 6
Interface "vhufee9e9cc-6b"
type: dpdkvhostuser
Port "eth2"
tag: 5
Interface "eth2"
type: dpdkvhostuser
Port "eth3"
tag: 6
Interface "eth3"
type: dpdkvhostuser
Port "vhu28ec9b13-f1"
tag: 4095
Interface "vhu28ec9b13-f1"
type: dpdkvhostuser
Port "vhu8afd22fd-6e"
tag: 4095
Interface "vhu8afd22fd-6e"
type: dpdkvhostuser
Port "vhud3f01a02-cd"
Interface "vhud3f01a02-cd"
type: dpdkvhostuser
ovs_version: "2.4.1"
=========================================================================================================================================================================================
root@node-3:~# ovs-vsctl list bridge
_uuid : 0571de2c-bb4c-4bc1-b255-d657d659ee3c
auto_attach : []
controller : []
datapath_id : "00002ede7105c14b"
datapath_type : netdev
datapath_version : "<built-in>"
external_ids : {}
fail_mode : secure
flood_vlans : []
flow_tables : {}
ipfix : []
mcast_snooping_enable: false
mirrors : []
name : br-prv
netflow : []
other_config : {}
ports : [02493a07-6696-4357-8437-2918252dcbf2, 88005c02-d3f0-496e-901d-4b54626c5d7b, ef52d491-6983-4192-bb4e-3ec563afdec3]
protocols : ["OpenFlow10"]
rstp_enable : false
rstp_status : {}
sflow : []
status : {}
stp_enable : false
_uuid : 18f243cf-bace-4458-a860-6b6ed6729f6f
auto_attach : []
controller : []
datapath_id : "0000ce43f2185844"
datapath_type : netdev
datapath_version : "<built-in>"
external_ids : {}
fail_mode : secure
flood_vlans : []
flow_tables : {}
ipfix : []
mcast_snooping_enable: false
mirrors : []
name : br-int
netflow : []
other_config : {}
ports : [093ff049-8f3c-46c5-964e-b2c73dcde879, 10aef80f-87c7-48d3-8a7b-234bc3f511e6, 20b6fe99-b7a4-42ec-a3ae-e1651ba608fa, 392000ec-65a8-421d-8b03-15f894ea6e0a, 4a42eb6e-cdca-4d0d-bd9f-dbc8b1f5df35, 92fd3ded-4250-462d-b316-84199e5df3d4, a39606c3-e0b2-4f6a-a716-66fa4b872459, a9ad87a6-a9f8-42e2-98b3-ce8c716156e7, b8be07f1-d121-4860-968c-0979b5f4c2d3, cb3905b0-900d-41d9-b481-4669cf457446, d009fbcd-d765-4378-83be-fdb8c7552d8c, db0499c7-e0cb-4ff0-b92c-2068894fccf1, e8d1d946-3857-48e6-9222-b9eb8c332b97]
protocols : ["OpenFlow10"]
rstp_enable : false
rstp_status : {}
sflow : []
status : {}
stp_enable : false
==========================================================================================================================================================================================
root@node-3:~# ovs-vsctl list interface
_uuid : 6c0c15e1-9e09-40f8-ae54-348e0d66af12
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : up
lldp : {}
mac : []
mac_in_use : "ee:32:48:f2:26:d4"
mtu : []
name : int-br-prv
ofport : 6
ofport_request : []
options : {peer=phy-br-prv}
other_config : {}
statistics : {collisions=0, rx_bytes=2523472, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=41805, tx_bytes=170650, tx_dropped=0, tx_errors=0, tx_packets=3780}
status : {}
type : patch
_uuid : ac67d5c2-8075-40c7-84a7-e80822822992
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:db:e7:6c", iface-id="5e2186f6-1182-4530-9a9f-dd88f567259b", iface-status=active, vm-uuid="9f52a6d5-fc0f-4217-a126-9ba2dcb0ec4c"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu5e2186f6-11"
ofport : 4
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : eceefc24-087b-47d8-b51e-2158f4fcec4f
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : full
error : []
external_ids : {}
ifindex : 16
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 5
link_speed : 10000000
link_state : up
lldp : {}
mac : []
mac_in_use : "ce:43:f2:18:58:44"
mtu : 1450
name : br-int
ofport : 65534
ofport_request : []
options : {}
other_config : {}
statistics : {collisions=0, rx_bytes=1944, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=24, tx_bytes=167336, tx_dropped=0, tx_errors=0, tx_packets=3594}
status : {driver_name=tun, driver_version="1.6", firmware_version=""}
type : internal
_uuid : 1b2abc15-9b21-4182-bc6a-77c1b35ab1f4
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:89:ea:8e", iface-id="fee9e9cc-6bcd-4718-9b03-7e4f7db85c37", iface-status=active, vm-uuid="f521fdc1-b0b9-4269-8691-70b1eab24f30"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhufee9e9cc-6b"
ofport : 16
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=3720, tx_dropped=0, tx_packets=172}
status : {}
type : dpdkvhostuser
_uuid : b9edb0f5-6c69-4f43-8c39-2eda7e0a702f
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:a3:64:fc", iface-id="991ad32b-06ca-49ee-ba7e-5b1a2f607336", iface-status=active, vm-uuid="a99654c1-ef8d-4f8a-b0ef-3ff8f210ee62"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu991ad32b-06"
ofport : 8
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 1b4cf917-bf5a-42cf-b318-bc087b8559d7
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : full
error : []
external_ids : {}
ifindex : 20
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 3
link_speed : 10000000
link_state : up
lldp : {}
mac : []
mac_in_use : "5e:c2:e0:d5:40:cc"
mtu : 1500
name : "p_eeee51a2-0"
ofport : 1
ofport_request : []
options : {}
other_config : {}
statistics : {collisions=0, rx_bytes=1780832, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=41812, tx_bytes=184800, tx_dropped=0, tx_errors=0, tx_packets=3769}
status : {driver_name=tun, driver_version="1.6", firmware_version=""}
type : internal
_uuid : b47fec99-b31d-4a61-a69f-5b2f4ce103cf
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:76:f3:36", iface-id="28ec9b13-f1f3-45e5-8c03-4ac217b40281", iface-status=active, vm-uuid="3f90ed6a-f02c-4575-b3c7-b3d333f5d602"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu28ec9b13-f1"
ofport : 2
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : abb93f99-2f85-4d08-b264-fe724685c302
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "eth3"
ofport : 18
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=3550, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 0e3973d9-7b12-45a5-9e2e-a2ce8bb55a11
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:43:ae:07", iface-id="886d07e9-46bd-41c6-8a22-2f0ca5c97269", iface-status=active, vm-uuid="3f90ed6a-f02c-4575-b3c7-b3d333f5d602"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu886d07e9-46"
ofport : 3
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=3618, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : bb8552eb-cff4-40e6-a2c1-b944c98a7d8e
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:8a:70:70", iface-id="81f441b6-7975-4303-8baf-21956c5a08fa", iface-status=active, vm-uuid="7a2e668c-46b1-4a34-b75d-eb75ee10c148"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu81f441b6-79"
ofport : 5
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 4fee906b-31c8-45b9-bbc8-246e53a80a57
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "eth2"
ofport : 19
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=8, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 33e6500c-20df-4b11-9747-d40561fe1bdf
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:48:ce:16", iface-id="92271be8-c45c-46a3-b71d-8c1ad87364b3", iface-status=active, vm-uuid="f521fdc1-b0b9-4269-8691-70b1eab24f30"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu92271be8-c4"
ofport : 17
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=8, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 6560b21c-fca7-4df7-88ed-f9308c39af25
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : full
error : []
external_ids : {}
ifindex : 21
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 1
link_speed : 10000000
link_state : up
lldp : {}
mac : []
mac_in_use : "2e:de:71:05:c1:4b"
mtu : 1500
name : br-prv
ofport : 65534
ofport_request : []
options : {}
other_config : {}
statistics : {collisions=0, rx_bytes=648, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=8, tx_bytes=2673896, tx_dropped=3, tx_errors=0, tx_packets=45231}
status : {driver_name=tun, driver_version="1.6", firmware_version=""}
type : internal
_uuid : 7c6cfe9d-a24e-4431-b405-dbe80af119e7
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:3c:4c:a5", iface-id="d3f01a02-cdec-41df-adc2-83bf88bea1d7", iface-status=active, vm-uuid="7a2e668c-46b1-4a34-b75d-eb75ee10c148"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhud3f01a02-cd"
ofport : 1
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=3618, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : fa8a093d-e617-491e-880f-b59bebd76bd3
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:4c:dd:fa", iface-id="8afd22fd-6e69-427e-9e8b-0dc7b3bc92a6", iface-status=active, vm-uuid="a99654c1-ef8d-4f8a-b0ef-3ff8f210ee62"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu8afd22fd-6e"
ofport : 7
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 458e50ee-ceb7-4d5c-a049-08cbf0b79a17
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : up
lldp : {}
mac : []
mac_in_use : "b6:0e:28:55:8a:1d"
mtu : []
name : phy-br-prv
ofport : 2
ofport_request : []
options : {peer=int-br-prv}
other_config : {}
statistics : {collisions=0, rx_bytes=170650, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=3780, tx_bytes=2523472, tx_dropped=0, tx_errors=0, tx_packets=41805}
status : {}
type : patch
=========================================================================================================================================================================================
root@node-3:~# ovs-vsctl list port
_uuid : cb3905b0-900d-41d9-b481-4669cf457446
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [abb93f99-2f85-4d08-b264-fe724685c302]
lacp : []
mac : []
name : "eth3"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 6
trunks : []
vlan_mode : []
_uuid : ef52d491-6983-4192-bb4e-3ec563afdec3
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [458e50ee-ceb7-4d5c-a049-08cbf0b79a17]
lacp : []
mac : []
name : phy-br-prv
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : b8be07f1-d121-4860-968c-0979b5f4c2d3
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [4fee906b-31c8-45b9-bbc8-246e53a80a57]
lacp : []
mac : []
name : "eth2"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 5
trunks : []
vlan_mode : []
_uuid : 88005c02-d3f0-496e-901d-4b54626c5d7b
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [6560b21c-fca7-4df7-88ed-f9308c39af25]
lacp : []
mac : []
name : br-prv
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a39606c3-e0b2-4f6a-a716-66fa4b872459
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [0e3973d9-7b12-45a5-9e2e-a2ce8bb55a11]
lacp : []
mac : []
name : "vhu886d07e9-46"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : e8d1d946-3857-48e6-9222-b9eb8c332b97
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [7c6cfe9d-a24e-4431-b405-dbe80af119e7]
lacp : []
mac : []
name : "vhud3f01a02-cd"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a9ad87a6-a9f8-42e2-98b3-ce8c716156e7
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [1b2abc15-9b21-4182-bc6a-77c1b35ab1f4]
lacp : []
mac : []
name : "vhufee9e9cc-6b"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="6"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 6
trunks : []
vlan_mode : []
_uuid : db0499c7-e0cb-4ff0-b92c-2068894fccf1
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [fa8a093d-e617-491e-880f-b59bebd76bd3]
lacp : []
mac : []
name : "vhu8afd22fd-6e"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="1"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 02493a07-6696-4357-8437-2918252dcbf2
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [1b4cf917-bf5a-42cf-b318-bc087b8559d7]
lacp : []
mac : []
name : "p_eeee51a2-0"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 093ff049-8f3c-46c5-964e-b2c73dcde879
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [b9edb0f5-6c69-4f43-8c39-2eda7e0a702f]
lacp : []
mac : []
name : "vhu991ad32b-06"
other_config : {net_uuid="5d150735-c43d-4726-aa4a-59fd5d0c96cd", network_type=vlan, physical_network="physnet2", segmentation_id="1006", tag="2"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 20b6fe99-b7a4-42ec-a3ae-e1651ba608fa
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [33e6500c-20df-4b11-9747-d40561fe1bdf]
lacp : []
mac : []
name : "vhu92271be8-c4"
other_config : {net_uuid="5d150735-c43d-4726-aa4a-59fd5d0c96cd", network_type=vlan, physical_network="physnet2", segmentation_id="1006", tag="5"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 5
trunks : []
vlan_mode : []
_uuid : 392000ec-65a8-421d-8b03-15f894ea6e0a
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [bb8552eb-cff4-40e6-a2c1-b944c98a7d8e]
lacp : []
mac : []
name : "vhu81f441b6-79"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="4"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 92fd3ded-4250-462d-b316-84199e5df3d4
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [eceefc24-087b-47d8-b51e-2158f4fcec4f]
lacp : []
mac : []
name : br-int
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : d009fbcd-d765-4378-83be-fdb8c7552d8c
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [b47fec99-b31d-4a61-a69f-5b2f4ce103cf]
lacp : []
mac : []
name : "vhu28ec9b13-f1"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="3"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 4a42eb6e-cdca-4d0d-bd9f-dbc8b1f5df35
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [ac67d5c2-8075-40c7-84a7-e80822822992]
lacp : []
mac : []
name : "vhu5e2186f6-11"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="4"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 10aef80f-87c7-48d3-8a7b-234bc3f511e6
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [6c0c15e1-9e09-40f8-ae54-348e0d66af12]
lacp : []
mac : []
name : int-br-prv
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
========================================================================================================================================================================================
* [dpdk-users] Fwd: Re: Poor OVS-DPDK performance
2017-12-11 6:55 ` abhishek jain
@ 2017-12-11 21:28 ` abhishek jain
2017-12-12 12:55 ` [dpdk-users] " Mooney, Sean K
1 sibling, 0 replies; 6+ messages in thread
From: abhishek jain @ 2017-12-11 21:28 UTC (permalink / raw)
To: users; +Cc: dev
---------- Forwarded message ----------
From: "abhishek jain" <ashujain9727@gmail.com>
Date: Dec 11, 2017 12:25
Subject: Re: Poor OVS-DPDK performance
To: "Mooney, Sean K" <sean.k.mooney@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Hi Team
I'm targeting phy-VM-phy numbers.
I have an 8-core host with HT turned on, giving 16 threads. Below is the
output describing it in detail:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 86
Stepping: 3
CPU MHz: 800.000
BogoMIPS: 4199.99
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-15
Please let me know the appropriate configuration for better performance
with this setup. Also find the attached notepad file covering all
the OVS-DPDK details for the queries in your mail above.
One more query: I have bound my physical interfaces to the DPDK-compatible
vfio-pci driver; below is the output:
dpdk_nic_bind --status
Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci
unused=
0000:03:00.1 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci
unused=
0000:05:00.1 'I350 Gigabit Network Connection' drv=vfio-pci unused=
Once an interface is bound to vfio-pci, I'm no longer able to receive any
packets on that physical interface.
Thanks again for your time.
Regards
Abhishek Jain
On Fri, Dec 8, 2017 at 6:11 PM, Mooney, Sean K <sean.k.mooney@intel.com>
wrote:
> Hi can you provide the qemu commandline you are using for the vm
>
> As well as the following commands
>
>
>
> Ovs-vsctl show
>
> Ovs-vsctl list bridge
>
> Ovs-vsctl list interface
>
> Ovs-vsctl list port
>
>
>
> Basically with the topology info above I want to confirm:
>
> - All bridges are interconnected by patch ports not veth,
>
> - All bridges are datapath type netdev
>
> - Vm is connected by vhost-user interfaces not kernel vhost(it
> still works with ovs-dpdk but is really slow)
>
>
>
> Looking at your core masks, the vswitch is incorrectly tuned:
>
>
>
> {dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024",
> pmd-cpu-mask="1800"}
> root@node-3:~#
>
>
>
> You have configured 10 lcores, which are not used for packet processing and
> of which only the first will actually be used by OVS-DPDK,
>
> and only two cores (11 and 12) for the PMD threads.
>
>
>
> You have not said whether you are on a multi-socket or single-socket system,
> but assuming it is single-socket, try this instead:
>
>
> {dpdk-init="true", dpdk-lcore-mask="0x2", dpdk-socket-mem="1024",
> pmd-cpu-mask="0xC"}
>
> Here I'm assuming core 0 is used by the OS, core 1 will be used for the lcore
> thread, and cores 2 and 3 will be used for the PMD threads, which do all the
> packet forwarding.
>
>
>
> If you have hyperthreading turned on on your host, add the hyperthread
> siblings to the pmd-cpu-mask.
>
> For example, on a 16-core CPU with 32 threads the pmd-cpu-mask
> should be "0xC000C" (cores 2, 3 and their siblings 18, 19).
>
>
>
> On the kernel cmdline change the isolcpus to isolcpus=2,3 or
> isolcpus=2,3,18,19 for hyper threading.
>
>
>
> With the HT config you should be able to handle up to 30 Mpps on a 2.2 GHz
> CPU, assuming you compiled OVS and DPDK with "-fPIC -O2 -march=native" and
> linked statically.
>
> If you have a faster CPU you should get more, but as a rough estimate, with
> OVS 2.4 and DPDK 2.0 you should expect between 6.5-8 Mpps phy-to-phy per
> physical core, plus an additional 70-80% if hyperthreading is used.
>
>
>
> Your phy-VM-phy numbers will be a little lower, as the vhost-user PMD takes
> more clock cycles to process packets than the physical NIC drivers do, but
> that should help set your expectations.
>
>
>
> Ovs 2.4 and dpdk 2.0 are quite old at this point but they still should
> give a significant performance increase over kernel ovs.
>
>
>
>
>
>
>
>
>
> *From:* abhishek jain [mailto:ashujain9727@gmail.com]
> *Sent:* Friday, December 8, 2017 9:34 AM
> *To:* users@dpdk.org; Mooney, Sean K <sean.k.mooney@intel.com>
> *Subject:* Re: Poor OVS-DPDK performance
>
>
>
> Hi Team
>
> Below is my OVS configuration..
>
> root@node-3:~# ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024",
> pmd-cpu-mask="1800"}
> root@node-3:~#
>
> root@node-3:~#
> root@node-3:~# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.13.0-137-generic root=/dev/mapper/os-root ro
> console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset
> root=UUID=2949f531-bedc-47a0-a2f2-6ebf8e1d1edb iommu=pt intel_iommu=on
> isolcpus=11,12,13,14,15,16
>
> root@node-3:~#
> root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show
> main thread:
> emc hits:2
> megaflow hits:0
> miss:2
> lost:0
> polling cycles:99607459 (98.80%)
> processing cycles:1207437 (1.20%)
> avg cycles per packet: 25203724.00 (100814896/4)
> avg processing cycles per packet: 301859.25 (1207437/4)
> pmd thread numa_id 0 core_id 11:
> emc hits:0
> megaflow hits:0
> miss:0
> lost:0
> polling cycles:272926895316 (100.00%)
> processing cycles:0 (0.00%)
> pmd thread numa_id 0 core_id 12:
> emc hits:0
> megaflow hits:0
> miss:0
> lost:0
> polling cycles:240339950037 (100.00%)
> processing cycles:0 (0.00%)
> root@node-3:~#
>
> root@node-3:~#
> root@node-3:~# grep -r Huge /proc/meminfo
> AnonHugePages: 59392 kB
> HugePages_Total: 5126
> HugePages_Free: 4870
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> root@node-3:~#
>
> I'm using OVS version 2.4.1
>
> Regards
>
> Abhishek Jain
>
>
>
> On Fri, Dec 8, 2017 at 11:18 AM, abhishek jain <ashujain9727@gmail.com>
> wrote:
>
> Hi Team
>
> Currently I have an OVS-DPDK setup configured on Ubuntu 14.04.5 LTS, with
> one VNF whose vhost interfaces are mapped to the OVS bridge br-int.
>
> However, when I run throughput tests against this VNF, I get very low
> throughput.
>
>
> Please give me some pointers to improve the performance of the VNF with
> this OVS-DPDK configuration.
>
> Regards
>
> Abhishek Jain
>
>
>
-------------- next part --------------
root@node-3:~# ovs-vsctl show
5aefe57f-04ca-4af2-8560-1530c1377979
Bridge br-prv
fail_mode: secure
Port "p_eeee51a2-0"
Interface "p_eeee51a2-0"
type: internal
Port br-prv
Interface br-prv
type: internal
Port phy-br-prv
Interface phy-br-prv
type: patch
options: {peer=int-br-prv}
Bridge br-int
fail_mode: secure
Port "vhu991ad32b-06"
tag: 4095
Interface "vhu991ad32b-06"
type: dpdkvhostuser
Port int-br-prv
Interface int-br-prv
type: patch
options: {peer=phy-br-prv}
Port "vhu92271be8-c4"
tag: 5
Interface "vhu92271be8-c4"
type: dpdkvhostuser
Port "vhu81f441b6-79"
tag: 4095
Interface "vhu81f441b6-79"
type: dpdkvhostuser
Port "vhu5e2186f6-11"
tag: 4095
Interface "vhu5e2186f6-11"
type: dpdkvhostuser
Port br-int
Interface br-int
type: internal
Port "vhu886d07e9-46"
Interface "vhu886d07e9-46"
type: dpdkvhostuser
Port "vhufee9e9cc-6b"
tag: 6
Interface "vhufee9e9cc-6b"
type: dpdkvhostuser
Port "eth2"
tag: 5
Interface "eth2"
type: dpdkvhostuser
Port "eth3"
tag: 6
Interface "eth3"
type: dpdkvhostuser
Port "vhu28ec9b13-f1"
tag: 4095
Interface "vhu28ec9b13-f1"
type: dpdkvhostuser
Port "vhu8afd22fd-6e"
tag: 4095
Interface "vhu8afd22fd-6e"
type: dpdkvhostuser
Port "vhud3f01a02-cd"
Interface "vhud3f01a02-cd"
type: dpdkvhostuser
ovs_version: "2.4.1"
=========================================================================================================================================================================================
root@node-3:~# ovs-vsctl list bridge
_uuid : 0571de2c-bb4c-4bc1-b255-d657d659ee3c
auto_attach : []
controller : []
datapath_id : "00002ede7105c14b"
datapath_type : netdev
datapath_version : "<built-in>"
external_ids : {}
fail_mode : secure
flood_vlans : []
flow_tables : {}
ipfix : []
mcast_snooping_enable: false
mirrors : []
name : br-prv
netflow : []
other_config : {}
ports : [02493a07-6696-4357-8437-2918252dcbf2, 88005c02-d3f0-496e-901d-4b54626c5d7b, ef52d491-6983-4192-bb4e-3ec563afdec3]
protocols : ["OpenFlow10"]
rstp_enable : false
rstp_status : {}
sflow : []
status : {}
stp_enable : false
_uuid : 18f243cf-bace-4458-a860-6b6ed6729f6f
auto_attach : []
controller : []
datapath_id : "0000ce43f2185844"
datapath_type : netdev
datapath_version : "<built-in>"
external_ids : {}
fail_mode : secure
flood_vlans : []
flow_tables : {}
ipfix : []
mcast_snooping_enable: false
mirrors : []
name : br-int
netflow : []
other_config : {}
ports : [093ff049-8f3c-46c5-964e-b2c73dcde879, 10aef80f-87c7-48d3-8a7b-234bc3f511e6, 20b6fe99-b7a4-42ec-a3ae-e1651ba608fa, 392000ec-65a8-421d-8b03-15f894ea6e0a, 4a42eb6e-cdca-4d0d-bd9f-dbc8b1f5df35, 92fd3ded-4250-462d-b316-84199e5df3d4, a39606c3-e0b2-4f6a-a716-66fa4b872459, a9ad87a6-a9f8-42e2-98b3-ce8c716156e7, b8be07f1-d121-4860-968c-0979b5f4c2d3, cb3905b0-900d-41d9-b481-4669cf457446, d009fbcd-d765-4378-83be-fdb8c7552d8c, db0499c7-e0cb-4ff0-b92c-2068894fccf1, e8d1d946-3857-48e6-9222-b9eb8c332b97]
protocols : ["OpenFlow10"]
rstp_enable : false
rstp_status : {}
sflow : []
status : {}
stp_enable : false
==========================================================================================================================================================================================
root@node-3:~# ovs-vsctl list interface
_uuid : 6c0c15e1-9e09-40f8-ae54-348e0d66af12
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : up
lldp : {}
mac : []
mac_in_use : "ee:32:48:f2:26:d4"
mtu : []
name : int-br-prv
ofport : 6
ofport_request : []
options : {peer=phy-br-prv}
other_config : {}
statistics : {collisions=0, rx_bytes=2523472, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=41805, tx_bytes=170650, tx_dropped=0, tx_errors=0, tx_packets=3780}
status : {}
type : patch
_uuid : ac67d5c2-8075-40c7-84a7-e80822822992
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:db:e7:6c", iface-id="5e2186f6-1182-4530-9a9f-dd88f567259b", iface-status=active, vm-uuid="9f52a6d5-fc0f-4217-a126-9ba2dcb0ec4c"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu5e2186f6-11"
ofport : 4
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : eceefc24-087b-47d8-b51e-2158f4fcec4f
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : full
error : []
external_ids : {}
ifindex : 16
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 5
link_speed : 10000000
link_state : up
lldp : {}
mac : []
mac_in_use : "ce:43:f2:18:58:44"
mtu : 1450
name : br-int
ofport : 65534
ofport_request : []
options : {}
other_config : {}
statistics : {collisions=0, rx_bytes=1944, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=24, tx_bytes=167336, tx_dropped=0, tx_errors=0, tx_packets=3594}
status : {driver_name=tun, driver_version="1.6", firmware_version=""}
type : internal
_uuid : 1b2abc15-9b21-4182-bc6a-77c1b35ab1f4
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:89:ea:8e", iface-id="fee9e9cc-6bcd-4718-9b03-7e4f7db85c37", iface-status=active, vm-uuid="f521fdc1-b0b9-4269-8691-70b1eab24f30"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhufee9e9cc-6b"
ofport : 16
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=3720, tx_dropped=0, tx_packets=172}
status : {}
type : dpdkvhostuser
_uuid : b9edb0f5-6c69-4f43-8c39-2eda7e0a702f
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:a3:64:fc", iface-id="991ad32b-06ca-49ee-ba7e-5b1a2f607336", iface-status=active, vm-uuid="a99654c1-ef8d-4f8a-b0ef-3ff8f210ee62"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu991ad32b-06"
ofport : 8
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 1b4cf917-bf5a-42cf-b318-bc087b8559d7
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : full
error : []
external_ids : {}
ifindex : 20
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 3
link_speed : 10000000
link_state : up
lldp : {}
mac : []
mac_in_use : "5e:c2:e0:d5:40:cc"
mtu : 1500
name : "p_eeee51a2-0"
ofport : 1
ofport_request : []
options : {}
other_config : {}
statistics : {collisions=0, rx_bytes=1780832, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=41812, tx_bytes=184800, tx_dropped=0, tx_errors=0, tx_packets=3769}
status : {driver_name=tun, driver_version="1.6", firmware_version=""}
type : internal
_uuid : b47fec99-b31d-4a61-a69f-5b2f4ce103cf
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:76:f3:36", iface-id="28ec9b13-f1f3-45e5-8c03-4ac217b40281", iface-status=active, vm-uuid="3f90ed6a-f02c-4575-b3c7-b3d333f5d602"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu28ec9b13-f1"
ofport : 2
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : abb93f99-2f85-4d08-b264-fe724685c302
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "eth3"
ofport : 18
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=3550, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 0e3973d9-7b12-45a5-9e2e-a2ce8bb55a11
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:43:ae:07", iface-id="886d07e9-46bd-41c6-8a22-2f0ca5c97269", iface-status=active, vm-uuid="3f90ed6a-f02c-4575-b3c7-b3d333f5d602"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu886d07e9-46"
ofport : 3
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=3618, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : bb8552eb-cff4-40e6-a2c1-b944c98a7d8e
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:8a:70:70", iface-id="81f441b6-7975-4303-8baf-21956c5a08fa", iface-status=active, vm-uuid="7a2e668c-46b1-4a34-b75d-eb75ee10c148"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu81f441b6-79"
ofport : 5
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 4fee906b-31c8-45b9-bbc8-246e53a80a57
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "eth2"
ofport : 19
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=8, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 33e6500c-20df-4b11-9747-d40561fe1bdf
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:48:ce:16", iface-id="92271be8-c45c-46a3-b71d-8c1ad87364b3", iface-status=active, vm-uuid="f521fdc1-b0b9-4269-8691-70b1eab24f30"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu92271be8-c4"
ofport : 17
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=8, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 6560b21c-fca7-4df7-88ed-f9308c39af25
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : full
error : []
external_ids : {}
ifindex : 21
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 1
link_speed : 10000000
link_state : up
lldp : {}
mac : []
mac_in_use : "2e:de:71:05:c1:4b"
mtu : 1500
name : br-prv
ofport : 65534
ofport_request : []
options : {}
other_config : {}
statistics : {collisions=0, rx_bytes=648, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=8, tx_bytes=2673896, tx_dropped=3, tx_errors=0, tx_packets=45231}
status : {driver_name=tun, driver_version="1.6", firmware_version=""}
type : internal
_uuid : 7c6cfe9d-a24e-4431-b405-dbe80af119e7
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:3c:4c:a5", iface-id="d3f01a02-cdec-41df-adc2-83bf88bea1d7", iface-status=active, vm-uuid="7a2e668c-46b1-4a34-b75d-eb75ee10c148"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhud3f01a02-cd"
ofport : 1
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=3618, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : fa8a093d-e617-491e-880f-b59bebd76bd3
admin_state : down
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:4c:dd:fa", iface-id="8afd22fd-6e69-427e-9e8b-0dc7b3bc92a6", iface-status=active, vm-uuid="a99654c1-ef8d-4f8a-b0ef-3ff8f210ee62"}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 1500
name : "vhu8afd22fd-6e"
ofport : 7
ofport_request : []
options : {}
other_config : {}
statistics : {rx_packets=0, tx_dropped=0, tx_packets=0}
status : {}
type : dpdkvhostuser
_uuid : 458e50ee-ceb7-4d5c-a049-08cbf0b79a17
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : up
lldp : {}
mac : []
mac_in_use : "b6:0e:28:55:8a:1d"
mtu : []
name : phy-br-prv
ofport : 2
ofport_request : []
options : {peer=int-br-prv}
other_config : {}
statistics : {collisions=0, rx_bytes=170650, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=3780, tx_bytes=2523472, tx_dropped=0, tx_errors=0, tx_packets=41805}
status : {}
type : patch
=========================================================================================================================================================================================
root@node-3:~# ovs-vsctl list port
_uuid : cb3905b0-900d-41d9-b481-4669cf457446
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [abb93f99-2f85-4d08-b264-fe724685c302]
lacp : []
mac : []
name : "eth3"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 6
trunks : []
vlan_mode : []
_uuid : ef52d491-6983-4192-bb4e-3ec563afdec3
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [458e50ee-ceb7-4d5c-a049-08cbf0b79a17]
lacp : []
mac : []
name : phy-br-prv
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : b8be07f1-d121-4860-968c-0979b5f4c2d3
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [4fee906b-31c8-45b9-bbc8-246e53a80a57]
lacp : []
mac : []
name : "eth2"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 5
trunks : []
vlan_mode : []
_uuid : 88005c02-d3f0-496e-901d-4b54626c5d7b
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [6560b21c-fca7-4df7-88ed-f9308c39af25]
lacp : []
mac : []
name : br-prv
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a39606c3-e0b2-4f6a-a716-66fa4b872459
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [0e3973d9-7b12-45a5-9e2e-a2ce8bb55a11]
lacp : []
mac : []
name : "vhu886d07e9-46"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : e8d1d946-3857-48e6-9222-b9eb8c332b97
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [7c6cfe9d-a24e-4431-b405-dbe80af119e7]
lacp : []
mac : []
name : "vhud3f01a02-cd"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a9ad87a6-a9f8-42e2-98b3-ce8c716156e7
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [1b2abc15-9b21-4182-bc6a-77c1b35ab1f4]
lacp : []
mac : []
name : "vhufee9e9cc-6b"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="6"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 6
trunks : []
vlan_mode : []
_uuid : db0499c7-e0cb-4ff0-b92c-2068894fccf1
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [fa8a093d-e617-491e-880f-b59bebd76bd3]
lacp : []
mac : []
name : "vhu8afd22fd-6e"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="1"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 02493a07-6696-4357-8437-2918252dcbf2
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [1b4cf917-bf5a-42cf-b318-bc087b8559d7]
lacp : []
mac : []
name : "p_eeee51a2-0"
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 093ff049-8f3c-46c5-964e-b2c73dcde879
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [b9edb0f5-6c69-4f43-8c39-2eda7e0a702f]
lacp : []
mac : []
name : "vhu991ad32b-06"
other_config : {net_uuid="5d150735-c43d-4726-aa4a-59fd5d0c96cd", network_type=vlan, physical_network="physnet2", segmentation_id="1006", tag="2"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 20b6fe99-b7a4-42ec-a3ae-e1651ba608fa
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [33e6500c-20df-4b11-9747-d40561fe1bdf]
lacp : []
mac : []
name : "vhu92271be8-c4"
other_config : {net_uuid="5d150735-c43d-4726-aa4a-59fd5d0c96cd", network_type=vlan, physical_network="physnet2", segmentation_id="1006", tag="5"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 5
trunks : []
vlan_mode : []
_uuid : 392000ec-65a8-421d-8b03-15f894ea6e0a
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [bb8552eb-cff4-40e6-a2c1-b944c98a7d8e]
lacp : []
mac : []
name : "vhu81f441b6-79"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="4"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 92fd3ded-4250-462d-b316-84199e5df3d4
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [eceefc24-087b-47d8-b51e-2158f4fcec4f]
lacp : []
mac : []
name : br-int
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : d009fbcd-d765-4378-83be-fdb8c7552d8c
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [b47fec99-b31d-4a61-a69f-5b2f4ce103cf]
lacp : []
mac : []
name : "vhu28ec9b13-f1"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="3"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 4a42eb6e-cdca-4d0d-bd9f-dbc8b1f5df35
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [ac67d5c2-8075-40c7-84a7-e80822822992]
lacp : []
mac : []
name : "vhu5e2186f6-11"
other_config : {net_uuid="a54335e4-33c7-48aa-825c-be30fadb8961", network_type=vlan, physical_network="physnet2", segmentation_id="1015", tag="4"}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 4095
trunks : []
vlan_mode : []
_uuid : 10aef80f-87c7-48d3-8a7b-234bc3f511e6
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [6c0c15e1-9e09-40f8-ae54-348e0d66af12]
lacp : []
mac : []
name : int-br-prv
other_config : {}
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
========================================================================================================================================================================================
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [dpdk-users] Poor OVS-DPDK performance
2017-12-11 6:55 ` abhishek jain
2017-12-11 21:28 ` [dpdk-users] Fwd: " abhishek jain
@ 2017-12-12 12:55 ` Mooney, Sean K
1 sibling, 0 replies; 6+ messages in thread
From: Mooney, Sean K @ 2017-12-12 12:55 UTC (permalink / raw)
To: abhishek jain; +Cc: users
Some comments inline
From: abhishek jain [mailto:ashujain9727@gmail.com]
Sent: Monday, December 11, 2017 6:55 AM
To: Mooney, Sean K <sean.k.mooney@intel.com>
Cc: users@dpdk.org
Subject: Re: Poor OVS-DPDK performance
Hi Sean
Thanks for looking into it.
You got it right, I'm targeting phy-VM-phy numbers.
I have an 8-core host with HT turned on, giving 16 threads. Below is the output describing it in detail.
[Mooney, Sean K] Given the resource limits of the platform in your case I would recommend starting with fewer cores for the vswitch
to leave as many as possible free for VMs, so I would set
{dpdk-init="true", dpdk-lcore-mask="0x1", dpdk-socket-mem="1024", pmd-cpu-mask="0x202"}
and set isolcpus to isolcpus=1,9.
If you want to further increase the performance you can also add the following to your kernel cmdline:
nohz=on nohz_full="1-15" rcu_nocbs="1-15" nosoftlockup mce=ignore_ce audit=0 elevator=deadline
If you are using OpenStack your nova vcpu_pin_set should be vcpu_pin_set="2-7,10-15",
so cores 0 and 8 (its hyperthread sibling) are used for the OS and cores 1 and 9 are used for the DPDK PMDs, leaving 6 physical cores/12 hardware threads free for VMs.
nohz disables kernel housekeeping on cores that are idle (it is more or less a power-saving feature) and nohz_full disables the scheduler tick on cores with a single active process
and no other process pending scheduling on the same core. This is an important optimization for realtime systems but it does not hurt to do for DPDK either.
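A minimal sketch of how that layout could be applied, assuming the DPDK options are read from the Open_vSwitch other_config column as in your current setup (whether they take effect live or only after a vswitchd restart depends on the build, so plan for a restart):
    # set the recommended CPU layout in the OVS database
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
    ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x202
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024
    # confirm that cores 1 and 9 really are hyperthread siblings before pinning
    cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list
    # kernel cmdline (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub), then update-grub and reboot:
    #   isolcpus=1,9 nohz=on nohz_full=1-15 rcu_nocbs=1-15 nosoftlockup mce=ignore_ce audit=0 elevator=deadline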
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 86
Stepping: 3
CPU MHz: 800.000
BogoMIPS: 4199.99
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-15
Please let me know the appropriate configuration for better performance with this available setup. Also find the attached notepad file covering all the OVS-DPDK details for your queries in the above mail.
One more query: I have configured my physical interface with the vfio DPDK-compatible driver; below is the output:
dpdk_nic_bind --status
Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci unused=
0000:03:00.1 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci unused=
0000:05:00.1 'I350 Gigabit Network Connection' drv=vfio-pci unused=
Once I have configured vfio, I'm not able to receive any packets on that particular physical interface.
[Mooney, Sean K] this is expected. The vfio-pci driver is not a network adapter driver, it is a generic PCI device driver
that facilitates control of the device from userspace. As a result it does not provide netdevs for physical functions
or attach those interfaces to the kernel networking stack. With OVS 2.4 these interfaces are added to bridges
using the dpdkX naming scheme. The dpdkX naming scheme is rather annoying to explain, but basically the X is the index into the
array of devices bound to a DPDK driver, sorted by PCI address and filtered by the optional DPDK whitelist.
In the above case dpdk0 = 0000:03:00.0, dpdk1 = 0000:03:00.1, dpdk2 = 0000:05:00.1.
If -w 0000:05:00.1 was added to the dpdk section of the ovs-dpdk command line then that would whitelist only
0000:05:00.1 and dpdk0 would map to 0000:05:00.1.
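As a sketch only: with a stock OVS 2.4 start-up the EAL options (including the -w whitelist) sit before the -- separator on the ovs-vswitchd command line; your Fuel deployment may wrap this differently, so treat the exact invocation below as an assumption:
    # whitelist a single NIC so it becomes dpdk0 regardless of the other bound devices
    ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024 -w 0000:05:00.1 \
        -- unix:/var/run/openvswitch/db.sock --pidfile --detach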
Reviewing the attached files I noticed that while all the bridges are correctly configured to use the netdev
datapath and are correctly connected via patch ports, the br-prv does not have a physical interface attached.
It instead has an OVS internal port "p_eeee51a2-0", which from the naming scheme I assume means you deployed with Fuel.
If that assumption is correct I believe p_eeee51a2-0 will be added to a Linux bridge and that bridge connects to a physical interface.
Internal interfaces are not DPDK accelerated when using OVS-DPDK and fall back to the legacy netdev implementation, which runs in the vswitchd main thread.
The port stats also seem to show that this interface is actively sending and receiving packets:
{collisions=0, rx_bytes=1780832, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=41812, tx_bytes=184800, tx_dropped=0, tx_errors=0, tx_packets=3769}
status : {driver_name=tun, driver_version="1.6", firmware_version=""}
I would suggest removing the interface
sudo ovs-vsctl del-port p_eeee51a2-0
and replacing it with a DPDK interface
sudo ovs-vsctl add-port br-prv dpdk0 -- set interface dpdk0 type=dpdk
Once this is done you likely need to restart the vswitchd for it to take effect, as OVS 2.4 does not
support adding DPDK physical interfaces while the vswitch is running unless they were bound to the
vfio-pci driver before the vswitch was started.
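Putting that together, the swap might look like the sketch below; the service name and the need for a full restart are assumptions based on a stock Ubuntu packaging of OVS 2.4, so adjust for your deployment:
    # replace the internal port on br-prv with the DPDK physical port
    ovs-vsctl del-port br-prv p_eeee51a2-0
    ovs-vsctl add-port br-prv dpdk0 -- set interface dpdk0 type=dpdk
    # restart the vswitchd so dpdk0 is probed (service name is deployment specific)
    service openvswitch-switch restart
    # verify link state and stats on the new port
    ovs-vsctl list interface dpdk0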
Thanks again for your time.
Regards
Abhishek Jain
On Fri, Dec 8, 2017 at 6:11 PM, Mooney, Sean K <sean.k.mooney@intel.com> wrote:
Hi can you provide the qemu commandline you are using for the vm
As well as the following commands
Ovs-vsctl show
Ovs-vsctl list bridge
Ovs-vsctl list interface
Ovs-vsctl list port
Basically with the topology info above I want to confirm:
- All bridges are interconnected by patch ports not veth,
- All bridges are datapath type netdev
- Vm is connected by vhost-user interfaces not kernel vhost(it still works with ovs-dpdk but is really slow)
Looking at your core masks, the vswitch is incorrectly tuned
{dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024", pmd-cpu-mask="1800"}
root@node-3:~#
You have configured 10 lcores, which are not used for packet processing and only the first of which will actually be used by ovs-dpdk,
and you have configured a single core for the PMD.
You have not said if you are on a multi-socket or single-socket system, but assuming it is a single-socket system try this instead:
{dpdk-init="true", dpdk-lcore-mask="0x2", dpdk-socket-mem="1024", pmd-cpu-mask="0xC"}
Above I'm assuming core 0 is used by the OS, core 1 will be used for the lcore thread, and cores 2 and 3 will be used for the PMD threads which do all the packet forwarding.
If you have hyperthreads turned on on your host, add the hyperthread siblings to the pmd-cpu-mask.
For example if you had a 16-core CPU with 32 threads the pmd-cpu-mask should be "0x60006".
On the kernel cmdline change the isolcpus to isolcpus=2,3 or isolcpus=2,3,18,19 for hyperthreading.
With the HT config you should be able to handle up to 30mpps on a 2.2GHz CPU, assuming you compiled OVS and DPDK with "-fPIC -O2 -march=native" and linked statically.
If you have a faster CPU you should get more, but as a rough estimate when using OVS 2.4 and DPDK 2.0 you should
expect between 6.5-8mpps phy to phy per physical core, plus an additional 70-80% if hyperthreading is used.
Your phy-VM-phy numbers will be a little lower as the vhost-user PMD takes more clock cycles to process packets than
the physical NIC drivers do, but that should help set your expectations.
OVS 2.4 and DPDK 2.0 are quite old at this point, but they should still give a significant performance increase over kernel OVS.
From: abhishek jain [mailto:ashujain9727@gmail.com]
Sent: Friday, December 8, 2017 9:34 AM
To: users@dpdk.org; Mooney, Sean K <sean.k.mooney@intel.com>
Subject: Re: Poor OVS-DPDK performance
Hi Team
Below is my OVS configuration..
root@node-3:~# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024", pmd-cpu-mask="1800"}
root@node-3:~#
root@node-3:~#
root@node-3:~# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.13.0-137-generic root=/dev/mapper/os-root ro console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset root=UUID=2949f531-bedc-47a0-a2f2-6ebf8e1d1edb iommu=pt intel_iommu=on isolcpus=11,12,13,14,15,16
root@node-3:~#
root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show
main thread:
emc hits:2
megaflow hits:0
miss:2
lost:0
polling cycles:99607459 (98.80%)
processing cycles:1207437 (1.20%)
avg cycles per packet: 25203724.00 (100814896/4)
avg processing cycles per packet: 301859.25 (1207437/4)
pmd thread numa_id 0 core_id 11:
emc hits:0
megaflow hits:0
miss:0
lost:0
polling cycles:272926895316 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 12:
emc hits:0
megaflow hits:0
miss:0
lost:0
polling cycles:240339950037 (100.00%)
processing cycles:0 (0.00%)
root@node-3:~#
root@node-3:~#
root@node-3:~# grep -r Huge /proc/meminfo
AnonHugePages: 59392 kB
HugePages_Total: 5126
HugePages_Free: 4870
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
root@node-3:~#
I'm using OVS version 2.4.1
Regards
Abhishek Jain
On Fri, Dec 8, 2017 at 11:18 AM, abhishek jain <ashujain9727@gmail.com> wrote:
Hi Team
Currently I have OVS-DPDK setup configured on Ubuntu 14.04.5 LTS. I'm also having one vnf with vhost interfaces mapped to OVS bridge br-int.
However when I'm performing throughput with the same vnf,I'm getting very less throughput.
Please provide me some pointers to boost the performance of vnf with OVS-DPDK configuration.
Regards
Abhishek Jain
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread, other threads:[~2017-12-12 12:55 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-08 5:48 [dpdk-users] Poor OVS-DPDK performance abhishek jain
2017-12-08 9:33 ` abhishek jain
2017-12-08 12:41 ` Mooney, Sean K
2017-12-11 6:55 ` abhishek jain
2017-12-11 21:28 ` [dpdk-users] Fwd: " abhishek jain
2017-12-12 12:55 ` [dpdk-users] " Mooney, Sean K