---------- Forwarded message ----------
From: "abhishek jain"
Date: Dec 11, 2017 12:25
Subject: Re: Poor OVS-DPDK performance
To: "Mooney, Sean K"
Cc: "users@dpdk.org"

Hi Team

I'm targeting phy-VM-phy numbers. I have an 8-core host with HT turned on,
giving 16 threads. Below is the lscpu output describing it in detail:

lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 86
Stepping:              3
CPU MHz:               800.000
BogoMIPS:              4199.99
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
NUMA node0 CPU(s):     0-15

Please let me know the appropriate configuration for better performance with
this setup. Also find the attached notepad file covering all the OVS-DPDK
details you asked for in the mail above.

One more query: I have configured my physical interface with the
DPDK-compatible vfio driver; below is the output:

dpdk_nic_bind --status

Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci unused=
0000:03:00.1 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=vfio-pci unused=
0000:05:00.1 'I350 Gigabit Network Connection' drv=vfio-pci unused=

Since configuring vfio, I'm not able to receive any packets on that
particular physical interface.

Thanks again for your time.

Regards
Abhishek Jain

On Fri, Dec 8, 2017 at 6:11 PM, Mooney, Sean K wrote:

> Hi, can you provide the qemu command line you are using for the VM, as well
> as the output of the following commands:
>
> ovs-vsctl show
> ovs-vsctl list bridge
> ovs-vsctl list interface
> ovs-vsctl list port
>
> Basically, with the topology info above I want to confirm:
> - all bridges are interconnected by patch ports, not veth;
> - all bridges are datapath type netdev;
> - the VM is connected by vhost-user interfaces, not kernel vhost (kernel
>   vhost still works with OVS-DPDK but is really slow).
>
> Looking at your core masks, the vswitch is incorrectly tuned:
>
> {dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024",
>  pmd-cpu-mask="1800"}
>
> You have configured 10 lcores, which are not used for packet processing and
> only the first of which will actually be used by ovs-dpdk, and only two
> cores for the pmd threads.
>
> You have not said whether you are on a multi-socket or single-socket system,
> but assuming it is single-socket, try this instead:
>
> {dpdk-init="true", dpdk-lcore-mask="0x2", dpdk-socket-mem="1024",
>  pmd-cpu-mask="0xC"}
>
> Above I'm assuming core 0 is used by the OS, core 1 will be used for the
> lcore thread, and cores 2 and 3 will be used for the pmd threads, which do
> all the packet forwarding.
>
> If you have hyper-threading turned on on your host, add the hyper-thread
> siblings to the pmd-cpu-mask. For example, if you had a 16-core CPU with 32
> threads, the pmd-cpu-mask should be "0xC000C".
>
> On the kernel cmdline, change the isolcpus setting to isolcpus=2,3, or
> isolcpus=2,3,18,19 for hyper-threading.
>
> With the HT config you should be able to handle up to 30 Mpps on a 2.2 GHz
> CPU, assuming you compiled OVS and DPDK with "-fPIC -O2 -march=native" and
> linked statically. If you have a faster CPU you should get more, but as a
> rough estimate, with OVS 2.4 and DPDK 2.0 you should expect between 6.5 and
> 8 Mpps phy-to-phy per physical core, plus an additional 70-80% if
> hyper-threading is used.
>
> Your phy-VM-phy numbers will be a little lower, as the vhost-user pmd takes
> more clock cycles to process packets than the physical NIC drivers do, but
> that should help set your expectations.
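> To apply the suggested settings, something along these lines should work (a
> minimal sketch, assuming a single-socket host where core 0 stays with the
> OS; your "ovs-vsctl get Open_vSwitch . other_config" output shows the masks
> come from the database, so they can be changed the same way):
>
> ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
> ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC
> # confirm the result, then restart ovs-vswitchd so the lcore mask is
> # re-read at DPDK init
> ovs-vsctl get Open_vSwitch . other_config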
> OVS 2.4 and DPDK 2.0 are quite old at this point, but they should still give
> a significant performance increase over kernel OVS.
>
> *From:* abhishek jain [mailto:ashujain9727@gmail.com]
> *Sent:* Friday, December 8, 2017 9:34 AM
> *To:* users@dpdk.org; Mooney, Sean K
> *Subject:* Re: Poor OVS-DPDK performance
>
> Hi Team
>
> Below is my OVS configuration:
>
> root@node-3:~# ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024",
>  pmd-cpu-mask="1800"}
> root@node-3:~#
>
> root@node-3:~# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.13.0-137-generic root=/dev/mapper/os-root ro
> console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset
> root=UUID=2949f531-bedc-47a0-a2f2-6ebf8e1d1edb iommu=pt intel_iommu=on
> isolcpus=11,12,13,14,15,16
>
> root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show
> main thread:
>         emc hits:2
>         megaflow hits:0
>         miss:2
>         lost:0
>         polling cycles:99607459 (98.80%)
>         processing cycles:1207437 (1.20%)
>         avg cycles per packet: 25203724.00 (100814896/4)
>         avg processing cycles per packet: 301859.25 (1207437/4)
> pmd thread numa_id 0 core_id 11:
>         emc hits:0
>         megaflow hits:0
>         miss:0
>         lost:0
>         polling cycles:272926895316 (100.00%)
>         processing cycles:0 (0.00%)
> pmd thread numa_id 0 core_id 12:
>         emc hits:0
>         megaflow hits:0
>         miss:0
>         lost:0
>         polling cycles:240339950037 (100.00%)
>         processing cycles:0 (0.00%)
> root@node-3:~#
>
> root@node-3:~# grep -r Huge /proc/meminfo
> AnonHugePages:     59392 kB
> HugePages_Total:    5126
> HugePages_Free:     4870
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> root@node-3:~#
>
> I'm using OVS version 2.4.1.
>
> Regards
>
> Abhishek Jain
>
> On Fri, Dec 8, 2017 at 11:18 AM, abhishek jain wrote:
>
> Hi Team
>
> Currently I have an OVS-DPDK setup configured on Ubuntu 14.04.5 LTS. I also
> have one VNF with vhost interfaces mapped to the OVS bridge br-int. However,
> when I run throughput tests with this VNF, I get very low throughput.
>
> Please give me some pointers to boost the performance of the VNF with the
> OVS-DPDK configuration.
>
> Regards
>
> Abhishek Jain
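As a footnote on the mask arithmetic in the thread above: pmd-cpu-mask is a
hex bitmap of core IDs, so the "1800" shown in the configuration selects
cores 11 and 12 (matching the two pmd threads in the pmd-stats-show output),
while the suggested "0xC" selects cores 2 and 3. The sketch below, which
assumes a Linux sysfs layout and is not part of OVS or DPDK, decodes a mask
and prints each selected core's hyper-thread siblings so they can be added to
the mask as described:

    # decode a pmd-cpu-mask into core IDs and show each core's HT sibling(s)
    mask=0x1800    # current mask from the thread; use 0xC for the suggested one
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        id=${cpu##*cpu}
        if (( (mask >> id) & 1 )); then
            printf 'core %s -> thread siblings: %s\n' \
                "$id" "$(cat "$cpu/topology/thread_siblings_list")"
        fi
    done

On an 8-core/16-thread part like the one in the lscpu output, siblings are
typically core N and core N+8, so covering cores 2 and 3 plus their siblings
10 and 11 would give a mask of 0x0C0C; the sysfs output above is the
authoritative source for the numbering on any given host.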